Layers of Code, Layers of Subjectivity – Chris Chesher

Great! Great! Perfect! Perfect! Perfect! Perfect! Great! Miss. Miss. Miss.

A crowd encircles Sam, who is staring downwards, stomping his feet on the dance floor, building up a sweat. The chants of ‘Perfect!’ come not from the mystified passers-by who have stopped to watch, but appear as animated text flashing on the screen of the video game, Dance Dance Revolution.1 The game itself is rating how well Sam is dancing by electronically measuring each foot stomp to the millisecond. The game sits in an arcade in the middle of the city, but Sam seems indifferent to the world around, and stares intently at a relentless stream of arrows (←↑↓→←←↓↓) flowing from the bottom of the screen. He has to match the arrows by stepping on four footpads, marked with similar arrows. Each time he misses the beat he loses points and shortens his game.

Meanwhile, across town, I am about to write this article, and I am greeted by a paperclip. It says ‘Type your question here and click Search’. It’s the Microsoft Office Assistant, an animated software agent that predicts, with uncanny imprecision, the help I need at this moment. It asks again, ‘What would you like to do?’ I click the close box to try to get rid of it. It waves goodbye and disappears to leave me in peace with my cursor.

There’s now just a cursor flashing at the top of a blank page in my word processor. It patiently waits for me to start typing. It is the most intense point in my field of action. It marks a gap where my words are about to appear. The flashing cursor calls me as a prospective user and a writer. Microsoft Word asks me to type. In fact, I have been hailed continually since I started up the computer and the words ‘Welcome to MacOS’ appeared on the screen. What else can I do but start typing?

As my fingers start typing the cursor slides across the page, a vanguard ahead of my emerging words, a compact but powerful on-screen avatar giving birth to sentence after sentence. Its flow marks my presence and activity, increasing the word count, increasing the file size, recording my inspiration. As long as my inspiration comes, it is alive. I pause again. The cursor flashes. It waits in untiring subservience. I continue writing, and once again it is the centre of my power over this emerging text. It gives me power, but only within certain prescribed limits. My desires and intentions are constrained and directed through the narrow space that the cursor cuts into the computer-invoked page.

When a computer addresses users, it doesn’t speak as an authority like a schoolteacher or dance instructor. The cursor doesn’t demand obedience so much as make an offer. It addresses me individually, because I personalised the system myself. It asks: ‘Where do you want to go today?’ (Microsoft, 1999). The cursor is not telling me something, but indicating that it is listening for my command. It doesn’t demand that I write, but offers support if I want to write. ‘Dance Dance Revolution’ doesn’t command Sam to dance. It’s not like my primary school teachers who circled around my 8-year-old classmates and me and told us to dance the ‘Pride of Erin’. Instead, it offers him constant depersonalised feedback — praise and warnings. It asks only for another $2.

As users we take on special hailing powers — powers of invocation. Sam’s magical feet call up an impressive high score. My dancing fingers summon words onto the screen. And I have other invocational powers, too. When I demand a printed document it slips out of my laser printer. I connect to the Internet and call up a reference on a library catalogue. I e-mail this document to a friend. I put it online as a web page. In magical and technological senses, the computer is the medium through which we call into presence new daemons: charmed dance floors, writing environments, databases, e-mail systems, electronic journals. Each of these daemons that is invoked has a logic and an economics of its own. Each offers the user some different kinds of power. As a new media form, it is not computation that makes these devices distinctive, but invocation. Computers should not be called computers, but invocators.

Different invocators address users in different ways. Inside institutions they speak voices of authority: as punch cards often implored, ‘do not bend, tear or mutilate’. Personal computers offer invocational powers directly to private individuals: a happy Mac smile or a ‘Start’ button. Arcade games are spruikers, hawking magical spectacles. The Internet seems to offer a democratic invocational space: compared with print and broadcasting, a medium like the Net seems more socially and personally liberating: email anyone! Post your own message to a newsgroup! Go anywhere on the web!

All that invocators ask in return for the powers they offer is that we become users: that we take up positions as invoking subjects. User subjectivity is difficult to define because it is based on taking up capacities to invoke. Unlike churches, prisons, schools and factories that position individuals as subjects in relation to institutions (Foucault, 1977), ideology (Althusser, 1971), or a ‘big Other’ (Zizek, 1998), invocators don’t necessarily make users subservient. The question is: what new modes of power come into play with the proliferation of invocational media?

Invocations, avocations and vocations

The paper clip and the dance floor daemons call Sam and me away from what we would otherwise be doing. I may not be an office worker, and Sam may not be a dancer, but we are both summoned to take on these identities by what we were able to invoke. Like any invocator, Dance Dance Revolution and Microsoft Word both have their own conventions and standard rules for operating. While some systems require substantial training for users, the ideal ‘user friendly’ system is ‘intuitive’. It supposedly requires no tuition. It is immediately apparent. It speaks for itself. This ‘speaking for itself’ of software and hardware is the avocation.

To do anything with an invocator, a user must know the avocations it offers. All invocations answer avocations. A keyboard offers an array of characters; an application offers a whole set of features. Avocations prescribe the limits of the invocations that may be performed using a particular invocational assemblage. However, avocations do not limit users to predictable pre-programmed sequences of technical operations. An avocation constructs what Adrian Mackenzie calls a ‘margin of indeterminacy’ (2002: 26). It constrains the domains of possibility, but remains open to be further informed. The keyboard only offers a limited number of characters, but this still leaves open an enormous set of things that may be typed.

Avocations provide the languages in which invocations are spoken, the platforms on which they are made, and the vectors along which they are articulated. Invocator platforms are built and stabilised in layers — application, operating system and hardware. A user’s invocation must cross all avocational layers to take effect. If a user tries to utter a command that is not supported by any one of these layers, it will have either no effect at all, or, worse, produce an unanticipated result. Mnemonic and graphical conventions are therefore higher-level avocations that help users compose invocations that will perform correctly.

Avocations are not simply technical procedures. They draw semantic connections with wider cultural fields. Avocations serve to generate the desires, imaginations and identities of users. Word draws on typographic conventions (bold type), metaphors from art (palettes), and even work routines (revisions). The Dance Dance Revolution game draws, of course, on the cultural practices of dance. Avocations operate at the intersections of individual intentionality and collective expectation. When users answer avocations by making invocations, they identify themselves as the source of an action. They feel the invocator actualises their own intention. However, to a certain extent their invocation puts into action possibilities already laid down by avocations.

The avocation is a minor form of vocation. To find a vocation was once primarily a theological concept — a sense that you were chosen by God to perform religious work. The term was secularised to refer to any individual who feels particularly suited to a profession. When subjects take up a vocation they internalise collective conventions, expectations and stereotypes and turn them outwards again. The vocation determines both a person’s internal sense of self, and how others perceive them. Vocations are crucial in securing social power: if someone is perceived as having been called to a position of authority, their power is legitimate.

Weber identifies the importance of the vocation in establishing the legitimate use of force. He distinguishes three forms of authority: traditional, charismatic and legal. Traditional authority such as royal inheritance invokes the past as its right to exercise power. Charismatic authority is invested in ‘the authority of the extraordinary and personal gift of grace (charisma), the absolutely personal devotion and personal confidence in revelation, heroism, or other qualities of individual leadership’ (1946: 82). Legal authority is based in the law and rational bureaucracy. In this case, an individual’s authority is invested in his or her professional position, rather than in any claimed link to the eternal past or to personal gravitas. This form of vocation legitimates an office-holder’s actions, and also operates as a force of predestiny for those in a position of power.

Avocations offer a fourth mode of authority: an invocational authority that calls on technological infrastructures for legitimacy. This can be heard most loudly in the vocational authority of institutional databases: credit card transactions, police records (Chesher, 1998) and social security files. These largely enforce and extend existing power relations. But many avocations are distractions from a central path rather than a calling to a whole system, and it is these forces below the level of the vocation that I’m interested in. Many avocations don’t assert vocations in a total way. However, particularly where social roles are not strongly defined, they do redistribute power. Avocations incorporate implied social positions and facilitate particular modes of praxis.

Avocations are impulses that call people away from their usual activities. They divert users from the path that they would otherwise take to solve a problem. Faced with a series of numbers, rather than resorting to mental arithmetic, or to a pen and paper, a user might decide to enter the problem as invocations into a calculator. Users make pragmatic judgements about the most efficient means of approaching a task. The historical success of invocational media is built on the range and power of avocational offerings that calculators, personal computers and other invocators have made. While each event where a user chooses an invocation to a computer over another means is an individual choice, the outcomes of the millions of tiny choices are dramatic for social collectives.

Affordances and avocations

If layers of avocations make invocations possible, only parts of these are perceptible to users. What users perceive directly are affordances. Graphical interfaces present integrated avocational arrangements that are perceived as complete affordances.

The concept of the affordance was developed by J.J. Gibson (1979) and popularised in the human interface design community by Donald Norman (1990). A user perceives an affordance when he or she identifies the boundaries of an object, and recognises the possibilities for acting on that object. Norman uses the doorknob as an example of an affordance. He argues that users should ideally understand immediately how to open the door from looking at it (1990: 87-91). When the user turns a mechanical doorknob, it physically releases a catch. In this case the affordance directly connects with the physical operations of the mechanism.

In invocational media, though, relationships between input and output are arbitrary, and parts of an affordance are actually invoked. For example, when I move the mouse, the pointer on the screen also moves. But the link between mouse input and invoked screen output passes through several layers of avocations. The pointer movement is a cinematic illusion. That is why I can change my preferences to set the speed of mouse movement, and why the pointer stops moving when the invocator crashes. The avocations create the affordance relationship. Signals from input devices are sampled as user invocations, temporarily stored and interpreted by operating system and application layers before being manifest again as perceptible images in output devices.

The way I experience it, my word processor is one cohesive environment, with no significant boundaries between software and hardware. If I want to make some part of my text appear bold, I select it with my mouse and click on the bold icon in the formatting palette. Parts of these affordances are invoked as images on screen: the palettes, icons, the pointer and the selection. However, parts of the affordances are actual physical objects: the mouse and the screen itself. The affordances in invocational systems must necessarily cross several avocational layers. The objects’ behaviours are also partly invoked, according to a ‘look and feel’ designed by programmers. The circuit between user perception and action is completed across the invocational interval: a CPU constantly waiting for inputs, reading and writing to memory, and sending outputs. This ongoing reconstruction of the layered invoked environment is what gives the invocator its fluidity and flexibility. It also presents designers with particular challenges in creating affordances that behave as though they were object-permanent.

Human interface designers work with consistent conceptual models to give users a sense of stability in the midst of this fluidity and indeterminacy. Their designs articulate users’ invocations through the many layers of the invocator so users perceive they are working with consistent and stable affordances. With the ideal interface ‘the method of operating it would be apparent — to most people in the culture for which it is intended — by merely looking at it’ (Raskin, 2000: 63). It would create an affordance effect that directs users towards particular modes of interaction that appear as natural. When an affordance appears natural, the avocational voice is at its strongest.

Many software applications assemble a cohesive set of affordances and avocations that correspond closely with a social vocation. Desktop publishing systems gather together and integrate avocations for the vocation of the graphic designer or typographer. A financial package has affordances and avocations to suit accountants. While some avocations are generic (search and replace, cut and paste), some applications gather features to support a particular profession. Since the 1980s both accountancy and design have been transformed by these invocational vocational systems. Graphic designers’ and typesetters’ work is now performed predominantly as invocations, rather than using more heterogeneous mechanical, chemical and photographic processes. Electronic communication, calculation and databasing of financial information have been tied up in significant changes in the way that accounts and transactions are handled.

The game Sam was playing is at the far end of the scale of strong user avocations, or dedicated computers. It offers a limited range of functions, and special customised input and output devices. At the same time, it is a machine for play, and the social force of the invocations it performs is not substantial. The PC of the early 2000s, on the other hand, remains a system with relatively weak user avocations. It is a general-purpose machine, even if its avocations are biased towards business, document production, record keeping, network communication, education and home computer games.

However, the image of an isolated user with a device gives a misleading framing to questions concerning human-computer interaction. People are already at home, at work, at a museum or outside a cybercafé before they become users. And they don’t use invocators on their own. The question is: How do they become users? How do they develop individual and collective behaviours and routines that depend upon particular avocations? How are collectives of people, architectures and other cultural practices reconfigured to incorporate invocational media? The topology of invocational environments enables, structures, and constrains what users can do and how they do it. If such hardware and software standards tend already to position users in relation to invocational events, why do people still adopt them? These forces are not violence or intimidation, and not quite coercion. Users produce themselves by taking up the power of invocation.


Computers are machines that generate users. In learning to invoke, a user refrains the invocator’s avocations. The layered avocations in invocator hardware and software incorporate strategies to create users: careful visual and ergonomic design, help systems, interface conventions and so on. These features of the invocator itself are complemented by advertising, documentation, training, support groups and so on that provide images of prototypical uses and users. Usergenesis involves entire assemblages — users, machines, texts, and the social situations around them.

Usergenesis is always an ongoing process that sees constant adjustments in both the device and the user. Users modify and customise their hardware and software configurations, while adjusting to the device’s capacities and limitations. I change the settings in Microsoft Word, and I change how I write and edit according to the features of the program. If I want to add page numbers to the bottom of every page of this document, I can invoke automatic page numbering. This avocation works because page numbering is already a cultural convention in document production. Using this feature will save me the work of adding these numbers one at a time, and reduce the likelihood of error. However, it requires me to negotiate the ‘Header and Footer’ avocations, calling me away from my task of writing, and forcing me to try to understand the avocational structures of document sections, page layout mode and page number formats. Over time, my work practices become bound inseparably to this particular word processor and how it handles page numbering. I am becoming a user.

In using an invocator, I am constantly repeating similar, but never identical, processes and events. I type keys, but rarely in the same order. I move the mouse around, always following slightly different paths. I choose menu items, but always as part of broader tasks. These are constant refrains, but not repetitions. Over time, and many refrains, I come to feel at home with the invocator, and complete the tasks of my vocation without thinking about how I am doing them. This is how I am becoming a user.

Refrain is a term borrowed from music, but it can be extended to all manner of regularities and rhythms, repetitions and continuities. Refrains mediate ongoing relationships between heterogeneous components, living and non-living. The user comes into existence through refrains that exchange avocations for invocations:

. . . the refrain is not based on elements of form, material, or ordinary signification, but on the detachment of an existential ‘motif’ (or leitmotiv) which installs itself like an ‘attractor’ within a sensible and significational chaos. The different components conserve their heterogeneity, but are nevertheless captured by a refrain which couples them to the existential Territory of my self. (Guattari, 1995A: 17)

Users do not robotically follow scripts written for them within computer programs. Avocations are arrangements of forces that participate in, but do not determine, invocational events. Invocations emerge within margins of indeterminacy that remain open between users and invocators. Invocations are not pure technical events, but mediated refrains that draw on users’ desires, intentions, affects and obligations, and invocator inputs, variables and conditional events. Invocations pass along the lines of force of avocational systems to establish connections with ‘outsides’.

Invocational media leave open spaces or intervals that users fill. There are four distinctive kinds of user-avocation:

  • material interfaces are the physical objects that users come face to face with, such as keyboards, mice and screens. Industrial design configures the physical spaces around computers to encourage users to take up certain physical positions in relation to the machine;
  • invoking languages are the linguistic and quasi-linguistic formations and standards that constitute the virtual language system (langue) with which users perform invocations (parole). These delimit the statements users can make in specific programming languages and environments;
  • user avatars are the logical and physical entities like the cursor that stand in for the user in invoked environments;
  • user modes prescribe what avatars can do from moment to moment. With logins, passwords and other strategies, user modes help secure the identity and enforce the privileges of different classes of user.

Even within an apparently simple invocational relationship such as writing a document on a word processor, there are several different levels in play — bodily, linguistic, political and psychological. Picture me again writing this document. There I am, a user in-the-world, sitting at a desk typing and waving a mouse around. There I am again, a reader/writer working with several symbol systems including English, MacOS and Microsoft Word. And that’s me playing the academic, employee, author, citizen, customer, and probably some other roles. Here I am once more, thinking about this, as a rational thinking subject, or a bundle of subconscious urges ruled by desire and Oedipal dramas. In each case, the presence of the computer complicates the picture. It is not a question of how computers change the user, or how users change the computer, but what collectivity emerges through the invocational refrains between users and avocations.

Material interfaces: the physical machine

All computer use involves some interface between hardware and wetware: invocators and bodies. The human is the interval between computer output and input devices. The shape and physical configuration of most invocators requires that users literally take up certain positions in relation to the machine. Like many technologies, invocators are engineered to suit the capacities of human bodies. The mouse fits users’ hands. The screen displays images for a user’s eyes, refreshing at 60 to over 100 times each second. The keyboard is the width of two hands, with pads for users’ fingers. Users internalise QWERTY key positions until typing is second nature. In all these ways, the invocator presents a world to the scale of the human body. It is in tune with how users perceive, learn and act.

The design of the modern conventions for interactive invocating is based not only on designs for the machine, but also on designs of the augmented human. Doug Engelbart is the most prominent proponent of the vision of the augmented human.2 He has based his technological life’s work on a vision for humanity. From the time of the Second World War he proposed that the invocator should augment institutions and humans, and that it offered ‘huge potential gains for mankind’ (Engelbart, 1988: 188):

Metaphorically, I see the augmented organisation or institution of the future as changing, not as an organism merely to be a bigger and faster snail, but to achieve such new levels of sensory capability, speed, power, and coordination as to become a new species — a cat. (Engelbart 1988: 188)

His thirty-year ‘framework’ for computer development began as a humanist vision of social evolution. Faced with exploding complexity after the Second World War he believed that ‘mankind’ was ‘in trouble’, and he hoped to find ways of ‘[b]oosting mankind’s ability to deal with complex, urgent problems’ (1988: 189). The solution came to Engelbart in a flash of inspiration:

Ahah– graphic vision surges forth of me sitting at a large CRT console, working in ways that are rapidly evolving in front of my eyes. (1988: 189)

This vision drew from his experience as a radar operator during WWII. It became a more general vision of an entirely screen-based worldview. The dream took shape in the Online System (NLS), which was demonstrated at a famous conference presentation in San Francisco in 1968. The demo was the first public appearance of the mouse, and introduced the germ of the conceptual framework that would become the standard invocator-user system.

At one moment during the demonstration, the computer text and images were directly superimposed over the user’s face. Here was not only a vision of the computer of the future, but also a vision of the cyborg human. While the human was ostensibly at the centre, in command, simply augmented by the machine, this image told a different story. The dynamic text and images ran over the user’s face. The human face mingled with the invoked face of the machine. The augmented human was not simply empowered by this relationship, but transformed by it.

If human and machine are to coexist they need to function at the same speed. Invocational systems became present for users only once they achieved the speed to generate simulations in real time. Users have limited tolerance for delays in what they invoke. They quickly become frustrated at losing responsiveness. There is a threshold passed when the invocations are fast enough to give a sense of presence. When it is fast enough, the computer gives a feeling of open-ended possibility. The capacity to invoke virtually instantly through the interface is primary in the user experience.

Arcade-style game players demonstrate this present-time sense in the twitch: a constant, rapid but almost indiscernible series of movements that are the minimum required to handle the fast-paced sensorimotor challenges the game presents. Playability is the primary aesthetic, and centres on maintaining a sense of spatial and temporal engagement within a field of play. Players respond to system events at speeds at the boundaries of human reaction time. The best games give a seamless illusion, where players’ actions and their awareness of their actions merge. The subjective experience of this timescape is one of losing touch with clock time. Players talk about not noticing the hours pass. Invocational time is the perpetual present.

In many contexts, software designers develop avocations that manage users’ sense of time. The progress bar, now a standard interface element, indicates the probable duration of the current process. Similarly, the hourglass of Windows, or the Mac’s watch cursor, signifies a suspension of the power of invocation, but promises imminent relief from that pause (Apple Computer, 1992). The obsession in human interface design with ‘intuitiveness’, which promises to reduce the time it takes to become a user, is symptomatic of the present-centrism of the technology. The mobile phone and personal digital assistant have started to take the place of pocket watches and diaries. People are constantly on call, as well as being on time for appointments.

All these designed experiences are planned and built. Avocations are generated through social and technical processes. When teams build avocations they employ strategies or programs for creating users. Steve Woolgar describes this process as ‘configuring the user’. During the 1980s he observed the development process of a new series of 286-based personal computers. His ethnographic study traced the design of the product (known only by the acronym ‘DNS’) from inception through to release. He watched the interplay between different sections of the company, all negotiating over images of the ideal user (whom he compared to the reader of a text):

In configuring the user, the architects of DNS, its hardware engineers, product engineers, project managers, salespersons, technical support, purchasing, finance and control, legal personnel and the rest are both contributing to a definition of the reader of their text and establishing parameters for readers actions. Indeed the whole history of the DNS project can be construed as a struggle to configure (that is define, enable and constrain) the user. (Woolgar, 1992: 69)

Among the design teams there was a range of conceptions of the user. The hardware engineers worked at a distance from the final users, and had a very abstract understanding of their needs. Technical support staff, who had direct experience of common problems of users, were scornful of these engineers, who seemed indifferent to the problems that their design choices would cause users. A technical writer expressed her surprise at the marketing department’s unsophisticated conception of the profile of future users (1992: 70). Over time, though, an overlapping, but not entirely shared, image of the ideal user was negotiated through processes of collective decision-making and usability testing.

Throughout, the image of the user was highly fluid. Neither the user (who was inexperienced), nor the machine (which was incomplete), was settled or established until quite late in the project. Woolgar interprets the entire design process as a struggle to capture, fix and control the user, and to define the computer as a distinct object in relation to that user. He compares designers’ objectives with a writer’s desire for seamlessness. Designers wanted to create the impression that the computer was an object with its own integrity, and not just a collection of components. He notices how the engineers delayed the usability trials until the machine had a proper case. Although they themselves regularly worked with the boxes open, the engineers perceived it to be important for end users that the box be sealed. It defined the boundaries of the machine, like a book’s covers distinguish one work from another. This was part of ‘black boxing’ (or beige-boxing) by which much scientific and technical work becomes invisible (Latour, 1999: 304).

Just as readers can have different understandings of the same written text, different users read the computer differently. However, readers’ paths are guided and constrained by the writers’ strategies and embedded assumptions. The success of a text is measured by how many different readers find uses for the text. The historical timing of Woolgar’s research is significant because it was during the late 1980s that many of the standard user conventions were being established. At the time, responsibility for designing user interfaces was shifting from engineers to designers. The work was shifting from being seen predominantly as a science to an art.3 Many came to see that designing user experience was about creating feelings and sensations as much as providing functions. That is, while the computer had to work as an engineering system, it also had to work as something that was sensed, read, and enacted.

Software changes at a faster rate than the physical hardware of computers because it invokes machines. More powerful software makes the black-boxed machines simultaneously more and less ‘black’. They become less black because software allows users to customise the machine in apparently unlimited ways, and to use it in wider domains. But in the same move, they become blacker. As software becomes more userly,4 exactly what users invoke becomes more obscure. It escapes being read. And more than this, judgement of contemporary systems is indefinitely deferred by the perpetual promise that the next version of software… the next hardware upgrade… the next generation of machines… would resolve any limitations. But these promises are unachievable. Invocational media always incorporate avocations with implications for restructuring psychological, cultural and social practices.

Invoking languages: writing users

In Computer Lib, Ted Nelson promised that the computer would be ‘a completely general device, whose method of operation may be changed, for handling symbols in any way’ (Nelson, 1974: 10). Nelson’s faith in the absolute versatility of the general-purpose computer extended Von Neumann and Turing’s image of the universal machine. The concept of the universal machine underestimated how complex and culturally loaded programming would become. The promise of complete generality turned out to be true for users in only a very limited sense because of the force of avocations. Much of Nelson’s book concentrates on programming skills. A decade later, when invocators had become consumer devices, the vast majority of users never considered learning FORTRAN or even BASIC. Instead they chose to use commercial games and office software like spreadsheets, word processors and databases, and later, web browsers and email clients.

Users almost never invoke the CPU directly in its own language (machine code), but through the mediation of several layers of hardware and software. It could be said that everything experienced on a computer is already written. Programmers have already conceived and written every avocation, every behaviour, and every function that a user invokes. Like the reader of a piece of writing, users unleash and actualise a virtuality written into the work. Also like writing, programs can be designed to encourage certain conceptions of reality. If the book is ‘a machine to think with’ (Richards, 1960: 1, 1924 in Iser 1974: 45), the computer must be a supercharged instance of such a machine.

Computers established a new distinction between ‘natural language’ and ‘computer language’. But how close is the parallel between computers and natural language? Social semioticians argue that language is always a social, not an individual, phenomenon. In a sense, you don’t speak language — it speaks you. To make a statement you have to borrow words that are already circulating within communities of speakers and listeners who use that language. The meaning of any statement you can make is possible only because it will make sense to others. Any individual utterance is only ever a part of a far broader collective situation:

Any utterance, no matter how weighty and complete in and of itself, is only a moment in the continuous process of verbal communication. But that continuous verbal communication is, in turn, itself only a moment in the continuous, all-inclusive, generative process of a given social collective. . . (Volosinov, 1985: 62)

The same is true for invoking things from a computer. When a user invokes an event, who is the source of what is invoked? Users delegate their commands to a technical apparatus. Like an utterance in language, an invocation is not simply generated from nowhere by the individual: it selects from and arranges an available repertoire of avocations.

But is that the same as language? In some ways it does seem to be like communication between humans. Natural language requires that both speaker and listener speak the same language. This requirement seems to be repeated in invocational media, because when computers transfer data they must use the same protocols.

On closer examination, however, language and computers are not equivalent, but distinct and complementary. Data moves through its own layers that don’t duplicate the linguistic level, but add to it and remain largely independent of it – invocationary acts. A failure to transfer data is not equivalent to a misunderstanding between humans. Their margins of indeterminacy are different in kind. A human listener will tend to interpret and accommodate things that are not fully comprehended (in fact it could be argued that complete comprehension is rare, if not impossible). In the operation of data transfers, techniques such as checksums and error correction procedures ensure precise transmissions of digital data, but this is not equivalent to understanding. On top of these differences, computers open up a range of ‘natural language’ situations that would otherwise not be possible or practical, such as e-mail, online discussion, mobile text messaging and live chat. These facilitate different social situations and modes of social interaction.
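The difference between verified transmission and understanding can be made concrete. The sketch below uses a deliberately simple additive checksum (real protocols use stronger algorithms such as CRC32; the function here is only illustrative): a receiver can confirm that every byte arrived intact without interpreting a single one of them.

```python
def checksum(data: bytes) -> int:
    """A toy additive checksum: the sum of all byte values, modulo 256."""
    return sum(data) % 256

# The sender transmits the data together with its checksum.
message = b"invocation"
transmitted = (message, checksum(message))

# The receiver recomputes the checksum over the received bytes and compares.
received, received_sum = transmitted
print(checksum(received) == received_sum)   # True: transmission verified

# A single corrupted byte is detected, yet nothing has been 'understood'.
corrupted = b"invocatiom"
print(checksum(corrupted) == received_sum)  # False: retransmission requested
```

The comparison either succeeds or fails; there is no margin of indeterminacy, and no interpretation of what the bytes mean.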

Another difference between natural language and invocational standards is their political and economic status. Many avocations are proprietary — copyrighted or patented as commercial intellectual property. Unlike natural language, entire systems of avocations are owned and protected by patent and copyright laws. They are both social conventions and private property. Furthermore, many invocations on computers aren’t language as such, but algorithms, images, simulations and other symbolic activities. Theories of language won’t easily map across onto theories of users’ relationships with computer events.

Programming languages like FORTRAN or C++ are not languages in the usual sense at all. They are systems of mnemonic machine instructions with only a distant resemblance to, or overlapping with, human languages. The process of using the computer is a kind of reading, but users don’t read the programs themselves. Before programs can execute, they are ‘compiled’ into machine language. Users read the behaviours of the executing compiled applications — a double separation from the program code. What users read includes not only written components of ‘user dialogue’, but also a full range of behaviours in the invoked environment. Users read invocations as they appear as outputs, not the avocations that make those events possible.

This avocational alienation produces a paradox, or trade-off, as users move away from the fine avocations that programmers invoke to end-user avocations. Programmers invoke with a high degree of precision using machine code. Each command in a processor’s command set, or even commands in assembler languages, correlates with circuits inside the machine. By contrast, users invoke programs written by other people. In using software applications, users take on faith most of the detail of the commands they make. Old-time hackers hold ‘lusers’ in contempt (Barry, 1993: 158). They see graphical user interfaces as a form of pandering to the unworthy and incompetent.

But even the lowest-level programmer is a luser. They rely on the chip and hardware designers, code libraries and operating systems as much as the users rely on them. Everyone relies on beige-boxed engineering techniques, components and infinitely extended techno-cultural baggage. Every invocation answers avocations. The outputs always remain literally behind a screen. Invocations are always articulated in the voices of others. Users are always also being used. In these ways, invocations are alienating, which is why so much work is put into designing them not to seem that way.

Human-machine relationships resemble, or in fact extend upon, social practices of delegation. Delegations pass as undulating folds through layered structures of avocational architectures and into worlds of social events. An operating system typically sits on top of hardware; applications sit on top of the operating system. Users perform their work in another layer again. Each of these layers is itself layered — an operating system might handle file management, networking, printing, device control and so on. Because each layer is logically separated, black-boxed from the other layers, it is (at least in theory) possible to insert new components into any level. As long as that layer does its delegated and delegating jobs in relation to other layers, the system will keep functioning. Over time all the components in the entire system could be replaced, but it supposedly remains the ‘same’ system.
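That layered delegation can be sketched in a few lines of Python (the class and method names are invented for illustration, not drawn from any real system). Each layer exposes a small interface and delegates downward, black-boxing the layer beneath:

```python
class Hardware:
    """Bottom layer: raw device control."""
    def write_block(self, data: bytes) -> str:
        return f"wrote {len(data)} bytes to disk"

class OperatingSystem:
    """Middle layer: file management, delegating raw writes downward."""
    def __init__(self, hardware: Hardware):
        self.hardware = hardware
    def save_file(self, name: str, text: str) -> str:
        return self.hardware.write_block(text.encode())

class Application:
    """Top layer: the user's field of action, delegating persistence."""
    def __init__(self, os: OperatingSystem):
        self.os = os
    def save_document(self, text: str) -> str:
        return self.os.save_file("document.txt", text)

app = Application(OperatingSystem(Hardware()))
print(app.save_document("Hello"))  # wrote 5 bytes to disk
```

Swapping Hardware for a network-backed component with the same write_block interface would leave the upper layers untouched: the ‘same’ system, with a component replaced.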

This strategy of layering dramatically accelerates the processes for developing new machine logics. Software engineering becomes a new kind of writing. Hardware developers can also work independently to build new generations of general-purpose equipment that speed up (quicker access) and expand the capacities (storage, resolution, reliability) of the generic invocational medium. At the same time, software developers refine techniques for procedural, logical, algebraic, arithmetical and other transformations of data.

Platforms add higher levels by folding avocations together into compounds. Each layer works with a higher order of abstract entities and events. The development of the string and the integer establishes a capacity to invoke numeric and text variables by name instead of memory address. The use of arrays allows groups of similar variables to be held together as a logical table, which can be looked up easily. Sequences of avocations are named as a function, which can be used and re-used with different variables. Every one of these generic abstractions embodies certain historically specific conceptual assumptions.
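These compound abstractions are visible in any modern language. A brief Python sketch (the names are illustrative) shows a named variable standing in for a memory address, an array holding similar values as a logical table, and a named function re-used with different variables:

```python
# Named variables invoke values by name instead of by memory address.
greeting = "Hello"   # a string
count = 3            # an integer

# An array (here a Python list) groups similar variables into a logical
# table that can be looked up easily by position.
scores = [95, 82, 77]
print(scores[1])  # 82

# A sequence of avocations named as a function, usable and re-usable
# with different variables.
def average(values):
    return sum(values) / len(values)

print(average(scores))
print(average([10, 20]))  # 15.0
```

Each of these constructs packages historically specific assumptions about what counts as a value, a table or a procedure.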

The need for layers relates to divisions of human skill and responsibility as much as to technical imperatives. Over time, the job of adding new features within a layer becomes more specialised, and the human roles associated with each layer are formalised as a vocation in itself. Programmers develop not only applications and operating systems, but also meta-tools like compilers and other development tools that assist programmers to create applications. Computer science has gradually differentiated into specialised sub-fields.5 These sub-disciplines formalise the tasks involved in creating and managing avocational structures, virtual machines and invoked environments.

In trade-offs of precision for power, invocational systems tend to build higher and higher layers as they mature. While using a word processor concedes many choices to Microsoft or other software companies, for practical purposes a writer has no other choice (any more than there is for building one’s own car). Powerful features that once seemed like magic become sanctified as invocational religious standards. Some standards are set by official standards organisations like the ISO (International Organization for Standardization), but more often they emerge by the force of ‘installed base’. If enough people come to rely on the stability provided by proprietary layers, they become the de facto standard — the ‘reference platform’, or just the ‘environment’.

Between the 1950s and 1970s IBM held the dominant position in controlling standards. During the 1980s and 1990s Microsoft took over this mantle. Invocational religions provide seamless, consistent userly environments. These are always to some extent deals with the devil. They offer users comfort and power, but at the same time tie them to dependence on the developers of the system and those who control the standards.

When they combine to form larger structures like operating systems, programming languages and applications, some avocational systems achieve a totalising, interpellating force. These higher-level strata invariably make connections into social institutions and vocations in the wider sense. FORTRAN (FORmula TRANslator) is for scientists who already know mathematical equations. COBOL (COmmon Business Oriented Language) is for business. It is more suited to handling large quantities of data with relatively simple algorithms. COBOL programming standards strictly separate data from procedures, and require clear statements of authorship (see upcoming section on user modes). BASIC and LOGO, by contrast, are less structured languages designed to be easy to learn and rewarding for users (Walter, 1993; Appleby, 1991; Biermann, 1997). But even they carry a certain ideological load.

Seymour Papert’s LOGO language was designed as part of a humanistic project to make creative use of computers in education, providing a softer alternative to drill and repeat or instructional paradigms (Papert, 1980). However, Papert’s vision was unapologetically a strategy to produce users in a certain image:

. . .computers can be carriers of powerful ideas and of the seeds of cultural change. . . they can help people form new relationships with knowledge that cut across the traditional lines separating humanities from sciences and knowledge of the self from both of those. (Papert, 1980: 4)

Papert was only one of those at the time (the early 1980s) advocating extending computer use beyond science and engineering practices. The conventional view of users began to shift (as Woolgar also documented). Computers became ‘information appliances’ rather than an obscure form of deep magic. Hackers became outlaws (Sterling, 1992). A term that once meant ‘inquisitive and independent-thinking technical genius’ came to mean ‘anti-social outcast bent on wreaking havoc on legitimate system users’. At the same time ‘power users’ gained a bit more respect. With visual tools, using became more like programming. Faster hardware and more ‘mature’ software gave users capacities to perform more and more work on their desktop machines. The most dramatic shifts came when software companies began realising that their task was not so much designing the machine in hardware, but designing the user in software.

Ben Shneiderman’s book Designing The User Interface (1986) sits in the middle of this transition in computer design that helped bring users into the world. He starts by summarising much of the scientific empiricist work on user interfaces from ergonomics, cognitive science and computer science. Much of this research concentrated on human capacities rather than on the machine. It measured perceptual abilities (1986: 19), human ‘central processes’ and ‘factors affecting perceptual motor performance’ (22). It took measures of short and long term memory, problem solving, decision-making and time perception (22). It defined the tolerances within which the engineers were working with human components of the system.

As the book progresses, though, Shneiderman develops more qualitative ‘theories, principles and guidelines’ (1986: 41-81) about designing for users. He emphasises principles of consistency, informative feedback, easy reversal of actions and reducing short-term memory load (61-62). Shneiderman’s most celebrated contribution is the principle of ‘direct manipulation’ (180-223), which he says creates ‘the really pleased user’ (180):

The central ideas seem to be the visibility of the objects and actions of interest, rapid reversible incremental actions, and replacement of complex command language syntax by direct manipulation of the object of interest. (Shneiderman, 1986: 180)

Direct manipulation is one of the distinctive features of the WIMP (Windows, Icons, Menus and Pointing device) interfaces that had become the dominant standard by the mid-1990s. This directness is, of course, only possible by a radical move towards something indirect: the creation of a ‘user illusion’. Graphical user interfaces allow users to generate computer commands by manipulating affordances rather than writing direct invocations. Commands are replaced by gestures and menu selections.

In the 1980s a global market for commoditised hardware and ‘shrink-wrapped’ software grew. The discipline that was known as ‘human factors’ research came to be called ‘human-computer interface design’ or ‘usability research’. Rather than studying human or machine capabilities independently, it concentrates on emergent dynamics between the two. Research also moved towards interdisciplinary approaches. Interface design became an art more than a science (although empiricist user testing was still used to quantify the merits of different design decisions).

Userly invoked universes provide stable platforms upon which users perform higher-level activities: writing, playing, emailing, researching, etc. Avocational features are (relatively) integrated and consistent. They give users an appropriate choice of abstract entities and functions for conventional tasks. These designed environments are not only software designs, but also user designs. Moves toward ‘user friendliness’ established a range of consistent conventions about how and what users can command within invoked worlds. Among these practices are new ways to represent and perform the self.

Avatars

Avatars are entities within invoked environments that function as the grammatical subject in invocational statements. They double the user during invoked events. Avatars appear in a range of forms. Sometimes they are visible sprites, or fully 3D modeled bodies, while at other times they are only implied by a point of view. In still other cases they are logical entities: names, numbers or placeholders within simulations. Avatars perform as a special category of actor in the chains of human and non-human actors in invocational systems (Latour, 1991: 110). When a user’s body manipulates a hardware device, the avatar tracks that movement to act as the stand-in for the user within the invoked environments.

The archetypal instance of the user avatar is the cursor, which marks the text insertion point. First used in command line interfaces such as DOS and UNIX, it also features in word processors and text editing software. The term ‘cursor’ is of Latin origin (OED). It means ‘runner’, and also applies to the pointer in a mechanical slide rule, where the cursor position marks the progress of a calculation. The computer cursor is a placeholder or progress mark in a position within gridded rows and columns of text.

The user directs and controls the cursor as an on-screen token or agent. The cursor functions as a virtual tool with powers to perform the specified set of tasks prescribed by user modes. Its shape as well as its position has significance: in one mode it inserts text, in another it types over text and replaces it. The trained user and cursor perform as a singular cyborg assemblage, jumping across the screen, transforming its textual landscape.

The graphical user interface creates even more complex relationships between user and avatars — as discussed earlier it generates affordances that cross multiple layers of avocation. The relationship between a pointer and its mouse parallels, but differs from, the link between the cursor and the keyboard. Mouse users perform relatively gross gestures rather than precise discrete decisions to invoke effects. By sampling over time, the computer follows the arcs of a user’s extensive gestures. The introduction of the mouse provoked a prolonged controversy during the 1980s over whether command line interfaces (CLI) were better than graphical user interfaces (GUI).

The so-called ‘interface wars’ represented two sets of values on user subjectivity: a precise, rational, masculine CLI against the fuzzy, bodily, feminine GUI. It was not just that the Macintosh’s ‘cuteness’ was perceived as ‘unmanly’ (Gelernter, 1998: 36). The Mac’s implied user was quite different to the PC’s. A mouse seemed to bring back everything that digital electronic engineers had worked to eliminate — imprecision, ambiguity, embodiment and analogue meaning systems. Ultimately, neither side won the war completely. Graphical interfaces support visual modes of thinking/acting, where commands and keyboard shortcuts support more verbal modes. Some tasks (working with images, grouping objects, mapping relations) are better handled in tactile/visual modes using invoked images, while others (defining conditional instructions, editing text) are better articulated as verbal invocations.

The most diverse range of avatars is found in computer games. As with other forms of play, computer games invariably entail taking on roles — invoking the identity of others. Game avatars take on more culturally specific personae than cursors. They borrow from science fiction, sport, military forms, games and puzzles, and even dance. The space ships in Space War, Space Invaders and Asteroids are relatively simple to render, but capture a cultural imaginary of radar screens and science fiction laser battles. A game scenario creates a world in miniature, with physical and ethical boundaries and prescribed roles for avatars to play. id Software’s Doom gives players a first person view down the gun barrel of a warrior-hero’s weapon. Its science fiction scenario invokes a moral universe that justifies slaughtering anything that appears in front of your gun.

The identification between player and avatars can be quite intricately articulated. Each game genre has its own modes of connecting players with the game world. In many games switching affordances is an important part of the gameplay: choosing weapons, changing cars or even alternating between characters. Some simulate sports players, or parts and multiples of players. The bat in Atari’s Pong operates as part of a player: a metonymic table tennis paddle. In the virtual soccer arcade game Virtual Striker 2, on the other hand, there is a projected shifting user identity that floats around as control over the ball jumps from on-screen player to player. The system selectively charms players so that the one closest to the ball is always under the user’s control. An arrow shifts to the active player, mimicking a television viewer’s capacity to find a collective identification with a whole team as play progresses.

In some ways the proliferation of conventions for user-avatars, with more powerful avocational forms of expression and affordances, gives users new powers. However, at the same time users face a new set of constraints, based not on physical coercion, but on implicit and invisible control over their avatars through user modes.

User modes and the individual

In presence-based cultural forms, a subject who is asked to give evidence or write a signature is bound to be in only a particular place at a particular time. By contrast, users can act when they are not present. If subjects are increasingly present as avatars, and perform social actions as invocations, the question arises of how their behaviour is regulated — how they are made accountable. Social controls have predominantly been imposed on the body. Invocational systems often impose control with methods that seem more liberal than disciplinary systems such as the architecture of prisons, hospitals and schools (although these institutions are among the heaviest users of invocational media).

Invocational media police social roles by controlling user modes. A user’s field of action is quietly constrained by the modes to which they have access. They can (at least in theory) be held accountable for any actions made under their name. These ‘control society’ mechanisms are no longer based on containment. Instead, users leave data trails behind their transactions and constitute a number of partial identities as profiles in different database systems (Mackenzie, Sutton and Patton, 1996). Disciplinary institutions haven’t disappeared altogether, but control societies have started to offer new modes of control.

Control undermines the liberal notions of privacy based on the inviolability of the subject. It changes what a subject is. Katherine Hayles observes that until some time after the Second World War the primary questions about the human related to where the subject was physically placed. She traces a cultural and technological transition towards new regimes that privilege pattern and noise over absence and presence (1999: 29). For example, in what she calls the ‘posthuman’ condition the measure of a bank customer becomes the pattern of the password or PIN, rather than their bodily presence in a bank branch. The word processor document exists as invisible patterns on a hard drive, which can be physically located anywhere, or in multiple places, rather than on physical pages with print on them. These posthuman politics are based not only on the pattern/noise opposition, but also on a politics of user modes.

The efficacy of any invocation depends on which user modes are available in a particular state of a particular application on a particular platform. User modes are the implicit or explicit categorisations of avocations that delimit users’ authorised fields of action. In general computer discourse, the term ‘mode’ refers to a ‘particular method of operation for a hardware or software device’ (Chandor, Graham and Williamson, 1987: 304). For example, if a programmer wants to write a program that sends data between two devices, they would need to establish the mode of the connection before the process could start. In ‘byte mode’ data would be sent in one-byte chunks, while in ‘binary mode’ the transfer would be a stream of single bits. Users are unlikely to notice a difference such as this directly, although the programmer’s choice of mode might affect how the system performs. Programming is full of these kinds of decisions.

User modes define what a user can invoke at any particular moment. A familiar example of a mode is the choice between ‘insert’ or ‘replace’ mode in word processing. Insert mode adds new characters at the insertion point, pushing any text to its right further across. Replace mode overtypes text to the right of the cursor. The same physical action by the user has a different effect on the text depending upon which mode is operational.
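The way a mode changes the effect of an identical keystroke can be modelled in a few lines (a simplified sketch, not any word processor’s actual implementation):

```python
def type_char(text: str, cursor: int, char: str, mode: str) -> str:
    """Apply one keystroke at the cursor position under a given mode."""
    if mode == "insert":
        # Insert mode adds the character, pushing text to its right across.
        return text[:cursor] + char + text[cursor:]
    if mode == "replace":
        # Replace mode overtypes the character to the right of the cursor.
        return text[:cursor] + char + text[cursor + 1:]
    raise ValueError(f"unknown mode: {mode}")

# The same physical action, two different effects on the text.
print(type_char("cat", 1, "o", "insert"))   # coat
print(type_char("cat", 1, "o", "replace"))  # cot
```

The user’s gesture is constant; only the operative mode determines what is invoked.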

Any complex software application uses a wide range of modes, but they are frequently hidden. Graphical user interfaces often integrate modes into the design of the software features. One of Apple Computer’s ‘human interface principles’ is to create an impression of ‘modelessness’ (Apple, 1992: 12). For example, a graphics program provides a palette of ‘tools’ and ‘brushes’. Each tool performs a different transformation on the image. One tool creates lines of variable width and shape. Another makes selections of portions of the image. Another adds or erases parts of the image, and so on. Because these modes correspond with ‘real life’ modes, like the difference between using a paintbrush and an airbrush, they don’t seem to restrict the operations of a user. But each has been carefully designed, debugged and made natural.

User modes are implicit to some degree in every avocation in any genre of software. The best way to define software genres is to look at the user modes that are offered.6 Banking computers support the user modes of the customer or account holder from the outside, and administrators, technicians and other bank staff from the inside. PhotoShop has modes familiar to the worlds of photographers and artists. Word processors obviously create user modes for authors, but also for designers, editors and readers. Microsoft Word has modes (or views) for different classes of user: users enter text and formatting in one mode, and preview the appearance of the document in another. The ‘Revisions’ function introduced in Microsoft Word 6 allows editors to make changes that can later be approved or rejected. Each of these modes implies an imagined or virtual type of user who will play the role of author, editor or customer.

Many systems restrict certain modes to privileged groups of users. The user enters a user name and password, or swipes a card and enters their personal identification number (PIN), or even scans their hand or iris using biometric methods of identification. These identification mechanisms are equivalent to signatures on a contract, but have invocationary force. The virtual signature can then be attached to any invocation that the user performs, along with an automatic time and date stamp. Their identity is invoked with any changes they make to the system.
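A hedged sketch of such a virtual signature (the function, field names and hashing choice below are hypothetical; production systems use far stronger authentication and audit mechanisms):

```python
import hashlib
import time

def sign_invocation(username: str, pin: str, action: str) -> dict:
    """Attach an invoked identity and a timestamp to a change.

    The user is identified by a pattern (a hash of name and PIN),
    not by bodily presence.
    """
    identity = hashlib.sha256(f"{username}:{pin}".encode()).hexdigest()[:12]
    return {
        "action": action,
        "signed_by": identity,     # the virtual signature
        "timestamp": time.time(),  # automatic time and date stamp
    }

entry = sign_invocation("s1234567", "0000", "update contact details")
print(entry["signed_by"])  # the same pattern recurs for this user's invocations
```

Every change the user makes can carry this signature, invoking their identity alongside the action itself.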

System administrators and developers usually have a special, higher level of privileges, because they are responsible for the overall functioning of the systems. They tend to follow the dynamics and logics of the invocational assemblages as much as fulfilling the requests of those who commission the development. They know the system inside out, and have special rights because of this. Arthur Kroker refers to this group as the ‘virtual class’. As beneficiaries of these special invocational powers, the virtual class agitates to further its own interests. But it tends to present the expansion of invocational systems as though it was inherently in the common good (Kroker, 1994).

In other cases, user privileges reflect the stations in institutional hierarchies. A university’s central computer gives students, administrative staff and academic staff privileges to access the system appropriate to their responsibilities. Each can see or change only specified parts of the total data set. Students can change their own contact details, but only certain staff can add or change grades. Mechanisms that identify operators are essential to any system that tracks other individuals as objects. Student records are only valid when the system has ensured that every change is authorised. Data integrity is critical. Any forceful invocational statement has to be legitimated with the identification of the operator who makes the change.
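The university example can be sketched as a permissions table that delimits each role’s field of action (the roles and fields below are hypothetical illustrations, not any real system’s schema):

```python
# Each user mode (role) is mapped to the fields it is authorised to change.
PERMISSIONS = {
    "student": {"contact_details"},
    "records_staff": {"contact_details", "grades"},
}

def invoke_change(role: str, field: str, record: dict, value) -> None:
    """Allow a change only if the role's mode authorises that field."""
    if field not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"role '{role}' may not change '{field}'")
    record[field] = value

record = {"contact_details": "old@example.edu", "grades": "B"}
invoke_change("student", "contact_details", record, "new@example.edu")  # allowed
invoke_change("records_staff", "grades", record, "A")                   # allowed
# invoke_change("student", "grades", record, "A")  # raises PermissionError
```

The invocation blindly allows or refuses the change depending on the operative mode; no reasoning about the contract takes place at the moment of execution.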

The rights assigned to each operator (and to other subjects as data entities) seem at first simply to manifest agreements founded on Enlightenment political theory and articulated in contract law. Locke helped establish the modern principle that authority should derive not from inherited right, but through operations of reason (Locke, 1988: 267-282). In invocational systems, users (re)establish contractual relationships with the entity that controls the system. However, the reasoned relations between parties to contracts become manifest as Boolean variables, and ultimately as switched signals in integrated circuits. Invocations blindly allow changes only if particular modes are in operation. This blindness, and the blinding speeds at which invocations operate, start to expose the contradictions and limitations of liberal subjectivity. How can users remain stable at the centre when processes follow invocational dynamics?

Struggles to define the user

The problem of the user resembles the ambiguous status of the author in literate cultures. There is a user-function in the same sense that Foucault talked about an author function (Foucault, 1977). Users became a software category in the time-sharing systems in the 1960s. Time-sharing supported several simultaneous users of the same machine. Like the origins of authorship, logging on enforced user accountability. It also gave users special rights that separated their own work from others. However, most of these systems were not particularly secure. They made it easy to play pranks. These started to expose more fundamental problems with identity as simulation became just another space for social interaction.

The emergence of unsecured invoked environments, most notably the Internet, and even the commercial and community-based online services that preceded it, gave many people direct experience of breakdowns in boundaries of identity in invoked environments. A ‘crisis’ in the liberal humanist conception of identity is a well-explored theme in cyber-cultural literature. Allucquère Rosanne Stone’s The War of Desire and Technology at the Close of the Mechanical Age (1995) explores a series of cases where there are ambiguities about identity. In each example, identities are split, multiple, or unreliable. Part of the breakdown is associated with the capabilities of information technology. When there is no infrastructure to police user identities in invocational systems, users can easily manipulate others’ assumptions and perceptions of their identity.

One of Stone’s stories is a postmodern morality tale about online identity (Stone, 1995, 65-81). It was originally told by Lindsy Van Gelder in an article in Ms. magazine in 1985. William J Mitchell (1995: 177 n11) also refers to the event. A New York psychiatrist, Sanford Lewin, joined an online discussion forum, ‘Between the Sexes’, on CompuServe in the early 1980s. He chose to log on to the forum with a woman’s name: Julie Graham. In this forum he developed relationships with women online who believed they were confiding in another woman. They were more honest and open with him than they would have been if they had known he was a man.

When he finally did reveal his ‘true’ gender, these women felt betrayed and compromised. In the medium of online chat, user names are arbitrary handles that lack the redundancy that accompanies face-to-face interaction. With a log-on name (and a made-up persona that he performed) he had invoked a gender switch. The participants felt outrage at this early breach of faith, but such practices of passing became an expected part of many online spaces. Without systems enforcing a correspondence between social categories and secured user modes, identity in online environments is prone to easy manipulation. Invoked identities complicate notions of subjectivity that have conventionally been based on presence and absence.

In a world where identity is invoked, users have to become pragmatic about the identity of others. Rather than constantly seeking proof of some stable foundation to every person or institution with which they deal, users have to evaluate the thresholds of proof that are appropriate to each invocational transaction. Users also become expert at presenting their own invocable identities: customising their own avatars. These tactical interventions resist the strategic programs of avocational modes, tweaking the parameters of systems of control. Of course, avocational guerrillas tend to be captured again by the virtual class. But for many users, the instability of invocational identity is unsettling and threatening.

Licensing the user

While the Internet seems to support an undifferentiated chorus (or chaos) of invocational voices, many interests rely on regulating and capturing users. The licensed user is the necessary fiction that has allowed the commodification of ‘shrink-wrapped’ software since the 1970s. As John Perry Barlow emphasises, software can be copied perfectly, undetected, at little or no cost. When it is ‘stolen’, the original owner still has it (Barlow, 1995). The commercial software industry was built on a new contractual relationship that was invented to resolve that problem: the licensed user.

Microsoft, the first major software-only company, began by licensing programming languages to microcomputer manufacturers in the mid-1970s (Nash and Ranade, 1994). In his ‘Open Letter to Hobbyists’ of 1976, Microsoft founder Bill Gates argued that software should be protected from copying (Gates, 1976). He appealed to people to stop copying his version of the BASIC programming language, in order to maintain the incentive for programmers to develop new software. This new model of commercial microcomputer software demanded a new range of machineries: sales, distribution, support and enforcement. Organisations such as the UK Federation Against Software Theft and the Business Software Alliance invented and began to enforce the new crime of ‘software theft’:

As the ‘voice’ of the software industry, we help governments and consumers understand how software strengthens the economy, worker productivity and global development; and how its further expansion hinges on the successful fight against software piracy and Internet theft. (BSA, 2001)

The user mode of licensed user is a political and economic category more than a technical one, although software companies have attempted a range of techniques to make it technical: copy-protection encryption, hardware dongles, click-through contracts and databases of registered users. The commercialisation of software applications saw the emergence of new genres of software, new legal concepts and new economic dynamics. Applications became more comprehensive and ‘user friendly’. Software didn’t quite fit existing copyright, patent or trademark laws, so legal changes were passed in many countries to protect intellectual property rights. This new mode also produced a new economy based on controlling a monopoly over standards through market share.

The desire to police user modes continued into the 1990s. The ease of copying software expanded even further with the Internet, extending the threat to other industries that depend upon control over intellectual property. Book publishers and newspapers became uneasy. The music-sharing program Napster alarmed the recorded music industry, inciting a frenzy of legal actions that ultimately destroyed the company. Content industries are alternately excited and threatened by an emerging regime under which it seems all mediated experience might ultimately become invocable.

At the same time, others began to engage directly in invocational politics by working according to different rules: copyleft rather than copyright. The GNU (‘GNU’s Not Unix’) project sustains the tradition of distributing software freely.7 It rejects proprietary standards, arguing not only that they encourage monopolies, but that they create inferior software. Richard Stallman, founder of the GNU project, continues in the Nelsonian tradition of computing power to the people:

My work on free software is motivated by an idealistic goal: spreading freedom and cooperation. I want to encourage free software to spread, replacing proprietary software which forbids cooperation, and thus make our society better. (Stallman, 1999)

In spite of these micropolitical interventions, invocational power remains unevenly distributed socioeconomically. PCs are expensive consumer products with accelerated obsolescence plans. They continue a tendency Raymond Williams saw in the 1970s towards an increasing self-sufficiency of the family combined with increasing mobility: ‘mobile privatisation’ (Williams, 1990: 26). Portable consumer products, rather than public technologies like public lighting and train lines, are becoming dominant.

The demographics of users strongly correlate with pre-existing social power (as well as age). This is significant not only for individuals but for collectives, as more and more ‘normal’ social transactions take place as invocations. Just as pedestrians only came into existence with carriages and cars, the expansion of invocation has the potential to further disenfranchise groups without access.


The processes that create users are gradually becoming invisible. Just as the market regulations that governed private property after the 16th century came to be accepted as the natural order, by the beginning of the 21st century invoked environments were becoming naturalised modes of perception, action and identity. The apparent fluidity of these categories during the historical transitions gradually crystallised as communities of users adjusted to these dynamics. Users developed invocational literacies and cultural conventions that presumed the invoked platforms had always been in operation. Users perceive the world differently, without thinking about it. In the 19th century, perceptions of space and time changed when trains brought distant places closer in time, and travellers took on the subject positions of train passengers (Schivelbusch, 1979). But not only perceptions changed.

If there are many processes generating users, it does not necessarily mean that they produce homogeneous or predictable outcomes. While numerous devices are constructed according to the invocational diagram (the von Neumann machine), they speak in many different avocational tongues and are articulated through very different pathways. Each opens up quite different zones of indeterminacy. In the process of taking up positions as invokers, facing material interfaces, speaking in avocational refrains and embracing avatars, many more users are appearing. But these transformations are heterogeneous and contradictory, reconfiguring some struggles over power and legitimacy, and creating new fields of conflict and collaboration.


1 Dance Dance Revolution was released by Japanese game manufacturer Konami in October 1998. Korean company Andamiro released a clone machine, Pump It Up, in 1999, prompting Konami to seek an injunction for breach of copyright. See the Konami website. (accessed May 2000).

2 Licklider (1960) is the other key figure credited with establishing the goal of using computers to augment humans.

3 This shift is suggested in the title and contents of the influential collection of essays by Laurel (1990).

4 To say that an avocation is userly is equivalent to calling a text readerly (Barthes, 1974). Like the readerly text that offers a single, transparent meaning, the userly avocation offers a feature that can be used with minimal opportunity for modification. It can easily be invoked without question.

5 A report by the ACM Task Force on the Core of Computer Science (1989) identified nine sub-areas that the authors considered to define the field:

  • Algorithms and data structures
  • Programming languages
  • Architecture
  • Numerical and symbolic calculation
  • Operating systems
  • Software methodology and engineering
  • Database and information retrieval systems
  • Artificial intelligence and robotics
  • Human-computer communication

6 Genre theory has been a highly active field in Australian linguistics, cultural studies and literary education since the 1980s. These approaches tend to move away from classical Aristotelian concepts of genre as modes of classification, towards seeing genres more in relation to social action, social process and pedagogy (Knapp, 1997).

7 The open source software movement advocates a direct alternative to copyright law. Copyleft licences not only allow the software to be freely distributed, but also require that developers release their programs with the uncompiled source code, so that anyone else can modify it. This is a direct political intervention to break down the distinction between programmer and end-user modes. The ideal of free software refers to the concept of political freedom, rather than the absence of payment:

‘“Free software” refers to the users’ freedom to run, copy, distribute, study, change and improve the software. More precisely, it refers to four kinds of freedom, for the users of the software:

— The freedom to run the program, for any purpose (freedom 0).

— The freedom to study how the program works, and adapt it to your needs (freedom 1).

— The freedom to redistribute copies so you can help your neighbour (freedom 2).

— The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3).’

(GNU Project web server, 2000)


ACM Task Force on the Core of Computer Science (1989) ‘Computing as a discipline’, Communications of the ACM, Vol. 32, No. 1 (January): 1-5.

Althusser, L. (1971) Lenin and Philosophy and Other Essays. London: Unwin.

Apple Computer Inc. (1992) Macintosh Human Interface Guidelines. Reading: Addison-Wesley Publishing Company.

Appleby, D. (1991) Programming Languages: Paradigm and Practice. New York: McGraw-Hill.

Barlow, J. P. (1995) ‘Selling Wine Without Bottles: The Economy of Mind on the Global Net’, The Electronic Frontier Foundation. (accessed June 2000).

Barry, J. A. (1993) Technobabble. Cambridge and London: MIT Press.

Barthes, R. (1974) S/Z. New York: Hill and Wang, 1988.

Biermann, A. W. (1990) Great Ideas In Computer Science. Cambridge and London: MIT Press, 1997.

Business Software Alliance (2001). (accessed April 2001).

Chandor, A., Graham J. & Williamson, R. (1987) The Penguin Dictionary of Computers. London: Penguin.

Chesher, C. (1997) ‘Digitising the Beat: Police Databases and Incorporeal Transformations’, Convergence, Vol. 3, No. 2 (Spring): 72-81.

Delacour, J. Heyward, M. & Polak, S. (1996) ‘Game Language’ panel at Language of Interactivity conference on the Australian Film Commission website. (accessed January 2001).

Deleuze, G. (1990A) Negotiations. New York: Columbia University Press.

Deleuze, G. (1990B) ‘Postscript on Control Societies’, Negotiations. New York: Columbia University Press: 177-182.

Engelbart, D. (1988) ‘The Augmented Language Workshop’ in Goldberg, A. (ed.), A History of Personal Workstations. New York: ACM Press.

Foucault, M. (1977) ‘What is an Author?’ in M. Foucault & D.F. Bouchard (eds), Language, Counter-Memory, Practice. Ithaca: Cornell University Press: 124-127.

Gates, W. H. III (1976) ‘An Open Letter to Hobbyists’, The Blinkenlights Archaeological Institute. (accessed January 2003).

Gelernter, D. (1998) The Aesthetics of Computing. London: Weidenfeld and Nicolson.

Gibson, J.J. (1977) ‘The Theory of Affordances’, in R.E. Shaw & J. Bransford (eds), Perceiving, Acting and Knowing. Hillsdale, NJ: Erlbaum Associates.

Green, J. (1997) The New Age of Communications. St Leonards: Allen and Unwin.

Guattari, F. (1995A) Chaosmosis. An Ethico-Political Paradigm. Sydney: Power Publications.

GNU Project web server (2000): ‘The Free Software Definition’.

Hayles, N. K. (1999) How We Became Posthuman. Virtual Bodies in Cybernetics, Literature and Informatics. Chicago and London: University of Chicago Press.

Iser, W. (1974) The Implied Reader: Patterns of Communication in Prose Fiction From Bunyan to Beckett. Baltimore: Johns Hopkins University Press.

Knapp, P. (1997) Virtual Grammar: Writing as Affect/Effect. (Doctoral Thesis) Sydney: University of Technology.

Kroker, A. & Weinstein, M. A. (1994) Data Trash: The Theory of the Virtual Class. New York: St Martin’s Press.

Latour, B. (1991) ‘Technology is Society Made Durable’, in J. Law (1991) A Sociology of Monsters. Essays on Power, Technology and Domination. London: Routledge: 103-131.

Latour, B. (1999) Pandora’s Hope. Essays on the Reality of Science Studies. Cambridge: Harvard University Press.

Laurel, B. (ed.) (1990) The Art of Human-Computer Interface Design. Reading, MA: Addison-Wesley Publishing.

Law, J. (1991) A Sociology of Monsters. Essays on Power, Technology and Domination. London: Routledge.

Licklider, J. C. R. (1960) ‘Man-Computer Symbiosis’, IRE Transactions On Human Factors in Electronics HFE-1 (March): 4-11.

Locke, J. (1988) Two Treatises of Government. Cambridge: Cambridge University Press.

Mackenzie, A. (2002) Transductions: Bodies and Machines at Speed. London and New York: Continuum.

Mackenzie, A., Sutton, D. & Patton, P. (1996) ‘Phantoms of Individuality: Technology and Our Right to Privacy’, Polemic, Vol. 7 No. 1: 20-25.

Microsoft Inc. (1999) ‘Where Do You Want To Go Today’ (Television advertisement).

Mitchell, W.J. (1995) City of Bits: Space, Place and Infobahn. Cambridge, Mass.: MIT Press.

Nash, A. and Ranade, J. (1994) The Best of Byte. New York: McGraw Hill.

Nelson, T. H. (1974) Computer Lib: You Can and Must Understand Computers Now/Dream Machines. Chicago: Hugo’s Book Service.

Norman, D. A. (1990) The Design of Everyday Things. London: MIT Press.

Papert, S. (1980) Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books.

Raskin, J. (2000) The Humane Interface: New Directions for Designing Interactive Systems. Boston: Addison Wesley.

Richards, I. A. (1960) Principles of Literary Criticism. London: Routledge & Kegan Paul Ltd.

Schivelbusch, W. (1979) The Railway Journey. New York: Urizen Books.

Shneiderman, B. (1986) Designing the User Interface: Strategies for Effective Human-Computer Interaction. Reading: Addison-Wesley.

Stallman, R. (1999) ‘Copyleft: Pragmatic Idealism’, Free Software Foundation. (accessed January 2000).

Sterling, B. (1992) The Hacker Crackdown. New York: Bantam Books.

Stone, A. R. (1995) The War of Desire and Technology at the Close of the Mechanical Age. Cambridge, MA: MIT Press.

Volosinov, V.N. (1985) ‘Verbal Interaction’ in Innis, R. E. (ed.), Semiotics. An Introductory Anthology. Bloomington: Indiana University Press: 47-65.

Walter, R. (1993) The Secret Guide to Computers. Boston: Russ Walter.

Weber, M. (1921), ‘Politics as a Vocation’ in C W. Mills & H.H. Gerth (eds), From Max Weber. New York: Oxford University Press, 1946: 77-128.

Williams, R. (1975) Television: Technology and Cultural Form. London: Fontana, 1990.

Woolgar, S. (1991) ‘Configuring the User: The Case of Usability Trials’ in J. Law (ed.), A Sociology of Monsters. Essays on Power, Technology and Domination. London: Routledge: 57-99.

Zizek, S. (1998) ‘Cyberspace, or How to Traverse the Fantasy in the Age of the Retreat of the Big Other’, Public Culture, Vol. 10 No. 3: 483-513.
