Sometime in the late ’60s my father took me to see the film 2001: A Space Odyssey. I think he was bored, whereas, even at seven or eight years old, I was fascinated. I was particularly taken with the figure of HAL 9000, the intelligent computer on board the spaceship Discovery. HAL, who is by far the most sympathetic and, dare I say, human character in the film, is mostly represented through his single red ‘eye’ and his soft machine voice. As Michel Chion puts it in his study of 2001, ‘he has “an eye” or several “an eyes”’ (Chion, 2001: 83). This eye becomes the principal means by which the human actors, the camera and, by extension, the audience engage with HAL. It is also the means by which his subjectivity, such as it is, is represented in the film. At several points we see things from his point of view, as through a fish-eye lens. HAL’s eyes are distributed throughout the ship, which grants him/it a panoptic and even god-like perspective on events on board. He/it is also possessed of great visual sensitivity, able to lip-read the conversation between Bowman and Poole in the pod, where they have retreated to discuss HAL’s increasing eccentricity out of earshot. That I should have first seen this film with my father seems, more than thirty years later, oddly apt. He too had an eye. Not just any eye, but ‘a better eye . . . than anyone else alive today’ (White, 1996: 373). This last comment was made, in conversation with a friend of my father’s, by Kenneth Clark. My father was Keeper of the Department of Prints and Drawings at the British Museum, of which Clark was a trustee. The comment was presumably inspired by conversations between the two about the works of art on the walls of Clark’s apartment.
Taken at face value, Clark’s comment perfectly encapsulates a certain kind of art history and the assumptions of elitism and superiority it embodied. In case this sounds overly critical, these are qualities both Lord Clark and my father would have fully endorsed. By ‘anyone else alive today’ Clark of course did not mean literally the full complement of living humanity, but rather the few score connoisseurs of art, among whom he counted both himself and, on the evidence above, my father. By ‘a better eye’ he meant a capacity to look carefully at works of art, drawings especially, and in particular to be able to make attributions on the grounds of style. To be a connoisseur was to be part of a tradition established ‘without the aid of photography, teams of graduate students, Witt libraries and other modern amenities now taken for granted’, one which involved the ‘scientific and methodical study of an artist’s drawings considered as essential elements in the construction and assessment of his work as a whole’ (Gere, 1985). The last two quotations come from my father’s introduction to the catalogue of an exhibition at the Fitzwilliam devoted to the connoisseurial achievement of his mentor and colleague Philip Pouncey, but they can also stand as a self-portrait of a self-confessed connoisseur and even a manifesto of connoisseurship. To possess an eye, as my father reputedly did, required, according to his own prescription, ‘a particular combination of qualities of mind, some more scientific than artistic and others more artistic than scientific: a visual memory for compositions and details of compositions, exhaustive knowledge of the school or period in question, awareness of all the possible answers, a sense of artistic quality, a capacity for assessing evidence, and a power of empathy with the creative processes of each individual artist and a positive conception of him as an individual artistic personality’ (Gere, 1985).
My father is now dead. His body, his mind, and the accumulated knowledge and intellectual capacities they contained no longer exist, though in a sense his body of work, his corpus, continues to endure in the form of books and catalogue essays. But what if it were possible to embody such capabilities in a machine such as HAL? Then, in theory at least, my father’s frail mortal capacities could endure, preserved beyond the degradations and finitude wrought by entropy. The juxtaposition of HAL and my father allows me to indulge in a little thought experiment and ask whether the history of art can go on without a body. HAL and my father are connected by their possession of an ‘eye’. Both are ‘limit cases’ of more general phenomena. My father (or perhaps the ideal of connoisseurship he represented) is the extreme example of art history as a humanist discipline. HAL is the limit case for expectations of Artificial Intelligence. Their juxtaposition is a way of thinking, through extremes, about what the future of Art History as a certain kind of human activity might be in an age when the evolution of technology brings into question the very existence of the ‘human’. Or to put it another way: is it possible to imagine HAL being programmed to be a connoisseur? And if so, what would that mean for such an apparently human activity? In 2001 HAL shows little evidence of the capacity to be a connoisseur in the terms defined above. This is unsurprising, given that he is programmed to run a spaceship rather than, for example, a prints and drawings department. He is, however, capable of rudimentary art appreciation. Early in the sequences set on the spaceship Discovery, astronaut Dave Bowman is shown sketching the ship’s interior and the pods containing the hibernating scientists. HAL asks to see the sketches, which Bowman shows it by holding the pad up to its eye. HAL remarks: ‘That’s a very nice rendering, Dave. I think you’ve improved a great deal. Can you hold it a bit closer? That’s Doctor Hunter, isn’t it?’ While this remark does not suggest a capacity to distinguish, let us say, between the drawings of Federico and Taddeo Zuccaro (my father’s specialism), it does show that HAL is capable of understanding a drawing and even of appreciating it aesthetically.
This question was provoked in part by Jean-François Lyotard’s brilliant and provocative essay ‘Can Thought Go On Without a Body?’, in which he frames a discussion of the nature of thinking through a typically Lyotardian conceit: that of how thought might continue in light of the imminent explosion of the Sun (imminent, at least, within the next four and a half billion years), and the consequent cessation of the Earth’s existence and the death of all that is earthbound (Lyotard, 1991: 8 – 23). Constructed as a dialogue between a male and a female voice, the essay debates the consequences of such an event for thought. With such an end, thinking of any sort will utterly cease, through the abolition of the very horizon of thinking. This is, according to the first speaker, the man, radically different from any normal conception of death, which incorporates the idea of survivors and thus of death in human terms. The death of the Sun, by contrast, will destroy all matter, and thus all witnesses. He continues that the Earth, which is the precondition of human existence and thought, is anyway far less stable than it might appear. It is continually subject to material change, of which its destruction in its present form is only one example. The Earth in its current stable form is only a few billion years old, which is nothing in cosmic terms. It is merely a temporary stabilisation of energy in a remote corner of the Universe. To imagine that this apparently stable situation is actually so, or that our relationship with the Earth is equally stable, is illusory. Solar Death renders pointless all human attempts to come to terms with death as we understand it, and makes any familiar idea of disaster a pale imitation. Solar Death is inevitable. So you can either ignore it, remaining within the way of thinking that connects thought with the Earth and nature and staying only vaguely aware of future disaster, or you can decide to deal with it, accepting and exploiting the transformation of matter and working out how human thought can survive the annihilation of the Earth. This work is already under way in a number of different fields, including dietetics, neurophysiology, genetics, tissue synthesis, particle physics, astrophysics, electronics, information science and nuclear physics.
The male voice points out that technology is what invents us, rather than the other way round, and that, as anthropologists and scientists have shown, all organisms are technical devices inasmuch as they filter the information necessary for their survival and are capable of remembering and processing that information and making decisions based on it, including modifying the environment in order to perpetuate their survival. Human beings are exactly the same, except that they have a more complex and more differentiated regulation system based on codes and rules, and this system, relying on arbitrary codes, is less dependent on the environmental context and therefore more capable of reflexive responses to its environment and to itself. Inasmuch as humans need to live on the Earth, the responses of this system to the environment are geared towards that end. The body is the hardware to thought’s software. Without the body functioning properly there can be no thought. All that philosophy concerns itself with in terms of thought is just an advanced stage of the process of regulation with the environment, a more evolved version of the ‘memories’ by which organisms regulate their relationships with their surroundings. The human mind is highly sophisticated, but it is dependent on the hardware, the body, which will disappear in the event of Solar Death. Thus the problem for techno-science is how to develop the hardware to allow the software to survive beyond the Earth, or in other words how to make thought without a body possible. Only by being able to imagine the continuation of thought without a body, as we understand it in terms of the complex human organism, can we think about Solar Death. Thus we need to build hardware capable of nurturing the software that is our thoughts. This needs to be some kind of nutrient or support that can survive beyond the Earth, using cosmic sources of energy. Such a thing is conceivable in principle, though the technology for storing the capacity to think outside organic bodies is clearly much less advanced.
What Lyotard is talking about is, of course, Artificial Intelligence (AI), the name for research into programming machines to be intelligent and even conscious. HAL is an optimistic but serious extrapolation of the future progress of AI, as seen from the late sixties. Kubrick consulted AI experts such as Marvin Minsky to make HAL as scientifically plausible as possible. Now that the year 2001 has come and gone, it is obvious that AI has failed to fulfil most of its original claims for future developments. But despite its failure to progress as expected in the early days, AI remains a highly funded endeavour to which many academics, scientists and engineers are dedicated. Against all the odds, expectations remain high and many apparently hyperbolic claims are made. Some of the most interesting, philosophically plausible and successful work is being done in robotics, in which AI is combined with material engineering to produce artificially intelligent creatures capable of engaging with their environment. Among the leading exponents of AI robotics are Hans Moravec, Ray Kurzweil and Rodney Brooks, who have all published works of popular science about their ideas and inventions. Each has made extraordinary claims both for what computers and robots have achieved and for what they will be capable of in the future. Brooks and his colleagues, for example, have made some remarkable advances in machine vision, partly by eschewing the conventional model that treats human vision as something like video input. Instead they have modelled their robots’ capacity to see on the complexities of human vision, such as the blind spot and the fovea, supplemented by hearing and motion sensors that enable movements to be compensated for visually. Using such techniques they have produced surprisingly encouraging results with robots such as Kismet, which can respond to faces, movement and speech with humanlike gestures. But despite such advances machine vision is, by Brooks’s own admission, limited, and cannot begin to do the things humans do effortlessly, such as recognise individual objects (Brooks, 2002: 74 – 97).
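One principle behind such biologically inspired vision can be made concrete. The following toy sketch in Python illustrates foveated sampling, in which resolution is high at a point of fixation and falls away towards the periphery, as it does across the human retina. It is a minimal illustration of the general idea only, not the architecture of Brooks’s robots; the function name and the three-band scheme are my own assumptions.

```python
import numpy as np

def foveate(image, cx, cy, levels=3):
    """Toy foveated sampling: full resolution near the fixation point
    (cx, cy), progressively coarser blocks further out, loosely
    analogous to the fovea/periphery structure of the human retina.
    (Illustrative only; not any robot's actual vision system.)"""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - cy, xs - cx)   # distance from fixation point
    out = np.empty_like(image)
    max_dist = dist.max()
    for level in range(levels):
        # Annular band of pixels belonging to this resolution level.
        inner = max_dist * level / levels
        outer = max_dist * (level + 1) / levels
        mask = (dist >= inner) & (dist <= outer)
        block = 2 ** level  # coarser sampling blocks away from the fovea
        # Each pixel in the band takes the value of its block's
        # top-left pixel, simulating reduced peripheral acuity.
        out[mask] = image[(ys[mask] // block) * block,
                          (xs[mask] // block) * block]
    return out

# Example: fixate on the centre of a random 'camera frame'.
frame = np.random.rand(64, 64)
coarse = foveate(frame, cx=32, cy=32)
```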
Thus it seems we are a long way from producing a machine capable of HAL’s simple art appreciation, let alone the complex visual and intellectual operations needed to distinguish the works of different old masters. This is not to say that such an achievement is impossible. But, of course, as the quotations above about connoisseurship may suggest, having an ‘eye’ means more than simply possessing the means to distinguish phenomena visually. It is a question, rather, of a capacity for visual discrimination and judgement working in conjunction with a considerable amount of knowledge, often acquired through experience and the transmission of tacit practice by another practitioner. One of the most ambitious Artificial Intelligence projects currently being pursued is intended to invest a machine with such tacit knowledge. Douglas Lenat’s Cyc project is a twenty-year, $25 million endeavour that aims to compile a database of ‘common-sense knowledge’, or as Cycorp’s website puts it, ‘an immense multi-contextual knowledge base and an efficient inference engine. The knowledge base is built upon a core of over 1,000,000 hand-entered assertions (or “rules”) designed to capture a large portion of what we normally consider consensus knowledge about the world. For example, Cyc knows that trees are usually outdoors, that once people die they stop buying things, and that glasses of liquid should be carried rightside-up.’
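The pairing of a knowledge base of hand-entered assertions with an inference engine can be illustrated with a toy example. The sketch below, in Python, hand-codes two of the website’s examples as facts and rules and derives new assertions by naive forward chaining. It is a minimal sketch of the general idea, not Cyc’s actual CycL language or inference machinery, and all the names in it are invented for the purpose.

```python
# Toy knowledge base and inference engine, loosely in the spirit of
# Cyc's hand-entered assertions. Facts and rules are tuples of
# strings; tokens beginning with '?' are variables.

facts = {
    ("isa", "oak_37", "Tree"),
    ("isa", "socrates", "Person"),
    ("died", "socrates"),
}

# Each rule: (premises, conclusion). If every premise matches the
# knowledge base under some binding, the bound conclusion is asserted.
rules = [
    # 'Trees are usually outdoors.'
    ([("isa", "?x", "Tree")], ("usually_located", "?x", "Outdoors")),
    # 'Once people die they stop buying things.'
    ([("isa", "?x", "Person"), ("died", "?x")],
     ("not", "buys_things", "?x")),
]

def match(pattern, fact, binding):
    """Unify one premise pattern with one fact, extending the current
    variable binding, or return None on failure."""
    if len(pattern) != len(fact):
        return None
    binding = dict(binding)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if binding.get(p, f) != f:
                return None
            binding[p] = f
        elif p != f:
            return None
    return binding

def match_all(premises, facts, binding=None):
    """Yield every binding that satisfies all the premises at once."""
    binding = binding or {}
    if not premises:
        yield binding
        return
    for fact in facts:
        b = match(premises[0], fact, binding)
        if b is not None:
            yield from match_all(premises[1:], facts, b)

def forward_chain(facts, rules):
    """Naive forward chaining: apply rules until nothing new appears."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            derived = [tuple(b.get(t, t) for t in conclusion)
                       for b in match_all(premises, facts)]
            for new in derived:
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

kb = forward_chain(set(facts), rules)
assert ("usually_located", "oak_37", "Outdoors") in kb
assert ("not", "buys_things", "socrates") in kb
```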
Whether Cyc has the capacity to be programmed with the numerous subtle rules and data that comprise a connoisseur’s understanding, and whether such an inference engine can be usefully connected to a robotic eye, are questions that may never be answered. Apart from anything else it is unlikely that, in the near future at least, art history will be seen as a domain of knowledge important enough to be so treated. Possibly, when the Sun’s death is imminent and the ships have been built that will take the remnants of humanity to the stars, such knowledge will be deemed useful. But at the moment such questions seem a little premature. Artificial Intelligence, despite the claims of Moravec, Kurzweil, Brooks et al., is a long way from achieving its aim of making machines intelligent or conscious. It is very possible that AI will never succeed in such an aim and that the very premises upon which it is based are fallacious. AI has been the subject of strong and convincing criticism by philosophers such as Hubert Dreyfus, John Searle and Hilary Putnam, and by Lyotard himself. And anyway it seems unlikely that the human race itself will survive for the next four and a half billion years (sometimes it seems amazing that it survives from one week to the next). If it does, then it is unlikely that human thought will in any way resemble its current manifestations, let alone be conducted through contemporary disciplinary paradigms. Even less likely is that the discipline known as the history of art will still exist in any form we would recognise. But Lyotard’s conceit is a good way of thinking through the more immediate effects of recent technological developments on such a discipline. There can be no question that the development of Artificial Intelligence, virtual reality, electronic networking and even email is having profound effects on a discipline that already has a complex relationship to the means of production, reproduction and distribution. The way that such developments affect art history is of direct professional concern to me. I am Course Director of the MA Digital Art History at the School of History of Art, Film and Visual Media at Birkbeck College in the University of London. This is a course intended to introduce art historians, and those working more generally within visual culture, to the possibilities and issues of digital technology in relation to their practice. Necessarily it engages with precisely the questions that have been set out as the basis for this issue of Culture Machine. Given the apparently banal uses to which this technology is mostly applied and the insidious manner in which it becomes ever more ubiquitous in our lives, the degree to which it is affecting disciplines such as art history is hard to determine. It is perhaps through a question as extreme as that posed by Lyotard that we can grasp some of the more mundane issues involved.
In this context it is possible to understand Artificial Intelligence not as a plausible scientific endeavour, but as a way of coming to terms with our rapidly changing relationship with technology. ‘In this age of contemporary technics’, writes Bernard Stiegler in Technics and Time, ‘it might be thought that technological power risks sweeping the human away’. He continues:
Work, family, and traditional forms of communities would be swept away by the deterritorialisation (that is, by destruction) of ethnic groups, and also of knowledge, nature and politics (not only of the delegation of decision making but by the ‘marketization’ of democracy), the economy (by the electronization of the financial activity that completely dominates it), the alteration of space and time (not only inter-individual spaces and times, by the globalization of interactions through the deployment of telecommunication networks, the instantaneity of the processes, the ‘real time’ and the ‘live’, but also the space and time of the ‘body proper’ itself, by tele-aesthesia or ‘telepresence’ . . . (Stiegler, 1998: 88)
Stiegler follows Maurice Blanchot in seeing the world made possible by contemporary technology as ushering in a new era. According to Blanchot, modern technics, ‘the collective organization on a planetary scale for the calculated establishment of plans, mechanization and automation, and finally atomic energy’ (quoted in Stiegler, 1998: 89), allows mankind to achieve what, hitherto, only stars could accomplish. Thus the human itself has become a star. This has a dramatic effect on the human relation to temporality ‘which was once conceived as that of a sublunary world whose bearings were constituted from the standpoint of the stars’ (Stiegler, 1998: 89). The new astral era ‘no longer belongs to the measures of history’, which ‘belonged to the divide separating the human world from the stars. Humanity (the human world) was history . . . ‘ (Stiegler, 1998: 89).
The end of history is the end of the contingent world of sublunary humanity and its supersession by astral man, for whom technics renders the world entirely amenable to planning and control. But paradoxically it is also the point at which technics sweeps away the human. Both Artificial Intelligence research and proposals for interstellar travel to escape the Earth are attempts to mediate and represent this supersession and to preserve some sense of the human in this situation. But in truth human intelligence is already fully artificial, as it is increasingly bound up with its prosthetic external storage devices, its networks, databases and expert systems (and perhaps always was). Ironically, AI, far from being the means by which machines will supersede humans, is in fact the last redoubt of humanness, in which ‘thought’ is rendered at something like human speed and routed through systems of symbolic representation. While Moravec, Kurzweil and others are making their usual apocalyptic predictions, in which machines will become more intelligent than humans by 2020, 2030 or whenever is just far enough away to be plausible, machines have long since bypassed the problems of intelligence and consciousness. Thought is merely an epiphenomenon of our technically mediated relationship with the world, an increasingly unnecessary loop in the systems of data storage, manipulation and exchange that characterise the astral era. The human and all that it is defined by, history, art (and of course the history of art), is a momentary episode before real-time processing renders the human superfluous. As Bernhard Siegert puts it:
. . . [T]he impossibility of technologically processing data in real time is the possibility of art . . . As long as processing in real time was not available, data always had to be stored intermediately somewhere – on skin, wax, clay, stone, papyrus, linen, paper, wood, or on the cerebral cortex – in order to be transmitted or otherwise processed. It was precisely in this way that data became something palpable for human beings, that it opened up the field of art. Conversely it is nonsensical to speak of the availability of real-time processing . . . insofar as the concept of availability implies the human being as subject. After all, real-time processing is the exact opposite of being available. It is not available to the feedback loops of the human senses, but instead to the standards of signal processors, since real-time processing is defined precisely as the evasion of the senses. (Siegert, 1999: 12)
References
Brooks, R. A. (2002) Robot: The Future of Flesh and Machines. London: Penguin Books.
Chion, M. (2001) Kubrick’s Cinema Odyssey. Trans. C. Gorbman. London: British Film Institute.
Gere, J. (1985) ‘Introduction’, in J. Stock & D. Scrase, The Achievement of a Connoisseur: Philip Pouncey. Cambridge: Fitzwilliam Museum.
Lyotard, J.-F. (1991) The Inhuman: Reflections on Time. Trans. G. Bennington & R. Bowlby. Cambridge: Polity Press.
Stiegler, B. (1998) Technics and Time: The Fault of Epimetheus. Trans. R. Beardsworth & G. Collins. Stanford: Stanford University Press.
Siegert, B. (1999) Relays: Literature as an Epoch of the Postal System. Trans. K. Repp. Stanford: Stanford University Press.
White, C. (1996) ‘John Arthur Giles Gere 1921–1995’, in Proceedings of the British Academy, 90: 1995 Lectures and Memoirs. London: The British Academy.