What is Virtual Light? – Cathryn Vasseleu

To Western eyes, the stars receded after the discovery that light’s transmission occurred at a finite speed — and not, as had been assumed, instantaneously. Roemer’s astronomical finding of 1676 profoundly altered the viewer’s relation to the cosmos. What light-speed inadvertently revealed was that the heavens were not in such immediate proximity as they appeared to human sight. As Hans Blumenberg articulates in The Genesis of the Copernican World, the dramatic spatiotemporal consequences and phenomenal aberrations produced by the discovery that light possessed its own velocity were vital factors in our understanding of the condition of modernity (1987: 91-119). Today light’s propagation has been profoundly altered by its mobilisation within an electronic communicational network, which has consequently produced its own virtual order of spatiotemporal and perceptual aberrations. Might it be the case that these too can be understood in terms of a broader refiguration of light’s ontological status?

Some commentators have re-addressed the speed of light as a key to understanding current modes of technologically mediated experience. As I will discuss, Jean Baudrillard, Martin Jay and Paul Virilio are three such commentators whose varying conclusions regarding the changed conditions of experience wrought by electronic media are underpinned by their individual interpretations of the role played by light-speed. I am proposing that light’s mobilisation within an electronic communicational network has given us cause to question the way we view the presence of light. In addressing this question, I will not be singling out the speed of light as a significant factor in itself. I will be arguing that light’s ontological status is altered by another factor: namely, the replacement of light as an organising principle by the language of digital script in digital image-synthesis.

The Speed of Light as a World-Defining Factor

A concern for the signifying function of the speed of light in contemporary consciousness lies at the heart of Jean Baudrillard’s intentionally unbearable pronouncement that ‘we too shall see the stars fading away’ (as did the computer technicians of Arthur C. Clarke’s allegorical tale) once the virtualization of the world is fully realised (1997: 23, 27). This will have been achieved, Baudrillard envisions, when all our acts, historical events, material substance and energy have been transformed into pure information. What is at stake in all this, as Baudrillard sees it, is ‘the place of reality, the question of its degree’ (2000: 4). Baudrillard does not argue that photographic and digital technologies and media have caused the real to disappear. If anything, he argues, these and the flourishing of all our technologies can be regarded as offspring that have arisen from the progressive extinction of an unsignifiable reality.

Baudrillard reads the significance of the speed of light in this context: ‘[t]he speed of light protects the reality of things by guaranteeing that the images we have of them are contemporaneous’ (1988: 193). Any appreciable change in light’s conveyance of meaning in this way undermines the plausibility of a causal universe, as for instance occurs with the light of celestial bodies, which we now perceive as a pale glimmer lagging behind and disconnected from its source. From this apparent ‘slowing down’ of light, Baudrillard argues that a catastrophe involving the involution of time into the pure event follows, leaving us with nothing but unintelligible events devoid of precedent or consequences: ‘the slower light becomes, the less it escapes its source; thus things and events tend not to release their meaning…’ (1988: 193). Given that astrophysicists are currently investigating the possibility that light’s constant speed is measurably slowing down,1 Baudrillard’s assertion almost has the ring of prophecy rather than suggesting a faulty grasp of one of the basic laws of physics. However, for Baudrillard, the significance of the speed of light, as far as its conveyance of meaning is concerned, lies in its provision of a guaranteed causal connection between the image and its emanation from an originating material source. Whatever light’s speed is in terms of measurement, what matters in this respect is that its speed is absolute: ‘imagine a universe…where bodies would travel at phenomenal speeds, but light would travel very slowly. It would be total chaos, no longer regulated by the simultaneity of optical messages’ (1988: 194). The circumstances in which we witness the slowing down of light as Baudrillard understands it are those of electronic media. Here, information in the form of electromagnetic radiation, circulating everywhere at the speed of light, robs light-speed of its power as an absolute measure of everything else, reducing it to a ‘human’ speed, relative to and capable even of lagging (in significance as its source becomes more obscure) behind our own accelerated movements.

Martin Jay’s reading of the cultural consequences of the discovery that light possessed its own velocity contests Baudrillard’s prognostication of an entirely self-sufficient simulacral reality made up of instantaneous events. Rather than arguing that the speed of light is an absolute that guarantees our sense of the reality of things, Jay argues that the delay in the light of the stars, now viewed in hindsight as the ancient remnants of light’s velocity, raised ontological and epistemological questions. These questions were provoked by the delayed starlight’s revelation of the ‘virtualization of reality’ (1995: 158). The realisation that light travelled at a finite speed put paid to the camera obscura model of synchronous, atemporal presence and the privilege accorded to sight’s veracity in ‘classical’ models of vision. Nevertheless, for Jay, it does not follow that virtual reality and telerobotic technologies in effect represent the disappearance of the indexical traces of an original referent altogether. Despite their further disruption of the truthfulness of vision, these technologies do not effect the disappearance of all links between viewer and source. Instead they introduce ever more delays and systems of mediation: ‘the speed of electrical signals, 60 to 90 percent that of the speed of light, combined with delays introduced by digitizing and relay circuits, introduces perceptible time delays’ (1995: 158). If anything, virtual reality and telerobotic technologies underline that in actuality, presence is disjunctive and apparitional.

Jay points out that by comparing the world’s virtualization with disappearing starlight, Baudrillard undoes his own equation of virtual reality with a non-referential system of signs. Baudrillard can only envision the experience of modern stargazing as an example par excellence of a catastrophic virtuality while he still clings to the ineradicable traces of a displaced existential source. As Jay observes: ‘Baudrillard alerts us to the attenuated indexical trace of an objective real that haunts the apparently self-referential world of pure simulacra’ (1995: 161). Jay thus finds a toehold against Baudrillard’s controversial scenario of a universe of indeterminate, uncontrollable and interminable phenomena (Baudrillard & Sutton, 1997). He finds this point of resistance in the attenuated light from the distant stars, whose filtered appearance through multiple sign systems is nevertheless rooted in the existential presence of its source. Rather than devolving into a self-referential world of purely simulacral construction, Jay argues that ‘an indexical trace survives in both virtual reality and telerobotic technologies and that each resists complete virtualization’ (Jay, 1995: 158).

Jay’s reading of the autonomy of starlight is precisely the kind of intervention that Baudrillard’s dire prediction of an autonomous virtuality calls out for. Countering Baudrillard, Jay proposes that within the traces of the light of the stars we are reminded that there is something questionable and productive of misgivings in the demand for ultimate authenticity in all experience, or the belief (once held by stargazers) in an unmediated relation to an original nature. Jay’s reading therefore re-asserts, rather than problematises, a concept of virtuality rooted in the existential presence of a displaced material source. However, the ‘virtualization of reality’, which Jay demonstrates has already been revealed as an apparitional presence in the modern experience of stargazing, must be differentiated from another, cybernetically implemented concept of virtuality. Rather than recalling a ghostly presence that rematerializes in its traces — thus assigning virtuality an ongoing (indexical) representative value — this cybernetically implemented virtuality is conceived of as the experience of agency and telerobotically-mediated proximity in relation to computer-generated simulacral objects and events. This is a virtuality which operates directly on the time and space of events and images, producing them in a way that renders their connection to an implacable actual reality indeterminable.

Here the issue concerning the limits of virtualization is not primarily related to the speed of light. The issue becomes the technological implementation of light as a force of indetermination rather than the guarantor of an implacable actual reality and veridical perception. A shift in cultural perception away from the latter was seen, for example, in the ‘dis-indexing’ effects of photography. Rosalind Krauss analyses how photography changed the status of the cast shadow, giving the shadows made by three-dimensional objects an iconic autonomy, when previously such phenomena were an index of an actual spatiotemporal event (implying causality, and inviting the inference that the referent physically existed) (Krauss, 1985).2 Today, the use of computer programs as synthetic light-sources allows computer-graphics artists to play fast and loose with the semiological properties of light, producing shadows and reflections and luminescences in the form of algorithmically formulated, already-worked-out phenomena. The incorporation of computer-generated light within digital image-making processes, involving the development of computer models (algorithms) that calculate the actions of optical phenomena, can give simulated 3D images a realistic appearance that approximates an indexical status. Computer models can replicate the physics of light and render it pictorially using perceptual models, such that the observer believes that the simulation is an image of a physically real object or scene.
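To make the point concrete, here is a minimal sketch, in Python, of what it means for shadow and luminance to be ‘already-worked-out’ phenomena. The point light, Lambertian surface and blocking sphere are invented values standing in for no particular rendering package; the shadow here is a computed decision rather than a trace left by any radiant source.

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def norm(v):
    length = math.sqrt(dot(v, v))
    return (v[0] / length, v[1] / length, v[2] / length)

def sphere_blocks(p, light, centre, radius):
    """Shadow-ray test: does a sphere lie between the shaded point p and the light?"""
    d = norm(sub(light, p))                 # direction from p towards the light
    oc = sub(p, centre)
    b = 2.0 * dot(oc, d)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False                        # the shadow ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0        # nearest intersection along the ray
    distance_to_light = math.sqrt(dot(sub(light, p), sub(light, p)))
    return 0.001 < t < distance_to_light    # hit lies between the point and the light

def shade(p, n, light, occluder_centre, occluder_radius, albedo=0.8):
    """Lambertian brightness at point p with surface normal n; zero if p is shadowed."""
    if sphere_blocks(p, light, occluder_centre, occluder_radius):
        return 0.0                          # the shadow is worked out, not recorded
    l_dir = norm(sub(light, p))
    return albedo * max(0.0, dot(n, l_dir))

# A ground point lit from above: first unobstructed, then with a sphere hung over it.
print(shade((0, 0, 0), (0, 1, 0), (0, 5, 0), (10, 2, 0), 1.0))  # lit: 0.8
print(shade((0, 0, 0), (0, 1, 0), (0, 5, 0), (0, 2, 0), 1.0))   # shadowed: 0.0
```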

The artificiality of algorithmically-simulated illumination is not itself a significant factor in the semiology of light. A division of light into natural light (derived from the sun) and artificial light (powered by electricity or other sources of energy) occurred with the invention and industrialization of gas and electrically-generated lighting in the nineteenth century (Schivelbusch, 1988). The simulation of light by digital means is indicative of a more radical turn, not from the light of nature, but from the epistemological authority of the heavens of direct perception. Hans Blumenberg discusses how the spatiotemporal displacements introduced into the cosmos by the speed of light precipitated this turn, but he also discusses Roemer’s discovery in relation to the planetarium, a cultural invention which arose from the challenge that the heavens represented for humanity in Enlightenment thinking (Blumenberg, 1987: 116). To a stargazer schooled in the same pattern of thinking, the incomparability of that natural phenomenon to human thoughts and human deeds was deemed to completely overpower all human endeavour.

Blumenberg refers to Nietzsche’s infamous summation of this Enlightenment stance, which was that the heavens were unbearable and irrelevant in view of man’s own difficulties (Blumenberg, 1987: 109). Henceforth, according to Nietzsche, if sublimity was cultivated in the viewing of (an incomparable) nature, this practice existed for the express purpose of acting as a tangible reminder to Western humanity of the heavens’ complete indifference to it. Henceforth also, the advantages of theoretically describable heavens and their technical possession impress themselves on modern thought. The planetarium shows the heavens off to their best advantage — but thanks to acts of technical simulation, not the paling heavens themselves. Technical simulation ceases to take second place to nature’s reality in the planetarium, where it does not matter whether the points of light that move in an exactly describable manner in the heavens are movements that originate in the stars themselves, or whether the stars are seen to move only in the eye of the observer. Stars and planets that move almost imperceptibly (and may be too dim to see at all in the night sky) can be seen moving in the planetarium’s firmament quite clearly, in the space of seconds. The planetarium can employ a theoretical description of the heavens to add motions which, while not necessarily a true reflection of the heavens that are observable to the naked eye or even through a telescope, make visible things about them that could not otherwise be seen. This heavenly vault at humanity’s disposal is also ‘the mausoleum of the starry heavens as the ideal of pure intuition’ (Blumenberg, 1987: 116).

The planetarium dispenses with the notion of duplicating direct perception, in favour of the distinct advantages of being able to augment what can be seen. Abiding by this principle, simulation has been a driving force in the technical creation of dual worlds by doubling and creating a mobile perspective on the world that is constituted by direct sensory proximity. Today astronomers can choose the standpoint in the universe from which to compare Jim Blinn’s fly-by planet simulations with the images taken when NASA was able to send a camera to photograph the planets themselves (Seymour, 1999). Alternatively, we can visualise the heavens as they appear from earth in a whole new light. For instance, a team of computer-graphics researchers is developing more refined physically-based models of the night sky for realistic image synthesis (Jensen et al., 2001).3 The model designed by this team is based on astronomical data regarding both the position and radiometry of diverse heavenly bodies. It simulates the appearance and illumination of all significant sources of natural light in the night sky (apart from unpredictable phenomena), inviting study of the model itself as a source of inspiration, inference and knowledge about the night sky as a natural phenomenon. To add to its naturalistic appeal, great attention has been paid to computing radiometric values with the accuracy required for emulating the way the sky appears to human night (retinal rod) vision, lacking in colour apart from a bluish tinge. The model is based on scientific, instrumentally obtained data, but gives priority to achieving a ‘sense of night’ in the eyes of the observer. The rendering of light’s phenomenal complexity as the combined feat of human eyes, optical instruments and computation is reflected in the research project’s sources of funding, which include both the NSF Science and Technology Center for Computer Graphics and Scientific Visualization, and Pixar Animation Studios.

Blumenberg describes the inauguration of a system of self-darkening in the turn from the heavens and their incommensurable powers over human imagination, to the artificial extension of visualisation by means of technical acts of simulation. We have turned now to a medium that eclipses light, expressed both in the move towards virtual interaction or actual immersion within computer-generated images, and in a cultural investment in the power of the algorithm to generate experiences that would otherwise be unimaginable. Rather than a source of illumination and truth, light has become a precision instrument and energy source, a medium that penetrates matter, is contained in glass, heats water, conveys messages and teleports us.

The strongest assertion of humanity’s descent within systems of self-darkening can be found in Paul Virilio’s analysis of the post-Einsteinian replacement of light’s role of direct illumination with the speed of light as a cosmological constant. Virilio’s analysis represents the apotheosis of light as the medium of human sight and insight, and its replacement by the speed of light as a world-defining element: ‘Since the turn of the century, the absolute limit of the speed of light has lit up, so to speak, both space and time. So it is not so much light that illuminates things (the object, the subject, the path); it is the constant nature of light’s limit speed that conditions the perception of duration and of the world’s expanse as phenomena’ (Virilio, 1997: 13). Virilio refers to the limit-speed of light as a critical transition that occurs, as he says, ‘from the moment that we step beyond the transport age into the organization and electromagnetic conditioning of the territory’ (1997: 12). This critical transition, where familiar territory gives way to what Virilio calls the interval of the light-kind, or interface, comes about because speed is not a phenomenon but a relationship between phenomena.

Rather than treating the speed of light as an absolute that, in Baudrillard’s estimation, once guaranteed the sovereignty of the real, Virilio argues that by adopting the speed of light as a cosmic illuminant we have become imprisoned in speed. Einstein’s physics makes space and time relative to light-speed, rendering the once expansive spatiotemporal dimensions of the outer world in finite terms. The installation of light-speed as the definitive measure by which time and space are now characterized has, in the wake of tele-phony and tele-vision technologies, enabled real-time tele-action. Tele-action can take place instantaneously, as vision and action become simultaneous, regardless of geographical distance (Virilio & Brügger, 2001: 82-84).

In the wake of the heavens’ illumination by speed, Virilio asserts that cosmic ‘space’ has become a night of spaceless light years, illuminated in what he refers to as time-light. Originally associated with the development of photosensitive media, time-light is not time that passes, but a time that can be exposed by light’s speed. Time-light is an extra-terrestrial ‘cosmic space’ that alters our understanding of both duration and geometrical space (Virilio, 1997: 4). Virilio is perhaps best known for his razor-sharp criticism of the political and social inequalities as well as the perceptual aberrations and aleatory effects speed is producing on a global scale. Despite the power of Virilio’s analysis, however, there is room to question the idea of speed as an organising principle that opens our way into the entirety of the contemporary world and offers a key for reading it. A problem with this idea is that it attributes to speed the properties that transcendental philosophy has attributed to light.

In view of the issue I have raised above (i.e. Virilio’s depiction of speed as a key-light), I want to discuss just one aspect of Virilio’s complex analysis, and that is the distinction he makes between geometric and physical optics. Virilio defines geometric optics as the ‘passive optics’ of the space of matter — such as the behaviour of light within glass, water, or air. Physical optics are the ‘active optics’ of the time of the speed of light. Virilio also refers to this active optics, born of the convergence of optical and electronic technology, as optoelectronics or electro-optics. Optoelectronics is not concerned with properties of light rays that are directly related to the visualisation of phenomena. It is concerned with the generation, processing and detection of optical signals that represent electrical quantities (1997: 35, 151).

Optoelectronics appertains to the properties of light as an optical signal, that is, to the illuminating powers of videography, referred to by Virilio as ‘indirect light’ (2000: 1-16). Virilio argues that indirect light represents a new division of light, not into natural and artificial light sources, but direct (which includes both natural and artificial radiant light) and indirect light. He characterises geometric optics as small-scale optics because they are concerned only with a world of immediate proximity, perceivable by means of direct light. Optoelectronics, on the other hand, is large-scale optics insofar as it is concerned with the otherworldly time-feats of indirect light, including instantaneous transmission. The indirect light of electro-optics can illuminate distant occurrences in real-time by means of a camera and a monitor, transmitting signals at the speed of light. By means of this technology, visual appearances can themselves be instantaneously transmitted, allowing information objectives to be attained and action to be taken immediately, without regard for physical distances, and without being seen by human eyes directly.

Nostalgic for an imaginary immediacy of vision, Virilio describes the experience created by the realm of optoelectronics as a state of being humanly unable to compete with the witnessing powers of sightless vision machines. Photography, cinema and video are all named as culprits in the technical creation of a visually challenged humanity, and Virilio condemns digital technologies the most for laying the grounds for a future in which the visible will be doctored by wild accelerations of ordinary, everyday representations (1997: 91).

Of equal relevance to the question of light’s ontological refiguration, however, is Virilio’s characterisation of small-scale optics as an optics of dwindling importance, outdone as geometric perspective is replaced with an electronic perspective. The place of small-scale optics in digital image synthesis is the issue that I will now turn to.

While it is undeniable that the practice of large-scale optics is profoundly altering the way that our perception of space and time is constituted, the relationship between small-scale optics and large-scale optics is far less clear-cut. It is not the case that small-scale optics have dwindled in importance in relation to large-scale optics — they have flourished beside them. If anything, the algorithmic refinement of geometric optics, and the development of more sophisticated non-linear approaches to modelling the physics of light in relation to the space of matter, have become a growth area in a computer-graphics industry fiercely competing to develop software programs that enable more realistic image synthesis and reconstructions of 3D space. These developments are occurring hand in hand with the conquest of physical optics, suggesting that rather than being rendered obsolete, the principles of small-scale optics are performing a new role that is both enabled by and vital to the earthly success of optoelectronics.

While information in digital electronic form enables algorithms, vector graphics and animated wire diagrams to take over from the geometric representation of space along the old lines of quattrocento principles (and from the representation of vision along the old lines of converging light rays), geometrization as a practice was, as Virilio notes, concerned with creating a world of immediate proximity to the viewer. Viewers were corporeally implicated in the tensions and thresholds of this light-space — artificial, coercive and stripped of flesh as it may be. Descartes might have taken his inspiration from the perspectival techniques of the Renaissance to define space as a network of relations between objects in such a way that a viewer’s vision could be reconstructed by an outside onlooker, but as Merleau-Ponty (whom Virilio credits with formulating the prime tele-communicational image)4 argues, Descartes’ move was meant as a corrective measure. In Descartes’ view, Merleau-Ponty argues: ‘The thinking that belongs to vision functions according to a program and a law which it has not given itself. It does not possess its own premises; it is not a thought altogether present and actual; there is in its center a mystery of passivity’ (Merleau-Ponty, 1964: 175). In short, Descartes was keen to avoid getting the light of the mind tangled up (as he thought all artists other than copper engravers did) in the promiscuous and enigmatic workings of sensible vision.

Passed over by the electronic perspective of optoelectronics, small-scale optics has taken on a new brief: making an interval for the visible to stay in view. This interval is obliterated by the speed of the interface. Rather than an indirect light, the light of small-scale optics is a phantom light that recuperates something of an immediate visuality within the sightless realm of physical optics, where speed overrides and automatically re-defines (without regard for phenomenal vision) the interval of perception. A phantom light can be phantom in the sense of a phantom limb, or a light that is missing but still integral to a mutilated, disabled field of electronically-assisted vision. Alternatively, it can be phantom in the sense of a de facto light that, by enacting a conjuration of the visible, enables the extravagance of seeing with an intensity that could not otherwise occur.

The Computer Program As Light-Source

The use of computer programs to simulate light in digital image-making represents a departure, in terms of reception and techniques, from image-making that employs radiant forms of light (this includes film and video). In technical terms, the production of images by digital means does not involve the fixation of chemical or electromagnetic alterations wrought by the direct contact of light. Instead, digital image-making involves the use of programmed data to manipulate the electron beams emitted by a cathode-ray tube housed within a computer (Hilf, 1996). The information that is used to generate a digital image is not stored as patterns of light. It is stored as groups of numbers that symbolize the light at different points in the image. By the same process, simulated light-sources can be employed in innumerable ways — to create naturalistic light-effects in a scene, to process animation in games in real time, to invent new architectural forms, or to generate otherworldly artworks.
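A minimal sketch of this point, using an invented four-by-four grid of values: in a digital image nothing is stored as light; the ‘light’ at each point is just a number, and brightening the image is nothing but arithmetic performed on those stored numbers.

```python
# Each value symbolises the luminance at one point in the image (0 = black, 255 = white).
image = [
    [  0,  32,  64,  96],
    [ 32,  64,  96, 128],
    [ 64,  96, 128, 160],
    [ 96, 128, 160, 192],
]

def brighten(pixels, amount):
    """Simulate 'adding light' purely by rewriting the stored numbers."""
    return [[min(255, value + amount) for value in row] for row in pixels]

for row in brighten(image, 40):
    print(row)
```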

Film and video lighting are based on the response of a photosensitive surface to changes in the amount of light. Video responds in a linear relationship to the level of light exposure, and film responds in a logarithmic relationship at high and low levels of exposure. Because light in computer graphics is not recorded by the same means by which the contact of light with a photosensitive surface is recorded, the lighting of objects and scenes is not calculated in the same way. Computer graphics use algorithmic calculations of light to model its behaviour in a variety of defined circumstances. A common aim of algorithmic calculation in computer graphics is to achieve a realism in lighting that matches the way light responds in film formats. Another aim is to produce images that are visually and measurably indistinguishable from real-world images.5
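As a rough illustration of the contrast drawn above, the sketch below maps exposure to a recorded value once linearly (the video case) and once logarithmically (the film-like case). Both curves are generic stand-ins chosen for illustration, not the measured characteristic curve of any real recording medium.

```python
import math

def video_response(exposure, max_exposure=1.0):
    """Linear response: the recorded value is proportional to the incident light."""
    return min(1.0, exposure / max_exposure)

def film_like_response(exposure, base=0.01):
    """Logarithmic response: equal ratios of exposure give equal steps in the recorded value."""
    if exposure <= base:
        return 0.0
    return min(1.0, math.log10(exposure / base) / math.log10(1.0 / base))

for e in (0.01, 0.05, 0.1, 0.5, 1.0):
    print(f"exposure {e:5.2f}   video {video_response(e):.2f}   film-like {film_like_response(e):.2f}")
```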

Film and video record the variation in incident light in a scene, where light is provided by natural and artificial light sources. Computer-generated graphics do not necessarily involve the use of lenses, shutters, photosensitive recording media or incident light. In order to achieve any sense of illumination in the scene, they must rely on a variety of computer-modelling techniques (algorithms) for synthetically emulating the physical properties of light. In film and video images, the traces of these properties are seen in the artefacts of recording. In 3D computer graphics, a scene is built up piece by piece, and then integrated to form an image. Systems for the conceptualisation of lighting and shadow have been developed expressly for use in computer graphics. Their specificity rests in their application to the pixel grids of computer screens, and in their simplification of light physics to aspects that are of relevance to visual phenomena (Baxandall, 1995: 5). Systems are constantly being developed for specific aesthetic purposes, such as the logarithmic rendering of light to emulate the light fall-off for filmed objects, or physically-based modelling techniques for realistically rendering the complex appearance of material objects.6

Rather than simulating the way light is recorded in photosensitive media, computer graphics simulate the way light rays interact with shapes, material surfaces and with each other. Techniques used include shadow casting, ray tracing, and radiosity. Cost, computational difficulty, rendering time, and the kind of 3D movement effects desired determine their usage. Of these various techniques, radiosity renders light in a way that most closely equates with the visible world that is perceived around us and envelops us in the flesh. Radiosity behaves somewhat like artificial light, and somewhat like natural light. It uses key and fill lights, and bounce cards for places that are underlit. It also resolves the light coming from light sources, windows and opaque surfaces in a naturalistic way. The overall effect is the high quality rendering of shadow, colour and real-world lighting conditions by the realistic simulation of the overall light energy in a scene.
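The radiosity idea referred to above can be sketched in a few lines: each surface patch’s brightness is the light it emits plus a fraction of the light arriving from every other patch, and the whole system of exchanges is solved together. The three patches, reflectances and form factors below are toy values chosen for illustration, not geometry computed by any of the renderers discussed here.

```python
emission    = [1.0, 0.0, 0.0]   # only patch 0 (the light source) emits light of its own
reflectance = [0.0, 0.7, 0.5]   # fraction of incoming light each patch re-emits
# form_factor[i][j]: fraction of the light leaving patch j that arrives at patch i
form_factor = [
    [0.0, 0.2, 0.2],
    [0.4, 0.0, 0.3],
    [0.4, 0.3, 0.0],
]

def solve_radiosity(emission, reflectance, form_factor, iterations=50):
    """Iteratively solve B_i = E_i + rho_i * sum_j F_ij * B_j (a simple Jacobi iteration)."""
    n = len(emission)
    b = emission[:]             # start from pure emission
    for _ in range(iterations):
        b = [emission[i] + reflectance[i] * sum(form_factor[i][j] * b[j] for j in range(n))
             for i in range(n)]
    return b

# Patches 1 and 2 end up lit even though neither emits anything: the light has bounced.
print(solve_radiosity(emission, reflectance, form_factor))
```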

Some scientists and computer-graphics researchers uphold the view that such simulations appear realistic (that is, the observer believes the simulation is an image of a physically real object) because they capture and reconstruct the essential physical arrangement of the empirical world. Some artists, designers and animators adhere to the view that discursive practices determine the way we experience light and its artefacts. Addressing the apparent lack of kinship between the two cultures of art and science, Simon Niedenthal argues that the development of algorithms for modelling light behaviour is a valuable place to explore creative thought in graphics research, that is: ‘to generate a site for collaboration at the intersection of light and art for designers and computer scientists devoted to the development of new digital media’ (Niedenthal, 2002). Such calls do not only question the merit of continuing to maintain entrenched divides between the creativity of artistic and scientific practices. They also raise the need to question how the science of digitally generated pixels of light can be read in the context of the history of pictorial, three-dimensional, photographic and cinematic codes of realism. The image’s artifice is not the issue here. The artifice of realism has already drawn exhaustive criticism from many theoretical quarters. For example, the tradition of photorealism has been thoroughly addressed by the post-Barthes critical practice of mercilessly exposing and deconstructing the ‘reality effect’, that is, the manipulative nature of photorealist representation and its capacity to pass off a constructed and edited representation as real.

Digital images have an inner clarity that is given by the scripting of the image itself. In digital images, every part of the image is rendered equivalent by the script, giving it an authoritarian lucidity and a structural intolerance of illegibility. Absent from this artificially rendered all-seeing perspective, in which every detail of the image is controlled, is the faintest suggestion that anything can be extraneous or unscripted. Nothing of the order of dazzlement, temperature, shadow or other artefacts of light ever materialises of itself (unscripted) in this unrelenting hyperlucidity. Artefacts abound in computer generated images, but they differ in nature from the artefacts of direct illumination. Clarity is no longer associated with the presence or absence of light. Indeed, in the digitally rendered image, the signs of realism are not even written in light. Light is written in code.

As a consequence of light’s codification, special consideration must be given to the way the presence of light and lighting effects in digital images is viewed. The emulation of three-dimensional, photographic and cinematic realism in computer graphics is directed towards the image’s own fidelity or believability. Rather than a recorded accident of light that can be read as faithful to an external profilmic reality, something unique about the CG image’s realism, as it is presented, is that it is inwardly motivated, or related to itself. By this I mean that its aims are directed towards achieving the appearance of realism by perfecting its own formal organisation, that is, by scripting, arranging and rendering an image such that it could be mistaken at a perceptual level for an image of a scene or entity that has an independent, material existence.

Another feature of synthetic realism, one that Lev Manovich has commented on, is that the image’s fidelity is selectively executed rather than homogeneous in its realism. Manovich characterises synthetic realism as partial, uneven, and full of gaps (1997: 13). He also remarks that the existence and ever-increasing refinement of techniques to secure a high degree of representational fidelity reflect the circumstances in which computer graphics have been developed, funded and are most widely used — which is to serve military and industrial purposes (1997: 9-13). Here, algorithms, more detailed data, polygon numbers and computing power are regarded as the keys to the use of simulation to create images that appear naturalistic.

A state-of-the-art example of the use of naturalistic lighting in computer graphics is a seven-minute animation called Bunny, directed by Chris Wedge (1998).7 Described by its makers as ‘the first computer-animated film to use radiosity, an advanced computer rendering technique that mimics the most subtle properties of natural light’, Bunny won its makers the 1998 Academy Award for Best Animated Short Film as well as the Grand Prize for 3D Animation at IMAGINA in France. The film was created with the studio’s own proprietary rendering software, which incorporates both radiosity and ray tracing.

In Bunny, the computational feats of radiosity are used not only to create the illusion of realistic 3D space, but also to make reference to the allure of the illusion itself. Rather than disguising the ‘reality effect,’ Bunny flaunts its capacity to pass off a constructed representation as real: ‘What makes Bunny unique is its warm, photorealistic style that belies the computer technology that made it possible’ (Wedge, 1998). Bunny is a remarkable film both as a technical achievement, and because it manages to balance that achievement with a storytelling technique that resists the displacement of interest towards fascination with the illusion. It does so in part by making the illusion integral to the narrative, as a light that illuminates the story. We are drawn, like the moth in the opening scene, to this light, in which we can enter into the touching storyworld of an old rabbit baking a cake in the twilight of her life.

Several kinds of simulated realism are interwoven in the film. The first relates directly to the image’s cinematic look. It takes a moment to realise that the film is not made by the cinematic recording of model animation. Its technical innovation could at first be missed, and is only confirmed by thoroughly scouring the textures and edges of animated objects. A second level of realistic portrayal relates to the depiction of Bunny herself. The anthropomorphic characterisation of the furry, tetchy, frail old creature’s movements — emphasised in the sounds she makes as she bashes and slams about — emulates human experience closely enough to be moving in itself.

The high fidelity 3D rendering of the play of light in the scene is also underlined in the narrative by a play on the drawing power and interchangeability of various simulated light-sources. The possibility of making a physical distinction between simulated and recorded light dissolves as all forms of light can be emulated in the same way, and with such high fidelity that, just like the moth-character that is drawn to each light-source depicted in the film, we moths take it for real. This degree of technical mastery in the pictorial rendering of the lighting in each scene alludes to its fidelity to the physical behaviour of light, which we see in the film’s virtuosic simulation of different light-sources (from a naked electric light-bulb to a mandala-like sun). The film’s technical mastery also alludes to its fidelity to light-based mechanical-reproduction practices — photography, cinema and video.

An important precept in the use of light-rendering techniques in computer graphics is the conception of physical light in eighteenth-century pictorial representation. Like those methods of artificial construction, the information used in the digital simulation and naturalistic rendering of light is based on the mathematical description of meticulous empirical observation of visual faculties. As such, naturalistic lighting acts as a potent reminder that the light depicted in computer images may be digitally generated, but it does not originate in algorithms or mathematical theories. The most significant change that computer graphics introduces into the pictorial rendering of light is the automation of the processes of calculation and rendering. On the one hand, machine vision simplifies the physical behaviour of light, selecting only aspects that are relevant to visualization. On the other hand, machines can have a higher quantitative precision than human vision (Baxandall, 1995: 46). The simplification of the physical properties of light enables computers to artificially enhance and intensify the powers of vision with high precision, while still being modelled on human perceptual faculties.

Light-rendering techniques modelled on human perceptual faculties do not necessarily uphold a representational view of the empirical world, or a merely physical-optical relation that can be technically simulated. Light, lighting, shadow, reflections, colour, translucency, luminosity exist visually, not as objectively determinable (nor objectless computational) entities, but by causing us to see the visible in its unexpected and inexhaustible richness, as it happens. Perceptual experience is not a function of a physical-optical relation between a subject and an externally illuminated object-world. Visual perception presupposes a knowledge of light which cannot be derived from observing light’s actions in abstract terms as though we could see without any kind of visual setting, intentionality, or implicit language for making sense of light. Merleau-Ponty describes lighting as the condition for light to appear as a neutral property common to all visibles: ‘neither colour nor, in itself, even light, it is anterior to the distinction between colours and luminosities. This is why it always tends to become “neutral” for us. The penumbra in which we are becomes so natural that it is no longer even perceived as penumbra’ (1962: 311). In other words, we do not simply see light, even when observing it empirically. It is the medium in which we see, that is, the invisible lighting underlining our gaze.

The bearing of computation — and of optics for that matter — on the workings of human vision is a whole new realm of perceptual inquiry in itself, but both are altering our ways of seeing, not disabling or replacing sight. The important matter is their bearing on sight, which (from an electronic perspective) now oscillates between abstracting vision to the point of making being in the visible, as it takes place, impossible, and seeing everything as it happens, everywhere, at once. The use of simulation as a method of image synthesis in computer graphics cultivates the depiction of phenomenal complexity as a computational (rather than perceptual) feat, automating perception in a way that supposedly circumvents a perceptual system based on light’s mediation of a phenomenal world. In this respect virtual light is more immediate than so-called direct forms of light, giving artists and scientists alike the ability to manipulate the spontaneous appearance of light.

No longer left to move according to its own laws, light now has its optical properties virtualized, animated, automated and augmented; ray-tracing and radiosity software enable artists and scientists to determine the conditions of virtual light’s self-propagation. However, the ability to model light naturalistically demands an acquaintance with light’s appearance in relation to matter and carnality. The appearance of light, lighting, shadow, reflections, colour, translucence, and luminosity takes place within the penumbra of an invisible lighting in which we already are (together with, mirroring and mirrored by, other beings), such that virtual lighting requires us to already be at home within, and actively a part of, the carnal world wherein such appearances are drawn. The opto-electronic displacement of direct light unanchors perception and renders it captive to the speed of vision machines. Virtual light perpetuates the desire to capture the feeling of an (in)formative medium whose texture captivates the gaze, and in doing so, gives us a de facto sense of all that we can grasp of the proximate.

Endnotes

A version of this article was first presented at Craft and Code: Cinema and New Media, Powerhouse Museum, Sydney Oct, 1999. My thanks to Chris Wedge for providing me with a video-copy of Bunny, to Simon Niedenthal for his generosity in the sharing of references, and to Donald P. Greenberg for showing me the Cornell Box set-up at Cornell University and discussing its purpose as a test scene in computer graphics research.

1 For a discussion of the cosmological problems that would follow from any alteration of the speed of light as a constant cosmic speed limit (a possibility that astrophysicists are currently investigating), see Barrow (1999).

2 In doing so, photography altered the shadow’s semiological properties, liberating shadows from, and allowing them to survive beyond, the object that caused them. Surrealist art and photography introduced an indecision around the shadow’s status in the plastic domain, making it impossible to know if the shadow seen is causally related to its object or not. Thus viewers of surrealist art could not be sure if they and the art object were inside an indexical or an iconic space. Denis Hollier (1994) further observes that the surrealist practice of integrating shadows cast upon the art object into the scene of the work itself, brought them into play as a deliberately introduced exteriority (indexical marks in an iconic space) rather than the otherwise unnoticed shadows of objects.

3 View images at http://graphics.stanford.edu/~henrik/papers/nightsky/

4 Quoting Merleau-Ponty’s famous overturning of the metaphysical distinction between vision as a static, reflective stance and the spontaneity of movement, Virilio writes: ‘Everything I see is in principle within my reach, at least within the reach of my sight, marked on the map of the “I can”. In this important formulation, Merleau-Ponty pinpoints precisely what will eventually find itself ruined by the banalization of a certain teletopology’ (1994: 7).

5 The mission statement of the Program of Computer Graphics at Cornell University, which has done pioneering work in the development of algorithms for rendering light naturalistically, and is home to the Cornell Box (the ubiquitous test scene for global illumination), is: ‘Our long term goal is to develop physically-based lighting models and perceptually based rendering procedures to produce images that are visually and measurably indistinguishable from real world images.’ http://www.Graphics.Cornell.EDU/online/box/

6 Proper discussion of methods and technical advances in digital lighting and rendering is beyond the scope of this article. For a valuable introductory text, see Birn (2000). I discuss an example of appearance modelling (also by Jensen and co-researchers) elsewhere, in relation to Merleau-Ponty’s account of phenomenal complexity (forthcoming 2003).

7 View images at http://bunny.blueskystudios.com/bunny_home.html

References

Barrow, J. D. (1999) ‘Is Nothing Sacred?’, New Scientist 2196, July 24, 1999: 28-32.

Baudrillard, J. (1988) ‘Fatal Strategies’, in M. Poster (ed.), Jean Baudrillard: Selected Writings. Oxford: Polity Press.

Baudrillard, J. (1997) ‘Aesthetic Illusion and Virtual Reality’, in N. Zurbrugg (ed.), Jean Baudrillard: Art and Artefact. Brisbane: Institute of Modern Art.

Baudrillard, J. (2000) ‘Photography, Or the Writing of Light’, Trans. F. Debrix. CTheory Article A083. http://www.ctheory.net/

Baudrillard, J. and Sutton, P. (1997) ‘Endangered Species? An Interview with Jean Baudrillard’, Angelaki 2:3: 217-224.

Baxandall, M. (1995) Shadows and Enlightenment. New Haven: Yale University Press.

Birn, J. (2000) Lighting and Rendering. New York: New Riders.

Blumenberg, H. (1987) The Genesis of the Copernican World. Trans. R. M. Wallace. Cambridge, Massachusetts: The MIT Press.

Hilf, B. (1996) ‘Developing a Digital Aesthetic’, Animation Journal 5, 1 (Fall): 4-31.

Hollier, D. (1994) ‘Surrealist Precipitates: Shadows Don’t Cast Shadows’, Trans. R. Krauss. October 69 (Summer): 111-32.

Jay, M. (1995) ‘The Speed of Light and the Virtualization of Reality’, in K. Goldberg (ed.), The Robot in the Garden: Telerobotics and Telepistemology in the Age of the Internet. Cambridge, Massachusetts: The MIT Press.

Jensen, H. W. et al. (2001) ‘A Physically-Based Night Sky Model’, Proceedings of SIGGRAPH 2001, August: 399-408.

Krauss, R. (1985) ‘The Photographic Conditions of Surrealism’, in The Originality of the Avant-Garde and Other Modernist Myths. Cambridge: MIT Press.

Manovich, L. (1997) ‘“Reality” Effects in Computer Animation’, in J. Pilling (ed.), A Reader in Animation Studies. London: John Libbey.

Merleau-Ponty, M. (1962) The Phenomenology of Perception. Trans. C. Smith, London: Routledge & Kegan Paul.

Merleau-Ponty, M. (1964) ‘Eye and Mind’, The Primacy of Perception (ed.), J. M. Edie. Chicago: Northwestern University Press.

Niedenthal, S. (2002) ‘Learning from the Cornell Box’, Leonardo 35, 4: 249-254.

Seymour, M. (1999) ‘CGI Through Simulation’, Digital Media World, Aug/Sept 1999: 33.

Schivelbusch, W. (1988) Disenchanted Night: The Industrialization of Light in the Nineteenth Century. Trans. A. Davies. Oxford: Berg.

Vasseleu, C. (forthcoming 2003) ‘An Analysis of the Animation in a Glass of Computer-Generated Milk’, in L. Fisher, S. Gürtler, S. Stoller & V. Vasterling (eds), Feministische Phänomenologie und Hermeneutik. München: Karl Alber.

Virilio, P. & Brügger, N. (2001) ‘Perception, Politics and the Intellectual: Interview with Niels Brügger’, in J. Armitage (ed.), Virilio Live: Selected Interviews. London: Sage.

Virilio, P. (1994) The Vision Machine. London: British Film Institute & Bloomington: Indiana University Press.

Virilio, P. (1997) Open Sky. Trans. J. Rose. London: Verso.

Virilio, P. (2000) Polar Inertia. Trans. P. Camiller. London: Sage Publications.

Wedge, C. (Dir.) (1998) Bunny. New York: Blue Sky Studios. http://bunny.blueskystudios.com/bunny_home.html
