Machine Intelligences: An Introduction – Peter Jakobsson, Anne Kaun & Fredrik Stiernstedt

The aim of this special issue is to advance new critical perspectives on machine intelligence. Although the current hype surrounding artificial intelligence has been countered by several critical interventions, there is still a long way to go to produce a shift in the mainstream discourse concerning these technologies. The AI hype has support from resourceful and well-connected actors within industry and politics. Within the art world and popular culture, AI appears to be a more ambiguous phenomenon, associated with both blessings and grave dangers. Nevertheless, its development is all too often portrayed as though it were inevitable and its path already set. The impulse behind this special issue is to deepen and diversify the interrogation of this seemingly inevitable development and to look behind the shiny surfaces of these supposedly new technologies. This special issue thus offers historical perspectives, conceptual rethinking and situated analyses of the technical realities and the social and cultural implications of machine intelligence, in its many different forms and manifestations, in the hope that this will provide opportunities to intervene in and change the course of our technological futures.

There is much futurist speculation, but we do not yet know how AI will develop and what the limits of its use in algorithmic automation will be. This means that there is also room and opportunity for change. The examples of AI that we have seen so far are still miles away from the many speculative visions of the future that draw broad conclusions about the revolutionizing potential of the technology. Current AI-based solutions often automate repetitive, fairly simple and standardized tasks that at the same time require human workers to adapt their labor and practices to become ‘machine readable’. Alessandro Delfanti and Bronwyn Frey (Delfanti, 2021; Delfanti & Frey, 2021) speak, for instance, of ‘humanly extended’ machines that require human workers as extensions to fully execute tasks, such as picking items in an Amazon warehouse. At the same time, the dream of full automation is still being pursued. Public administration in particular, struggling with shrinking resources and the increasing costs of aging societies, has become a frontline of machine-enabled automation. The dream of efficiency and effectiveness through algorithmic automation is sold to public representatives at technology fairs and conferences, where they breathe the ‘hot air’ of artificial intelligence (Hockenhull & Cohn, 2021).

The overstatement of the revolutionary potential of machine intelligence is, however, increasingly being questioned. A number of critical voices are urging us to engage with the preconditions for machine intelligence, including natural resources, repetitive and low-skill labor, and large-scale data extraction, as pointed out for example by Kate Crawford (2021). Rather than concluding that we are in the midst of a fundamental technological revolution powered by AI, Crawford argues that artificial intelligence is neither artificial nor intelligent. Instead, AI has become a master narrative for a ‘planetary system of extraction’ of labor, data and natural resources. Crawford’s move towards the material, political and economic system that has emerged around the notion of AI goes hand in hand with a more general interest in engaging with technology in more holistic ways and, more importantly, with theoretical approaches from the social sciences and humanities that help make visible the hidden layers of complex technologies while also contributing to reimagining the technological futures we want to work towards. Technological development is often far more open than acknowledged in public discourses and imaginaries. As David Noble (2011) has pointed out, technological development is shaped by conscious decisions made within political, economic and cultural contexts, rather than by a process of natural selection in which the best technology wins out over others. It is hence crucial to also consider the routes not taken as well as the conditions for imagining and developing future technologies. These paths not taken make visible the structural conditions of technological potential and of our technological futures.

Studying past and present narratives of machine intelligence is one way of engaging with this developing future. We should also ask why certain narratives and imaginaries are preferred over others, what the interests are behind these narratives, how they potentially contradict each other, but also in which ways these narratives are performative and shape concrete developments and implementation processes. One case in point is Simone Natale’s (2021) work on AI as deceitful media. Rather than asking how intelligent machines are, he asks how intelligent they appear to be. According to him, AI development is not primarily about emulating or superseding human intelligence, but rather reaching a stage where we as humans are successfully lured into believing that machines are intelligent. AI is only intelligent as long as we believe in its intelligence. Similarly, the dream that machine intelligence will enhance our future is only worth dreaming as long as we buy into it.

There is of course much at stake in the reproduction of narratives and imaginaries about a glorious techno-future, especially for the technological elite in Silicon Valley but also for other players within industry and financial markets. The dream of a future in which machine intelligences make workers exchangeable, while weakening their position in labor struggles, is currently boosting the stock value of many companies. But dreams about the accomplishments of future machine intelligences also legitimize other political futures, which require that we leave worldly constraints and challenges behind. The dream of a singular, self-thinking AI allows us to escape the present world, including the great challenges of climate change as well as poverty and suffering.

Governments and politicians have bought into this dream of ultimate machine intelligence and invest heavily in development programs at universities as well as private companies. To meet critical voices, funding increasingly also goes into research on ethical AI. However, developing ethical standards for machine intelligence seems not only difficult but also very limited. It is not merely a matter of developing just and human-focused technology, or a technology without bias, but of addressing the imbalances in power relations that emerge and are enhanced through technology. Amanda Lagerkvist (2020) argues that we have to move beyond AI ethics and develop more hopeful, alternative versions of AI futures that are collectively developed and controlled. According to her, we are at a crossroads of AI development – a digital limit situation. In this liminal situation, there is a need for political theories about how machine intelligence can be developed without cementing the existing technological elites and social order; machine intelligences that allow for social change and social mobility. Her position, echoing Andrew Feenberg’s (2010) call for ‘democratic rationalization’, is thus that there is nothing in the technology itself that stands in the way of it becoming a tool for more equality rather than inequality. This means that there must be a way to imagine, and achieve, forms of machine intelligence that do not depend on the ‘planetary system of extraction’ (Crawford, 2021) that characterizes its current manifestations. There must also be a way to develop the technology so that it avoids the concentration of power and resources in a few hands, even though such a concentration often results from the construction of large technical systems. It is at this limit point, then, that public discussion needs more insight from the social sciences and humanities to explore alternative paths.
The contributions to this special issue together do just that, namely, provide alternative visions of our technological futures. The articles speak to histories (Johannes Bruder & Orit Halpern, Sam Kellogg, Evan Donahue), alternative ways of conceptualizing machine intelligence (Lisa Müller-Trede and Kwasu Tembo), the social, cultural and existential implications of machine intelligence (Andrew Davis, Michelle Pfeifer, Crystal Chokshi, Thao Phan & Scott Wark), as well as the emergence of AI art (an interview with Joanna Zylinska, Martin Zeilinger, and a video essay by Andreas Refsgaard).

Together, the contributions to this special issue probe ways of thinking critically about technological futures. More specifically, Johannes Bruder and Orit Halpern construct a genealogy of machine intelligence that traces the origins of our present predicaments relating to the pandemic and its catastrophic effects on our health, societies and economies. They argue that some of the solutions that have been proposed to model and handle the pandemic and the spread of the virus spring from early neuroscientific and economic thinking associated with Donald Hebb and Friedrich Hayek. This understanding of minds and markets, which Bruder and Halpern call ‘the neural imaginary’, is then traced to our present where, they argue, it still plays an important legitimating function for the use of AI and neural networks to model and represent humans and societies. As Bruder and Halpern show, however, the epistemology and mathematical models that are legitimated by this imaginary are fraught with problems and concerns regarding their ability to do the representational work that they are called upon to do.

Sam Kellogg, in his contribution to this special issue, also provides a genealogy of AI. By teasing out the perhaps unexpected overlaps between the romantic and modernist rhetoric of mountaineering on the one hand, and the language used to convey the abstract mathematical concepts behind machine learning on the other, Kellogg shows that these overlaps are not coincidental but in fact reveal something about the mindset of the mathematicians and technologists working towards optimization and mastery through ever more complex algorithms. The metaphorical use of concepts such as peaks, valleys, plateaus, saddle points, ridges and canyons is nowadays so common in machine learning discourse that they almost come across as clichés, Kellogg argues. After WWII, mountaineering became a site of cultural and geopolitical struggle, but first ascents were also shrouded in mythical tales about individual achievement and the overcoming of difficulties under uncertain circumstances. Kellogg shows how this cultural history reverberates in contemporary understandings of algorithmic gradient optimization and how it indeed legitimates and naturalizes the risks that are inherent in the technical, but inevitably also social and cultural, project of the computer engineers and mathematicians working on the development of AI.

Evan Donahue, in turn, adopts a historical perspective in order to engage critically with how AI and machine learning techniques are used and understood. Donahue argues that the formation of the datasets used in training AI systems can serve as a valuable entry point into discussions about the epistemological assumptions underlying contemporary uses of machine learning. Using datasets and methods related to aesthetic and emotional computation, and more specifically color, as a case study, Donahue reveals how technologists’ assumptions about the data – assumptions which are often based on non-specialist knowledge – heavily impact assumptions about what the trained AI models can accomplish. Here social scientists and humanities scholars have an important role to play, since their expertise often involves interpretation of the kinds of data that are used in AI training sets.

Two of the contributions provide a rethinking of the central concepts of this special issue, by probing new ways to imagine what artificial intelligence can be and questioning our ability to know what machines are capable of. Lisa Müller-Trede’s essay is a philosophical and conceptual intervention, but also an invitation to experimentation in art, design and technology, in order to construct other forms of intelligence than those currently being pursued. Müller-Trede suggests the concept of artificial relational intelligence to describe a form of technologically shared intelligence between multiple humans, one that partly tries to escape the representational and statistical forms of knowledge enacted by the dominant forms of machine intelligence today. The audible breath is suggested as an alternative way of creating an interface between humans and machines, pointing the way towards a new poetics of algorithmic systems. Kwasu Tembo’s contribution is a conceptual and speculative intervention in the debate on the relationship between humans and machines. The focus here is on how we as humans perceive and understand machine intelligences and, perhaps most importantly, on our inability to do so. The 2016 Go tournament between AlphaGo and world champion Go master Lee Sedol, which Tembo uses as a case study, naturally invited a lot of anthropomorphizing commentary about the style and merits of the computer’s gameplay. Beyond this, however, the tournament also highlighted the human inability both to know what the computer knows and to know what the computer does not know – as well as the difference between human imagination and machine imagination. This inaccessibility and distance are, in Tembo’s essay, discussed as a contemporary form of the technological sublime.

Five of the contributions to this issue engage with the social, cultural and existential implications of developments towards machine intelligence. Andrew Davis’ contribution is a speculative and critical essay that discusses the quest towards artificial general intelligence and the Singularity – the point where technological development will outrun human capabilities and understanding – and the political and social implications of this (supposed) future event. Davis focuses particularly on the work and ideas of the futurologists, venture capitalists and technologists who aim to turn the Singularity into an opportunity for the merging of human and machine and the overcoming of physical boundaries such as age, decay and death. Instead of viewing this moment as the inevitable outcome of technological progress, Davis views the striving towards the Singularity as directed by the economic and corporate interests that, indeed, are currently driving this development. Rather than the liberation of mankind, Davis thus interprets this development as the fulfilment of these economic and corporate interests, pointing towards the creation of a corporate sovereign – a corporate structure unhindered by any obstacles on its path to corporate domination.

Michelle Pfeifer also touches upon the topic of sovereignty, but focuses instead on the role of machine intelligence in establishing national sovereignty through the construction of national and supranational intelligent borders. The focus of her analysis is on the impact that data-driven and semi-automated technologies for identification and authentication have on political and legal personhood. She argues that the current political fixation on migration, in tandem with border control technologies, has led to a reification of migration management as a seamless system of control, in which media technologies such as mobile phones have become key mediators of human values such as recognition and belonging. Issues of belonging and exclusion are also raised in Thao Phan and Scott Wark’s contribution to this issue, in which they argue that technologies of personalization necessarily also entail discrimination. Although companies like Facebook claim that their technologies for personalized content and advertisements do not use categories such as race when profiling their users, their advertising-based business model compels them to offer various proxies for sensitive categories such as race. Phan and Wark argue that such proxies, which can be used for discriminatory purposes, are an integral part of technologies of personalization. Crystal Chokshi in turn considers the implications that linguistic computational technologies have for the communicative practice that most of us use our personal media technologies for, namely, writing. Looking beyond the issue of privacy, Chokshi uses the notion of linguistic capitalism to argue that the communicative function of writing today has become secondary to the commodification of language and communication through technologies such as Google’s Smart Compose.
In her essay, Chokshi juxtaposes the marketing claims and users’ beliefs about such technologies with the technical and economic functions of linguistic computational technologies. She thus demonstrates how these technologies downplay the cultural value of words while at the same time transforming words into valuable economic resources. Finally, the issue contains three contributions that either discuss the relationship between AI and art or are examples of the convergence between the two. The first contribution within this section is a translated interview with Joanna Zylinska by Claudio Celis and Pablo Ortuzar Kunstmann, originally published in Spanish by the journal la Fuga. The starting point of the conversation is Zylinska’s (2020) book AI Art: Machine visions and warped dreams, in which Zylinska raises questions concerning the purpose and nature of art in the age of AI. In the interview, Zylinska connects her writings on AI to questions underlying both this and her previous works, questions about the state of humanity as such, both as a species and as a historical agent. Referring to Bernard Stiegler, she discusses how intelligence has, in a certain sense, always been artificial. Consequently, she calls for increased attention to concepts such as environmental creativity and post-human understandings of art and creativity. Martin Zeilinger, in his contribution, picks up on precisely such concepts and understandings, arguing that the coming of AI art – artistic expressions produced by AI that are accepted as creative expressions and authentic artworks by the art world – will fundamentally impact how we view artistic and aesthetic ideals and the concepts tied to the artist figure and works of art. This argument is brought out by an analysis of the technological platforms currently employed in much of what has become known as AI art.
Zeilinger argues that even if the products of these technologies are often discussed using traditional concepts from the art world, such as creativity and originality, a closer look at the algorithms underpinning common AI art platforms reveals that their basic functionality can be described as that of copy machines. This, Zeilinger argues, has implications for, among other things, the cultural ownership models of the art world – i.e. copyright – and, by extension, the economic function of art. The final contribution to this issue is an example of the use of AI in the creation of art. Danish artist Andreas Refsgaard’s contribution is a video collecting some of his works that use various forms of machine learning algorithms, playfully showing both how AI can be used in the creation of art and the limits of, and indeed the failures associated with, contemporary technologies.

References

Crawford, K. (2021) Atlas of AI. New Haven: Yale University Press.

Delfanti, A. (2021) “Machinic dispossession and augmented despotism: Digital work in an Amazon warehouse”, New Media & Society 23. No. 1: 39-55.

Delfanti, A. & Frey, B. (2021) “Humanly Extended Automation or the Future of Work Seen through Amazon Patents”, Science, Technology, & Human Values 46. No. 3: 655-682.

Feenberg, A. (2010) Between reason and experience: Essays in technology and modernity. Cambridge, Massachusetts: MIT Press.

Hockenhull, M. & Cohn, M. (2021) “Hot air and corporate sociotechnical imaginaries: Performing and translating digital futures in the Danish tech scene”, New Media & Society 23. No. 2: 302-321.

Lagerkvist, A. (2020) “Digital Limit Situations: Anticipatory Media beyond ‘the New AI Era’”, Journal of Digital Social Research 2. No. 3: 16-41.

Natale, S. (2021) Deceitful media: Artificial intelligence and social life after the Turing test. New York: Oxford University Press.

Noble, D. (2011) Forces of production: A social history of industrial automation. New Brunswick: Transaction Publishers.

Zylinska, J. (2020) AI Art: Machine visions and warped dreams. Open Humanities Press.
