Vol 20 (2021) Machine Intelligences

 

What Personalisation Can Do for You! or,

How to Do Racial Discrimination Without ‘Race’

Thao Phan & Scott Wark

 

Between 2016 and 2020, Facebook allowed advertisers in the United States to target their advertisements using three broad ‘ethnic affinity’ categories: African American, U.S.-Hispanic, and Asian American. Superficially, these categories were supposed to allow advertisers to target demographic groups without using data about users’ race, which Facebook explicitly does not collect. This article uses the life and death of Facebook’s ‘ethnic affinity’ categories to argue that they exemplify a novel mode of racialisation made possible by machine learning techniques.

Adopting Wendy H. K. Chun’s conceptualisation of race ‘and/as’ technology as an analytical frame, this article focuses on what ‘ethnic affinity’ categories do with race. ‘Ethnic affinity’ categories worked by analysing users’ preferences and behaviour: they were supposed to capture an ‘affinity’ for a broad demographic group, rather than registering membership of that group. That is, they were supposed to allow advertisers to ‘personalise’ content for users depending on behaviourally determined affinities. We argue that, in effect, Facebook’s ethnic affinity categories were supposed to operationalise a ‘post-racial’ mode of categorising users. But the paradox of personalisation is that in order to apprehend users as individuals, platforms must first assemble them into groups based on their likenesses with other individuals.

Even in the absence of data on a user’s race—even after the demise of the categories themselves—users can still be subject to techniques of inclusion or exclusion for discriminatory ends. The inductive machine learning techniques that platforms like Facebook employ to classify users generate proxies, like racialised preferences or language use, as racialising substitutes. We conclude that Facebook’s ethnic affinity categories in fact typify novel modes of racialisation that are often elided by the claim that using complex machine learning techniques to attend to our preferences will inaugurate a post-racial present. Discrimination is not personalisation’s accidental product; it is its very condition of possibility. As with Facebook’s ethnic affinity categories, reports of its death have been greatly exaggerated.


Introduction

On August 11, 2020, Facebook announced a series of changes to their advertising tools. Billed as an effort to ‘simplify’ and ‘streamline’ the targeting options available to advertisers, this announcement quietly marked the demise of what was known as their ‘ethnic affinity’ categories. On the surface, these categories seemed innocuous enough. In a bid to help advertisers reach ‘multicultural audiences’, Facebook created three typologies into which some of its users would be sorted. Depending on how they used the platform, who they interacted with, and what preferences they expressed, users could be classified by Facebook’s machine learning AI as having an affinity for African American, U.S. Hispanic, or Asian American culture. But over the preceding years, these categories had been at the centre of a storm of criticism concerning their dubious classificatory system and its potential for ongoing misuse. From 2016, journalists, commentators, lawmakers, advocates for racial equality, and lawyers had been generating a steady stream of exposés, urgent queries, and lawsuits challenging their legality and their actual and potential discriminatory uses. After years of criticism, this announcement felt like Facebook’s final—and rather belated—acceptance of the inevitable: their ‘ethnic affinity’ categories were, if not racist, then too open to abuse by unscrupulous advertisers.

This article uses the life and death of the Facebook ethnic affinity categories as a case study to reflect on the shifting configurations of race and racial categorisation in algorithmic culture. The story of Facebook’s ‘ethnic affinity’ categories exemplifies emerging techniques of racialisation today. It is our contention that such techniques operationalise a function that is inherent to algorithmic culture, namely: discrimination. The quiet demise of the categories is supposed to have brought one particularly ignominious chapter in Facebook’s recent history to a close. However, we contend that these categories typify the techniques that platforms like Facebook use to assemble us into groups and, reciprocally, to apprehend us as individuals. This inherent discriminatory function is bound up with one of digital culture’s organising tendencies: the drive to combine data collection with AI techniques to provide increasingly personalised services (Kant, 2020). Facebook’s ethnic affinity categories typify a state of affairs that seems paradoxical. In order for a service to be personalised for you, you must first be understood in relation to a set of others (Lury and Day, 2019). By their very nature, these services are discriminatory. For us, the pressing question that needs to be asked is this: what new forms of racial discrimination does algorithmic culture impose under personalisation’s aegis?

The title for this article provides clues as to how we aim to tackle this question. The title’s structure is a play on media scholar Wendy H. K. Chun’s seminal essay, ‘Race and/as technology; Or, how to do things with race’ (2013). The first half, What personalisation can do for you!, co-opts the advertising rhetoric at the heart of our critique. It is a nod to the difficulties of engaging with a discourse that manifests almost exclusively through spin — web copy, press releases, promotional trailers, and more — a discourse whose defining features are empty promises packaged in shiny promotional hype. The phrasing also brings attention to our own crude efforts at ‘personalisation’. Such keyword titles (like ours) are representative of a strategic impulse within the academic industry to engage with algorithmic optimisation. They are messages targeted to both imagined readers and the algorithms that mediate their exposure; a way of signalling to these audiences that this might be just the article for you (!). The you is the ironic subject of this sentence. As our argument will unpack, by you we don’t mean you the individual. Instead, we mean the target of personalisation techniques: the you who manifests in relation to others like you. It is this relation that personalisation seeks to capitalise upon and the subject around which our discussion revolves.

The second half of the title, how to do racial discrimination without ‘race’, builds on Chun’s framing of race and/as technology by suggesting that racial discrimination is technologised within regimes of personalisation. While this is also true in previous eras, in this article we aim to unpack the material specificity of AI and machine learning as a technology that not only participates in a history of racial discrimination, but that represents a novel step in that history. Finally, the placement of ‘race’ in inverted commas points to the ongoing contestability of this term, not just as a form of categorisation that may or may not have biological or cultural grounding — what Chun calls the ‘ontology of race’ — but as a continuously disavowed and reinscribed form of classification.

As we argue in the following sections, race is completely absent from commercial vocabularies (replaced with empty corporate phrases like ‘multicultural’, ‘ethnic affinity’, and ‘diversity’), yet it is at the centre of an entire ecosystem of commercialisation and contemporary forms of demographic categorisation that are ostensibly designed to usher in a society that is, for all intents and purposes, ‘post-race’. Race might be occluded by such platforms, but it doesn’t disappear. It is reprocessed in and through personalising techniques. As we want to show, this changes its locus — from already-existing communities made up of people raced-as, to individuals racialised by and through the affinities they express as they use computational systems, like Facebook. What’s at stake is not what we understand race to be, but rather the question of how it’s implemented, and/as technologies of personalisation that re-politicise the ‘personal’, again and differently.

Personalisation’s Post-Racial Promise

The relationship between personalisation and racialisation is a counter-intuitive one. In broad terms, personalisation refers to the tailoring of products or services to suit the needs or desires of specific people. Personalisation can be simple: a monogram, for instance, personalises an item of clothing. But with the explosion of digital services designed to collect, process, and aggregate large amounts of data, personalisation is now one of the drivers of contemporary Big Tech. It’s one of the key selling points that has allowed platforms like Facebook to establish themselves as the primary mediators between the people who use their services and the advertisers who want to reach those people. For your data—which, subject to proprietary computational processes, becomes their data—platforms make a double wager. To users, they promise to use this data to tailor newsfeeds and content; to show you only the most relevant posts, comments, stories, ads, and more. To advertisers, they promise to maximise return on investment by more efficiently targeting ads to the most relevant audiences—what in advertising parlance is called reaching ‘the right person, at the right place, at the right time, with the right message’ (Facebook for Business, 2020a; Boudet et al., 2019).

Apprehended individually, personalisation can sometimes feel creepy or annoying, like when a suspiciously relevant promotion appears on your newsfeed following an offline conversation, or when you’re chased around the internet by an ad for a product you’ve already purchased. This creepy-factor gives personalisation a dull sheen, because it turns it into a new kind of ‘grey media’ (Fuller and Goffey, 2012)—a form of easily-ignored online clutter that recedes into the internet’s workaday background, enabled by opaque data brokers, loose regulation, mind-numbing terms of service agreements, and user complacency.

In the aggregate, however, personalisation has shaped the internet as we know it. As Tanya Kant persuasively argues, these kinds of mundane transactions have become ‘the driving economic resource of the contemporary free-to-use web’ (2020, 6; emphasis original). Personalisation drives the expansion of what media scholars call ‘platformisation’ (Helmond, 2015; Poell et al, 2019), or the continuing transformation of software and services into avenues for data capture. Reciprocally, it drives the transformation of the role of the internet user, as the design of computational services is increasingly shaped by the capacity to capture data in new ways and in new media (Phan, 2019; Wark, 2019). This is part of what has allowed the platform to exert such profound shaping effects on our societies, our economies, our polities, and even on capitalism itself (Van Dijck et al, 2018; Bratton, 2015; Steinberg, 2019; M. Wark, 2019). Platforms have emerged as technical-infrastructural ‘ensembles’, or large-scale, self-regulating technical systems (Mackenzie, 2018), in part because they’re engineered to turn user data into advertising dollars.

It can be difficult to conceptually grasp the scale and impact of platformisation, partly because of the sheer size and diversity of company holdings, but also because platforms themselves engage in what David Nieborg and Anne Helmond describe as ‘rhetorical interventions’ that are designed to evade traditional forms of classification (2019, 198). For Nieborg and Helmond, to describe a platform like Facebook as a single entity or as just a social networking site is a misleading characterisation that functions to ‘normalise the company’s infrastructural ambitions and prevent a coherent analytical framework that accounts for the platform’s expanding boundaries’ (198). They instead use the phrase ‘a data infrastructure hosting a variety of platform instances’ to capture the evolution of Facebook from a single platform into an ecosystem that contains infrastructural properties (199).

In the same way that the singular term ‘platform’ misrepresents the vastness of Facebook’s operations, the term ‘personalisation’ misleadingly characterises the AI processes that manage platform content. As Aron Darmody and Detlev Zwick (2020) outline, personalisation is underwritten by a justificatory promise to make the internet more relevant to both users and advertisers alike. They argue that this promise helps to elide a key operative contradiction: that platforms claim to ‘liberate’ their users to make better choices about what they want to see, post, share, and consume by actively controlling the choices available (2020, 4). Following this logic, we argue that the promise of relevance elides another, more fundamental contradiction: that personalisation is never really about you, the individual.

The promise of relevance hinges on the tacit claim that personalisation offers advertisers a means to reach individuals, and individuals a means to be recognised as individuals by platforms and the advertisers that they serve. But personalisation doesn’t individualise, not really. Rather, it exploits the ability of AI techniques, like machine learning, to automate the classification of a platform’s users into increasingly granular categories. Whereas the older, broader demographic categories one might have used to break an audience down into segments were relatively fixed, Facebook claims that their tools mean that ‘marketers don’t hold anything constant’ (Facebook for Business, 2017). Labels like ‘Asian’ or ‘Hispanic’ arguably matter less in this context than more detailed descriptors such as ‘interested in yoga’ or ‘frequent engagement with cake related content’.1 Crude categories like race are, in principle, obsolesced by more granular and precise audience attributes. The social promise of AI-driven personalisation, then, is that it represents a ‘post-marketing’ turn (Darmody and Zwick, 2020) that has the potential to institute a ‘post-racial’ reality.2 After all, if we’re no longer using racialised demographic categories to target our goods or services at people, doesn’t personalisation peel race back and reveal the person, the you, underneath?
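To make the shape of this shift concrete, consider the following sketch. It is a hypothetical illustration only: the field names, labels, and values are our own inventions for the purpose of contrast, and do not reproduce Facebook’s actual targeting schema or API.

```python
# Hypothetical illustration only: contrasting a 'blunt' demographic segment with
# a behaviourally inferred audience. All field names and values are invented for
# this sketch; they do not reproduce Facebook's targeting schema.

# Yesterday's segment: a handful of categories held constant in advance.
demographic_segment = {
    "age_range": (25, 34),
    "location": "Los Angeles",
    "ethnicity": "Hispanic",  # the kind of crude label said to be obsolesced
}

# A behavioural audience: nothing is 'held constant'. Attributes are inferred
# from captured actions and can be added, dropped, or recombined at any time.
behavioural_audience = {
    "interests": ["yoga", "cake related content"],
    "engagement": {"page_likes_30d": 14, "video_completion_rate": 0.83},
    "inferred_affinities": {"multicultural_content": 0.91},  # scored, not declared
}
```

The contrast is only illustrative, but it captures the claim at issue: the second description is more granular, more mutable, and apparently innocent of race.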

Yet if this is the case, then how do we explain the ongoing, and arguably intensified, forms of racism and racial discrimination that proliferate today? And how might Facebook specifically be responsible for (or at least complicit in) creating racial division? One answer is that these platforms employ machine learning-based categorisation techniques that are not interested in the individual at all—this presumptive, ‘pared-back’ person—but in the production of new forms of categorisation. The best way of serving you is, paradoxically, to compare you with others: to fabricate new categories premised not on individuals, but on individuals’ likenesses to others (Lury and Day, 2019). We argue that it is through this process of determining likeness that categories, like race—which personalisation claims to efface—are reinscribed with a vengeance. The contradiction we mean to unpick is one between a post-racial promise and a reality in which racialisation continues to be profoundly felt by those who labour under the sign, ‘raced’.

My Affines and Yours

Facebook’s ethnic affinity categories offer perhaps the most exemplary model for this form of effacement and reinscription. At first glance, ethnic affinities are indistinguishable from what we typically call categories of race. Facebook’s typology of labels for their affinity groups—African Americans, U.S. Hispanics, and Asian Americans—reproduces an almost prototypical racial taxonomy; the kinds of labels one might find in census data or in biomedical research. But Facebook is very careful to articulate that ethnic affinities explicitly do not designate race. In an online tutorial promoting the benefits of what they called ‘multicultural marketing’,3 Facebook explained:

The word “affinity” can generally be defined as a relationship like a marriage, as a natural liking, and as a similarity of characteristics. We are using the term “Multicultural Affinity” to describe the quality of people who are interested in and likely to respond well to multicultural content. What we are referring to in these affinity groups is not their genetic makeup, but their affinity to the cultures they are interested in.

The Facebook multicultural targeting solution is based on affinity, not ethnicity. This provides advertisers with an opportunity to serve highly relevant ad content to affinity-based audiences.

(cited in Newitz, 2016a, emphasis added)

This final point, that affinity is not ethnicity, is one they emphasised at length. In a statement to online magazine Ars Technica, a Facebook spokesperson reiterated that a label such as ‘African-American affinity’ did not necessarily mean that the platform had identified a person as black. Rather, what it showed was that ‘they like African-American content’. The spokesperson went on to clarify: ‘we cannot and do not say to advertisers that they are ethnically black. Facebook does not have a way for people to self-identify by race or ethnicity on the platform’ (cited in Newitz, 2016b).

For Facebook, the distinction was clear. Race, as either self-determined or biologically determined, was not a point of data collected by the platform. The conception of race that they offered was an essentialist one that reduced racial identity to a single set of data points. By carefully bracketing out what did and did not count as racial data, Facebook was able to proceed with a model of demographic targeting that (in their mind) protected them from the criticisms associated with practices of race-based profiling. The logic was simple: without race there could be no racism—or at the very least, Facebook could not be accused of explicit racial profiling if race itself was absent from their schemata.

This was a rhetorical intervention enacted in two parts. First was the substitution of the loaded and contested term ‘race’ with the more palatable and ostensibly neutral term ‘ethnicity’.4 This substitution follows a broader trend in scientific and policy discourse after World War II, in which scientists and bureaucrats—seeking to denounce the racism of their fields—began to phase out the use of ‘race’ in favour of terms like ‘ancestry’, ‘ethnic origin’, or more laboriously, ‘ethnic constitutional factor’ (see Kowal and Watt, 2018: 230). These terms all seek to de-emphasise race as a purely biological phenomenon, consciously shifting the discourse instead to an understanding of race as a cultural phenomenon. In Facebook’s case, our argument is that invoking ethnicity provided a way to rebrand racialisation in an age in which ‘race’ had become an increasingly vexed subject. Indeed, moving away from race allowed Facebook to virtuously claim that these categories could be used to ‘promote inclusion of underrepresented communities’ (Egan, 2016). Reframed as an act of social good, ethnic affinities were justified as a way for advertisers to reach an otherwise ‘underserved minority market’ (García Martínez, 2019).

Though arguably well-intentioned, this semantic shift creates significant elisions. It poaches from the language of diversity and inclusion to signal a symbolic commitment to anti-racism that rarely translates to meaningful material changes.5 But more than this, phrases like ‘ethnic affinities’ obscure the ongoing and harmful processes of racialisation and discrimination that Facebook actively participates in.6 It allows Facebook to continue the dubious work of racial categorisation while avoiding explicit association with, and therefore responsibility for, historical practices of racism (see Kahn, 2012: 5). As Emma Kowal and Elizabeth Watt argue, ‘rather than changing biological concepts of difference… eschewing the language of ‘race’ may merely displace these concepts onto others, and make their effects more difficult to track’ (2018: 233). A banal and behaviour-driven phrase like ‘ethnic affinity’ is, then, a gift to companies like Facebook who are invested in forms of demographic targeting but conscious of the negative impact race-based profiling might have on their brand image.

Second, Facebook capitalised on the ambiguity of the term ‘affinity’. Within critical scholarship, affinity has a number of different meanings. In disciplines like anthropology, it describes specific kinds of social connections between people in kinship groups that are made through bonds like marriage. It is a connection outside of a direct biological or blood connection, but that is nevertheless characterised by familial intimacy—what Marshall Sahlins (2013) calls a ‘mutuality of being’. In feminist scholarship, this definition of affinity as connection beyond biology is often used as the foundation for political solidarity and as a means to undermine essentialist readings of identity. For instance, queer feminist sociologist Laura Mamo (2005) has used affinity to describe practices of kinship making in LGBT families. In her study of how lesbian couples select potential sperm donors in practices of assisted reproduction, Mamo uses the term ‘affinity-ties’ to describe how sperm is chosen based on an imagined future relation with the child. She highlights race-matching between mothers-to-be and donors as a particularly common practice, with phenotypical likeness identified as a key site for giving the family unit a sense of social legitimacy. Affinity, here, signals the potentiality to foster deep connections with a future child; an assumption that is propelled by the somewhat superficial understanding that families should racially look alike even if they aren’t genetically linked (Mamo, 2005: 248).

Similarly, feminist technoscience scholar Donna Haraway has used the term ‘politics of affinity’ to establish new forms of coalition within feminist movements. Where a ‘politics of identity’ can be problematically reduced to questions of biology—for instance, the transphobic idiom that only ‘biologically born women’ can identify as ‘real women’—the term affinity instead offers an avenue for belonging outside of such crude ideas of natural identification. Haraway argues that political movements grounded in an assumed collective identity have only managed to reproduce logics of domination and assimilation. In particular, she criticises the tendency in radical feminism to orchestrate political action around the singular category of ‘womanhood’. This position inevitably generates its own exclusions as experiences that deviate from sanctioned narratives of women’s experience are undermined or erased. Just as Sojourner Truth so elegantly articulated over a century earlier in her proclamation ‘Ain’t I a woman?’, Haraway too argues that we must acknowledge ‘the non-innocence of the category “woman”’ (Haraway, 2004: 16) in order to break away from the logics of domination we claim to rally against. Affinity, then, offers a means of coalition outside of the limitations of identity. It presupposes commonality without singularity. In Haraway’s words, it underscores ‘a sea of differences…a self-consciously constructed space that cannot affirm the capacity to act on the basis of natural identification, but only on the basis of conscious coalition, of affinity, of political kinship’ (Haraway, 2004: 14–15).

Facebook too used affinity as a method to construct fluid community groups through forms of difference and likeness outside of biology. In their words, ethnic affinity is not ‘ethnic biodiversity’ or ‘genetic makeup’. But in addition to this, affinity was also used to construct groups based on a more abstract sense of potentiality. For Mamo, this potentiality was figured through the construction of the family unit and its imagined future cohesiveness. For Haraway, it was the potential for radical political action; a way to move beyond the limitations of an essentialist identity politics. Facebook also mobilised this figure of affinity as potentiality, not in service of social cohesion or social justice, but rather, for the morally-neutral—or, indeed, morally-vacuous—agenda of generating clicks and converting advertisements into sales. Affinity clusters audiences together based on an assessment of likeness: if one person with your shared characteristics has clicked on an item, then you potentially might click on it too. While it is a common belief that Facebook sells data to advertisers, this is not actually the case; or, at least, is not the case here. Instead, what Facebook sells is access to an audience of affines whose kinship is a shared potentiality of clicks; an audience composed of aggregated likeness packaged as objects of personalisation.
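The mechanics of this clustering can be gestured at with a minimal sketch, assuming only that each user is reduced to a vector of captured actions. It is a hypothetical illustration of likeness-based audience assembly, not a description of Facebook’s actual systems; every name and number in it is our own.

```python
import numpy as np

# Hypothetical sketch of likeness-based audience assembly (not Facebook's
# actual system). Each user is reduced to a vector of captured actions
# (clicks, likes, shares) over a small set of content items.
engagement = {
    "user_a": np.array([1, 0, 1, 1, 0]),
    "user_b": np.array([1, 0, 1, 0, 0]),
    "user_c": np.array([0, 1, 0, 0, 1]),
}

def likeness(u, v):
    """'Affinity' operationalised crudely as cosine similarity of action vectors."""
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def assemble_audience(seed_users, all_users, threshold=0.5):
    """Anyone whose behaviour is sufficiently like the seed group's average
    behaviour joins the audience, regardless of who or what they 'are'."""
    centroid = np.mean([all_users[name] for name in seed_users], axis=0)
    return [name for name, vec in all_users.items()
            if likeness(vec, centroid) >= threshold]

# Users who act like the seed user become the targetable audience of 'affines'.
print(assemble_audience(["user_a"], engagement))  # ['user_a', 'user_b']
```

The point of the sketch is simply that membership is decided by similarity of captured behaviour rather than by any declared identity: an audience of affines is a cluster of shared potential clicks.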

Category-collapse

At one level of generality, personalisation processes use affinity to figure social relations in particular ways. At scale, the personalisation industry promises to reach individuals. It’s billed as a means to truly tailor, target, or individualise products and services by replacing yesterday’s handful of blunt demographic segments with a constantly changing set of variables. But in order to find a very specific person in the digital wild, marketers must also make them apprehensible. According to Kris Cohen, they do so by changing what he calls the ‘form of grouping or group form’ (2019: 167). Personalisation also makes another assumption: that platforms like Facebook can make up for what amounts, in practice, to a paucity of data on a given person by assembling each of us into groups of people who are like us (Lury and Day, 2019). The paradox of the personalisation industry is that it doesn’t actually individualise. Rather, it sorts us into groups of people who are like us because we like the same things as them, as signalled by the actions that platforms are able to capture. Robert Prey expresses this pithily when he notes that on platforms like Facebook, ‘there are in fact no individuals, but only ways of seeing people as individuals’ (2018: 1088).

Facebook’s description of ethnic affinities cited above articulates more than the personalisation industry’s rationale. It naturalises a ‘digital behaviorist’ (Rouvroy and Berns, 2013; Stark, 2018) vision of social organisation in which users are what they do and in which capturing enough of what users do allows platforms to predict what they might want to do next. For Cohen, this set of assumptions engenders a ‘personhood of preference’: from the point of view of the platform, our ‘sovereignty’ and ‘agency’ can be ‘encapsulated as a series of actions to be interpreted flatly as a preference for one thing over another’ (173). Rather than being an ‘individualising vector’, Cohen argues, ‘personalisation names a scattering of the personal across categories, across bulwarks’. He continues:

The spread of personalisation technologies and the ubiquity of personalised address could produce the feeling that we are witnessing the live slow-motion collapse of categories inside of which people are still desperately trying to live.

(2019: 189)

Category-collapse is the end-result of an approach to seeing people as individuals that ‘refuses to hold anything constant’. Race is supposed to be one of the casualties of this collapse. In its stead, Facebook presented marketers with ‘ethnic affinities’: aggregate likes transmuted into likenesses.

Yet, despite these existential claims for category-collapse, what ethnic affinities vividly illustrate is the stubborn persistence of a category like race; or at least, how one set of categories may collapse into another. While in principle there was a reasonable enough distinction between race and ethnic affinity, in practice it was clear that ethnic affinities were nevertheless being used and/as a technology of race. In her seminal analysis of the relationship between race and technology, Wendy H. K. Chun (2013) employs the phrase ‘race and/as technology’ to displace essentialist claims regarding race as either purely biological or purely cultural. Following the work of philosopher of technology Bernard Stiegler, Chun argues that race can instead be understood as a form of mediation that is ‘crucial to negotiating and establishing historically variable definitions’ of either category (2013: 39). In doing so, she shifts the conversation from an analysis of the ontology of race to an analysis of its ‘utility regardless of its essence’ (39). The phrase ‘race and/as technology’ can be understood as a provocation to rethink not only what race is, but what race does. Its punctuating slash emphasises race’s location in the and/as. It emphasises the role of technology in establishing the ‘truth’ of race via technoscientific techniques, but also turns a critical eye to what that ‘truth’ is made to do.

In the context of Facebook, it was evident that while ethnic affinities may have been technically distinct from race—operating within a distinct regime of platform categorisation—they were nevertheless deployed to do almost identical work. In the 2016 news story that cast ethnic affinity categories into the media spotlight, journalists Julia Angwin and Terry Parris Jr. demonstrated how advertisers could easily purchase ads targeted at Facebook users who were house hunting, and then exclude anyone with African-American, US Hispanic, or Asian American affinity (Angwin and Parris Jr., 2016). Here, ethnic affinities continued the work of racial segregation via systematic housing discrimination. As Angwin and Parris Jr. noted, this form of targeting and exclusion represented a ‘blatant violation’ of federal anti-discrimination laws, namely the Fair Housing Act of 1968 and the Civil Rights Act of 1964. It embodied what critical race and technology studies scholar Safiya Noble describes as ‘technological redlining’—the use of algorithmically driven software to ‘reinforce oppressive social relationships and enact new modes of racial profiling’ (2018: 1). This example demonstrated that even while ‘race’ was removed from the equation, ethnic affinities could nevertheless be used to continue the work of systemic racism and racial inequality.

Facebook was quick to condemn what they described as ‘advertising misusing [their] platform’ (cited in Angwin and Parris Jr, 2016). But even in their own examples, in which advertisers were ‘correctly’ using their marketing tools, ethnic affinities were still explicitly employed and/as a technology of racialisation. In one of their self-cited success stories, Facebook collaborated with Universal Studios to conduct a ‘customized racial marketing’ campaign to promote the 2015 film Straight Outta Compton, a biographical drama chronicling the rise of Compton-based hip hop group N.W.A. On a panel at the 2016 South by Southwest Festival, Doug Neil, Universal’s Executive Vice-President of digital marketing, explained how ethnic affinity categories were used to target different demographic groups with tailored movie trailers. Business Insider journalist Nathan McAlone (2016) summarised the strategy in the following way:

The “general population” (non-African American, non-Hispanic) wasn’t familiar with N.W.A., or with the musical catalog of Ice Cube and Dr. Dre, according to Neil. They connected to Ice Cube as an actor and Dr. Dre as the face of Beats, he said. The trailer marketed to them on Facebook had no mention of N.W.A., but sold the movie as a story of the rise of Ice Cube and Dr. Dre.

The trailer marketed to African Americans was completely different. Universal assumed this segment of the population had a baseline familiarity with N.W.A. “They put Compton on the map,” Neil said. This trailer opens with the word N.W.A. and continues to lean on it heavily throughout.

(McAlone, 2016)

What is not mentioned in this summary, but can be gleaned from watching both versions of the trailer, is that this was not just a simple strategy of pitching to different levels of cultural literacy.7 Rather, each trailer adopted a different technique of racialisation.

The first trailer, targeted to the ‘general population’ (i.e. white audiences), flashed the slogan ‘The World’s Most Dangerous Group’ across the screen, emphasised images of gun violence and aggressive clashes with police, and painted the group as ‘dangerous’ characters motivated by money and a desire to change their lives. In contrast, the second trailer, marketed at an African American population, framed the group as community leaders who used their art as a vehicle to express their frustration and anger at ongoing racial injustice. Where the first trailer used the slogan ‘The World’s Most Dangerous Group’, the second rephrased this slogan as ‘In the most dangerous place in America, their voices changed the world’. Though subtle, this new phrasing significantly changed the meaning of the campaign. The threat of ‘danger’ was displaced from the individual onto society: from N.W.A as a threat, to N.W.A (and the people they represented) as living under threat. The group were also sympathetically refigured: from individuals in need of personal change to individuals who were instead agents of social change. This second trailer also included references to police brutality, the beating of Rodney King and the ensuing LA riots, and the use of music as a medium for non-violent protest. Nor did it shy away from featuring some of the group’s more controversial lyrics. In one scene, a crowd chants ‘fuck tha police’ over a montage of protestors rioting in the streets. This lyric was not only absent from the first trailer; so, too, was any reference to the group’s full name — Niggaz Wit Attitudes — a choice presumably made to protect white audiences’ delicate sensibilities.

Both modes of racialisation can be understood in terms of what feminist activist and scholar bell hooks (1992) calls the exploitation of Otherness in consumer culture. In the first trailer, white audiences were presented with stereotypes of black masculinity that, in hooks’ words, ‘constructs black men as “failures” who are psychologically “fucked up,” dangerous, violent, sex maniacs whose insanity is informed by their inability to fulfil their phallocentric masculine destiny in a racist context’ (hooks, 1992: 89). These fetishistic and racist representations were then marketed to white audiences as a means to satisfy a transgressive desire to experience the ‘wildness’ of the Other. In the second trailer, resistance to racial inequality and the pursuit of racial justice were co-opted for promotional purposes. In this instance, blackness becomes doubly objectified. On the one hand, black culture and black resistance is the commodity that is sold back to the community for profit. On the other, black communities themselves are transformed into audience commodities who are then auctioned to advertisers by Facebook. This example underscores how lines of racialisation continue to be reinscribed within a commodity culture that disingenuously claims to not ‘see race’.

Race’s Traces

A few weeks after ProPublica released their investigation and caught the attention of lawmakers and the press in the US, Facebook put out a press release outlining changes they had made to the way these categories could be used to target audiences. Though noting that ‘[t]here are many non-discriminatory uses of our ethnic affinity solution’, Erin Egan, then-VP US Public Policy and Chief Privacy Officer, stated that Facebook would ‘disable the use of ethnic affinity marketing for ads that we identify as offering housing, employment, or credit’ (2016). By the following February, Facebook announced a range of further measures to help curb the misuse of these categories, including a new machine learning tool that was designed to vet potential housing, employment, or credit ads, as well as requiring advertisers to affirm that their ads followed a new set of more stringent advertising guidelines (Facebook Newsroom, 2017). They also quietly renamed the category ‘multicultural affinity’.

As ProPublica reported in November 2017, however, these new changes didn’t stop them from placing a new housing ad with discriminatory targeting parameters (Angwin, Tobin, and Varner, 2017a). In their report, they noted that they could also use Facebook’s geographical targeting tools to exclude users who lived in particular neighbourhoods from seeing these ads—or, that the ads continued to allow ‘redlining’. ProPublica’s investigation generated more than negative press: it prompted lawsuits, including one brought against the company by the state of Washington (Statt, 2018). By August 2018, Facebook removed a further 5,000 targeting categories from its advertising tools in response to another suit (Tobin and Merrill, 2018)—in order, as they put it, to ‘minimiz[e] the risk of abuse’ (Facebook for Business, 2018)—and in 2019, a further settlement prompted by a suite of lawsuits brought by U.S.-based civil rights organisations forced the company to disallow age, gender, and postcode-based targeting options from housing, employment, and credit ads (Gillum and Tobin, 2019).

Towards the end of 2020, this string of revelations, legal challenges, piecemeal changes and jargon-filled press releases culminated in Facebook finally removing ethnic—by then, multicultural—affinity categories altogether. In their press release, the company described the reasoning behind this removal like this:

Over the past few years, we’ve routinely reviewed and refined our targeting options to make it easier for advertisers to find and use targeting that will deliver the most value for businesses and people. Today, we’re sharing an update on our ongoing review and streamlining the options we provide by removing options that are not widely used by advertisers.

Infrequent use may be because some of the targeting options are redundant with others or because they’re too granular to really be useful. So we’re removing some of these options. For example, we’re removing multicultural affinity segments and encouraging advertisers to use other targeting options such as language or culture to reach people that are interested in multicultural content. We continue to support product solutions for multicultural marketing while guarding against their potential for misuse.

(Facebook for Business, 2020b)

These bare few paragraphs were supposed to perform the trick of announcing the end of those much-maligned ethnic affinity categories, whilst simultaneously eliding their impact and significance. But, as the old saying goes, reports of their death would prove to be greatly exaggerated.

In August of 2020, investigative journalism site The Markup published yet another story in this now-familiar genre pioneered by ProPublica. Their claim went a lot further. Referring to yet another lawsuit, served on Facebook by the United States Department of Housing and Urban Development, this article noted that, ‘[a]side from allowing advertisers to target specific audiences’ using categories like ethnic affinities, ‘some critics say Facebook discriminates all on its own in delivering those ads’ (Merrill, 2020). The implication is not only that Facebook allows advertisers to discriminate explicitly by creating categories that they can deliberately (mis)use to exclude particular groups from seeing an ad; the implication is that Facebook’s methods of selecting audiences for all of its ads are inherently discriminatory. Here, we refer to discrimination’s double meaning as both an act of discernment that distinguishes between things, separating some things from others (from the Latin discernere); and as an act of segregation that imposes distinctions between persons or things by fiat, what Trinh T. Minh-Ha calls ‘[t]he apartheid type of difference’ (Minh-Ha, 1988). This example exposes the conceit of personalisation. Despite Facebook’s efforts to bypass race through its semantic refiguration into ethnic affinities and its condemnation of discrimination as a kind of wilful misuse of the platform, racial discrimination is what personalisation is designed to do. Though personalisation is said to collapse race, its traces are reconstituted through the ongoing affirmation of racialisation as a positive outcome of machine learning techniques.

In her brilliant book Cloud Ethics, Louise Amoore incisively describes the discriminatory nature of all algorithms—and especially machine learning techniques:

Algorithms come to act in the world precisely in and through the relations of selves to selves, and selves to others, as these relations are manifest in the clusters and attributes of data. To learn from relations of selves and others, the algorithm must already be replete with values, thresholds, assumptions, probability weighting, and bias. In a real sense, an algorithm must necessarily discriminate to have any traction in the world. The very essence of algorithms is that they afford greater degrees of recognition and value to some features of a scene than they do to others.

(2020: 8, emphasis added)

Discrimination is, therefore, a necessary effect of the call to cluster audiences by likeness. As Amoore goes on to argue, even if one attempts to remove a single element in an algorithm’s “recipe” in order to reconfigure the calculative logic that might lead to something such as a racist or sexist outcome, it is not enough to undo the broader arrangement of recursive functions (11). This is particularly the case if the forms of refining or “tuning” that developers undertake continuously weight race-based outcomes as optimal for their purposes. Though Facebook might not explicitly collect data about a user’s race, this category can nevertheless be made legible, and operated upon, by assembling audiences around affinities. Instead of targeting—that is, including or excluding—a group using explicit racial categories, discrimination can be conducted using other characteristics that operate as ‘proxies’.

Life, by proxy

The argument that Facebook is inherently discriminatory becomes intelligible when we understand its ethnic affinity categories as one set of particularly egregious proxies in a chain of other potential proxies that can be substituted for race. In the scholarly-technical literature on the inferential computational techniques that make personalisation possible—principally, varieties of machine learning—a proxy is a variable that is a known correlate of another and which can be used as a substitute to produce a particular targeting outcome (Datta et al, 2017, 4). As Solon Barocas and Andrew D. Selbst argue, the sorting effect of proxies is particularly difficult to combat, because they might simultaneously be ‘genuinely relevant in making rational and well-informed decisions’ whilst also ‘result[ing] in systematically less favourable determinations for members of protected classes’ (2016, 691) when they correlate with a protected characteristic, like race.

In their investigation of the persistent possibility that Facebook’s targeted advertising tools can be used to discriminate, Till Speicher et al. (2018) provide a thorough outline of how these substitutions work in practice. Alongside their analyses of how targeting might lead to discrimination by failing to show an ad to an audience that offers a representative cross-section of society (2018: 4), and the details of how audiences to be included or excluded can be generated using ‘personally-identifiable information’ (5), they also explain how proxies can be exploited. By assembling audiences using keywords that correlate with the preferences of a particular minority group, which can either be arbitrarily selected or which can be generated via suggestions of ‘like’ categories offered by Facebook’s advertising tools, advertisers can use proxies that may seem ‘facially neutral’ to exclude particular demographics based on particular preferences they might express (10). For example, in order to reach (or exclude) Hispanic users, advertisers could target proxy categories such as those classified as interested in the news source Nuestro Diario, 98% of whom are also classified as having a Hispanic American affinity (9). More insidiously, ProPublica has demonstrated how Facebook’s targeted advertising system can be used to target people with anti-Semitic views by bundling explicitly hateful categories, such as ‘Jew hater’, with less explicit categories, such as ‘Second Amendment’ and ‘National Democratic Party of Germany’, a far-right, ultranationalist political party (see Angwin, Varner and Tobin, 2017b).
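The logic of such a proxy can be sketched in a few lines of code. What follows is a hypothetical illustration with invented data and invented correlation rates; it does not reproduce Speicher et al.’s measurements or Facebook’s tools, but it shows how excluding users via a ‘facially neutral’ interest can fall almost entirely on a protected group.

```python
import random

# Hypothetical illustration of proxy discrimination. The population, the
# protected attribute, and the correlation rates below are invented for this
# sketch; only the mechanism follows Speicher et al.'s (2018) account.
random.seed(0)

def synthetic_user():
    """A toy user: a protected attribute plus a correlated, 'neutral' interest."""
    protected = random.random() < 0.20             # e.g. a Hispanic affinity
    interest_rate = 0.60 if protected else 0.01    # interest tracks the attribute
    return {"protected": protected,
            "interested_in_news_source": random.random() < interest_rate}

users = [synthetic_user() for _ in range(100_000)]

# The advertiser never touches the protected attribute: they simply exclude
# everyone flagged with the 'neutral' interest from seeing a housing ad.
audience = [u for u in users if not u["interested_in_news_source"]]
excluded = [u for u in users if u["interested_in_news_source"]]

# Measuring the outcome shows the exclusion lands almost entirely on one group.
share = sum(u["protected"] for u in excluded) / len(excluded)
print(f"Share of excluded users carrying the protected attribute: {share:.0%}")
```

Run with these made-up rates, the overwhelming majority of excluded users carry the protected attribute even though that attribute never appears in the targeting criteria: this is what it means for a variable to be ‘facially neutral’ and discriminatory at once.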

Facebook’s now-departed ethnic affinity categories formalised the substitution of preference-based affinity for racial categories, using the capacity to collect large amounts of granular data about users to produce arbitrary racialised categories that operate at very high levels of abstraction. These categories can be described as abstract because it’s not clear, for instance, what preferences qualify membership in an “Asian American” grouping, or how this maps on to the reality of Asian community belonging in the contemporary United States. But their recently-announced demise does nothing to tackle the capacity for advertisers to use proxies to carry out more targeted and granular forms of discrimination. In their announcement about the removal of these categories, Facebook stated that they “encourag[e] advertisers to use other targeting options such as language or culture to reach people that are interested in multicultural content” (Facebook for Business, 2020b). As a proxy, language might not offer the same breadth as ethnic affinities, but it nevertheless has the capacity to generate discriminatory outcomes.

The short statement that announced the death of Facebook’s ethnic affinity categories might be read as an expression of both the promise and failure of the post-racial vision of society that pervades Facebook’s claims about affinity. Taken at face value, this failure is clear in the invocation of the word ‘multicultural’. Mark Zuckerberg famously declared that his vision was for Facebook to become the world’s ‘default social’; to invoke multiculturalism is to admit that whiteness is this social’s default racial category. The acknowledgement that other categories can do the same group-assembly work of including and excluding is tantamount to admitting that Facebook’s machine learning systems are inherently discriminatory. But such deliberate acts of discrimination are a distraction from the way that Facebook’s affinity-based group assembly simultaneously normalises discrimination at scale and elides its operations.

The problem with these systems and their use of inferential machine learning techniques is that by automating the capacity to inductively categorise and re-categorise us and to assemble us into groups, they also elide what racialisation does. These categories aren’t simple reifications of prior racial categories. Rather, they make use of the capacity to collect and process large amounts of data about our preferences—expressed through actions of liking or not-liking, engaging or not-engaging—to establish an other ‘truth’ of race grounded in one’s behavioural affines.

This ‘truth’ may be deformed by the racialised inequalities that haunt data about us, even before they’re entered into a particular system. Proxies are hard to avoid, especially within a country with such entrenched forms of racism as the U.S. As with all settler-colonies, the very founding of the nation was based on forms of differential categorisation—Indigenous peoples as flora and fauna, the doctrine of Terra Nullius in Australia, and so forth. It may be exploited by ill-intentioned advertisers to include or exclude us based on arbitrary characteristics. But the most frightening problem it poses is that many of its operations are occluded. In their influential article on big data’s capacity to discriminate, Barocas and Selbst assert that ‘the worry… is not simply that data mining introduces novel ways for decision makers to satisfy their tastes for illegal discrimination; the worry is that [data mining] may mask actual cases of such discrimination’ (2016: 693). The problem is not only that these systems perform a poor sleight-of-hand, trying to convince us that they can make race disappear whilst recapitulating it in plain sight. It’s that we don’t know what our categories can be made to do—or, indeed, what we can be made to do in service to our categories.

Facebook’s ethnic affinity categories invite us to read them as proxies both ways. They are proxies for more granular targeting options, like language, that can be used to racialise groups of users based on their expressed preferences or their actions. That is, they invite us to trace the proxy’s chains of substitutions down. But in their group-forming capacity, they also invite us to read them as proxies that scale up. These categories are proxies by which we might understand the way that data is used to engender novel cultural formations: forms of grouping that inhabit racialised categories after their supposed collapse. They are proxies by which we might understand a particular conception of society, in which machine learning’s capacity to infer correlations in massive data sets also implements a model of society in which whiteness persists as a grounding norm. Finally, they are proxies by which we might understand race’s persistence in and as a series of datafied substitutions and transformations. They are a proxy, that is, for race’s novel form.

Conclusion

We have been upfront about treating the proxy as an object of critique. We’ve also, tacitly, treated it as a methodological prompt. In order to confront novel modes of racialisation instituted by techniques like machine learning on platforms like Facebook, we’re often told that we ought to try to open the ‘black boxes’ that act as their containers. But the ‘black box’ in this case is not an algorithm that acts on data, but a technical ensemble that uses machine learning techniques to produce classifications.

These techniques are typically inductive, generating outcomes that are not necessarily replicable in other settings and with different data. They are also proprietary. These are not black boxes that researchers are able to prise open. Indeed, because of the scale at which platforms operate and the complexity of their internal systems and the data they handle, they themselves also have to grapple with what Adrian Mackenzie identifies as a ‘margin of indeterminacy’, that is, a degree of internal opacity that is generated because platforms themselves apply inductive machine learning techniques to the user-generated data they own (Mackenzie, 2018: 6, 15).

Nevertheless, these platforms do produce end-products that we can engage with and critique, in the form of new abstractions that act in and on the world. The term ‘abstraction’ might sound, well, abstract, but it has real-world, material effects. In earlier work, Mackenzie notes that ‘machine learners’—his term for the human-technical ensembles that constitute machine learning in process—‘generate new categorical workings or mechanisms of differentiation’ (2017: 10). These systems, he argues, are ‘closely interested in producing knowledge, albeit scientific, governmental, or operational’ (14; see Berry, 2017). These systems produce abstractions—in other words, categories—that they put to work on us; or, to recall Chun’s incisive formulation, and/as us.

Subject to these operations, race isn’t straightforwardly reproduced by these systems. It becomes something else: not only another category that operates in the same way, but an alternate practice of categorisation that’s subject to alternate mechanisms of differentiation. Following Chun, we apprehended these mechanisms by turning from questions of what conception of race these systems implement to focus on what these systems do with race. So processed, ‘race’ is neither displaced, nor does it disappear. It sits alongside its prior manifestations; its operations can be apprehended in the slipshod and stereotypical targeting that underwrites the two distinct advertisements that we discussed earlier. At the same time, it also does with us and to us, differently. Its locus is not an already-existing community, which might be vilified for its preferences, but the affines that might be generalised from the preferences expressed by our behaviour in platforms’ circumscribed social spaces. The questions that tacitly guided our analysis — the questions that, we think, need to be taken up in greater depth in future work on race understood and/as technology — are both operative and epistemological. What models of knowledge, society, and community inform the techniques that are used to produce classifications? What are they produced in aid of? What, crucially, are they used to do to ‘us’, the group form, when this form deliberately or unwittingly reinscribes race?

As technology, the imposition of race on persons categorised as belonging to one or another ethnic affinity takes place as an arbitrary grouping carried out by algorithmic means—discrimination, to riff on Chun, and/as personalisation. What it means to ‘discriminate’ is altered by how one is able to constitute persons and their contexts using personalising techniques. So even if, as Prey notes, ‘[a] good recommendation system… should not rely on demographics because demographics discriminate’ (2017: 10957), this capacity merely refers to a state in which discrimination is both personalisation’s a priori condition of possibility and its a posteriori product.

Acknowledgements

Scott Wark’s research for this article was conducted as part of the “People Like You”: Contemporary Figures of Personalisation project, which is funded by a Wellcome Trust collaborative award 2018–2022 (205456/Z/16/Z).

Notes

1. These are both verbatim categories taken from the ‘Facebook for business’ promotional material. See: https://www.facebook.com/business/learn/lessons/tips-to-create-core-audience-on-facebook?ref=ahc_lwe and https://www.facebook.com/business/news/good-questions-real-answers-how-does-facebook-use-machine-learning-to-deliver-ads.

2. By ‘post-race’ we refer to the political and ideological belief that race is no longer a determining social factor. We draw on critical race scholars such as Eduardo Bonilla-Silva, who frames ‘post-racialism’ as a strategic attempt to represent individuals or society as ‘beyond race’ or ‘above the racial fray’ (2009: 265). As Alana Lentin contends, this discourse is often used to individualise and discredit experiences of racism; in her words, it ‘sustains the general belief that racism is mainly an irrationality now overcome’ (2014: 1268).

3. This tutorial has since been deleted from the Facebook for Business site. The segments quoted here are cited from screenshots available from secondary sources (see Newitz, 2016a).

4. While there is much critical scholarship that explores the nuances and differences between race and ethnicity (see Wills, Hübinette, and Willing, 2020), our aim in this article is not to identify the specificities or boundaries between these two concepts, but rather to interrogate how the topology of race is figured within the commercial sociotechnical imaginaries of Facebook. In this way, we follow Amade M’Charek’s understanding of race as a ‘relational object’—something that is enacted through technoscience, and which can manifest differently according to the techniques of materialisation (see M’Charek, 2013).

5. There is a wealth of research that documents and critiques the phenomenon of diversity management as a field of business that allows corporations to co-opt social justice issues in a superficial effort to perform corporate social responsibility. See Ahmed, 2007; 2012; Berrey, 2015; Gordon, 1995.

6. Facebook has been heavily criticised for actively enabling white supremacist organisations to operate and recruit on their network, allowing conspiracy theories and misinformation with racist undertones (and sometimes overtones) to circulate, and for disproportionately flagging cultural content as inappropriate or in violation of their community standards. See Kraft and Donovan, 2020; Nadler et al, 2017; Noble, 2018.

7. Both trailers are available on the Universal Pictures YouTube channel. The first trailer is marked as the ‘global trailer’ (see https://www.youtube.com/watch?v=rsbWEF1Sju0), and the second is marked as the ‘red band trailer’, a designation for trailers that have been given an R rating (see https://www.youtube.com/watch?v=OrlLcb7zYmw).


References

Ahmed, S. (2007). ‘The language of diversity’. Ethnic and Racial Studies, 30(2), 235–256.

Ahmed, S. (2012). On Being Included: Racism and Diversity in Institutional Life. Durham: Duke University Press.

Amoore, L. (2020). Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press.

Angwin, J. & Parris Jr., T. (2016) ‘Facebook Lets Advertisers Exclude Users by Race’, ProPublica, viewed 20 September 2020, https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race.

Angwin, J., Tobin, A. & Varner, M. (2017a) ‘Facebook (Still) Letting Housing Advertisers Exclude Users by Race’, ProPublica, viewed 20 September 2020, https://www.propublica.org/article/facebook-advertising-discrimination-housing-race-sex-national-origin.

Angwin, J., Varner, M., & Tobin, A. (2017b) ‘Facebook Enabled Advertisers to Reach ‘Jew Haters’’, ProPublica, https://www.propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters?token=k0PuAmvq_Xy63TS9ofcxNn6J431eO1RK.

Barocas, S. & Selbst, A.D. (2016) ‘Big data’s disparate impact’. California Law Review, 104, 671-732.

Berry, D.M. (2017) ‘Prolegomenon to a Media Theory of Machine Learning: Compute-Computing and Compute-Computed’. Media Theory, 1, 74-87.

Bonilla-Silva, E. (2009). Racism without Racists: Color-Blind Racism and the Persistence of Racial Inequality in America. London & New York: Rowman & Littlefield.

Boudet, J. et al. (2019) ‘The future of Personalization – And How to Get Ready For It’, McKinsey & Company, viewed 20 September 2020, https://www.mckinsey.com/business-functions/marketing-and-sales/our-insights/the-future-of-personalization-and-how-to-get-ready-for-it.

Bratton, B.H. (2015) The Stack: On Software and Sovereignty. Cambridge: MIT Press.

Chun, W.H.K. (2013) ‘Race and/as Technology, or How to do Things with Race’. In Nakamura, L. & Chow-White, P. (eds) Race after the Internet. New York and London: Routledge, 38-60.

Cohen, K. (2019) ‘Literally, Ourselves’. Critical Inquiry, 46, Autumn, 167-192.

Darmody, A. & Zwick, D. (2020) ‘Manipulate to empower: Hyper-relevance and the contradictions of marketing in the age of surveillance capitalism’. Big Data & Society, January-June, 1-12.

Datta, A. et al. (2017) Proxy Non-Discrimination in Data-Driven Systems: Theory and Experiments with Machine Learnt Programs. arXiv preprint arXiv:1707.08120

Egan, E. (2016) ‘Improving Enforcement and Promoting Diversity: Updates to our Ethnic Affinity Marketing’, Facebook, viewed 20 September 2020, https://about.fb.com/news/2016/11/updates-to-ethnic-affinity-marketing/.

Facebook for Business. (2017) ‘People-Based Marketing: Thinking People-First Planning and Measurement’, Facebook, https://www.facebook.com/business/news/insights/the-future-of-marketing-people-based-planning-and-measurement.

Facebook for Business. (2018) ‘Keeping Advertising Safe and Civil’, Facebook, viewed 20 September 2020, https://www.facebook.com/business/news/keeping-advertising-safe-and-civil.

Facebook for Business. (2020a) ‘Personalization: Opportunities, Pitfalls, and How to Get it Right’, Facebook, viewed 20 September 2020, https://www.facebook.com/business/news/personalization-opportunities-pitfalls-and-how-to-get-it-right.

Facebook for Business. (2020b) ‘Simplifying Targeting Categories’, Facebook, viewed 20 September 2020, https://www.facebook.com/business/news/update-to-facebook-ads-targeting-categories.

Facebook for Business. (2020c). ‘How to create a Facebook Core Audience in Ads Manager’. Facebook, viewed 20 September 2020, https://en-gb.facebook.com/business/learn/lessons/tips-to-create-core-audience-on-facebook.

Facebook for Business. (2020d). ‘How Does Facebook Use Machine Learning to Deliver Ads?’, Facebook, viewed 20 September 2020, https://www.facebook.com/business/news/good-questions-real-answers-how-does-facebook-use-machine-learning-to-deliver-ads.

Facebook Newsroom. (2017) ‘Improving Enforcement and Promoting Diversity: Updates to Ads Policies and Tools’, Facebook, viewed 20 September 2020, https://about.fb.com/news/2017/02/improving-enforcement-and-promoting-diversity-updates-to-ads-policies-and-tools/.

Fuller, M. & Goffey, A. (2012) Evil Media. Cambridge: MIT Press.

García Martínez, A. (2019) ‘Are Facebook Ads Discriminatory? It’s Complicated’, Wired, https://www.wired.com/story/are-facebook-ads-discriminatory-its-complicated/.

Gillum, J. & Tobin, A. (2019) ‘Facebook Won’t Let Employers, Landlords or Lenders Discriminate in Ads Anymore’, ProPublica, viewed 20 September 2020, https://www.propublica.org/article/facebook-ads-discrimination-settlement-housing-employment-credit.

Haraway, D. (2004). ‘A Manifesto for Cyborgs: Science, Technology and Socialist-Feminism in the 1980s’. In The Haraway Reader, Routledge, 7–47.

Helmond, A. (2015) ‘The platformization of the Web: Making Web data platform ready’. Social Media + Society, 1, 1-11.

hooks, b. (1992). Black Looks: Race and Representation (1st edition). South End Press.

Kahn, J. (2012). Race in a Bottle: The Story of BiDil and Racialized Medicine in a Post-Genomic Age (Illustrated Edition). Columbia University Press.

Kant, T. (2020) Making it Personal: Algorithmic Personalization, Identity, and Everyday Life. New York: Oxford University Press.

Kowal, E., & Watt, E. (2018). ‘What is race in Australia?’ Journal of Anthropological Sciences, 96, 229–237.

Krafft, P. M., & Donovan, J. (2020). ‘Disinformation by Design: The Use of Evidence Collages and Platform Filtering in a Media Manipulation Campaign’. Political Communication, 37(2), 194–214.

Lentin, A. (2014). ‘Post-race, post politics: The paradoxical rise of culture after multiculturalism’. Ethnic and Racial Studies, 37(8), 1268–1285.

Lury, C. & Day, S. (2019) ‘Algorithmic personalization as a mode of individuation’. Theory, Culture & Society, 36, 17-37.

M’Charek, A. (2013). ‘Beyond Fact Or Fiction: On the Materiality of Race in Practice’. Cultural Anthropology, 28(3), 420–442.

Mackenzie, A. (2017) Machine Learners: Archaeology of a Data Practice. Cambridge, Mass.: The MIT Press.

Mackenzie, A. (2018) ‘From API to AI: platforms and their opacities’. Information, Communication & Society, 1-18.

Mamo, L. (2005). ‘Biomedicalizing kinship: Sperm banks and the creation of affinity-ties’. Science as Culture, 14(3), 237–264.

McAlone, N. (2016). ‘Why ‘Straight Outta Compton’ had different Facebook trailers for people of different races’. Business Insider. https://www.businessinsider.com/why-straight-outta-compton-had-different-trailers-for-people-of-different-races.

Merrill, J.B. (2020) ‘Does Facebook Still Sell Discriminatory Ads?’, The Markup, viewed 20 September 2020, https://themarkup.org/ask-the-markup/2020/08/25/does-facebook-still-sell-discriminatory-ads.

Nadler, A., Crain, M., & Donovan, J. (2018). ‘Weaponizing the digital influence machine: The political perils of Online Ad Tech’, Data & Society. https://apo.org.au/sites/default/files/resource-files/2018-10/apo-nid197676.pdf.

Newitz, A. (2016a). ‘Facebook’s ad platform now guesses at your race based on your behavior’, Ars Technica, viewed 20 September 2020, https://arstechnica.com/information-technology/2016/03/facebooks-ad-platform-now-guesses-at-your-race-based-on-your-behavior/.

Newitz, A. (2016b). ‘Facebook explains that it is totally not doing racial profiling’, Ars Technica, viewed 21 September 2020, https://arstechnica.com/information-technology/2016/03/facebook-explains-that-it-is-totally-not-doing-racial-profiling/.

Noble, S.U. (2018) Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press.

Phan, T. (2019) ‘Amazon Echo and the aesthetics of whiteness’. Catalyst: Feminism, Theory, Technoscience, 5, 1-38.

Poell, T., Nieborg, D. & van Dijck, J. (2019) ‘Platformisation’. Internet Policy Review, 8, 1-13.

Prey, R. (2018) ‘Nothing personal: algorithmic individuation on music streaming platforms’. Media, Culture & Society, 40, 1086-1100.

Rouvroy, A. & Berns, T. (2013) ‘Algorithmic governmentality and prospects of emancipation’. Réseaux, 177, 163-196.

Sahlins, M. (2013). What Kinship Is—And Is Not. Chicago: The University of Chicago Press.

Speicher, T. et al. (2018) ‘Potential for discrimination in online targeted advertising’. FAT 2018 – Conference on Fairness, Accountability, and Transparency, 1-15.

Stark, L. (2018) ‘Algorithmic psychometrics and the scalable subject’. Social Studies of Science, 48, 204-231.

Statt, N. (2018) ‘Facebook Signs Agreement Saying it Won’t Let Housing Advertisers Exclude Users By Race’, The Verge, viewed 20 September 2020, https://www.theverge.com/2018/7/24/17609178/facebook-racial-dicrimination-ad-targeting-washington-state-attorney-general-agreement.

Steinberg, M. (2019) The Platform Economy: How Japan Transformed the Consumer Internet. Minneapolis: University of Minnesota Press.

Tobin, A. & Merrill, J.B. (2018) ‘Besieged Facebook Says New Ad Limits Aren’t Response to Lawsuits’, ProPublica, viewed 20 September 2020, https://www.propublica.org/article/facebook-says-new-ad-limits-arent-response-to-lawsuits.

Universal Pictures UK (2015) ‘Straight Outta Compton—Official Global Trailer (Universal Pictures) HD’, viewed 8 August 2020, https://www.youtube.com/watch?v=rsbWEF1Sju0.

Universal Pictures (2015) ‘Straight Outta Compton—Red Band Trailer with Introduction from Dr. Dre and Ice Cube (HD)(Official)’, viewed 8 August 2020, https://www.youtube.com/watch?v=OrlLcb7zYmw.

Van Dijck, J., Poell, T. & De Waal, M. (2018) The Platform Society: Public Values in a Connective World. Oxford: Oxford University Press.

Wark, M. (2019) Capital is Dead. London: Verso.

Wark, S. (2019) ‘The subject of circulation: on the digital subject’s technical individuations’. Subjectivity, 12, 65-81.

Wills, J. H., Hübinette, T., & Willing, I. (eds) (2020) Adoption and Multiculturalism: Europe, the Americas, and the Pacific. Ann Arbor: University of Michigan Press.