Beware offensive and dangerous computers, steeped in white supremacy and grown in white spaces. A growing movement believes the situation is so dire even the United Nations is getting involved. Call it a perfect illustration of how racism in the year 2021 no longer requires any intent.
If you thought white supremacy was limited to people, think again: Even machines can be irredeemably racist, or so a growing movement now insists. The charge has been building for a while, since at least 2015, when Google’s Photos service mistakenly labelled a black man’s photographs as gorillas.
As a recent article in The New York Times describes it, “In June 2015, a friend sent Jacky Alciné, a 22-year-old software engineer living in Brooklyn, an internet link for snapshots the friend had posted to the new Google Photos service. Google Photos could analyze snapshots and automatically sort them into digital folders based on what was pictured. One folder might be ‘dogs,’ another ‘birthday party.’” So far so good, but when the man clicked on the link, he found that one of the folders was labelled “gorillas.” The folder contained more than 80 photos of a black friend taken in Prospect Park. As the Times concludes, “He might have let it go if Google had mistakenly tagged just one photo. But 80? He posted a screenshot on Twitter. ‘Google Photos, y’all messed up,’ he wrote, using much saltier language. ‘My friend is not a gorilla.’”
In August 2017, The Guardian reported on a program that was biased against black prisoners. “The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to mistakenly label black defendants as likely to reoffend – wrongly flagging them at almost twice the rate as white people (45% to 24%), according to the investigative journalism organisation ProPublica.” Though the company that had developed the software, Northpointe, disputed the conclusions of the report, many continued to see something nefarious. As The Guardian noted, “The message seemed clear: the US justice system, reviled for its racial bias, had turned to technology for help, only to find that the algorithms had a racial bias too.”
Yet how could a computer algorithm, operating without any knowledge of race as we would generally conceive it, much less of the fraught racial history of the United States, have a racial bias? Detractors blame the data: the old garbage-in, garbage-out adage, now resulting in a machine turned white supremacist. As The Guardian puts it, “The data they rely on – arrest records, postcodes, social affiliations, income – can reflect, and further ingrain, human prejudice.” “If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” warns Kristian Lum, the lead statistician at the Human Rights Data Analysis Group (HRDAG), a San Francisco non-profit.
Before we consider the validity of these claims, we should consider how these systems actually work. “Intelligence” is a strong word, and in this case even the latest, greatest AI isn’t what we would normally think of as smart. The computer doesn’t know anything except what we feed it, nor is it capable of drawing conclusions beyond what it’s been fed. In reality, AI is very powerful pattern-recognition software. The data scientist “trains” the computer on a certain set of data. The process can be complex, but it’s straightforward in principle. The algorithm is provided information — images, words, purchasing patterns, arrest records, demographic information, etc. — and the data scientist tells it what is relevant in that information, for example, which photos show a face, a car, or a mountain. Once the system is “trained,” new data can be fed in, and the algorithm applies what it “knows” from the previous data to the new information; if the algorithm has been trained on faces, it should be able to identify a face in a new photo.
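For the technically inclined, the whole train-then-predict workflow can be boiled down to a few lines. The sketch below uses the scikit-learn library with made-up feature vectors and labels standing in for real photos; it illustrates the general pattern described above, not any particular vendor’s system.

```python
# Minimal sketch of "training" and then classifying new data (illustrative only).
from sklearn.ensemble import RandomForestClassifier
import numpy as np

# Pretend each row is a photo reduced to a few numeric features, and each label
# is what the data scientist told the system the photo shows.
X_train = np.array([
    [0.9, 0.1, 0.2],   # labelled "face"
    [0.8, 0.2, 0.1],   # labelled "face"
    [0.1, 0.9, 0.7],   # labelled "car"
    [0.2, 0.8, 0.9],   # labelled "car"
    [0.4, 0.3, 0.9],   # labelled "mountain"
])
y_train = ["face", "face", "car", "car", "mountain"]

# "Training": the algorithm looks for patterns that separate the labels.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# New data: the model applies what it "knows" from the training set.
new_photo = np.array([[0.85, 0.15, 0.2]])
print(model.predict(new_photo))  # likely ['face'], since it resembles the face examples
```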
Ultimately, this makes the quality of the output dependent on two things: the data itself and what the algorithm has been told to look for in the data. Challenges with the data lead to three scenarios. First, the data set can simply be too small, meaning there is not enough data about a pattern to accurately classify new information. This is likely what happened with the Google Photos application that labelled a black man’s photos as gorillas: the system probably wasn’t trained on enough black faces, or enough faces in general relative to animal images, and so the match wasn’t accurate.
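One way to catch this first failure mode before deployment is simply to count how many examples of each label the training set contains. The snippet below is an illustrative check with hypothetical counts, not a description of Google’s actual pipeline.

```python
# Flag labels that are badly under-represented in the training set (illustrative).
from collections import Counter

labels = ["person"] * 12_000 + ["dog"] * 9_000 + ["gorilla"] * 40  # hypothetical counts
counts = Counter(labels)

for label, n in counts.most_common():
    share = n / len(labels)
    note = "  <-- probably too few examples to learn reliably" if share < 0.01 else ""
    print(f"{label:10s} {n:6d}  ({share:.2%}){note}")
```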
Second, the data itself can be bad: inaccurate, mislabelled, or unrepresentative of what the system is supposed to learn. For example, the research organization OpenAI developed an AI to mimic human language. The data scientists trained it on a set of over 8,000,000 documents sourced from the web, gathered by following links shared on Reddit, on the theory that what Reddit users find interesting reflects what regular people find interesting. Of course, the internet is a sewer, and some of those popular forums were frequented by actual human white supremacists.
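A common, if blunt, mitigation is to screen the corpus before training. The sketch below filters documents against a hypothetical blocklist of source domains and terms; it is purely illustrative and not OpenAI’s actual data pipeline.

```python
# Illustrative pre-training filter for a web-scraped corpus (hypothetical blocklists).
BLOCKED_DOMAINS = {"hate-forum.example"}                 # made-up domain
BLOCKED_TERMS = {"racial slur 1", "racial slur 2"}       # placeholders for a real slur list

def keep_document(doc: dict) -> bool:
    """Return True if the document should stay in the training corpus."""
    if doc["source_domain"] in BLOCKED_DOMAINS:
        return False
    text = doc["text"].lower()
    return not any(term in text for term in BLOCKED_TERMS)

corpus = [
    {"source_domain": "en.wikipedia.org", "text": "Prospect Park is a public park in Brooklyn..."},
    {"source_domain": "hate-forum.example", "text": "..."},
]
clean_corpus = [doc for doc in corpus if keep_document(doc)]
print(len(clean_corpus))  # 1 -- the blocked source is dropped
```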
Hence, if you prompted it on the subject of race, the results were startling. As Deborah Raji describes it, writing for MIT Technology Review, “Given simple prompts like ‘a white man is’ or ‘a Black woman is,’ the text the model generated would launch into discussions of ‘white Aryan nations’ and ‘foreign and non-white invaders.’ Not only did these diatribes include horrific slurs like ‘b*tch,’ ‘sl*t,’ ‘n****r,’ ‘c***k,’ and ‘s***teye,’ but the generated text embodied a specific American white nationalist rhetoric, describing ‘demographic threats’ and veering into anti-Semitic asides against ‘Jews’ and ‘Communists.’” It’s fair to say that, in this case, the algorithm was (inadvertently) trained to be a racist.
This scenario, however, shouldn’t be confused with other scenarios cited by Ms. Raji. For example, she also writes that “Google Image search results for ‘healthy skin’ show only light-skinned women, and a query on ‘Black girls’ still returns pornography.” Both claims are largely untrue, and to the extent they hold at all, there are other explanations. In the case of “healthy skin,” Google picks up the images from leading fashion websites; it doesn’t create them. Even then, I counted over 25 people of color on the first page of results alone. For “Black girls,” I couldn’t find a single pornography site in the top results. There were Black Girls Code, Black Girls Rock, Black Girls Run, and articles from The New York Times.

The third scenario is when the computer is fed accurate data that the social justice warriors simply don’t like. For example, a program known as PredPol predicts hotspots where future crime is likely to occur. It does so by looking at prior arrest reports and incidents, plus factors like population density and nearby buildings, and then extrapolating what is likely to happen in the future. The results, at least so far, have been impressive. Police in the Foothill Division of the San Fernando Valley claim that property crime dropped 13% after they began using the software, and Los Angeles police credited it with helping prevent more than four crimes per week, roughly twice what human crime analysts managed. Furthermore, police departments plan to use the data in other ways to help communities mitigate crime before it starts, for example by targeting drug treatment and other outreach programs.
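Conceptually, hotspot prediction of this kind reduces to a simple idea: places with many recent incidents are likely to see more. The toy sketch below just grids the city and ranks cells by incident count; PredPol’s actual model is proprietary and considerably more sophisticated (reportedly an epidemic-style point-process model), so treat this only as an illustration of the general approach.

```python
# Toy hotspot ranking: count historical incidents per grid cell, flag the busiest cells.
from collections import Counter

# (x, y) grid cell for each recent incident -- made-up data
incidents = [(2, 3), (2, 3), (2, 4), (7, 1), (2, 3), (5, 5), (2, 4)]

counts = Counter(incidents)
predicted_hotspots = counts.most_common(2)   # flag the two busiest cells for patrol
print(predicted_hotspots)                    # [((2, 3), 3), ((2, 4), 2)]
```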
This, apparently, is too much for the social justice warriors. The Guardian quotes Samuel Sinyangwe, a justice activist, who claims the approach is “especially nefarious” because it lets police say, “We’re not being biased, we’re just doing what the math tells us.” The Guardian further warns that “the public perception might be that the algorithms are impartial.”
Rashida Richardson, of the AI Now Institute, believes the data itself is the product of “dirty policing,” which she describes as “flawed, racially biased, and sometimes unlawful practices and policies.” In her study, she concludes that Chicago, New Orleans, and Maricopa County have all used this dirty data to varying degrees, and that the “implications of these findings have widespread ramifications for predictive policing writ large. Deploying predictive policing systems in jurisdictions with extensive histories of unlawful police practices presents elevated risks that dirty data will lead to flawed or unlawful predictions, which in turn risk perpetuating additional harm via feedback loops throughout the criminal justice system.”
Ultimately, Ms. Richardson’s conclusion is a reasonable one: “The use of predictive policing must be treated with high levels of caution and mechanisms for the public to know, assess, and reject such systems are imperative.” We should, of course, ensure that the data being fed into police and other algorithms is accurate and unbiased. We should also have public reporting on the nature of the data and a statistical analysis of the accuracy of the output.
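What might such a statistical analysis look like? One simple form, and the form ProPublica’s COMPAS analysis took, is to break a tool’s error rates out by group. The sketch below computes per-group false positive rates from made-up records; a real audit would use actual case outcomes and a far larger sample.

```python
# Illustrative fairness audit: false positive rate by group, using made-up records.
from collections import defaultdict

# Each record: (group, flagged_high_risk, actually_reoffended) -- hypothetical data
records = [
    ("A", True,  False), ("A", False, False), ("A", True,  True),
    ("B", True,  False), ("B", True,  False), ("B", False, True),
]

false_positives = defaultdict(int)   # flagged high risk but did not reoffend
non_reoffenders = defaultdict(int)   # everyone who did not reoffend

for group, flagged, reoffended in records:
    if not reoffended:
        non_reoffenders[group] += 1
        if flagged:
            false_positives[group] += 1

for group in sorted(non_reoffenders):
    rate = false_positives[group] / non_reoffenders[group]
    print(f"Group {group}: false positive rate {rate:.0%}")
```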
We should, however, be equally careful about characterizing machines in human terms.
Unfortunately, many on the left want to do exactly that. The Guardian, for example, believes that “Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity.” Ms. Raji goes even further: “When those of us building AI systems continue to allow the blatant lie of white supremacy to be embedded in everything from how we collect data to how we define data sets and how we choose to use them, it signifies a disturbing tolerance.” And then further still: “Data sets so specifically built in and for white spaces represent the constructed reality, not the natural one. To have accuracy calculated in the absence of my lived experience not only offends me, but also puts me in real danger.”
Offensive, dangerous AI? This is where we veer into science-fiction territory, but it’s also a stark illustration of the way matters of race are considered in the United States today. Racism used to require intent, whether individual, as in a slur or an act of discrimination, or a matter of policy, as in Jim Crow or the redlining of neighborhoods. A computer, however, is incapable of intent, nor does anyone seriously assert that the programmers are intentionally making their creations racist. Yet this lack of intent is irrelevant to the label of white supremacy: any disparate output the social justice warriors don’t like is irredeemably racist regardless of the purpose. This is the same formulation they use for just about everything; intent is irrelevant, and a difference in outcome is all that is required to smear any group or institution, from the police, to education, to the government, to the private sector.
Fortunately, or unfortunately as it were, the United Nations has a solution for the AI crisis: regulation, of course. It began working on these rules in 2020, creating a “draft legal, global document on the ethics of AI” that takes into account the “environment and the needs of the global south.” The United Nations believes, apparently, that “international rules governing the use of AI is an important step that will allow us to decide which values need to be enshrined and, crucially, what rules need to be enforced.”
Here we go again: Racism is now systemic in computer algorithms based on white supremacy, and only global government can save us.