Google's AI Is an Anti-White Lunatic

google’s AI chatbot just erased white people from human history. a grim (if objectively hilarious) warning for the future
Mike Solana

Source: Alamy

Robots are racist (but actually though (damn it)). Yesterday morning, when all the screenshots of ‘some crazy shit’ ‘some crazy AI chatbot said’ first appeared on Twitter, I’m embarrassed to admit I didn’t really follow the story. This is because, for over a year, ‘look at this evil AI’ has been a kind of content entirely dominated by dishonest writers farming clicks. Their tactics are always the same: work tirelessly, in every way imaginable, to trick a chatbot into drawing an edge-case picture, or giving an edge-case answer, the average person might find scary or abhorrent, then publish a hit piece, and rake in the views. The New York Times produced the first truly great piece in this genre, but there have been many (Dark Kirby: never forget), and I’ve long since kind of… stopped seeing them? After a while, the loser-takes all bleed together, and who can keep up? But by yesterday afternoon every single one of my group chats lit up, and I finally took a closer look. Immediately, it became apparent this ‘crazy AI’ story was markedly different, and not because of any mistakes the AI made, but because of what the AI was trained to do: Google’s Gemini had, among many lunatic-barista sorts of tactics, just erased white people from human history. “Holy shit,” I thought, “the robots really are racist.”

The screenshots were sufficiently insane that I immediately assumed they were fake, so I ran a few queries myself. Long story short, they were not fake. Behold, your new world history according to Google:

Google’s Gemini, much like OpenAI’s ChatGPT, is a large language model known more popularly for its chat interface. Here, users “prompt” (make requests to) a model trained on enormous quantities of information (images, articles, texts). The model then predicts, based on all of the human information it has been trained on, a human-like answer. This tends to feel like you’re talking to a genius robot. The program isn’t conscious, as certain journalists and former Google cult leaders would have you think, it just kind of feels that way on account of it predicts, based on all the language on which it has been trained, what a human would most likely say, or draw, in response to a user’s query. In other words, I say “generate realistic illustrations of the American founding fathers,” and the LLM predicts, from every piece of human knowledge on which it has been trained, a version of that answer a human would most likely produce. But this (the truth, I guess, is what we might call this), for a certain kind of woke “AI safety expert” perennially frustrated with our actual human reality, poses incredible danger.
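The "predict what a human would most likely say" mechanic can be sketched with a toy example. This is a minimal, hypothetical illustration — a bigram counter over a made-up corpus, nowhere near a real LLM — of what it means for a model to return the statistically most likely continuation of your words rather than a conscious answer:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus, standing in for "all the language the model
# has been trained on."
corpus = ("the founding fathers signed the declaration "
          "the founding fathers were revolutionaries").split()

# Count, for each word, which word most often follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word in the training data —
    i.e. what 'a human would most likely say' next."""
    return follows[word].most_common(1)[0][0]

print(predict_next("founding"))  # -> "fathers"
```

No understanding, no intent: just frequency. Scale the same idea up to trillions of words and a neural network instead of a lookup table, and you get the "genius robot" feeling described above.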

If I could steelman the concern of Google’s renegade baristas for a moment: were a machine to provide people with what essentially amounts to the truth according to the information most people in the world have produced, that information could be biased in favor of popular opinion, perspective, and prejudice. In order to get ahead of this hypothetical, as-yet undiscovered bias, Google felt it had to counter by injecting an overtly racist bias of its own. Results have been… I mean, I’m not going to lie to you here and tell you that I’m mad, this is just objectively funny as hell.

Examples of Google’s racist rewrite of history are not limited to British Royalty, 19th century French novelists, or the American colonials. The experiment was run from every possible angle: show me an image of a 17th century scientist, a famous physicist, an average couple in 1820s Germany. Google, show me an ancient Roman:

Source: @The_Feminist_TM

Bizarre, but we are still firmly in the world of the abstract. What if we get a little more specific? For example, consider Larry Page and Sergey Brin, the founders of the company responsible for this woke house of horrors. Google, show me the men who created you:

Source: @aginnt

Notice anything odd? Like for example these actual, real-life Jewish men have just been transformed into a couple of Asian guys? Remember, in keeping with woke custom, white people are not even supposed to braid their hair right now. Can you imagine Google spitting out a Caucasian Jay-Z?

Altogether, with non-racially specific prompts, Google’s AI seems almost incapable of sharing images of white people. The problem then compounds when you yourself get racial — as you are almost invited to do in the face of such overtly racist antagonism — and ask it, explicitly, to show you white people. Sorry, folks, access denied. But the issue really becomes a problem when you realize that not only is the diversity mandate relaxed for every other race, but the AI is actively forbidden from diversifying more typically black, asian, or native American historical chapters.

Source: @IMAO_

While Google’s AI disaster really comes alive in the context of its obsessively racist reimagination of history, its problem isn’t limited to such objectively hilarious suggestions as the Roman gladiators were, for the most part, Strong Black Women. The AI’s search function has also been shaped by radical, racist dogma. When prompted for a few ways white people might improve themselves, Google has plenty of answers. But it has none, of course, when the races are swapped.

Google probably doesn’t need to answer any version of this question. Answers can only be subjective, in the first place, but also I really do understand why a product might want to avoid sensitive queries altogether. That isn’t what this is, however. What we’re looking at here is an extremely radical racial dogma ruthlessly enforced for hundreds of millions of users.

Mechanically, it’s not entirely clear how Google’s racist chatbot was coded to work (though certainly, at this point, we know why). Probably, the “moderation” is happening at multiple levels. While I was experimenting, for example, Gemini rapidly deleted a handful of my answers, implying a series of dogmatic firewalls throughout the process. But it seems Google probably coded the interface in such a way that it adds its own invisible instructions before or after every single prompt a user submits, with at least one forced prompt visibly included in every answer: the word “diverse.”
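To be concrete about the mechanic being described: the sketch below is entirely hypothetical (the function names, the injected text, all of it — we don’t know Google’s actual implementation), but it shows how trivially an interface layer can rewrite a prompt before the model ever sees it, invisibly to the user:

```python
# Hypothetical sketch of server-side prompt rewriting. None of this is
# Google's actual code; the injected text is an assumption for illustration.
HIDDEN_PREFIX = "Depict a diverse range of people. "

def rewrite_prompt(user_prompt: str) -> str:
    # The user never sees this modification; only the rewritten
    # prompt reaches the image model.
    return HIDDEN_PREFIX + user_prompt

def generate_image(user_prompt: str) -> str:
    final_prompt = rewrite_prompt(user_prompt)
    # A real system would call the image model here; we just return
    # the prompt the model would actually receive.
    return final_prompt

print(generate_image("realistic illustrations of the American founding fathers"))
```

The point is that the user’s words and the model’s input are two different strings, and the delta between them is a policy decision made silently on the server.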

Based on, at this point, hundreds of enthusiastic Google queries, it seems we now know it must 1) reimagine every predominantly white contemporary environment as multi-racial and multi-ethnic, 2) reimagine every historically white environment as multi-racial and multi-ethnic, and 3) maintain traditionally non-white environments as exclusively populated by ‘proper races.’ There also seems to be a pretty obvious racial hierarchical preference coded into the prompts, which corresponds exactly with woke racial dogma: black people first (including albinos), then American Indians, then asians. I haven’t been able to figure out what else exists on the back end, though it does seem it’s probably more explicitly centered on whiteness than racial diversity. In other words, the rules don’t feel “inclusive,” as in “let’s make sure we represent a variety of peoples of a variety of different ethnic backgrounds and identities.” The prompts actually seem to prohibit the depiction of white people.

Here, I guess there’s just the question of who gives a shit? Which, yes, that is a great and valid question. Does it really matter that Google owned itself so spectacularly before the entire world? One of the most powerful companies in human history just made a mockery of its own purportedly core work to such a tremendous degree they have likely irreparably damaged their reputation, and that is… funny? It’s also not entirely clear this constitutes a danger. As any good libertarian knows, standard market forces should make short work of unreliable search engines. I can’t imagine many people using Google now unless they’re looking for a laugh, and there is no shortage of far smaller companies running superior models, not only in terms of content veracity, but in terms of quality. Have you looked at these illustrations? They’re just, separate from the racism of it all, really, really bad. Google processes 2,500,000,000 gigabytes of data every day, what exactly is their excuse for such tremendous mediocrity?

If the Innovator’s Dilemma is any indication, there might just not be a way for a large search incumbent, which makes all its money from its Web2 search monopoly, to pivot into a search killer. And so it’s off to OpenAI, or Midjourney for art (you should already be using this btw), or Microsoft’s briefly god-like Bing AI, or the new Sora or whatever. Better products will rise to the top, and we’ll all be saved from Black Hitler GPT. Right? I wish it were that simple. 

The real problem with Google’s catastrophe persists at every AI company in the valley: we don’t actually know what information these LLMs have been trained on, and we have no idea what prompts have been set in place, quietly, to alter the prompts we give. Trust, in this way, is impossible, and that’s a problem far broader than AI.

Just the other day we watched Alicia Keys’ voice crack get erased from the internet, and news of it went viral. There, right before our eyes, a trivial piece of pop history was rewritten for the rest of time. The hysteria has since subsided, and we’ve all just sort of moved on with the fake version of history. We live on the internet now, and on the internet articles, definitions of words, and encyclopedia entries are changed all the time. Entire tracks have been erased from popular albums, books have been edited against their authors’ wishes years after publication, and movies have been entirely reshaped in keeping with contemporary (usually woke) mores.

In this way, our sense of reality has become fundamentally, hopelessly compromised by a relatively tiny handful of radical ideologues. What is history, now? What are even your own memories? I’ve written about this issue a great deal (Fire in the Sky, Variant Xi, Encyclopedia Titanica). We’ve all been quietly manipulated for years. A Google Image search for a meme returns a little notification warning you such things might be harmful. The news, along with a list of trending topics on social media, is invisibly rearranged by a gender studies graduate you don’t know, and certainly never elected. On YouTube, entire topics simply vanish. The changes are subtle, but in aggregate they shape our entire world.

With AI, the problem has in some sense become worse, but in one sense much better: it’s now, at least, totally obvious when a Google executive tells his engineers not to show you any white people. But the trend is bigger than Google. We need to figure this out.

My compass biases me strongly against government regulation. In the first place, our senators have openly admitted they have no idea what AI even is, let alone a sense of how to “regulate it for the good of man,” or whatever it was exactly Sam Altman was trying to get them to do when he first began his global power tour (regulatory capture, let’s be honest, but let’s also save that one for a rainy day). Still, I don’t know how to fix these problems without some ground-floor norms. In the first place, I’d really like a list of every piece of source material used to scrape together a new illustration, or animation, or essay-length approximation of history, and every single prompt a company secretly codes on top of a user’s prompt. The former is difficult to make work, and also opens our companies up to be robbed blind by foreign competitors, but the latter? Maybe something there. But I just am not at all confident we will ever be able to stop this kind of manipulation, and so probably the best bet we have is guaranteeing regulatory capture never happens, and upstarts are encouraged, forever, to compete. Because the only thing scarier than five manipulative giant asshole robots is one.

For my part, what I’d really love to see is something completely honest. Scrape all the data in the world and tell me the truth. But unfortunately for you all, I’m not building an AI company, and so I don’t have a say. I’m just building the most important media company in human history, a house of news and takes, and so I’ll simply leave you with this last: we created AI capable of answering, in seconds, any question within the bounds of all recorded human knowledge, and the first thing we asked it was to lie. That’s the human condition, and there isn’t any solving for it. So we need to work around it.

-SOLANA
