Robots are Racist
pirate wires #94 // breaking down the work of our media’s favorite AI ethicist, our last defense against the tech bros' genocidal superintelligence (not a thing, but ok LOL we're digging in)
Making space for “AI ethicists.” Given the growing hostility of our anti-tech press, with its inevitable catalyzation of broad anti-tech popular sentiment, it’s no surprise the field of artificial intelligence has already attracted an army of critics. In part, as I covered at the top of the week, the most virulent front is led by Bay Area rationalists, chief among them Eliezer Yudkowsky, who last week suggested provocation of a nuclear war is preferable to further development in the field, and that any acceptable future must include the legal bombing of “rogue data centers.” But while the blackest blackpill of the bunch, Eliezer has written more cogently about AI risk, and for longer, than almost any other thinker, living or dead. Recent hysteria aside, he’s contributed a great deal to the field (if you can call it that) of AI safety. In keeping with the laws of Clown World, he is therefore unsurprisingly not the most influential anti-AI zealot in the crowded, nascent cottage industry of anti-AI zealotry. In terms of ability to shape public sentiment, our mainstream press is still king, and there is no “AI ethicist” the press loves more than former Googler Timnit Gebru, who believes AGI is a white supremacist fantasy.
As Timnit has quickly come to dominate most media perspective on the subject of AI safety, it’s unfortunately necessary to parse her work a little more closely. What follows is my summary of this important public figure’s recent thinking, which I intend to grant the same respect that she has granted the men and women actually working on AI.
Our story begins with Timnit’s strange reaction to the Future of Life Institute’s open letter demanding a moratorium on AI training, which is ostensibly what both Timnit and Eliezer want. But Timnit — like Eliezer, whom she endlessly attacks, and persistently racializes — was not happy. Her stated reason: the letter “fearmongered,” which is an incredible claim given her recent implication AGI is rooted in literally genocidal aspirations. On closer examination, Timnit’s real issue with the letter seems mostly a matter of her enormous ego. These “white men” (her relentless, racist framing) were getting attention, and that attention belonged to Timnit.
In one recent, typical example of her ire, after a blog post she didn’t like referenced Sam Altman:
But if Timnit’s charge is somehow none of the ‘right people’ are getting enough “fawning” press, with the category of ‘right people’ presumably including Timnit herself, it would be difficult to comprehend, as there is no single figure in AI who has received as much “fawning” press as Timnit. A brief list of her many public laurels: One of the World’s 50 Greatest Leaders (Fortune); one of the ten people who shaped science in 2021 (Nature); one of the most influential people in the world (TIME). Then there are the articles.
Not too long ago, Timnit resigned from Google, an episode erroneously framed by every major press outlet that covered her resignation as a firing. To recount her departure in more specific terms, she wrote a list of demands, gave it to Google with a resignation letter that would imminently take effect if the company didn’t give her everything she wanted, and her bosses hilariously accepted her resignation. Timnit not only continues to lie about the fact that she quit her job, but frames her “firing” as “traumatizing” — not for her, but, incredibly, for other people of color. After her resignation, Timnit was defended everywhere from the Washington Post and the Wall Street Journal to the MIT Technology Review, which does a decent job summarizing what is now her most notable work. She has since received heroic photographic spreads, with full-throated endorsements, from WIRED, and the New York Times. Her positions, until recently, have included the following: AI consumes a lot of energy (consumption of energy is bad), AI costs a lot of time and money (time and money should only be dedicated to Timnit’s preferred political projects), and AI is, by its nature, racist (because AI is a reflection of us, and we are all racist).
Recently, Timnit’s thinking has evolved. No longer satisfied with a run-of-the-mill racism charge, her focus has shifted from AI to AGI (artificial general intelligence), and her opinion today is the entire AGI aspiration is eugenicist in nature. This brings me to one of the most remarkable artifacts of our batshit crazy discourse I have ever had the pleasure of discovering:
Eugenics and the Promise of Utopia through AGI.
Like, you just know it’s going to be good.
Let’s dip in.
Timnit’s argument opens with the strange admission she has only just heard of AGI, and didn’t realize people in the field cared about AGI until recently. This is strange, I say, because AGI — the kind of ultimate potential of AI — has been a part of the AI story, especially including the negative story, from the beginning. The name may have changed, but there has always been the fear and awe of HAL 9000, and not only in the world of tech. The superintelligent machine is obviously a major theme of pop culture. In any case, Timnit is on the scene with explanations.
What is this bizarre new concept called “AGI,” she wonders. After some exhaustive research, she reports that all “AGI” really means is a “very smart” machine that “can do anything for anyone in any environment.” This, she notes, sounds very unpredictable, and she doesn’t understand why anyone would build it. A reasonable, if debatable position. It would be one of her few.
High-level, Timnit’s goal is to connect the absolutely vanilla likes of Sam Altman directly to white supremacy by way of a tenuous series of philosophical evolutions dating back over a century. Every branch of present-day utopian thinking is rooted in 19th Century eugenics, Timnit argues, and everyone working on AGI today wants to build a utopia. Therefore, all utopian AGI people are eugenicists. Critically, this is not to say utopian AGI people adhere to some mundane eugenicist belief like parents should be allowed to screen embryos for crippling genetic conditions, for example. No, present-day utopian thinking (generally defined by sane people in terms of things like abundant energy, space travel, and an end to human aging) is directly descended from white supremacist thinking and practice, including such horrors as the sterilization of black people that took place in the state of California through the mid-20th Century (jump below to footnotes for the whole wild run of her logical contortions).
TRANSLATION: People who want to build AGI are basically Nazis.
These are just the facts, folks, I don’t know why you get so mad about this stuff!
“I’m not going to talk about the history of IQ tests here, but those themselves are quite racist” — Timnit Gebru
From here, Timnit departs from bad faith and tedious logic, and begins to straight-up lie. She characterizes a group of men who have spent their entire professional careers warning us about the risk inherent in AGI as actually obsessed with building it.
There’s this guy Nick Bostrom, Timnit begins, a leading figure in the TESCREAL Bundle (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism). Bostrom used to talk a lot about something called “simulated realities.” Roughly: it is highly probable any sufficiently advanced future society will develop technology capable of realistically simulating worlds and lifeforms (the Matrix, more or less, though maybe in a fun way). Such a society would almost certainly simulate many worlds, including trillions upon trillions of lifeforms.
Now, here’s where it gets gnarly: almost all thinkers in the TESCREAL Bundle are utilitarian (I don’t think this is actually true, but at this point whatever). In other words, Team TESCREAL is principally interested in maximizing the well-being of the greatest number of people — regardless of what era they live in, and, if necessary, at the expense of today’s much smaller human population. Framed in this way, if we don’t build AGI, and by extension fail to birth trillions of hypothetical future simulated beings, we will have inadvertently caused the greatest genocide in history. Building AGI is therefore a moral imperative, and nothing else matters. This, Timnit argues, is the grand mission of every major figure in the TESCREAL Bundle, including fan favorites Nick Bostrom, Peter Thiel, Elon Musk, Sam Altman, and Eliezer Yudkowsky.
An incredible string of lies.
Yes, Altman is the CEO of OpenAI. He believes AGI will bring about something we might here, in the spirit of charity, call “utopia.” But the rest of the men “driving the AGI craze,” as Timnit puts it?
“We’re like children playing with a bomb,” Bostrom famously said of sentient machines. His entire career is concerned with a concept called “x-risk,” or the study of the existential risks facing mankind, in hopes of preventing them. Elon Musk also believes AGI is the greatest danger facing human civilization, and champions Neuralink as a way to amplify human intelligence as a foil against the Terminator. Peter hasn’t spoken much about AGI in the fifteen years I’ve known him, but his beliefs can best be characterized as (surprise!) highly nuanced. He sees potential benefit and danger in the technology, but is most recently known for the quote “Crypto is libertarian, AI is communist.” He is not a communist. Feel free to connect those dots back to sanity. Last, we have Eliezer “bomb the data centers” Yudkowsky. In tech, he is literally the most well-known proponent of stopping AI. It’s, like, famously the main thing he talks about.
This isn’t just something Timnit gets wrong. She characterizes the views of these men in diametric opposition to nearly everything they’ve ever said and written on the topic of AGI. Their actual views are pretty similar to Timnit’s, at least in those rare moments when Timnit talks about actual AI risk rather than her speculative racist robot. But, then, speaking of racism, we do have a little more proof to parse.
Bostrom, Timnit says, once wrote a racist email many years ago. This is unfortunately true, and Bostrom has apologized. Eliezer? Once implied racial minorities are dumb, says Timnit. But the link she references argues precisely the opposite position:
I would be remiss not to mention Timnit has said a small handful of sane things. In her Nazi AGI video, she cites deepfake porn as a problem (agree, and we have reported extensively on this at Pirate Wires). She says it’s important for a person to maintain control over their own search, and suggests we need context when seeking knowledge. She says you can’t cede critical thinking to some purportedly all-knowing thing, and call it a day, as that is incredibly dangerous. Lastly, she argues that if OpenAI does manage to ‘replace everything,’ such a system will not exist in service of empowering the entire population. Power, here, would probably be centralized to a degree we’ve never seen. Great, these are all valid critiques. Any one of them would have made an interesting topic for a talk.
Alas, back to the racist robot people.
After expressing a bit of seemingly random anger with Facebook marketing, in which the company allegedly claimed to solve translation for all languages — they didn’t, Timnit correctly asserts, because Facebook is not yet very good at translating several Ethiopian languages — she cites a little-known company, Lesan, which is doing great work! It’s terrible that Facebook swallows up all this undeserved attention, she argues, while companies like Lesan can’t even get funding. For some reason, Timnit fails to mention she herself is partnered with this company, and has a strong personal interest in its success. Amidst her lamentations concerning broader society’s failure to adequately fund her specifically, she also fails to mention her company DAIR is backed by the MacArthur Foundation, the Rockefeller Foundation, and the Ford Foundation.
In her conclusion, Timnit is insistent: her intention is not to argue we should make AGI safe. AGI, she says — at times a clownish marketing ploy undeserving of serious scrutiny, at times the most dangerous thing that will ever face humanity — is inherently unsafe. We need to just stop. Don’t build a God, she says, finally and perfectly managing to attack both AGI enthusiasm and fear as nothing more than corporate propaganda, while somehow also believing the technology so dangerous it must immediately be abandoned. In one talk.
Following the plainly stated objective “we need to just stop,” you may have been surprised to learn Timnit so ferociously attacked the anti-AI open letter, as well as Eliezer’s far more pointed argument in TIME. But of course Timnit isn’t satisfied with further calls to pause work on AI, because Timnit doesn’t actually care about AI. Timnit cares about attention. Last week, the annoying open letter people were getting attention, which was bad. Now, Eliezer is getting attention, which is really bad.
Timnit is unfortunately just smart enough to understand she’s not the smartest person on the “AI’s gonna kill us all” party bus, which seems to really drive her into fits of rage on Twitter. Despite her tremendous, overwhelming celebration in the press, she knows, on some level, she isn’t taken seriously by anybody else. Her most cogent concern, that AGI seems fundamentally unpredictable, amounts to a weak echo of Eliezer’s contributions, or Bostrom’s, men who have spoken and written of such danger for over a decade. Men who have spoken and written of such danger before Timnit even realized — again, by her own admission — the concept of sentient machine intelligence was even, like, a thing.
But don’t worry about Timnit. She’ll get another glamorous photoshoot. She’ll look serious, and beautiful, beside a splashy pull quote in which she casually suggests ‘we have nothing to fear but eugenicist tech bros who fear AGI. Also, AGI is really dangerous, we should stop this thing before it’s too late.’ It doesn’t matter that her position is incoherent, Timnit is a star, and in America that’s all you need: a little grit, a little grift, and a little glamour. Ladies and gentlemen, this woman is a triple threat.
Timnit’s logic runs like this: Francis Galton, the founder of modern eugenics, believed human ability was heritable, and encouraged men and women with ability he perceived as advantageous to procreate. Chiefly, Galton prized intelligence, which Timnit appears to believe immeasurable (“IQ tests… are themselves quite racist,” she casually asserts with no further argument). In general, Galton believed Africans were less intelligent than white Europeans, and while he did not himself advocate negative eugenics (preventing people he considered inferior from procreating), his thinking was used to justify countless, unthinkably heinous acts, especially throughout the first half of the 20th Century. Hitler? Loved this man’s work. But… what does this have to do with AGI?
Wow, am I glad you asked.
The word “Transhumanism” was coined in 1940 by Julian Huxley, a “second wave eugenicist.” Huxley was an outspoken critic of every heinous genocidal act that preceded him, and didn’t believe race was a meaningful biological concept. But he did believe in heritable ability, he did believe that mate selection based on ability was favorable, and he was president of the British Eugenics Society, the name of which problematically includes that highly problematic word. Huxley was also, separately, a transhumanist, and a proponent of the notion humans should improve themselves — transcend, one might say — with science and technology. Therefore transhumanism, Timnit carefully implies, is fundamentally eugenicist (the bad kind). This is the sort of logical reasoning acceptable at Stanford, I guess. But… what does it have to do with AGI?
Wow, am I glad you asked.
Transhumanism is only one of many branches of utopian thinking, and from this philosophy of radical improvement has come, at various points, the following distantly related contemporary philosophies: Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. The “TESCREAL Bundle,” Timnit giggles, as she pauses every now and then to make fun of the silly men, with whom she’s obsessed, who talk about such silly things. What do any of these philosophies have to do with race? Well, nothing. But a few of them have a lot to do with AGI. THEREFORE: The desire for such things as incredibly smart robots capable of improving the human race must be negative eugenicist in nature. TRANSLATION: People who want to build AGI are basically Nazis.