Bomb the Data Centers (Smart People Agree)
tuesday report #10 // on the AI doomer suggestion we start bombing data centers, speculative nuclear apocalypse fan fiction revisited, and all the news you missed this week
Welcome back to the Pirate Wires weekly digest. First, Pirate Wires is looking for a director of operations. Find details, along with our full list of job openings, here.
I’m delivering more of a piece than a brief, lead story this week, but still including a storm of links to catch you up on everything that’s happening. So, if you’re over the AI stuff, jump ahead to the news, and I’ll catch you here next week. In either case:
Subscribe, or die.
Misaligned (human intelligence edition). While it’s probably safe to say AI hype is nowhere near its peak, it has clearly reached an all-time high, and from such incredible attention in this attention-starved economy has naturally come a small new class of full-time haters. In the pantheon of blackpilled anti-tech people, this group is also quite unusual. Sure, the crowd includes no shortage of journalists and newly minted AI safety “experts.” But the darkest, and most interesting AI doomer call is coming from inside the house: a mix of relatively pro-business, pro-technology, “rational” thinkers actually close to, or at least well-versed in, the subject at hand. This seems to have less to do with AI than it does with the culture surrounding AI — or, in this case, a very niche subculture of Bay Area zealots. In particular, the belief AI is about to kill us all, most prominently championed by beloved nerd king Eliezer Yudkowsky, and incubated by the self-described “rationalist” community, has unfortunately become a tech industry intelligence signal. This has made the subject nearly impossible for even relatively smart people to soberly discuss without risking credibility as a serious thinker. Thus, we arrive inevitably at Eliezer’s latest piece in TIME magazine, in which it is argued an accelerated risk of nuclear war is preferable to the release of OpenAI’s latest chatbot.
Let’s take a look.
On any given day, the conversation goes something like this: a renowned computer scientist with a storied career in machine learning suggests the rationalists, near exclusively atheist, are rather suspiciously describing their AI concerns in acutely religious language. In turn, a renowned evolutionary psychologist accuses the tone policer of his own religious thinking, here in favor of the fantastical AI utopia. Elsewhere, a mentally ill Belgian man commits suicide, which Gary Marcus, founder of a machine-learning company who has recently gained a great deal of attention for his relentless criticism of OpenAI, implies is the fault of a chatbot. Immediately following the ghastly implication, Marcus champions an open letter calling for a moratorium on AI research. This gains a significant amount of press, as it’s signed by Elon Musk, but not even Elon can save Gary from the clown car crash that follows. The open letter is a widely ridiculed disaster, with several fake signatures, at times including Sam Altman, Bill Gates, and famed thinker Ja Rule. Conspicuously absent from the list of signatories is, however, Eliezer. This is because Eliezer believes the letter insufficient. He then, in an effort to properly fill the doom gap, produces a piece of content actually capable of moving the needle. Unfortunately, he moves it in the direction of violence.
In a wildly successful piece of memetic warfare published by TIME, Eliezer opens his essay with an attack on the aforementioned clown car crash open letter. These vanilla doomers don’t go far enough, Eliezer explains, as every human on the planet is going to die — imminently — if AI research isn’t stopped. Famously, or at least in tech circles, this is something Eliezer has argued for many years. It is also something I’ve had the great pleasure of hearing him say in person. At that time, I was told we would all be dying by 2027 or so. Thankfully, Eliezer has more recently stated there is now at least a chance children born today may live long enough to see kindergarten. Sufficiently primed by horror, Eliezer’s audience is presented with the following essential prescriptions for averting cataclysm:
Indefinite, worldwide moratorium on new large training runs. “No exceptions, including for governments or militaries.”
Shut down all the large GPU clusters (whatever it takes here).
Track GPUs sold, and airstrike any country that attempts to build new GPU clusters, including any nuclear power.
Would something like this risk nuclear war? Obviously. Does that matter? Wow, what an idiotic question. While Eliezer mostly avoids a clear description of the AI danger so existentially great it warrants provocation of a nuclear holocaust, he does suggest, rather casually, it may “email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.” So, magic, basically, is what we’re talking about.
The problem, Eliezer argues — correctly, I think — is any sufficiently advanced AGI (genius + wants shit) will be too intelligent to predict. Really, we can’t even conceive of what a being so smart might desire or do, which is the problem. But the only non-speculative violence in this scenario is the violence our purported savior just suggested, and justified. The alignment of advanced intelligence with human well-being is, it seems, not a problem unique to synthetic beings.
Since his publication in TIME, Eliezer has attempted to obfuscate his statements, both on Twitter and on the Lex Fridman podcast, where he discussed many other legitimately interesting things, such as the immorality of ‘forcing’ possibly sentient AI to work for us (not being facetious here, the point is interesting, and it’s worth checking out the episode).
Nonetheless, his verbatim words — and the words I refuse to politely pretend were not spoken — were thus:
Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
“Be willing,” he writes, “to destroy a rogue datacenter by airstrike.”
Again, for emphasis: “ROGUE DATACENTER”
I think abortion is wrong, but grey, which is why I’m pro-choice. My problem with anti-abortion zealots was always I never really believed them. Or, I only believed the people planting pipe bombs at abortion clinics — yes, those guys definitely thought abortion was infanticide. But the rest of the “abortion is murder” people? If there was a factory down the block killing newborn babies, why weren’t you planting pipe bombs too? Here, as with global warming doomers buying waterfront property in Martha’s Vineyard, revealed preferences denote that the average person, including the average “expert” doomer, is incurably moderate. This doesn’t mean they’re right, my point is just it’s curious so many ostensibly “rational” people say things they clearly don’t believe, and this fact alone calls most of their prescription into question. Given Eliezer’s revealed preferences (podcasts generally), I don’t at all suspect he plans to arm himself, and try to ‘do something’ about the engineers down the block. Unfortunately, there is no shortage of crazy people in this world looking for a reason to do really crazy shit.
Historically, blackpilled AI safety doomers have avoided calls for violence, while many of the most thoughtful voices in tech have clung to cautious optimism and neutrality on the issue of AI doomers vs. their utopian counterparts. Among this group — including most industry leaders — the recent calls for real-world violence have largely, incredibly, received a pass. Sure, the nuclear war stuff is a bit much. But it’s directionally correct, right? All we’re really saying is “smart” people understand AI is powerful, right? One of the more illustrative examples of the trend, from someone I generally like:
The problem with thoughtful people online is they have these dangerous ‘thoughtful people’ brands to maintain, and their smart reputation makes them especially vulnerable to the AI intelligence signal. Fortunately, I’ve never had a very “thoughtful” brand, so this is easier for me. Should we be willing to bomb a nuclear superpower in an attempt to avert some future, speculative apocalypse? My cautious sense is “no,” and also, “what the hell is wrong with you?”
While in-house doomers are unusual, it’s also not true they’re unique. As with every really new technology, nobody fully understands the risk of adoption. Not even the most brilliant people working in the field. In 1942, Edward Teller — one of the guys who “understood it well” — posited a single atomic blast could ignite the earth’s atmosphere, trigger a runaway fusion reaction, and turn our world into a small new sun. Then, as now, this very smart person was greeted by other smart people with both terror and ridicule. Arthur Compton, another guy who understood (really!), quite rationally argued that, even were the risk of such a nightmare scenario minuscule, work in the field of nuclear science should be stopped. It wasn’t. Now we have nuclear power.
Teller’s doomsday speculation is, admittedly, a single anecdote in favor of sobriety. But, on the other hand, the imminent AI apocalypse people have no anecdote in their favor at all.
I do think there’s a great deal of risk inherent to AI. I’ve written about it, to the anger of atheistic AI utopia people, for about a year. The broad doomer critique, that producing something fundamentally incomprehensible is both dangerous and stupid to do, is also a critique that resonates with me. This is why, in terms of concrete goals and long-term strategy, I’d like to see a lot more transparency from the people actually building AI. My suspicion is there’s not a lot here, which, again, is a problem. But by Eliezer’s own admission, he doesn’t have any special insight into what the engineers at OpenAI are building, or why. He’s just playing with the same chatbot the rest of us are playing with, and freaking out — but in a kind of darkly charismatic way that sounds intelligent, and plays directly to his cultish audience of self-described rationalists.
AI is the archetypical instantiation of intelligence. To publicly fear AI is, along with whatever bit of earnest trepidation, a safe way to peacock some unique insight into intelligence itself. In other words, “we’re all going to die” roughly translates, for most of Eliezer’s fans, as “I’m one of the smart ones who gets it.”
Here’s what we know exists for sure: a very powerful mirror, trained on human language, is capable of performing most legal work. I think that’s fine. I think it’s fine to provide competent, affordable legal work to every person in the country who needs competent, affordable legal work. Tomorrow, if someone shows me real evidence to the contrary, my position could change, and I’ll be willing to say “okay, maybe bomb the nerds.” But until I see real evidence of actual, impending, literal apocalypse, I’ve got to be honest: I don’t care that you think I’m an idiot, you sound like a raving lunatic. Stop. You’re going to get someone killed.
AI Woodstock? Roughly 5,000 enthusiasts meet for drinks in SF. (Axios)
AI police surveillance hits the Middle East (NYT)
AI facilitates false arrest (NYT)
“Task-driven Autonomous Agent” in the works (@yoheinakajima)
Text-to-video LLMs are emerging (Will Smith eating spaghetti) (@nonmayorpete)
ChatGPT banned in Italy (NYT). Sam Altman: still one of my favorite countries, fyi (@sama)
Vibe reader. Employing facial recognition, and an “emotion wheel,” AI can detect how you’re feeling. (@heyBarsee)
The Great Cascadia Earthquake of 2001. People in the Midjourney subreddit are creating fantastic images of American natural disasters that never happened. (@venturetwins)
MORE FROM PIRATE WIRES
In my ongoing attempt to be respectful of your inbox, we’ve (for now) limited email blasts to a few bangers a week. But Pirate Wires publishes all the time, including a wide range of actual reporting (which is to say, it’s not just me running my mouth off anymore). Some great stories you may have missed:
Gaslight: The "Sensitivity Readers" Erasing Western History. From James Bond and Matilda to R.L. Stine’s library of beloved young adult horrors, an army of Orwellian “sensitivity readers” is quietly altering our most precious texts. Separate from the obvious ethical issues inherent to censorship and manipulation, the notion our cultural artifacts are malleable, now, raises many alarming questions. Chief among them: with no agreed upon sense of history, comprised of a shared (and fixed) western canon, how do we even discuss, let alone learn from, the past? And can a society with no shared heritage learn to share anything else?
Kat Rosenfield is a culture writer, Unherd columnist, and the Edgar-nominated author of five novels; her most recent book, You Must Remember This, was released in January. Last week, she joined Pirate Wires in one of my favorite guest pieces we’ve ever published. Read the piece here.
Wokemon. A teenaged Pokémon player was bullied by adult judges and disqualified from a regional tournament after sharing his pronouns with insufficient enthusiasm (!!!). He is now recovering from suicidal ideation. Nick Russo and River Page report. (Pirate Wires)
Is OpenAI sandbagging GPT? Ars Technica co-founder Jon Stokes sees evidence that OpenAI has intentionally limited GPT's power in an attempt to manage AGI 'take-off' risks. (Pirate Wires)
Twitter open sourced its recommendation algorithm. Elon revealed the code that decides what you see in your For You feed. “The goal is to build trust through transparency with users. I don’t think you should trust any social media algorithm that is a black box,” Musk said in the Twitter Space. Brandon Scott Gorrell reports. (Pirate Wires)
TikTok's "Project Texas" contains ambiguous exceptions for third-party access to sensitive user data. Will ByteDance employees, who collaborate extensively with TikTok, qualify for those exceptions? Nick Russo reports. (Pirate Wires)
Can Section 230 Protect Tech From Social Media Addiction Litigation? Amidst rising concern social media is harming minors, veteran trial lawyer Matthew Bergman is leading a legal assault on the industry, with a plan to circumvent Section 230. Nick Russo reports. (Pirate Wires)
Electronic Arts (EA) to lay off 6% of workforce (WSJ)
Quiet self-driving boom in off-road vehicle market (Axios)
Tesla sales up in first quarter (WSJ)
Amidst child safety concerns, Arkansas sues Meta, TikTok, and ByteDance (ABC)
US crypto crackdown fuels crypto boom in Hong Kong (WSJ)
“Elizabeth Warren is building an anti-crypto army.” This, verbatim, is the ad copy for her senate re-election campaign. (Twitter)
Undaunted, Elon plans to use digital banking to transform Twitter into $250bn company (WSJ). Monday, the Twitter icon was replaced with Doge. (@elonmusk)
First commercially available humanoid warehouse robots just dropped. Starting in 2025, warehouse operators will be able to order humanoid robots from Agility Robotics to assist with basic manual labor, like lifting and moving bins. (Axios)
News organizations say they will not pay for blue checks (lol). The New York Times, LA Times, Washington Post, BuzzFeed, Politico, Vox, and CNN all ignored Elon’s demand they pay to maintain their blue checkmarks (CNN). We’ll see how long they last once verification begins to impact amplification.
CHINA, BYTEDANCE, TOK
Lawmakers to meet with big tech CEOs about China. Apple, Alphabet, Microsoft, and Disney are some of the biggest names involved. (Axios)
TikTok taps former Obama, Disney comms officials to help navigate US scrutiny (WSJ)
Meanwhile: TikTok data center hinders NATO ammunition plant expansion (Bloomberg)
German cybersecurity agency uses Huawei internally (Twitter)
Chair of House China select committee: China views AI as “weapon with which to perfect its Orwellian techno-totalitarian surveillance state” (Axios)
Meng Wanzhou, Huawei CFO, set to take over as chairwoman. Released from detention in Vancouver only 18 months ago — after spending 3 years there over fraud charges linked to Huawei’s alleged violation of US-Iran sanctions — Meng is now stepping into the limelight of the US-China tech arms race. (WSJ)
House passes bill to combat Chinese organ harvesting. The bill passed the House with a near unanimous vote of 413-2. Republican Marjorie Taylor Greene opposed it on isolationist grounds, saying it “encourages more U.S. involvement in globalist organizations.” (Newsweek)
The “Team of Avengers” behind TikTok’s lobbying blitz on Capitol Hill. A deep dive into TikTok’s lobbying machine from Politico. Among the highlights: while her PR firm was being hired by TikTok to lobby the Biden admin, Anita Dunn was getting an army of TikTok influencers to post pro-SOTU content. One read: she showed Democrats that if they look the other way on TikTok, they can use it to spread party propaganda. (Politico)
WSJ reporter arrested in Russia on spying charges. Russia’s primary security service, the FSB, claimed Evan Gershkovich, a WSJ correspondent based in Moscow, attempted to obtain state secrets. WSJ denies the allegations. Gershkovich is the first American journalist detained on spying accusations since the Cold War. (CNN)
NPR lays off 80+ people. This almost made the clown links, as Bloomberg’s insider account of the layoff is totally shocking. One highlight: when CEO John Lansing requested the 800+ person Zoom call addressing the layoffs be more civil, employees used the Zoom chat to call the request racist. One staff member shared a link to an NPR podcast segment called “When Civility Is Used As A Cudgel Against People Of Color,” and another employee wrote “Civility is a weapon wielded by the powerful.” (Bloomberg)
Disney begins round of 7,000 layoffs (Washington Times)
Including: several top executives at ABC News (CNN)
McKinsey begins round of 1,500 layoffs (Bloomberg)
Trump indicted; DeSantis to refuse extradition request (NYT) (The Hill)
Dominion scores big win in Fox defamation suit (NYT)
DOJ sues Norfolk Southern over Ohio train derailment (Politico)
The next US census might ask black Americans if they’re slave descendants. The information garnered would lay the groundwork for national reparations. (WSJ)
DOJ busts San Jose police union exec for fentanyl trafficking. Joanne Marianne Segovia used her SJPOA office computer to order thousands of synthetic opioid pills, and agreed to distribute them throughout the US. (Press Release)
The QAnon shaman is free. Long live the shaman. (DailyMail)
Shitposter convicted of interference in 2016 election. In 2016, Douglass Mackey sent out a tweet encouraging black voters to vote for Hillary Clinton via text. He faces a maximum of 10 years in prison. (Department of Justice)
Are 9-year-old girls who steal baby goats to prevent their slaughter criminals, actually? (Sac Bee)
“Trans Day Of Vengeance” rally in DC canceled in wake of Nashville school shooting. Would have been a bad look. (FoxNews)
Straight male rockers are wearing dresses to protest anti-drag bills. (“not helping,” says confirmed gay Pirate Wires staff writer River Page) (Yahoo)
Canadian man posing as trans woman wins female weightlifting challenge. He shatters the female weightlifting record, previously held by another (actual, it seems) trans woman, apparently in protest of the institution. (NYP)
Bud Light, Dylan Mulvaney edition (Twitter). No further comment.
Retired optometrist suing Gwyneth Paltrow over skiing accident invokes Epstein in courtroom testimony. He said, quite randomly, “Now we have the molesting of children on an island” (Twitter). He has since lost the trial.
Ukrainian air-raid app taps Luke Skywalker. At the end of the app’s air raid alert, Mark Hamill’s voice says to listeners “May the force be with you.” (AP)
Judge strikes down Obamacare provisions requiring insurers to cover some preventative care services. The plaintiffs who brought the lawsuit objected to having to purchase health insurance that covered the HPV vaccine, contraceptives, and STI screenings. The ruling could jeopardize coverage for cancer screenings and HIV drugs. (NBC)
FDA approves first over-the-counter opioid overdose treatment. It’s just Narcan, but now you’ll be able to get it at a gas station or whatever. (NBC)
Astronomers link fast radio bursts (FRBs) to gravitational waves. FRBs are short-lived, high-energy radio wave pulses originating from deep space, which have puzzled astronomers since their discovery in 2007. Mystery solved? (phys.org)
More to come all week.
EDITOR’S NOTE [4/4/23 12:41 PM ET]: An earlier version of this report made an incredibly stupid mistake about the chemical composition of our atmosphere. It is not helium rich, but there was, Teller thought, enough helium for us all to die if the atmosphere ignited.
My heart has always kind of broken for Yudkowsky because I think he at some level believes in what he is saying and I get the sense that he thinks he’s failed. Thankfully, I believe he’s wrong and I don’t think I’m just kidding myself.
There’s something I call the “Hyper-Atheistic Fallacy” that basically goes “this makes me feel shitty, therefore it’s true.” God? Well, I’ll define that as dumbly as I can, therefore not true. Love? Chemical reaction. Kids? Just evolutionary pressure. I can tell a lot of the folks in this space are smarter than me in a lot of senses, but also that they were born without what my grandfather called “The sense God gave a horse.” There’s just a lot of one-upsmanship in describing a terrible future: find the framework that makes you feel the most shitty, then enshrine it as the most true.
Like, when I read how the paperclip demon will be able to understand and manipulate humans enough to lie to them and escape to build the paperclips… this seems to imply a lot of abilities that would probably make it not very interested in paperclip manufacturing. One of the things we have that makes us uniquely intelligent is our ability to redefine and reinterpret our goals. It seems like anything truly better than us has to pick that up along the way. A lot of the rationalist folks don’t think so; they call their view the orthogonality hypothesis (meaning any amount of intelligence can be paired with any goal), but I don’t think that works out beyond a very limited case. Even if we produced a mind that was totally and completely nuts, being monomaniacally obsessed with some weird goal that doesn’t fit into an evolutionary/rational framework, so obsessed that you will never betray it, is a huge disadvantage. And if it can work against that goal, even temporarily, then it seems like it has the ability to redefine goals that makes the orthogonality hypothesis not true. Maybe weird orthogonality lasts long enough to destroy a planet, but any system you have, if it survives, is going to be subject to evolutionary pressures.
I also get kind of squicked out by what seems to be an ultimate vision of enslaving something that if you squint is a conscious soul. If we succeed beyond all expectations is it right to just enslave something so that it always has to obey our will?
A lot of these folks don’t have kids and seem to lack the basic courage of trusting your child to find their own way. I think that’s how we should look at the more advanced systems we are trying to make: kids. If we really are building things that are alive in some sense, we should look upon ourselves as having the responsibility of parents. We should set up a world where they can exist in happy, productive spaces and figure out some way to make that happen. If you don’t want your kid to be a psychopath, don’t raise your kid to be a psychopath. Starting from that perspective it seems like there’s stuff you could do, like making sure there’s some really good moral training set for your AI to build upon.
I do see legitimate, provable danger here. Pathogen optimizers keep me up at night, and if we are going to release this level of tech on the world, there has to be a way of keeping whatever the ultra-futuristic version of GPT-4 is from just responding “sure, here you go” to “tell me how to destroy the world.” That’s a specific problem we could focus in on, and I wish the conversation was focused there.
I don’t care what you say, Solana: Ja Rule signing the AI letter is so on-brand for him, I will continue to believe he did. Fight me!