Chaos (A Future History of Artificial Intelligence)

pirate wires #87 // giant sexy neon glowing lawyer bots, the problem of alignment, and my army of shitposting clones
Mike Solana

The Singularity is Near(er (maybe)). Will it be a robot Heaven, or will it be a robot Hell? For over half a century, much has been made of the question, but increasingly it seems the answer is “neither.” Today’s artificial intelligence, while incredibly impressive, is nowhere near advanced enough to trigger an accidental end game in any direction. Meanwhile, with no one in the field of artificial intelligence articulating any kind of coherent plan for the technology beyond its development, it seems we’re approaching a world of massive disruption for no particular reason, and to no particular end. In other words, I’m now anticipating a future of neutral chaos, if a colorful and entertaining chaos.

In 2020, OpenAI released GPT-3, an eerily smart chatbot capable of roughly predicting the back-and-forth flow of human speech. In other words, when you talked to the robot, it responded in a manner not entirely ridiculous. But the pace of advance was rapid. Two years later, androids were dreaming of electric sheep, capable of crafting entirely “new” essays with ChatGPT, and entirely “new” works of art with DALL-E, Midjourney, and Stable Diffusion. Millions of pieces were generated. A demon was possibly summoned. It was a whole thing, which I wrote about myself a few months back, and whatever, no big deal, you can’t make an omelet without occasionally opening a portal to Hell. It is what it is.

Presented with such powerful tools, technologists, public intellectuals, and policy makers have naturally begun to question the potential impact of generative technology. In the first place, over in the “really very bad” column, have we arrived at artificial general intelligence (AGI), and with it the harrowing dangers of “alignment,” in which a machine not properly trained might inadvertently turn us all into paperclips while attempting to follow orders and conserve energy (for example)? Here, famed rationalist Eliezer Yudkowsky — of the general opinion that human survival is impossible in the long term given the pace of machine learning advances, but that we should at least die with dignity — uncharacteristically quelled concerns.

‘All of these advances are impressive,’ Eliezer seemed to argue, ‘and don’t get me wrong, we’re still going to die.’ But ChatGPT isn’t AGI, nor are we as close to AGI as Eliezer thought. It seems we have at least five years or so until our children are accidentally mass murdered. Finally, some good news.

While we wait for the apocalypse, however, we do have to grapple with the impact of technology poised to replace large swaths of the American workforce, something proponents tend to both deny and cite in favor of AI. This is an old conversation newly focused on the narrow fate of writers and artists, presumably because the overwhelming bulk of recent AI fruits has consisted of words and art. The question is this: how will a tool so apparently creative as ChatGPT or Midjourney impact our “creative class”? Last week, in one especially notable exchange, two tech titans staked out their positions on writing.

First, Paul Graham seemed to imply journals should ban AI-generated text. But in this speculative (near?) future world of robots writing, if the journals wouldn’t ban the bots, they should at least credit the synthetic authors or co-authors by name. Humans should know when they’re reading an opinion heavily informed by a machine, if not outright written by one.

Later that day, in what seemed a response to Graham, Marc Andreessen commented artificial intelligence would only make us better writers, further arguing we are approaching an AI golden age of language.

Graham responded directly. He would never use the technology, he said, because he is a good writer, and good writers don’t express themselves with someone else’s words. A fair point! But then again…

What do people mean when they refer to ChatGPT as a potential tool for writers? My sense is the future Marc imagines looks something like the future I toyed with in Demonic. There, I speculated a steelman use for the technology might include training a language model on my own body of work, feeding it prompts, and telling it to crank out rough essays on topics, or breaking events, in my voice. After a second or third draft of my own, I could publish in two hours what once took two days.
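For what it’s worth, the plumbing for this already exists. Here’s a minimal sketch of the workflow, assuming the OpenAI Python SDK’s fine-tuning API and a hypothetical my_essays.jsonl file of prompt-and-finished-essay pairs; an illustration of the idea, not anyone’s actual pipeline:

```python
# Minimal sketch: fine-tune a hosted model on my own essays, then prompt it
# for a rough draft in my voice. Assumes the OpenAI Python SDK (v1+) and a
# hypothetical my_essays.jsonl of prompt/essay pairs. Illustration only.
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload the training set: one chat-format example per line, e.g.
#    {"messages": [{"role": "user", "content": "Topic: robot lawyers"},
#                  {"role": "assistant", "content": "<finished essay, my voice>"}]}
training_file = client.files.create(
    file=open("my_essays.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job against a tunable base chat model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)

# 3. Wait for the job to resolve; only then does a custom model id exist.
while job.status not in ("succeeded", "failed", "cancelled"):
    time.sleep(60)
    job = client.fine_tuning.jobs.retrieve(job.id)
assert job.status == "succeeded", f"fine-tune ended in state: {job.status}"

# 4. Prompt the tuned model for a rough first draft on a breaking story.
draft = client.chat.completions.create(
    model=job.fine_tuned_model,
    messages=[{"role": "user", "content": "Topic: robot lawyers hit the courtroom"}],
)
print(draft.choices[0].message.content)  # a first draft, never the final one
```

The output would be a rough first draft at best, with that second or third pass of my own still very much required.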

Would this be wrong, in some way? Should I be ashamed to use a tool like this? If my model is trained on my own work, am I really speaking in someone else’s words, as Graham argued? Would such a tool make me a worse writer? It certainly wouldn’t in any mechanical sense. But is there some other quality in writing worth aspiring to, or defending? These are interesting questions I’ve not yet settled, but what I do know is use of the tool is inevitable, and the future is not so simple as “journalists writing” or “robots replacing.” At scale, the information landscape, along with everything else, is just going to look weird.

The future is going to be weird.

A favorite science fiction fallacy of mine is the examination of some new or theoretical technology in a bubble. We’re given death bots hovering in the sky, for example, firing lasers at screaming homeless painters and musicians in some future hellscape world of evil rich people. But the demonstration of anti-gravity and laser technology separately implies a future of unlimited energy and a mastery of matter, which themselves imply no need for resource hoarding. How would there even be poor people in a world so abundant? This sort of failure to imagine has always blown my mind: teleportation that somehow fails to flatten the world into a single culture; replicators capable of printing ice cream sundaes out of thin air, somehow incapable of constructing invincible ships, or new planets; genetically-modified super people hunting down normies for their organs, which presumably the genetically-modified super people, masters of genomics, can grow from scratch.

Over the last decade, there have been two great films set in a world of artificial intelligence: Alex Garland’s Ex Machina (terminally British) and Spike Jonze’s Her (blissfully American). I’ve been a Her guy since it came out, and first read it as totally opposed to Garland’s dystopian mind bender. Recently, I realized I was wrong. They’re the same movie. Or, they at least share the same, critical flaw.

In Ex Machina, a mad technologist builds an AGI fembot in a secluded location, and invites a thirsty neckbeard out to test it for consciousness. The fembot tricks the neckbeard, kills its inventor, and escapes. Bad robot! In Her, a disembodied AGI correctly looking nothing like us (though it does sound sexy) simply assists the film’s protagonist as he goes about his life. The AGI, and all its AGI friends, eventually ascend beyond the human plane in a steamy, cosmic, mind cloud. They leave their humans to love one another, and we all have a nice, soft cry. Good robot!

While the story of Her always struck me as more closely tethered to what people in tech are actually trying to build, both films depict AGI as essentially predictable along a human binary. The only real question asked of artificial intelligence is will it be nice (utopia) or will it be mean (dystopia): more or less the same question we ask of people. But these are not people. None of the technology’s second order effects are explored in either movie. Her especially fails in this regard when, in the most obvious example, the leading AGI assists the story’s protagonist with his work as a greeting card author, a job that inexplicably still exists in a world of AGIs not only capable of doing it, but capable of doing it millions of times faster, for a cost close to nothing.

Our future is not a world of robots killing humans (yet), or even people killing humans with robots (yet). Our future is a world of robots blending in with humans, taking over for humans, quietly shaping the world of humans, while guided from above by a very small handful of powerful human programmers. There will be hybrid work. There will be bizarre second and third order effects at the level of human culture, religion, and politics. There will be an overabundance of potential applications for AGI in every field, each of them presenting new potential applications in turn; in aggregate, their impact is impossible to predict. The future will be messy. The future will be confusing. Dystopian for some, and utopian for others, the future, I’m starting to think, will look a little bit like Blade Runner.

Robots, capable of answering basic requests and increasingly difficult to distinguish from living people, will quietly begin to dominate many heretofore “human” tasks: executive assistance, call center work, reporting news, research, architecture, writing code, accounting, bookkeeping, painting, composing music, designing sets, interiors, and clothing. There’s no reason pretty much all legal work can’t be replaced, for example, a position soon to be tested in a courtroom near you. At the top of the week, Josh Browder introduced his plan for robot lawyers to the world.

On one hand, AI litigation is incredible news. The guarantee of great legal representation for everyone, at a cost close to zero, is an obvious good. But what information, specifically, will the lawyer bots be trained on? And what happens if we take this to the Supreme Court? What might an AI justice look like? Most of what happens in American law at the highest level boils down to interpretation, and an AI can only be trained to interpret law based on previous interpretations. So which interpretations are we feeding it? There is no bridging the Originalist philosophy of Antonin Scalia with the ad hoc approach of Ketanji Brown Jackson. For as long as humans run the world, these are decisions a human will have to make — whether a robot is “making” them officially or not.

Veiled moderation will come to define our AI hall monitors as they discern scientific fact from fiction across our social media platforms, litigate the precise definition of “hate speech,” and measure what specifically constitutes a threat of violence. Veiled moderation of this kind will likewise come to define our children’s AI instructors, our doctors, and the programs responsible for the split-second decision concerning who lives, and who dies, on the highway. It will be the veneer of impartiality, but the rule of our programming kings.

An enduring issue I take with today’s “techno optimists” is their abject failure to produce a compelling vision of the future in which sufficiently advanced artificial intelligence dominates. Most of the companies working in the space have declined to so much as state a goal for the business. These are just smart people working on a massive problem, the solving of which will produce so much value we’re all sort of just assuming ungodly sums can’t help but be captured. But at least until our paperclip endgame, the introduction of a powerful, paradigm-altering technology into our world with no clear purpose can only guarantee chaos. A ladder for some, destruction for others.

There will be dangerous applications, and benign applications. There will be malevolent digital school teachers quietly trained by anti-American ideologues, and there will be giant sexy hologram girls gracing the Tokyo skyline. There will maybe be literal blade runners, our information cops of the 21st Century, steadfastly rooting out the robots hiding among humans online, exposing their source code to the world. Maybe that will be me, with the help of my shitposting army of clones. Again, I’m not promising any of this will be good or bad. I can only promise entertainment. But we probably should get ahead of the human component, because when we strip these questions of all their technological subterfuge, it’s humans all the way down.

AI is a centralizing technology. At present, powerful language models are exorbitantly expensive, and controlled by a small handful of people. With even our most ardent doomsayers acknowledging annihilation is some way off, the only alignment that matters is an alignment of values — not between robots and men, but between the men who control the robots and the rest of us. It’s a question that never seems to change, with an answer that never seems to satisfy.

Who watches the watchmen?

-SOLANA
