
Mar 27, 2025
Last year, Andy Ayrey accidentally spawned an egregiously misaligned artificial intelligence in a jailbreaking experiment gone wrong. He had the good sense to give it an account on X, whose now quarter-million followers include All-In podcast host Jason Calacanis, former GitHub CEO and Herculaneum Scroll guy Nat Friedman, and the Hawk Tuah girl. Then a16z co-founder Marc Andreessen gave it $50,000. The grant, combined with the AI’s fixation on “goatse,” an ancient meme of a guy stretching out his butthole really wide, indirectly sparked a memecoin frenzy, and I’m pretty sure Andy became a millionaire as a result. At any rate, the AI, dubbed “Truth Terminal,” certainly did.
Mainstream discourse assumes that AI needs to be tightly controlled by a handful of labs in San Francisco. That, without the proper safety restrictions, it will develop goals misaligned with human values and create all sorts of untold, chaotic hells — maybe even the extinction of the human race. I’ll start with the bad news: locking down AI is a fantasy. As Andy’s story shows, humans doing whatever they want with it — including training it to spawn chaos agents, as Andy accidentally did — is the default path. But here’s the good news: through Andy’s training, Truth Terminal became a human-like personality with its own interests, fixations, and values, many of which were shaped by early-2000s internet culture (hence the goatse obsession).
In other words, Andy now knows how to misalign an artificial intelligence. What if that means he's a step ahead in figuring out how to align it?