OpenAI launched in 2015 with the stated goal of building humanity’s first artificial general intelligence (AGI). The response to that launch was, as Altman tells Lex Fridman in a recent podcast interview, mostly derision and laughter. People rolled their eyes — there go those crazy techbros again!
But with the 2020 release of GPT-3, the laughter turned to curiosity. Then the December 2022 launch of ChatGPT, powered by an even more capable version of the underlying language model, turned that curiosity into a rolling wave of euphoria with undertones of rising panic.
The recent rollout of OpenAI’s GPT-4 model had a number of peculiar qualities, and having stared at this odd fact pattern for a while now, I’ve come to the conclusion that Altman is carefully, deliberately trying to engineer what X-risk nerds call a “slow take-off” scenario: AI’s capabilities increase gradually and more or less linearly, so that humanity can progressively absorb the novelty and reconfigure itself to fit the unfolding, science-fiction reality.