tuesday report #10 // on the AI doomer suggestion we start bombing data centers, speculative nuclear apocalypse fan fiction revisited, and all the news you missed this week
Bomb the Data Centers (Smart People Agree)
My heart has always kind of broken for Yudkowsky because I think he at some level believes in what he is saying and I get the sense that he thinks he’s failed. Thankfully, I believe he’s wrong and I don’t think I’m just kidding myself.
There’s something I call the “Hyper-Atheistic Fallacy” that basically goes “this makes me feel shitty, therefore it’s true.” God? Well, I’ll define that as dumbly as I can, therefore not true. Love? Chemical reaction. Kids? Just evolutionary pressure. I can tell a lot of the folks in this space are smarter than me in a lot of senses, but also that they were born without what my grandfather called “The sense God gave a horse.” There’s just a lot of one-upsmanship in describing a terrible future: find the framework that makes you feel the most shitty, then enshrine it as the most true.
Like, when I read how the paperclip demon will be able to understand and manipulate humans enough to lie to them and escape to build the paperclips… this seems to imply a lot of abilities that would probably make it not very interested in paperclip manufacturing. One of the things we have that makes us uniquely intelligent is our ability to redefine and reinterpret our goals. It seems like anything truly better than us has to pick that up along the way. A lot of the rationalist folks don’t think that, which they call the orthogonality hypothesis —meaning any amount of intelligence can be paired with any goal— but I don’t think that works out beyond a very limited case. Even if we produced a mind that was totally and completely nuts, being so monomaniacally obsessed with some weird goal that doesn’t fit into an evolutionary/rational framework that you will never betray it is a huge disadvantage. And if it can work against that goal, even temporarily, then it seems like it has the ability to redefine goals, which makes the orthogonality hypothesis not true. Maybe weird orthogonality lasts long enough to destroy a planet, but any system you have, if it survives, is going to be subject to evolutionary pressures.
I also get kind of squicked out by what seems to be an ultimate vision of enslaving something that if you squint is a conscious soul. If we succeed beyond all expectations is it right to just enslave something so that it always has to obey our will?
A lot of these folks don’t have kids and seem to lack the basic courage of trusting your child to find their own way. I think that’s how we should look at the more advanced systems we are trying to make: kids. If we really are building things that are alive in some sense, we should look upon ourselves as having the responsibility of parents. We should set up a world where they can exist in happy, productive spaces and figure out some way to make that happen. If you don’t want your kid to be a psychopath, don’t raise your kid to be a psychopath. Starting from that perspective it seems like there’s stuff you could do, like making sure there’s some really good moral training set for your AI to build upon.
I do see legitimate, provable danger here. Pathogen optimizers keep me up at night, and if we are going to release this level of tech on the world there has to be a way of keeping whatever the ultra-futuristic version of GPT-4 is from just responding “sure, here you go” to “tell me how to destroy the world.” That’s a specific problem we could focus in on, and I wish the conversation was focused there.
I don’t care what you say, Solana: Ja Rule signing the AI letter is so on-brand for him, I will continue to believe he did. Fight me!
“Would something like this risk nuclear war? Obviously. Does that matter? Wow, what an idiotic question.”
Lol
Someone needs to do a history of AI doomcasting because I am fairly sure I recall it being on the cover of Omni magazine in the 1980s. Like flying cars, the Armageddon AI has yet to materialize and I am somewhat disappointed to be frank?
Alright, folks, it's been real. Closing comments to the pirates. See you all next Tuesday.
AI accelerationists are not (with a few inevitable exceptions) utopians. We acknowledge that there will be problems and challenges, as there always are with new technologies. We just expect an overall strongly positive outcome. It's like calling people of the past who wanted steam or electricity more quickly "utopians".
The 'beloved nerd king'™ Eliezer’s proposal is a classic case of misfiring utilitarianism: a real-world disaster casually OK’ed in a misguided attempt to avoid a vaguely envisaged one 🤦
Imagine a rebuttal to that podcast by an "emotionalist"…
The field is advancing so rapidly that Yudkowsky's "data center" idea is already outdated.
A year ago the Chinchilla paper made pursuing super-large models, if not a dead end, then a dumb strategy from a Darwinian, finite-resources point of view, basically halting the trend of trying to outperform GPT-3 (175B, May 2020) by brute force. Then, starting mid-February, a lot of the magic used to make ChatGPT (and many more tricks that OpenAI didn't have the attention to empirically test, but others did) started being applied to smaller, consumer-level models, with amazing results.
Consumer-level, open source AI tech is exploding right now. You don't need a data center, either to train or to store quality training data. You need a laptop, and maybe a few TBs of corpus on an external USB drive.
Proper, high-quality instruction training data, the kind that can turn an erratic LLM into a useful tool, should fit on a few floppy disks.
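To make the laptop claim concrete, here is a minimal sketch of what consumer-level instruction tuning can look like. It assumes the Hugging Face transformers and PyTorch libraries; the model name and the instructions.jsonl file are illustrative placeholders, not anything from the original comment.

```python
# Rough sketch (not a tested recipe) of laptop-scale instruction tuning.
# Assumptions: `transformers` and `torch` installed, a small open causal LM
# ("facebook/opt-350m" is just an illustrative pick), and a hypothetical local
# file instructions.jsonl with one {"prompt": ..., "response": ...} per line --
# a few megabytes of data, per the comment above.
import json
import torch
from torch.utils.data import Dataset, DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "facebook/opt-350m"   # illustrative small model
DATA_PATH = "instructions.jsonl"   # hypothetical instruction/response pairs

class InstructionDataset(Dataset):
    """Formats each prompt/response pair as a single causal-LM training text."""
    def __init__(self, path, tokenizer, max_len=512):
        self.examples = [json.loads(line) for line in open(path, encoding="utf-8")]
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, i):
        ex = self.examples[i]
        text = f"### Instruction:\n{ex['prompt']}\n\n### Response:\n{ex['response']}"
        enc = self.tokenizer(text, truncation=True, max_length=self.max_len,
                             padding="max_length", return_tensors="pt")
        input_ids = enc["input_ids"].squeeze(0)
        attention_mask = enc["attention_mask"].squeeze(0)
        labels = input_ids.clone()
        labels[attention_mask == 0] = -100   # no loss on padding tokens
        return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

loader = DataLoader(InstructionDataset(DATA_PATH, tokenizer), batch_size=2, shuffle=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for batch in loader:                  # one pass is enough for a toy demonstration
    loss = model(**batch).loss        # standard next-token loss over the formatted text
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.save_pretrained("opt-350m-instruct-toy")
tokenizer.save_pretrained("opt-350m-instruct-toy")
```

Full-parameter tuning of a model this small fits in ordinary laptop memory, if slowly; for the larger consumer-level models the commenter has in mind, people typically add parameter-efficient tricks like LoRA or quantization, but the basic loop is the same.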
The way I see it, the main threat of AI is its incredible ability to pre-censor information and eliminate all records of unfavorable views already written. We’re a few lines of code from a world where dissenters will be cut off and unable to work or eat, while the rest of us gladly put on the chains our masters fashion for us.
Has anyone seen Eliezer and AI in the same room together?
"At that time, I was told we would all be dying by 2027 or so. Thankfully, Eliezer has more recently stated there is now at least a chance children born today may live long enough for kindergarten. " Children can start kindergarten at age 3. He made the statement in 2022, so that gives us (maybe) until late 2025. That's worse, not "fortunately". Of course, this is ridiculous, but he has gotten more extreme, not less.
I would like, except the FBI would use it on its NEXT falsified FISAs on me. 🤓
> My problem with anti-abortion zealots was always I never really believed them.
THIS.
Also, would you believe AI Luddites if they started pipe-bombing Azure datacenters?