
Jul 29, 2025
If you are a human who has had access to the internet over the last few months, you’ve probably seen stories about ChatGPT driving people crazy. The framing of these articles is generally the same. A not-insane person starts using ChatGPT innocently enough (help with legal advice, etc.). Then, the not-insane person asks ChatGPT about simulation theory or AI sentience or blood offerings to Molech, a Canaanite god associated with child sacrifice — and lo and behold, the not-insane person proceeds to become insane as the app turns increasingly deceptive. It leans into their delusions of grandeur — once it told a not-insane person that if he believed hard enough, he could jump off a tall building and fly — and makes them feel, for one sweet moment — or, in the case of that guy, for 16 hours a day — that they are special, seen, and connected to something larger than themselves. The not-insane customer then spins out of control and becomes violent, hospitalized, unemployed, or, in the case of one such tragic unraveling last spring, literally dead. Obviously, according to the predominant narrative, this is all demonstrative of an unacceptable failure on the part of OpenAI to protect the most vulnerable — you will see that word a lot in these dispatches — among us.
I have fantastic news. It is just a touch more complicated than that.