
Wikipedia Loses Major EU Speech BattleAug 19
in a precedent-setting case with far-reaching implications, a portuguese court rules that wikipedia published defamatory claims masquerading as fact, forcing a global takedown order
Feb 20, 2023
Mirror mirror on the wall. Last week, the New York Times published a 10,000 word conversation between star “technology columnist” Kevin Roose, and Microsoft’s new Bing bot, a search tool powered by the most advanced large language model (LLM) the public has ever seen. “Welcome to the Age of Artificial Intelligence,” the Times declared. But what does “artificial intelligence” mean, exactly? How are these models being used? What can they do? For anybody just tuning in — presumably including most Times readers — these are reasonable questions. Unfortunately, one would be hard pressed to find answers for them online, and certainly not from Kevin Roose.
Since the beta release of Microsoft’s new search tool, the social internet has been saturated with screenshots of alleged conversations between random, largely anonymous users and “Sydney,” which many claim the AI named itself. In reality, “Sydney” was the product’s codename, inadvertently revealed in an early prompt injection attack on the program, which separately revealed many of the AI’s governing rules. This is a kind of misunderstanding we will observe, in excruciating recurrence, throughout this piece (and probably throughout our lives, let’s be honest).
As most people are not yet able to use Microsoft’s new tool, Sydney screenshots have been grabbing enormous attention. But among the most popular and unnerving examples, a majority are crudely cut, impossible to corroborate, or both. The Times’ conversation with Sydney, while dishonest in its framing and execution, does at least appear to be authentic and complete. It also went totally viral. These facts make it the most notable piece of recent AI hysteria to date, perhaps of all time, and with that hysteria mounting it’s worth taking the piece apart in detail.