
Jul 17, 2024
California's AI safety bill (SB 1047), which would create a new government agency to regulate AI, has been widely criticized as impractical, onerous, and likely to stifle innovation
The bill requires AI researchers to sign a sworn statement affirming their models don't pose an "unreasonable risk" of harm, even though there is no consensus at all on the theoretical harm that cutting-edge models such as GPT-4 can cause
Many of the bill's requirements are so vague that even leading AI scientists would not agree on how to meet them
The scope of SB 1047, combined with California's outsized importance in the burgeoning AI industry, means the bill would act as an extrajudicial compliance mechanism with nationwide (and likely international) implications
Devs, prepare yourselves: California's AI safety bill is inching closer to the legislative finish line. Earlier this month, SB 1047 (officially "The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act") passed the Assembly Judiciary Committee, clearing a key legislative hurdle on its way to Newsom's desk. The bill awaits further votes in the Assembly Appropriations Committee and on the chamber floor, both of which it will likely pass easily, and the only question that remains is whether the Governor will veto it.