
Aug 21, 2024
SB 1047, California’s effort to regulate frontier AI models, is nearing the finish line. It will soon either die a quiet death in the California Assembly or proceed to Governor Newsom’s desk for signature or veto. Though the bill has undergone many revisions since its introduction six months ago, its basic structure remains the same: it appoints unelected bureaucrats to set nebulous AI safety “guidelines,” and it holds AI model developers legally liable for major crimes committed with their models by third parties beyond the developers’ direct control.
Ultimately, SB 1047 sets the stage for the gradual death of frontier open-source AI. The liability provisions will increase the marginal risk of releasing powerful models as open source. The government-issued guidelines envisioned by the bill will likely be incompatible with open-source models, similar to how the federal US AI Safety Institute’s recent AI misuse mitigation guidelines made recommendations that are impossible for open-source developers to adopt.
This should come as no surprise: the Effective Altruism-aligned Center for AI Safety (CAIS) co-sponsored SB 1047, drafting it in all but name. Many Effective Altruists believe open-source AI is an existential threat to every living thing on Earth, and that AI development should instead be centralized under a monopoly lab or the government itself. Some have proposed de facto bans on any open-source model larger than GPT-3: earlier this year, CAIS proposed stringent regulations on models trained with more than 10^23 FLOPs, while others, like Gladstone AI, have suggested making it a felony to release open models far smaller than Llama 3.