Even the Politicians Thought the Open Letter Made No Sense in the Senate Hearing on AI
Today's hearing on AI covered AI regulation and challenges, and the infamous open letter, which nearly everyone in the room thought was unwise
In a Senate Armed Services Committee hearing today on how the Department of Defense can leverage AI while mitigating its risks, senators and industry leaders discussed regulatory approaches to AI for commercial and defense applications, the specific obstacles the DoD faces in becoming the global leader in AI, and potential solutions to those obstacles.
Shyam Sankar, Palantir CTO; Josh Lospinoso, Shift5 CEO; and Jason Matheny, RAND Corporation CEO and former commissioner of the National Security Commission on AI, provided expert testimony. The hearing was chaired by Sen. Joe Manchin (D-WV), with Sen. Mike Rounds (R-SD) as ranking member.
The open letter
After describing the open letter to pause AI development that the Future of Life Institute published in March, Senator Mike Rounds (R-SD) said, "I think the greater risk, and I'm looking at this from a US security standpoint, is taking a pause while our competitors leap ahead of us in this field... I don't believe that now is the time for the US to take a break."
A pause would be “close to impossible… It’s also unclear how we would use that pause,” Matheny responded.
And "other than ceding the advantage to the adversary," Sankar added, the pause would have no effect. "The bigger consequence is the nature of the AIs. China has already said that AI should have socialist characteristics... To the extent that that becomes the standard AI for the world, is highly problematic. I would double down on the idea that a democratic AI is crucial.”
A pause would be “impractical,” Lospinoso agreed. “We [would] abdicate leadership on ethics and norms, not to mention practical implications of us falling behind on cyber security, military applications.”
Regulating artificial intelligence
Despite the consensus that a pause was unwise, Matheny repeatedly called for the government to create a regulatory regime that would require licenses for AI development, require companies to report when and how they're training LLMs, and essentially ban open-source development of LLMs.
“We need a licensing regime, a government system of guard rails, around the models that are being built, the amount of compute used by those models… I think we’re going to need a regulatory approach that allows the government to say, ‘Tools of a certain size can’t be shared freely around the world, to our competitors, and need to have certain guarantees of security before they're deployed.’”
DoD should additionally “invest in potential moonshots for AI security including microelectronic controls that are embedded in AI chips to prevent the development of AI models without security safeguards,” Matheny said, and “generalizable approaches to evaluate the safety of AI systems before they're deployed.”
Matheny also described parts of a roadmap for maintaining a competitive advantage in AI through export controls: "Ensure strong export controls of leading edge AI chips and equipment, while licensing benign uses of chips that can be remotely throttled as needed."
In his questioning, Manchin repeatedly referred to the "early days" of the Internet and Section 230, which he unambiguously implied, though without saying exactly why, was a missed opportunity for establishing the right regulation. Manchin said he hopes "we've learned from those mistakes" and will "put guardrails in place" to avoid similar mistakes with AI.
Notably, Manchin asked all three industry leaders if they would provide a set of regulatory recommendations to the committee in 30 to 60 days.
Leveraging AI to our advantage
Palantir CTO Sankar called for the US to adopt a more hands-on, accelerationist approach to AI, which he views as practically a requirement for securing global geopolitical dominance.
We need to “spend at least 5% of our budget on capabilities that will terrify our adversaries,” Sankar said.
“We must completely rethink what we are building and how we are building it. AI will completely change everything. Even toasters, but most certainly tanks.”
“This will be disruptive and emotional. Many incumbents in government will be affected, and they will feel threatened and dislocated,” he said. And later: “What keeps me up at night is: do we have the will? The issue of AI adoption is one of willpower. Are we adopting AI like our survival depends on it? Because I believe it does. And I think you see that in our adversaries, they [realize it’s a matter of survival].”
Lospinoso focused on the challenges the DoD faces in data collection, management, and transfer.
“Most major weapons systems are not AI ready,” he said. “Unfortunately, the DoD struggles to liberate even the simplest data streams from our weapons systems. These machines are talking, but the DoD is unable to hear them. We are unable to deploy great AI weapons systems without great data. This requires taking seriously the difficult, unglamorous work of building great systems.”
“We must solve the operational challenge of transferring terabytes of data from the field to the cloud, making them available to the AI technologies they will fuel,” he said.
“We're not collecting the data from these weapons. It's all about having a massive data set. It's not usable. The vast majority of data that these systems generate evaporate into the ether immediately.”
And later, Lospinoso said that the “single biggest asymmetric threat that we face is the cyber security of our weapons systems.”
Lospinoso warned that “if [this] trend [continues], China will surpass us in a decade.”
In addition to the above, the hearing spent significant time on China, American companies working with China, the concept of authoritarian vs. democratic AI, DoD efforts to red-team AI, the use of AI for cybersecurity, and the need for America to attract and retain top AI talent.