California governor vetoes bill to create first-in-the-nation AI safety measures

SACRAMENTO, California – California Gov. Gavin Newsom on Sunday vetoed a landmark bill aimed at establishing first-in-the-nation safety measures for large artificial intelligence models.

The decision is a major blow to efforts to rein in a rapidly expanding domestic industry with little oversight. Supporters said the bill would create some of the first regulations on large-scale AI models in the country and pave the way for nationwide AI safety regulations.

Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California should take the lead in regulating AI in the face of federal inaction, but that the proposal “could have a chilling effect on the industry.”

Newsom said the proposal, which faced fierce opposition from startups, tech giants and several House Democrats, could have harmed the local industry by imposing stringent requirements.

“SB 1047, although well-intentioned, does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or uses sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions, so long as a large system deploys it. I do not believe this is the best approach to protecting the public from the real threats posed by the technology.”

On Sunday, Newsom announced that the state will instead partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal.

The measure, aimed at reducing potential risks posed by artificial intelligence, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s power grid or help make chemical weapons. Experts say such scenarios could become possible as the industry continues to advance rapidly. The bill also would have provided whistleblower protections to workers.

Democratic state Sen. Scott Wiener, who wrote the bill, called the veto “a setback for anyone who believes in the oversight of massive corporations making critical decisions that affect the public’s safety, welfare and the future of the planet.”

“Companies developing advanced AI systems recognize that the risks these models present to the public are real and rapidly increasing. While major AI labs have made admirable commitments to monitor and mitigate these risks, the reality is that voluntary commitments from the industry are not enforceable and rarely work out well for the public,” Wiener said in a statement Sunday afternoon.

Wiener said the debate around the bill has significantly advanced the issue of AI safety and he will continue to press this point.

The proposal was among a host of bills passed by the Legislature this year to regulate artificial intelligence, fight deepfakes and protect workers. State lawmakers said California had to act this year, citing the hard lessons they learned from failing to rein in social media companies when they had the chance.

Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have brought some transparency and accountability to large-scale AI models, as developers and experts say they still lack a full understanding of how AI models behave and why.

The bill targeted systems that cost more than $100 million to build. No current AI models have reached that threshold, but some experts said that could change within the next year.

“This is because of the massive scale of investment in the industry,” said former OpenAI researcher Daniel Kokotajlo, who resigned in April over what he saw as the company’s disregard for AI risks. “This is an insane amount of power for any private company to control unaccountably, and it’s also incredibly risky.”

The United States is already behind Europe in regulating artificial intelligence to limit risks. California’s proposal wasn’t as comprehensive as European regulations, but it would have been a good first step in putting up guardrails around the fast-growing technology, which has raised concerns about job loss, misinformation, invasions of privacy and automation bias, its supporters said.

Last year, a number of leading AI companies voluntarily agreed to follow safeguards set by the White House, such as testing their models and sharing information about them. Supporters of the measure said the California bill would have mandated that artificial intelligence developers follow requirements similar to those commitments.

But critics, including former US House Speaker Nancy Pelosi, argued that the bill would “kill California tech” and stifle innovation. They said this would deter AI developers from investing in large models or sharing open source software.

Newsom’s decision to veto the bill marks another victory in California for big tech companies and artificial intelligence developers, many of whom spent the past year working with the California Chamber of Commerce to lobby the governor and lawmakers against advancing artificial intelligence regulations.

Two other sweeping AI proposals, which also faced growing opposition from the tech industry and others, died before the legislative deadline last month. The bills would have required AI developers to label AI-generated content and banned discrimination by AI tools used to make employment decisions.

The governor said earlier this summer that he wants to maintain California’s status as a global leader in artificial intelligence, noting that 32 of the world’s 50 largest artificial intelligence companies are located in the state.

He has promoted California as an early adopter, as the state could soon deploy generative AI tools to relieve highway congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, university faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices.

Earlier this month, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and to protect Hollywood workers from unauthorized use of AI.

But despite Newsom’s veto, the California safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit organization that works with lawmakers on technology and privacy proposals.

“They will potentially either copy this or do something similar in the next legislative session,” Rice said. “So it’s not going away.”

___

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

Copyright 2024 Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.