(TNS) — Gov. Gavin Newsom on Monday announced a handful of new laws regulating artificial intelligence and social media even as he vetoed what he said were overly broad measures to regulate the technology.
The bills Newsom signed Monday aim to crack down on artificially generated pornography, require warning labels on social media websites, and regulate AI chatbots for minors. But he vetoed a bill that would have prohibited companies from letting children use chatbots that promote sex or violence and another that would have banned employers from letting AI decide when to fire people.
His actions follow a familiar pattern for a governor who has been reluctant to impose sweeping restrictions on the rapidly developing AI industry, which serves as a key pillar of the state’s economy and a powerful lobbying voice in the state Capitol.
He voiced that concern in his veto message for Assembly Bill 1064 by Assembly Member Rebecca Bauer-Kahan, D-Orinda, which would have prohibited companies from letting children use chatbots capable of sexually explicit interactions or of encouraging a child to contemplate killing or hurting themselves or others.
“AB1064 imposes such broad restrictions on the use of conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors,” Newsom wrote. “AI is already shaping the world, and it is imperative that adolescents learn how to safely interact with AI systems.”
Newsom also vetoed SB7 by Sen. Jerry McNerney, D-Pleasanton, saying it would have imposed “overly broad restrictions” on employers. SB7 generated opposition from a slew of business groups, including the Consumer Technology Association and the California Chamber of Commerce. The bill, backed by labor unions, would have required employers to tell their employees when they use AI-powered automated decision systems in the workplace and prohibited employers from letting AI decide when to fire or discipline employees.
The new law Newsom signed to regulate companion chatbots generated controversy at the end of the legislative session when last-minute amendments weakened the bill so significantly that some backers dropped their support.
Early versions of Senate Bill 243 by Sen. Steve Padilla, D-Chula Vista (San Diego County), would have banned chatbots that provided unpredictable rewards to users in an attempt to boost engagement, required disclaimers that the chatbots are not real people and forced platforms to ensure their bots did not encourage suicide before releasing them to the public. It also would have required companies to undergo third-party audits of companion chatbots, and to report discussions of suicide on their platforms to the state.
Last-minute amendments weakened the regulations to only apply to users the companies know are children, scaled back the reporting requirements and eliminated the third-party audits.
Some parents and groups cheered the signing of the bill, saying it will provide important protections.
“Today, California has ensured that a companion chatbot will not be able to speak to a child or vulnerable individual about suicide, nor will a chatbot be able to help a person to plan his or her own suicide,” Megan Garcia, a mother whose son killed himself after developing a relationship with a companion chatbot, wrote in a statement. “Finally, there is a law that requires companies to protect their users who express suicidal ideations to chatbots.”
Jim Steyer, who leads the influential lobbying group Common Sense Media, which dropped its support for the bill, said it sets weaker standards than those in other states and could mislead parents into believing the chatbots are regulated more meaningfully than they actually are.
In September, Steyer said he was concerned that Newsom might sign SB243 but veto AB1064.
“(SB)243 really was an example where the tech industry’s massive lobbying effort was successful,” Steyer said. “Our concern is Gov. Newsom will try to look good and say, ‘look I did something on chatbot companions’ … and then not support the much more significant bill that (Assembly Member) Rebecca Bauer-Kahan has led the way on.”
Padilla described his bill as an important step in the right direction and noted that major legislation is often the result of negotiations with businesses and other interest groups that will be affected by the regulations.
“This is by design a collaborative process, and that means you have to work with all interest groups, you have to work with the governor and sometimes you have to compromise,” Padilla told the Chronicle. “Do I wish my bill were even stronger? Yes. But do I accept the argument that it would be better to have no protections? I think that’s ludicrous.”
He noted that his bill will allow people to sue over violations. Because the law largely applies to children, those lawsuits could be made stronger by another measure Newsom signed Monday — Assembly Bill 1043 by Assembly Member Buffy Wicks, D-Oakland, to impose digital age verification requirements for companies.
“California’s children are growing up with access to an online world that was not built with them in mind, and I know this because I have a 4- and 8-year-old and I see it every single day,” Wicks said during a committee hearing on the bill over the summer. “This lack of meaningful consideration has left young users exposed to harmful content, manipulative design features, and inappropriate and dangerous online interactions.”
When parents set up a phone or tablet with a child’s age, the law will require developers of apps like Facebook and Snapchat to review age information from the device when an app is downloaded. That will prevent companies from claiming they don’t know how old users are to evade laws requiring them to ensure children can’t access potentially harmful features.
It will apply only to applications, not websites. The measure faced significant tech industry opposition, though several high-profile companies, including Meta and Google, signed on in support after the bill was watered down.
Another Wicks bill, AB853, aims to make it easier to identify AI-generated content by requiring large online platforms — including social media giants like Instagram — to make origin data for uploaded content accessible starting in 2027. It would also require the makers of smartphones, cameras and audio recorders to let users embed origin information about the content they capture, such as the device name, into their images and recordings to help prove their authenticity starting in 2028.
The new laws announced Monday build on other technology regulations Newsom has approved in recent weeks, including measures to place some limits on algorithmic price fixing and require internet browsers to let users set preferences for whether they want websites to share or sell their data.
Newsom also signed SB53 by Sen. Scott Wiener, D-San Francisco, which emerged as one of the most high-profile efforts to regulate the industry after Newsom vetoed a more sweeping version of it last year. Last year’s bill — SB1047 — would have created safety and testing requirements for the largest and most powerful AI programs. At the time, Wiener lamented that Newsom had not negotiated with his office while the bill was debated in the Legislature. This year, Wiener said the governor’s office was much more engaged during the legislative process, including by convening a group of experts to publish a framework to inform AI regulation.
The narrowed version signed by Newsom will require developers of the most powerful and expensive AI models to test and plan for potentially catastrophic risks that could kill more than 50 people or result in more than $1 billion in damage. Those could include using the technology to create a biological weapon or destroy critical infrastructure. It also includes protections for whistleblowers at AI companies who report safety concerns.
“With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place common-sense guardrails to understand and reduce risk,” Wiener wrote in a statement cheering the governor’s signature on the law. “With this law, California is stepping up, once again, as a global leader on both technology innovation and safety.”
Newsom historically has been sensitive to arguments by the tech industry that overregulation could hurt innovation and companies’ profits. The industry is also a cornerstone of California’s economy and, by extension, the state budget. Last year, even as he vetoed SB1047, he signed what he described as more “surgical” AI bills that placed more targeted guardrails on the rapidly advancing technology.
Newsom emphasized those considerations in a statement when he signed SB53.
“We can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” he wrote. “This legislation strikes that balance.”
Wiener altered the bill to assuage concerns from the tech industry, though the final law still faced some opposition from tech companies. The Consumer Technology Association, which represents tech companies, argued SB53 would stifle innovation.
“Fundamentally CTA believes that regulation of this breadth, of a technology with so much national strategic importance, should be a federal issue,” CTA CEO Gary Shapiro and President Kinsey Fabrizio wrote in a letter to Newsom urging him to veto SB53.
Congress has failed for years to pass any significant regulation of AI, and some federal Republican lawmakers, working with President Donald Trump, have tried to block states from policing the technology. Efforts to do so through the federal budget process failed earlier this year after a bipartisan group of state leaders vigorously complained that such a move would prevent them from regulating even the most harmful uses of AI. Sen. Ted Cruz, R-Texas, who championed the state preemption, has since proposed legislation that would allow the Trump administration to exempt individual companies, raising concerns it could incentivize tech companies to bribe the government.
“Because of the complete dysfunction of Washington D.C., and the dysfunction of Congress, California is essentially the most important government, along with Brussels and Europe, in regulating technology,” Steyer said.
That resulted in a big lobbying push from tech companies against many of the bills to regulate them in Sacramento this year.
“The tech industry has spent tens of millions of dollars this year alone to limit or block this legislation,” Steyer said. “This is the defining moment for AI so all of these bills are really significant.”
© 2025 the San Francisco Chronicle. Distributed by Tribune Content Agency, LLC.