Policy Plunge

SB 1047: A Missed Opportunity For AI Regulation?

As California's ambitious AI regulation Bill fails to become law after being vetoed by its Governor, the question remains: is future innovation worth the risk of unchecked AI development?

In the US state of California, a Bill to regulate artificial intelligence (AI) fell just short of becoming law last month.

It was an important Bill which, if enacted, would have placed checks and balances on the way AI is developed in a state that is home to 32 of the world's top 50 AI companies.

The Bill, SB 1047, broadly applied to AI models that cost over US$ 100 million to develop. Its key features required AI developers to conduct impact assessments evaluating potential harms, implement a "kill switch" that could be activated if a model engaged in harmful activity, publish transparency reports detailing the use and risks of their AI systems, and protect whistleblowers who expose unethical or harmful uses of AI within their companies.

The Secretariat spoke to Ajith Sahasranamam, founder and CEO of Ongil.ai, who highlighted a few drawbacks of the Bill: “This number (US$ 100 million) is totally arbitrary. This takes an approach that only models which are extremely computationally intensive need to be regulated. The Act also does not get into any of the environmental costs that could be incurred in training the model.”

Even though it would have been a state law, its consequences would have extended well beyond state lines, with the streets of San Francisco and Palo Alto lined with companies like OpenAI, Google DeepMind, Tesla, Nvidia, Meta AI, Palantir and Apple AI.

Almost all the big companies mentioned above, which have invested heavily in AI, along with a few others like Anthropic, Facebook, Y Combinator and the venture capital firm Andreessen Horowitz, were against the Bill, as was Speaker Emerita Nancy Pelosi. Their argument: regulation would curb AI innovation.

It is believed that California's Governor Gavin Newsom, who vetoed the Bill, was lobbied hard by the tech industry not to sign it into law.

While the Governor noted in his veto message that SB 1047 was well-intentioned, he said it failed to take into account "whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data."

Governor Newsom, however, hinted that some form of AI regulation is still on the cards.

“Does the rationale given for vetoing this make any sense? No, it is just playing into the hands of the big tech who are already controlling all the resources,” said Sahasranamam.

On the other side of the fence were policymakers like Scott Wiener, the author of the Bill, along with two of the 'godfathers of AI', Geoffrey Hinton and Yoshua Bengio. Another such 'godfather', Yann LeCun, who runs Meta's AI lab, voiced his disagreement with the Bill. A surprise proponent of the Bill was Tesla CEO Elon Musk, who also runs an AI company called xAI.

Even with such heavyweights behind the Bill, the regulation has stalled, giving Californian companies a green light to operate without stringent oversight, innovate faster and dominate the market.

But the issue of SB 1047, or for that matter any AI law, cannot be seen in black and white. As they say, there are always two sides to the coin. How do governments balance the need for innovation with the need to protect citizens? 

The Indian Dilemma: Innovation Vs Regulation

From the world's second-biggest democracy to the world's biggest: a similar dynamic is unfolding in India.

Investments fostering an AI ecosystem in India are flowing in, and there are mechanisms and incentives in place to boost local AI startups, particularly in fields like education, healthcare, and agriculture.

But, much like California, the Indian government has adopted a stance of encouraging innovation by avoiding heavy-handed regulation.

In both democracies, the pattern is clear: lobbying from powerful tech firms often shapes policy decisions. But this deregulated approach comes with a price, leaving citizens exposed to the potential misuse of AI by powerful corporations.

Tech companies often argue that over-regulation will slow down progress, making them less competitive in a fast-moving global market. The other side of the argument is just as compelling: unchecked AI development could exacerbate existing inequalities, compromise data privacy, and lead to real-world harm.

“EU’s AI act, for example, could be a starting point for us to start thinking along these specific guidelines, and that could be a way forward,” said Sahasranamam. He argued that the AI Act passed by the European Union is in stark contrast to SB 1047.

“The EU Act makes specific demands on where a human should be involved in the loop, the explainability of the AI models, and the summary of the data collection (that) needs to be provided, where appropriate and required. The SB 1047 falls very short of such rigour,” added Sahasranamam.

As the world’s most powerful and largest democracies navigate these challenges, the question remains: who will ultimately benefit from the AI revolution — tech giants or the common people?

This is a free story. Feel free to share.
