California AI Transparency Law: New Requirements for Tech Giants Starting 2026

Starting January 1, a new law signed by California Gov. Gavin Newsom requires companies building large, advanced AI models, such as Google and OpenAI, to disclose more about their models' societal impacts and to protect employees who raise safety concerns. The law provides whistleblower protections for employees who assess critical safety risks and requires large AI developers to publish frameworks on their websites describing how they manage catastrophic risks and respond to incidents. Companies must report critical safety incidents to the state within 15 days, or within 24 hours if there is an imminent threat of death or injury. Fines can reach $1 million per violation.

Originating as Senate Bill 53, authored by Sen. Scott Wiener, the law targets catastrophic risks, defined as scenarios in which AI causes more than 50 deaths through cyberattacks or weapons use, or more than $1 billion in loss or damage, particularly where operators lose control over an AI system's actions, a largely hypothetical concern. The law also requires AI developers to issue transparency reports describing intended model uses, usage restrictions, risk-management approaches, and any third-party review of those efforts.

Rishi Bommasani of Stanford, a key contributor to a report that shaped SB 53, sees the law as an important advance for AI transparency, noting that only 3 of the 13 companies his team studied regularly report incidents and that transparency scores have declined. He emphasizes, however, that the law's impact depends heavily on enforcement and on the resources government agencies devote to it. The law has already influenced other states: New York's AI transparency and safety law follows California's model.

Despite its progress, critics point to significant gaps. The law excludes risks related to environmental impacts, disinformation, and systemic biases such as sexism or racism, and it does not cover government AI systems used for profiling or impact assessments. It applies only to companies with annual revenues over $500 million. Transparency requirements are also limited: incident reports submitted to California's Office of Emergency Services (OES) are confidential, accessible only to legislators and the governor, and may be redacted to protect trade secrets, restricting public access.

Bommasani hopes additional transparency will come from Assembly Bill 2013, effective January 1, 2026, which requires disclosure of the data used to train AI models. Further provisions of SB 53 take effect in 2027, when OES will begin producing anonymized reports on critical AI safety incidents received from the public and from AI developers. While these reports may clarify AI's potential for attacks or autonomous action, they will not publicly identify the specific AI models posing risks.