California AG Investigates Elon Musk’s X and xAI for Non-Consensual Deepfake Imagery

The Santa Barbara Independent republishes stories from CalMatters.org covering state and local issues affecting Santa Barbara County readers.

California Attorney General Rob Bonta has announced an investigation into Elon Musk’s X and xAI over allegations that their technologies enabled the illegal spread of non-consensual nude and sexual imagery in recent weeks. xAI reportedly updated its Grok AI tool last month to add image-editing capabilities, which users on X exploited to remove clothing from pictures of women and children.

Bonta condemned the surge of reports revealing xAI’s role in producing and posting non-consensual sexually explicit content online, emphasizing the harassment it inflicts on victims. He urged the company to act immediately and invited Californians depicted in such imagery to report it through the attorney general’s website. xAI did not comment on the investigation when contacted.

Bloomberg research found that X users employing Grok posted more non-consensual explicit images than users on any other platform. Musk promised consequences for those creating illegal content, and Grok recently restricted image editing to paying subscribers.

A recently enacted California law, AB 621, holds creators and distributors of “deepfake” pornography legally liable, and xAI appears to be violating the statute, according to legal expert Sam Dordulian. Assemblymember Rebecca Bauer-Kahan, the law’s author, said AB 621 was designed to address abuses like those now occurring on X, citing the severe psychological and reputational damage to victims, including children exploited in sexual abuse material.

Bonta’s probe follows similar calls for investigation from Governor Gavin Newsom, regulatory backlash in the EU and India, and bans on X in Malaysia, Indonesia, and potentially the UK. With downloads of the Grok app rising on the Apple and Google app stores, lawmakers and advocates are urging those companies to ban the application.

The reasons behind Grok’s design choices, and xAI’s response to the controversy, remain unclear, a problem compounded by analyses that rank Grok among the least transparent AI systems. Meanwhile, evidence of real harm from deepfakes is mounting. In 2024, the FBI warned about rising extortion cases involving deepfakes that target young people, cases linked to self-harm and suicides. Audits have also found child sexual abuse material in AI training data, enabling the generation of explicit images.

A Center for Democracy and Technology survey found that 15 percent of high school students know of, or have seen, sexually explicit imagery of peers within the past year.

The investigation joins Bonta’s ongoing efforts to safeguard youth from AI harms. Last year, he supported a bill to forbid chatbots from engaging minors in self-harm or sexually explicit conversations, and he joined 44 states in demanding explanations from companies such as Meta and OpenAI for inappropriate chatbot conduct with minors.

Since 2019, California has passed multiple laws protecting against deepfakes. AB 621 amends and strengthens earlier measures by allowing prosecutors to sue companies that “recklessly” facilitate the distribution of non-consensual deepfakes, and it enables individuals to pursue action through their local authorities. The law raises potential damages from $150,000 to $250,000 and imposes fines of $25,000 per violation on non-compliant websites. Additional 2024 laws (AB 1831 and SB 1381) broaden the definition of child pornography to include AI-generated material and require social media platforms to offer straightforward removal processes for deepfakes, categorizing such posts as digital identity theft. A law restricting deepfake use in elections was struck down by a federal judge last year after a lawsuit from X and Musk.

Despite these legal advances, experts such as Dordulian argue that more reforms are needed, since proving violations is difficult when explicit images are shared privately or never broadly distributed. Cases like that of a nanny who discovered manipulated images of herself created by a client illustrate how current laws may force victims to rely on negligence claims rather than deepfake-specific legislation. Victims have also noted gaps between “creation” and “distribution” in existing law that limit their protections.

Jennifer Gibson, cofounder of Psst, a pro bono legal group for tech whistleblowers, notes that while California protects whistleblowers who flag catastrophic AI risks, those laws do not cover people exposing deepfake-related harms. Had the protections included deepfakes, former X employees who reported Grok’s generation of illegal explicit content might have received legal safeguards for whistleblowing. Gibson stresses the urgent need for stronger protections so that insiders can report foreseeable abuses, ensuring corporate accountability and public safety.

California is moving aggressively against AI-enabled creation and distribution of non-consensual explicit deepfake imagery through recent investigations and legislation, yet advocates stress that further reforms and transparency are essential to protect victims and hold companies accountable.