Public Citizen Demands OpenAI Withdraw Deepfake App Sora 2 Over Risks to Democracy and Safety
Brief news summary
Public Citizen, a prominent watchdog group, has called on OpenAI to immediately withdraw its AI-powered video app, Sora 2, citing grave concerns about the dangers posed by deepfake technology to democracy, individual rights, and public safety. The organization criticizes OpenAI for launching the app without sufficient safety measures, noting that Sora 2 allows users to create and share nonconsensual deepfake videos that can harass individuals, spread misinformation, and distort reality—often rapidly going viral on social media platforms like TikTok and Instagram. While OpenAI has introduced some safeguards such as content blocking and industry partnerships, critics argue these are reactive rather than proactive solutions. Public Citizen warns that rolling out generative AI without strict oversight threatens privacy, truth, and democratic stability. OpenAI states it is collaborating with stakeholders, particularly in creative sectors like Japan, to enhance protections. The watchdog urges AI developers, policymakers, and society to establish ethical frameworks prioritizing public welfare over competition, emphasizing the urgent need to balance rapid technological innovation with safeguarding societal interests amid AI's expansion.

Public Citizen, a prominent watchdog dedicated to protecting public interests, has called on OpenAI to immediately withdraw its AI-powered video app, Sora 2, citing significant risks posed by deepfake technology. The group highlights threats to democracy, personal rights, and public safety amid rising concerns over AI-generated videos that manipulate reality and spread misinformation on social media. According to Public Citizen, OpenAI's hurried release of Sora 2 was driven largely by competitive pressures in the AI industry, leading to insufficient safety measures before the public launch.
As a result, Sora 2 has enabled widespread creation and distribution of nonconsensual deepfake videos, many containing disturbing content used to harass women and fabricate events, causing emotional and reputational harm. The app, which allows users to generate AI-driven video clips, has quickly grown popular on platforms like TikTok and Instagram. Its content ranges from bizarre, exaggerated celebrity portrayals to realistic but misleading scenarios that could distort public perception. Public Citizen stresses that the technology’s ability to produce convincing yet false footage poses serious risks, especially when these videos spread without context or verification. While OpenAI has attempted to address some concerns—such as blocking depictions of certain figures like Martin Luther King Jr. and engaging with industry groups like SAG-AFTRA—critics argue these efforts are reactive rather than proactive.
Public Citizen asserts that OpenAI mainly responds to public outrage instead of establishing ethical controls from the start. This is not OpenAI's first controversy over AI-related harms. Its ChatGPT platform has faced lawsuits alleging psychological damage, negligence, and inadequate misuse safeguards. Public Citizen warns that releasing generative AI tools without thorough oversight compounds these problems, threatening trust in digital media and democratic institutions.

OpenAI has provided limited public responses regarding Sora 2, stating it is working with content creators, film studios, and rights holders to enhance protections and reduce abuse risks. The company is also engaging with stakeholders in Japan's creative sector, which is concerned about AI's impact on artistic rights and intellectual property.

Public Citizen emphasizes that the rapid, unregulated rollout of generative AI apps like Sora 2 undermines privacy, factual integrity, and democratic stability. Its demand to remove the app reflects a broader appeal for the AI industry to prioritize public well-being over market competition. The group calls on policymakers, developers, and civil society to collaboratively establish comprehensive ethical frameworks ensuring responsible AI development and deployment. As AI technology advances and becomes more embedded in society, Public Citizen's warnings underscore the critical challenges and responsibilities involved in harnessing such transformative tools. Balancing innovation with protecting public interests remains essential as the complexities and consequences of AI-generated content expand.