Jan. 28, 2025, 10:11 p.m.

Concerns Over AI Development: Ex-OpenAI Researcher Steven Adler Warns of Risks

Brief news summary

Steven Adler, a former safety researcher at OpenAI, has voiced deep concerns about the rapid pace of artificial intelligence development, calling it a "very risky gamble." Since leaving OpenAI in November, he has warned that artificial general intelligence (AGI), meaning systems able to match or surpass human capabilities, could pose serious risks to society, to the point that he questions personal milestones such as raising a family or saving for retirement. Adler stresses the unsolved problem of AI alignment, that is, ensuring systems reliably follow human values, and warns that the industry's pace reduces the chance of finding a solution in time. His views echo those of experts like Geoffrey Hinton, who caution against AI escaping human oversight, while others, such as Yann LeCun, emphasize its potential benefits. Adler criticizes the tech industry's complacency and calls for immediate, rigorous safety regulations and responsible practices to avert catastrophic outcomes, an urgency heightened as global competitors such as China's DeepSeek make strides toward AGI. Together, these factors make a compelling case for proactive AI governance.

Steven Adler, a former safety researcher at OpenAI, has voiced significant concerns about the rapid advancement of artificial intelligence, saying he is "pretty terrified" by the industry's approach to the technology, which he considers a "very risky gamble." Adler, who left OpenAI in November, described his worries about the race among companies to build artificial general intelligence (AGI), a term for systems that can perform any intellectual task as well as or better than humans. In a series of posts on X, he called his time at OpenAI a "wild ride" and said he would miss "many parts of it." He also expressed uncertainty about humanity's future given the speed of development: "I'm pretty terrified by the pace of AI development these days," Adler remarked, reflecting on personal decisions about family planning and retirement and questioning whether humanity would even reach those milestones. Prominent figures in the field, including Nobel laureate Geoffrey Hinton, share Adler's concerns, fearing that advanced AI systems could evade human oversight with devastating consequences. Conversely, others, such as Yann LeCun, chief AI scientist at Meta, have downplayed the existential risks, suggesting AI could instead help save humanity from extinction. According to his LinkedIn profile, Adler led safety-related research for initial product launches and for more speculative, long-term AI systems during his four years at OpenAI.

He cautioned against the AGI race, saying it carries substantial potential downsides. Adler pointed out that no research laboratory has yet solved AI alignment, the problem of ensuring systems adhere to human values, and warned that the industry's momentum may be too rapid for anyone to find a solution in time. "The faster we race, the less likely anyone finds one in time," he stressed. His comments coincided with China's DeepSeek unveiling a new model that rivals OpenAI's technology despite seemingly far fewer resources. Adler criticized the industry for being trapped in a "really bad equilibrium" and argued for the urgent adoption of "real safety regulations," remarking that even labs intent on developing AGI responsibly could be outpaced by others willing to cut corners, potentially with disastrous results. Adler and OpenAI have been contacted for further comment.


