Jan. 28, 2025, 10:11 p.m.

Concerns Over AI Development: Ex-OpenAI Researcher Steven Adler Warns of Risks

Brief news summary

Steven Adler, a former safety researcher at OpenAI, has expressed deep concern about the swift development of artificial intelligence, characterizing it as a "very risky gamble." Since his departure from OpenAI in November, he has highlighted the potential of artificial general intelligence (AGI) to surpass human capabilities and pose significant risks to society; he questions whether humanity will even reach personal milestones such as raising a family or retiring. Adler emphasizes the critical need for AI systems that align with human values, warning that the rapid pace of AI advancement undermines these efforts and reduces the likelihood of finding viable solutions in time. His views align with those of experts like Geoffrey Hinton, who caution against unsupervised AI, while others, such as Yann LeCun, emphasize its potential benefits. Adler criticizes the tech industry's complacency and calls for immediate, rigorous safety regulations and responsible practices to mitigate catastrophic risks. He underscores the urgency of these issues, particularly as global competitors like China's DeepSeek make strides toward AGI. Together, these factors make a compelling case for proactive AI governance.

Steven Adler, a former safety researcher at OpenAI, has voiced significant concerns about the rapid advancement of artificial intelligence, saying he is "pretty terrified" of the industry's approach to the technology, which he considers a "very risky gamble." Adler, who left OpenAI in November, described his worries about the race among companies to build artificial general intelligence (AGI), systems that can perform any intellectual task as well as or better than humans. He detailed these apprehensions in several posts on X, describing his time at OpenAI as a “wild ride” and saying he would miss “many parts of it.” Reflecting on the swift pace of development, he expressed uncertainty about humanity's future: “I’m pretty terrified by the pace of AI development these days,” Adler remarked. He added personal concerns about family planning and retirement, questioning whether humanity would even reach those milestones. Prominent figures in the field, including Nobel laureate Geoffrey Hinton, share Adler’s concerns, fearing that advanced AI systems could evade human oversight, with devastating consequences. Conversely, others, like Yann LeCun, chief AI scientist at Meta, have downplayed the existential risks, suggesting AI could instead help save humanity from extinction. On his LinkedIn profile, Adler states that during his four years at OpenAI he led safety-related research for both initial product launches and more speculative, long-term AI systems.

He cautioned against the AGI race, saying it carries substantial potential downsides. Adler noted that no research laboratory has yet solved AI alignment (ensuring systems adhere to human values) and warned that the industry's momentum may be too rapid to find a solution in time. “The faster we race, the less likely anyone finds one in time,” he stressed. His comments coincided with China's DeepSeek unveiling a new model that rivals OpenAI's technology despite apparently far fewer resources. Adler criticized the industry for being stuck in a "really bad equilibrium," asserting the urgent need for "real safety regulations." Even labs intent on developing AGI responsibly, he remarked, could be outpaced by others taking shortcuts, potentially with disastrous results. Adler and OpenAI have been contacted for comment.
