Jan. 28, 2025, 10:11 p.m.

Concerns Over AI Development: Ex-OpenAI Researcher Steven Adler Warns of Risks

Brief news summary

Steven Adler, a former safety researcher at OpenAI, has voiced deep concern about the rapid pace of artificial intelligence development, calling it a "very risky gamble." Since leaving OpenAI in November, he has warned that artificial general intelligence (AGI), meaning systems able to match or surpass human capabilities, could pose serious risks to society; he admits that the prospect makes him question personal milestones such as family planning and retirement. Adler stresses the unsolved problem of AI alignment, that is, ensuring systems reliably adhere to human values, and warns that the industry's accelerating pace reduces the likelihood of a solution arriving in time. His concerns echo those of experts like Geoffrey Hinton, who cautions that advanced AI could evade human oversight, while others, such as Yann LeCun, emphasize the technology's potential benefits. Adler criticizes the industry's complacency and calls for immediate, rigorous safety regulations and responsible development practices, an urgency underscored by global competitors such as China's DeepSeek making strides toward AGI. Together, these factors make a compelling case for proactive AI governance.

Steven Adler, a former safety researcher at OpenAI, has voiced significant concerns about the rapid advancement of artificial intelligence, saying he is "pretty terrified" by the industry's approach to the technology, which he considers a "very risky gamble." Adler, who departed OpenAI in November, described his worries about the race among companies to create artificial general intelligence (AGI), a term for systems that can perform any intellectual task as well as or better than humans. In a series of posts on X, he called his time at OpenAI a "wild ride" and said he would miss "many parts of it." Reflecting on the swift pace of development, he expressed uncertainty about humanity's future. "I'm pretty terrified by the pace of AI development these days," Adler wrote, adding that when he thinks about family planning and retirement, he questions whether humanity will even reach those milestones.

Prominent figures in the field, including Nobel laureate Geoffrey Hinton, share Adler's concerns, fearing that advanced AI systems could evade human oversight with devastating consequences. Others, such as Yann LeCun, chief AI scientist at Meta, have downplayed the existential risks, suggesting AI could instead help save humanity from extinction. According to his LinkedIn profile, Adler spent four years at OpenAI leading safety-related research for initial product launches and for more speculative, long-term AI systems.

He cautioned against the AGI race, arguing that it carries substantial potential downsides. Adler noted that no research laboratory has yet solved AI alignment, the problem of ensuring systems adhere to human values, and warned that the industry's momentum may be too rapid for anyone to find a solution in time. "The faster we race, the less likely anyone finds one in time," he stressed. His comments coincided with China's DeepSeek unveiling a new model that rivals OpenAI's technology despite apparently having far fewer resources. Adler criticized the industry for being trapped in a "really bad equilibrium" and asserted the urgent need for "real safety regulations," remarking that even labs intent on developing AGI responsibly could be outpaced by others willing to take shortcuts, with potentially disastrous results. Adler and OpenAI have been contacted for comment.


