Jan. 28, 2025, 10:11 p.m.

Concerns Over AI Development: Ex-OpenAI Researcher Steven Adler Warns of Risks

Steven Adler, a former safety researcher at OpenAI, has voiced significant concerns about the rapid advancement of artificial intelligence, saying he is “pretty terrified” by the industry’s approach to the technology, which he considers a “very risky gamble.” Adler, who departed OpenAI in November, described his worries about the race among companies to build artificial general intelligence (AGI), meaning systems that can perform any intellectual task as well as or better than humans.

He detailed these apprehensions in a series of posts on X, describing his time at OpenAI as a “wild ride” and saying he would miss “many parts of it.” He also dwelt on the speed of technological progress and his uncertainty about humanity’s future. “I’m pretty terrified by the pace of AI development these days,” Adler remarked, reflecting on personal questions about family planning and retirement and wondering whether humanity would even reach those milestones.

Prominent figures in the field, including Nobel laureate Geoffrey Hinton, share Adler’s fear that advanced AI systems could evade human oversight, with devastating consequences. Others, such as Yann LeCun, chief AI scientist at Meta, have downplayed the existential risks, suggesting that AI could instead help save humanity from extinction. According to his LinkedIn profile, Adler led safety-related research for initial product launches and for more speculative, long-term AI systems during his four years at OpenAI.

He cautioned against the AGI race, calling it a gamble with a huge downside. Adler pointed out that no research laboratory has yet solved AI alignment, the problem of ensuring that systems adhere to human values, and warned that the industry may be moving too fast to find a solution in time. “The faster we race, the less likely anyone finds one in time,” he stressed. His comments coincided with China’s DeepSeek unveiling a new model that rivals OpenAI’s technology despite apparently having far fewer resources. Adler criticized the industry for being stuck in a “really bad equilibrium” and argued for the urgent adoption of “real safety regulations.” Even labs intent on developing AGI responsibly, he noted, could be outpaced by others willing to cut corners, with potentially disastrous results. Adler and OpenAI have been approached for comment.



Brief news summary

Steven Adler, a former safety researcher at OpenAI, expresses deep concern about the swift development of artificial intelligence (AI), characterizing the industry’s current course as a “very risky gamble.” Since leaving OpenAI in November, he has warned that the race toward artificial general intelligence (AGI), systems that could surpass human capabilities, poses serious risks; the pace, he says, makes him question whether humanity will even reach personal milestones such as raising a family or retiring. Adler stresses that no lab has solved AI alignment, the problem of ensuring systems adhere to human values, and that the faster the industry races, the less likely anyone finds a solution in time. His views echo those of experts like Geoffrey Hinton, who fear advanced AI escaping human oversight, while others, such as Yann LeCun, downplay the existential risk and emphasize AI’s potential benefits. Adler describes the industry as stuck in a “really bad equilibrium” and calls for immediate, rigorous safety regulations, warning that even well-intentioned labs could be outpaced by those taking shortcuts. The urgency is underscored by competitors such as China’s DeepSeek releasing models that rival OpenAI’s despite fewer resources, strengthening the case for proactive measures in AI governance.


