Concerns Over AI Development: Ex-OpenAI Researcher Steven Adler Warns of Risks

A former safety researcher at OpenAI, Steven Adler, has voiced significant concerns regarding the rapid advancement of artificial intelligence, stating he is "pretty terrified" about the industry's approach to this technology, which he considers a "very risky gamble." Adler, who departed OpenAI in November, articulated his worries about the race among companies to create artificial general intelligence (AGI), a term for systems that can perform any intellectual task as well as or better than humans.

He detailed these apprehensions in several posts on X, describing his time at OpenAI as a "wild ride" and saying he would miss "many parts of it." He also reflected on the swift pace of technological development, expressing uncertainty about the future of humanity. "I'm pretty terrified by the pace of AI development these days," Adler remarked. He shared personal concerns about family planning and retirement, questioning whether humanity would even reach those milestones.

Prominent figures in the field, including Nobel laureate Geoffrey Hinton, share Adler's concerns, fearing that advanced AI systems could evade human oversight, with devastating consequences. Conversely, others, such as Yann LeCun, chief AI scientist at Meta, have downplayed the existential risks, suggesting that AI could in fact save humanity from extinction. On his LinkedIn profile, Adler stated that during his four years at OpenAI he led safety-related research both for initial product launches and for more speculative, long-term AI systems.
He cautioned against the AGI race, warning that it carries substantial potential downsides. Adler pointed out that no current research laboratory has solved AI alignment (ensuring that systems adhere to human values) and warned that the industry's momentum may be too rapid for a solution to be found in time. "The faster we race, the less likely anyone finds one in time," he stressed. His comments coincided with China's DeepSeek unveiling a new model that competes with OpenAI's technology despite apparently having far fewer resources. Adler criticized the industry for being trapped in a "really bad equilibrium," asserting the urgent need for "real safety regulations." He remarked that even labs intent on developing AGI responsibly could be outpaced by others willing to take shortcuts, potentially leading to disastrous outcomes. Adler and OpenAI have been contacted for further comment.
Brief news summary
Steven Adler, a former safety researcher at OpenAI, has expressed deep concerns about the swift development of artificial intelligence (AI), characterizing the industry's approach as a "very risky gamble." Since his departure from OpenAI in November, he has highlighted the potential of artificial general intelligence (AGI) to surpass human capabilities and pose significant risks to society, even questioning whether humanity will live to see milestones such as raising a family or retiring. Adler emphasizes the critical need for AI systems that align with human values, warning that the rapid pace of AI advancement undermines these efforts and reduces the likelihood of finding viable solutions in time. His views align with those of experts like Geoffrey Hinton, who caution that advanced AI could escape human oversight, while others, such as Yann LeCun, downplay the existential risks and point to AI's potential benefits. Adler criticizes the industry's dynamics and calls for immediate, rigorous safety regulations and responsible practices to mitigate catastrophic risks, an urgency underscored by competitors such as China's DeepSeek making rapid strides. Together, these factors present a compelling case for proactive AI governance.