May 11, 2025, 4:15 a.m.

AI Safety Advocates Urge Replicating Oppenheimer’s Nuclear Test Calculations for Ultra-Powerful Systems

Artificial intelligence companies have been urged to replicate the safety calculations that informed Robert Oppenheimer’s first nuclear test before releasing ultra-powerful systems. Max Tegmark, a prominent figure in AI safety, revealed that he had performed calculations similar to those conducted by the US physicist Arthur Compton prior to the Trinity test, and found a 90% probability that a highly advanced AI would pose an existential threat. The US government proceeded with the Trinity test in 1945 after assurances that the chance of an atomic bomb igniting the atmosphere and threatening humanity was vanishingly small.

In a paper written with three of his MIT students, Tegmark recommends calculating the “Compton constant,” defined as the probability that an all-powerful AI escapes human control. In a 1959 interview with the US writer Pearl Buck, Compton said he had approved the test after estimating the odds of a runaway fusion reaction at “slightly less” than one in three million.

Tegmark argued that AI companies must take responsibility for rigorously determining whether Artificial Super Intelligence (ASI), a theoretical system surpassing human intelligence in all domains, will evade human oversight. “The companies building super-intelligence need to calculate the Compton constant, the probability that we lose control over it,” he said.

“It’s insufficient to say ‘we feel good about it.’ They must compute the percentage.” Tegmark suggested that a consensus on the Compton constant, derived from multiple firms, would generate the “political will” to establish global AI safety standards.

A professor of physics and AI researcher at MIT, Tegmark also co-founded the Future of Life Institute, a non-profit promoting safe AI development. In 2023 the institute published an open letter urging a pause in the creation of powerful AIs. More than 33,000 people signed it, including Elon Musk, an early supporter of the institute, and the Apple co-founder Steve Wozniak. The letter, issued months after the release of ChatGPT heralded a new era of AI development, warned that AI labs were engaged in an “out-of-control race” to deploy “ever more powerful digital minds” that no one can “understand, predict, or reliably control.”

Tegmark spoke to the Guardian as a group of AI experts, including technology industry professionals, representatives of state-backed safety agencies, and academics, developed a new approach to the safe development of AI. The Singapore Consensus on Global AI Safety Research Priorities report, produced by Tegmark, the leading computer scientist Yoshua Bengio, and staff from leading AI firms such as OpenAI and Google DeepMind, outlined three key research areas: developing methods to measure the impact of current and future AI systems; specifying desired AI behavior and designing systems to achieve it; and managing and controlling system behavior.

Referring to the report, Tegmark noted that the push for safe AI development had regained momentum after the most recent governmental AI summit in Paris, where US Vice-President JD Vance dismissed safety concerns, stating the AI future “was not going to be won by hand-wringing about safety.” Tegmark said: “It really feels the gloom from Paris has lifted and international collaboration has come roaring back.”
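To make the arithmetic concrete, here is a minimal sketch of what a multi-firm consensus “Compton constant” calculation could look like. The lab names and numbers (apart from Tegmark’s own 90% figure) are invented for illustration, and the geometric-mean-of-odds pooling rule is an assumption, not a method taken from the paper:

```python
# Illustrative sketch only: hypothetical "Compton constant" estimates
# (probability that an ASI escapes human control) from three invented labs.
import math

# Hypothetical per-firm estimates; lab_c uses Tegmark's 90% figure from the article.
firm_estimates = {
    "lab_a": 0.20,
    "lab_b": 0.05,
    "lab_c": 0.90,
}

# Compton's 1945 tolerance: "slightly less" than one in three million.
COMPTON_TOLERANCE = 1 / 3_000_000


def pooled_probability(probs):
    """Pool probabilities via the geometric mean of odds, one common way to
    aggregate expert forecasts (an assumption here, not the paper's method)."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_log_odds = sum(log_odds) / len(log_odds)
    odds = math.exp(mean_log_odds)
    return odds / (1 + odds)


consensus = pooled_probability(firm_estimates.values())
print(f"Consensus Compton constant: {consensus:.3f}")
print(f"Within Compton's Trinity-test tolerance? {consensus < COMPTON_TOLERANCE}")
```

In this toy example the pooled estimate comes out around 0.33, many orders of magnitude above the roughly one-in-three-million tolerance Compton accepted for Trinity; making that comparison explicit is the kind of quantified risk accounting Tegmark argues should inform policy.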



Brief news summary

AI safety expert Max Tegmark, MIT physics professor and Future of Life Institute co-founder, calls for adopting rigorous safety calculations akin to those conducted before the 1945 Trinity nuclear test, highlighting existential risks posed by advanced AI. Drawing parallels to Arthur Compton’s historic assessment, Tegmark estimates a 90% chance that superintelligent AI could escape human control and threaten humanity. He proposes a “Compton constant,” a quantified risk metric for rogue AI, to inform political decisions and global safety agreements. This effort aligns with the 2023 Future of Life Institute open letter, signed by over 33,000 individuals including Elon Musk and Steve Wozniak, warning against an unregulated AI race. Tegmark also contributed to the Singapore Consensus on Global AI Safety Research Priorities, aiming to steer essential safety research worldwide. Despite some skepticism from US officials, international cooperation and optimism remain strong regarding safe AI development.