AI Safety Advocates Urge Replicating Oppenheimer’s Nuclear Test Calculations for Ultra-Powerful Systems

Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer’s first nuclear test before releasing ultra-powerful systems.

Max Tegmark, a prominent figure in AI safety, revealed that he had performed calculations similar to those conducted by US physicist Arthur Compton prior to the Trinity test, and found a 90% probability that a highly advanced AI would present an existential risk. The US government proceeded with the Trinity test in 1945 after assurances that the chance of an atomic bomb igniting the atmosphere and threatening humanity was vanishingly small.

In a paper written with three of his MIT students, Tegmark recommends calculating the “Compton constant,” defined as the probability that an all-powerful AI escapes human control. Compton, in a 1959 interview with US writer Pearl Buck, said he had approved the test after estimating the odds of a runaway fusion reaction at “slightly less” than one in three million.

Tegmark argued that AI companies must take responsibility for rigorously determining whether Artificial Super Intelligence (ASI), a theoretical system surpassing human intelligence in all domains, will evade human oversight.

“The companies building super-intelligence need to calculate the Compton constant, the probability that we lose control over it,” he said. “It’s insufficient to say ‘we feel good about it.’ They must compute the percentage.”

Tegmark suggested that a consensus on the Compton constant, derived from calculations by multiple firms, would generate the “political will” to establish global AI safety standards.

Tegmark, a professor of physics and AI researcher at MIT, co-founded the Future of Life Institute, a non-profit that promotes safe AI development. In 2023 the institute published an open letter urging a pause in the creation of powerful AIs. More than 33,000 people signed it, including Elon Musk, an early supporter of the institute, and Apple co-founder Steve Wozniak.

The letter, issued months after ChatGPT’s release heralded a new era of AI development, warned that AI labs were engaged in an “out-of-control race” to deploy “ever more powerful digital minds” that no one can “understand, predict, or reliably control.”

Tegmark spoke to the Guardian as a group of AI experts, including technology industry professionals, representatives of state-backed safety agencies, and academics, set out a new approach to developing AI safely.

The Singapore Consensus on Global AI Safety Research Priorities report, produced by Tegmark, the leading computer scientist Yoshua Bengio, and staff from leading AI firms such as OpenAI and Google DeepMind, outlined three key research areas: developing methods to measure the impact of current and future AI systems; specifying desired AI behavior and designing systems to achieve it; and managing and controlling AI behavior.

Referring to the report, Tegmark said the push for safe AI development had regained momentum after the most recent governmental AI summit in Paris, where US Vice-President JD Vance dismissed safety concerns, saying the AI future “was not going to be won by hand-wringing about safety.”

Tegmark said: “It really feels the gloom from Paris has lifted and international collaboration has come roaring back.”
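By way of illustration only, the sketch below shows the kind of comparison Tegmark is calling for. Neither the article nor the paper spells out a methodology, so the per-lab figures are invented placeholders; the only sourced numbers are Compton’s “slightly less than one in three million” threshold and Tegmark’s own 90% estimate.

```python
# Illustrative sketch only: the "Compton constant" is defined in the article as
# the probability that an all-powerful AI escapes human control. The per-lab
# estimates below are hypothetical, not taken from any published paper.

per_lab_estimates = {
    "lab_a": 0.02,   # hypothetical
    "lab_b": 0.005,  # hypothetical
    "lab_c": 0.10,   # hypothetical
}

# One simple "consensus" figure: the mean of the independent estimates.
consensus = sum(per_lab_estimates.values()) / len(per_lab_estimates)

# Sourced reference points from the article.
compton_threshold = 1 / 3_000_000  # Compton's Trinity-test bound
tegmark_estimate = 0.90            # Tegmark's own calculation

print(f"Mean of hypothetical lab estimates: {consensus:.3f}")
print(f"Tegmark's figure:                   {tegmark_estimate:.2f}")
print(f"Compton's Trinity-test threshold:   {compton_threshold:.2e}")
print(f"Consensus exceeds Compton's bar by a factor of "
      f"~{consensus / compton_threshold:,.0f}")
```

The point of the exercise, in Tegmark’s framing, is not the particular numbers but that each company publishes an explicit percentage rather than a qualitative assurance.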