AI Safety Index Reveals Major AI Companies Fail to Adequately Protect Humanity from AI Risks

Are AI companies adequately protecting humanity from the risks of artificial intelligence? According to a new report card from the Future of Life Institute, a Silicon Valley nonprofit, the answer is likely no.

As AI becomes increasingly integral to human-technology interactions, potential harms are surfacing, ranging from people using AI chatbots for counseling and subsequently dying by suicide to AI-enabled cyberattacks. Future risks also loom, including AI's use in weaponry or in attempts to destabilize governments. Yet AI firms have little incentive to prioritize global safety. The Institute's recently published AI Safety Index, which seeks to steer AI development toward safer outcomes and mitigate existential threats, highlights this gap.

Max Tegmark, the Institute's president and an MIT professor, noted that AI companies constitute the only U.S. industry producing powerful technology without regulation, creating a "race to the bottom" in which safety is often neglected.

The highest grades on the index were only C+, awarded to OpenAI, developer of ChatGPT, and Anthropic, known for its chatbot Claude. Google's AI division, Google DeepMind, received a C. Lower grades included a D for Meta (Facebook's parent company) and for Elon Musk's xAI, both based near Palo Alto. Chinese firms Z.ai and DeepSeek also earned a D. Alibaba Cloud received the lowest rating, a D-.

Companies were evaluated on 35 indicators across six categories, including existential safety, risk assessment, and information sharing. The assessment combined publicly available data with company survey responses, scored by eight AI experts, including academics and organization leaders. Notably, every firm scored below average in existential safety, which measures internal controls and strategies to prevent catastrophic AI misuse.
The report stated that none of the companies demonstrated credible plans to prevent loss of control or severe misuse as AI advances toward general intelligence and superintelligence.

Both Google DeepMind and OpenAI affirmed their commitment to safety. OpenAI emphasized its investment in frontier safety research, rigorous testing, and the sharing of safety frameworks to raise industry standards. Google DeepMind highlighted its science-driven safety approach and its protocols for mitigating severe risks from advanced AI models before those risks materialize.

By contrast, the Institute found that xAI and Meta have risk management frameworks but lack adequate monitoring and control commitments as well as notable investments in safety research. Firms such as DeepSeek, Z.ai, and Alibaba Cloud have no publicly available safety strategy documentation.

Meta, Z.ai, DeepSeek, Alibaba, and Anthropic did not respond to requests for comment. xAI dismissed the report as "Legacy Media Lies," and Musk's attorney did not reply to further inquiries. Though Musk advises and has funded the Future of Life Institute, he was not involved in producing the AI Safety Index.

Tegmark expressed concern that insufficient regulation could enable terrorists to develop bioweapons, amplify manipulation beyond current levels, or destabilize governments. He stressed that the fix is straightforward: binding safety standards for AI companies.

While some government efforts aim to strengthen AI oversight, tech lobbying has opposed such regulations, citing fears of stifled innovation or corporate flight. Nonetheless, legislation like California's SB 53, signed by Governor Gavin Newsom in September, requires companies to disclose safety and security protocols and to report incidents such as cyberattacks. Tegmark regards the law as progress but stresses that substantially more action is needed.

Rob Enderle, principal analyst at the Enderle Group, called the AI Safety Index a compelling approach to AI's regulatory challenges but questioned the current U.S. administration's capacity to implement effective regulations. He warned that poorly crafted rules could cause harm and doubted that enforcement mechanisms currently exist to ensure compliance.

In sum, the AI Safety Index shows that major AI developers have yet to demonstrate robust safety commitments, underscoring the urgent need for stronger regulation to safeguard humanity from AI's growing risks.