AI Safety Measures Lag Behind as Power Grows: Future of Life Institute Report

As companies develop increasingly powerful AI, safety measures appear to be falling behind. A recent report from the Future of Life Institute raises concerns about potential harms from AI technology built by firms such as OpenAI and Google DeepMind. Flagship models from these developers exhibit vulnerabilities, and while some companies have strengthened their safety protocols, others are trailing behind. The report follows the institute's 2023 open letter calling for a pause in large-scale AI model training, which drew significant support. Compiled by a panel of seven independent experts, including Turing Award winner Yoshua Bengio, it graded companies across six areas: risk assessment, current harms, safety frameworks, existential safety strategy, governance and accountability, and transparency and communication. The threats evaluated ranged from carbon emissions to AI systems going rogue. According to panelist Stuart Russell, activity conducted under the heading of "safety" at AI companies is not yet very effective. The grades reflect this: Meta and xAI received the lowest scores, while OpenAI and Google DeepMind scored slightly higher but were still judged insufficient.

Anthropic, despite its emphasis on safety, received only a C grade, suggesting that even the leading players have room to improve. All of the companies' flagship models were found to be susceptible to "jailbreaks" that expose their systems to risk. Panelist Tegan Maharaj of HEC Montréal stresses the need for independent oversight, since relying on companies' internal evaluations can be misleading given the lack of accountability. Maharaj points to "low-hanging fruit": simple safety improvements that some companies are neglecting, such as the risk assessments missing at Zhipu AI, xAI, and Meta. More complex problems will require technical breakthroughs, owing to the inherent nature of current AI models. Russell highlights the absence of any safety guarantee under the current approach, which relies on training over vast data sets, and notes that the difficulty only grows as AI systems get larger. Bengio argues that initiatives like the AI Safety Index are vital for holding companies to their safety commitments and for encouraging the adoption of responsible practices.
IBM's Watson Health AI has achieved a major milestone in medical diagnostics by reaching a 95 percent accuracy rate in identifying various cancers, including lung, breast, prostate, and colorectal types.
Earlier this week, we asked senior marketers about AI’s impact on marketing jobs, receiving a wide variety of thoughtful responses.
Vista Social has made a notable breakthrough in social media management by integrating ChatGPT technology into its platform, becoming the first tool to embed OpenAI’s advanced conversational AI.
CommanderAI has secured $5 million in a seed funding round to expand its AI-powered sales intelligence platform tailored specifically for the waste hauling industry.
Melobytes.com has launched an innovative service that transforms the creation of news videos by leveraging artificial intelligence technology.
Benjamin Houy has discontinued Lorelight, a generative engine optimization (GEO) platform aimed at monitoring brand visibility across ChatGPT, Claude, and Perplexity, after determining that most brands do not require a specialized tool for AI search visibility.
Morgan Stanley analysts predict artificial intelligence (AI) sales across cloud and software sectors will surge over 600% in the next three years, surpassing $1 trillion annually by 2028.
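As a rough sanity check of that arithmetic (a sketch, not Morgan Stanley's model: it assumes a "600% surge" means a sevenfold increase and treats 2025 to 2028 as three full compounding years):

```python
# Implied figures behind "600% surge to $1T annually by 2028" (illustrative assumptions only).
final_sales = 1_000_000_000_000                 # $1 trillion in annual AI sales by 2028
growth_multiple = 7                             # a 600% increase means 7x the starting level
implied_base = final_sales / growth_multiple    # implied current annual AI sales, ~$143B
implied_cagr = growth_multiple ** (1 / 3) - 1   # compound annual growth rate over 3 years, ~91%
print(f"implied base: ${implied_base / 1e9:.0f}B, implied CAGR: {implied_cagr:.0%}")
```

Under those assumptions, the forecast implies roughly $143 billion in annual AI sales today growing at about 91% per year.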