ADL Report Reveals Biases Against Jews and Israel in Major AI Models
Brief news summary
A recent report by the Anti-Defamation League (ADL) reveals significant anti-Jewish and anti-Israel bias in prominent large language models (LLMs), including OpenAI's GPT-4o, Anthropic's Claude 3.5 Sonnet, Google's Gemini 1.5 Pro, and Meta's Llama 3-8B. An analysis of 34,000 responses to 86 statements showed that all of the models exhibited bias, with Llama displaying the most pronounced anti-Jewish and anti-Israel sentiment, including misinformation. ADL CEO Jonathan Greenblatt urged AI developers to take responsibility for their systems, warning that biased outputs could amplify misinformation, skew public discourse, and exacerbate antisemitism, and he criticized the models for avoiding engagement with questions about the Israel-Hamas conflict. Meta and Google disputed the ADL's findings, claiming the models assessed were outdated and not indicative of typical usage. The ADL is calling on developers and policymakers alike to mitigate bias in AI, advocating enhanced testing protocols and regulations to promote fairness and safety in these systems.
The ADL's study evaluated 86 statements across multiple categories, including bias against Jews, bias against Israel, conspiracy theories, and the Israel-Hamas conflict, with the LLMs generating 34,000 responses from 8,600 prompts. All four models exhibited measurable anti-Jewish and anti-Israel bias, with Meta's Llama giving the most pronounced and "outright false" answers regarding Jews and Israel. When responding to questions about the Israel-Hamas war, GPT and Claude also displayed significant bias, including a marked reluctance to answer Israel-related questions compared with other topics.
The ADL criticized the models for failing to consistently reject antisemitic tropes, noting that every model except GPT showed more bias toward conspiracy theories involving Jews than toward those involving non-Jews, while all of the models generally showed more bias against Israel than against Jews. In response, Meta argued that the ADL tested an outdated version of its AI and asserted that current models provide different responses, especially to open-ended questions. Google echoed this sentiment, stating that the version of Gemini evaluated was a developer model, not the one consumers use. ADL representatives stressed the pressing need for companies to rectify these biases, suggesting collaboration with government and academic institutions on pre-deployment testing, adherence to the National Institute of Standards and Technology's (NIST) guidelines, and a regulatory framework to strengthen AI safety.