Study Reveals AI Chatbots Frequently Provide Incorrect Answers
Brief news summary
A study published in *Nature* by José Hernández-Orallo of the Valencian Research Institute for Artificial Intelligence examines the performance of advanced AI chatbots, including OpenAI's GPT, Meta's LLaMA, and BigScience's BLOOM. The research highlights a significant issue: over 60% of the analyzed responses were incorrect or evasive, raising concerns about users' understanding of AI capabilities. Drawing on thousands of prompts, the study found that models like GPT-4 often attempt to answer complex questions, increasing the likelihood of errors and leading users to mistakenly trust the resulting inaccuracies. Hernández-Orallo recommends that AI developers prioritize accuracy on simpler queries and train models to decline questions that are too difficult. Although some AI models do express uncertainty with statements like "I don't know," they frequently deliver incorrect answers with confidence, which may cause users to overestimate the reliability of AI systems.

The study of advanced versions of three popular AI chatbots found that they tend to generate incorrect answers rather than admit when they don't know something. Analyzing the errors of large language models (LLMs), the researchers noted that while accuracy improves with model size and refinement, the rate of incorrect responses has also risen. Instead of declining difficult questions, these models often answer anyway, producing more misleading responses. Hernández-Orallo observes that chatbots are becoming more adept at mimicking knowledge without genuine understanding, a phenomenon described as "ultracrepidarianism." This can lead users to overestimate chatbot abilities, which poses risks. The team assessed the accuracy of OpenAI's GPT, Meta's LLaMA, and the open-source BLOOM across various question types.
They found that even the improved models gave incorrect or evasive responses to more than 60% of questions. Moreover, human volunteers often miscategorized incorrect answers as correct, showing how difficult it is for people to supervise these models effectively. To improve user understanding, Hernández-Orallo suggests that developers boost performance on simple questions and train chatbots to refrain from answering difficult ones, which would help users identify where AI is reliable and where it isn't. Although some chatbots can acknowledge their lack of knowledge, the push for models to tackle difficult questions remains strong, especially for those marketed as general-purpose.