Introducing DataGemma: Grounding LLMs with Google’s Data Commons

Large Language Models (LLMs) have changed the way we interact with information, but grounding their outputs in verifiable facts remains a major challenge. The difficulty is exacerbated by the fragmented nature of real-world knowledge, which is spread across sources with differing formats and APIs that complicate integration. Without grounding, LLMs often produce "hallucinations": incorrect or misleading information presented as fact. Because our research focuses on creating responsible and trustworthy AI systems, addressing hallucinations is essential.

We are pleased to introduce DataGemma, an experimental set of open models designed to tackle hallucinations by grounding LLMs in the extensive statistical data available in Google's Data Commons. Data Commons already features a natural language interface, allowing users to query data without writing traditional database queries. For instance, one can ask, "What industries contribute to California jobs?" or "Have any countries increased their forest land?" By acting as a kind of universal API, Data Commons spares LLMs from dealing with diverse data formats and APIs directly.

DataGemma builds on the Gemma family of lightweight, state-of-the-art open models, which use the technologies underlying our Gemini models. By drawing on the knowledge stored in Data Commons, DataGemma aims to improve the factual accuracy and reasoning of LLMs, employing retrieval techniques to integrate data from credible institutions and thereby reduce hallucinations.

DataGemma operates through natural language queries, so users do not need to understand complex data schemas. It employs two methodologies: Retrieval Interleaved Generation (RIG) and Retrieval Augmented Generation (RAG). RAG retrieves relevant data from Data Commons before text generation, giving responses a solid factual basis.
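The two methodologies differ mainly in *when* the Data Commons lookup happens relative to text generation. The following is a minimal sketch of that difference in control flow; every function name here is an illustrative stand-in, not part of the actual DataGemma or Data Commons APIs, and the retrieval call is a stub.

```python
def fetch_statistic(natural_language_query: str) -> str:
    """Stub standing in for Data Commons' natural language interface."""
    return "[value retrieved from Data Commons]"


def answer_with_rig(question: str) -> str:
    # RIG: the model drafts an answer and interleaves Data Commons
    # lookups at the points where it would otherwise state a statistic.
    draft = f"Drafted answer to: {question}, citing STAT"
    return draft.replace("STAT", fetch_statistic(question))


def answer_with_rag(question: str) -> str:
    # RAG: relevant data is retrieved *before* generation and placed
    # into the prompt that the generating LLM finally sees.
    context = fetch_statistic(question)
    augmented_prompt = f"Context: {context}\nQuestion: {question}"
    return f"Answer generated from augmented prompt:\n{augmented_prompt}"
```

In this sketch, RIG edits statistics into a draft mid-generation, while RAG front-loads the retrieved context so the whole response is conditioned on it.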
A challenge with RAG is managing the large volume of data returned by broad queries, which averages around 38,000 tokens, with some responses reaching 348,000 tokens.
This is feasible thanks to Gemini 1.5 Pro’s long context window, which permits extensive data integration. Here’s how RAG functions:

1. **User submission**: A user poses a question to the LLM.
2. **Query processing**: The DataGemma model analyzes the input and formulates a natural language query for Data Commons.
3. **Data retrieval**: The model queries Data Commons and retrieves the pertinent data tables.
4. **Prompt augmentation**: The retrieved data is combined with the user’s original query.
5. **Response generation**: The larger LLM then uses the augmented prompt to generate a well-rounded, fact-based response.

This approach has clear advantages: accuracy should continue to improve as LLMs evolve and make better use of long context. However, modifying the user prompt can sometimes diminish the user experience, and effectiveness depends largely on the quality of the generated queries.

We recognize that DataGemma is just the beginning of our work on grounded AI, and we invite researchers, developers, and enthusiasts to explore this tool with us. Our aim is to ground LLMs in Data Commons’ real-world data, enhancing AI's ability to provide intelligent, evidence-based information. We encourage reading our accompanying research paper for further insights. We also hope others extend this research beyond our approach: Data Commons offers means for third parties to create their own instances, and the principles of this research apply to other knowledge graph formats as well.

To get started with DataGemma, download the models from Hugging Face or Kaggle (RIG, RAG) and check out our quickstart notebooks, which provide practical introductions to both approaches.
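The five RAG steps described above can be sketched as a simple pipeline. This is an illustrative sketch only: all function names are hypothetical, and the Data Commons retrieval and LLM generation steps are stubs standing in for the real calls.

```python
def formulate_query(user_question: str) -> str:
    # Step 2: the DataGemma model turns the user input into a
    # natural language query suitable for Data Commons.
    return f"Data Commons query for: {user_question}"


def retrieve_tables(query: str) -> list[str]:
    # Step 3: query Data Commons and collect the pertinent data tables
    # (stubbed here; responses can run to hundreds of thousands of tokens).
    return [f"table matching '{query}'"]


def augment_prompt(user_question: str, tables: list[str]) -> str:
    # Step 4: merge the retrieved tables with the user's original query.
    return "\n".join(tables) + f"\n\nQuestion: {user_question}"


def generate_response(prompt: str) -> str:
    # Step 5: a long-context model (e.g. Gemini 1.5 Pro) generates the
    # final, fact-grounded answer from the augmented prompt.
    return f"Grounded answer based on:\n{prompt}"


def rag_pipeline(user_question: str) -> str:
    # Step 1: the user's question enters the pipeline here.
    tables = retrieve_tables(formulate_query(user_question))
    return generate_response(augment_prompt(user_question, tables))
```

The long context window matters at step 5: the augmented prompt carries the full retrieved tables, so the generating model must be able to accept them in one pass.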