Introducing DataGemma: Grounding LLMs with Google’s Data Commons

Large Language Models (LLMs) have changed the way we interact with information, but grounding their outputs in verifiable facts remains a major challenge. The difficulty is exacerbated by the fragmented nature of real-world knowledge, spread across sources with differing formats and APIs that complicate integration. This lack of grounding often results in "hallucinations," where LLMs produce incorrect or misleading information. Because our research focuses on creating responsible and trustworthy AI systems, addressing hallucinations in LLMs is essential.

We are pleased to introduce DataGemma, an experimental set of open models designed to tackle hallucination by grounding LLMs in the extensive statistical data available in Google's Data Commons. Data Commons already features a natural language interface, allowing users to query data without writing traditional database queries. For instance, one can ask, "What industries contribute to California jobs?" or "Have any countries increased their forest land?" By abstracting away these diverse data formats, Data Commons in effect acts as a universal API for LLMs.

DataGemma extends the Gemma family of lightweight, state-of-the-art open models, which build on the technologies underlying our Gemini models. By drawing on the knowledge stored in Data Commons, DataGemma aims to improve the factual accuracy and reasoning of LLMs, using advanced retrieval techniques to integrate data from credible institutions and thereby reduce hallucinations.

DataGemma operates through natural language queries, so users need not understand complex data schemas. It employs two methodologies: Retrieval Interleaved Generation (RIG) and Retrieval Augmented Generation (RAG). RIG interleaves retrieval with generation: as the model produces statistics, it also emits natural language queries to Data Commons that are used to check and correct those figures. RAG, by contrast, retrieves relevant data from Data Commons prior to text generation, ensuring a solid factual basis for responses.
A challenge with RAG is managing the large volume of data returned by broad queries, which averages about 38,000 tokens and can reach 348,000 tokens.
This is made feasible by Gemini 1.5 Pro's long context window, which permits extensive data integration. Here's how RAG functions:

1. **User submission**: A user poses a question to the LLM.
2. **Query processing**: The DataGemma model analyzes the input and formulates a natural language query for Data Commons.
3. **Data retrieval**: The model queries Data Commons and retrieves pertinent data tables.
4. **Prompt augmentation**: The retrieved data is combined with the user's original query.
5. **Response generation**: The larger LLM then generates a well-rounded, fact-based response from the augmented prompt.

This approach has clear advantages: accuracy improves as LLMs evolve and use context more effectively. However, modifying the user prompt can sometimes diminish the user experience, and effectiveness depends heavily on the quality of the generated queries.

We recognize that DataGemma is just the beginning in developing grounded AI, and we invite researchers, developers, and enthusiasts to explore this tool with us. Our aim is to ground LLMs in Data Commons' real-world data, enhancing AI's ability to provide intelligent, evidence-based information. We encourage reading our accompanying research paper for further insights. We also hope others extend this research beyond our approach: Data Commons offers the means for third parties to create their own instances, and the principles of this research apply to other knowledge graph formats as well, an area we anticipate will see further exploration.

To get started with DataGemma, download the models from Hugging Face or Kaggle (RIG, RAG) and check out our quickstart notebooks, which provide practical introductions to both approaches.
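As a recap, the five-step RAG flow described above can be sketched in a few functions. The Data Commons call and the LLM are mocked out here; the function names, the table format, and the example figures are assumptions for illustration, not the actual DataGemma pipeline.

```python
def formulate_dc_query(user_question: str) -> str:
    """Step 2: turn the user's question into a natural language Data Commons query."""
    return f"Statistics relevant to: {user_question}"

def fetch_data_commons_tables(dc_query: str) -> list[dict]:
    """Step 3: stand-in for the real Data Commons retrieval call (mocked)."""
    return [{"variable": "Count_Worker", "place": "California", "value": 18000000}]

def augment_prompt(user_question: str, tables: list[dict]) -> str:
    """Step 4: combine the retrieved tables with the original question."""
    table_text = "\n".join(str(row) for row in tables)
    return (f"Question: {user_question}\n\n"
            f"Retrieved data:\n{table_text}\n\n"
            f"Answer using the data above.")

def generate_response(prompt: str) -> str:
    """Step 5: stand-in for the long-context LLM that produces the final answer."""
    return f"[LLM answer grounded in]\n{prompt}"

question = "What industries contribute to California jobs?"  # step 1
dc_query = formulate_dc_query(question)                      # step 2
tables = fetch_data_commons_tables(dc_query)                 # step 3
prompt = augment_prompt(question, tables)                    # step 4
answer = generate_response(prompt)                           # step 5
print(answer)
```

Because the retrieved tables are placed in the prompt before generation begins, the quality of the final answer hinges on step 2: a poorly formulated Data Commons query retrieves the wrong tables, and no amount of context length can compensate.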