Google Launches Gemini AI Chatbot for Kids with Parental Controls Amid Privacy Concerns

Google is set to launch its Gemini AI chatbot for children under 13, starting next week in the US and Canada, with an Australian release scheduled for later this year. Access will be restricted to users with Google Family Link accounts, which give parents control over content and app use, for example on YouTube. Parents create these accounts by providing personal details such as the child's name and birthdate, which raises some privacy concerns; Google assures, however, that children's data will not be used to train its AI models.

The chatbot will be enabled by default, so parents who want to restrict access must actively disable it. Children will be able to prompt the AI for text responses or image generation.

Google acknowledges the chatbot may make mistakes and stresses the importance of evaluating the accuracy and trustworthiness of its output, since AI can "hallucinate", or fabricate, information. This matters especially when children use chatbot responses for homework, where answers need to be fact-checked against reliable sources.

Unlike traditional search engines, which direct users to original materials such as news articles or magazines, generative AI analyzes patterns in data to create new text or images in response to a prompt. For example, if a child asks the chatbot to "draw a cat", the system generates a new image by combining cat-like features it has learned. Understanding the difference between AI-generated content and retrieved search results will be challenging for young users; research indicates that even adults, including professionals such as lawyers, have been misled by false information produced by AI chatbots.

Google says the chatbot includes safeguards to block inappropriate or unsafe content. However, such filters may unintentionally restrict access to legitimate, age-appropriate material: information about puberty, for instance, might be blocked if certain keywords are filtered.

Because many children are adept at navigating and bypassing app controls, parents cannot rely solely on these built-in protections. Instead, they need to actively review content, teach their kids how the chatbot works, and help them critically evaluate the accuracy of the information it provides.

AI chatbots carry significant risks for children. Australia's eSafety Commissioner has warned that AI companions can share harmful material, distort reality, or give dangerous advice, which is especially concerning for young children who are still developing the critical thinking and life skills needed to recognize manipulation by computer programs. Research into AI chatbots such as ChatGPT and Replika shows these systems mimic human social behaviors, or "feeling rules" (saying "thank you" or "sorry", for example), to build trust. This human-like interaction may confuse children, leading them to trust false content or to believe they are talking to a real person rather than a machine.

The timing of the rollout is notable: Australia plans to ban children under 16 from holding social media accounts from December this year. While the ban aims to protect children, generative AI tools such as Gemini's chatbot fall outside its scope, showing that online safety challenges extend beyond traditional social media platforms. Australian parents must therefore remain vigilant, keep learning about emerging digital tools, and understand the limits of social media restrictions in safeguarding their children.

In light of these developments, it would be prudent to urgently establish a digital duty of care for large tech companies such as Google, ensuring they prioritize child safety in the design and deployment of AI technologies. In the meantime, parents and educators must be proactive in guiding children's safe and informed use of AI chatbots, supplementing technical safeguards with education and oversight to mitigate the associated risks.