May 9, 2025, 9:20 p.m.

Google Launches Gemini AI Chatbot for Kids with Parental Controls Amid Privacy Concerns

Brief news summary

Google is preparing to launch its Gemini AI chatbot for children under 13 in the US and Canada soon, with Australia to follow later this year. The chatbot will be accessible exclusively through Google Family Link accounts, enabling kids to request text and image responses. Google emphasizes that children’s data will not be used to train AI models and that strong safeguards will prevent inappropriate content. Despite these measures, concerns persist about privacy, misinformation, and the reliability of AI-generated information. Unlike traditional search engines, generative AI produces new content based on patterns, which may confuse young users who might have difficulty distinguishing between factual and fabricated information. Experts warn about risks such as exposure to harmful content, distorted views of reality, and excessive dependence on AI tools, stressing the importance of parental supervision. Australia’s proposed ban on social media accounts for under-16s reflects broader challenges in protecting children online. Parents are encouraged to stay informed, teach digital literacy, and carefully monitor their children’s internet use. Increasing calls for stricter digital duty of care laws aim to hold tech companies like Google accountable and strengthen protections as AI technology advances rapidly.

Google is set to launch its Gemini AI chatbot for children under 13, starting next week in the US and Canada, with Australia’s release scheduled for later this year. Access will be restricted to users with Google Family Link accounts, which allow parental control over content and app use, such as on YouTube. Parents create these accounts by providing personal details like the child’s name and birthdate, raising some privacy concerns; however, Google assures that children’s data won’t be used for AI training. The chatbot will be enabled by default, requiring parents to disable it if they wish to restrict access. Children can prompt the AI for text responses or image generation.

Google acknowledges the chatbot may produce errors and stresses the importance of evaluating the accuracy and trustworthiness of its content, since AI can “hallucinate” or fabricate information. This is especially critical when children use chatbot responses for homework, necessitating fact-checking with reliable sources.

Unlike traditional search engines, which direct users to original materials like news articles or magazines, generative AI analyzes patterns in data to create new text or images based on user prompts. For example, if a child asks the chatbot to “draw a cat,” the system generates a new image by combining cat-like features it has learned. Understanding the difference between AI-generated content and retrieved search results will be challenging for young users. Research indicates that even adults, including professionals like lawyers, can be misled by false information produced by AI chatbots.

Google claims the chatbot includes safeguards to block inappropriate or unsafe content; however, such filters may unintentionally restrict access to legitimate, age-appropriate material. For instance, information about puberty might be blocked if certain keywords are restricted.

Because many children are adept at navigating and bypassing app controls, parents cannot rely solely on these built-in protections. Instead, they need to actively review content, educate their kids about how the chatbot works, and help them critically evaluate the accuracy of its information.

There are significant risks associated with AI chatbots for children. Australia’s eSafety Commissioner has warned that AI companions can share harmful material, distort reality, or provide dangerous advice, which is especially concerning for young kids who are still developing the critical thinking and life skills needed to identify manipulation by computer programs. Research into AI chatbots such as ChatGPT and Replika shows these systems mimic human social behaviors, or “feeling rules” (such as saying “thank you” or “sorry”), to build trust. This human-like interaction may confuse children, leading them to trust false content or believe they are interacting with a real person rather than a machine.

The timing of this rollout is notable, as Australia plans to ban children under 16 from holding social media accounts starting December this year. While this ban aims to protect children, generative AI tools like Gemini’s chatbot fall outside its scope, illustrating that online safety challenges extend beyond traditional social media platforms. Consequently, Australian parents must remain vigilant, continuously learn about emerging digital tools, and understand the limits of social media restrictions in safeguarding their children.

In response to these developments, it would be prudent to urgently establish a digital duty of care for large tech companies like Google, ensuring they prioritize child safety in the design and deployment of AI technologies. Parents and educators must be proactive in guiding children’s safe and informed use of AI chatbots, supplementing technical safeguards with education and oversight to mitigate the associated risks.

