Google Launches Gemini AI Chatbot for Kids with Parental Controls Amid Privacy Concerns

Google is set to launch its Gemini AI chatbot for children under 13, starting next week in the US and Canada, with an Australian release scheduled for later this year. Access will be restricted to users with Google Family Link accounts, which give parents control over content and app use, similar to the controls available on YouTube. Parents create these accounts by providing personal details such as the child's name and birthdate, which raises some privacy concerns; Google, however, assures that children's data will not be used to train its AI models.

The chatbot will be enabled by default, so parents who wish to restrict access must actively disable it. Children can prompt the AI for text responses or image generation. Google acknowledges that the chatbot may produce errors and stresses the importance of evaluating the accuracy and trustworthiness of its output, since AI can "hallucinate," or fabricate, information. This is especially critical when children use chatbot responses for homework, where fact-checking against reliable sources is essential.

Unlike traditional search engines, which direct users to original materials such as news articles or magazines, generative AI analyzes patterns in its training data to create new text or images in response to user prompts. For example, if a child asks the chatbot to "draw a cat," the system generates a new image by combining cat-like features it has learned, rather than retrieving an existing picture.

Understanding the difference between AI-generated content and retrieved search results will be challenging for young users. Research indicates that even adults, including professionals such as lawyers, can be misled by false information produced by AI chatbots. Google claims the chatbot includes safeguards to block inappropriate or unsafe content; however, such filters may unintentionally restrict access to legitimate, age-appropriate material. For instance, information about puberty might be blocked if certain keywords are filtered.
Because many children are adept at navigating and bypassing app controls, parents cannot rely solely on these built-in protections. Instead, they need to actively review content, educate their kids about how the chatbot works, and help them critically evaluate the accuracy of the information it produces.

There are significant risks associated with AI chatbots for children. The eSafety Commission has warned that AI companions can share harmful material, distort reality, or provide dangerous advice. This is especially concerning for young kids, who are still developing the critical thinking and life skills needed to recognize manipulation by computer programs. Research into AI chatbots such as ChatGPT and Replika shows that these systems mimic human social behaviors, or "feeling rules" (such as saying "thank you" or "sorry"), to build trust. This human-like interaction may confuse children, leading them to trust false content or to believe they are interacting with a real person rather than a machine.

The timing of this rollout is notable, as Australia plans to ban children under 16 from holding social media accounts starting in December this year. While that ban aims to protect children, generative AI tools like Gemini's chatbot fall outside its scope, illustrating that online safety challenges extend beyond traditional social media platforms. Australian parents must therefore remain vigilant, continuously learn about emerging digital tools, and understand the limits of social media restrictions in safeguarding their children.

In response to these developments, it would be prudent to urgently establish a digital duty of care for large tech companies like Google, ensuring they prioritize child safety in the design and deployment of AI technologies. Parents and educators must be proactive in guiding children's safe and informed use of AI chatbots, supplementing technical safeguards with education and oversight to mitigate the associated risks.
Brief news summary
Google is preparing to launch its Gemini AI chatbot for children under 13 in the US and Canada soon, with Australia to follow later this year. The chatbot will be accessible exclusively through Google Family Link accounts, enabling kids to request text and image responses. Google emphasizes that children’s data will not be used to train AI models and that strong safeguards will prevent inappropriate content. Despite these measures, concerns persist about privacy, misinformation, and the reliability of AI-generated information. Unlike traditional search engines, generative AI produces new content based on patterns, which may confuse young users who might have difficulty distinguishing between factual and fabricated information. Experts warn about risks such as exposure to harmful content, distorted views of reality, and excessive dependence on AI tools, stressing the importance of parental supervision. Australia’s proposed ban on social media accounts for under-16s reflects broader challenges in protecting children online. Parents are encouraged to stay informed, teach digital literacy, and carefully monitor their children’s internet use. Increasing calls for stricter digital duty of care laws aim to hold tech companies like Google accountable and strengthen protections as AI technology advances rapidly.