Oct. 23, 2024, 8:31 a.m.

Innovative AI Watermark by Google DeepMind Enhances Text Authenticity

Researchers at Google DeepMind in London have developed an invisible 'watermark' for labelling text produced by artificial intelligence (AI), and the system is already in use by millions of chatbot users. The watermark, described in a Nature publication on October 23, is neither the first of its kind nor immune to removal attempts, but it represents a significant real-world deployment. According to Scott Aaronson, a computer scientist who worked on AI watermarks while at OpenAI, the deployment of this technology is the noteworthy part. The ability to identify AI-generated text is increasingly critical for countering fake news and academic dishonesty, and for preventing future models from degrading by training on AI-generated content.

In extensive trials involving Google's Gemini large language model (LLM), users rated watermarked texts as being of similar quality to unwatermarked ones, and experts expect watermarking tools to proliferate in the commercial space. DeepMind's watermark, SynthID-Text, takes a distinctive approach: it modifies word selection in a systematic but hidden manner that can be detected using a cryptographic key.
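To make the key-based detection idea concrete, here is a minimal sketch in Python of how a keyed pseudorandom score could be attached to each generated token and later checked by anyone holding the key. The construction here (an HMAC-based score, a sliding context window, and a simple mean-score threshold) is an illustrative assumption, not DeepMind's published SynthID-Text algorithm.

```python
import hashlib
import hmac


def g_value(key: bytes, context: tuple[int, ...], token: int) -> float:
    """Keyed pseudorandom score in [0, 1) for a token in a given context.

    Anyone holding the key can recompute the score; without the key the
    scores look like ordinary randomness. (Illustrative construction only.)
    """
    msg = b",".join(str(t).encode() for t in (*context, token))
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64


def detect(key: bytes, tokens: list[int], window: int = 4, threshold: float = 0.6) -> bool:
    """Flag text as likely watermarked if its mean keyed score is high.

    Watermarked generation (see the tournament sketch below) is biased toward
    high-scoring tokens, so their average exceeds the roughly 0.5 baseline
    expected for unwatermarked text.
    """
    scores = [
        g_value(key, tuple(tokens[max(0, i - window):i]), tokens[i])
        for i in range(len(tokens))
    ]
    return sum(scores) / max(len(scores), 1) > threshold
```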

This method makes the watermark easier to detect than competing approaches while keeping text generation efficient. Pushmeet Kohli of DeepMind encourages other AI developers to incorporate the watermarking system into their own models, although Google keeps the detection key confidential. Governments view watermarking as a way to curb the proliferation of AI-generated text, but challenges remain in achieving uniform adoption across developers and in closing vulnerabilities; research has shown that any watermark risks being removed or imitated.

DeepMind's technique builds on existing sampling algorithms: a series of token comparisons, run like a tournament, decides which token to generate. This process makes the watermark easier to detect and harder to strip out; the researchers showed that the watermark could still be identified even after the text had been paraphrased by another model. How well the watermark withstands deliberate removal attempts has not been thoroughly examined, however, which remains a significant open question in AI safety. Kohli stresses that the watermark is intended as a tool to encourage responsible use of LLMs and to invite community-driven improvements.
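As a rough illustration of the tournament described above, the sketch below draws several candidate tokens from the model and lets them compete in pairwise knockout rounds, with the same style of keyed score deciding each match. The bracket size, the toy sample_from_model stub, and the context window are assumptions made for illustration; this is not the SynthID-Text implementation.

```python
import hashlib
import hmac
import random


def g_value(key: bytes, context: tuple[int, ...], token: int) -> float:
    """Keyed pseudorandom score in [0, 1), as in the detection sketch above."""
    msg = b",".join(str(t).encode() for t in (*context, token))
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64


def sample_from_model(context: tuple[int, ...], vocab_size: int = 50_000) -> int:
    """Toy stand-in for drawing one token from the LLM's output distribution."""
    return random.randrange(vocab_size)


def tournament_sample(key: bytes, context: tuple[int, ...], rounds: int = 3) -> int:
    """Choose the next token via a keyed knockout tournament.

    2**rounds candidates are drawn from the model; in each round, pairs of
    candidates are compared by their keyed score and the higher one advances.
    Emitting the winner biases the text toward tokens the key-holder can later
    recognise, while every candidate still comes from the model's own
    distribution, which helps preserve text quality.
    """
    candidates = [sample_from_model(context) for _ in range(2 ** rounds)]
    while len(candidates) > 1:
        candidates = [
            a if g_value(key, context, a) >= g_value(key, context, b) else b
            for a, b in zip(candidates[::2], candidates[1::2])
        ]
    return candidates[0]


def generate(key: bytes, prompt: list[int], n_tokens: int, window: int = 4) -> list[int]:
    """Generate n_tokens, watermarking each choice through the tournament."""
    tokens = list(prompt)
    for _ in range(n_tokens):
        tokens.append(tournament_sample(key, tuple(tokens[-window:])))
    return tokens
```

Text produced this way would, on average, score above the unwatermarked baseline under a keyed check like the detect function in the earlier sketch, while paraphrasing or substituting many tokens would gradually erode that signal.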



Brief news summary

Google DeepMind has launched SynthID-Text, a watermarking technology designed to identify AI-generated text produced by its Gemini large language model (LLM). The feature addresses misinformation and academic-integrity concerns by discreetly modifying word choices in a way that can be verified with a hidden cryptographic key, preserving quality while enabling detection. In testing involving 20 million text outputs, user feedback indicated that watermarked text matched the quality of unmarked text, suggesting the approach is viable for a range of commercial applications. Concerns remain, however, about whether other developers will adopt the technology and whether users can strip out the watermarks. The watermark is built directly into the text-generation process, which strengthens detection and makes tampering harder: it uses a "token tournament" method for word selection, likened to a combination lock. Researchers caution that the watermark's resistance to removal has not been fully assessed, fuelling ongoing debate about the safety and ethical dimensions of AI innovations.
