Researchers at Google DeepMind in London have developed a 'watermark' that invisibly labels text produced by artificial intelligence (AI) and is already in use by millions of chatbot users. The watermark, described in a Nature publication on 23 October, is neither the first of its kind nor immune to removal attempts, but it stands out as a large-scale real-world deployment. Scott Aaronson, a computer scientist who worked on AI watermarks while at OpenAI, calls the deployment noteworthy. The ability to identify AI-generated text is increasingly important for combating fake news and academic dishonesty, and for preventing future models from degrading through training on AI-generated content. In extensive trials involving Google's Gemini large language model (LLM), users rated watermarked texts as comparable in quality to unwatermarked ones. Experts expect watermarked tools to proliferate in the commercial space. DeepMind's watermark, SynthID-Text, is distinctive in its approach: it modifies word selection in a systematic but hidden manner that can be detected using a cryptographic key.
The watermark is easier to detect than competing schemes while keeping text generation efficient. Pushmeet Kohli of DeepMind encourages other AI developers to adopt the watermarking system in their own models, although Google keeps the detection key confidential. Governments view watermarking as a means to curb the proliferation of AI-generated text, but challenges remain in achieving uniform adoption across developers and in addressing vulnerabilities: research has shown that all watermarks risk being removed or imitated. SynthID-Text works through a sampling algorithm that builds on existing methodologies, in which a series of token comparisons, structured like a knockout tournament, determines which token the model generates next. This complexity makes the watermark easier to detect and harder to remove; the researchers demonstrated that the watermark remained identifiable even after the text was paraphrased by another model. How well it withstands deliberate removal, however, has not been thoroughly examined and remains a significant open question in AI safety. Kohli emphasizes that the watermark is intended as a tool to facilitate responsible LLM use and to invite community-driven improvements.
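The tournament idea can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not DeepMind's implementation: the hash-based `g_score` function is a hypothetical stand-in for the paper's keyed watermarking functions, and the candidate tokens are supplied as a fixed list rather than sampled from a model's output distribution. The point is only to show the knockout structure: candidates are scored pseudorandomly from the secret key and recent context, pairwise winners advance, and the final winner is emitted, which systematically biases generated text toward high-scoring tokens that a key-holder can later detect.

```python
import hashlib

def g_score(token, context, key, layer):
    """Pseudorandom score in [0, 1) derived from the secret key, the
    recent context, the candidate token, and the tournament layer.
    (Hypothetical stand-in for the paper's watermarking functions.)"""
    h = hashlib.sha256(f"{key}|{context}|{token}|{layer}".encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64

def tournament_sample(candidates, context, key):
    """Pick one token from an even-length candidate list via a knockout
    tournament: pairs are compared on their scores, winners advance."""
    layer = 0
    while len(candidates) > 1:
        nxt = []
        for a, b in zip(candidates[::2], candidates[1::2]):
            if g_score(a, context, key, layer) >= g_score(b, context, key, layer):
                nxt.append(a)
            else:
                nxt.append(b)
        candidates = nxt
        layer += 1
    return candidates[0]

def detect_score(tokens, key, window=4):
    """Crude detection sketch: watermarked text should show a higher
    mean score under the same key than unwatermarked text."""
    scores = []
    for i, tok in enumerate(tokens):
        context = " ".join(tokens[max(0, i - window):i])
        scores.append(g_score(tok, context, key, 0))
    return sum(scores) / len(scores)
```

Because the scoring is deterministic given the key and context, the same key that biased the sampling can later be used to test whether a passage's tokens score unusually high, which is what makes the watermark statistically detectable without being visible in the text itself.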