Google open-sources SynthID, its watermarking tool for AI-generated text

Sheena Vasani, a writer covering commerce, tech news, and e-readers, recently reported that Google has open-sourced SynthID, its text watermarking technology, through the Google Responsible Generative AI Toolkit. Google announced the release on X; it allows other generative AI developers to integrate the technology into their own systems to detect AI-generated text, supporting more responsible AI outputs. Pushmeet Kohli, Vice President of Research at Google DeepMind, told MIT Technology Review that the tool will make it easier for developers to verify whether text originates from their own large language models (LLMs).

As AI becomes increasingly common, watermarking has emerged as a vital tool for identifying AI-generated content, especially in efforts to counter its use in spreading political misinformation, generating harmful or nonconsensual content, and other malicious activities. Countries such as China already mandate AI watermarking, and California is exploring similar legislation, but the tools themselves are still evolving. SynthID, first introduced in August 2023, embeds invisible watermarks in AI-generated images, audio, video, and text, enabling software detection while remaining imperceptible to humans.

The technology works by subtly altering the probability scores of certain tokens—individual characters, words, or phrases—during the generation process. For instance, in a sentence like “My favorite tropical fruits are __,” the LLM might choose from tokens such as “mango,” “lychee,” or “papaya,” each assigned a probability score. SynthID adjusts these scores in a way that doesn’t compromise the overall quality or coherence of the text, yet leaves a detectable pattern for software to recognize as AI-generated. This process occurs repeatedly throughout the text, so a single page could feature hundreds of adjusted probability scores, collectively forming the watermark.
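The biased-sampling idea described above can be illustrated with a minimal "green list" watermarking sketch in the spirit of published academic schemes. This is not SynthID's actual algorithm; the tiny vocabulary, bias value, and function names here are invented for illustration:

```python
import hashlib
import random

VOCAB = ["mango", "lychee", "papaya", "banana", "guava", "durian"]

def green_set(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudorandomly pick a 'green' subset of the vocabulary, seeded by the
    previous token, so that generation and detection derive the same subset."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    k = int(len(vocab) * fraction)
    return set(rng.sample(sorted(vocab), k))

def watermarked_choice(prev_token: str, probs: dict[str, float], bias: float = 2.0) -> str:
    """Nudge the model's probability scores: green tokens get a small boost
    before sampling, leaving the text natural but statistically marked."""
    green = green_set(prev_token, list(probs))
    tokens = list(probs)
    weights = [p * (bias if t in green else 1.0) for t, p in probs.items()]
    return random.choices(tokens, weights=weights, k=1)[0]

def green_fraction(tokens: list[str]) -> float:
    """Detection: what fraction of tokens fall in their predecessor's green set?"""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_set(prev, VOCAB))
    return hits / max(len(pairs), 1)

# "My favorite tropical fruits are __"
probs = {"mango": 0.5, "lychee": 0.3, "papaya": 0.2}
next_token = watermarked_choice("are", probs)
```

At detection time, text whose green-token fraction sits well above the roughly 50% expected by chance is flagged as likely watermarked; across hundreds of token choices on a page, this signal becomes statistically strong even though each individual choice still reads naturally.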

Google has integrated the system into its Gemini chatbot and claims it does not degrade the accuracy, creativity, or speed of generated content, unlike previous watermarking efforts. SynthID is effective on texts as short as three sentences and can even withstand paraphrasing or slight modification. However, it still struggles with very short texts, translated text, and content that has been extensively rewritten.

While Google acknowledges that SynthID is not a foolproof solution for detecting AI-generated content, it marks an important step toward creating more reliable identification tools. As AI continues to be more integrated into daily life, tools like SynthID will play a crucial role in helping people understand and interact with AI-generated content responsibly.

Categories: Technology
Pratik Patil