
Google open-sourced its watermarking tool for AI-generated text

Google has implemented AI watermarking to automatically mark text produced by its Gemini chatbot, making it easier to distinguish AI-generated content from text authored by humans.

Now, other generative AI developers will be able to use the same technology to watermark the output of their own large language models.

Key Points

Google’s SynthID text watermarking technology, a tool the company created to make AI-generated text easier to identify, is now available open-source through the Google Responsible Generative AI Toolkit.

Google claims the system, which it has already integrated into its Gemini chatbot, doesn't compromise the quality, accuracy, creativity, or speed of generated text, trade-offs that have long been a problem for watermarking systems.

Google says it can work on text as short as three sentences, as well as text that’s been cropped, paraphrased, or modified.

But it struggles with very short text, content that has been heavily rewritten or translated into another language, and even responses to factual questions, where the model has little flexibility in word choice for a watermark to exploit.
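The key points above describe what the watermark does, but not how a statistical text watermark works in general. The sketch below illustrates the broad idea behind such schemes: nudge the model's token sampling with a keyed pseudorandom function of the recent context, then detect the watermark later by testing how often tokens fall into the favoured set. This is a simplified, generic illustration in Python, not Google's actual algorithm or API; SynthID itself uses a more sophisticated tournament-sampling approach, and every name here (SECRET_KEY, green_tokens, detection_z_score, and so on) is hypothetical.

import hashlib
import math
import random

SECRET_KEY = "demo-key"   # hypothetical watermarking key (not part of SynthID)
GREEN_FRACTION = 0.5      # fraction of the vocabulary favoured at each step
BIAS = 2.0                # logit boost applied to "green" tokens

def green_tokens(prev_token: int, vocab_size: int) -> set:
    """Derive a keyed pseudorandom 'green list' from the previous token."""
    seed = hashlib.sha256(f"{SECRET_KEY}:{prev_token}".encode()).hexdigest()
    rng = random.Random(seed)
    return set(rng.sample(range(vocab_size), int(vocab_size * GREEN_FRACTION)))

def watermarked_sample(logits: list, prev_token: int) -> int:
    """Sample the next token after boosting the logits of green-list tokens."""
    greens = green_tokens(prev_token, len(logits))
    boosted = [x + BIAS if i in greens else x for i, x in enumerate(logits)]
    total = sum(math.exp(x) for x in boosted)
    weights = [math.exp(x) / total for x in boosted]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

def detection_z_score(tokens: list, vocab_size: int) -> float:
    """How far the observed green-token count deviates from chance."""
    hits = sum(1 for i in range(1, len(tokens))
               if tokens[i] in green_tokens(tokens[i - 1], vocab_size))
    n = len(tokens) - 1
    mean = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - mean) / std if std else 0.0

Framing detection as a statistical test also hints at why very short or heavily rewritten text is hard to flag: the detector needs enough watermarked tokens to accumulate a statistically significant score.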

Background

In May of this year, Google DeepMind announced that it had extended its SynthID watermarking method, already used for AI-generated images, to cover text from Google's Gemini chatbot and video from its Veo model.

The company has now published a paper in the journal Nature showing how SynthID generally outperformed similar AI watermarking techniques for text.

AI Watermarking Joins the Fight Against Misinformation

Major tech companies like Google, Meta, and OpenAI are implementing digital watermarks in AI-generated content to combat misinformation.

These invisible markers serve as digital fingerprints, helping users distinguish between human and AI-created material.

The technology comes at a crucial time, particularly with upcoming elections, as concerns grow over AI-generated deepfakes and misleading content.

However, experts note that this is just one piece of the puzzle in building trust in the digital age.

Social media platforms, news organizations, and educational institutions are particularly interested in these tools as they battle the rising tide of synthetic content.

As we head toward major global events, including elections, the ability to tell human and AI-created content apart becomes more important than ever.

News Gist

Google has released its SynthID watermarking technology as open-source, allowing other AI developers to mark AI-generated content.

Originally used in Google’s Gemini chatbot, this tool helps distinguish AI text from human writing.

The system works effectively on texts of three sentences or longer, though it has limitations with short texts and translations.

It is now publicly available through Google's Responsible Generative AI Toolkit.
