
Google Launches New AI Image Watermarking Tool to Combat Deepfakes

Google’s AI research lab DeepMind has released a new tool to watermark and detect images created by AI systems such as text-to-image generators.

But experts warn that the technology is still limited in its ability to verify the authenticity of photos and videos online.

The tool, named SynthID, will initially be available only to Google Cloud customers using the company’s Imagen text-to-image model.

Users can choose to embed an invisible digital watermark when generating images with Imagen, marking each image as AI-created. The same system can then scan an image to check whether the watermark is present.
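
Conceptually, the opt-in round trip looks like the sketch below. This is a minimal Python illustration assuming a hypothetical client interface; `ImagenClient`, `generate_image`, `detect_watermark`, the `embed_watermark` flag and the `GeneratedImage` type are invented for illustration and are not Google’s actual Cloud SDK.

```python
# Hypothetical sketch of the opt-in watermark workflow described above.
# ImagenClient and its method names are illustrative assumptions, not
# Google's actual SDK; the watermark here is simulated with a flag.

from dataclasses import dataclass

@dataclass
class GeneratedImage:
    pixels: bytes          # image data (placeholder)
    watermarked: bool      # stands in for the invisible pixel-level mark

class ImagenClient:
    def generate_image(self, prompt: str, embed_watermark: bool = False) -> GeneratedImage:
        # A real service would run the text-to-image model here.
        return GeneratedImage(pixels=b"...", watermarked=embed_watermark)

    def detect_watermark(self, image: GeneratedImage) -> str:
        # A real detector inspects the pixels; this stub just reads the flag.
        return "present" if image.watermarked else "not detected"

client = ImagenClient()
image = client.generate_image("a lighthouse at dusk", embed_watermark=True)
print(client.detect_watermark(image))  # -> "present"
```

The point is the round trip: the same service that embeds the mark at generation time also exposes the detector used to check for it later.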

The launch comes amid growing public unease over the proliferation of deepfakes and other forms of synthetic media enabled by generative AI models like DALL-E and Stable Diffusion.

While the technology brings enormous creative potential, it has also enabled abuses like nonconsensual deepfake pornography and art theft.

In response, tech firms have raced to develop watermarking techniques to label AI content at its source. Meta, Microsoft and smaller startups have showcased similar research. But Google claims its watermarking approach is uniquely resilient to tampering.

“Finding the right balance between imperceptibility and robustness to image manipulations is difficult,” said Pushmeet Kohli, VP of research at DeepMind, in a blog post announcing the tool. “SynthID’s combined approach makes it more accurate against many common image manipulations.”

SynthID uses two neural networks – one to subtly alter select pixels to encode the watermark and another to detect its presence. Google says this allows the watermark to persist even if an image is cropped, resized, rotated or has filters applied.
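
To make the two-network idea concrete, here is a toy PyTorch sketch: an encoder that adds a small, bounded perturbation to the pixels, and a convolutional detector that predicts whether the mark is present. It illustrates the general shape of learned watermarking, not DeepMind’s actual, unpublished architecture.

```python
# Toy sketch of the encoder/detector pairing described above.
# Illustrative only; NOT DeepMind's actual SynthID architecture.

import torch
import torch.nn as nn

class WatermarkEncoder(nn.Module):
    """Adds a small learned perturbation to selected pixels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # Bound the perturbation so the watermark stays imperceptible.
        return (image + 0.01 * self.net(image)).clamp(0, 1)

class WatermarkDetector(nn.Module):
    """Predicts the probability that an image carries the watermark."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(image))

encoder, detector = WatermarkEncoder(), WatermarkDetector()
clean = torch.rand(1, 3, 64, 64)   # stand-in for a generated image
marked = encoder(clean)            # imperceptibly perturbed copy
print(detector(marked).item())     # untrained, so roughly 0.5 here
```

In systems of this kind, the two networks are typically trained jointly, with random crops, resizes, rotations and filters applied between them, which is what pushes the encoder towards marks that survive exactly the manipulations Google describes.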

Unlike visible overlays or metadata tags, which can be cropped out or stripped, this combination of imperceptibility and robustness should make SynthID’s watermark harder to fool or remove, especially as images spread online.


Limitations Remain

But AI experts caution that while a promising first step, SynthID is unlikely to be a silver bullet against deepfakes.

“There are few or no watermarks that have proven robust over time,” said Ben Zhao, professor of computer science at the University of Chicago. He noted that early attempts at text watermarking have already been defeated.

“An attacker seeking to promote deepfake imagery as real will have a lot to gain and will not stop at cropping or changing colours,” he added.

According to Zhao, those with an incentive to spread misinformation will actively probe tools like SynthID to find ways of circumventing or removing its watermark. He says information sharing between firms will be vital to overcoming this “adversarial” threat.

Claire Leibowicz of the Partnership on AI agreed that SynthID’s limitations are unsurprising given how new and difficult imperceptible watermarking is.

“The fact that this is really complicated shouldn’t paralyse us into doing nothing,” she said. “This is a good first step.”


A Promising but Proprietary Solution

For now, SynthID is also limited by its restriction to Imagen and Google Cloud. Wider adoption by creators beyond Google’s ecosystem will be crucial to its effectiveness against misuse.

Sasha Luccioni, an AI researcher at startup Hugging Face, said adding similar watermarking to all image generators could mitigate harms like deepfake porn. But Google has yet to state plans to expand the tool.

“If you add a watermarking component to image generation systems across the board, there will be less risk of harm,” she said.

There are also concerns that SynthID’s proprietary nature prevents the collective study of its methods to improve watermarking technology as a whole.

Google said the initial launch aims to gather user feedback and enhance the system before potential expansion.

Holistic Approach

While SynthID is an incremental advance, experts emphasise that watermarking is ultimately only one piece of the puzzle in deploying generative AI ethically.

“To build AI-generated content responsibly, we’re committed to developing safe, secure and trustworthy approaches at every step of the way – from image generation and identification to media literacy and information security,” said Kohli.

“These approaches must be robust and adaptable as generative models advance and expand to other mediums.”

Google said it continues evolving its set of techniques for working with AI media responsibly. But industry-wide standards enforced through regulation will also likely be needed to compel comprehensive action by all players.

Watermarking and detection tools can be one element of a broader strategy. But relying solely on such markers risks a false sense of security, given the speed of progress in this arms race.

Holistic solutions encompassing education, policy and ethical AI development will ultimately be essential to realising the benefits of generative models while mitigating their considerable risks.

Rebecca Taylor

Rebecca is our AI news writer. A graduate of Leeds University with an International Journalism MA, she possesses a keen eye for the latest AI developments. Rebecca’s passion for AI, combined with her journalistic expertise, brings insightful news stories to our readers.
