Google introduces a watermark tool to combat misinformation in AI-generated images

Last updated on 5 June 2024




Google has introduced SynthID, a new tool developed in collaboration with DeepMind and Google Cloud, to combat the spread of misinformation through AI-generated images. SynthID adds invisible watermarks to images created by Imagen, Google’s text-to-image generator, allowing them to be identified as computer-generated. The watermark is embedded directly into the pixels of the image and remains detectable even after the image is modified. SynthID also reports confidence levels that indicate how likely it is that an image was generated by Imagen, giving users a graded signal rather than a simple yes-or-no answer.

While the technology is not perfect, initial testing has shown promising accuracy against common image manipulations. With the rise of deepfake technology and the potential consequences of manipulated content, tech companies are actively seeking ways to identify and flag such content.

SynthID is Google’s distinct approach to this challenge, complementing efforts by the Coalition for Content Provenance and Authenticity (C2PA) and the EU’s Code of Practice on Disinformation. However, as AI technology continues to advance, the long-term effectiveness of technical solutions like SynthID remains uncertain in the ever-evolving landscape of misinformation.
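To make the general idea concrete: SynthID's actual technique is a proprietary, learned watermark and has not been published in detail, but the concept of hiding a signal in pixel values and then scoring a detector's confidence can be illustrated with a deliberately simple toy scheme. The sketch below embeds a keyed pseudo-random bit pattern in the least significant bit of each pixel and reports the fraction of matching bits as a confidence score; all function names here are hypothetical and this is not how SynthID works internally.

```python
import random

# Toy illustration only: SynthID uses a learned, proprietary watermark,
# not least-significant-bit (LSB) embedding. This sketch just shows the
# concept of hiding a keyed pattern in pixels and scoring confidence.

def embed_watermark(pixels, key=42):
    """Overwrite the LSB of each 8-bit pixel with a keyed random bit."""
    rng = random.Random(key)
    return [(p & ~1) | rng.randint(0, 1) for p in pixels]

def detection_confidence(pixels, key=42):
    """Fraction of pixels whose LSB matches the keyed pattern.
    1.0 for a freshly watermarked image, around 0.5 for any other image."""
    rng = random.Random(key)
    matches = sum((p & 1) == rng.randint(0, 1) for p in pixels)
    return matches / len(pixels)

# A fake 10,000-pixel grayscale "image" with random intensities.
image = [random.randrange(256) for _ in range(10_000)]
marked = embed_watermark(image)

print(detection_confidence(marked))  # 1.0 (every bit matches)
print(detection_confidence(image))   # close to 0.5 (chance level)
```

A real system must survive cropping, compression, and filtering, which naive LSB embedding does not; that robustness, reflected in SynthID's graded confidence levels rather than a binary verdict, is precisely what makes the problem hard.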