Google’s SynthID Detects AI Content—But What Is AI ‘Watermarking’ and Does It Work?

Last month, Google announced SynthID Detector, a tool that can identify AI-generated text, images, videos, and audio. The tool has limitations, though: for now it is only accessible to “early testers” via a waitlist.
Also, SynthID mainly works with content generated by Google AI services like Gemini (text), Veo (video), Imagen (images), or Lyria (audio). It won’t detect AI content created with tools like ChatGPT.
This is because SynthID doesn’t actually identify AI-generated content itself; it only detects a “watermark” embedded by certain Google AI products in their output.
A watermark is a machine-readable mark hidden within images, videos, audio, or text. Digital watermarks help trace the origin or authorship of content and are often used to protect creative works and combat misinformation.
SynthID embeds such watermarks in AI-generated output. The marks are invisible to users but can be read by detection tools, confirming that a piece of content was created or edited by an AI model that uses SynthID.
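Google has not published SynthID's full detection algorithm, but a widely studied scheme from the text-watermarking research literature gives a feel for the idea: during generation, the model is nudged toward a pseudorandom “green list” of tokens, and the detector simply counts how many green tokens appear. Here is a minimal, hypothetical Python sketch of that detection step; the key, the hash-based green list, and the 50/50 split are all illustrative assumptions, not SynthID's actual design:

```python
import hashlib

def is_green(prev_token: str, token: str, key: str = "demo-key") -> bool:
    """Pseudorandomly assign roughly half of all tokens to a 'green list',
    seeded by a secret key and the preceding token (illustrative scheme)."""
    digest = hashlib.sha256(f"{key}:{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(tokens: list[str], key: str = "demo-key") -> float:
    """Count green tokens and compare with the ~50% expected by chance.
    Text that was nudged toward green tokens during generation scores
    high; ordinary human text stays near zero."""
    n = len(tokens) - 1  # number of (previous, current) token pairs
    green = sum(is_green(p, t, key) for p, t in zip(tokens, tokens[1:]))
    expected, stddev = 0.5 * n, (0.25 * n) ** 0.5
    return (green - expected) / stddev

# Usage: a z-score well above ~4 is strong evidence of the watermark.
sample = "the quick brown fox jumps over the lazy dog".split()
print(f"z-score: {watermark_z_score(sample):.2f}")
```

Because the bias is statistical, detection in schemes like this becomes more reliable as the text gets longer; very short snippets may be undetectable.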
SynthID is one of the newest attempts at this—but how well do these methods actually work?
There Isn’t a Universal System For Detecting AI
Several AI companies, including Meta, have built their own watermarking tools and detectors similar to SynthID, but each is tied to that company’s own models rather than serving as a universal solution.
As a result, users must manage multiple tools to verify content. Despite calls for a unified system and efforts by companies like Google to promote theirs, the detection landscape remains fragmented.
Another approach focuses on metadata—details about the origin, authorship, and editing history of media. For instance, the Content Credentials inspect tool lets users verify media by checking its edit history.
However, metadata can be easily stripped when content is uploaded to social media or converted to a different format, which is a particular problem when someone is deliberately trying to hide a piece of content’s source.
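Content Credentials rely on cryptographically signed C2PA manifests, which a short example cannot fully parse, but the underlying fragility is easy to demonstrate with plain EXIF metadata. The following Pillow sketch (file names are hypothetical) shows how a routine re-save, like the one many upload pipelines perform, silently drops the metadata a verifier would want to check:

```python
from PIL import Image  # pip install Pillow

def software_tag(path: str) -> str | None:
    """Read the EXIF 'Software' field (tag 305), which often records
    the tool that produced or last edited an image."""
    return Image.open(path).getexif().get(305)

# Hypothetical file names, purely for illustration.
original, resaved = "photo.jpg", "photo_resaved.jpg"
print("before re-save:", software_tag(original))

# A plain re-save drops the EXIF block unless it is explicitly
# carried over (Pillow's save() does not copy it by default).
Image.open(original).save(resaved, "JPEG", quality=85)

print("after re-save: ", software_tag(resaved))  # typically None
```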
There are also detectors that analyze forensic clues like visual glitches or lighting errors. While some are automated, many rely on human judgment and simple checks, such as counting fingers in AI-generated images. These methods may lose effectiveness as AI quality improves.
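As a concrete example of the forensic approach, error level analysis (ELA) is a classic, simple heuristic: re-compress a JPEG at a known quality and look at where the result differs most from the original, since pasted-in or edited regions often respond to re-compression differently. A rough Pillow sketch, with a hypothetical file name, would be:

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-compress the image at a known JPEG quality and amplify the
    per-pixel difference. Edited or pasted regions often stand out
    because they respond to re-compression differently."""
    original = Image.open(path).convert("RGB")

    # Round-trip through JPEG compression in memory.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer).convert("RGB")

    # Amplify the difference so faint artifacts become visible.
    diff = ImageChops.difference(original, recompressed)
    return diff.point(lambda value: min(255, value * 20))

# Hypothetical input: writes a visualization for manual inspection.
error_level_analysis("suspect.jpg").save("suspect_ela.png")
```

Like the manual checks mentioned above, ELA yields a clue rather than a verdict, and its artifacts fade as generators and post-processing improve.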
How Well Do AI Detection Tools Work?
AI detection tools vary widely in effectiveness. They tend to perform better with fully AI-generated content, like essays created entirely by chatbots.
However, accuracy drops when AI is used to edit or modify human-created work. In these cases, detectors often miss AI involvement or wrongly label human content as AI-generated.
These tools rarely clarify how they reach their conclusions, causing further uncertainty. When applied in university plagiarism checks, they’re seen as an “ethical minefield” and have been criticized for bias against non-native English speakers.
Areas Where AI Detection Tools Are Useful
AI detection tools have many practical uses. For example, insurers can check whether an image submitted with a claim genuinely depicts the situation it is said to show, helping guide their response.
Journalists and fact-checkers may use AI detectors alongside other methods to decide if information is trustworthy enough to share. In hiring, both employers and applicants need to confirm whether the person they’re interacting with is real or an AI-generated fake.
Dating app users want to know if a profile belongs to a genuine person or an AI avatar, which could be part of a romance scam. For emergency responders, identifying whether a caller is human or AI can be crucial in allocating resources and saving lives.
What’s Next?
As these examples illustrate, authenticity challenges are unfolding in real time, and static methods like watermarking won’t suffice. Developing AI detectors capable of real-time audio and video analysis is urgently needed.
In any case, relying on a single tool to judge authenticity is unlikely to be enough.
A key first step is understanding how these tools function and their limits. Combining their results with other information and your own context will continue to be crucial.
Read the original article on: Tech Xplore