Aloshdenny did not set out to expose a flaw in Google's content authenticity infrastructure. The researcher simply wanted to know what SynthID was doing in Gemini images.
What they found, and published on March 5, 2026, is that SynthID's watermark lives in the frequency domain of the diffusion model's latent space: a spread-spectrum phase encoding at specific carrier frequencies that survives denoising because the denoiser mistakes the signal for real content. To extract the codebook, aloshdenny generated 100 solid-white Gemini images and 100 solid-black ones, averaged their noise patterns, and recovered the consistent frequency structure underneath. No adversarial techniques. No special access. Just signal processing.
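The averaging step is classic flat-field analysis: per-image random noise cancels in the mean, while anything embedded consistently across images survives. Here is a minimal sketch of the idea, assuming for illustration that the embedded structure sits at integer FFT bins; the function name and carrier coordinates are hypothetical, not reverse-SynthID's actual code:

```python
import numpy as np

def extract_codebook(images, carrier_coords):
    """Average the 2-D FFTs of many near-uniform images.

    Random per-image noise cancels in the mean; any frequency-domain
    structure embedded consistently across images survives.
    """
    # Subtract each image's mean so the DC term doesn't dominate.
    spectra = [np.fft.fft2(img - img.mean()) for img in images]
    mean_spectrum = np.mean(spectra, axis=0)
    # Read off the complex coefficient (amplitude and phase) at each
    # hypothesized carrier bin.
    return {c: mean_spectrum[c] for c in carrier_coords}
```

With a few dozen near-uniform images, a carrier with fixed phase stands out from the averaged noise floor by orders of magnitude, which is why no special access is needed.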
The result is reverse-SynthID, an open-source tool that detects SynthID watermarks with 90 percent accuracy by tuning into specific carrier frequencies in the noise pattern, the same frequency coordinates the watermark uses, including primary carriers at (±14, ±14) and secondary ones at (±126, ±14). The tool's V3 release can also suppress the mark: it cuts carrier energy by 75 percent and phase coherence by 91 percent while maintaining a peak signal-to-noise ratio above 43 dB, meaning the modified image looks identical to the original both to the human eye and to automated quality checks.
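Those numbers hang together arithmetically: halving a carrier's amplitude cuts its energy by 75 percent, and a change that small leaves the image essentially untouched by PSNR. A sketch under assumed mechanics, with hypothetical carrier coordinates and no claim to match reverse-SynthID's actual multi-resolution codebook:

```python
import numpy as np

def suppress_carriers(img, carrier_coords, factor=0.5):
    """Scale down the FFT coefficients at hypothesized carrier bins.

    factor=0.5 halves carrier amplitude, i.e. removes 75 percent of the
    carrier's energy (0.5 ** 2 = 0.25 remaining), while barely perturbing
    the image.
    """
    H, W = img.shape
    F = np.fft.fft2(img)
    for (fx, fy) in carrier_coords:
        F[fx % H, fy % W] *= factor
        F[-fx % H, -fy % W] *= factor  # conjugate bin keeps output real
    return np.real(np.fft.ifft2(F))

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB between two same-scale images."""
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(peak ** 2 / mse)
```

Attenuating a handful of narrow frequency bins is invisible to PSNR-style quality checks precisely because the watermark itself was designed to be invisible there.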
SynthID is Google's invisible watermarking system, embedded at the diffusion model latent space level into every image generated by Gemini. The company calls it a cornerstone of its AI safety infrastructure: a way to identify AI-generated content without altering the image. Google has watermarked more than 10 billion images and video frames with SynthID across its services.
The bypass works because it exploits the same physics the watermark does. SynthID embeds phase shifts at specific carrier frequencies during the diffusion process; these survive because the denoiser treats them as genuine content rather than noise. The reverse-SynthID tool targets those same frequencies with a multi-resolution codebook, suppressing the carrier energy until the detector's correlation score falls below its 0.179 threshold and the watermark goes unread.
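A correlation-threshold detector of this kind can be sketched in a few lines. Everything here is illustrative: the codebook phases are invented, and only the 0.179 threshold comes from the article; Google's actual detector is not public.

```python
import numpy as np

def detect(img, codebook, threshold=0.179):
    """Correlate the image's spectrum at the carrier bins against an
    extracted codebook; flag as watermarked if the normalized score
    clears the threshold."""
    F = np.fft.fft2(img - img.mean())
    keys = list(codebook)
    obs = np.array([F[k] for k in keys])      # observed coefficients
    ref = np.array([codebook[k] for k in keys])  # expected coefficients
    # Normalized real correlation between complex coefficient vectors.
    score = float(np.real(np.vdot(ref, obs))
                  / (np.linalg.norm(ref) * np.linalg.norm(obs) + 1e-12))
    return score, score > threshold
```

A watermarked image whose carrier phases match the codebook scores near 1.0; once the carriers are attenuated or randomized, the score sinks toward the noise floor and the threshold test fails, which is the failure mode the article describes.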
Google has not said whether it considers this a vulnerability, whether it has patched the gap, or what its enterprise customers who built workflows on SynthID detection should do now. The company did not respond to a request for comment.
SynthID's own technical paper, published in October 2025, describes the system's threat models and the reasoning behind its design. The paper acknowledges that a sufficiently motivated actor could extract the watermark with enough reference images; it lists model extraction as a known attack vector. What it could not anticipate is that within months the extraction would be demonstrated, open-sourced, and shown to work at image quality levels that make detection unreliable in practice.
The downstream implications are the part that cannot be patched retroactively. Newsrooms, law enforcement agencies, and courts have begun treating AI-generated image detection as a verification tool — a way to establish whether evidence or press material is authentic. That workflow was built on the assumption that SynthID detection was reliable and that the provenance trail it left was durable. The reverse-SynthID bypass, now public and running on commodity hardware, makes the reliability of that assumption a live question.