
Google is turning Gemini into a sort of lie detector for its own AI creations.
The company is adding a feature to the Gemini app that lets you ask if an image was made or edited by a Google AI tool simply by saying, “Is this AI-generated?” It might look like a small tweak, but it is part of a broader push to make synthetic media easier to check and trust.
Right now, the feature works only with images and only with content that carries Google’s SynthID watermark. If a picture was created or edited by a Google model that uses SynthID, Gemini can look for that invisible mark and tell you.
Google says support for video and audio verification will follow soon, and it also plans to bring the same checks beyond the Gemini app into products such as Search.
The bigger step will come when Gemini can verify content using C2PA credentials, an industry standard for attaching cryptographically signed provenance information to media files.
At launch, the system relies only on SynthID, so it can only confirm content made with Google’s own tools. Once C2PA is supported, Gemini should be able to identify media from a much wider range of AI tools and creative software, including image and video generators made by other companies.
Google is also baking C2PA into its newest image models. Pictures generated by the newly announced Nano Banana Pro model will include C2PA metadata from the start. That move lines up with TikTok’s recent decision to adopt C2PA in its own invisible watermarking for AI content, giving the standard more momentum across big platforms.
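For readers curious what that metadata looks like at the file level: C2PA embeds a signed manifest inside the image itself, which in JPEGs lives in APP11 segments as a JUMBF box labelled “c2pa”. The Python sketch below is a rough illustration of that first step only, spotting whether such a manifest container is present in a JPEG; the `has_c2pa_manifest` helper is our own illustrative name, not a Google or Gemini API, and a real verifier would go on to validate the manifest’s cryptographic signatures rather than just finding the box.

```python
# Minimal sketch: check whether a JPEG appears to carry a C2PA manifest.
# C2PA stores its manifest in JPEG APP11 segments as a JUMBF box labelled
# "c2pa". This only detects that container; it does NOT validate the
# signatures a real verifier (such as Gemini's) would have to check.

import struct
import sys

def has_c2pa_manifest(path: str) -> bool:  # hypothetical helper, not a real API
    with open(path, "rb") as f:
        data = f.read()

    if data[:2] != b"\xff\xd8":               # not a JPEG (no SOI marker)
        return False

    offset = 2
    while offset + 4 <= len(data):
        if data[offset] != 0xFF:               # lost sync with segment markers
            break
        marker = data[offset + 1]
        if marker == 0x01 or 0xD0 <= marker <= 0xD7:
            offset += 2                        # standalone markers, no length
            continue
        if marker == 0xDA:                     # start of scan: headers are over
            break
        (length,) = struct.unpack(">H", data[offset + 2: offset + 4])
        if length < 2:                         # malformed segment, give up
            break
        segment = data[offset + 4: offset + 2 + length]
        # APP11 (0xEB) carries JUMBF; a C2PA superbox is labelled "c2pa"
        if marker == 0xEB and b"jumb" in segment and b"c2pa" in segment:
            return True
        offset += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```

In principle, an image saved straight out of a C2PA-enabled generator like Nano Banana Pro should make this print True, while a plain camera JPEG would not; the harder part, which this sketch skips, is confirming the signature chain inside that manifest is intact and issued by a trusted tool.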
At the moment, Gemini’s check is something you trigger manually, so it mainly helps when you are already unsure about an image.