Google has announced a new feature that makes it easier for Gemini app users to detect AI-generated content. Users can now find out whether an image was created or edited by Google’s AI tools simply by asking Gemini, “Was this AI-generated?” The verification relies on SynthID, an invisible watermarking technology developed by Google.
Although the feature is limited to images for now, Google says audio and video content will soon be verifiable through the same system. Verification will also not stay confined to the Gemini app: Google plans to integrate it into other platforms such as Google Search, letting users assess the authenticity of content from a wider range of sources.
Google Gemini begins comprehensive verification process with C2PA support
While the current image verification relies solely on SynthID, Google plans to expand it with the industry-wide C2PA (Coalition for Content Provenance and Authenticity) content credentials. C2PA defines a standard for identifying the provenance of content produced by different vendors and software tools. This step would allow transparent labeling not only of content created by Google’s own systems, but also of output from other AI tools such as OpenAI’s Sora.
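As a rough illustration of what “C2PA metadata embedded in an image” means: the C2PA specification stores its manifest in a JUMBF (ISO/IEC 19566-5) box labeled “c2pa” inside the image file. The sketch below is a toy heuristic that merely scans raw bytes for those markers; the function name and the fabricated byte strings are illustrative, and real verification requires parsing the box structure and validating the manifest’s cryptographic signatures with a proper C2PA SDK.

```python
# Toy heuristic: hint at whether raw image bytes might contain an
# embedded C2PA manifest. C2PA credentials live in a JUMBF box whose
# manifest-store label is "c2pa", so both markers appearing in the
# bytes is a weak hint -- NOT proof of a valid, signed manifest.

def has_c2pa_marker(data: bytes) -> bool:
    """Return True if the bytes contain both the JUMBF box tag and a
    'c2pa' label. A heuristic sketch only; real validation must parse
    the boxes and verify the claim signatures."""
    return b"jumb" in data and b"c2pa" in data

# Fabricated byte strings standing in for real image files:
fake_with_manifest = b"\xff\xd8 ...jumb...c2pa\x00... \xff\xd9"
fake_plain_image = b"\xff\xd8 ...ordinary jpeg bytes... \xff\xd9"

print(has_c2pa_marker(fake_with_manifest))  # True
print(has_c2pa_marker(fake_plain_image))    # False
```

In practice one would use an official C2PA library to read and verify the manifest rather than byte-scanning, but the example shows why the credential travels with the file itself instead of living in a separate database.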
Another development announced today is that all images created by Google’s newly introduced Nano Banana Pro model will carry C2PA metadata, a notable step toward transparency about how content is produced. TikTok’s announcement in the same week that it would integrate C2PA support into its invisible watermarking system suggests the standard is gaining momentum.
Although these steps accelerate the development of content verification technology, experts argue that social media platforms must take a more active role for verification to be truly effective. Users currently have to check content manually; automating that check in the future could significantly reduce the spread of misleading content.
Technology companies are noticeably converging on common standards such as C2PA. The C2PA consortium, whose members include Adobe, Microsoft, Intel and Truepic, aims to create trustworthy digital credentials describing a piece of content’s origin. If the system is adopted globally, keeping AI-generated content within ethical boundaries becomes far more feasible, and Google’s contributions in this area mark concrete progress toward the transparency expected in content production.