OpenAI Just Made It Easier to Tell If an Image Was Made by DALL-E 3

OpenAI is finally making it easier to tell if an image was made with DALL-E 3. The company shared the news this week, noting that it will soon start adding two types of watermarks to all images generated by DALL-E 3, adhering to standards set forth by the C2PA (Coalition for Content Provenance and Authenticity). The change is already rolling out to images generated through the website and via the API, and mobile users will start getting the watermarks on Feb. 12.

The first of the two watermarks exists strictly within the image's metadata. You'll be able to check an image's creation data using the Content Credentials Verify website, as well as other tools like it. The second watermark is a visible CR symbol in the top-left corner of the image.

[Image: the new C2PA watermarks as they appear in images created by DALL-E 3 and ChatGPT. Credit: OpenAI]
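If you're curious whether a file already carries the metadata watermark, the Content Credentials Verify site is the authoritative check, but you can do a rough test locally first. The short Python sketch below is our own heuristic, not anything OpenAI or the C2PA ships: it simply scans the raw bytes for the JUMBF/C2PA markers these manifests are typically wrapped in, and it can't validate signatures or tell you who signed anything.

```python
#!/usr/bin/env python3
"""Rough local check for an embedded C2PA manifest.

Heuristic sketch, not a verifier: it scans a file's raw bytes for the
JUMBF/C2PA markers that Content Credentials manifests are wrapped in.
It cannot validate signatures; use the Content Credentials Verify site
or official C2PA tooling for real verification.
"""
import sys

# "jumb" is the JUMBF superbox type; "c2pa" labels the C2PA manifest store.
SIGNATURES = (b"jumb", b"c2pa")


def looks_like_c2pa(path: str) -> bool:
    """Return True if the file contains byte patterns typical of a C2PA manifest."""
    with open(path, "rb") as f:
        data = f.read()
    return any(sig in data for sig in SIGNATURES)


if __name__ == "__main__":
    for path in sys.argv[1:]:
        verdict = "possible C2PA metadata" if looks_like_c2pa(path) else "no C2PA markers found"
        print(f"{path}: {verdict}")
```

Run it with one or more image paths as arguments; a hit just means the markers are present, not that the credentials are genuine.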

It's a good change, one that moves DALL-E 3 in the right direction when it comes to properly identifying when something was made using AI. Other AI systems use similar watermarking in their metadata, and Google has implemented its own watermark, SynthID, to help identify images created with its image generation model, which recently made the jump to Google Bard.

As of this writing, only still images will carry the watermark; video and text remain watermark-free. OpenAI says that adding the watermark to the metadata shouldn't create any latency issues or affect the quality of image generation. It will, however, slightly increase file sizes in some cases.

If this is the first you're hearing of it, the C2PA is a group that includes companies like Microsoft, Sony, and Adobe, all of which have continued to push for Content Credentials watermarks to help identify whether images were created using AI systems. In fact, the Content Credentials symbol that OpenAI is adding to DALL-E 3 images was created by Adobe.

While watermarking can help, it isn't a surefire way to keep AI-generated content from spreading misinformation. Metadata can still be stripped, by taking a screenshot, for instance, and most visible watermarks can simply be cropped out. Still, OpenAI believes these methods will encourage users to recognize that these "signals are key to increase the trustworthiness of digital information," and that they will lead to less abuse of the systems it has made available.
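To see just how fragile the metadata half of this is, consider that any re-encode behaves like a screenshot. Here's a minimal Pillow sketch (the filenames are hypothetical, and it assumes Pillow is installed) showing that simply decoding and re-saving an image produces a new file with none of the original's metadata containers, C2PA manifest included.

```python
"""Sketch of how fragile metadata watermarks are. Requires Pillow
(pip install Pillow). Decoding and re-saving an image, much like taking
a screenshot, writes fresh pixel data and discards the original file's
metadata containers, C2PA manifest included."""
from PIL import Image

# Hypothetical filenames, used here for illustration only.
SRC = "dalle3_output.png"
DST = "reencoded.jpg"

with Image.open(SRC) as im:
    # Only the decoded pixels carry over to the new file; none of the
    # source's ancillary chunks or metadata segments are copied.
    im.convert("RGB").save(DST, format="JPEG", quality=90)
```

The visible CR badge fares no better: a quick crop is all it takes to remove it.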


