Anyone using Google Photos as their cloud backup might be familiar with the app's AI-based editing tools. They can correct and enhance colors, and even make dramatic edits such as replacing the background entirely. Google announced yesterday that it is bringing more transparency to Google Photos' AI editing features by tagging such photos to indicate they have been edited with AI. The relevant excerpt from Google's announcement reads: "Google Photos will note when a photo has been edited with Google AI right in the Photos app".
The move is a welcome one at a time when AI-enhanced photos are spreading all over the internet. AI editing in itself is not a problem; it becomes one when AI is used to manipulate reality. For example, someone could claim to have been in one place by showing an AI-edited photo when they were actually somewhere else entirely. Marking a photo as edited with AI is therefore the right thing to do, especially given how quickly generative AI features are scaling.
That said, Google's new approach is far from foolproof. The tag is visible only when you open the details of a specific photo in the Google Photos app or inspect the file's "Properties" metadata. This means that if you post that same photo on social media, viewers will have no indication that it was edited with AI.
Importance Of Transparency In Generative AI
As generative AI of every kind, including text generators, image generators, and AI photo editors, becomes part of our daily lives, the world faces an ethical challenge without precedent. The most important factor is, of course, transparency. While companies are doing their best to make generative AI output look as real as possible, that very realism calls into question the authenticity and actual value of the result.
If you are creating a piece of art or showcasing your photography skills, but you use generative AI to manipulate your original work, or you simply generate the piece from the ground up with artificial intelligence, it loses much of its meaning.
In addition, many of the new AI features let you perform edits that once required a professional. You can erase objects, remove the background entirely and add a new one, and even change how close two subjects appear in a photo with just a few taps and drags. While these abilities are exciting and useful, they are also quite scary. In theory, someone could take your photo and place you wherever they want by editing the background. That is why generative AI edits must be clearly marked and transparent.
Google Enhancing Transparency for AI Edits
Google Photos is arguably one of the most popular cloud storage services for smartphones. While it is not widely known for this, it also offers plenty of editing features. Google Photos' generative AI features are not yet on par with Samsung's AI or the Google Pixel's AI capabilities, but they are improving every day, and it is safe to assume more generative AI-based features will show up in Google Photos soon. As mentioned earlier, while these AI editing capabilities are exciting, they also raise plenty of ethical questions. To address them, Google has announced that Google Photos will start labeling AI-edited photos beginning next week.
Currently, images edited with some of Google Photos' tools, such as Magic Editor, Magic Eraser, and Zoom Enhance, are already marked as edited with AI in their metadata. This means users can see the tag if they look into the file's "Properties". Once the new policy goes live, the tag will also be visible right in the Google Photos app.
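For readers who want to check their own exported files, that metadata can be inspected outside the Photos app as well. Google's announcement does not spell out the exact field it writes, so the sketch below simply scans a JPEG's embedded text metadata (XMP/IPTC blocks are stored as plain text inside the file) for an AI-related marker; the marker string "TrainedAlgorithmicMedia", borrowed from the IPTC digital source type vocabulary, is an assumption and may need adjusting for your files.

```python
# Minimal sketch (not Google's documented method): scan a JPEG exported from
# Google Photos for an AI-edit marker in its embedded text metadata.
# Assumption: the edit is recorded via an IPTC/XMP "digital source type" value
# containing the substring "TrainedAlgorithmicMedia"; adjust the marker if
# your files use something different.
import sys


def looks_ai_edited(path: str, marker: bytes = b"trainedalgorithmicmedia") -> bool:
    """Return True if the marker appears anywhere in the file's raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    # XMP/IPTC metadata is embedded as plain text inside the JPEG, so a simple
    # case-insensitive substring search is enough for a quick check.
    return marker in data.lower()


if __name__ == "__main__":
    for photo in sys.argv[1:]:
        result = "AI-edit marker found" if looks_ai_edited(photo) else "no marker found"
        print(f"{photo}: {result}")
```

Running it as `python check_ai_tag.py photo.jpg` prints whether the marker was found; a dedicated tool such as ExifTool can dump the full metadata if you want to see every field.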
However, the tag is not placed on the photo frame itself. You can still post the photos to social media platforms such as Instagram and X (formerly Twitter), and no one will notice the AI enhancements, provided the tool did a good job editing the original photo. So while this is a step in the right direction, we may need a more robust way of indicating AI edits in the future.