Very soon I will have my review of the Google Pixel 8 Pro ready. Some of its applications are still being updated, and I want to show in detail how its camera behaves. In the meantime, one of its main novelties has made me reflect: Magic Editor.
Google’s approach is changing. Realism no longer matters; neither does being faithful to what we saw when we took the photo. What matters is that the photograph looks exactly the way we want, even if that means modifying it to the extreme. I don’t know whether it’s brilliant or a terrible idea.
Everything in the cloud. Magic Editor and its headline features come with a significant catch: they only work with photos in the cloud. To use the Pixel 8 Pro’s new exclusive capabilities, photos must be sent to Google’s servers, which means they need a corresponding copy in Google Photos.
This will also apply to the night mode for video that is arriving soon. Our memories will have to live in Google’s cloud so they can be processed online. It is a curious approach, to say the least, considering that the Tensor G3 is supposedly meant to enable better on-device processing.
That’s not your face. “I don’t look good in this photo.” We have all said that at some point, and I have always been clear about how to solve it. The tedious way is to retake the photograph; the quickest way is to take advantage of “Live Photo”-style functions to recover the frame in which that person looks best. A video that can be converted into a photograph: an ideal solution.
Google wants to go further. It wants to combine information from photographs similar to the one we have taken to reconstruct our face using AI. It’s realistic, it’s convincing, and it will fool anyone who sees the photo. But the person in that photo won’t be you. It will be a reconstruction of your face.
That sky, those elements… they do not exist. If the day is cloudy, you can remove the clouds. If it’s not golden hour, you can completely change the lighting. If the river looks still, you can give it flow. If you haven’t jumped high enough, you can elevate yourself via AI and reposition yourself in the frame. A kind of Photoshop for turning photos into whatever we want.
As a photography enthusiast, and speaking personally, on the one hand I defend users’ complete editing freedom. On the other hand, it is starting to get a little scary that even photographs taken with a phone as capable as the Pixel 8 Pro end up edited beyond recognition, with a final result completely different from what the camera captured.
Mobile photography and the opposite path. The expansion of AI on our phones is as inevitable as it is necessary; it will be key to unlocking their maximum potential. However, sticking to mobile photography, I can’t help but be happy when I find just the opposite: reviewing a phone whose camera is dangerously close to an analog camera, even if it achieves that through a combination of hardware and software.
Manufacturers like Samsung, Xiaomi, and even OnePlus are trying to dial back their processing: less artificial photographs that get closer to realistic results. Google has always opted for heavy processing and AI, but this year’s stance is more aggressive than ever.
Image | TechGIndia