The New AI Photo Features on the Pixel 8 That Impressed Me

The Pixel 8 and Pixel 8 Pro have introduced some impressive image-editing tools, namely Magic Editor and Audio Magic Eraser, which offer exciting, and somewhat intimidating, capabilities for manipulating photos and videos.

In all my years of reviewing personal tech gadgets, I can't recall too many instances where I was genuinely awestruck by a new product. It's essential to maintain a degree of skepticism as a journalist, after all. However, that skepticism wavered when Google showcased a handful of image-enhancing tricks on its new Pixel 8 and Pixel 8 Pro devices.

When considered on their own, these features may seem like something that anyone with experience in Photoshop or video editing software could accomplish. However, what sets the new Pixel phones apart is their accessibility to everyone, and that's both thrilling and slightly daunting. Let's dive into them.

Google had teased Magic Editor during its developer conference in May. It's essentially the next step in the evolution of Magic Eraser, which Google introduced a few years back and which allowed users to remove unwanted objects from their photos, like fire hydrants or background distractions. Magic Editor takes photo manipulation to a whole new level.

During a demonstration, Google showcased a photo of a girl running on a beach. With Magic Editor in the Google Photos app, a spokesperson simply tapped on the subject, and the software accurately cut her out. They then effortlessly moved the subject anywhere within the scene, and the software intelligently filled in the background with what it believed should be there. It's important to note that these were pre-selected photos by Google, but Magic Editor executed the task with remarkable precision.

Magic Editor also introduced the ability to adjust the scene's lighting. If you took a photo at noon with harsh lighting, you could effortlessly transform it into a golden hour shot, complete with warm evening hues—and maybe even add a beautiful sunset!

In another example, there was a photo of a kid getting ready to shoot a basketball from the ground. The spokesperson grabbed the subject in the photo, lifted him into the air to create the illusion of a dunk, and casually mentioned, "You can even move their shadow too!"

Last year, I had a conversation with Ramesh Raskar, an associate professor at the MIT Media Lab, about computational photography and digital photo manipulation, and his insights now seem remarkably prescient. As he saw it, companies are betting that most consumers want to snap a photo, press a button, and get an image they'd truly like to see, regardless of whether it matches reality. Imagine arriving in Paris to find the Eiffel Tower shrouded in haze. What you'd want is a family photo with the tower in the background, as if it were a sunny day. If software can seamlessly insert a bright, sunny Eiffel Tower into the shot, that would make you quite happy.

With Magic Editor, achieving this has become easier than ever. However, there's also the possibility of encountering deceptive images that subtly alter the truth of a scene, similar to the AI-generated viral images of Donald Trump we saw over the summer. There is some hope for truth-seekers, as Google claims that metadata will indicate whether Magic Editor was used. Nonetheless, it's easy to strip metadata from images, so the effectiveness of this safeguard remains uncertain.
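To see why metadata is a weak safeguard, here is a minimal sketch of how trivially EXIF data can be dropped from a JPEG. The marker values (SOI `0xFFD8`, APP1 `0xFFE1`, SOS `0xFFDA`) come from the JPEG format itself; the function simply copies every segment except APP1, the one where EXIF metadata lives. This is an illustration, not a claim about how any specific stripping tool works.

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Drop APP1 (EXIF) segments from a JPEG byte stream, keeping the image data."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")  # keep the start-of-image marker
    i = 2
    while i < len(jpeg_bytes):
        marker, = struct.unpack(">H", jpeg_bytes[i:i + 2])
        if marker == 0xFFDA:  # start of scan: copy the compressed data verbatim
            out += jpeg_bytes[i:]
            break
        # each segment records its own length (covering the length field + payload)
        length, = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        if marker != 0xFFE1:  # keep every segment except APP1, where EXIF lives
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

A few lines of code are enough to remove any "Magic Editor was used" flag while leaving the picture untouched, which is exactly why the safeguard's effectiveness is uncertain.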

We've all been in situations where group photos include someone looking away or with closed eyes. Best Take promises to bring relief to parents of active kids (perhaps with a touch of panic).

When you capture a photo, most smartphones actually take multiple shots at different exposures to ensure a well-exposed image in varying lighting conditions. Google's solution for closed eyes is to select another frame from that burst and replace the closed-eyed face with one where the eyes are open.
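Google hasn't detailed how Best Take scores the candidate frames, but the selection step can be sketched conceptually as picking, for each face, the burst frame where some "eyes open" score is highest. Everything below, the scores, the face IDs, and the function name, is a hypothetical illustration of that idea:

```python
def best_frames(burst_scores):
    """Pick the best frame per face from a burst.

    burst_scores: one dict per frame, mapping face_id -> eyes-open score
    (higher is better). Returns a mapping face_id -> index of best frame.
    """
    best = {}  # face_id -> (best score so far, frame index)
    for frame_idx, scores in enumerate(burst_scores):
        for face_id, score in scores.items():
            if face_id not in best or score > best[face_id][0]:
                best[face_id] = (score, frame_idx)
    return {face: idx for face, (score, idx) in best.items()}
```

For example, if "mom" blinks in frame 0 but not frame 1, while "kid" looks away in frame 1, the sketch would pair each face with its better frame, which is the compositing decision Best Take then applies.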

This concept is reminiscent of a feature Google introduced years ago called Top Shot, which suggests a potentially better frame from a series of photos taken when you tap the shutter button. However, Best Take can pull a frame from a sequence of up to six photos taken within seconds of each other—particularly useful if the photographer snapped several shots in quick succession.

I observed as the spokesperson selected a person's face and cycled through various versions of the face from recent images and other frames. Simply choose the face you want (a peculiar sentence to write) to perfect your group photo. Google assured me that it doesn't generate facial expressions but relies on an on-device face recognition algorithm (Google Photos can already detect familiar faces) to match images.
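Google hasn't published how its on-device matching works, but face recognition systems commonly represent each face as an embedding vector and match faces by cosine similarity. Here is a toy sketch of that general technique; the embeddings and names are made up:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def match_face(query_embedding, candidates):
    """candidates: dict of face_id -> embedding; return the closest match."""
    return max(candidates, key=lambda face: cosine(query_embedding, candidates[face]))
```

The point is that the system only needs to decide which known face a new crop belongs to, which is why Google can swap in a real frame of the same person rather than generating an expression from scratch.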