American News Group

Inside the Google Pixel 6 cameras’ bigger AI brains and upgraded hardware

During this week’s Pixel 6 launch event, Google demonstrated a handful of AI-powered photography tricks built into its new phones. Among the capabilities: erasing photobombers from backgrounds, taking the blur out of smudged faces and handling darker skin tones more accurately. Those features, however, are just the tip of an artificial intelligence iceberg designed to produce the best imagery for phone owners.

The $599 Pixel 6 and $899 Pixel 6 Pro employ machine learning, or ML, a type of artificial intelligence, in dozens of ways when you snap a photo. The features may not be as snazzy as Face Unblur, but they show up in every photo and video. They’re workhorses that touch everything from focus to exposure.

The Pixel 6 cameras are AI engines as much as imaging hardware. AI is so pervasive that Pixel product manager Isaac Reynolds, in an exclusive interview about the inner workings of the Pixel cameras, had to pause when describing all the ways AI is used.

“It’s a hard list because there’s like 50 things,” Reynolds said. “It’s actually easier to describe what’s not learning based.”

All the AI smarts are possible because of Google’s new Tensor processor. Google designed the chip, combining a variety of CPU cores from Arm with its own AI acceleration hardware. Plenty of other chip designs accelerate AI, but Google’s approach paired its AI experts with its chip engineers to build exactly what it needed.

Camera bumps become a mark of pride

With photos and videos so central to our digital lives, cameras have become a critical smartphone component. A few years ago, phone designers strove for the sleekest possible designs, but today, big cameras signal to consumers that a phone has high-end hardware. That’s why flagship phones these days proudly display their big camera bumps or, in the Pixel 6’s case, a long camera bar crossing the back of the phone.

AI is invisible from the outside but no less important than the camera hardware itself. The technology has leaped over the limits of traditional programming. For decades, programming has been an exercise in if-this-then-that determinism. For example, if the user has enabled dark mode, then give the website white text on a black background.

With AI, data scientists train a machine learning model on an immense collection of real-world data, and the system learns rules and patterns on its own. For converting speech to text, an AI model learns from countless hours of audio data accompanied by the corresponding text. That lets AI handle real-world complexity that’s very difficult with traditional programming.
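The contrast between the two approaches can be sketched in a few lines. This is a toy illustration, not anything from Google's pipeline: the dark-mode function shows a hand-written rule, while the second part "trains" a trivial one-parameter model on invented brightness samples so the rule is discovered from data rather than coded by hand.

```python
# Traditional if-this-then-that programming: the rule is written by a human.
def dark_mode_css(dark_mode_enabled: bool) -> str:
    if dark_mode_enabled:
        return "color: white; background: black"
    return "color: black; background: white"

# Machine learning flips this around: start from labeled examples and let
# the system find the rule. Here a one-parameter "model" learns a scene
# brightness (lux) threshold separating "night" from "day" shots.
# The sample data below is invented purely for illustration.
samples = [(2, "night"), (5, "night"), (9, "night"),
           (40, "day"), (120, "day"), (300, "day")]

def learn_threshold(data):
    """Pick the candidate threshold that misclassifies the fewest samples."""
    candidates = sorted(lux for lux, _ in data)
    def errors(t):
        return sum((lux < t) != (label == "night") for lux, label in data)
    return min(candidates, key=errors)

threshold = learn_threshold(samples)

def classify(lux):
    """Apply the learned rule to a new scene."""
    return "night" if lux < threshold else "day"
```

A real model has millions of parameters instead of one, but the workflow is the same: feed in labeled examples, let the system fit the rule, then apply that rule to new inputs.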

The AI technology is a new direction for Google’s years-long work in computational photography, the marriage of digital camera data with computer processing for improvements like noise reduction, portrait modes and high dynamic range scenery.
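One computational-photography idea mentioned above, noise reduction, rests on a simple statistical fact: averaging several captures of the same scene suppresses random sensor noise. The sketch below is not Google's merge algorithm, just the underlying principle, with a simulated pixel value and noise level invented for demonstration.

```python
import random

random.seed(0)       # fixed seed so the sketch is reproducible
TRUE_SCENE = 100.0   # "true" brightness of a single pixel

def capture():
    """Simulate one noisy sensor read of that pixel."""
    return TRUE_SCENE + random.gauss(0, 10)

single = capture()                       # one frame: noisy
burst = [capture() for _ in range(256)]  # a burst of frames
merged = sum(burst) / len(burst)         # averaging cancels random noise
```

Averaging N frames shrinks random noise by roughly the square root of N, which is why burst-based merging underpins features like night photography on smartphones.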

Google’s AI camera

The Pixel 6 uses AI across its camera pipeline.

Each year, Google expands its AI uses. Earlier examples include a portrait mode to blur backgrounds and Super Res Zoom to magnify distant subjects.

On top of that are the snazzier new AI-powered photo and video features in the Pixel 6: Real Tone to accurately reflect the skin of people of color; Face Unblur to sharpen faces otherwise smeared by motion; Motion Mode to add blur to moving elements of a scene like trains or waterfalls; and Magic Eraser to wipe out distracting elements of a scene.

Better camera hardware, too

To make the most of the debut of its first Tensor phone processor, Google also invested in upgraded camera hardware. That should produce better image quality, which serves as the foundation for all the processing that happens afterward.
