If you missed the announcement amid last week’s heavier news cycle, it may surprise you to learn that Google quietly unveiled a smartphone with an unprecedented photography feature: an AI‑driven zoom system built directly into the camera. The newly released Pixel 10 Pro incorporates generative artificial intelligence into its camera software, letting users produce images that remain remarkably clean and detailed even when zooming up to 100x. In practice, that means the phone can turn what would traditionally be noisy, blocky, practically unusable digital zoom captures into something that, at least at first glance, looks crisp and serviceable. But it also raises philosophical and practical questions: when algorithms are essentially inventing what the camera lens cannot see, how much of the resulting image can still be considered a faithful photograph? The results are undeniably compelling, but they also stir real concerns about authenticity. To evaluate Google’s claims, I set the Pixel 10 Pro against a formidable point of comparison: Nikon’s Coolpix P1100.

For those unfamiliar, the P1100 is not merely a consumer compact camera but a true ultrazoom powerhouse. Its optical system covers an enormous range, equivalent to 24–3000mm, which means it can bring subjects miles away into clear view without resorting to digital trickery. Unlike digital zoom, which mathematically enlarges an image, optical zoom gathers more real information by physically moving lens elements. Naturally, the P1100 still processes its imagery with noise reduction, sharpening, and subtle color balancing. But crucially, it never has to fabricate detail out of nothing, because the glass has already captured it.
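To put that optical reach in numbers: a lens’s zoom factor is simply the ratio of its longest to its shortest equivalent focal length, so the 24–3000mm range works out to 125x, all of it gathered by glass. A quick back‑of‑the‑envelope sketch (plain Python, using only the figures above) makes the arithmetic explicit:

```python
# Optical zoom factor = longest focal length / shortest focal length.
short_end_mm = 24      # wide end of the P1100's equivalent range
long_end_mm = 3000     # telephoto end of the P1100's equivalent range

optical_zoom = long_end_mm / short_end_mm
print(f"Nikon P1100 optical zoom: {optical_zoom:.0f}x")  # -> 125x

# At every step of that range the sensor still receives a full frame of real
# light, so no pixels ever have to be invented.
```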

The approach taken by the Pixel 10 Pro is fundamentally different. When you digitally enlarge an image by a factor of ten, twenty, or even one hundred, the camera has to fill massive data gaps. Algorithms step in to infer what might belong in those missing areas, generating details based on probability. Google’s system, which it calls Pro Res Zoom, employs generative AI to make those assumptions with far greater sophistication than earlier upscaling software. And when testing such AI‑assisted zoom, one can hardly resist the classic subject of every camera stress test: the moon.
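To make that data gap concrete, here is a minimal sketch of what a plain digital zoom amounts to: crop the center of the frame and interpolate it back up to full resolution. The sensor resolution below is a hypothetical figure chosen for illustration, not a Pixel 10 Pro specification, and the bicubic resize stands in for classical upscaling rather than Google’s actual Pro Res Zoom pipeline:

```python
from PIL import Image


def naive_digital_zoom(path: str, zoom: float) -> Image.Image:
    """Crop the central 1/zoom of the frame and stretch it back to full size.

    This is all a plain digital zoom does: no new light is gathered, the
    cropped pixels are simply interpolated to fill the frame.
    """
    img = Image.open(path)
    w, h = img.size
    cw, ch = int(w / zoom), int(h / zoom)        # dimensions of the real crop
    left, top = (w - cw) // 2, (h - ch) // 2
    crop = img.crop((left, top, left + cw, top + ch))
    return crop.resize((w, h), Image.BICUBIC)    # classical interpolation, no AI


# How little real data survives at 100x? Assume a hypothetical 8160x6144
# (~50-megapixel) sensor purely for the arithmetic.
w, h, zoom = 8160, 6144, 100
real_pixels = (w // zoom) * (h // zoom)          # 81 * 61 = 4,941 captured pixels
print(f"At {zoom}x, roughly {real_pixels:,} captured pixels remain out of "
      f"{w * h:,}; everything else has to be inferred.")
```

Classical interpolation can only smear those few thousand real samples across tens of millions of output pixels; Pro Res Zoom instead asks a generative model to guess what the missing detail most probably looked like.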

It is worth acknowledging that capturing a sharp photograph of the moon is an ambitious task for any smartphone, even with advanced AI assistance. Google is hardly the first manufacturer to try to mitigate this difficulty with artificial intelligence, though its approach is one of the boldest to date. While the Pro Res Zoom output does resemble the familiar lunar surface, closer inspection exposes artificial textures: a mushy, sponge‑like rendering that departs from the crispness of real optical detail. In side‑by‑side comparisons, the Nikon’s version, relying purely on optical reach, looks far more natural in tonality and structure.

Pro Res Zoom’s real‑world strengths and weaknesses become even clearer with urban landmarks. From a vantage point in downtown Seattle, about a mile away, I photographed Lumen Field on a gray, hazy afternoon. The results demonstrate both the promise and the pitfalls of Google’s AI approach. On the positive side, the Pixel’s algorithms made sign lettering surprisingly legible and cleaned up edges that would otherwise blur into a pixelated mess. On the negative side, the generative AI also erased authentic textural details, turning the stadium’s distinctive metal cladding into an unnaturally smooth surface, as though over‑processed by aggressive noise reduction. The system still wrestles with written text in particular, often producing warped or illegible results.

A similar phenomenon emerges in images of the Starbucks headquarters taken from the same perspective. At a casual glance on a phone display, the AI‑enhanced versions appear decent, even impressive. But enlarging the shot reveals peculiar misinterpretations: street lamps mistaken for windows, or the iconic clocktower subtly warped, as if sketched by Salvador Dalí rather than built by human engineers. These surreal artifacts highlight both the incredible potential and the unpredictable flaws of AI‑driven reconstruction.

Additional challenges appear under more demanding conditions. From roughly three miles away, I pointed both the Pixel 10 Pro and the P1100 at Seattle’s most recognizable landmark, the Space Needle, on a bright day with severe heat haze. Heat haze, the shimmer caused by rising warm air, wreaks havoc on distant photography. Here, the Pixel’s AI struggled dramatically, warping the structure into something reminiscent of a Tim Burton design. Yet the Nikon’s optical reach fared little better against the same physical phenomenon, a reminder that some atmospheric conditions can defeat even the best hardware.

When photographing planes on the tarmac at Boeing Field, heat shimmer again proved unavoidable. The distance itself was not extreme, but the hot asphalt radiating between lens and subject created rolling visual distortions. Surprisingly, this scenario revealed one of the clearest advantages of AI‑assisted imaging: while not perfect, the Pixel’s Pro Res Zoom handled the heat haze far more gracefully, reconstructing recognizable patterns where traditional optics could only deliver wobbling blur. In situations like this, AI enhancement may be the only practical way to salvage a coherent image.

At this point, the conversation inevitably becomes more nuanced. Generative AI is hardly new to the field of photography — many editing programs have been employing it for years, albeit primarily in post‑processing. These tools excel at tasks such as denoising old DSLR files or repairing heavily degraded photographs. Heat distortion, however, represents a particularly intractable challenge, as its chaotic ripples defy conventional corrective software. Increasingly, professional landscape and wildlife photographers are adopting AI‑based editing applications capable of tackling such flaws, far beyond what standard sliders in Lightroom or similar programs can achieve.

The critical difference now is that AI is moving from external editing software into the very moment of capture, embedded directly in the smartphone’s camera application. This shift raises both excitement and concern. On one hand, it empowers average users with advanced correction previously reserved for skilled editors. On the other hand, it blurs the line between authentic photography and algorithmic invention, as the camera is no longer merely recording light but actively imagining details. Does Pro Res Zoom frequently misstep, introducing distortions or fabrications? Yes. But this experiment demonstrates how quickly the future of imaging is evolving, and it seems inevitable that we will see even more ambitious applications of generative AI integrated directly at the point of capture rather than in post‑processing. Whatever one’s stance on the philosophical implications, the Pixel 10 Pro has made it abundantly clear: the definition of a photograph is shifting before our very eyes.

Source: https://www.theverge.com/tech/769360/google-pixel-10-pro-res-zoom-100x-sample-photos-nikon-coolpix-p1100