Jason Hiner/ZDNET

**ZDNET’s Key Takeaways**
- Google's Pixel 10 smartphones arrive equipped with sophisticated, deeply integrated artificial intelligence capabilities, highlighting a growing vulnerability for Apple's upcoming iPhone 17.
- The Pixel 10 demonstrates how seamless AI woven into both hardware and system-level software can reshape the user experience, raising questions about whether Apple's device risks falling behind in the rapidly advancing AI arms race.

Notably, many of the world's most advanced AI-driven services are already accessible on the iPhone through third-party applications, which suggests Apple could secure deeper partnerships with these AI pioneers to bring richer, more cohesive integrations to its ecosystem. What differentiates the Pixel 10, however, is that it doesn't merely rely on standalone apps: it unlocks exclusive AI experiences that flow directly from the synergy between Google's custom processor, its operating system, and its software design. That system-level integration is a frontier where Apple has yet to fully compete.

A particularly striking example of this divergence lies in the Pixel 10’s AI-enhanced camera suite—a feature set that may become the most visible differentiator between Google’s new phones and Apple’s iPhone 17. While the iPhone enjoys access to essentially every cutting-edge AI application, Apple hasn’t yet delivered a comprehensive ecosystem where such intelligence is deeply baked into the core user experience, spanning the camera, the operating system, and all first-party services. The Pixel 10, on the other hand, shows what happens when AI integration is no longer an afterthought but a central design principle.

### 1. ChatGPT’s Voice Mode
OpenAI’s Voice Mode in ChatGPT represents a transformative reimagining of the digital assistant concept—essentially offering what long-time Apple users have wished Siri would become. With this mode, a user can simply begin speaking naturally, asking a question or giving an instruction. The assistant not only responds conversationally but also retrieves useful information and executes basic tasks. ZDNET’s Sabrina Ortiz has already explained how Voice Mode can be mapped to the iPhone’s Action Button, effectively allowing it to act as a more functional Siri replacement.

However, its limitations remain evident: Voice Mode currently cannot reach into the deeper layers of iOS to manipulate calendar entries, email, native text messages, or settings on the device. Were Apple to either create its own advanced system or formalize a partnership with OpenAI, the iPhone could offer a far richer integration across all its services—achieving productivity gains without compromising Apple’s hallmark commitment to privacy. Meanwhile, Google already boasts Gemini Live, and Microsoft has introduced Copilot Voice, underscoring the urgency for Apple to evolve before Siri risks obsolescence.

### 2. Pixel 10’s Super Res Zoom
The realm of zoom photography highlights one of the few remaining gaps between smartphones and professional-grade cameras. For years, photography enthusiasts had no choice but to turn to dedicated equipment, such as a Sony mirrorless camera paired with a 70–200mm lens, to achieve crisp, detailed zoom. Google's Pixel 10 Pro takes an important step toward closing this gap with its advanced Super Res Zoom feature, which algorithmically reconstructs missing detail and refines digital zoom images at up to 100x, keeping them surprisingly usable.
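Google hasn't published the internals of Super Res Zoom, but one well-known idea behind burst-based computational photography is that averaging several noisy captures of the same scene recovers detail no single frame contains. This toy sketch (all values invented) illustrates only that principle, not Google's actual pipeline:

```python
import random

def capture(scene, rng, noise=0.3):
    """Simulate one noisy burst frame of a 1-D 'scene'."""
    return [p + rng.gauss(0, noise) for p in scene]

def fuse(frames):
    """Average aligned frames pixel-by-pixel -- the core of burst fusion."""
    n = len(frames)
    return [sum(col) / n for col in zip(*frames)]

def mean_abs_error(a, b):
    """How far an image is from the ground truth, on average."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

rng = random.Random(42)
scene = [0.0, 1.0, 0.5, 0.8, 0.2, 0.9]           # ground-truth detail
burst = [capture(scene, rng) for _ in range(16)]  # 16 noisy captures

single_err = mean_abs_error(burst[0], scene)
fused_err = mean_abs_error(fuse(burst), scene)
print(f"single-frame error {single_err:.3f} vs fused {fused_err:.3f}")
```

The fused result lands much closer to the true scene than any individual frame, which is why a burst of imperfect captures can out-resolve a single shot.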

Such developments provoke fascinating questions about the definition of photography itself—whether the result counts as an authentic representation of reality or a computational reconstruction. Regardless, it underscores Google’s growing mastery of computational photography. At present, Apple is the only company with both the resources and heritage in imaging technology to rival Google in this domain.

### 3. Google’s Magic Cue
At WWDC 2024, Apple showed off its vision for a more personal **Apple Intelligence**, promising a Siri that would intuitively understand user context by interpreting data from calendars, messages, and emails. For instance, Apple's keynote offered an example where the device would warn a user that rescheduling a meeting might conflict with ferrying a child to a standing activity. Yet despite its promise, Apple has not released this functionality. Google, instead, has turned conceptual hype into reality.

With the Pixel 10, Google unveiled **Magic Cue**, which reduces the friction of app-jumping by proactively surfacing information from across apps. In one demonstrated scenario, if a friend messages to ask what time your dinner reservation is, Magic Cue automatically extracts the details from your Gmail confirmation and lets you send them back instantly. The processing now occurs locally on the device itself, powered by Google's Tensor G5 chip. While impressive, many users would be more inclined to trust Apple with such private contextual intelligence, since Apple's business model does not depend on monetizing personal data.
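The shape of that dinner-reservation scenario can be sketched in a few lines. This is purely illustrative, not Google's implementation: a real Magic Cue runs an on-device model across apps, where this toy uses a regex over a hypothetical inbox:

```python
import re

def surface_answer(incoming_message, emails):
    """Toy 'Magic Cue'-style lookup: if a message asks about a reservation
    time, pull it from a stored confirmation instead of making the user
    switch apps. Keyword + regex matching stands in for a real model."""
    if "what time" not in incoming_message.lower():
        return None  # only react to questions about a time
    for email in emails:
        match = re.search(r"reservation .*?\bat (\d{1,2}:\d{2}\s?[AP]M)",
                          email, flags=re.IGNORECASE)
        if match:
            return f"Dinner is at {match.group(1)}."
    return None

inbox = ["Your reservation at Luigi's is confirmed for tonight at 7:30 PM."]
print(surface_answer("Hey, what time is dinner?", inbox))
# → Dinner is at 7:30 PM.
```

The hard part in a real system is doing this reliably and privately across every app, which is exactly what the on-device Tensor G5 processing is for.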

### 4. Deep Research from Anthropic
Generative AI’s most practical advantage often lies in its ability to serve as a tireless research assistant. Emerging features commonly called **Deep Research** allow users to pose complex questions and grant the AI additional processing time—ranging from five minutes to half an hour—to carefully analyze sources, resolve ambiguities, and deliver responses that include verifiable citations. Among these options, many find Anthropic’s Claude particularly reliable, thanks to its emphasis on accuracy and transparency.
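The workflow these tools share — gather sources, keep the relevant ones, answer with citations — can be caricatured in a few lines. Real Deep Research features run an LLM over live web results for minutes; this keyword-matching toy (with made-up sources) only shows the loop's shape:

```python
def deep_research(question, sources):
    """Toy sketch of a 'Deep Research' loop: scan sources, keep the
    relevant ones, and report them as numbered citations."""
    keywords = [w for w in question.lower().split() if len(w) > 3]
    findings = []
    for i, (title, text) in enumerate(sources, start=1):
        if any(k in text.lower() for k in keywords):
            findings.append((i, title))
    cites = ", ".join(f"[{i}] {title}" for i, title in findings)
    return f"Found {len(findings)} relevant source(s): {cites}"

# Hypothetical source list; a real system would fetch and read live pages.
sources = [
    ("Tensor G5 overview", "The Tensor G5 chip powers on-device AI."),
    ("Unrelated page", "Weather report."),
]
print(deep_research("What powers on-device AI?", sources))
```

The value of the real feature is in the minutes of careful reading and ambiguity-resolution; what survives here is the principle that every claim stays attached to a citable source.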

Reports suggest Apple has begun exploratory discussions with Anthropic regarding AI collaborations. If Claude’s Deep Research were built into Siri, users could summon in-depth reports with simple voice or text commands, elevating the iPhone into a far more capable intellectual companion.

### 5. Best Take in Google Photos
Google addressed a common frustration known as the “group shot dilemma”—where, in sequences of photos, someone invariably looks away, blinks, or strikes an awkward expression. In 2023, the Pixel 8 introduced Best Take, and in the Pixel 10 this feature matured into an even more autonomous capability. By analyzing a burst of shots, it intelligently composites the most flattering expression of each subject into one optimal image. The new **Auto Best Take** streamlines this further, doing the work in the background.
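The selection step behind a Best Take-style feature reduces to a per-subject argmax over the burst: score each person's expression in each frame, then pull each person from their best frame. The scores below are invented, and a real system derives them from face analysis rather than a hand-written table:

```python
def best_take(burst_scores):
    """For each subject, pick the frame where their expression scored
    highest; the composite may use a different frame per person."""
    subjects = burst_scores[0].keys()
    return {
        person: max(range(len(burst_scores)),
                    key=lambda f: burst_scores[f][person])
        for person in subjects
    }

# One dict per burst frame: subject -> "good expression" score (hypothetical).
burst = [
    {"Ana": 0.9, "Ben": 0.2},   # frame 0: Ben blinked
    {"Ana": 0.4, "Ben": 0.95},  # frame 1: Ana looked away
]
print(best_take(burst))  # composite pulls Ana from frame 0, Ben from frame 1
```

The genuinely difficult parts — detecting faces, scoring expressions, and blending the chosen crops seamlessly — are where Google's computational photography work actually lives.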

The Pixel 10 also continues to refine the **Add Me** feature, which allows the photographer to appear within a group shot by combining multiple photos. Although Apple has exceptional expertise in computational photography, it has yet to match these innovations. Given that Google Photos is already available on iOS, Apple could either license such advancements or develop its own parallel solutions.

### 6. Broadened Language Support
Among the most remarkable strengths of large language models is their fluidity in translation. Increasingly, both smartphones and smart glasses—from Meta Ray-Bans to Solos AirGo 3—are being used as real-time language bridges. Google Translate alone supports over 100 languages, underscoring the astonishing possibilities of AI-driven communication. Apple, however, currently supports only around 20 languages in its native Translate app.

If Apple were to leverage modern LLMs, the iPhone could vastly increase its language coverage, embedding it not just into the Translate app but into Siri itself, enabling things like live multilingual phone calls, context-aware text translations, and visual object identification with translated overlays—all of which could transform the iPhone into a global communicator.
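One reason LLMs broaden coverage is that translation becomes a prompt rather than a per-language-pair model, and context can ride along in the request. This sketch shows only prompt construction; the wording is an assumption, and a real integration would send the prompt to whatever model the vendor chose:

```python
def build_translation_prompt(text, target_lang, context=None):
    """Compose an LLM prompt for translation. Unlike fixed translation
    pairs, the same prompt shape works for long-tail languages and can
    carry situational context. Prompt wording is illustrative only."""
    prompt = f"Translate the following text into {target_lang}."
    if context:
        prompt += f" Context: {context}."
    return prompt + f"\n\nText: {text}"

prompt = build_translation_prompt(
    "Where is the train station?", "Basque",
    context="spoken question from a traveler")
print(prompt.splitlines()[0])
```

Swapping "Basque" for any of hundreds of languages changes nothing structurally, which is the flexibility Apple's roughly 20-language Translate app currently lacks.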

### 7. Conversational Photo Editing
Perhaps the most unexpected yet impressive addition to the Pixel 10 suite is **Conversational Editing** in Google Photos. Users can now describe edits in natural language—such as moving a subject, reducing glare, shifting backgrounds, or adding artistic elements—and the AI executes those modifications seamlessly. What once demanded advanced technical training in programs like Photoshop is now achievable in a few seconds through casual conversation.
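At its core, conversational editing is a mapping from a natural-language request to a sequence of editing operations. A real system uses a vision-language model for that step; this keyword table, with hypothetical operation names, is only a sketch of the request-to-operations translation:

```python
def plan_edits(request):
    """Map a natural-language edit request to a list of editing
    operations. A keyword table stands in for the model that a real
    conversational editor would use; operation names are made up."""
    table = {
        "glare": "reduce_highlights",
        "background": "segment_and_replace_background",
        "move": "subject_relayout",
        "brighten": "raise_exposure",
    }
    ops = [op for key, op in table.items() if key in request.lower()]
    return ops or ["no_op"]

print(plan_edits("Reduce the glare and brighten the whole shot"))
# → ['reduce_highlights', 'raise_exposure']
```

The conversational win is that several edits arrive in one sentence and come back as one coherent plan, instead of a chain of manual tool selections.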

While image alteration raises concerns about authenticity, Google says its AI models are tuned to respect subtle visual details, so edits track the user's intent without distorting the rest of the scene. Given its accessibility and convenience, this feature is poised to grow rapidly in popularity.

### Final Word
Apple now faces a decisive challenge. The company must not only embrace the AI features already flourishing on its devices but also pursue deeper system-level integration to ensure the iPhone remains competitive. While in the past year Apple’s slower rollout of Apple Intelligence may not have significantly eroded its position, the rapid acceleration of innovation from Google, OpenAI, Microsoft, and Anthropic changes the landscape. To prevent the iPhone 17 from appearing like a step backward, Apple must deliver robust innovations that make the device feel visionary rather than cautious. At this moment, Google can justifiably argue that the Pixel 10 is the most intelligent smartphone available—an assertion Apple will need to counter strongly if it hopes to retain industry leadership.

Source: https://www.zdnet.com/article/7-ai-features-id-like-to-see-the-iphone-17-embrace-from-google-openai-and-others/