Jason Hiner/ZDNET

Follow ZDNET: Add us as a preferred source on Google.

ZDNET's key takeaways point to a shifting smartphone landscape. The debut of the Google Pixel 10 series, which builds AI into the core of the device, has exposed several weaknesses Apple must address if the upcoming iPhone 17 is to stay competitive. Many leading AI services are already available on iOS as third-party apps, but Apple's devices lack the deep, system-level integration that Android, through the Pixel 10's pairing of hardware and AI-driven software, has clearly demonstrated.

The sharpest distinction between Apple's next handset and Google's flagship now centers on computational photography, particularly the new AI-enhanced camera tools. Apple offers access to nearly every major AI app, from OpenAI's ChatGPT to Anthropic's Claude, but those experiences remain confined to their apps rather than embedded in the system, which limits how effortless or indispensable they feel in everyday use. The contrast becomes clearer when you look at the individual capabilities that could transform the iPhone 17 if Apple pursued deeper integration.

1. **ChatGPT's Advanced Voice Mode**
OpenAI's conversational voice mode for ChatGPT works much the way users once hoped Siri would: activate it, and you can ask questions, retrieve information, or issue simple commands in natural back-and-forth speech. ZDNET's Sabrina Ortiz has shown that workarounds, such as mapping ChatGPT's voice mode to the iPhone's Action Button, can partially stand in for Siri, but the absence of native system-level integration is still obvious.
An Apple-built equivalent, or a strategic partnership with OpenAI, could give such an assistant broad system access, weaving AI into reminders, scheduling, email, messaging, and device controls, all under Apple's strict privacy rules. Competitors like Google's Gemini Live and Microsoft's Copilot Voice are already advancing here, which raises the urgency for Apple to respond.

2. **Pixel 10's Computational Super Zoom**
Photography remains one of the key battlegrounds in premium smartphones. Enthusiasts still turn to dedicated cameras for telephoto clarity, but Google's Pixel 10 Pro tries to close that gap with its Pro Res Zoom feature. It pushes digital zoom to an astonishing 100x by intelligently filling in detail the sensor cannot capture, producing images that remain impressively usable despite the extreme magnification. Apple's current iPhones offer nothing comparable. Given Apple's long track record in computational photography, the iPhone 17 needs an equally ambitious answer if it is to rival Google's progress.

3. **Google's Intuitive Magic Cue**
At WWDC 2024, Apple previewed a more personal, context-aware Siri as part of Apple Intelligence, promising assistance drawn from private, on-device data such as calendar appointments, emails, and text messages. Apple has yet to deliver on that commitment. Meanwhile, Google has moved decisively ahead with Magic Cue on the Pixel 10. Running on-device on the Tensor G5 chip, Magic Cue anticipates what you need, surfacing dinner reservation details from Gmail directly inside a messaging thread, for example, with almost no effort from the user. That kind of contextual AI is where the competitive edge now lies.
Google's implementation is capable and efficient, but many consumers would still prefer an Apple counterpart built around Apple's privacy-first stance.

4. **Deep Research via Anthropic**
Long-form research assistance is another transformative AI capability. Tools such as Anthropic's Claude now offer a deep-research mode that spends minutes rather than seconds exploring multiple sources and returning carefully attributed results, moving beyond quick answers to comprehensive, citation-rich reports. Reports suggest Apple has actively explored a collaboration with Anthropic, raising the real possibility of building this capability directly into Siri. An integrated deep-research mode on iOS would let users launch detailed inquiries instantly, removing friction while bolstering accuracy.

5. **Google Photos' Best Take**
Group photos have long been plagued by closed eyes, awkward smiles, and mistimed glances. Google's answer is Best Take, a joint effort of Pixel hardware, Google Photos, and its AI research teams. The feature analyzes a burst of successive frames, selects each subject's best pose or expression, and composites them into a single cohesive image. On the Pixel 10 series, the upgraded Auto Best Take does this seamlessly in the background. Apple, with its powerful imaging pipeline, could readily build a similar feature, or even explore a licensing partnership, and eliminate one of the most common frustrations in shared photography.

6. **Expansive Multilingual Support**
Translation is where AI's global benefits show most vividly. Google Translate supports more than 100 languages; Apple's in-house Translate app handles only about 20.
Smart glasses and other emerging devices from companies such as Meta, Solos, and Brilliant Labs already weave real-time translation across dozens of languages into everyday interactions, underscoring how far Apple lags. By leveraging large language models, Apple could dramatically expand its language coverage, elevate Siri into a truly global interpreter, and embed translation more naturally in live calls, text messages, and visual tools like Visual Intelligence.

7. **Conversational Editing in Photography**
One of the most surprising Pixel 10 updates is conversational image editing in Google Photos. Users simply describe the change they want, whether moving a subject, swapping backgrounds, refining blur, or adjusting lighting, and the AI carries it out, no professional editing skills required. Google's product leads stress that the system is highly sensitive to photo context, so edits disturb the surrounding details as little as possible. This approachable yet advanced capability could prove immensely popular, because it hands ordinary users editing techniques that once demanded real technical skill. An Apple equivalent could pair that creative accessibility with the company's emphasis on authenticity and reliability.

**Final Thought**
The trajectory is clear: Apple faces a considerable challenge in matching the pace of AI integration set by Google and other AI-focused companies. The delayed rollout of Apple's long-promised Apple Intelligence features has not yet hurt iPhone sales or brand prestige, but the continued absence of these capabilities risks making the iPhone 17 look like a follower. Right now, the Pixel 10 can make a convincing claim to being the most intelligent smartphone available, and Apple must act swiftly if it hopes to keep its reputation as the industry leader in design and user experience.

Source: https://www.zdnet.com/article/i-hope-iphone-17-adopts-these-7-features-from-google-openai-and-others/