According to a report by the Financial Times, OpenAI and renowned designer Jony Ive are confronting a number of formidable technical challenges as they attempt to build an entirely new kind of computing experience: a screenless device powered by advanced artificial intelligence. The project, already the subject of intense speculation across the technology sector, departs from traditional hardware paradigms by seeking to merge voice, vision, and environmental awareness into a seamless interface that dispenses with a display entirely.
The collaboration between OpenAI and Ive stems from a decisive acquisition made earlier in the year. In May, OpenAI purchased the startup io for an extraordinary $6.5 billion. This venture had been co-founded by Jony Ive — celebrated for his era-defining design contributions at Apple — alongside OpenAI’s CEO, Sam Altman. At the time of the acquisition, Altman publicly emphasized that Ive and his creative team would play a pivotal role in shaping what he described as a foundation for a “new generation of AI-powered computers.” Around the same period, Bloomberg’s reporting added further intrigue, indicating that the first wave of hardware arising from this collaboration was tentatively slated for release sometime in 2026.
However, new reporting from the Financial Times suggests the joint endeavor is facing developmental turbulence as the team works to craft a compact, palm-sized device that operates entirely without a screen. Instead of relying on visual interfaces, the device is designed to interpret spoken commands and visual cues from the user's surroundings, analyzing real-world input and responding intelligently to requests. Despite the elegance of this concept, a range of unresolved issues continues to complicate the project's trajectory.
Among the most pressing of these issues are ongoing debates over the device's "personality," that is, how the AI should express itself in a manner that balances friendliness, utility, and discretion, as well as persistent concerns about data privacy and the cloud infrastructure required to support continuous interaction. These factors, combined with the inherent difficulty of fine-tuning a new form of ambient computing, may push back the expected launch timeline.
As one source familiar with the project told the Financial Times, a central ambition is for the device to be "always on": able to listen and process information from its environment without being explicitly activated by a verbal wake phrase. Achieving this seamless responsiveness has proven more complicated than hoped. According to the report, the team has struggled to ensure the AI intervenes only at moments of genuine relevance, speaking or assisting precisely when its input is valuable and then gracefully retreating once the interaction has run its course. In essence, the developers are trying to give the device a sense of judgment akin to human conversational awareness, a task at the heart of the broader challenge of building technology that feels intuitive rather than intrusive.
If realized, this ambitious vision could reinvent how individuals interact with AI-enabled systems, moving beyond screens and keyboards toward a model grounded in natural communication and environmental perception. Yet for now, the intricate balance of design, engineering, and ethical considerations continues to define the demanding journey faced by OpenAI and Jony Ive as they strive to turn their concept into a working reality.
Source: https://techcrunch.com/2025/10/05/openai-and-jony-ive-may-be-struggling-to-figure-out-their-ai-device/