**ZDNET’s Key Takeaways**
- *Physical AI* is the newest and fastest-growing frontier in advanced technology.
- It extends artificial intelligence beyond digital interfaces into the tangible world, using real-world data to power highly autonomous machines and systems.
- Its earliest forms may already be part of your daily life, possibly in the device resting on your face.
More than three years have passed since the launch of ChatGPT set off a global wave of AI innovation. Models keep growing more capable and versatile, but experts argue that to be genuinely useful in everyday life, AI needs direct access to the environments and activities it is meant to help with. That means moving beyond a chatbot window on a computer or smartphone and into the physical spaces where people live and work, becoming part of the human ecosystem rather than a mere digital observer.
**A New Technological Vocabulary**
This brings us to the industry’s newest and most hotly debated term: *Physical AI*. The phrase was everywhere at the Consumer Electronics Show (CES) last week, where tech giants and emerging players alike showed off hardware and models built to advance the field. Nvidia’s keynote stood out: CEO Jensen Huang compared the arrival of Physical AI to the original release of ChatGPT, capturing the mood on the show floor with his declaration that “The ChatGPT moment for Physical AI is here—when machines begin to comprehend, reason, and act within the real world.”
**Defining Physical AI**
At its essence, Physical AI refers to artificial intelligence embedded in hardware that can perceive its surrounding environment, analyze contextual data, and make informed decisions that lead to meaningful, autonomous actions. Examples include self-driving vehicles, robotic assistants, and intelligent drones. While robots capable of task execution have existed for decades, the evolution toward *reasoning-based* AI marks a decisive shift.
Anshuman Saxena, Vice President and General Manager of Automated Driving and Robotics at Qualcomm, articulates the difference clearly: Physical AI integrates a system’s ability to reason, decide, and act in response to environmental conditions. According to Saxena, true Physical AI entails creating a chain of thought—a cohesive reasoning process akin to a human brain operating within context to take purposeful actions. This represents a move from mechanical obedience to adaptive cognition. For example, a humanoid robot could shift from following predetermined commands, such as moving boxes from one point to another, to independently assessing its environment and determining the most efficient and intuitive method to complete the task.
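To make the perceive-reason-act pattern Saxena describes a little more concrete, here is a minimal Python sketch of such a control loop. Every name in it (the `Observation` structure and the `perceive`, `reason`, and `act` functions) is a hypothetical illustration of the pattern, not Qualcomm’s or any other vendor’s actual software.

```python
# A minimal sketch of a perceive-reason-act control loop.
# All names here are hypothetical illustrations, not any vendor's API.
from dataclasses import dataclass


@dataclass
class Observation:
    """What the machine currently senses: detected objects and their (x, y) positions in meters."""
    objects: dict[str, tuple[float, float]]


def perceive(sensors) -> Observation:
    """Fuse raw sensor readings into a structured view of the scene."""
    return Observation(objects=sensors.detect_objects())


def reason(obs: Observation, goal: str) -> list[str]:
    """Plan actions for the goal from the current scene,
    rather than replaying a fixed, pre-programmed script."""
    if goal == "move_boxes" and "box" in obs.objects:
        x, y = obs.objects["box"]
        return [f"navigate_to({x}, {y})", "grasp(box)", "navigate_to(drop_zone)", "release()"]
    return ["wait()"]  # nothing relevant in view, so do nothing yet


def act(actuators, plan: list[str]) -> None:
    """Send each planned action to the hardware."""
    for step in plan:
        actuators.execute(step)


def control_loop(sensors, actuators, goal: str, max_steps: int = 10) -> None:
    """Repeatedly sense, re-plan, and act, so behavior adapts as the scene changes."""
    for _ in range(max_steps):
        act(actuators, reason(perceive(sensors), goal))
```

The point of the sketch is the structure: the plan is recomputed from fresh observations on every pass through the loop, which is what separates adaptive, reasoning-based behavior from a fixed script.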
**Everyday Examples Already Here**
While such humanoid systems capture attention, the concept of Physical AI extends far beyond futuristic robots. Ziad Asghar, Qualcomm’s Senior Vice President and General Manager of XR, Wearables, and Personal AI, emphasizes that many consumers already own sophisticated manifestations of the concept: “Smartglasses are currently the most striking representation of Physical AI.” These devices operate as intelligent companions that see and hear what the user does, continually processing data from the user’s personal surroundings. This makes them not only digital accessories but also interactive participants in the user’s physical reality.
**A Symbiotic Relationship of Data**
Saxena further explains that humanoid robots will serve a vital supporting role in performing dangerous, repetitive, or undesirable tasks—scenarios where human involvement is limited or impractical. However, these machines are not designed to substitute for humans but to complement and augment human potential. Wearable AI devices, like smart glasses, exemplify this augmentation by expanding the scope of human perception and decision-making capabilities. Importantly, these same wearables may provide invaluable data resources to improve the capabilities of other Physical AI systems such as service robots or automated vehicles.
As Saxena points out, one of the greatest advantages of large language models (LLMs) lies in their access to an immense dataset drawn from the internet. However, similar scale and diversity of *physical* data—real-world sensory and contextual information—remain largely unavailable. The absence of such data poses a significant challenge to the advancement of Physical AI. Because allowing autonomous robots to learn directly in the real world poses safety and ethical risks, companies rely on simulated environments and synthetic datasets to train and test AI systems. At CES, many organizations showcased innovations aimed at alleviating this bottleneck.
Nvidia, for instance, unveiled new models capable of perceiving and interpreting real-world dynamics, thus facilitating the creation of high-quality synthetic datasets that replicate genuine life scenarios for AI training. Qualcomm introduced a comprehensive Physical AI stack that pairs its newly launched Qualcomm Dragonwing IQ10 Series processor with advanced software tools designed to streamline AI data gathering and model development. Together, such efforts bridge the gap between simulation and lived experience.
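As a rough illustration of what synthetic data means in practice, the sketch below randomizes simulated scenes and pairs each one with a ground-truth label. The scene fields and label format are assumptions made up for this example; they do not reflect Nvidia’s or Qualcomm’s actual pipelines.

```python
# A minimal sketch of how a simulator can produce labeled synthetic training
# data, standing in for real-world examples that are hard to collect.
# Scene parameters and label format are illustrative assumptions only.
import json
import random


def sample_scene(rng: random.Random) -> dict:
    """Randomize a simulated scene: object type, position, and lighting."""
    return {
        "object": rng.choice(["box", "pallet", "cart"]),
        "position": [round(rng.uniform(0.0, 5.0), 2), round(rng.uniform(0.0, 5.0), 2)],
        "lighting": rng.choice(["bright", "dim", "backlit"]),
    }


def render_and_label(scene: dict) -> dict:
    """A real pipeline would render sensor frames from the scene;
    here we simply pair the scene with its ground-truth label."""
    return {"inputs": scene, "label": scene["object"]}


def generate_dataset(n: int, seed: int = 0) -> list[dict]:
    """Generate n labeled samples by sampling and 'rendering' random scenes."""
    rng = random.Random(seed)
    return [render_and_label(sample_scene(rng)) for _ in range(n)]


if __name__ == "__main__":
    print(json.dumps(generate_dataset(3), indent=2))
```

Because every sample is generated from a known scene description, the labels come for free, which is exactly what makes simulation attractive when real-world data is scarce or risky to collect.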
Despite technological progress, building reliable, high-fidelity datasets remains both costly and time-intensive. Saxena proposes that wearable AI devices—already widespread and embedded in users’ daily routines—could generate authentic, experience-based data that reflects natural human interactions with the world. He illustrates this idea plainly: sensors in smart glasses continuously record and interpret what their wearer perceives and how they respond to their surroundings. Each gesture, glance, or decision creates a constant stream of contextual data that could feed back into robot learning, effectively forming a loop of information that allows machines to understand the nuances of human behavior within real environments.
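A loop like the one Saxena describes could, in its simplest form, look something like the sketch below: each contextual event from a wearable is anonymized on-device and appended to a dataset that robot training could later draw on. The event fields, the hashing scheme, and the `FeedbackDataset` class are illustrative assumptions, not a description of any real product’s pipeline.

```python
# A minimal sketch of a wearable-to-robot data feedback loop: contextual
# events are anonymized, then accumulated for downstream training.
# Field names and the anonymization scheme are illustrative assumptions.
import hashlib
import time


def anonymize_user(user_id: str, salt: str = "rotate-this-salt") -> str:
    """Replace the raw user identifier with a salted one-way hash."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]


def make_event(user_id: str, gaze_target: str, action: str) -> dict:
    """Package one glance, gesture, or decision as an anonymized contextual record."""
    return {
        "user": anonymize_user(user_id),   # no raw identity leaves the device
        "timestamp": time.time(),
        "gaze_target": gaze_target,        # e.g. "door_handle"
        "action": action,                  # e.g. "reach_and_pull"
    }


class FeedbackDataset:
    """Accumulates anonymized events that robot training jobs could later sample."""

    def __init__(self) -> None:
        self.events: list[dict] = []

    def record(self, event: dict) -> None:
        self.events.append(event)


if __name__ == "__main__":
    dataset = FeedbackDataset()
    dataset.record(make_event("alice@example.com", "door_handle", "reach_and_pull"))
    print(dataset.events[0]["user"])  # hashed identifier, not the raw one
```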
**Privacy and Ethical Considerations**
Naturally, enabling such a data ecosystem raises legitimate concerns about privacy and data ownership. Saxena emphasizes that information gathered through wearable devices must be rigorously protected and anonymized; only with strict privacy safeguards in place can it responsibly contribute to training AI systems. Managed correctly, anonymized datasets could preserve individual privacy while enhancing robotic intelligence, enabling robots to generate new datasets of their own and sustaining a continuously evolving ecosystem.
As Asghar summarizes, sharing contextual understanding and AI capabilities between wearable devices and physical robots is a major step toward a unified technological symbiosis. This collaboration between human-centered wearable AI and environment-aware robotics lays the foundation for a future in which digital intelligence not only processes information but also perceives and participates in the physical world.
In sum, Physical AI is not a distant abstraction; it is an ongoing transformation in how artificial intelligence interacts with human lives. From smart glasses that witness what we see to robots that reason through their own actions, this movement extends AI’s presence from the virtual into the tangible. The horizon of innovation no longer lies solely in code but in the seamless integration of thinking machines into our everyday environment.
Source: https://www.zdnet.com/article/what-is-physical-ai-ces/