In the past month, General Motors aligned itself with a growing cohort of global carmakers intent on developing a new form of partially automated operation known as “eyes-off driving.” The technology marks a significant step toward the dream of vehicles capable of managing most driving tasks independently. Yet despite the boldness of the move, GM conspicuously avoided giving any detailed account of how it intends to assume legal and ethical responsibility when the inevitable mishaps and collisions occur as human oversight diminishes.
It is important to distinguish GM’s proposed “eyes-off” automation from the commonplace, hazardous version of eyes-off driving that plays out daily whenever inattentive motorists glance at phones or otherwise let their attention drift. GM’s advance could serve as an essential stepping stone toward its long-term ambition: delivering privately owned, fully autonomous vehicles. Some of the company’s current models already feature the much-publicized Super Cruise system, which allows a driver to remove their hands from the wheel while a gaze-tracking camera ensures their eyes stay on the road. GM’s forthcoming iteration, which aims to meet the criteria of Level 3 autonomy on the industry’s six-level scale of driving automation, will go further, permitting drivers to take both their hands off the steering wheel and their eyes off the road under particular conditions on select American highways.
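To make the distinction concrete, here is a minimal sketch in Python that models the two tiers as a small data structure. The attribute names and the `may_watch_a_movie` helper are purely illustrative assumptions, not any automaker’s actual specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    """One tier of the six-level driving-automation scale (illustrative)."""
    level: int
    name: str
    hands_on_required: bool   # must the driver hold the wheel?
    eyes_on_required: bool    # must the driver watch the road?
    driver_is_fallback: bool  # must the driver take over on request?

# Illustrative encoding of the two modes discussed in the article.
SUPER_CRUISE = AutomationLevel(2, "Level 2 (e.g., Super Cruise)",
                               hands_on_required=False,
                               eyes_on_required=True,
                               driver_is_fallback=True)

EYES_OFF = AutomationLevel(3, "Level 3 (conditional automation)",
                           hands_on_required=False,
                           eyes_on_required=False,
                           driver_is_fallback=True)  # the catch: still on call

def may_watch_a_movie(mode: AutomationLevel) -> bool:
    # Eyes-off is permitted only when the system, not the driver,
    # is doing the monitoring -- yet the driver remains the fallback.
    return not mode.eyes_on_required

print(may_watch_a_movie(SUPER_CRUISE))  # False
print(may_watch_a_movie(EYES_OFF))      # True, but takeover duty remains
```

The point the encoding makes is the catch at Level 3: the eyes-on requirement disappears, but the fallback duty does not.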
The automaker publicly announced its intention to debut this Level 3 system by 2028, beginning with the Cadillac Escalade IQ luxury SUV. Over time, the technology is expected to cascade into the broader GM family, encompassing brands such as Chevrolet, Buick, and GMC. The company envisions an era in which drivers can glance at smartphones freely, even streaming videos or fiddling with entertainment systems, without fear of regulatory reprisal or safety condemnation, provided the automated system is engaged in approved scenarios. Nonetheless, this liberty is conditional. At Level 3, the driver cannot entirely abdicate responsibility; they must remain alert enough to retake control immediately if the system signals disengagement or encounters a situation it cannot handle. Failure to respond promptly could expose the driver not only to moral scrutiny but also to legal culpability for resulting accidents. And in the unpredictable theater of modern roadways, complications are inevitable.
Dr. Alexandra Mueller, senior research scientist at the Insurance Institute for Highway Safety, captured the essence of this complexity succinctly: “With conditional automation, Level 3 automation, things get messier.” The ambiguity surrounding driver accountability and system reliability underscores deep anxieties within both scientific and legal communities. “That’s where I think a lot of concerns are coming from,” Mueller continued, noting that much remains unknown about how these systems behave in unpredictable real-world circumstances.
GM is hardly alone in this pursuit. Other automotive heavyweights, including Ford, Stellantis (Jeep’s parent company), and Honda, are investing heavily in Level 3 development. Mercedes-Benz, meanwhile, has already introduced a comparable system under the name Drive Pilot, though its deployment is legally permitted only along specified highways in California and Nevada. Herein lies the paradox of progress: while the technology advances rapidly, the regulatory landscape lags behind. Most jurisdictions have yet to authorize Level 3 systems at all, with only limited approvals in places such as Germany and Japan, where BMW and Honda offer restricted versions of the technology. Until lawmakers clarify the legal framework for conditional automation, its widespread adoption will remain constrained.
For regulators, the central conundrum is daunting: How can liability be allocated within an environment where operational control oscillates fluidly between machine intelligence and human agency? Mercedes-Benz, in a rare gesture of corporate accountability, has pledged to accept liability for any collisions caused by Drive Pilot while it operates autonomously. Yet even that admission is laden with caveats. If the driver ignores prompts to retake control or deliberately misuses the system, the manufacturer’s guarantee dissolves.
Tesla’s Level 2 systems—Autopilot and Full Self-Driving—have already capitalized on this murky middle ground. A series of investigations into several dozen Tesla crashes noted that Autopilot often disengages mere moments, sometimes “less than one second,” before impact. This technicality, though not direct evidence of misconduct, leaves the impression that the system exits just in time to shift legal burden back to the driver. The technology’s growing complexity makes determining responsibility even murkier, with sensor data—ranging from cameras and infrared trackers to steering torque sensors—serving as potential evidentiary tools in post-crash analyses.
When GM unveiled its new “eyes-off” initiative, CEO Mary Barra emphasized how the proliferation of sensors across its vehicles will aid the company in reconstructing accident events. “We’re going to have so much more sensing that we’re going to know pretty much exactly what happened,” she assured, suggesting that data transparency might shield GM from unwarranted blame. “And I think you’ve seen General Motors always take responsibility for what we need to.” Her comments underline the belief that digital forensics may become the new front line in automotive litigation.
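Barra’s point about sensor-driven reconstruction, and the “less than one second” finding above, boil down to a deceptively simple query over timestamped telemetry. The sketch below uses a hypothetical log schema (the field names and the one-second window are assumptions for illustration, not GM’s or Tesla’s actual data formats) to show how the answer can flip depending on the window an investigator chooses.

```python
from dataclasses import dataclass

@dataclass
class TelemetryFrame:
    t: float                  # seconds before impact (0.0 = impact)
    automation_engaged: bool  # was the system in control at this instant?
    speed_mps: float

def engaged_within(frames, window_s: float) -> bool:
    """Was the automation engaged at any point in the final `window_s`
    seconds before impact? This mirrors the question raised by the
    Autopilot crash investigations."""
    return any(f.automation_engaged for f in frames if f.t <= window_s)

# Hypothetical log: the system hands back control 0.8 s before impact.
log = [
    TelemetryFrame(t=2.0, automation_engaged=True,  speed_mps=31.0),
    TelemetryFrame(t=0.8, automation_engaged=False, speed_mps=30.5),
    TelemetryFrame(t=0.0, automation_engaged=False, speed_mps=28.0),
]

print(engaged_within(log, window_s=1.0))  # False: already handed off
print(engaged_within(log, window_s=3.0))  # True: it was driving moments earlier
```

A query over the final second reports that the human was in control; widen the window by two seconds and the picture changes entirely, which is precisely why the timing of disengagement is so contentious.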
The design of Level 3 autonomy itself harbors a fundamental contradiction: humans are explicitly told they may disengage their attention, yet are simultaneously required to remain perpetually prepared for immediate reengagement. When the handoff from automated to manual control is anticipated—for instance, when entering or exiting a mapped highway zone—the transition can occur seamlessly. However, unplanned disruptions, such as abrupt weather changes, debris, or evolving traffic conditions, pose significant challenges. Decades of cognitive research suggest that human drivers experience difficulty resuming control after prolonged passive monitoring, a phenomenon known as “out-of-the-loop performance degradation.” When thrust unexpectedly back into command, drivers might respond erratically—oversteering, braking too harshly, or freezing entirely—each action carrying serious risks.
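The handoff logic that paragraph describes can be sketched as a tiny state machine. The states, the ten-second grace window, and the minimal-risk fallback below are illustrative assumptions (real systems certify their own takeover budgets and fallback maneuvers), but they capture the structural bind: an eyes-off driver is always one timer away from being the fallback.

```python
import enum

class Mode(enum.Enum):
    AUTOMATED = "automated"        # system drives; eyes-off permitted
    TAKEOVER_REQUEST = "takeover"  # system asks the human to resume
    MANUAL = "manual"              # human drives
    MINIMAL_RISK = "minimal_risk"  # e.g., slow down and stop safely

GRACE_WINDOW_S = 10.0  # assumed takeover budget; real values vary by system

def step(mode: Mode, driver_hands_on: bool, seconds_since_request: float) -> Mode:
    """One tick of a simplified Level 3 handoff state machine."""
    if mode is Mode.TAKEOVER_REQUEST:
        if driver_hands_on:
            return Mode.MANUAL        # driver resumed in time
        if seconds_since_request > GRACE_WINDOW_S:
            return Mode.MINIMAL_RISK  # driver stayed out of the loop too long
    return mode

# An out-of-the-loop driver who never responds ends in the fallback state.
mode = Mode.TAKEOVER_REQUEST
print(step(mode, driver_hands_on=False, seconds_since_request=12.0))
# Mode.MINIMAL_RISK
```

The uncomfortable state is TAKEOVER_REQUEST: everything the out-of-the-loop research describes, the oversteering, the harsh braking, the freezing, happens inside that grace window.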
Dr. Mueller warns that the problem extends beyond individual behavior to the broader chaos of shared road environments. “The mixed fleet scenario, which is going to exist probably well beyond our lifetime, offers a highly uncontrolled environment,” she explained. “Highly automated systems—even partially or conditionally automated ones—will struggle indefinitely because we live in a chaotic and dynamic world where change is constant.” Her words echo a long-recognized reality: human unpredictability and the inherent variability of real-world driving remain the most daunting obstacles to seamless automation.
Already, legal systems are grappling with crashes involving autonomous or semi-autonomous technologies, and courts have generally placed accountability on the human operator. In Arizona, the safety driver overseeing an Uber self-driving test vehicle pleaded guilty to endangerment after a fatal 2018 crash that occurred while the system was controlling the car. In another case, a Tesla driver was convicted of vehicular manslaughter following a crash in which Autopilot was active. These prosecutions suggest that, for now, the law views human drivers as the final arbiters of safety, even when automation is engaged. Automakers may quietly welcome these judgments, knowing they shift liability away from corporations. Yet not all outcomes favor manufacturers. In a recent Florida case, Tesla was found partially responsible for a fatal Model S crash, with a jury awarding $243 million in damages to the victims’ families, an unambiguous demonstration that corporate accountability can coexist with driver negligence.
Mike Nelson, a trial attorney specializing in emerging mobility law, observes that jurisprudence in this field remains in its infancy. Decisions made today concerning Level 2 systems will inevitably influence future rulings about Level 3 and beyond. However, because most legal professionals and juries lack the technical literacy required to interpret such complex systems, the resulting landscape is marked by uncertainty. Nelson advises automakers to act with radical transparency in this transitional era. “Juries appreciate honesty,” he noted, explaining that companies earn goodwill when they acknowledge faults rather than concealing them. Ultimately, he views the current upheaval not as a surprise but as a recurring pattern that accompanies every industrial transformation. “I’m not happy about the chaos,” he conceded, “but this is not unforeseen. This has happened every time we’ve had an industrial revolution.”
As humanity stands at the threshold of a new age, one defined by blurred boundaries between machine precision and human judgment, the promise of “eyes-off driving” invites both excitement and apprehension. The future may arrive sooner than our legal systems, institutions, and moral codes are prepared to meet it.
Source: https://www.theverge.com/transportation/812439/eyes-off-driving-level-3-legal-liability-crash