Just. Type. Faster. Those three terse words capture a striking claim about the modern race to build artificial general intelligence, or AGI. If anyone needed a signal of how intensely the artificial intelligence community (sometimes called "AI-land") is pushing toward that milestone, consider that one of its leading figures has identified something as seemingly mundane as human typing speed as a major obstacle slowing progress.
Alexander Embiricos, head of product for Codex, OpenAI's coding-focused agent, elaborated on this view during a recent appearance on "Lenny's Podcast." There, he pointed to what he called an "underappreciated limiting factor" in the pursuit of general artificial intelligence: not the sophistication of neural architectures or a shortage of computational power, but the pace at which humans can type and juggle multiple writing and prompting tasks at once. In his view, the speed of the human input interface is emerging as one of the last bottlenecks between today's AI and its fully generalized, human-level counterpart.
To understand the magnitude of what he means, it helps to recall what AGI represents. Artificial general intelligence is not merely a smarter or faster version of existing AI systems; it is a still-theoretical form of intelligence capable of reasoning, learning, and adapting across essentially any intellectual domain as effectively as, or more effectively than, a human being. This vision has long served as the grand objective for nearly all leading AI organizations, each aspiring to be the first to reach the decisive moment when machine intelligence becomes universally capable, self-directing, and broadly generalizable.
Embiricos explained his reasoning with an example from AI-assisted coding workflows. Even if a highly capable agent observes and carries out much of the work automatically, if its outputs must still be checked, validated, or corrected by a human operator, the process remains constrained by that human's reviewing speed. "You can have an agent watch all the work you're doing," he observed, "but if that agent lacks the ability to validate its own output, you, the human, are still stuck spending valuable time verifying every line of code before moving forward." In effect, the faster the AI becomes, the more glaring a drag human verification is on overall efficiency.
From Embiricos's perspective, the next great leap forward requires freeing humans from these manual tasks of writing prompts and validating results, since our cognitive and physical response speeds simply cannot match the potential velocity of machine reasoning. The implication is clear: building systems in which the agent is "default useful," capable of performing valuable work without constant supervision or slow human input cycles, is the path to unlocking exponential, or "hockey-stick," growth. In the language of technology and business, a hockey-stick pattern describes a trajectory that stays relatively flat for a time and then suddenly accelerates upward, mirroring the bent shape of a hockey stick. Embiricos used the metaphor to describe how productivity could surge once machines are empowered to act more independently within robustly designed frameworks.
Nevertheless, he cautioned that there is no single straightforward route to a completely automated pipeline. Every field or operational context will require its own tailored approach for determining when and how to grant agents greater autonomy. Yet despite this complexity, he expressed confidence that tangible progress toward this explosive growth phase is approaching quickly. Starting as early as next year, he predicted, pioneering adopters—innovative individuals and small teams—are likely to witness the first sharp increases in productivity as they learn to rely more fully on semi‑autonomous AI collaborators. In the years that follow, he expects to see progressively larger organizations adopt similar techniques, thereby scaling these gains across the entire technological ecosystem.
According to Embiricos, it will be somewhere between these two stages—when the early innovators begin demonstrating extraordinary output boosts and before the largest corporate entities fully automate all concept‑to‑execution pipelines—that the world may finally glimpse the arrival of true AGI. Productivity improvements achieved by these early systems, he said, will feed directly back into the research institutions and AI labs developing next‑generation models. That feedback loop—where performance gains from real‑world automation accelerate the speed of further AI innovation—will in his assessment signal that humanity is standing at the threshold of genuine artificial general intelligence. “That hockey‑sticking,” Embiricos concluded, “will be flowing back into the AI labs, and that’s when we’ll essentially have reached AGI.”
Source: https://www.businessinsider.com/openai-artificial-general-intelligence-bottleneck-human-typing-speed-2025-12