This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on the decline of the once-promised open internet, and the patterns of control and commercialization driving it, Adi Robertson is a reliable guide. The Stepback arrives in subscribers’ inboxes every week at 8AM ET; you can subscribe online to get each issue directly.

In 2018, two years after the British government decided to require age verification for adult content websites, policymakers floated an almost surreal proposal: the so-called “porn pass.” This physical card, purchased in person at a local shop by showing valid identification, was meant as a low-tech mechanism for online age verification. It would carry encoded authentication data that let users prove they were over eighteen without revealing who they were, an attempt to preserve anonymity in an era increasingly defined by digital exposure. Unsurprisingly, the idea of buying a physical credential to view online pornography drew widespread ridicule. It also underscored the contortions regulators went through in trying to reconcile the legitimate goal of child protection with personal privacy and technological practicality. The whole framework was eventually deemed unworkable and officially abandoned in 2019, which at the time looked like the end of government-led age verification ambitions.
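The pass’s technical details were never published, but its privacy pitch can be made concrete with a minimal sketch: a credential that carries nothing but a signed over-18 claim and an expiry, so a site can check the claim without ever learning who bought the card. Everything below is hypothetical illustration, not the actual design; a real scheme would need asymmetric signatures so that websites could verify passes without being able to mint them.

```python
import hashlib
import hmac
import json
import secrets
import time

# Hypothetical issuer key. A real deployment would use public-key
# signatures; HMAC is used here only to keep the sketch self-contained.
ISSUER_KEY = secrets.token_bytes(32)

def issue_pass(valid_days: int = 365) -> dict:
    """Mint an anonymous over-18 attestation: no name, no ID number."""
    claims = {
        "over_18": True,
        "expires": int(time.time()) + valid_days * 86400,
        "nonce": secrets.token_hex(16),  # random, unlinkable to the buyer
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "tag": tag}

def verify_pass(token: dict) -> bool:
    """Check the signature and expiry; learn nothing about the holder."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False
    return token["claims"]["over_18"] and token["claims"]["expires"] > time.time()

token = issue_pass()
print(verify_pass(token))  # True: age confirmed, identity never transmitted
```

The shape of the idea really is that simple; the hard parts, unforgeability, revocation, and resale of passes, are exactly where the practical and privacy problems begin.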

Yet, as it turned out, 2019 was merely the first battle in a much longer struggle over digital identity and access control. By 2025, the momentum had shifted dramatically: advocates of age verification had not only regained influence but were winning significant policy victories across multiple jurisdictions. In the United Kingdom, the sweeping Online Safety Act extended mandatory verification beyond adult entertainment to large swaths of social media. The European Union and Australia entered trial phases for comparable rules, stirring intense political and ethical debate; Canada is weighing similar frameworks; and in the United States, the Supreme Court broke with longstanding precedent by permitting age verification for adult content and, at least temporarily, for social networking platforms. Despite fervent criticism from privacy advocates and free-speech defenders, who warned that such systems would erode anonymity and chill expression, many governments have continued to advance these initiatives, and major corporations that once resisted are now strategically choosing compliance.

The natural question arises: what changed between 2019 and 2025 to make once-unpopular verification laws politically viable? The simplest answer may be that the internet’s omnipresence has begun to weary its users. Once celebrated as a revolutionary medium for creativity and knowledge, it has increasingly been recast as a source of harm, manipulation, and moral decay. Skeptics of age verification had long insisted that even those indifferent to explicit material should be wary of regulations that fragment access to educational or artistic online content. Yet a cross-ideological wave of disillusionment now questions whether the internet retains meaningful social value at all. This erosion of faith has, in turn, made sweeping restrictions more palatable.

However, early implementations of these new verification systems have already validated many of the skeptics’ fears. The UK’s rollout under the Online Safety Act proved an almost textbook demonstration of every foreseeable complication. Users faced a confusing variety of verification providers, each collecting ID scans or facial data and thereby creating vast new security exposure in the event of a breach. Determined users easily circumvented the restrictions through clever workarounds, such as pointing age-estimation cameras at video game photo modes, revealing the futility of airtight gating. VPN usage surged sharply, prompting political discussions, later denied, about banning VPNs altogether. Meanwhile, legitimate communities and sites began gating content that many would consider perfectly appropriate for minors, and smaller independent web services simply withdrew from the UK market altogether.

In the United States, implementation has been more diffuse, with some states experimenting independently even before the Supreme Court’s landmark ruling. The platform Bluesky, for example, chose to block users in Mississippi after courts allowed the state’s age-gating law to take effect, saying it had no practical way to track which of its users were minors. On both sides of the Atlantic, the evidence now points to one outcome: age verification disproportionately burdens small and mid-sized online services, the ones least economically equipped to build such elaborate compliance systems.

Only one major milestone remains unfulfilled, a catastrophic leak of personal data collected explicitly through a mandated verification system, but the warning signs are mounting. In one recent incident, a third-party service working with Discord was compromised, exposing sensitive government ID information belonging to as many as 70,000 users. Not long before that, the hack of the dating safety app Tea illustrated the devastating personal consequences of having one’s official identification leaked into the public domain. Each breach underscores the same reality: once personal identification data is linked to online behavior, the potential damage is immense and enduring.

Even if some societal benefits eventually materialize from these strict verification regimes—perhaps better protection for minors, for instance—they are likely to take years to measure and may never fully offset their immediate drawbacks. Some of the justifications most often invoked for these policies rest on dubious empirical grounds, such as unsubstantiated claims that adult content impairs neurological development. Others involve issues that remain scientifically and culturally unsettled, like assessing whether pervasive social media consumption truly correlates with worsening mental health among teenagers. Yet most tragically, the systems are motivated by very real incidents of online harassment, sexual coercion, and exploitation—issues that certainly deserve redress but may well require more targeted, less intrusive interventions than total identity surveillance.

Politically, the Online Safety Act has become a flashpoint within the United Kingdom. It obliges platforms not only to verify ages but also to file comprehensive risk assessments of content exposure. Prime Minister Keir Starmer has publicly affirmed his commitment to protecting young internet users, and the government, responding to a parliamentary petition, reiterated that it has no plans to repeal the Act. The legislation has nonetheless provoked backlash from US-based sites such as 4chan, often associated with far-right online movements, and from political figures like Nigel Farage, who has vowed to overturn it on free speech grounds should his party gain power. Momentum elsewhere continues regardless: the European Union is advancing its pilot verification programs, and Australia aims to have its rules in force by December.

In the United States, the regulatory landscape remains complex. While age verification for pornographic content appears likely to persist, universal requirements across all social platforms face much heavier constitutional scrutiny. Although the Supreme Court temporarily allowed Mississippi’s statute to move forward, its accompanying commentary implied serious doubts about the law’s ultimate legality. The greatest uncertainty may lie with hybrid sites—those that host explicit material alongside expansive general content, such as Reddit or Bluesky—forcing difficult distinctions about what constitutes “adult” digital space.

Meanwhile, major American tech companies are engaged in aggressive lobbying wars to determine who bears liability. Meta, for instance, has championed laws that shift verification responsibility upward to app-store operators like Apple and Google. Predictably, those companies oppose such measures, preferring that accountability remain with individual services. Regardless of outcome, a trend is emerging: major platforms, from YouTube to Roblox, are independently strengthening age verification infrastructure even in the absence of strict law. Advanced algorithms now analyze behavioral patterns and account metadata to approximate user age, but when these automated systems err—as they often do—users are required to upload identification documents anyway, undermining much of the intended convenience and privacy.
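Those inference pipelines are proprietary, but the logic the paragraph describes can be sketched as a scorer over weak account signals with an “unknown” bucket that escalates to document upload. Every signal name and threshold below is invented for illustration; real platforms use far richer features and trained models rather than hand-set weights.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AccountSignals:
    # Illustrative features only; not any platform's actual schema.
    account_age_days: int
    stated_birth_year: int | None
    follows_school_accounts: bool
    daytime_weekday_activity: float  # share of activity during school hours

def estimate_is_adult(s: AccountSignals) -> str:
    """Return 'adult', 'minor', or 'unknown' from weak metadata signals."""
    score = 0.0
    if s.stated_birth_year is not None:
        if date.today().year - s.stated_birth_year >= 18:
            score += 0.5  # self-reported age, easily falsified
    if s.account_age_days > 5 * 365:
        score += 0.3      # long-lived accounts skew older
    if s.follows_school_accounts:
        score -= 0.4
    if s.daytime_weekday_activity < 0.2:
        score += 0.2      # mostly inactive during school hours
    if score >= 0.6:
        return "adult"
    if score <= -0.2:
        return "minor"
    return "unknown"      # in practice: escalate to an ID or face scan

print(estimate_is_adult(AccountSignals(2200, 1990, False, 0.1)))  # adult
```

The failure mode described above lives in that last branch: every adult the heuristics misjudge gets routed to precisely the ID upload the inference layer was supposed to make unnecessary.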

It bears noting that neither Europe nor North America pioneered digital identity enforcement. South Korea began mandating real-name registration for internet use as far back as 2004, while China has imposed wide-reaching restrictions on when and how minors may go online, including limits on gaming hours. Yet South Korea’s approach has repeatedly been scaled back by courts citing free expression and practical problems, whereas China’s program operates as one pillar of a vast surveillance architecture, one that now polices online expression as mild as displays of sadness.

Across many countries, legislators are increasingly introducing “child safety” proposals that, while ostensibly unrelated to explicit verification, effectively create the same outcome. Any regulation prescribing special treatment for minors implicitly requires age differentiation—and therefore identification—at scale. Nevertheless, viable alternatives do exist. Governments could instead direct greater funding toward investigative units combating child exploitation or enact focused privacy legislation that curbs invasive advertising and data collection practices for users of all ages. The EU and the UK already maintain comprehensive privacy frameworks offering such avenues; the United States, by contrast, lacks an equivalent, leaving it particularly ill-prepared to manage the complex ethical terrain now unfolding.

Ultimately, as the United States embarks upon measures that risk dismantling online anonymity, it does so from a position of vulnerability rather than strength. Deep structural deficiencies in cybersecurity, consumer privacy, and regulatory coherence make the country spectacularly unfit to manage the unintended consequences of these initiatives. Current safety proposals—however well-intentioned—may only worsen problems of surveillance, inequality, and digital insecurity that they were designed to solve.

Adi Robertson’s column, The Stepback, remains an essential chronicle of how power, culture, and code converge to shape the contours of online life.

Source: https://www.theverge.com/column/798159/age-gating-internet