Australia is stepping firmly into the growing international movement to regulate how young people engage with the digital world, particularly on social media platforms. Beginning on December 10th, nearly all major social networks will be required to remove users younger than sixteen from their Australian services. The directive not only sets concrete limits on children's online participation but also obliges these companies to take what the legislation calls "reasonable" steps to verify age, so they can confirm whether users actually meet the new threshold. Although the rule is intended to strengthen youth protection, critics warn that resourceful children and teenagers will inevitably circumvent the restrictions through technological workarounds or deceptive sign-ups.

This sweeping policy originates from the Online Safety Amendment (Social Media Minimum Age) Bill, which was approved by the Australian Parliament in November 2024 as an expansion of the earlier Online Safety Act 2021. The amendment intensifies accountability for online platforms, requiring them to take proactive, demonstrable steps to identify and remove accounts belonging to individuals under the age of sixteen, as well as to block new sign-ups flagged as Australian by IP address or other location data. At the time of implementation, the provision applies to at least eleven prominent platforms—Facebook, Instagram, TikTok, Snapchat, X, Reddit, YouTube, Twitch, Kick, Threads, and Lemon8—with Bluesky voluntarily adding itself to the list of compliant networks. Although children will still be permitted to browse certain public pages or isolated posts without logging in, they will lose access to personalized features such as curated feeds, posting capabilities, notifications, and the ability to communicate directly with other users.
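As a rough illustration of what that kind of sign-up gating could look like in practice, the sketch below checks an inferred country and a declared birth date before allowing account creation. The function names, the way the country code is obtained, and the cutoff logic are assumptions made for illustration, not any platform's actual implementation.

```python
from datetime import date
from typing import Optional

AU_MINIMUM_AGE = 16  # minimum age set by the Australian amendment

def age_on(birth_date: date, today: date) -> int:
    """Return full years elapsed between birth_date and today."""
    years = today.year - birth_date.year
    # Subtract one year if the birthday has not yet occurred this year.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def may_register(country_code: str, birth_date: date, today: Optional[date] = None) -> bool:
    """Hypothetical sign-up gate: refuse new Australian accounts under 16.

    `country_code` would come from IP geolocation or other location signals;
    how a real platform derives and validates it is outside this sketch.
    """
    today = today or date.today()
    if country_code != "AU":
        return True  # the restriction applies only to Australian users
    return age_on(birth_date, today) >= AU_MINIMUM_AGE

# Example: a declared 14-year-old signing up from an Australian IP is refused.
print(may_register("AU", date(2011, 5, 3), today=date(2025, 12, 10)))  # False
```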

The precise definition of “covered platforms” under this law extends to any online service that exists primarily—or in significant part—to enable social interaction between users, including forums for posting, linking, or mutual engagement. However, it deliberately excludes online gaming services like Xbox Live and standalone messaging apps such as WhatsApp and Messenger. Platforms like Discord, Pinterest, Roblox, and YouTube Kids currently fall outside its immediate purview, though the government reserves the right to broaden the scope later if deemed necessary for child safety. 

Interestingly, the law stops short of prescribing a single, uniform method for confirming a user’s age, instead giving companies latitude in how they choose to implement compliance mechanisms. While they cannot rely solely on a government-issued ID and are forbidden from hoarding data collected during the verification process, they may employ alternative systems like facial recognition, age inference based on behavioral patterns, or AI-driven estimation models. A comprehensive report published by Australia’s Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts explored these options in depth, acknowledging their imperfections but concluding that private and efficient solutions are achievable even though no “one-size-fits-all” method currently exists.
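To make that flexibility concrete, here is a minimal sketch of the kind of layered age-assurance flow the report contemplates: try a low-friction inference first, escalate to a stronger check only when the result is uncertain, and keep nothing but the yes/no outcome once a decision is made. The checker interface, the confidence threshold, and the ordering of methods are assumptions for illustration; the law does not prescribe this structure.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class AgeEstimate:
    over_16: bool       # the only fact the platform ultimately needs to keep
    confidence: float   # 0.0 - 1.0, how sure this particular method is

# Each checker inspects a user and returns an estimate, or None if it cannot
# produce one (for example, the user declined a face scan).
Checker = Callable[[dict], Optional[AgeEstimate]]

def assure_age(user: dict, checkers: List[Checker], threshold: float = 0.9) -> bool:
    """Run progressively stronger checks until one is confident enough.

    Only the boolean outcome leaves this function; inputs such as ID scans or
    face images are never stored, mirroring the law's limits on retaining
    verification data.
    """
    for check in checkers:
        estimate = check(user)
        if estimate is not None and estimate.confidence >= threshold:
            return estimate.over_16
    # No check was confident enough: treat the user as under 16 by default.
    return False

# Hypothetical ordering, cheapest and least intrusive first:
# behavioural inference -> facial age estimation -> document-based check.
```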

Prime Minister Anthony Albanese has publicly lauded the initiative, highlighting its potential to empower parents who worry about the digital environments their children navigate daily. He has emphasized that the measure aims not to suppress youthful curiosity but to preserve what he described as “a genuine childhood.” Supporting voices, including Australian researchers studying digital well-being, have connected the new law to arguments presented in Professor Jonathan Haidt’s 2024 book *The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness*. Haidt’s thesis—that early exposure to social media can profoundly damage adolescent mental health—helped inspire the “36 Months” campaign, which advocates extending the minimum legal age for social media usage from thirteen to sixteen.

Nevertheless, detractors of the legislation warn that the policy may serve as an overly simplistic response to a deeply complex issue. Damini Satija, program director at Amnesty Tech, characterizes the approach as a convenient but superficial “quick fix,” arguing that real progress in youth safety requires broader reforms involving robust data protection, rigorous platform accountability, and better design principles aimed at minimizing harm. Similarly, Reuben Kirkham from the Free Speech Union of Australia contends that tech-savvy users will often sidestep such blockades using common tools such as virtual private networks (VPNs). He cautions that the law endangers both digital privacy and freedom of expression by establishing a precedent for government overreach in a nation historically committed to liberal democratic values. Complementing these concerns, the Digital Industry Group (DIGI)—which represents Meta, Google, TikTok, and other corporations—warned that young users might be driven into obscure, unregulated online spaces, potentially less safe than mainstream networks.

When viewed in the context of similar global initiatives, Australia’s policy diverges significantly in method and scope. The United Kingdom’s Online Safety Act, for instance, stops short of banning adolescents but compels all websites that host potentially harmful material—from violence and hate content to self-harm imagery—to implement age checks before allowing access to minors. Several U.S. states, including Florida, Utah, and Texas, have adopted similar mechanisms with varied strategies—some placing the burden of verification on the platforms themselves, while others transfer it to mobile app stores operated by Apple and Google. A comparable debate is also unfolding within the European Union, where legislators have expressed interest in following Australia’s lead by prohibiting users under sixteen from social networks and AI-based interaction tools unless parental consent is granted.

In anticipation of December’s enforcement deadline, major companies have begun preparing compliance strategies. Meta, the parent of Facebook, Instagram, and Threads, is notifying users to back up their personal data before underage accounts are disabled. The company plans to identify possible underage users through algorithmic “signals,” similar to the system it employs to restrict teen accounts elsewhere. Yet Meta remains a vocal critic of the legislation, arguing that sweeping bans unintentionally isolate young people from supportive online communities. Meta continues to lobby for age verification to happen at the device or app-store level rather than within individual platforms, an approach it argues would satisfy the law while preserving the user experience.

YouTube’s approach mirrors Meta’s in scale but emphasizes automation: users identified as younger than sixteen will be automatically signed out on December 10th, losing access to subscriptions, monetization, and other personalized tools. While they will still be able to consume publicly available videos, their accounts, playlists, and content uploads will be hidden until they reach the appropriate age. YouTube’s public policy team, however, warns that the ban might have unintended consequences, reducing access to protective features such as “Take a Break” reminders and parental supervision tools, which currently help families guide teenagers’ screen habits.

Reddit’s compliance mechanism will hinge on predictive modeling to estimate user age, combining machine learning with confirmation through identity providers like Persona. Suspected underage accounts will face suspension but not outright deletion, allowing users to recover data or permanently erase it if they wish. Reddit has criticized the measure as undermining anonymity and open discourse—core values of its community-driven ecosystem.
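A hedged sketch of how such a two-stage scheme could fit together appears below: a model score gates most accounts, and only borderline cases are referred to an external identity check before any suspension. The thresholds, the triage logic, and the `verify_with_provider` stand-in are invented for illustration; they are not Reddit's or Persona's actual interfaces.

```python
from enum import Enum

class Decision(Enum):
    KEEP = "keep"        # account stays active
    VERIFY = "verify"    # route to an external identity check
    SUSPEND = "suspend"  # lock the account; data is kept for later recovery

def triage(prob_over_16: float, low: float = 0.2, high: float = 0.8) -> Decision:
    """Turn a model's probability that the user is 16+ into an action.

    High-confidence adults are left alone, high-confidence minors are
    suspended (not deleted), and the uncertain middle band is sent to a
    third-party identity provider for confirmation.
    """
    if prob_over_16 >= high:
        return Decision.KEEP
    if prob_over_16 <= low:
        return Decision.SUSPEND
    return Decision.VERIFY

def enforce(account_id: str, prob_over_16: float, verify_with_provider) -> Decision:
    """`verify_with_provider` is a placeholder for an external check (e.g. a
    vendor such as Persona); it returns True if the user proves they are 16+."""
    decision = triage(prob_over_16)
    if decision is Decision.VERIFY:
        decision = Decision.KEEP if verify_with_provider(account_id) else Decision.SUSPEND
    return decision

# Example: an account the model scores at 0.5 is asked to verify;
# if the check fails, it is suspended rather than erased.
print(enforce("u/example", 0.5, verify_with_provider=lambda _: False))  # Decision.SUSPEND
```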

Snapchat has agreed to follow the law but emphasizes that it considers itself primarily a private messaging service rather than a traditional social network. To verify age, Snap will prompt many users to confirm their date of birth with financial credentials, photo identification, or facial verification techniques. Accounts belonging to minors will be locked and, if inactive, eventually deleted after three years. 

The streaming platform Twitch will enforce the age cutoff in two phases, halting new underage registrations on December 10th and deactivating existing ones by early January. Working in partnership with verification firm k-ID, it will employ video-based checks to validate identity. Kick, a rival service, is adopting a similar model. TikTok and its sister app Lemon8, both owned by ByteDance, have also committed to becoming “16+ platforms.” They will rely on multiple verification pathways, including AI-powered facial estimation and government ID scans via Yoti. Users removed for being underage will have the opportunity to download their data or reactivate their accounts once they come of age.

Meanwhile, Elon Musk’s X (formerly Twitter) has signaled hesitancy, expressing doubts about the law’s enforceability and warning that it may function as a covert method of broad internet control. To date, the company has not clarified exactly how it will meet the December deadline, fueling continued debate over whether and how such extensive government mandates can coexist with principles of free expression online.

Collectively, these developments signify a turning point in digital policy—one that tests the tension between personal freedom and collective safety in an increasingly interconnected world. As Australia embarks on this unprecedented regulatory experiment, its outcomes will likely serve as a model, cautionary or otherwise, for every nation striving to redefine what responsible access to the internet should mean for the next generation.

Source: https://www.theverge.com/report/840822/australia-social-media-ban-under-16-response