Australia’s online watchdog has accused the world’s largest social media companies of not adequately implementing the country’s ban on under-16s using their platforms, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including allowing banned users to repeatedly attempt age verification and inadequate safeguards to stop new account creation. In its initial compliance assessment since the ban took effect, the regulator found numerous deficiencies and has now moved from monitoring to active enforcement, warning that platforms must demonstrate they have implemented “appropriate systems and processes” to stop under-16s from using their services.
Regulatory Breaches Uncovered in First Major Review
Australia’s eSafety Commissioner has documented a concerning pattern of non-compliance among the world’s biggest social media platforms in her first formal review since the ban came into effect on 10 December. The report shows that Meta, Snap, TikTok and YouTube have collectively failed to establish adequate safeguards to prevent minors from accessing their services. Julie Inman Grant expressed particular concern about systemic weaknesses in age verification processes, highlighting that some platforms have allowed children who originally declared themselves under 16 to subsequently claim they were older, effectively circumventing the law’s intent.
The findings mark a significant escalation in regulatory action, with the eSafety Commissioner moving beyond monitoring to active enforcement. The regulator has emphasised that merely demonstrating some children still maintain accounts is inadequate; platforms must instead furnish substantive proof that they have established robust systems and processes designed to stop under-16s from creating accounts in the first place. This shift reflects the government’s commitment to hold tech giants responsible, with potential penalties looming for companies that fail to meet their statutory obligations.
- Permitting previously banned users to re-verify their age and restore account access
- Allowing unlimited attempts at the same age assurance method with no consequences
- Insufficient safeguards to prevent under-16s from opening new accounts
- Limited notification systems for parents and members of the public
- Absence of clear information about enforcement efforts and user account terminations
The Extent of the Issue
The sheer scale of social media activity amongst Australian young people highlights the compliance challenge confronting both the authorities and the platforms themselves. With millions of accounts already removed or restricted since the ban’s implementation, the figures provide evidence of extensive early non-compliance. The eSafety Commissioner’s findings suggest that the operational and technical barriers to enforcing age restrictions have proved considerably more complex than expected, with platforms struggling to differentiate authentic age confirmations from fraudulent ones. This complexity has left enforcement authorities grappling with the fundamental question of whether existing age verification systems are adequate to the task.
Beyond the technical obstacles lies a wider issue about the readiness of companies to prioritise compliance over user growth. Social media companies have long resisted stringent age verification measures, citing data protection concerns and the genuine difficulty of verifying age digitally. However, the regulatory report suggests that some platforms may not be making sufficient effort to implement the systems required by law. The shift towards active enforcement represents a critical juncture: either platforms will substantially strengthen their compliance systems, or they risk significant penalties that could transform their operations in Australia and potentially influence compliance frameworks internationally.
What the Statistics Demonstrate
In the first month after the ban took effect, Australian authorities reported that 4.7 million accounts had been suspended or taken down. Whilst this statistic initially appeared to demonstrate successful compliance, subsequent analysis reveals a more layered picture. The sheer volume of account deletions implies that many under-16s had managed to establish accounts in the first place, revealing that preventive controls were lacking. Additionally, the data prompts questions about whether suspended accounts represent genuine compliance or simply users closing their accounts voluntarily in light of the new restrictions.
The minimal transparency surrounding these figures has frustrated independent observers trying to determine the ban’s actual effectiveness. Platforms have disclosed little data about their compliance procedures, effectiveness metrics, or the characteristics of removed accounts. This absence of transparency makes it challenging for regulators and the general public to evaluate whether the ban is functioning as designed or whether young people are simply finding other ways to use social media. The Commissioner’s insistence on detailed evidence of systematic compliance measures reflects growing frustration with platforms’ reluctance to disclose full information.
Sector Reaction and Pushback
The social media giants have responded to the regulatory enforcement measures with a mixture of assurances of compliance and doubts regarding the ban’s practicality. Meta, which operates Facebook and Instagram, stressed its commitment to adhering to Australian law whilst contending that precise age verification remains a major challenge across the industry. The company has advocated an alternative strategy, proposing that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than enforcement at the platform level. This stance reflects broader industry concerns that the existing regulatory framework places an unrealistic burden on individual platforms.
Snap, the developer of Snapchat, has taken a more proactive public stance, announcing that it had locked 450,000 accounts following the ban’s implementation and asserting it continues to suspend additional accounts each day. However, industry observers dispute whether such figures demonstrate genuine compliance or simply represent reactive account management. The fundamental tension between platforms’ business models—which historically relied on maximising user engagement and growth—and the statutory obligation to actively exclude an entire age demographic remains unresolved. Companies have long resisted rigorous age verification methods, pointing to privacy concerns and technical limitations, creating a standoff between authorities and platforms over who carries responsibility for implementation.
- Meta contends age verification should occur at app store level instead of on individual platforms
- Snap claims to have locked 450,000 user accounts following the ban’s implementation in December
- Industry groups point to privacy issues and technical obstacles as impediments to effective age verification
- Platforms maintain they are doing their best whilst questioning the ban’s overall effectiveness
Wider Considerations About the Prohibition’s Impact
As Australia’s under-16 online platform ban enters its implementation stage, fundamental questions persist about whether the legislation will achieve its intended goals or merely push young users towards less regulated platforms. The regulatory authority’s initial compliance assessment reveals that despite months of implementation, significant loopholes remain—children keep discovering ways to bypass age verification systems, and platforms have struggled to prevent new underage accounts from being established. Critics argue that the ban’s effectiveness depends not merely on regulatory oversight but on whether young people will genuinely abandon mainstream platforms or simply shift towards other platforms, encrypted messaging applications, or VPNs used to mask their location.
The ban’s international ramifications add another layer of complexity to assessments of its effectiveness. Countries such as the United Kingdom, Canada, and various European states are observing Australia’s experiment closely, exploring similar regulatory measures for their respective populations. If the ban proves ineffective at reducing children’s online activity or fails to protect them from dangerous online content, it could weaken the case for equivalent legislation elsewhere. Conversely, if enforcement becomes sufficiently robust to genuinely restrict underage access, it may embolden other nations to adopt comparable measures. The outcome will probably shape worldwide regulatory patterns for years to come, ensuring Australia’s implementation efforts are scrutinised far beyond its borders.
Who Benefits and Who Loses
Mental health campaigners and child safety organisations have endorsed the ban as an essential measure against algorithmic manipulation and exposure to harmful content. Parents and educators maintain that removing young Australians from platforms built to maximise engagement could lower anxiety levels, improve sleep patterns, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with social media use amongst adolescents, adding weight to these concerns. However, the ban also eliminates legitimate uses of social media for young people—keeping friendships alive, accessing educational content, and participating in online communities around shared interests. The regulatory approach assumes harm outweighs benefit, a calculation that some young people and their families question.
The ban’s real-world effects go beyond individual users to impact content creators, small businesses, and community organisations that rely on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now confront legal barriers to participation. Small Australian businesses that depend on social media marketing lose access to younger demographic audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban unintentionally benefits large technology companies with the resources to build age verification infrastructure, arguably consolidating their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend far beyond the simple goal of child protection.
What Follows for Regulatory Action
Australia’s eSafety Commissioner has announced a marked change from hands-off observation to direct intervention, marking a key milestone in the rollout of the youth access ban. The authority will now gather evidence to determine whether companies have failed to take “reasonable steps” to restrict child participation, a legal standard that goes beyond simply noting that children remain on these platforms. This approach requires tangible verification that platforms have introduced appropriate systems and procedures designed to exclude minors. The Commissioner’s office has stated it will conduct enquiries systematically, building cases that could result in significant fines for non-compliance. This shift from oversight to intervention reflects increasing dissatisfaction with the companies’ current approach and signals that voluntary cooperation alone is insufficient.
The enforcement phase raises critical questions about the adequacy of penalties and the operational mechanisms for maintaining corporate accountability. Australia’s statutory provisions offer regulatory tools, but their efficacy relies on the eSafety Commissioner’s readiness to undertake official proceedings and the platforms’ capacity to respond meaningfully. Global regulators, particularly those in the United Kingdom and European Union, will keenly observe Australia’s implementation tactics and their consequences. A successful enforcement campaign could set a model for other nations contemplating similar bans, whilst failure might undermine the wider regulatory framework. The forthcoming period will prove crucial in determining whether Australia’s groundbreaking legislation produces genuine protection for young people or becomes largely performative in its impact.
