Why Australia's social media ban is failing and why Jimmy Wales is right to be worried

Australia’s under-16 social media ban was supposed to be the "world-first" solution to a mental health crisis. Instead, it's quickly turning into what Wikipedia founder Jimmy Wales calls an "unmitigated disaster."

Since the law took effect on December 10, 2025, the reality on the ground hasn't matched the political rhetoric. I've watched as parents scramble to understand new "age assurance" tech while their kids simply find a way around it. It’s a mess. And honestly, it’s exactly what happens when you try to solve a complex cultural problem with a blunt legislative hammer.

Jimmy Wales and the surveillance trap

Jimmy Wales didn't hold back when he spoke recently at a media event. He branded the ban an "embarrassment" and warned that it’s teaching a generation of children to accept a surveillance state as the status quo.

"When it comes with demands that we adults have to prove our age—identifying ourselves with personally identifying information—this is madness," Wales said. He's got a point. To enforce a ban on kids, platforms like TikTok and Meta are increasingly forced to verify everyone.

We’re essentially telling kids: "To use the internet, you must first hand over your face or your ID to a multi-billion dollar corporation."

The myth of the toxic algorithm

One of the biggest arguments for the ban is the "addictive algorithm." But Wales, who launched Wikipedia in 2001 and has watched online communities evolve since the web's early days, reminds us that the "good old days" weren't that good.

  • Usenet and message boards: Before algorithms, we had unmoderated forums that were arguably more toxic than today's feeds.
  • Human nature: We don't need an algorithm to be mean to each other.
  • Digital serfdom: Wales describes current users as "serfs on the master’s estate," where the rules are set by faceless moderators.

The ban doesn't fix the toxicity; it just moves the "estate" to less regulated corners of the web.

The numbers don't lie

Early data from 2026 suggests the ban is more of a suggestion than a law. A study by the Molly Rose Foundation found that 61% of Australian children who had accounts before the ban still have access to at least one of them.

How? It’s not magic. It’s VPNs, fake birth dates, and "age-blind" platforms.

The Australian government threatened tech giants with fines of up to A$49.5 million. But if more than 60% of the target demographic is still scrolling, the "reasonable steps" required by the law are clearly not enough.

What the ban actually covers

The list of banned platforms includes the big hitters:

  1. TikTok
  2. Instagram
  3. Snapchat
  4. X (formerly Twitter)
  5. Reddit

But here’s the kicker: messaging apps like WhatsApp and Messenger Kids are exempt. So are gaming platforms like Roblox and Discord. If you think a 14-year-old can't find "harmful content" on Discord or a group chat, you haven't been paying attention.

A moral panic in place of education

Wales is right to call this a "massive moral panic." Instead of teaching digital literacy, the government has essentially told parents they don't need to worry because the "ban" will handle it.

I’ve talked to parents who didn't even know that basic parental controls exist on iPhone and Android devices. It's much easier for a politician to pass a ban than to fund a large-scale educational campaign that actually teaches families how to manage tech.

We are trading privacy for a false sense of security. Kids are being pressured into "bad, unsafe behavior"—like using biometric age-check tools that prompt them to turn on their cameras—just to access the digital world.

The unintended consequences of going underground

When you ban a teenager from the "town square" (even a digital one), they don't just stop talking. They go into the alleys.

By pushing kids off mainstream platforms with (admittedly flawed) safety teams, we’re pushing them toward unregulated spaces. These "underground" apps don't care about the eSafety Commissioner. They don't have reporting tools. They don't have "safety prompts."

What you should do instead of relying on the ban

If you’re a parent or an educator, don't wait for the government to fix this. It’s not going to happen.

  • Audit the hardware: Learn the parental control settings on the device itself, not the app. Both iOS and Android have powerful limiters that are harder to bypass than an app-level age check.
  • Talk, don't just block: Ask your kids what they’re seeing. The goal is "digital resilience," not "digital abstinence."
  • Focus on privacy: Teach them why handing over biometric data or ID to a platform is a long-term risk.

The "unmitigated disaster" Jimmy Wales warned about is already here. Australia tried to build a wall around the internet, but they forgot that the internet was built to route around obstacles. The sooner we stop pretending the ban works, the sooner we can start actually protecting kids.

Stop expecting the eSafety Commissioner to be your child's primary filter. Take the phone, open the settings, and start a conversation. It's a lot more work than passing a law, but it's the only thing that actually works.

Owen Evans

A trusted voice in digital journalism, Owen Evans blends analytical rigor with an engaging narrative style to bring important stories to life.