It’s a mistake to ban under-16s from social media. Ban the algorithms instead.

Dan Barrett

The Australian government has introduced legislation to ban Australians under the age of 16 from social media platforms. The list of platforms to be restricted includes TikTok, Snapchat, Instagram, X, and Reddit, but not messaging platforms like the Meta-owned WhatsApp and Facebook Messenger.

Kids will still be able to use YouTube and Google Classroom. You may see the list of restricted apps, and even the decision to draw the line at 16 years of age, as arbitrary.

In a statement, Prime Minister Anthony Albanese said: “We know social media is doing social harm. We want Australian children to have a childhood, and we want parents to know the Government is in their corner.

“This is a landmark reform. We know some kids will find workarounds, but we’re sending a message to social media companies to clean up their act.”

Under the new laws, social media companies will be slapped with fines of up to $50 million if they fail to do enough to verify a user’s age.

There is no question that something needs to be done about the intrusion of social media networks and other digital platforms into our lives. The evidence is in: they are having a destructive impact on social cohesion and on the mental health of our kids.

But is a blanket ban on kids under 16 accessing a specific set of social media apps the right move?

Good intentions, but a bad approach

The concern of many is that the Government is taking the wrong approach here – barreling its way towards a ‘solution’ that isn’t thought through, harming some teenagers in the process, while also restricting freedom of expression. How this ban will be enforced also isn’t entirely clear.

Independent MP Zoe Daniel has raised concern that a blanket ban on under-16s will just result in social platforms becoming less safe for everyone else.

In an interview with the ABC, Daniel said: “My biggest concern about it really is that it doesn’t substantively change what the platforms need to be doing on their platforms, and there may be an unintended consequence that the platforms actually become less safe.”

“If you were to create a system where the platforms have to take responsibility, mitigate risk and be transparent about how they’re doing that and what tools they’re using, then that sort of provides, potentially, an environment where everyone can be in a safe space.

“What we’re doing is saying, ‘Well, we’re going to lock everyone under 16 out, and then everyone else can do whatever they want in there’.

“And also, we know that some people under 16 will get in. I don’t think that that’s really a good pathway to go down.”

In June, eSafety Commissioner Julie Inman Grant compared a social media ban to water safety restrictions:

“We do not fence the ocean or keep children entirely out of the water but we do create protected swimming environments that provide safeguards and teach important lessons from a young age,” she said. 

Inman Grant’s office released a report into LGBTIQ+ teens’ experiences negotiating connection, self-expression, and harm online. It found that the Internet offers LGBTIQ+ teens a space where they can “hang out, have fun, explore, and express themselves safely and, often, anonymously.”

Many of these spaces and venues for engagement will be limited by the ban, making it harder for LGBTIQ+ teens to find community and support. But there’s also very little difference between why LGBTIQ+ teens use the Internet and social media and why teens (and adults) more broadly use these same platforms.

Teens may be using social platforms to engage with like-minded communities to discuss sports, or romance novels, or comic books, or Star Wars, or all manner of perfectly reasonable interests and activities. Yes, there are harms associated with social media, but it also offers opportunities for inclusion and positive stimulation.

A better solution: Ban the algorithms

The smartest approach we have seen to the problem is New York’s effort to ban the algorithms that power social media platforms rather than restricting access entirely. EducationDaily wrote about the Stop Addictive Feeds Exploitation (SAFE) for Kids Act earlier this year.

The bill called out the addictive feeds that power social media platforms, which elevate extreme content as users chase the dopamine hits that more likes and shares can offer.

The bill passed by the New York legislature justified the algorithm ban:

Addictive feeds have had an increasingly devastating effect on children and teenagers since their adoption, causing young users to spend more time on social media which has been tied to significantly higher rates of youth depression, anxiety, suicidal ideation, and self-harm. Children are particularly susceptible to addictive feeds, which provide a non-stop drip of dopamine with each new piece of media, as they are less capable of exercising the impulse control necessary to mitigate these negative effects, which can stunt development and cause long-term harm. Among girls, the association between poor mental health and social media use is stronger than the associations between poor mental health and binge drinking, obesity, or hard drug use.

Research shows that spending time on social media is ten times more dangerous than time spent online on non-social media. Self-regulation by social media companies has and will not work, because the addictive feeds are profitable, designed to make users stay on services so that children can see more ads and the companies can collect more data.

The idea of a social media platform not powered by an algorithm is difficult for many of us to imagine. The algorithms are so much a part of the platforms we use that they seem intrinsically linked.

Can social media thrive without an algorithm? As coincidence would have it, this month we have been able to watch one thrive.

Blue skies

The big social media story of last week was the rapid adoption of social media platform Bluesky. Initially built as a spin-off from Twitter (since rebranded as X by Elon Musk), Bluesky is a social media platform that is not controlled by an algorithm. Users can opt to view an algorithm-curated ‘Discover’ tab and can subscribe to other users’ curated feeds, but that is all opt-in and isn’t the default way users access the platform.

Instead, the default for Bluesky users is a linear feed of all the posts published by the accounts a user chooses to follow. This is counter to the default feeds of similar social channels X and Threads (owned by Meta), where algorithms decide which posts to show based on how interested they predict the user will be.
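
To make the distinction concrete, here is a minimal sketch in Python of the two feed models. It is not Bluesky’s or X’s actual code; the Post fields and the engagement weights are illustrative assumptions only.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime
    likes: int = 0
    reposts: int = 0
    replies: int = 0

def chronological_feed(posts: list[Post], following: set[str]) -> list[Post]:
    # Bluesky-style default: every post from followed accounts, newest first.
    return sorted(
        (p for p in posts if p.author in following),
        key=lambda p: p.created_at,
        reverse=True,
    )

def engagement_ranked_feed(posts: list[Post]) -> list[Post]:
    # Algorithmic-style feed (illustrative weights only): rank everything by a
    # crude engagement score, so high-reaction posts surface regardless of
    # who the user follows or when the posts were made.
    def score(p: Post) -> int:
        return p.likes + 2 * p.reposts + 3 * p.replies
    return sorted(posts, key=score, reverse=True)

The first function only filters by who you follow and sorts by time; the second rewards whatever provokes the most reaction, regardless of source or recency.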

Spend more than five minutes scrolling on Bluesky and you will find comment after comment from people talking about the civility on the platform. It appears that once the algorithm is not a concern, there is less incentive for people to post extreme hot takes or “rage-bait” to drive attention and clicks.

At the end of October, Bluesky had approximately 13 million monthly users. Bluesky CEO Jay Graber reported mid-week that user numbers had blown past 20 million, with the platform adding around a million users a day. To put things into perspective, X still has around 300 million active users, so Bluesky still has a way to go.

A study has found that Bluesky’s daily active users in the US and UK now exceed those of Threads. A passing fad? Perhaps. But the trend does support the idea that users are actively embracing a platform that isn’t dominated by an algorithm.

The blanket ban

The decision to simply ban apps rather than take a more considered approach is understandable when you appreciate that it comes off the back of more than a decade of frustration from kids, parents, and educators alike.

In an interview with ACA on Thursday, Queensland Catholic Secondary Principals Association director Dan McMahon explained that “Not everyone on Snapchat is an online bully, but in my experience every online bully uses Snapchat.

“It’s just such a great tool to weaponise harm.”

It won’t take kids long to find a new fun, non-legislated platform to message their friends. Or for the bullies to find them. The social media blanket ban isn’t a solution.

Dan Barrett is the Head of Content at EducationDaily's publisher. He is a writer/producer/comms professional who has worked for organisations including SBS, Mediaweek, National Seniors Australia, iSentia, the NSW Dept of Customer Service, and Radio National. He is passionate about the Oxford comma and is one of Australia's earliest podcasters.