
Meta Bans Under-16s from Social Media in Australia

Tech giant Meta announced on Thursday that it will start removing users under the age of 16 in Australia from Instagram, Threads, and Facebook, in anticipation of the country’s pioneering youth social media legislation.

Australia’s new law, set to come into effect on 10 December, requires major online platforms—including TikTok and YouTube—to prevent underage users from accessing their services. Companies that do not take “reasonable steps” to comply may face fines of up to AUD 49.5 million (USD 32 million).

A spokesperson for Meta stated: “While we are working diligently to remove all users we believe to be under the age of 16 by 10 December, compliance with the law will be an ongoing and multi-faceted process.”

The spokesperson added that younger users will be able to save and download their online histories: “Before you turn 16, we will inform you that you will soon be able to regain access to these platforms, and your content will be restored exactly as you left it.”

Instagram alone reported around 350,000 Australian users aged 13 to 15, meaning hundreds of thousands of teenagers will be impacted. Some popular applications, such as Roblox, Pinterest, and WhatsApp, are exempt, although this list is still under review.

Meta reiterated its commitment to adhering to the law but suggested that app stores should also bear some responsibility for age verification. “The government should mandate that app stores verify age and obtain parental consent whenever teenagers under 16 download applications, alleviating the need for teens to verify their age multiple times across different apps,” the spokesperson commented.

“Social media platforms could then utilise this verified age information to ensure that teenagers are engaging in age-appropriate experiences.”

YouTube has also criticised the ban, cautioning that it could leave young Australians “less safe”: those under 16 could still access content without an account but would lose YouTube’s safety filters.

Australia’s Communications Minister, Anika Wells, dismissed this argument as “peculiar”, stating: “If YouTube is telling us that it is not safe and that inappropriate content exists for age-restricted users on their site, that is a problem that YouTube needs to address.”

She pointed out that some Australian teenagers had tragically taken their own lives after algorithms “latched on”, directing them towards content that undermined their self-esteem. “This specific law will not resolve every issue occurring on the internet, but it will facilitate a better path for children to pursue a more positive version of themselves,” Wells remarked.

The legislation has faced legal challenges, with the Digital Freedom Project filing a High Court case last week, labelling the law as an “unfair” attack on freedom of speech.

Guidelines acknowledge that some teenagers may attempt to circumvent the restrictions using fake IDs or AI-generated images, and platforms are expected to devise solutions to prevent this. However, the internet safety watchdog has cautioned that “no solution is likely to be 100 percent effective”.

The Australian law has garnered international attention as regulators across the globe contemplate similar restrictions. Malaysia plans to block under-16s from social media accounts next year, while New Zealand is set to introduce a comparable ban.