Australia may push Apple to block AI apps under new age verification rules

Following last year’s introduction of a ban on social media apps for teenagers, Australia is now tightening age-verification requirements for AI apps. Here are the details.

AI apps under scrutiny over mental health issues

Last year, Australia became the first country to ban social media apps for teenagers, the first national effort of its kind to protect young people’s mental health.

The move followed growing global concern about the mental health impact of social media on young users, a debate that has only intensified since the release of books such as The Anxious Generation by Jonathan Haidt.

Now, starting March 9, AI platforms, including services from companies such as OpenAI, will have to comply with a series of requirements designed to prevent users under 18 from accessing pornography and content depicting extreme violence, self-harm, or eating disorders.

The move also addresses concerns about excessive chatbot use among teens, including fears that emotionally manipulative design features could encourage dependency at a time when the impact of these tools remains unclear:

Australia is yet to experience reports of chatbot-linked violence or self-harm, but [eSafety] has reported being told about children as young as 10 talking to the AI-powered interactive tools up to six hours a day.

eSafety was “concerned that AI companies are leveraging emotional manipulation, anthropomorphism and other advanced techniques to entice, entrance and entrench young people into excessive chatbot usage”, the spokesperson said.

As part of the new rules, Reuters reports that app stores and search engines may be required to block access to non-compliant AI services:

Australia’s internet regulator said it may push search engines and app stores to block artificial intelligence services that fail to verify user ages after a Reuters review found more than half had not made public any steps to comply by a deadline next week.

When Reuters asked for comment, Apple declined to respond.


Still, the company has been rolling out age-related safeguards across its platforms to comply with age-restriction laws worldwide, including systems that rely on signals automatically detected by the device. Adoption of these APIs, as well as compliance with local requirements, ultimately remains the responsibility of individual developers.

As for the situation in Australia, Reuters also reported that compliance remains limited: the majority of the 50 most popular text-based AI tools show no clear steps toward implementing age verification or content filtering ahead of the fast-approaching deadline.

You’re reading 9to5Mac — experts who break news about Apple and its surrounding ecosystem, day after day.

Author

Marcus Mendes

Marcus Mendes is a Brazilian tech podcaster and journalist who has been closely following Apple since the mid-2000s.

He began covering Apple news in Brazilian media in 2012 and later broadened his focus to the wider tech industry, hosting a daily podcast for seven years.