By Agboola Aluko, GLiDE NEWS
Meta, the parent company of Facebook, has expanded its Teen Accounts safety initiative to encompass both Facebook and Messenger, in a move aimed at providing more controlled and age-appropriate experiences for users under the age of 18.
The expanded system introduces tighter restrictions for young users, including mandatory parental consent for live streaming and for turning off default protections such as image filtering in direct messages. Originally launched on Instagram in September 2024, the Teen Accounts feature is designed to place teens in a safer digital environment by default.
According to Meta, more than 54 million teen accounts globally have transitioned into this framework since its introduction. Of particular note, the company claims that 97% of users aged 13 to 15 have retained the built-in protections, which include settings for private accounts and limited exposure to potentially harmful content.
However, the system heavily depends on self-reported age during account setup. While older teens aged 16 and 17 can disable some of the default safeguards on their own, younger teens must obtain parental approval, granted by adding a parent or guardian to their account, before doing the same.
Meta says it has implemented tools such as video selfies to help verify user age, and plans to go further in 2025 by using artificial intelligence to identify teens who may have misrepresented their age. Suspected cases will be moved into Teen Accounts so that the default protections apply.
Despite Meta’s assurances, child safety advocates remain cautious, arguing that the actual impact of these new features remains unproven. Some have criticized the reliance on age self-reporting and called for stronger, more transparent enforcement mechanisms.
The expansion comes at a time when Big Tech firms are under increasing regulatory and public scrutiny over their responsibility to protect younger users in digital spaces that are often fraught with risks.
With these latest adjustments, Meta is signaling a more proactive approach to digital child safety—but whether the measures will deliver meaningful protection remains to be seen.