Meta has unveiled a fresh set of AI-powered measures aimed at strengthening the safety of teenagers across its platforms, as the company steps up efforts to create safer digital experiences for young users on Facebook, Instagram and its other apps.

According to Meta, the new measures are part of its broader commitment to enforcing age-appropriate experiences online and ensuring that users below the age of 13 are not able to access its platforms. The company said it is investing heavily in advanced technologies that can better detect underage accounts and take swift action where necessary.

Meta explained that its systems now rely on a combination of artificial intelligence, product design and parental support tools to identify teen users more accurately, apply stronger protections by default and support families navigating online spaces.

As part of the update, Meta said its AI systems can now analyse a wide range of contextual signals across user profiles, including posts, comments, captions and biographies, to detect indicators linked to age, such as references to school activities or teenage milestones. The company noted that these capabilities are being expanded across more areas of its apps to improve consistency and proactive enforcement.

The company is also introducing advanced visual analysis technology capable of estimating age ranges from photos and videos using broad age-related cues. Meta clarified that the system does not rely on facial recognition or identify individuals, but instead works alongside behavioural and textual signals to improve detection accuracy.

Accounts flagged as potentially underage may now be required to undergo age verification checks, while accounts that fail to confirm age requirements could face removal. Meta said the move is intended to preserve the integrity of its platforms and strengthen compliance with its minimum age policy.

In addition, the company has simplified reporting tools to make it easier for users to flag suspected underage accounts both within its apps and through the Help Center. Meta said the streamlined process would help surface violations faster and improve response times.

To further improve moderation, Meta is expanding the use of AI-assisted review systems that work alongside human moderation teams. According to the company, the technology applies standardised evaluation criteria to reports, enabling quicker and more consistent enforcement decisions.

The tech company also disclosed that it is strengthening safeguards designed to prevent repeated attempts to bypass age restrictions, particularly from users who create new accounts after enforcement actions have been taken.

While several of the AI-driven safety systems are already in operation globally, Meta said some of the more advanced features will be introduced gradually across additional markets in the coming months.

Beyond age-detection measures, the company said it is continuing to expand teen account protections that reduce exposure to inappropriate content and limit unwanted interactions online.
