Meta launches new safety features for teens and removes accounts that sexualize children.

MEXICO CITY (AP) — Instagram's parent company, Meta, has introduced new safety features aimed at protecting teens using its platforms, including information about accounts that message them and an option to block and report accounts with a single tap.
The company also announced Wednesday that it has removed hundreds of thousands of accounts that left sexualized comments on, or requested sexual images from, adult-run accounts of children under 13. Of these, 135,000 were leaving sexualized comments, and another 500,000 were linked to accounts that "interacted inappropriately," Meta said in a blog post.
The strengthened measures come as social media companies face increasing scrutiny over how their platforms affect the mental health and well-being of younger users. This includes protecting children from predatory adults and scammers who ask for—and then extort—naked images.
Meta said teenage users blocked more than 1 million accounts and reported another 1 million after seeing a "safety notice" reminding people to "be cautious in private messages and to block and report anything that makes them uncomfortable."
Earlier this year, Meta began testing the use of artificial intelligence to determine whether children are lying about their ages on Instagram, which is technically allowed only for users 13 and older. If a user is found to be misrepresenting their age, the account automatically becomes a teen account, which carries more restrictions than an adult account. Teen accounts, which the company made private by default in 2024, restrict private messages so that teens can only receive them from people they follow or are already connected with.
Meta is facing lawsuits from dozens of U.S. states accusing it of harming young people and contributing to the youth mental health crisis by knowingly and deliberately designing features on Instagram and Facebook that hook children on their platforms.