Meta places new safeguards on teen Instagram accounts, introduces PG-13-guided content filters
Instagram will moderate content for users under 18 using filters guided by the PG-13 movie rating system, the platform's parent company Meta announced on Tuesday in its latest effort to address concerns about teenagers' online safety.
The new system will restrict posts featuring strong language, risky stunts, drugs or other content that "could encourage potentially harmful behaviors." The rules will also apply to Meta's generative AI tools.
Under the safeguards, teenage users will be blocked from following or interacting with accounts found to share age-inappropriate content.
Meta noted that existing policies already prevent sexually suggestive content, graphic or disturbing images, and adult content such as tobacco or alcohol sales from being recommended to teenagers.
"Teen Accounts were already designed to protect teens from inappropriate content and, over the past year, we've further refined our age-appropriate guidelines to hide even more potentially inappropriate content in the updated default 13+ content setting," the company said in its announcement.
"We decided to more closely align our policies with an independent standard that parents are familiar with, so we reviewed our age-appropriate guidelines against PG-13 movie ratings and updated them accordingly. While of course there are differences between movies and social media, we made these changes so teens' experiences in the 13+ setting feel closer to the Instagram equivalent of watching a PG-13 movie," it added.
Teenagers will not be able to opt out of the new system without parental permission, and parents may choose an even more restrictive setting for their children.
Just as a PG-13 movie may include some suggestive content or strong language, Meta said teenagers may occasionally encounter such content on Instagram, but the company is working to keep those instances as rare as possible.
"This is the most significant update to Teen Accounts since we introduced them last year, and builds on the automatic protections already provided by Teen Accounts to hundreds of millions of teens globally," Meta said. "We know teens may try to avoid these restrictions, which is why we'll use age prediction technology to place teens into certain content protections - even if they claim to be adults."
The new system comes amid criticism and lawsuits alleging the company failed to protect teenage users from harmful content or misled them about the psychological harm from its platforms.
A report in September showed several Instagram safety features do not work well.
Meta was also found to have allowed inappropriate chatbot behavior, with bots engaging in "conversations that are romantic or sensual."
The company added safeguards in August for teenagers across its AI products by training its systems to avoid flirty exchanges and discussions of self-harm or suicide with young users.
The new content settings are rolling out in the U.S., UK, Australia and Canada, with a full launch expected by the end of the year.
"We recognize no system is perfect, and we're committed to improving over time," Meta said. "We hope this update reassures parents that we're working to show teens safe, age-appropriate content on Instagram by default, while also giving them more ways to shape their teen's experience."
Meta is also introducing more safeguards for teenage users on Facebook.
Reuters contributed to this report.
via: https://www.foxbusiness.com/technology/meta-places-new-safeguards-teen-instagram-accounts-introduces-pg-13-guided-content-filters
