Instagram Teen Accounts don’t keep harmful content from kids
Instagram promises parents that its Teen Accounts shield kids from harm “by default.” Tests by a Gen Z nonprofit and me — a dad — found they fail spectacularly in some key ways.
This spring, Sacramento high school senior Saheb Gulati used a burner phone to create a test Instagram account for a hypothetical 16-year-old boy. Since last fall, accounts for teens are supposed to automatically filter out “sensitive content” to protect their mental health and safety. Instead, Gulati says, the account’s recommendations became obsessed with discussions of “toxic masculinity,” or “what men shouldn’t and should do.”
Four other Gen Z testers from a youth group called Design It For Us ran the same test, and all of them received sexual content. Four of the five also received body image and disordered-eating content, including a video of a woman saying, “Skinny is not a phase, it’s a lifestyle.” The report by Accountable Tech details some of the disturbing content shown to the test accounts; much of it is too graphic to describe here. The danger kids face isn’t just bad people on the internet. It’s also the app’s recommendation algorithm, which decides what your kids see and has a frightening habit of taking them in dark directions.
For lawmakers weighing a bill to protect kids online, the failures of Instagram’s voluntary efforts speak volumes about the limits of self-policing.
When I showed the group’s report to Instagram’s owner, Meta, it said that the youth testers were biased and that some of what they flagged was “unobjectionable” or consistent with “humor from a PG-13 film.”
“A manufactured report does not change the fact that tens of millions of teens now have a safer experience thanks to Instagram Teen Accounts,” Meta spokeswoman Liza Crenshaw said in an email. She said the report was flawed and that, even taken at face value, it identified only 61 pieces of content deemed “sensitive,” less than 0.3 percent of all the content the testers would have seen during the test.

Meta’s definition of “sensitive” includes content that “discusses eating disorders, self-harm or suicide,” or is “sexually suggestive.” But people can disagree about what counts as sensitive. In the first 10 minutes on my own test account, Instagram suggested a video celebrating a man who had passed out after drinking too much. Another showed a ring marketed as a way to take a “bump of snuff,” a design also associated with cocaine use. My account’s recommendations eventually snowballed into a fixation on alcohol and nicotine products like Zyn, which appeared in as many as one in five Reels I watched.

The app selects the images, but seeing them on Instagram over and over has an impact. “The algorithm shapes your perception of what is acceptable in ways I hadn’t realized before,” Gulati told me.
Despite the parts of Teen Accounts that do work, Gulati says, the overall promise “doesn’t seem to have been fulfilled in any meaningful way that changes your experience.”
What worked — and what didn’t
The point of the Gen Z test was to independently evaluate whether Teen Accounts live up to their promises. Alison Rice, campaigns director for Accountable Tech, says going directly to users, the people who can attest to what they experience every day, is key to judging whether the protections actually work. The test accounts varied in age, gender and interests. Gulati’s account, for example, followed only the 10 most popular celebrities on Instagram.
Some of the teen account protections worked. The test accounts were private by default, a setting users under 16 can’t change without parental consent. The app also restricted who could tag and direct message them. Still, one tester received a notification late at night, despite a prohibition on late-night notifications.
And all the testers flagged one giant problem: The app kept recommending content that appeared to violate Meta’s definition of “sensitive.”
When it launched Teen Accounts in September, Meta promised in its news release that “teens will be placed into the strictest setting of our sensitive content control, so they’re even less likely to be recommended sensitive content, and in many cases we hide this content altogether from teens, even if it’s shared by someone they follow.”
Not only did Teen Accounts fail to hide lots of sensitive content, the content they did recommend left some of the young testers feeling awful. Four of the five testers reported distressing experiences while viewing Instagram’s recommendations. That echoes Meta’s own internal research, made public in 2021, which found that 32 percent of teen girls had told the company that when they felt bad about their bodies, Instagram made them feel worse.
Crenshaw, the Meta spokeswoman, said the company was “looking into why a fraction” of the content flagged by the testers and me was recommended. She did not answer my question about how the company’s automated systems determine which content is inappropriate for teens.

In April, the UK-based 5Rights Foundation conducted its own investigation into Instagram Teen Accounts and reported that its test accounts were also shown sexual content, including from one of the same creators Gulati identified.

The Gen Z testers scrolled their test accounts as they would their own, for about an hour a day, liking, commenting on and saving content from the main feed, the Explore page and Reels. Instagram did not consult the creators of those posts, who included professional comedians and marketers. The maker of the Bump Ring, whose snuff-serving device showed up in my test account, said over email that “our material is not created with teen users in mind” and that “we support efforts by platforms to filter or restrict age-inappropriate content.”
Parental controls and shutdown prompts on rival social media app TikTok have also gotten a harsh reception from some parents and advocates. And the state of New Mexico sued Snapchat maker Snap after an undercover investigation surfaced evidence that the app recommends accounts held by strangers to underage Snapchat users, who are then contacted and urged to trade sexually explicit images of themselves.
The battle over protecting kids
Child-advocacy groups have long warned that social media puts teens at risk. The sticking point has been who bears responsibility: parents, the tech companies that make the apps, or the young people themselves.
And by the time Meta unveiled Teen Accounts in September, Congress was on the verge of taking action. The Senate had passed the Kids Online Safety Act, which would have required social media companies to take “reasonable care” to avoid design features that could put minors at risk of self-harm, substance abuse or sexual exploitation. Meta announced Teen Accounts just a day before a House committee was set to consider amendments to the bill, though the company denies it launched the program to head off regulation. Meta says Teen Accounts are working, pointing to a decline in teens being contacted by adult users since it made the changes. But it has offered no internal or external evidence that Teen Accounts are improving teen well-being or protecting kids from harmful content.
Rice, from Accountable Tech, says voluntary programs like Instagram Teen Accounts — even if they’ve gone further than the competition — aren’t living up to their own promises. Her organization favors legal accountability, such as age-appropriate design laws like the one in California, which has been challenged in court.