A new report from The Wall Street Journal claims Meta's algorithm for Instagram Reels served questionable content to test accounts that exclusively followed pre-teen and teen influencers on the platform.

The WSJ conducted an experiment by creating a group of test accounts that exclusively followed pre-teen and teen influencers. The publication then monitored the content Instagram Reels delivered to these accounts, which, according to the report, included "risqué footage of children as well as overtly sexual adult videos". This type of content isn't allowed on Meta's platforms, yet it was being served to accounts designed to simulate a typical child's account.
Additionally, the WSJ reports that ads from some notable brands, such as Disney, Walmart, Pizza Hut, Bumble, Match Group, and The Wall Street Journal itself, were served alongside this questionable content. A similar issue recently occurred on X, the social media platform formerly known as Twitter, where ads from big brands appeared next to antisemitic content, prompting some advertisers to pull or at least pause their spending.
Meta's content algorithm is now under the spotlight, and the company has responded, saying it "would pay for brand-safety auditing services to determine how often a company's ads appear beside content it considers unacceptable."
