Instagram's Reels Algorithm Raises Alarming Concerns: A Deep Dive
28-Nov-2023

Instagram's Reels, the platform's short-form video feed, is under scrutiny after testing by The Wall Street Journal revealed unsettling behavior in its recommendation algorithm. The Meta Platforms-owned app, which is meant to curate content around users' interests, was found recommending inappropriate and sexually explicit content, including content involving children.

Algorithmic Anomalies: A Troubling Revelation
In an attempt to understand how Instagram's Reels algorithm behaves, The Wall Street Journal set up test accounts that followed only young gymnasts, cheerleaders, and other teen influencers on the platform. The findings were alarming: the algorithm served explicit and sexualized content, including videos of children, interspersed with advertisements for major U.S. brands.

The Disturbing Content Mix
The juxtaposition of salacious content featuring minors and explicit adult material with ads from well-known brands raises serious questions about Instagram's content moderation and brand-safety measures. In the test accounts, an ad for a dating app appeared between inappropriate videos, and a Pizza Hut commercial followed a questionable video involving an adult and a child.

Industry Responses and Meta's Stance
A Meta spokesperson responded to The Journal's tests, saying the experience was manufactured and did not represent what billions of users encounter. The company pointed to brand safety tools introduced in October and said it removes or reduces the visibility of videos that violate its standards each month.

Advertisers React
Major companies whose ads appeared alongside the inappropriate content, including Disney, Walmart, and Match Group, expressed concern and took action. Match Group canceled Meta advertising for some of its apps and halted all Reels advertising, citing its unwillingness to market its brands to potentially predatory audiences.

Safety Concerns and Internal Knowledge
Current and former Meta employees acknowledged that Instagram's algorithmic tendency to aggregate child-sexualization content was known internally. Once the system identifies a user as interested in a given kind of content, it recommends more of the same, which can compound the problem.
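To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python. It is not Meta's code, and its names (a tiny topic catalog, a single reinforced topic called "topic_x") are assumptions for illustration: each engagement raises a topic's weight, so the recommender serves ever more of it.

```python
# Hypothetical sketch of an engagement-driven feedback loop -- not Meta's system.
import random
from collections import defaultdict

def recommend(interests: dict, catalog: list) -> str:
    """Pick a topic in proportion to its accumulated interest weight."""
    # The 0.1 floor keeps unexplored topics eligible for recommendation.
    weights = [interests.get(topic, 0.0) + 0.1 for topic in catalog]
    return random.choices(catalog, weights=weights, k=1)[0]

def simulate(steps: int = 50) -> dict:
    catalog = ["cooking", "sports", "topic_x"]  # "topic_x" stands in for any niche interest
    interests = defaultdict(float)
    for _ in range(steps):
        topic = recommend(interests, catalog)
        if topic == "topic_x":                  # assume the user engages only with topic_x
            interests[topic] += 1.0             # engagement reinforces that topic's weight
    return dict(interests)

if __name__ == "__main__":
    # Over time, weight (and thus recommendations) concentrate on the engaged topic.
    print(simulate())
```

Even in this toy model, the weights concentrate quickly on the one topic the user engages with. That is the dynamic employees described: the system narrows toward whatever signal it detects, regardless of what that signal is.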

The Challenge of Video Content Moderation
Automated moderation systems have a harder time parsing video than text or still images. Compounding this, Reels promotes videos from sources users don't follow, so inappropriate content can be recommended to viewers who have no direct connection to its creators.

Broader Fallout and Child-Protection Warnings
The Journal's tests also revealed that ads for Meta's own products, including WhatsApp and Ray-Ban Stories, were placed alongside the inappropriate content. Separately, the Canadian Centre for Child Protection reported that Instagram served content featuring children who appear in the National Center for Missing and Exploited Children's database, raising concerns about the platform's role in sustaining child-exploitation communities.

As Instagram faces increased scrutiny for its algorithmic anomalies, the spotlight is on Meta to address these issues promptly. The revelations underscore the critical importance of refining algorithms, implementing robust content moderation, and ensuring the safety of users, especially minors, in the digital landscape. The disturbing intersection of explicit content, children, and major brands demands immediate attention and action from social media platforms.