1. Overview
Instagram is a leading social media platform owned by Meta that places machine learning algorithms at the core of its content recommendation. With over a billion users sharing photos and videos, it is a prime application domain for AI-driven content moderation and recommendation. Content moderation on Instagram uses AI classifiers to detect and remove content that violates guidelines, while recommendation algorithms personalise what each user sees in their feed and on the Explore page. These algorithms process vast amounts of data in real time, determining which posts are amplified to wider audiences and which are suppressed. Instagram’s shift in 2016 from a purely chronological feed to an algorithmic feed exemplifies this reliance on machine learning to make the most of your time by showing what you supposedly care about most. In practice, this personalisation has a profound impact on user experience and culture: the platform has become a cultural phenomenon that shapes beauty standards and social values. A growing concern is that Instagram’s AI-powered recommendation system appears to promote immodest or sexually suggestive content, amplifying posts that show more skin and revealing attire more aggressively than modest content.
This report is based on a small-scale experimental analysis conducted on Instagram. In one example, a creator uploaded two nearly identical videos: one in which the individual wore modest clothing and another with immodest attire. The modest reel reached 3 million views, whereas the immodest one reached 4.5 million, 1.5 times the reach. Another creator carried out a similar experiment: two videos were uploaded with the same script, hashtags, and settings. The version that included a suggestive pose gained 2.2 million views, while the more modestly framed, non-suggestive version reached only 558,000. These findings highlight growing concerns that Instagram’s AI-powered recommendation system disproportionately amplifies immodest or sexually suggestive content. This report examines that phenomenon in depth, drawing on academic research, industry studies, and the above experiments to critically analyse Instagram’s algorithm: its objectives, the data and patterns it relies on, the actions it takes, and the resulting ethical issues and regulatory implications.
2. State of the Art
These algorithms are designed to maximise user engagement and interaction, but studies show this objective can bias which content gets promoted (Brown, n.d.; Fouquaert & Mechant, 2022). Recent investigations and research shed light on how Instagram’s algorithm amplifies certain types of posts.
An exclusive investigation by AlgorithmWatch revealed that Instagram’s feed algorithm prioritises photos of scantily clad people. In an analysis of 2,400 photos from 37 content creators, posts showing women in bikinis or lingerie and shirtless men were far more likely to appear in feeds than other content such as food or landscapes. Posts with women in swimwear or underwear were 54% more likely to be shown to followers, while images of food or scenery were about 60% less likely, a statistically significant difference indicating systematic amplification of sexually suggestive visuals by the recommendation system (Srivastava, 2023; Kayser-Bril, Richard, Duportail, & Schacht, n.d.).
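To make the reported disparity concrete, relative-likelihood figures of this kind can be reproduced with simple arithmetic. The sketch below is purely illustrative: the counts and base rate are invented so that the ratios match the reported 54% and 60% figures, and this is not AlgorithmWatch’s actual methodology or code.

```python
# Illustrative sketch (not AlgorithmWatch's code): how much more or less
# likely a post category is to surface in feeds, relative to an average post.

def relative_likelihood(shown: int, total: int, base_rate: float) -> float:
    """Return how much more (+) or less (-) likely a category is to be
    shown than the average post, as a fraction of the base rate."""
    rate = shown / total
    return (rate - base_rate) / base_rate

# Hypothetical counts chosen so the ratios mirror the reported findings.
base = 0.20  # assumed average probability that any given post appears in a feed

swimwear = relative_likelihood(shown=308, total=1000, base_rate=base)
scenery = relative_likelihood(shown=80, total=1000, base_rate=base)

print(f"swimwear/underwear: {swimwear:+.0%}")  # +54%
print(f"food/scenery:       {scenery:+.0%}")   # -60%
```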
Researchers in social computing and media studies are actively examining these algorithmic biases. Recent work has explored how Instagram’s algorithm may favour conventionally attractive women and sexualised imagery, linking this to both technical factors (engagement metrics) and societal factors such as advertising and marketing (Narayanan, 2023). There is also growing study of how algorithmic amplification affects vulnerable groups, such as the promotion of pro-eating-disorder or misogynistic content to teens, which further broadens the risks of engagement-optimised systems. Overall, the state of the art reveals an engagement-centric algorithm that, perhaps unintentionally, amplifies sexually suggestive content disproportionately, more so than comparator systems that have implemented stronger content controls (Segarra, 2024; Lane, 2024).
3. Objective of the AI System
The recommendation system is engineered with the primary objective of maximizing user engagement and time spent on the platform. Public communications from Instagram leadership frame this objective in terms of user experience: personalizing content to show what you’ll find interesting. Adam Mosseri, the head of Instagram, explained that Instagram uses “a variety of algorithms, classifiers, and processes” to personalize feeds and “show you content you care about most”, given that users would otherwise miss a majority of posts in an unfiltered feed (Mosseri, 2021; Narayanan, 2023). In essence, the implicit goal of this AI system is to increase metrics like likes, comments, views, and session duration by curating the content most likely to prompt those interactions. From a business perspective, this makes sense. However, researchers, users, and critics argue that this engagement-maximizing objective is not well aligned with societal well-being and can yield unintended consequences. The algorithm’s notion of a relevant, high-quality recommendation is based largely on popularity and interaction, which may not be normatively desirable content. For example, the system has learned that provocative posts trigger quick responses, so it preferentially amplifies them even when they are not healthy or appropriate. By aggressively optimizing for engagement, Instagram’s AI may inadvertently promote divisive or objectifying material that keeps eyes glued to the screen. The objective does not explicitly account for context or fairness, leading to unintended outcomes such as users being exposed to sexual content within minutes of joining, creators feeling pressured to post such content to succeed, and users developing distorted perceptions of reality (Segarra, 2024). In short, while maximizing engagement has driven Instagram’s growth, the objective as currently defined remains questionable from an ethical standpoint.
4. Data and Pattern
Instagram tracks every user interaction: likes, comments, shares, saves, video views, scrolling time, and negative feedback. It also logs account follows, search queries, and how users navigate the app. These behavioural signals help machine learning models predict what content users will engage with. Photos and videos are processed with computer vision to detect scenes, objects, and even nudity levels. Textual data such as captions, hashtags, and comments, along with location, device, and time metadata, informs both moderation and recommendations. Social and demographic data, such as the network of accounts a user follows and engages with, plus age, gender, and inferred interests, also shapes the content shown to the user. All this data comes from user activity and uploaded content, processed under Instagram’s terms of service. AI systems, not humans, analyse it at scale, often without users fully realising how their data powers the algorithm (Segarra, 2024; Narayanan, 2023; Fouquaert & Mechant, 2022).
The data is sourced directly from user activities and interactions on the platform, as well as from user-uploaded content. Instagram collects this data continuously and automatically as users engage with the app.
Instagram’s machine learning algorithms are designed to identify specific patterns to predict which content will generate high engagement. Key patterns include:
• Engagement patterns: Recognizing the types of content that attract user attention and interaction (likes, comments, shares). For example, the algorithm learns that images showing more skin or provocative poses typically receive more user engagement (Narayanan, 2023).
• Visual feature patterns: Detecting visual elements such as clothing style, body exposure, attractiveness, and other visually stimulating features. Posts with such features frequently result in higher engagement, teaching the algorithm to prioritize similar content.
• Textual and contextual patterns: Identifying hashtags, captions, and keywords associated with popular or trending content. For instance, hashtags related to fashion or swimwear might be associated with high engagement and thus be promoted more prominently (Narayanan, 2023; Willcox, 2025).
The algorithm primarily uses features derived from visual analysis (e.g., body exposure level), user behaviour (interaction history), textual content (hashtags, captions), and social network connections to predict and promote high-engagement content.
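The feature categories above can be sketched as a toy linear ranker. Everything here is an assumption for illustration: the field names, weights, and linear form are invented, and Instagram’s real system uses large-scale learned models, not hand-set scores.

```python
from dataclasses import dataclass

# Toy sketch of an engagement-based ranker over the feature categories
# described above. All names and weights are illustrative assumptions.

@dataclass
class PostFeatures:
    skin_exposure: float          # visual-analysis signal, 0.0-1.0
    past_engagement_rate: float   # creator's historical likes/views ratio
    trending_hashtags: int        # count of currently trending hashtags used
    follower_affinity: float      # how often this viewer interacts with the creator

def predicted_engagement(p: PostFeatures) -> float:
    """Linear stand-in for a learned engagement-prediction model."""
    return (0.35 * p.skin_exposure
            + 0.30 * p.past_engagement_rate
            + 0.05 * p.trending_hashtags
            + 0.30 * p.follower_affinity)

def rank_feed(posts: list[PostFeatures]) -> list[PostFeatures]:
    """Order candidate posts by predicted engagement, highest first."""
    return sorted(posts, key=predicted_engagement, reverse=True)
```

Because the score rewards whatever correlates with past engagement, any positive weight on a feature like `skin_exposure` is enough to reproduce the amplification pattern described in the experiments above, with no one ever explicitly instructing the system to prefer such content.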
5. Action
Based on detected patterns, Instagram’s machine learning algorithms actively influence what content users see through several actions:
• Recommendations: Content is recommended to users via the Explore page and Reels, often pushing content with visual elements (like revealing attire) that historically drive more engagement (Mosseri, 2021; Narayanan, 2023).
• Content Amplification and Suppression: Content meeting specific high-engagement criteria is amplified, reaching larger audiences beyond immediate followers. Conversely, less engaging or contextually modest content may be deprioritized or "shadow-banned," resulting in reduced visibility (Mosseri, 2021).
• Moderation Actions: Automated moderation decisions might remove or restrict content identified as violating community standards, sometimes inconsistently affecting posts featuring certain body types or groups (Mosseri, 2021).
These actions significantly shape user experiences, determining which types of content gain widespread visibility and influence user interactions and societal perceptions on the platform.
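The amplification and suppression actions above can be caricatured as a threshold rule on a predicted engagement score. The thresholds and audience multipliers below are invented for illustration; the real system’s distribution decisions are far more granular than this sketch.

```python
def distribution_decision(predicted_engagement: float,
                          follower_count: int,
                          amplify_threshold: float = 0.7,
                          suppress_threshold: float = 0.2) -> int:
    """Return an approximate audience size for a post.

    Illustrative rule: high-scoring posts are boosted beyond the follower
    base (Explore/Reels), low-scoring posts are shown to only a fraction
    of followers (the "shadow-ban" effect), and the rest reach followers
    normally. All thresholds and multipliers are made up for this sketch.
    """
    if predicted_engagement >= amplify_threshold:
        return follower_count * 10   # amplified to non-followers
    if predicted_engagement <= suppress_threshold:
        return follower_count // 10  # deprioritised visibility
    return follower_count            # normal follower reach
```

Even this crude rule shows how a fixed score gap between two otherwise identical posts translates into an order-of-magnitude difference in reach, the dynamic the creator experiments in Section 1 appear to capture.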
6. Evaluation
Instagram’s engagement-based algorithm benefits both users and businesses by enabling personalised content discovery, boosting user engagement, offering exposure opportunities for creators and brands, and managing information overload by prioritizing relevant posts (Mosseri, 2021; Brown, n.d.; Fouquaert & Mechant, 2022).
However, these benefits come with significant risks and costs. The engagement-focused approach inherently favours content that triggers a reaction, which can marginalise certain voices. Content that is modest or nuanced may be at an algorithmic disadvantage. In the context of gender, this bias often advantages conventionally attractive women who are willing to show skin, while disadvantaging those who post other types of content. The algorithm’s amplifying effect can thus reinforce societal biases, effectively objectifying women by disproportionately pushing content that highlights their bodies. All of this means the algorithm can unintentionally discriminate or create representational harm (Mauro & Schellmann, 2021).
Although the algorithm doesn’t force anyone to post revealing photos, it strongly incentivises those who do, and creators notice what works to get engagement. This can pressure them to share more sexualised images to gain likes and followers, effectively tying self-worth to sexual appeal. In evaluating Instagram’s AI, it is clear that the benefits of engagement and discovery are intertwined with serious risks. The algorithm succeeds in keeping users hooked and content flowing, which benefits advertisers and some users, but it also amplifies problematic content and increases social pressures. These unintended consequences raise the question of whether pure engagement maximisation is an appropriate goal for a system that mediates social content for hundreds of millions of people (Kayser-Bril, Richard, Duportail, & Schacht, n.d.).
7. Regulation
Given the influential role of Instagram’s algorithm, regulators and lawmakers are increasingly scrutinising such AI systems. In the EU, the Digital Services Act of 2022 directly addresses the responsibilities of platforms like Instagram. Under this act, Instagram must assess and mitigate systemic risks posed by its services, including risks to minors, public health, civic discourse, and gender-based harm. In response, Instagram has already introduced features such as the ability to switch to a chronological or favourites feed. Moreover, the act strengthens oversight: the EU can demand data access to scrutinise the algorithm, and independent audits of Instagram’s compliance will be required (Department of Enterprise, Trade and Employment, 2022).
The new European AI Act, still in the legislative process as of 2025, aims to regulate AI systems based on risk levels. While primarily focused on high-risk AI, there has been debate about whether social media recommender algorithms should be classified as high-risk given their impact on fundamental rights and well-being. The AI Act will at minimum enforce certain transparency requirements for AI systems that interact with people, so Instagram may have to clearly inform users that an AI is deciding what they see. If aspects of Instagram’s algorithm are deemed to influence people in harmful ways, Instagram would have to alter those systems or face prohibition in the EU. The AI Act will also likely require bias monitoring and mitigation for high-risk AI; if recommender systems fall under that category, Instagram would have to routinely test its algorithms for biases, such as gender or racial bias in moderation and recommendation, and correct them (Software Improvement Group, 2025).
In sum, regulation is catching up: the EU’s Digital Services Act directly compels companies to be more transparent, and the forthcoming AI Act will further shape how the recommendation system must behave. This pressure is crucial to drive changes that market incentives alone have not, aiming to ensure Instagram’s AI operates within societal norms and legal guardrails.
8. Recommendations
To address the risks posed by Instagram’s algorithm, a combination of technical and policy-based interventions is needed. First, Instagram should redefine its algorithmic goals to prioritise meaningful interactions and user well-being rather than purely maximising engagement, which often promotes provocative content. Increasing content diversity in users’ feeds and offering better user controls to limit unwanted content would reduce overexposure to any one type of content (Alptraum, 2019). Regular audits to detect and correct biases related to gender, race, or body type should be implemented to ensure fairer recommendations. Greater transparency is also vital: users and creators should understand why certain posts are promoted and be given more control over their content’s visibility. Enforcing existing community guidelines consistently and fairly can help set clear norms without reinforcing discriminatory standards (Srivastava, 2023). Additionally, Instagram should promote positive, inclusive content and enhance user awareness through digital literacy tools. Lastly, independent oversight and cooperation with researchers, especially under regulations like the EU’s Digital Services Act, will help improve accountability and ensure the platform operates responsibly.