Sextortion is a form of sexual exploitation in which perpetrators coerce individuals, often children, into sharing explicit images and then threaten to distribute those images unless demands, such as sexual favours, money, or further abuse, are met.
For content moderation teams, tackling sextortion presents complex challenges: perpetrators exploit private or encrypted messages, use fake identities, and engage in subtle grooming tactics that evade detection by moderation tools. At the same time, the problem offers opportunities to develop advanced detection technologies, improve user education, and strengthen online safety policies to protect users.
The Scale of Sextortion
Sextortion cases have surged globally in recent years. The National Center for Missing and Exploited Children (NCMEC) highlights a 300% rise between 2021 and 2023 in reports of sextortion targeting children, typically boys, often resulting in severe psychological harm and, in some cases, loss of life by suicide. Although one might assume that girls would be the primary targets of sextortion, perpetrators are more likely to exploit boys because of societal norms and stigma around masculinity, boys' reluctance to report abuse out of shame or fear of judgment, and their lack of awareness about online grooming and sextortion, all of which make them more vulnerable targets.
Perpetrators of sextortion operate online and across jurisdictions, making it difficult to track and prosecute offenders. Advancements in technology, such as deepfake tools and encrypted communication platforms, have also escalated the scale and severity of sextortion.
Challenges in Content Moderation
Sextortion often occurs in private, encrypted conversations, limiting platforms' ability to detect abusive behavior proactively. While AI tools can flag grooming patterns and Child Sexual Abuse Material (CSAM) at scale, they struggle with the subtle, context-dependent emotional manipulation that perpetrators use, such as ambiguous language and relational dynamics. A subtle threat might be a perpetrator saying, "I thought you trusted me enough to share something personal," applying emotional pressure without making an explicit demand. Human oversight therefore remains essential: reviewers are better equipped to interpret nuanced behavior, adapt to evolving tactics, and ensure ethical, privacy-conscious responses.
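To make this division of labour concrete, here is a minimal sketch of such a hybrid pipeline: an automated scorer flags candidate messages, and anything ambiguous is routed to a human reviewer rather than actioned automatically. The cue phrases, thresholds, and scoring below are hypothetical placeholders, not any platform's real model.

```python
# Minimal sketch of a hybrid AI-plus-human triage flow: an automated
# scorer flags candidate messages, and anything ambiguous goes to a
# human reviewer instead of being actioned automatically.
# The cue list, thresholds, and scoring are hypothetical placeholders.

from dataclasses import dataclass

# Hypothetical cues loosely associated with coercive or grooming language.
COERCIVE_CUES = (
    "i thought you trusted me",
    "send more or",
    "i will share your photos",
)

@dataclass
class Verdict:
    score: float   # 0.0 (benign) .. 1.0 (clear threat)
    route: str     # "ignore", "human_review", or "escalate"

def score_message(text: str) -> float:
    """Toy scorer: fraction of hypothetical cues present in the text.
    A real system would use a trained classifier, not substring checks."""
    text = text.lower()
    hits = sum(cue in text for cue in COERCIVE_CUES)
    return hits / len(COERCIVE_CUES)

def triage(text: str) -> Verdict:
    score = score_message(text)
    if score >= 0.66:   # explicit threat language: escalate immediately
        return Verdict(score, "escalate")
    if score > 0.0:     # ambiguous, context-dependent: human review
        return Verdict(score, "human_review")
    return Verdict(score, "ignore")

print(triage("I thought you trusted me enough to share something personal"))
# -> ambiguous emotional pressure lands in the human-review queue
```

The design point is the middle band: automation handles the clear-cut extremes at scale, while the ambiguous cases that AI handles poorly are deliberately deferred to people.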
The enforcement of sextortion policies must also balance user privacy with safety: overly intrusive monitoring risks violating privacy rights, while insufficient oversight leaves users vulnerable to abuse and exploitation. The high volume of reports also creates resource challenges for moderation teams, who must quickly triage cases and engage with law enforcement and NCMEC. While platforms do not disclose sextortion report volumes, in 2024 Instagram investigated and removed approximately 63,000 accounts in Nigeria engaged in sextortion, offering some insight into the scale of the issue. Lastly, responding to perpetrators' evolving tactics remains a persistent challenge. For example, perpetrators often use disappearing messages, encrypted platforms, or AI-generated content to evade detection and complicate efforts to trace their activities, requiring moderation teams to adopt more sophisticated detection tools and investigatory strategies.
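Triage at this volume is typically severity-ordered. As a minimal sketch, assuming a hypothetical severity score per report, a priority queue can ensure the most urgent cases (for example, explicit threats against minors) are reviewed first; the field names and weights below are illustrative.

```python
# Minimal sketch of severity-based report triage. heapq is a min-heap,
# so severity is negated to pop the most urgent report first.
# The reason codes and weights are hypothetical, not any platform's policy.

import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    priority: float                      # negated severity
    report_id: str = field(compare=False)
    reason: str = field(compare=False)

def severity(reason: str, victim_is_minor: bool) -> float:
    """Hypothetical scoring: minors and explicit threats jump the queue."""
    base = {"threat_to_share": 0.8, "unwanted_contact": 0.3}.get(reason, 0.1)
    return min(1.0, base + (0.2 if victim_is_minor else 0.0))

queue: list[Report] = []
heapq.heappush(queue, Report(-severity("unwanted_contact", False), "r1", "unwanted_contact"))
heapq.heappush(queue, Report(-severity("threat_to_share", True), "r2", "threat_to_share"))

print(heapq.heappop(queue).report_id)  # r2: a minor facing an explicit threat is handled first
```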
Case Studies: Snapchat’s and Instagram’s Strategies to Combat Sextortion
Research conducted by Snapchat revealed that in 2023, 65% of Generation Z users across platforms reported that they or someone they knew had been targeted by online scams involving stolen explicit images or private information. These materials were often used to blackmail victims into providing money, gift cards, or additional sensitive content.
To address this, Snapchat has implemented several new measures, including advanced signal-based detection to identify and remove bad actors preemptively, using NCMEC’s Take It Down database to detect and remove duplicate explicit content, and creating a tailored reporting option for sextortion that educates users and encourages reporting. Snapchat also reports bad actors to NCMEC, proactively refers cases with actionable evidence to law enforcement, and has expanded its in-app Safety Snapshot series to address risks like sextortion, grooming, and trafficking, which is accessible to users and parents via the Family Center parental supervision tool.
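The Take It Down mechanism is hash-based: the image itself never leaves the victim's device, only a hash value, which participating platforms then match against uploaded media. The sketch below illustrates the idea with SHA-256, which only catches exact byte-for-byte duplicates; production systems typically rely on perceptual hashes (such as PDQ or PhotoDNA) to match re-encoded or slightly altered copies. The function names and hash list here are illustrative.

```python
# Minimal sketch of hash-list matching in the spirit of Take It Down:
# the platform receives only a hash from the reporter, never the image,
# and compares hashes of uploaded media against that list.

import hashlib

# Hypothetical hash list distributed to the platform (hex digests).
takedown_hashes: set[str] = set()

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_reported_image(image_bytes: bytes) -> None:
    """Reporter-side step: only the hash leaves the device."""
    takedown_hashes.add(sha256_of(image_bytes))

def should_block_upload(image_bytes: bytes) -> bool:
    """Platform-side step: block re-uploads whose hash is on the list."""
    return sha256_of(image_bytes) in takedown_hashes

register_reported_image(b"victim's image bytes")
print(should_block_upload(b"victim's image bytes"))   # True: exact duplicate
print(should_block_upload(b"same image, re-encoded"))  # False: SHA-256 misses near-duplicates
```

The last line is the key limitation: an exact hash misses altered copies, which is why perceptual hashing matters in real deployments.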

Instagram has also implemented and shared several measures to disrupt sextortion, including testing a feature that blurs images containing nudity in direct messages to prevent exposure to unwanted content, providing a dedicated reporting option for threats to share private images, and raising awareness among teens and parents about sextortion scams. The platform also participates in collaborative efforts like the Lantern program to share information about sextortion accounts with other tech companies and supports NCMEC’s "Take It Down" initiative to help individuals regain control over their intimate images and prevent further distribution online.
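The blurring test works by classifying incoming DM media and showing only a blurred preview until the recipient opts in to view it. As a rough sketch, with a stub standing in for Instagram's on-device classifier, the blurring step itself can be expressed with Pillow; the function names are illustrative.

```python
# Minimal sketch of blur-before-view for flagged DM media. The classifier
# is a stub standing in for an on-device ML model; only the Pillow blur
# step is real. Function names are hypothetical.

from PIL import Image, ImageFilter

def contains_nudity(image: Image.Image) -> bool:
    """Stub: a real feature would call an on-device classifier here."""
    return True  # assume flagged, for demonstration

def prepare_dm_preview(path: str) -> Image.Image:
    """Return a heavily blurred preview if the image is flagged,
    so the recipient must opt in before seeing the original."""
    img = Image.open(path)
    if contains_nudity(img):
        return img.filter(ImageFilter.GaussianBlur(radius=24))
    return img

# preview = prepare_dm_preview("incoming_dm.jpg")
# preview.show()  # recipient sees only the blurred version by default
```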

Collective Action Against Sextortion
The fight against sextortion requires ongoing innovation in content moderation, stronger partnerships between tech platforms, law enforcement, and non-profit organisations, and a commitment to user education.
As more effective tools and policies are developed, it is crucial to empower users, especially children and their families, by providing resources, education, and support to help them recognise and report potential threats. Encouraging open conversations about online safety can help break the silence around sextortion and ensure that victims feel supported.
If you or someone you know is experiencing sextortion, report it to the platform and seek support from NCMEC.