Three Eye-Opening Lessons from 100+ Trust & Safety Professionals


While you were scrolling, Trust & Safety teams were analyzing millions of user interactions and neutralizing threats before they could cause real harm.

It is estimated that up to 90% of online content may be synthetically generated by 2026, and although AI can help us in the battle against online harm, it also poses a considerable new threat.

Trust & Safety teams are at the forefront of this battle, and at TrustLab we've witnessed firsthand the exponential growth of these threats.

To truly understand the challenges facing online safety today, we decided to go straight to the source: Trust & Safety professionals themselves. 

We’re excited to finally share what we've learned from those conversations in our 2024 Online Safety Report.

This study draws from hundreds of responses and in-depth interviews with over 100 Trust & Safety professionals across diverse sectors. It shines a light on the critical issues facing Trust & Safety teams today and explores new ways in which these professionals can combat online harm. 

Today we’ll share a few of those key insights, from the difficulty these teams face in identifying emerging threats to the lack of resources and the reluctance to invest in Trust & Safety.

You can check out the full report here - it’s totally free, no email required. 

1. Trust & Safety is fragmented and resources are scarce

Trust & Safety is the invisible backbone of our digital world, yet it remains largely misunderstood. 

This lack of awareness is particularly pronounced among executive leadership in companies where Online Safety increasingly plays a crucial role. 

Despite how important these teams are in maintaining safe and trustworthy online environments, many decision-makers underestimate the scope and complexity of Trust & Safety operations. 

This leads to a disconnect between who handles these responsibilities and who makes investment decisions.

If we put the results for “Who Handles Online Safety” and “Who Makes Trust & Safety Decisions” side-by-side, the discrepancy is clear:

[Chart: “Who Handles Online Safety” vs. “Who Makes Trust & Safety Decisions” survey results, from the full 2024 Online Safety Report]

This shows that while Trust & Safety teams handle content moderation and policy enforcement, the financial and strategic decisions are often made by executives who might not fully understand the challenges involved. 

This disconnect is summed up well by one quote from the report:

“Leadership doesn’t see bad content on the platform, so they don’t think that they need to invest in T&S.” - Head of Trust & Safety, Social Platform 

Trust & Safety teams are on the frontlines, tackling harmful content and behaviors daily. However, these teams often don’t have the authority to influence high-level investment decisions. 

This mismatch leads to underfunded Trust & Safety efforts, making it hard for these teams to do their jobs effectively.

When we asked Trust & Safety professionals about the biggest challenges they currently face, “securing sufficient resources and budget” came in second, behind only “adapting to new and evolving types of online threats.”

2. It’s increasingly hard to stay ahead of emerging threats

Our survey revealed a stark reality: 63% of Trust & Safety professionals identify "staying ahead of emerging threats" as one of their most significant hurdles. 

This challenge is exacerbated by the absence of systematic, proactive threat detection approaches.

Currently, teams rely on a patchwork of methods to track emerging threats, such as:

💬 User Feedback: While valuable, this approach is inherently reactive, leaving teams one step behind.

🔬 Manual Monitoring: Teams employ anomaly detection and regular discussions, but these methods are time-consuming and prone to oversight (a simple sketch of the anomaly-detection idea appears after this list).

🗞 External Insights: Information from media, industry reports, and expert collaborations provides a broad perspective but may miss platform-specific nuances.
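To make the manual monitoring approach a bit more concrete, here is a minimal, hypothetical sketch of the kind of anomaly detection a team might run over its own report queues: flag any day where the volume of user reports in a category jumps well above its recent baseline. The category name, window size, and threshold below are illustrative assumptions, not details taken from the report.

```python
# Hypothetical sketch: flag spikes in daily user-report volume for one
# abuse category using a rolling mean and standard deviation (z-score).
# Category, window, and threshold are illustrative, not from the report.
from statistics import mean, stdev

def flag_spikes(daily_counts, window=7, z_threshold=3.0):
    """Return indices of days whose report count deviates sharply
    from the trailing `window`-day baseline."""
    flagged = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline; skip to avoid division by zero
        z = (daily_counts[i] - mu) / sigma
        if z >= z_threshold:
            flagged.append(i)
    return flagged

# Example: a sudden jump in "impersonation" reports on the last day
impersonation_reports = [12, 15, 11, 14, 13, 12, 16, 15, 14, 60]
print(flag_spikes(impersonation_reports))  # -> [9]
```

Even a simple heuristic like this is still reactive, since it only fires once user reports start arriving, which is exactly the gap survey respondents described.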

As one Head of Product from a Gaming Platform put it: 

"If there was a tool, and if you're aware of any, to help feed us those emerging threats, I'd be very interested in that."

This sentiment underscores a critical gap in the Trust & Safety toolkit. 

Despite being on the frontlines of digital safety, these teams often lack the sophisticated, proactive tools needed to identify and address new types of online harm before they become widespread issues.

It's yet another reminder that while Trust & Safety is key to the safety of users and the success of online businesses, many of these teams are running on fumes.

👉 This is why we built ModerateAI - a smarter content moderation solution that combines expert human judgment with the efficiency of powerful automation. 

3. Identifying bad actors is harder than identifying bad content, but no less important

Our research revealed an important distinction in Trust & Safety efforts: content-level signals versus actor-level signals.

📜 Content-level signals focus on individual pieces of content, often using automated systems to review text, images, or videos for compliance with platform policies. While essential for day-to-day moderation, this approach doesn't really address recurrent problematic behavior by users.

🧟 Actor-level signals, on the other hand, focus on user behavior patterns and history. This approach is crucial for identifying persistent bad actors and taking preemptive action against ongoing risks. 

Many survey participants expressed a strong desire for actor-level signals that work across platforms, allowing for early detection of potential threats.

The challenge lies in the complexity of implementing effective actor-level signal detection. It requires more sophisticated analysis and often faces greater technical and privacy-related hurdles than content-level moderation.
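To illustrate the distinction, here is a small, hypothetical Python sketch contrasting the two views of the same moderation events: a content-level pass that acts on each violating item in isolation, and an actor-level pass that aggregates a user's history into a simple risk score. The event fields, weights, and threshold are illustrative assumptions rather than anything prescribed in the report.

```python
# Hypothetical sketch: the same moderation events scored two ways.
# A content-level check looks at each item in isolation; an actor-level
# signal aggregates a user's history to surface repeat offenders.
# Field names, weights, and the threshold are illustrative assumptions.
from collections import defaultdict

# Each event: (user_id, content_id, violation_type or None)
events = [
    ("u1", "c1", "spam"),
    ("u2", "c2", None),
    ("u1", "c3", "spam"),
    ("u1", "c4", "harassment"),
    ("u3", "c5", "spam"),
]

# Content-level: act on each violating item individually.
content_actions = [(c, v) for _, c, v in events if v is not None]

# Actor-level: accumulate a weighted history per user and flag
# accounts whose pattern of behavior crosses a risk threshold.
weights = {"spam": 1, "harassment": 3}
risk = defaultdict(int)
for user, _, violation in events:
    if violation is not None:
        risk[user] += weights.get(violation, 1)

flagged_actors = [u for u, score in risk.items() if score >= 4]

print(content_actions)  # per-item takedowns: c1, c3, c4, c5
print(flagged_actors)   # repeat offender surfaced: ['u1'] (score 5)
```

Real actor-level systems are far more involved, drawing on behavioral features, cross-platform identity, and privacy safeguards, but the core idea is the same: the signal lives in the pattern of a user's activity, not in any single piece of content.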

The Path Forward for Trust & Safety

As we look to the future of online safety, several themes emerge:

  1. The need for increased investment: Trust & Safety teams require adequate resources and executive support to effectively combat evolving threats.
  2. Proactive threat detection: Developing tools and methodologies for identifying emerging threats before they become widespread is crucial.
  3. Balancing content and actor-level approaches: While content moderation remains important, there's a growing need to focus on actor-level signals to preemptively address persistent harm.
  4. Improved metrics and tools: Trust & Safety teams need better ways to measure their effectiveness and more flexible, customizable tools to meet their specific needs.
  5. Cross-platform collaboration: Many professionals called for increased cooperation between platforms to identify and address potential bad actors before they can spread harm across multiple services.

As online interactions continue to grow in complexity and scale, the role of Trust & Safety will only become more critical. 

By addressing these challenges head-on and fostering innovation in the field, we can work towards creating safer, more trustworthy online environments for all users.

The full 2024 Online Safety Report offers deeper insights into these issues and more. You can check it out here!  

We encourage Trust & Safety professionals, platform leaders, and anyone interested in online safety to explore our findings to better understand and address the challenges facing our digital world.

If you have any questions about the report or would like to get in touch with us, we’re happy to help.

Meet the Author

Carmo Braga da Costa

Head of Content at TrustLab
