With online safety threats growing at an alarming pace, Trust & Safety teams have never been more important.
But the past few years have been rough on the industry.
Economic turmoil has resulted in mass layoffs, Trust & Safety is still mostly seen as a cost center, and governments worldwide are introducing stricter regulations that demand more robust Trust & Safety measures from online platforms.
Not only does this impact the capacity of teams to maintain online safety, but it also contributes to increased anxiety across the industry.
Despite these setbacks, Trust & Safety teams continue to innovate and adapt.
They’ve shown astounding commitment and resilience, and they’ve turned to advanced tech to help them more effectively detect and mitigate online threats.
In this post, we’ll look at some of the challenges and emerging threats T&S teams currently face, and how they’re dealing with them.
These insights come from our 2024 Online Safety Report, for which we spoke to hundreds of Trust & Safety professionals across various industries, including social media, dating, gaming, marketplaces, and more.
The report combines quantitative data and qualitative insights that can help Trust & Safety professionals navigate and address emerging threats and pressing issues in the Online Safety space.
You can browse the report for free here.
Who's Handling Online Safety?
Although we spoke to hundreds of Trust & Safety professionals, we saw a range of different job titles.
In some organizations, Online Safety is handled by dedicated Trust & Safety managers and other team leaders (such as Product or Operations Managers). T&S may also be handled by policy managers, compliance or customer experience leaders, product managers, or even CEOs and founders.
The TSPA actually has a really handy guide to understanding these roles in their T&S curriculum.
Although it’s fair to say that maturity level also plays a role, this shows that not all organizations appear to take online safety seriously enough to appoint dedicated teams, or even dedicated individuals.
Some organizations appoint the task to teams or individuals who are not specialized in content moderation. Others split the task across multiple teams.
Also, many respondents talked about a general lack of investment in T&S. There seems to be a tendency to treat T&S as a cure, rather than a preventative measure. Most T&S decisions tend to come from the top down, and leadership tends to be reactive, rather than proactive.
Rather than investing in tools preemptively, too many organizations only turn to T&S solutions when things are already on fire.
And, because online safety responsibilities are so fragmented, by the time a T&S specialist does come along, they’ll have a huge mess to clean up.
Emerging Threats in the Online Safety Landscape
We asked respondents to list what they see as their biggest emerging threats and challenges. Here’s what they had to tell us:
1. Cyber attacks on user data and privacy
25.5% of respondents identified this as a key emerging threat, a greater proportion than for any other threat.
The threat of cyber crime has been getting steadily worse for some years now. If cyber criminals can use public platforms for illegal ends, then there will obviously be a lot of pressure on those platforms to protect their users.
2. Spread of misinformation and disinformation campaigns
It comes as no surprise that so many respondents are concerned about misinformation and disinformation.
This is, after all, the year of elections. When 64 countries across the world are going to the polls in the same year, online disinformation campaigns have the potential to do some real damage.
3. Deepfakes and synthetic media manipulation
Generative AI models have the potential to do a lot of harm, yet the sheer scale of the challenge may have taken some by surprise.
The number of election-related deepfakes on X, for example, was at one point increasing by an average of 130% per month.
4. Regulatory and compliance challenges
Countries across the world have started imposing new regulations for online safety, which can create regulatory challenges for platforms.
Often, the responsibility for meeting these regulations will fall to T&S teams. Be sure to read our guide to getting compliant with the Digital Services Act if you’re facing this challenge yourself.
Top Challenges for Trust & Safety Teams
The professionals we spoke to talked about numerous challenges they’re facing, from changes in laws and regulations to concerns about media coverage and public scrutiny. Yet two broad problems seem to challenge T&S teams across the board.
Staying Ahead of Emerging Threats
63% of participants specified that “staying ahead of emerging threats” is one of the biggest challenges for their team. There will always be threats and abuse to deal with, and there will always be bad actors who intentionally seek out sophisticated means to get around content moderation policies. And this is all to say nothing of the considerable risks posed by AI.
The problem is that staying ahead of emerging threats is often done completely manually, by looking for unusual activity on the platform or performing manual content checks. Some T&S teams rely on feedback and complaints from users to identify potential harm. Others look for external insights from media and industry reports.
In any case, without a systematic and proactive approach to threat detection, T&S teams risk being one step behind anyone who would wilfully spread harmful content on their platforms.
Relying on user feedback is a reactive strategy that would not scale with the platform’s growth. Manual monitoring may be more proactive, but teams may easily miss certain forms of harm or misinformation. And though external insights can be helpful, they will only ever provide a broad perspective that will not account for platform-specific nuances.
“If there was a tool, and if you’re aware of any, to help feed us those emerging threats, I’d be very interested in that.” – Head of Product, Gaming Platform
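A more systematic approach doesn't have to be complicated. As a purely illustrative sketch (not a description of any particular product, and with made-up numbers), a team could move beyond manual checks by automatically flagging days where the volume of user reports spikes well above its recent baseline:

```python
from statistics import mean, stdev

def flag_spikes(daily_reports, window=7, threshold=3.0):
    """Flag days where report volume exceeds the rolling mean
    of the previous `window` days by more than `threshold`
    standard deviations."""
    flagged = []
    for i in range(window, len(daily_reports)):
        baseline = daily_reports[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (daily_reports[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A steady baseline of ~100 reports/day, then a sudden surge.
volume = [100, 102, 98, 101, 99, 103, 97, 100, 350]
print(flag_spikes(volume))  # [8] – the surge on day 8 is flagged
```

Even a simple signal like this turns user feedback from a purely reactive channel into an early-warning system, prompting investigation before a new abuse pattern becomes entrenched.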
Scaling Operations With User Growth
46% of participants identified major challenges associated with scaling. The more users a platform gets, the more content moderation there needs to be, which makes online safety increasingly complex, time-consuming, and expensive.
To deal with this issue, some T&S teams outsource moderation tasks to third-party companies. Yet outsourcing often creates more problems than it solves – it’s costly, and third-party companies can be difficult to manage, particularly if they’re based overseas.
Other organizations turn to automation solutions to manage the increased complexity that comes from scaling. But automation, too, is not without its issues. Some teams struggle with the complexity of these tools. Also, off-the-shelf automatic classifiers can return a lot of false positives. One respondent told us that the solution they were using repeatedly labeled images of bananas as explicit.
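The banana problem is really an arithmetic problem: when almost all content is benign, even a small false-positive rate swamps the review queue. A back-of-the-envelope sketch, using entirely hypothetical numbers, makes this concrete:

```python
def daily_review_queue(items_per_day, prevalence, recall, false_positive_rate):
    """Estimate how many items an automated classifier sends to
    human review each day: true detections plus false positives."""
    harmful = items_per_day * prevalence
    benign = items_per_day - harmful
    true_positives = harmful * recall
    false_positives = benign * false_positive_rate
    return true_positives, false_positives

# Hypothetical platform: 1M items/day, of which 0.1% are actually
# harmful, with a classifier at 95% recall and a 1% false-positive rate.
tp, fp = daily_review_queue(1_000_000, 0.001, 0.95, 0.01)
print(tp, fp)  # 950 true detections vs. 9,990 false positives
```

Under these assumptions, false positives outnumber real detections roughly ten to one, which is why off-the-shelf classifiers that look accurate on paper can still bury a moderation team at scale.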
The Need for Proactive Approaches in Trust & Safety
We asked our respondents what they’d do if they could wave a magic wand to immediately fix all of their problems. What would they wish for?
A major trend here was a wish for more strategic leadership. With a dedicated Head of Safety, organizations could implement more proactive and effective online safety policies.
The way many organizations currently handle T&S simply isn’t sustainable. With so many emerging threats, and with an increasingly demanding regulatory environment, platforms cannot afford to take a reactive approach to T&S anymore.
Rather than responding to threats when they happen, leadership needs to proactively strategise and invest in T&S to make it less likely that these threats will happen in the first place.
Sure, there will always be bad actors out there, and they will always look for new ways to exploit, abuse, or mislead others online. Yet with a proactive approach to T&S, platforms will at least have the policies and procedures in place to manage any new threats as soon as they emerge.
T&S teams are on the frontline of an ongoing battle for online safety. With the right tools, resources, investment, and strategy, we can help ensure that they’re not fighting a losing battle.
We’re working on a tool that could help with a lot of these challenges: ModerateAI.
Learn more about the smartest way to moderate your content here!