Content Moderation & Online Regulations in the Age of AI

When it comes to content moderation, AI is an anti-hero. 

A double-edged sword that harms and protects, as we watch in anticipation to see which way it swings. 

It's shaping how we create and share content online, and, as a consequence, it's shaping online regulations.

In 2023, we witnessed the European Union introduce the EU AI Act, making Europe the first jurisdiction to propose comprehensive rules for AI development and use - a huge step in the global trend toward a more regulated online experience.

But AI has also introduced new dimensions to the world of content moderation, playing a critical role in both creating and fighting harmful content.

Just last month, Adobe's Terms of Use update sparked backlash over fears of user content being used to train AI, highlighting the need for transparent communication in AI and content moderation.

As part of our series on the future of online safety on the Click to Trust podcast, I had the privilege of speaking with Scot Pansing.

Scot's career spans interactive production, advertising operations, and policy communications at companies like Capitol Records, The Walt Disney Company, and Google. 

Recently, he’s been involved with AILA, a nonprofit in Los Angeles that aims to make the region a hub for responsible AI research and application.

Scot shared his insights on the delicate balance between innovation and regulation in AI and the intricate challenges of content moderation. 

Today, I’ll share some highlights from our conversation, which you can listen to in full here.

AI & Online Regulations

When I asked Scot Pansing about his stance on online regulations, he was unequivocal: 

"I feel strongly that, like several other sectors, the online sector should have some sensible regulation," he said. "Europe, in general, has a tradition of leading in privacy and consumer rights."

Scot highlighted the role of the Digital Services Act (DSA) in Europe and how it sets a precedent for online safety and consumer protection. He also mentioned the recent adoption of the EU AI Act, which will regulate AI technology, with full implementation expected by 2027.

The AI Act builds on the Digital Services Act, focusing on protecting fundamental rights and privacy and combating surveillance.

But it also introduces an interesting risk-based approach, where AI models are subject to different levels of regulation based on their potential risks. 

This ensures that high-risk applications undergo stricter scrutiny, providing a balanced framework that promotes innovation while safeguarding user safety.

The EU AI Act’s four risk categories: 

1. Unacceptable Risk: AI systems that are considered a clear threat to the safety, livelihoods, and rights of people are banned.

👉 For example, AI systems or applications that manipulate human behavior to exploit vulnerabilities of a specific group.

2. High Risk: AI systems that significantly affect people's lives and are thus subject to strict regulations. These systems must comply with stringent requirements related to data quality, documentation and traceability, transparency, human oversight, and robustness and accuracy.

👉 For example, AI used in employment, worker management, and access to self-employment.

3. Limited Risk: AI systems with specific transparency obligations. Users should be aware that they are interacting with an AI system. 

👉 For example, AI systems that interact with humans (e.g., chatbots). 

4. Minimal or No Risk: AI systems that pose minimal or no risk to citizens' rights or safety. These systems are not subject to specific requirements under the AI Act.

👉 For example, AI used in spam filters.
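
To make the tiered structure easier to see at a glance, here's a small, purely illustrative Python sketch. The tier names follow the list above, but the example systems and the one-line obligations are my own simplification, not the Act's legal text:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical one-line summary of what each tier triggers.
# The real Act spells these out in far more detail.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited -- the system may not be placed on the market.",
    RiskTier.HIGH: "Strict requirements: data quality, documentation, transparency, human oversight.",
    RiskTier.LIMITED: "Transparency duty: users must know they are interacting with AI.",
    RiskTier.MINIMAL: "No specific obligations under the Act.",
}

def describe(system_name: str, tier: RiskTier) -> str:
    """Return a one-line summary of what the Act asks of a system in a given tier."""
    return f"{system_name}: {tier.value} risk -> {OBLIGATIONS[tier]}"

if __name__ == "__main__":
    print(describe("CV-screening tool", RiskTier.HIGH))          # employment falls under high risk
    print(describe("Customer-support chatbot", RiskTier.LIMITED))
    print(describe("Spam filter", RiskTier.MINIMAL))
```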

Now, let's take a look at the two-sided nature of AI in content moderation: it contributes to both the creation and the mitigation of harmful content.

The Dual Nature of AI in Content Moderation

When discussing the role of AI in content moderation, Scot highlighted both its benefits and challenges:

"AI really helps with moving that along faster, it's incredible, but there's also the dark side." 

While AI technologies significantly enhance the efficiency of moderating content by helping to quickly identify and remove harmful content, they also have the potential to produce harmful content just as swiftly. 

The rapid production and spread of AI-generated misinformation, deepfakes, and other malicious content can overwhelm existing moderation systems.
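
To illustrate what AI-assisted moderation can look like in practice, here's a minimal, hypothetical sketch of a triage step. The score_harm() function and the thresholds are invented stand-ins for a real trained classifier; the point is the routing logic, which keeps a human in the loop for borderline cases:

```python
AUTO_REMOVE_THRESHOLD = 0.9   # confident enough to act automatically
HUMAN_REVIEW_THRESHOLD = 0.5  # uncertain cases go to Trust & Safety reviewers

def score_harm(text: str) -> float:
    """Placeholder for a trained classifier that returns a harm probability."""
    flagged_terms = {"scam", "deepfake", "threat"}
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, hits / len(flagged_terms) + 0.3 * hits)

def triage(text: str) -> str:
    """Route a piece of content based on the model's confidence."""
    score = score_harm(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "auto-remove"
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human review"   # a person decides the borderline cases
    return "publish"

if __name__ == "__main__":
    for post in ["Check out this deepfake scam!", "Lovely weather today"]:
        print(post, "->", triage(post))
```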

This means two things for the near future: 

  1. AI technology is imperative in combating misinformation 
  2. Trust & Safety teams will have to adapt 

Let’s dig into that second point. 

The Impact of AI on Content Moderation Teams

It's no secret that Trust & Safety teams play a crucial role in protecting users. 

These teams are responsible for monitoring and removing harmful content, enforcing community guidelines, preventing fraud and abuse, and ensuring user data privacy. They work tirelessly to identify and mitigate potential risks and develop policies that foster a safe and respectful online environment. 

However, Scot noted the unfortunate trend of reducing these teams in the tech industry. 

"Trust & Safety is seen as cost centers that can be cut. But it’s essential for providing a positive user experience and keepingng users safe."

This troubling trend reflects how financial considerations can overshadow the need for robust content moderation. Leadership needs to understand that a safer online experience can attract more users and advertisers, ultimately benefiting the platform’s growth. 

"Users don’t want to be on platforms that expose them to harmful content. A safer platform is a better platform."

The Spectrum of AI Perspectives: From "Doomers" to "Accelerationists"

In our conversation, we touched on two different views on how AI affects society:

  1. Those who fear AI will cause disaster (AI-doomers) 
  2. Those who want to speed up AI development (Accelerationists)

"AI-doomers," sometimes associated with the Effective Altruism movement, worry about the potential existential risks posed by artificial intelligence. They believe that AI could eventually surpass human control, leading to catastrophic consequences for humanity.

On the other end of the spectrum are the "accelerationists" or "Effective Accelerationists." This group, exemplified by figures like venture capitalist and former software engineer Marc Andreessen, advocates for minimal regulation and oversight of AI development. 

Scot referenced a manifesto by Marc Andreessen that essentially argues for "Techno-Optimism."

Andreessen advocates for rapid technological advancement and free markets as the key to human progress, while criticizing regulation, "trust and safety" measures, and other constraints as detrimental to innovation and societal growth.

Scot, however, advocates for a more balanced approach:

"I'm excited about artificial intelligence, generative AI, machine learning. There are all kinds of awesome things, especially in science, that we haven't gotten to that I think could really benefit humanity. At the same time, I think there are a lot of concerns."

His primary concern isn't about AI itself becoming a threat, but rather about the potential for bad actors to misuse the technology.

This nuanced perspective aligns with the broader theme of our discussion: the need to balance innovation with responsible development and use of AI technologies. 

Balancing Innovation and Safety

Scot acknowledged the immense potential of AI in various fields, including disease research and material discovery. However, it’s important to put guardrails in place to ensure responsible use of AI. 

He cautioned against overly restrictive regulations that could stifle innovation. 

"There’s a tendency to over-purify these products. In regulating AI technologies, need to find a balance that allows for artistic freedom while ensuring user safety." 

But he also highlighted the need for guardrails in AI: 

"We want to make sure AI is used for good while preventing harmful applications. It's a delicate balance."

In the final moments of this episode, I turned to Tom Siegel to go over some of my conversation with Scot. 

Tom agreed with Scot's main point: finding the balance between user safety and innovation. 

He believes that online safety will require a combination of proactive design, industry collaboration, and third-party solutions. 

"We need to think strategically about safety by design, share best practices, and leverage external expertise to create a safer online environment."

He emphasized the importance of transparency and accountability in measuring and addressing online harm. 

"We can't manage what we don't measure. We need to understand the extent of online harm and work together to mitigate it." 

As we navigate the evolving landscape of online regulations and AI, Scot’s insights remind us of the importance of a balanced approach that fosters innovation while protecting consumers. 

By advocating for sensible regulations, responsible AI use, and robust trust and safety measures, we can create a safer and more trustworthy online experience.

As AI continues to blend into our daily work, we will see how these issues shake out, and which way the sword swings. 

My prediction? 

AI will probably be TIME’s person of the year. 

Check back in December 2024. 🔮 

Stay tuned for more write-ups of our conversations on Click to Trust, and don't forget to subscribe so you don't miss an episode!

Also, check out Scot's podcast, "AI Quick Bits," for all things AI, safety, policy, ethics, and more!

You can also watch Scot's episode on Click to Trust here: 

Meet the Author

Carmo Braga da Costa

Head of Content at TrustLab
