Panicking Responsibly During Elections: Lessons from Katie Harbath


Fighting misinformation in online communities is a problem with no finish line.

It’s a never-ending game of cat and mouse: platforms are constantly taking action to protect their users, while bad actors find new ways to attack.

In an election year, the stakes are even higher. 

As more than two-thirds of the world’s democracies head to the polls in 2024, misinformation has the potential to do some real damage.

How can we survive an election year in this environment? 

Enter a new mantra: Panic responsibly. ✨

As part of our series on misinformation on the Click to Trust Podcast, Tom Siegel sat down with Katie Harbath, the Chief Global Affairs Officer at Duco, and an expert on misinformation and election interference. 

Katie shared her thoughts on the role that governments, tech companies, and individuals can play in fighting online misinformation during an election year. 

She also explained her “panic responsibly” mantra as a way to survive the election year, and perhaps the upcoming challenges that AI-generated content will bring to online communities.

In this post I’ll share some insights from their chat, which you can listen to in full here.

To start, we’ll look at how social media companies’ approaches to misinformation have evolved over the years.

Misinformation Lessons from Facebook’s Frontlines 

Katie Harbath has been on the frontline of tech’s war on misinformation from the start, advising political candidates on digital campaign tools before Facebook even existed. 

In 2011, she joined Facebook as the Director of Public Policy for Global Elections, guiding efforts against misinformation. 

Initially, she and her team focused on political ad transparency and combating "coordinated inauthentic behavior" from fake news creators. But the real threat of misinformation became clear in 2016.

Katie can pinpoint the date she felt something change: May 9, 2016.

On that day, the Philippines went to the polls. In the run-up to the election, journalist (and Nobel Peace Prize winner) Maria Ressa had tried to alert Facebook to some strange activity on her news feed.


Maria would later accuse the platform of being “biased against facts.”

On that same day, a contractor accused Facebook of suppressing conservative content in its Trending Topics section.

By December 2016, Facebook had begun collaborating with fact-checkers to create content-labeling tools, helping to assess and flag content that could dangerously misinform its users.

Today, Facebook, Instagram, and Threads partner with over 100 fact-checking organizations covering more than 60 languages (we spoke to Meta about this here).

💡 Nuance and adaptation are key: those who spread lies online quickly learned to include a few nuggets of truth in their stories; otherwise, fact-checkers will spot them immediately.

Platforms have also had to contend with growing political pressure – not just to handle misinformation and disinformation, but to strike the right balance in keeping users informed without compromising on freedom of speech.

Balancing Freedom of Speech and Online Safety

How can we mitigate harmful speech while also safeguarding free expression?

We have previously discussed this difficult trade-off as “the moderation paradox.”

Katie’s solution is to move beyond a binary approach to moderation. 

It’s not about “leaving it up or taking it down.” It’s more a question of considering alternative approaches to moderation, such as reducing a post’s reach.

Disinformation expert Renée DiResta described this strategy as “speech, not reach.” Fact-checkers might leave questionable content in place while ensuring that fewer people are able to see it.

Katie believes that both policymakers and social media companies are going to start moving away from deciding whether content is “OK” or “Not OK”. 

Instead, she believes there needs to be more transparency and clarity about how platforms establish and enforce their content policies.

Users also have the right to question any moderation decisions. 

If a fact-checker labels a piece of content and it gets taken down, or its reach is limited, users can launch an appeal. Meta’s Oversight Board, which Katie describes as the platform’s “Supreme Court,” will then consider the appeal and, if necessary, overturn the ruling.

But try as they might to curb misinformation, platforms can't stop crafty bad actors from finding ways to sneak through.

This is why Katie describes the war on misinformation as “a problem with no finish line,” where multiple stakeholders must stand their ground. 

Three Key Stakeholders in Combating Misinformation

People tend to look to tech companies to fight misinformation, and to lay all the blame on the platforms when things go wrong. Yet combating misinformation is a responsibility shared among a number of stakeholders:

1) Policymakers

Governments struggle to keep up with fast-moving tech, often relying on platforms to self-regulate. 

However, not all companies take this responsibility seriously. Regulations can level the playing field, ensuring all tech companies address their Trust and Safety commitments.

In recent years, we have seen numerous new regulations and bills come into force across the world, including the UK’s Online Safety Act and the EU’s Digital Services Act.

These new regulations are not without their critics, but they’re essentially measures to ensure that everybody plays by the same rules.

2) Academia

Rigorous academic studies can help broaden our understanding of what’s happening online. Researchers give us insights into the effects that online misinformation can have on public life, and into the free speech implications of Trust and Safety regulations.

However, while the tech world moves fast, the academic world moves slowly.

So, while academia can play an important role in placing these discussions in a wider context, its insights often arrive only in the long term.

The recent news of the potential dismantling of the Stanford Internet Observatory, amid political pressure and legal challenges, also highlights the vulnerabilities academic institutions face.

3) The Media

The media sets the narrative for the conversations we have about online safety. 

Katie is particularly concerned about how the media will approach the question of AI. There’s a tendency to ignore AI’s potential for good and instead focus entirely on its potential for spreading misinformation (something we’ve written about before).

Katie argues that the media needs to take care in separating the signal from the noise. 

Instead of speculating and fear-mongering, they need to focus on what is actually happening, and on the impact that developments might be having on the overall online and offline environment.

Panic Responsibly: Preparing for Upcoming Elections

“In this year of global elections, I’m worried, but I’m also a little optimistic. My mantra for this year is to ‘panic responsibly.’”

So what does it mean to “panic responsibly”?

It can mean trying to stay positive even when things get really chaotic. It can also mean taking personal responsibility for fact-checking, rather than relying solely on governments and tech platforms to hold the line against misinformation.

What should we be doing ourselves, as parents, caretakers, and individuals? 

Here are two things Katie suggests:

1) Do a personal information audit. 

Where are you getting your information from? It’s important to stay informed, and to get a variety of viewpoints from a variety of different sources. 

Don’t depend on one source for all your information. Instead of relying entirely on social media, or on TV news, also listen to podcasts and subscribe to some newsletters. 

Getting a global perspective when it comes to news (especially election-related news) is key. 

2) Don’t take things at face value. 

You might sometimes see things that make the hair on the back of your neck stand up. But instead of succumbing to panic, take a deep breath. Check the author’s byline and pay attention to the date the content was published.

Look around for some other sources of news that could back up the story, or refute it, or at least give it a bit more context. 

Remember that some things are designed to scare you, or to make you angry. But a bit of due diligence can go a long way.

If you’re interested in delving deeper into these topics, check out Katie’s weekly newsletter at anchorchange.substack.com.

Katie has also just kicked off season two of her podcast, Impossible Tradeoffs, which you can find at the same Substack link.

Stay informed, connected, and remember to panic responsibly. 

You can watch our full episode of Click to Trust with Katie Harbath here:

Meet the Author

Carmo Braga da Costa

Head of Content at TrustLab
