Content moderation in Trump's America is a political minefield

In the battle between free speech and content moderation, who sets the rules?
By Christianna Silva
How is Trump going to affect content moderation? Credit: Stacey Zhu; travellinglight / iStock / Getty Images Plus / via Getty Images

Social media platforms have historically run their content moderation much like a parent running a house full of teenagers: If you live under my roof, you follow my rules. But as social media has become increasingly ubiquitous in our offline lives — and more inherently political — the questions have become: Who really owns the roof, who makes those rules, and are our civil liberties at stake?

Under President-elect Donald Trump's administration, this debate will likely intensify until the politicization of content moderation reaches a fever pitch.

How did we get here?

The evolution of content moderation began slowly, gaining speed as social media's influence grew. It became increasingly clear that something had to be done when Facebook, Twitter, and YouTube played key roles in the Arab Spring, the wave of protests against government corruption that swept the Arab world in the early 2010s. Facebook was used as a tool for activists to organize, but it quickly became controversial. YouTube grappled with whether to allow violent videos that served educational or documentary purposes after activists in Egypt and Libya used the platform to expose police torture. Around the same time, Twitter rolled out its "country withheld tweet" policy, which let it block individual tweets in specific countries in response to local legal demands while keeping them visible everywhere else.

In 2013, leaked documents from Facebook's moderation offices showed what content, exactly, Facebook was moderating. The following year, the issue of online radicalization emerged across social media platforms, and YouTube reversed its policy on allowing certain violent videos after one showing journalist James Foley's beheading went viral. Twitter, for its part, faced backlash over the unchecked harassment that followed the release of the women-led Ghostbusters film, which led to a change in its content moderation policies.

Behind the scenes, the people who moderated the content reported horrible working conditions. And then came 2016.

Misinformation and disinformation plagued the U.S. presidential election between Hillary Clinton and Trump. Despite Facebook launching a fact-checking program, platforms struggled to stop the spread of misinformation and election interference. In Myanmar, the Rohingya people faced ethnic violence fueled in part by content on Facebook. Meanwhile, Facebook Live became a place to broadcast suicides and shootings, including the aftermath of the police shooting of Philando Castile. In 2018, TikTok, which had launched in China as Douyin two years earlier, began expanding internationally, and that same year, Twitter removed some 70 million suspect accounts to curb the influence of political misinformation. Later that year, YouTube released its first transparency report, and Facebook announced its Oversight Board, which would allow users to appeal its decisions. In 2019, the Christchurch terrorist attack, which was broadcast on Facebook Live, led to the Christchurch Call to Action to Eliminate Terrorist and Violent Extremist Content Online, a group of nations "working together under the rubric of the Call to prevent terrorists and violent extremists from exploiting the Internet." Twitter also began allowing its users to appeal content removals that year.

All the while, Trump was president. He signed an executive order on Preventing Online Censorship, which targeted Section 230 of the Communications Decency Act and aimed to curb what he saw as biases against himself and other conservatives in how platforms moderate content. This came after many of Trump's tweets were flagged by Twitter for misleading information. He and others in his party accused platforms like Twitter, Facebook, and Google of anti-conservative bias, which led to Congressional hearings and investigations into moderated content — a kind of impact that Katie Harbath, founder and CEO of tech policy firm Anchor Change and a former Facebook executive, calls "reputational."

The pandemic, January 6, and the peak of politicization

Then, COVID-19 hit. Misinformation about the global pandemic ran rampant, and people died as a result. The rules for moderating content online expanded internationally to counter the ever-growing problems of hate speech, election misinformation, and health misinformation. Facebook introduced policies targeting Holocaust denial, hate groups, organized militia groups, and conspiracy theories, while Twitter launched its transparency center.

But January 6, 2021, marked a turning point. Platforms like Facebook, Twitter, and YouTube banned or locked then-President Trump's accounts for inciting violence during the Capitol attack.


"I would say Trump de-platforming was a peak swing of the pendulum," Katie Harbath, founder and CEO of tech policy firm Anchor Change and a former Facebook exec, told Mashable. "Since then, over the next four years, [platforms have] been coming back a little bit more to center in terms of how much content they are willing to take down. [And] they're being a lot more quiet about it. They're not being as transparent about it because they don't want the political target on their back around that."

Where are we now?

Since then, Trump has been reinstated on all the major social media platforms. But the refrain has remained the same: Republicans claim that content moderation silences conservative voices. As Berin Szóka, president of TechFreedom, told Mashable: "Censorship is just content moderation that someone doesn't like."

Elon Musk, a self-identified "free-speech absolutist," acquired Twitter in late 2022 and fueled this rhetoric. In January 2023, House Republicans established a subcommittee on the "Weaponization of the Federal Government," targeting alleged censorship of conservative views. In one of their first official acts, they sent letters to research groups demanding documentation of any correspondence between those groups and the federal government or social media companies about content moderation. Meanwhile, a lawsuit alleged that President Joe Biden's administration pressured platforms to suppress COVID-19 misinformation, which state attorneys general argued was a form of government suppression of speech.

Meta, in a notable shift, has reduced its focus on political content, particularly on its Twitter competitor Threads, which Harbath says is "not necessarily content moderation, but it's a decision about what types of content they're presenting to people or not."

What will we see in the future of content moderation?

President-elect Trump has made content moderation a campaign issue. Brendan Carr, his pick to lead the FCC, has already echoed this agenda, calling for the dismantling of what he dubs the "censorship cartel" and vowing to "restore free speech rights for everyday Americans."

"To do that, they have to either bully or require tech companies to carry speech that they don't want to carry," Szóka said. "Republicans are at war on content moderation."

This "war" will likely play out on a few different fronts: legislative and reputational, as Harbath says. Reputationally, we'll see more congressional hearings with tech execs, more posts on X from Trump, and more dubious energy concerning content moderation in general. Legislatively, we have an interesting road ahead. 

As Szóka says, Carr will likely do Trump's bidding with regard to the eligibility criteria for Section 230 immunity, which "grants complete immunity for publisher or speaker activities regardless of whether the challenged speech is unlawful." In practice, this means Facebook is not liable for misinformation, hate speech, or anything else that happens on the platform it owns and runs.

"[Republicans will] use Section 230 because by doing that, they can say, 'We're not requiring anything,’" Szóka said. "You're free, as a private company, to do what you want. But if you want Section 230 immunity, you have to be neutral, and we decide what's neutral."

Harbath sees chaos ahead but questions whether Section 230 will actually change: "There'll probably be a debate and a discussion around it, but whether or not 230 actually changes, I'm skeptical."

At the same time, the rise of AI is reshaping the future of content moderation. "The next four years, how people are consuming information, what we're talking about today is gonna be completely irrelevant and look completely different," Harbath said. "AI is just gonna change how we think about our news feeds, the incentives for people, what they're posting, what that looks like, and it's gonna open up new challenges for the tech companies in terms of how it’s politicized."

Should we freak out? Probably not. According to Harbath, it’s still too early to predict what content moderation under a second Trump term will look like. But we should keep our eyes open. The rules of content moderation — and who gets to write them — are increasingly shaped by political power, public perception, and technological evolution, setting the stage for battles over free speech, corporate responsibility, and the role of government in regulating online spaces.

Christianna Silva
Senior Culture Reporter

Christianna Silva is a Senior Culture Reporter at Mashable. They write about tech and digital culture, with a focus on Facebook and Instagram. Before joining Mashable, they worked as an editor at NPR and MTV News, a reporter at Teen Vogue and VICE News, and as a stablehand at a mini-horse farm. You can follow them on Twitter @christianna_j.

