Meta’s decision to reduce content moderation and scrap its fact-checking protocols has been questioned by the company’s own Oversight Board, which stressed the need for safety, transparency, and global responsibility.
In its latest decision, the Oversight Board, a body created by Meta but operating independently, accused the tech giant of making sweeping changes “hastily, in a departure from regular procedure,” without disclosing whether any human rights assessments were carried out beforehand.
Meta didn’t just tweak the rules. In January, it abandoned its U.S. fact-checking initiative, relaxed enforcement on inflammatory content, and stopped proactively looking for less severe policy violations.
As a result, posts that would previously have been caught, such as those calling gay people mentally ill or describing women as “household objects or property,” are now flagged only if users report them.
Meta said these changes were necessary. Mark Zuckerberg claimed that years of tight moderation had caused “too many mistakes and too much censorship.”
But he offered no data to back that up, and critics say the timing is suspect: the changes landed just before Donald Trump returned to the White House. The Board believes the company might have prioritised politics over platform integrity.
In its ruling, the Board asked Meta to “assess whether the changes could have uneven consequences globally, especially in countries experiencing current or recent crises, such as armed conflicts.”
The answer could affect millions of people in fragile regions, where misinformation can quickly escalate into violence.
Of the 11 content cases it reviewed, the Board upheld some of Meta’s decisions and overturned others. It supported the choice to leave up videos involving transgender women, yet ordered the removal of posts related to anti-immigration riots in the UK, citing Meta’s sluggish response to violent and hateful speech.
It also recommended that Meta remove the term “transgenderism” from its Hateful Conduct policy entirely.
The Board made 17 recommendations in total. These included stronger enforcement of harassment rules, greater transparency about how hateful ideologies are handled, and a clear evaluation of the new “Community Notes” feature, which became the company’s main tool for correcting misinformation after Meta ended its partnerships with news organisations and independent fact-checkers.
Meta’s response? Lukewarm at best. In a generic statement, the company said it welcomed decisions “that leave up or restore content in the interest of promoting free expression,” but ignored the rulings that demanded removals.
Funding, however, is still intact. According to Oversight Board co-chair Paolo Carozza, there’s no indication that Meta intends to scale back its support.
“We have no reason to think that Meta is soured on the board or planning to make any large scale structural changes in terms of its commitment with the board,” he said.
Meta has allocated at least $35 million annually for the Board’s operations through 2027, and previous commitments—$130 million in 2019, $150 million in 2022—are locked into a trust meant to preserve independence.
But funding alone doesn’t fix the core issue. If Meta continues to make policy decisions behind closed doors and without due diligence, the risk of platform abuse grows and the company’s credibility shrinks.
The Board says freedom of expression can’t come at the expense of human rights, nor should corporate convenience override global safety.