In an audacious act of censorship — and what Senate Judiciary Committee members have condemned as “actively interfering” in a presidential election — Twitter and Facebook teamed up last week to suppress a bombshell story about Hunter and Joe Biden from one of the biggest news publishers in the country. The move backfired massively, with MIT researchers concluding that Twitter’s actions “nearly doubled” the visibility of the damaging story. Amid congressional subpoenas and fierce backlash (which Twitter euphemistically called “significant feedback”), the platform reversed course.
The platforms’ actions were perhaps the most egregious example of news suppression we’ve ever seen, but even more troubling is the defense of those actions by mainstream media outlets.
On Sunday, the editorial board of The Washington Post — which warns readers on every page that “democracy dies in darkness” — penned a shameful defense of Twitter and Facebook’s impulse to suppress the Biden-damaging story.
“Twitter and Facebook were right to suppress a Biden smear,” the headline declares. “But they should tell us why they did.”
Having announced to readers that they’ve concluded, before all the facts are in, that the New York Post’s story on the Bidens and Ukraine is a “smear,” the editorial board attempts, like Twitter and Facebook, to explain its premature judgment that the claims in the damaging report were “baseless.”
Facebook justified its choice according to a year-old policy intended to prevent posts from spreading widely when the site detects “signals” of falsehood. This is a smart rule — a sort of circuit-breaker to stop the platform’s own internal mechanics from catapulting a lie to viral status. The problem is that no one knows what the signals in question are: whether they are based on objective measures such as the number of people who reshare a piece and then delete their resharing, or whether they are based on subjective surmises such as the possibility, in this case, that the New York Post article was part of a propaganda campaign against the former vice president. And though Facebook says it has applied this stricture before in sensitive situations, it’s not standard practice.
Twitter, we’re assured, was also correct to blackhole a massive story; the only problem was Twitter’s entire premise for blocking it:
Twitter, on the other hand, based its more radical intervention on an entirely distinct standard: a prohibition on the sharing of hacked materials. Critics were quick to ask why this restriction didn’t also apply to the New York Times’s reporting on President Trump’s tax returns, or any number of prizewinning journalistic products from years past. Twitter backtracked, announcing that it will now only remove hacked content directly shared by hackers or accomplices, and that the inclusion of personal details in the New York Post story was actually responsible for the URL-blocking. But Twitter never shared its basis for believing the materials were hacked. And while the site doesn’t have a general policy against misinformation, it strains credulity to imagine that its action had nothing to do with doubting the legitimacy of the story it shut down.
Having revealed that their own rationale for defending both platforms is as fundamentally flawed as the platforms’ rationale for blocking the story, the Post’s editors conclude by showing their hand about what is actually driving their soft rebuke: the platforms have opened the door to new “accusations of anti-conservative censorship.”
The contradictions that came with these calls have stirred up a fresh firestorm of accusations of anti-conservative censorship. Allegations of partisan bias in content-moderation decisions have never been borne out by the evidence. Yet it’s much easier to launch such allegations when platforms aren’t clear about precisely what their rules are and precisely how they’re being applied. Twitter would do well to develop a comprehensive misinformation policy; Facebook should better explain how its current misinformation policy actually operates. And both must figure out how their existing policies interact with the concerns about hack-and-leaks haunting the upcoming election. One way of restoring trust in the public sphere is to stem the transmission of untrue tales — but that can create distrust of its own unless it is done forthrightly.
Democracy indeed dies in darkness, and the Post’s editorial board is openly calling for more blackouts.