The Disinformation Panic
National Review, September 19, 2024 (November Issue)
By giving online intermediaries a sizeable degree of immunity from liability for user content, Section 230 of the Communications Decency Act of 1996 combined a characteristically American defense of free expression with a determination to ensure that this promising new sector was not stifled by another American tradition, predatory litigation. The outcome, through blogs, social media, and countless other outlets, has been to open the public square to voices that once would never have been heard.
Unfortunately, some of those voices — bots, trolls, and other, more skilled operatives — were Russian. Their task, particularly after the 2014 invasion of Ukraine, was to deliver disinformation to the West — lies spread to discredit, damage, or disorient an opponent. Their objective was to whip up division and unease, fomenting racial rancor here, circulating rumors of some emergency there.
Two brutally polarizing political battles in 2016, over Brexit in Britain and between Donald Trump and Hillary Clinton in the U.S., offered an obvious opportunity for troublemaking, and Russia took it. We will never know for sure whether online disinformation tipped the scales in either vote (some analysts say yes, others no, and others maybe). I doubt that it did, but significant swaths of the establishment in both countries were happy to entertain the idea. For the Remainers and for Hillary Clinton and her supporters, it soothed the pain of rejection and cast a cloud of suspicion over the result, giving the Kremlin an additional somewhat paradoxical win: The more that disinformation is talked up, the more distrust there will be.
Online disinformation is a real phenomenon, but with its effectiveness something of a mystery, there’s a good chance that the threat it poses has been overstated. Quite a bit of the research in this field is speculative or, one way or another, self-serving. To be sure, in the right place, or if well crafted and timed correctly (shortly before an election, perhaps), it could be a menace. Its impact will also vary with its subject matter. A pandemic will likely attract more attention than politics. And there is reason enough to worry about deepfakes.
Overall, people appear to view content they see on social media with more skepticism than those who would “protect” us from disinformation think (or say they think). More generally, exaggerated views of disinformation’s persuasiveness tend to go hand in hand with a belief in the gullibility of others. Moreover, much, maybe most, disinformation is drowned out by all the other material coursing through its targets’ feeds.
But panic over disinformation (whatever its source) has been too useful to be allowed to drop. A helpful complement to conveniently flexible “hate,” it has been a handy rationale for greater control over internet speech. It has accelerated the rise of “fact-checkers,” who all too often are propagandists and censors masquerading as guardians of objectivity. Their biases are insufficiently examined (not that they are hard to guess).
The year 2016 was key in the process by which combating disinformation became embedded in the institutional structures of the West. But events in Germany in the previous twelve months had already set in motion the move toward tougher online-content regulation, without which such combat could never take place. In 2015, Angela Merkel flung open Germany’s doors to over a million asylum-seekers. The official narrative, backed up by all the major parties and a compliant media (with exceptions here and there), was of the country’s generous Willkommenskultur. Not all Germans felt the same way, however, and some of them went online to say so, not always politely. Merkel, in a move that the Biden administration would later echo during the pandemic, leaned on Mark Zuckerberg to crack down on such talk.
Arguing that social-media companies had not done enough to address this issue and well aware that resentment over the new arrivals was not going away, Merkel encouraged the German parliament to pass the pioneering, influential, and catchily named Netzwerkdurchsetzungsgesetz (the Network Enforcement Act, or NetzDG) in 2017. One key provision was a requirement to take down posts within a certain time after their being reported — 24 hours if they are “manifestly” illegal, seven days (usually) if their illegality lacks that “manifestly.” Repeated breaches of the law can lead to a fine of up to 50 million euros, triggering concerns that, preferring to err on the side of caution, companies would “over-comply.”
Other countries, unburdened by that annoying First Amendment, and unbothered by criticism that the German law was too harsh (strangely, it had its fans in Moscow), followed suit. And then in 2022, a couple of days, ironically, before Elon Musk concluded his acquisition of Twitter, the European Union passed its Digital Services Act (DSA), with Hillary Clinton cheering the censors on: “For too long, tech platforms have amplified disinformation and extremism with no accountability. The EU is poised to do something about it.”
The DSA imposes a wide range of obligations on online-service providers if they offer their services in the EU. These increase substantially in the case of companies that have more than 45 million users a month there and that Brussels has designated as either a very large online search engine (VLOSE) or a very large online platform (VLOP).
X has been classified as a VLOP and, as such, is required, among many other obligations, to undertake an annual assessment of “systemic” risks arising out of, to oversimplify, the way its operations are set up and the use that is made of its services. Some risks are obvious (dissemination of illegal content), but others are extraordinarily broad (“any actual or foreseeable negative effects on civic discourse and electoral processes”). The VLOP must then explain how it “mitigates” those risks. It is clearly envisaged that the appropriate response to “illegal hate speech” is to remove it, but the overall requirement is that mitigation should be “reasonable, proportionate, and effective.” In the hands of an aggressive regulator, that could mean anything. The EU Commission has already notified X of its preliminary finding that the company is in breach of various provisions of the Digital Services Act. X will push back; Musk has said that the company looks forward to battling this in court. Another EU Commission investigation into X is still under way. Even though Thierry Breton, the EU commissioner who has had some acrimonious spats with Musk, has now quit, X should not expect that Brussels will ease up.
The immense potential size of the penalties — up to 6 percent of global revenue — for a breach of the Digital Services Act may become an irresistible inducement for Musk to try to cut a deal with the commission and, for that matter, to find a safe haven in a compliance regime staffed with European counterparts of the “content moderators” (censors) he fired from Twitter. Ignoring Brussels would not work. There would be cripplingly hefty fines for that too. If Americans’ online speech is to avoid the EU’s censorship, U.S. social-media companies will have to set up systems to ensure that their customers in the EU see only fare sanitized to Brussels’s standards.
The Digital Services Act is not meant to criminalize any new categories of speech. What is illegal under the law of an individual EU member-state or under EU law will remain illegal. Any amendments to legislation in that area will be left to national parliaments or to the EU’s legislative process. Even so, the DSA’s broad language could easily be used to impose de facto censorship on all sorts of theoretically legal speech, in the interest of preventing “harms” that exist only in the progressive imagination and that are hinted at in, among other places, the law’s preamble. Thus on its website the EU Commission warns of the dangers of “climate disinformation.” Tackling that is, it states, incorporated within its general approach to disinformation, including making it “more difficult for disinformation actors to misuse online platforms.”
Davosworld, birthplace of the Great Reset, is forever looking for a fresh crisis that can be exploited to advance its agenda, so it was fairly predictable that contributors to the World Economic Forum’s 2024 Global Risks Report reckoned that, on a two-year view, misinformation and disinformation represented “the most severe global risk.” That the following was highlighted was more surprising: “In response to mis- and disinformation, governments could be increasingly empowered to control information based on what they determine to be ‘true.’”
This is already happening. A regulator cannot classify an item of interest as disinformation or “misinformation” (false information that is passed on by someone who thinks it is true) without, among other questions, deciding whether it is true or not. Then there is malinformation. According to the U.K.’s Government Communication Service, this “deliberately misleads by twisting the meaning of truthful information.” One example of this might be a deceptively edited video. Reason’s Jacob Sullum suggested “true but inconvenient” as an alternative definition after Facebook gave two warning labels, “missing context” and “could mislead people,” to a column in which he criticized the CDC for exaggerating the benefits of mask mandates during the pandemic.
Malinformation, the Government Communication Service recounts, “can be challenging to contest because it is difficult to inject nuance into highly polarized debates.” If it’s too challenging, that’s a sign that the real objection may be to disagreement, not to disinformation. Suppressing such disagreement could be counterproductive and, in an epidemic, lethal. Crowdsourcing ideas to take advantage of the collective intelligence available online makes sense. Insisting that there can be only one answer frequently does not.
But would-be censors march on. The U.K.’s Online Safety Act is coming into force. Its maximum fine? Ten percent of global revenue. In addition, there’s a possibility of jail. Australia’s government is planning legislation with more modest demands. Its maximum fine? A mere five percent of global revenue. Section 230 continues to come under fire from both sides of the aisle. Some Democrats, angered by all the right-wing wrong-think online, want social-media companies to take more responsibility for the content they host. Some Republicans are irritated by anti-conservative bias in content moderation. Meanwhile, Facebook, Google (when its novice chatbot Gemini showcased the extent of the company’s bias, the ensuing PR fiasco was grimly entertaining), and their peers — with the exception of X — carry on as before.
Musk is (as, to take one example, Beijing knows) less of a “free-speech absolutist” than he claims. But the fury his changes at X have stirred up within a large part of the West’s political, regulatory, and media classes has been a disturbing reminder of the depth of the authoritarianism that runs through their ideological mix. In the course of a tirade he wrote for the Guardian in late August, former U.S. labor secretary Robert Reich referred to the arrest in France of Pavel Durov, the co-founder and CEO of Telegram (a company that is both messaging service and social network) and argued that “regulators around the world should threaten Musk with arrest if he doesn’t stop disseminating lies and hate on X.”
Bringing in Durov is also an extrapolation too far. The charges he faces — alleged complicity in crimes such as drug-trafficking and the distribution of child pornography, as well as a refusal to cooperate with the authorities — presumably flow from the ability to send heavily encrypted messages over Telegram, very different legal territory.
Reich also welcomed the Supreme Court’s recent ruling in Murthy v. Missouri, which he described as “a technical win for the public good (technical because the court based its ruling on the plaintiff’s lack of standing to sue).” The Court, he maintained, “had said federal agencies may pressure social media platforms to take down misinformation.” That will surely depend on the circumstances, but that Reich approved of the Court’s letting the feds get away with their appalling behavior in this instance is dispiriting.
The legacy media’s relative indifference to this matter is in marked contrast to its intense criticism of X/Twitter since Musk took over the company. This has extended to performative “departures” from the site and now to AP’s tweeting out a how-to guide to quitting X. The overarching goal, presumably, is to stigmatize X and, by extension, those who post on it. It reflects much of the legacy media’s repudiation of objectivity and its growing discomfort with disagreement.
This is not going to end well.