What types of defenses against disinformation are possible? And which of these would we actually want to use in a democracy, where approaches like censorship can impinge on important freedoms? To try to answer these questions, I looked at what three counter-disinformation organizations are actually doing today, and categorized their tactics.
The EU East StratCom Task Force is a contemporary government counter-propaganda agency. Facebook has made numerous changes to its operations to try to combat disinformation, and is a good example of what platforms can do. The Chinese information regime is a marvel of networked information control, and provokes questions about what a democracy should and should not do.
The result is the paper Institutional Counter-disinformation Strategies in a Networked Democracy (pdf). Here’s a video of me presenting this work at the recent Misinfo workshop.
I should say from the start that this work is not about defining “disinformation.” Adjudicating which speech is harmful is a profound problem with millennia of history, and what sort of narratives are “false” is one of the major political battles of our time. Instead, my goal here is to describe methods: what kinds of responses are there, and how do they align with the values of an open society?
The core of my analysis is this chart, which organizes the tactics of the above organizations into six groups.
I’ll describe each of these strategies briefly; for more depth (and references) see the talk or the paper.
Refutation, rebuttal, or debunking might be the most obvious counter-strategy. It’s also well within the bounds of democracy, as it’s simply “more speech.” It’s most effective if it’s done consistently over the long term, and in any case it’s practiced by most counter-disinformation organizations.
Exposing inauthenticity combats one of the oldest and best-recognized forms of disinformation: pretending to be someone you are not. Bot networks, “astroturfing,” and undisclosed agendas or conflicts of interest could all be considered inauthentic communication. The obvious response is to discredit the source by exposing it.
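To make “exposing” concrete, here is a toy sketch of a single coordination signal: many distinct accounts posting identical text within a short time window. This is not any platform’s actual detection pipeline; the function name, data format, and thresholds are invented for illustration, and real systems combine many weaker signals like this one.

```python
from collections import defaultdict
from datetime import timedelta

def flag_coordinated_posts(posts, window=timedelta(minutes=5), min_accounts=20):
    """Toy coordination signal: flag texts posted verbatim by many
    accounts within a short window. `posts` is a list of
    (account_id, text, timestamp) tuples; thresholds are arbitrary."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((account, ts))

    flagged = []
    for text, entries in by_text.items():
        entries.sort(key=lambda e: e[1])  # order by timestamp
        # slide a window over the posts, counting distinct accounts
        for i, (_, start) in enumerate(entries):
            accounts_in_window = {acct for acct, ts in entries[i:]
                                  if ts - start <= window}
            if len(accounts_in_window) >= min_accounts:
                flagged.append((text, sorted(accounts_in_window)))
                break
    return flagged
```

A signal like this only surfaces candidates; actually discrediting a source still requires human investigation and public disclosure.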
Alternative narratives. A long line of experimentation suggests that merely saying something is false is less effective than providing an alternative narrative, and the non-platform organizations in this analysis combat disinformation in part by promoting their own narratives.
Algorithmic filter manipulation. The rise of platforms creates a truly new way of countering disinformation: demote it by decreasing its ranking in search results and algorithmically generated feeds. Conversely, it is possible to promote alternative narratives by increasing their ranking.
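To make the mechanics concrete, here is a minimal sketch of demotion in a score-ranked feed, assuming each item carries a relevance score plus a flag set by some classifier or fact-checking process. The `flagged` field and the 0.2 demotion factor are invented for illustration; the point is that flagged items still appear, just lower down.

```python
def rank_feed(items, demotion_factor=0.2):
    """Order feed items by effective score. `items` is a list of dicts
    with 'score' (float) and 'flagged' (bool); flagged items are
    demoted, not removed."""
    def effective_score(item):
        score = item["score"]
        if item["flagged"]:
            score *= demotion_factor  # demote rather than delete
        return score
    return sorted(items, key=effective_score, reverse=True)

feed = rank_feed([
    {"id": "a", "score": 0.9, "flagged": True},
    {"id": "b", "score": 0.5, "flagged": False},
])
# "b" now outranks "a" despite "a" having the higher raw score
```

The design choice worth noticing is that demotion changes visibility without deleting speech, which is why it occupies a middle ground between refutation and censorship.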
Speech laws. The U.S. Supreme Court has held that the First Amendment generally protects lying; the major exceptions concern defamation and fraud. In Europe, the recent report of the High Level Expert Group on Fake News and Online Disinformation recommended against attempting to regulate disinformation. But in most democracies platforms are still legally liable for hosting certain types of content. For example, Germany requires platforms to remove Nazi-related material within 24 hours or face fines.
Censorship. One way of combating disinformation is simply to remove it from public view. In the 20th century, censorship was sometimes possible through control over broadcast media. It is difficult with a free press, and harder still to eliminate information from a networked ecosystem. Yet platforms do have the power to remove content entirely and often do, both for their own reasons and as required by law. (This differs from speech laws because the latter may impose fines or require disclaimers or otherwise restrict speech without removing it.)
Despite their differences, there are many common patterns among the East StratCom Task Force, Facebook, and the Chinese government. Each of the methods they use has certain advantages and disadvantages in terms of efficacy and legitimacy, that is, alignment with the values of an open society.
A cross-sector response, both distributed and coordinated, is perhaps the biggest challenge. In societies with a free press, no one has the power to direct all media outlets and platforms to refute, ignore, or publish particular items, and it seems unlikely that people across different sectors of society would even agree on what counts as disinformation. In the U.S., the State Department, the Defense Department, academics, journalists, technologists, and others have all launched their own more-or-less independent counter-disinformation efforts. In many countries, a coordinated response will require coming to terms with a deeply divided population.
But no matter what we collectively choose to do, citizens will require strong assurances that the strategies employed to counter disinformation are both effective and aligned with democratic values.