Facebook just announced that they will remove humans from the production process for their “trending” news section (the list of stories on the right of the page, not the main news feed). Previously, they’ve had to defend themselves from accusations of liberal bias in how these stories were selected. Removing humans is intended, in part, to address these concerns.
The reaction among technologically literate press and scholars (e.g. here, here, and here) has been skeptical. They point out that algorithms are not unbiased; they are created by humans and operate on human data.
I have to disagree with my colleagues here. I think this change does, or could, remove an important type of bias: a preference along the US liberal-conservative axis. Further, relying on algorithmic processes rather than human processes leads to a sort of procedural fairness. You know that every story is going to be considered for inclusion in the “trending” box in exactly the same way. (Actually, I don’t believe that Facebook’s trending topics were ever politically biased — the evidence was always thin — but this is as much about appearance and legitimacy as any actual wrongdoing.)
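To make the procedural-fairness point concrete, here is a minimal, purely hypothetical sketch of an algorithmic trending selector. The `Story` type, the engagement fields, and the scoring weights are all my inventions for illustration, not Facebook’s actual system; the point is only that every story passes through the identical scoring function, with no editor deciding case by case.

```python
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    shares: int    # hypothetical engagement signals
    comments: int
    clicks: int

def trending_score(story: Story) -> float:
    """Score a story from engagement signals alone.

    Every story, whatever its politics, is scored by this same
    formula -- that is the procedural-fairness guarantee.
    """
    return 1.0 * story.shares + 0.5 * story.comments + 0.1 * story.clicks

def select_trending(stories: list[Story], k: int = 10) -> list[Story]:
    # Identical treatment: sort all candidates by the one score.
    return sorted(stories, key=trending_score, reverse=True)[:k]
```

Whether those weights reflect good editorial judgment is a separate question; the procedural guarantee is only that no story ever gets a different rule.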
Of course algorithms are not at all “unbiased.” I’ve been one of many voices saying this for a long time. I’ve written about the impossibility of creating an objective news filtering algorithm. I teach the students in my computational journalism class how to create such algorithms, and we talk about this a lot. Algorithmic techniques can be biased in all sorts of ways: they can be discriminatory because of the data they use for reference, they can harm minorities due to fundamental statistical problems, and they can replicate the biased ways that humans use language.
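That distinction can be demonstrated with the same hypothetical selector from above: feed it engagement data that skews toward one community and its output skews too, even though the procedure treats every story identically. The stories and numbers below are made up for illustration.

```python
# Continuing the hypothetical sketch above: identical procedure,
# skewed input data, skewed output.
stories = [
    Story("Story popular with group A", shares=900, comments=300, clicks=5000),
    Story("Story popular with group B", shares=200, comments=80, clicks=1200),
    Story("Another group-A story", shares=850, comments=250, clicks=4700),
]

for s in select_trending(stories, k=2):
    print(s.headline)
# Both slots go to group-A stories: the procedure was uniform,
# but the engagement data it consumed was not.
```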
And yet, removing humans really can remove an important potential source of bias. The key is recognizing what type of bias Facebook’s critics are concerned about.