Sometimes an algorithm really is (politically) unbiased

Facebook just announced that they will remove humans from the production process for their “trending” news section (the list of stories on the right of the page, not the main news feed). Previously, they’ve had to defend themselves from accusations of liberal bias in how these stories were selected. Removing humans is designed, in part, to address these concerns.

The reaction among technologically literate press and scholars (e.g. here, here, and here) has been skeptical. They point out that algorithms are not unbiased; they are created by humans and operate on human data.

I have to disagree with my colleagues here. I think this change does, or could, remove an important type of bias: a preference along the US liberal-conservative axis. Further, relying on algorithmic processes rather than human processes leads to a sort of procedural fairness. You know that every story is going to be considered for inclusion in the “trending” box in exactly the same way. (Actually, I don’t believe that Facebook’s trending topics were ever politically biased — the evidence was always thin — but this is as much about appearance and legitimacy as any actual wrongdoing.)

Of course algorithms are not at all “unbiased.” I’ve been one of many voices saying this for a long time. I’ve written about the impossibility of creating an objective news filtering algorithm. I teach the students in my computational journalism class how to create such algorithms, and we talk about this a lot. Algorithmic techniques can be biased in all sorts of ways: they can be discriminatory because of the data they use for reference, they can harm minorities due to fundamental statistical problems, and they can replicate the biased ways that humans use language.

And yet, removing humans really can remove an important potential source of bias. The key is recognizing what type of bias Facebook’s critics are concerned about.

There are many ways to design a “trending topics” algorithm. You can just report which stories are most popular. But this might hide important news behind a wall of Kim Kardashian, so most trending algorithms also include a “velocity” component that responds to how fast a story is growing (e.g. Twitter). Facebook’s trending topics are also location-specific and personalized. None of this is “objective.” These are choices about what is important to see, just as an editor makes choices. And perhaps Facebook is making choices that make them the most money, rather than the supposedly neutral and public-service oriented choices of an editor, and that’s a type of bias too. It’s also true that algorithmic systems can be gamed by groups of users working together (which is either a feature or a bug, depending on what you feel deserves coverage). Users can even work together to suppress topics entirely.
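
To make this concrete, here is a minimal sketch of the “velocity” idea. It is my own illustration, not Facebook’s or Twitter’s actual code; the Story fields, the growth formula, and the sample numbers are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    shares_this_hour: int
    shares_prev_hour: int

def trend_score(story: Story) -> float:
    """Toy trending score: recent volume weighted by growth rate."""
    # Velocity: how much faster is the story spreading than it was an hour ago?
    growth = story.shares_this_hour / max(story.shares_prev_hour, 1)
    return story.shares_this_hour * growth

stories = [
    Story("Celebrity gossip", shares_this_hour=5_000, shares_prev_hour=5_200),
    Story("Breaking policy news", shares_this_hour=4_000, shares_prev_hour=200),
]

# Ranking by raw share count favors the gossip item; the velocity term flips the order.
for s in sorted(stories, key=trend_score, reverse=True):
    print(f"{s.title}: {trend_score(s):,.0f}")
```

Every constant and formula in a sketch like this is an editorial decision in disguise: count raw shares and the gossip item wins; weight by growth and the breaking story does.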

But none of this is left-right political bias, and that’s the kind of bias that everyone has been talking about. I can’t see anything in the design of these types of trend-spotting algorithms that would make them more favorable to one political orientation or another.

This doesn’t mean the results of the algorithm — the trending news stories themselves — are going to be politically neutral. The data that the algorithms operate on might be biased, and probably will be. Facebook monitors the articles that are being shared on their platform, and there is no guarantee that a) news sources produce and promote content in some “neutral” way and b) the users that share them are unbiased. If it turns out that more Facebook users are liberal, or liberal Facebook users are more active, then liberal-friendly articles will be more popular by definition.
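
A toy simulation shows the mechanism. The 60/40 split below is purely hypothetical, as is the assumption that people only share stories matching their own leaning; the point is that a perfectly neutral “most shared” rule still mirrors whatever skew exists among the people doing the sharing.

```python
import random

random.seed(0)

N_SHARERS = 10_000
LIBERAL_FRACTION = 0.6   # hypothetical skew in the active user base

share_counts = {"liberal-friendly story": 0, "conservative-friendly story": 0}

for _ in range(N_SHARERS):
    leaning = "liberal" if random.random() < LIBERAL_FRACTION else "conservative"
    # Assume each user shares the story that matches their own leaning.
    share_counts[f"{leaning}-friendly story"] += 1

# The trending rule itself is neutral: it just counts shares and sorts.
for story, count in sorted(share_counts.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{story}: {count}")
```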

However, this is a bias of the users, not Facebook itself. Every social software platform operates under a set of rules that are effectively a constitution. They define what can be said and how power is distributed. And some platform constitutions are more democratic than others: the administrators have power or the users have power in varying degrees over various things. Facebook has previously made other changes to reduce human judgment; this can be seen as a continual process of devolving control to the users, although it’s probably more to do with reducing costs through automation.

By removing humans entirely from the trending topics, Facebook is saying that the trending algorithm itself — which is very likely neutral with regard to the liberal/conservative axis — is the governing law of the system. The algorithm may not be “objective” in any deep way, but it is democratic in a certain sense. We can expect the trending stories to mirror the political preferences of the users, rather than the political preferences of Facebook employees. This is exactly what both Facebook and its critics want.

Personally, I think that humans plus machines are a terrific way to decide what is newsworthy. The former trends curators did important editorial work highlighting stories even when they weren’t popular: “People stopped caring about Syria … if it wasn’t trending on Facebook, it would make Facebook look bad.” This is exactly what a human editor should be doing. But Facebook just doesn’t want to be in this business.

One thought on “Sometimes an algorithm really is (politically) unbiased”

  1. It is really interesting how you think that an algorithm used to choose trending news stories will be more biased than actual humans. I can see your point because the algorithm is based on the popularity of stories within a certain demographic. Either way, I think it is important for people to read their news from multiple outlets if they really want to avoid getting biased information.
