Sometimes an algorithm really is (politically) unbiased

Facebook just announced that they will remove humans from the production process for their “trending” news section (the list of stories on the right of the page, not the main news feed). Previously, they’ve had to defend themselves against accusations of liberal bias in how these stories were selected. Removing humans is designed, in part, to address those concerns.

The reaction among the technologically literate press and scholars (e.g. here, here, and here) has been skeptical. They point out that algorithms are not unbiased; they are created by humans and operate on human data.

I have to disagree with my colleagues here. I think this change does, or could, remove an important type of bias: a preference along the US liberal-conservative axis. Further, relying on algorithmic processes rather than human processes leads to a sort of procedural fairness. You know that every story is going to be considered for inclusion in the “trending” box in exactly the same way. (Actually, I don’t believe that Facebook’s trending topics were ever politically biased — the evidence was always thin — but this is as much about appearance and legitimacy as any actual wrongdoing.)
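
To make the procedural-fairness point concrete, here is a minimal sketch of what a uniform trending selector might look like. Everything in it is hypothetical: the `Story` fields, the share-velocity formula, and the cutoff are invented, since Facebook has not published its actual pipeline. The point is only that one fixed function scores every story in exactly the same way.

```python
# Hypothetical sketch of a procedurally fair trending selector.
# All names and the scoring heuristic are invented for illustration.
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    shares_last_hour: int
    shares_prev_hour: int

def trending_score(story: Story) -> float:
    """Share velocity: how quickly a story is accelerating.
    No human judgment enters here; the formula never sees the topic."""
    return story.shares_last_hour / max(story.shares_prev_hour, 1)

def select_trending(stories: list[Story], k: int = 10) -> list[Story]:
    # Procedural fairness: the identical function is applied to every
    # story, so inclusion depends only on the counts, not on editors.
    return sorted(stories, key=trending_score, reverse=True)[:k]
```

Whether such a function is unbiased in other senses is exactly the question taken up below; it can still inherit bias from the share counts themselves.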

Of course algorithms are not at all “unbiased.” I’ve been one of many voices saying this for a long time. I’ve written about the impossibility of creating an objective news filtering algorithm. I teach the students in my computational journalism class how to create such algorithms, and we talk about this a lot. Algorithmic techniques can be biased in all sorts of ways: they can be discriminatory because of the data they use for reference, they can harm minorities due to fundamental statistical problems, and they can replicate the biased ways that humans use language.
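
One of those statistical problems is easy to demonstrate. Suppose two groups behave identically, but one group is two hundred times smaller in the data. Estimates for the small group are then far noisier, so any single threshold applied uniformly will misfire on the minority more often. A toy simulation (all numbers invented) shows the effect:

```python
# Toy illustration: identical true rates, very different estimate noise.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 0.10                       # the same for both groups
n_majority, n_minority = 100_000, 500  # sample sizes in the data
trials = 10_000                        # repeated draws to measure spread

maj_est = rng.binomial(n_majority, true_rate, trials) / n_majority
min_est = rng.binomial(n_minority, true_rate, trials) / n_minority

# Standard error scales as 1/sqrt(n), so the minority estimate is
# roughly sqrt(200) ~ 14x noisier than the majority estimate.
print(f"majority estimate std: {maj_est.std():.4f}")  # ~0.001
print(f"minority estimate std: {min_est.std():.4f}")  # ~0.013
```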

And yet, removing humans really can remove an important potential source of bias. The key is recognizing what type of bias Facebook’s critics are concerned about.

Startups vs. Systems: Why Doing Good with Tech is Hard

It’s not easy to make social change with technology. There’s excitement around bringing “innovation” to social problems, which usually means bringing in ideas from the technology industry. But societies are more than software, and social enterprise doesn’t have the same economics as startups.

I knew all this going into my summer fellowship at Blue Ridge Labs, but my experience there has given me a clearer idea of why. These are the themes that kept coming up for me after two months working with 16 other fellows on the problem of access to justice (A2J) for low-income New Yorkers.

You have to engage the incumbents

The culture of tech startups is not well adapted to taking on big systems. Startups have traditionally tried to enter the wide-open spaces created by the new possibilities of technology, or to use a technical advantage to bypass incumbents. They generally try to avoid engaging with major institutions, yet institutional reform is a key part of the “structural change” that so many of us want.

Uber does an end run around the taxi system, but you can’t simply do an end run around the court system, the state Bar, or the local police.
