There are many kinds of questions about discrimination, fairness, or bias where data is relevant. Who gets stopped by the police on the road? Who gets admitted to college? Who gets approved for a loan, and who doesn’t? Data-driven analysis of fairness has become even more important as we start to deploy algorithmic decision making across society.
I attempted to synthesize an introductory framework for thinking about what fairness means in a quantitative sense, and about how these mathematical definitions connect to legal and moral principles and to our real-world institutions of criminal justice, employment, lending, and so on. I ended up with two talks.
This longer talk (50 minutes), presented at Code for America SF, goes into much more depth, including the mathematical definitions of different types of fairness and the tricky question of whether algorithms should be “blinded” to attributes like race and gender. It also includes several case studies of real algorithmic systems, and discusses how we might design such systems to reduce bias. (Slides)
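To make "mathematical definitions of different types of fairness" concrete, here is a toy sketch (not taken from the talk; all names and numbers are made up for illustration) of two widely used quantitative criteria: demographic parity, which compares positive-decision rates across groups, and equal opportunity, which compares true positive rates among people who truly qualified.

```python
# Toy illustration of two quantitative fairness definitions.
# All data below is hypothetical, invented for this sketch.

def positive_rate(decisions):
    """Fraction of people who received a positive decision (e.g. loan approved)."""
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, outcomes):
    """Among people who truly qualified (outcome == 1), the fraction approved."""
    approved_if_qualified = [d for d, o in zip(decisions, outcomes) if o == 1]
    return sum(approved_if_qualified) / len(approved_if_qualified)

# Hypothetical decisions (1 = approved) and true outcomes (1 = would repay)
# for two groups, A and B.
group_a_decisions = [1, 1, 0, 1, 0, 1]
group_a_outcomes  = [1, 1, 0, 1, 1, 0]
group_b_decisions = [1, 0, 0, 1, 0, 0]
group_b_outcomes  = [1, 1, 0, 1, 0, 0]

# Demographic parity: do the groups receive positive decisions at equal rates?
dp_gap = positive_rate(group_a_decisions) - positive_rate(group_b_decisions)

# Equal opportunity: among truly qualified people, are approval rates equal?
eo_gap = (true_positive_rate(group_a_decisions, group_a_outcomes)
          - true_positive_rate(group_b_decisions, group_b_outcomes))

print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equal opportunity gap:  {eo_gap:.2f}")
```

A system can satisfy one criterion while violating the other, which is part of why the choice among definitions is a political and moral question, not just a technical one.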
My favorite resources on these topics:
- The Workbench workflow analyzing Massachusetts traffic ticket data.
- Sandra Mayson, Bias In, Bias Out. One of my favorite overall discussions of algorithmic bias.
- Megan Stevenson, Assessing Risk Assessment in Action. What happens with criminal justice risk assessment in the real world?
- Open Policing Project findings. A clearly thought-out analysis of US national traffic stop data.
- Workbench Open Policing Project tutorial. An interactive introduction to working with this data.
- Arvind Narayanan, 21 Definitions of Fairness and Their Politics. More on the connection between quantitative and political concepts of fairness.