Ethical Software Engineering Lab Course

There is now, at long last, wide concern over the negative effects of technology, along with calls to teach ethics to engineers. But critique is not enough. What tools are available to the working engineer to identify and mitigate the potential harms of their work?

I’ve been teaching the effects of technology on society for some time, and we cover a lot of it in my computational journalism course. This is an outline for a broader hands-on course, which I’m calling the Ethical Engineering Lab.

This eight-week course is a hands-on introduction to the practice of what you might call harm-aware software engineering. I’ve structured it around the Institute for the Future’s Ethical OS, a framework I’ve found useful for categorizing the places where technology intersects with personal and social harm. Each class is three hours long, split between lecture and lab time. Students must complete a project investigating actual or potential harms from technology, and their mitigations.

Each lecture is structured around a set of issues (cases where technology is or could be involved in harm) and tools (methods for mitigating those harms). The goal is to train students in the current state of the art on these problems, which often requires a deep dive from both the social and technical perspectives: we will study differential privacy algorithms as well as HIPAA health data privacy. In many cases there is disagreement over whether certain harms are real and how serious they are, so we will explore the tradeoffs of possible design choices.

Our hands-on exploration (lab time and final projects) will involve a combination of qualitative and quantitative methods. For example, we might read the EULAs of all the products we use and see if there are any surprises. Or we might use a Jupyter notebook with real data from the COMPAS criminal justice risk assessment algorithm to investigate the tradeoffs between different definitions of quantitative fairness. I’ve included example projects that students could do in each area.

Some technical background is required, as the goal is to teach the engineering aspects of these problems. Many but not all final projects will require coding. I particularly encourage students to choose a project related to their work.

This post is just a sketch to suggest the sort of material I’d want to include. Doubtless, a great many things are missing. What else should we cover? What references are especially good on these topics? Do you want me to teach this at your organization? Get in touch.

Truth, Disinformation, Propaganda

Issues

  • Overview of recent disinformation campaigns (2016 election and globally).
  • Disinformation spreads farther than truth.
  • Review of current state-of-the-art in audio, video, text, and photo deepfakes.
  • Defining “propaganda.” The ethics of persuasion.
  • What the most advanced chatbots can do today.

Tools

Discussions

  • How could your technology be used as part of a disinformation campaign?

Example Projects

  • Build a chatbot that impersonates a person or company. See if you can fool your classmates.
  • Build a fake news classifier from one of the common fake news datasets. What signals does it end up learning? Can it be made to work at scale?
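
Here is roughly what a first pass at the classifier project might look like: a minimal bag-of-words baseline in Python. The file and column names are placeholders for whichever public dataset you pick, and a real project would need to go much deeper.

    # Minimal fake-news-classifier baseline. Assumes a CSV with "text" and
    # "label" columns; the file name is a placeholder for your dataset.
    import numpy as np
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("fake_news.csv")  # placeholder path
    X_train, X_test, y_train, y_test = train_test_split(
        df["text"], df["label"], test_size=0.2, random_state=0)

    vec = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vec.fit_transform(X_train), y_train)
    print(classification_report(y_test, clf.predict(vec.transform(X_test))))

    # What signals did it learn? Inspect the highest-weight n-grams. These
    # often turn out to be source or topic artifacts rather than markers of
    # falsehood, which says a lot about whether this can work at scale.
    terms = np.array(vec.get_feature_names_out())
    order = np.argsort(clf.coef_[0])
    print("most fake-leaning terms:", terms[order[-10:]])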


Addiction & The Dopamine Economy

Issues

  • Addiction psychology, through the example of gambling and casino design.
  • Defining “engagement” and the effects of optimizing for it.
  • Effects of screens on sleep.
  • “Ultra-FOMO”: What do constant images of perfection do to us?

Tools

  • “Time well spent” metrics; well-being research
  • Screen time reports
  • Human and algorithmic approaches to evaluating content quality

Discussions

  • What would addiction look like on your platform?
  • How can your business make money without addiction?

Example Projects

  • Estimate the quantitative effect of removing a particular addictive feature (a sketch of the estimation step follows this list). Or implement the change in your product and find out.
  • Build a machine learning system that ranks content by “quality,” in the “time well spent” sense. What measure are you using, and why, and how does the classifier perform relative to this standard?
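
The statistical core of the first project is estimating an effect size with its uncertainty. Here is a minimal sketch, using simulated per-user engagement numbers in place of real logs:

    # Sketch of estimating the effect of removing an addictive feature via an
    # A/B test. The gamma-distributed "daily minutes" here are simulated
    # purely for illustration; in practice they would come from user logs.
    import numpy as np

    rng = np.random.default_rng(0)
    control = rng.gamma(shape=2.0, scale=30.0, size=5000)    # feature present
    treatment = rng.gamma(shape=2.0, scale=27.0, size=5000)  # feature removed

    effect = treatment.mean() - control.mean()

    # Bootstrap a 95% confidence interval for the difference in means.
    diffs = [rng.choice(treatment, treatment.size).mean()
             - rng.choice(control, control.size).mean()
             for _ in range(2000)]
    low, high = np.percentile(diffs, [2.5, 97.5])
    print(f"change in mean daily minutes: {effect:.1f} "
          f"(95% CI {low:.1f} to {high:.1f})")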

Economic and Asset Inequalities

Issues

Example Projects

  • Reproduce the simulation showing that pricing algorithms can learn to collude (a simplified sketch follows this list). Under what conditions does this happen? How can AIs be designed not to do this?
  • Analyze real lending data to determine the demographics of who gets a loan now, and how that would change if better prediction were available, as this notebook does.
  • Simulate personalized pricing, using a model to estimate the willingness to pay of different demographic groups (by location, age, etc.). How would this affect the distribution of prices across income levels?
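
Here is a heavily simplified sketch of the collusion simulation, in the spirit of the Q-learning pricing experiments (e.g. Calvano et al.) but with illustrative parameters of my own: two independent learners repeatedly set prices in a linear-demand duopoly, where the one-shot competitive price is the lowest on the grid.

    # Two Q-learning firms price against each other. Whether they end up
    # above the competitive price depends on the parameters, which is
    # exactly what the project should explore.
    import numpy as np

    rng = np.random.default_rng(0)
    prices = np.linspace(1.0, 2.0, 5)   # discrete price grid
    n = len(prices)

    def profit(p_own, p_other):
        # Simple linear demand; the competitive (Nash) price is 1.0 here.
        q = max(0.0, 4.0 - 3.0 * p_own + 2.0 * p_other)
        return p_own * q

    # State = both firms' previous price indices; one Q-table per firm.
    Q = [np.zeros((n, n, n)) for _ in range(2)]
    state, alpha, gamma = (0, 0), 0.1, 0.95

    for t in range(200_000):
        eps = max(0.01, 1.0 - t / 100_000)   # decaying exploration
        acts = [int(rng.integers(n)) if rng.random() < eps
                else int(np.argmax(Q[i][state])) for i in range(2)]
        rewards = [profit(prices[acts[0]], prices[acts[1]]),
                   profit(prices[acts[1]], prices[acts[0]])]
        nxt = (acts[0], acts[1])
        for i in range(2):
            target = rewards[i] + gamma * Q[i][nxt].max()
            Q[i][state][acts[i]] += alpha * (target - Q[i][state][acts[i]])
        state = nxt

    print("learned prices:", [prices[np.argmax(Q[i][state])] for i in range(2)])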

Machine Ethics & Algorithmic Biases

Issues

Tools

  • Introduction to quantitative fairness measures. Three classic types, their advantages and drawbacks: 1) Demographic parity: hire the same proportion of men and women. 2) Equal error rates: make sure the classifier fails at the same rate for different races. 3) Calibration: ensure a prediction means the same thing for all groups. (A worked example of all three follows this list.)
  • Stanford’s Law, Bias, and Algorithms course notebooks.
  • Impossibility theorem: when base rates differ between groups, these criteria cannot all be satisfied at once. Which type of fairness to enforce is a policy choice.
  • Real-world outcomes. After recidivism prediction was introduced in Kentucky, judges initially reduced detention rates in line with the computed risk scores, but the effect gradually wore off. A detailed analysis of the effects of predicting which children are likely to require intervention by child protection services.
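
Here is a worked toy example of the three criteria. With different base rates, a single score threshold produces different positive rates, error rates, and precision across groups; this is the impossibility result in miniature. All data below is synthetic.

    # The three classic fairness criteria, computed on simulated risk scores
    # for two groups with different base rates. Purely illustrative data.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(n, base_rate):
        y = rng.random(n) < base_rate                  # true outcomes
        score = np.clip(y * 0.2 + rng.normal(0.4, 0.2, n), 0, 1)
        return y, score

    groups = {"A": simulate(10_000, 0.3), "B": simulate(10_000, 0.5)}
    threshold = 0.5

    for name, (y, score) in groups.items():
        pred = score >= threshold
        pos_rate = pred.mean()          # demographic parity compares these
        fpr = pred[~y].mean()           # equal error rates compare these...
        fnr = (~pred)[y].mean()         # ...and these
        precision = y[pred].mean()      # calibration: P(outcome | flagged)
        print(f"group {name}: positive rate {pos_rate:.2f}, FPR {fpr:.2f}, "
              f"FNR {fnr:.2f}, precision {precision:.2f}")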

Example Projects

The Surveillance State

Issues

Scenarios

  • Data mining tools built for investigative journalism are repurposed for harassment.
  • China’s social credit system grows up and is applied to users worldwide to enforce authoritarian norms.
  • Police facial recognition cameras effectively track every citizen’s location, bypassing Fourth Amendment protections against tracking.

Discussions

  • What are the technical, legal, and social factors that prevent law enforcement from abusing mass surveillance in each country? How will your technology interact with these factors?

Example Projects

  • With their prior permission, investigate a classmate through public information only. What can you correctly infer about their life?
  • Publicly display your heart rate for a week and report your results.

Data Control & Monetization

Issues

  • Data privacy law primer, including GDPR and HIPAA.
  • Inadvertent collection of data, e.g. the Google Street View Wi-Fi and Mixpanel password incidents.
  • Data leaks due to mistakes and hacks.
  • The effect of making ostensibly “public” data more available or interpretable, e.g. Graffiti Tracker, The Journal News’ gun permit map.

Tools

  • Redaction and minimization. Differential privacy, through the example of the privacy changes for the 2020 Census (a minimal sketch of the core mechanism follows this list).
  • Location data. How much it reveals, how easy it is to de-anonymize.
  • Health data. Correlations with life outcomes. Regulatory issues.
  • Issues of personalized recommendations and ads, e.g. targeting ads to pregnant women.
  • General effects of better prediction on the distribution of resources and risk. For example, if you had perfect information on someone’s future health, would that destroy the health insurance market?
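
Here is a minimal sketch of the Laplace mechanism, the basic building block behind deployments like the Census’s. The epsilon value is purely illustrative.

    # Epsilon-differentially-private count query via the Laplace mechanism.
    import numpy as np

    rng = np.random.default_rng()

    def private_count(values, predicate, epsilon=0.5):
        # A count has sensitivity 1 (one person changes it by at most 1),
        # so Laplace noise with scale 1/epsilon gives epsilon-DP.
        true_count = sum(1 for v in values if predicate(v))
        return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

    ages = [23, 31, 45, 52, 37, 29, 61, 48]   # toy "database"
    print(private_count(ages, lambda a: a >= 40))

    # Repeated queries spend privacy budget: total epsilon adds up across
    # calls, which is why an API must track it per user. That is the
    # reconstruction risk the first project below investigates.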

Discussions

  • What data do you collect? Split into small groups, discuss and make a list, then merge the lists. Were any types of data missed by a group because it lacked someone with a specific perspective?
  • What rights would your users want with regard to their data? What problems will they face if they don’t have these rights?

Example Projects

  • Experiment with adding differential privacy to one of your APIs. How easy is it to learn personal information, via reconstructions from multiple API calls, before and after?
  • Reconstruct someone’s life from anonymized location data (a classmate could volunteer theirs, or you could use the NYC taxi data or data from apps).
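
The re-identification step of the location project is often embarrassingly simple. A sketch, assuming a CSV of timestamped pings (the file and column names are placeholders):

    # Infer likely home and work locations from raw location pings.
    import pandas as pd

    def infer_anchor(df, hours):
        # Most frequent rounded location during the given hours of day;
        # rounding to 3 decimal places gives roughly 100 m grid cells.
        sub = df[df["timestamp"].dt.hour.isin(hours)]
        return sub[["lat", "lon"]].round(3).value_counts().idxmax()

    df = pd.read_csv("pings.csv", parse_dates=["timestamp"])  # placeholder
    print("likely home:", infer_anchor(df, range(0, 6)))      # overnight
    print("likely work:", infer_anchor(df, range(9, 17)))     # business hours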

Implicit Trust and User Understanding

Issues

Example Projects

  • Take one day of your browser history and revisit every site. Read the EULAs and record anything that surprises you.
  • Document the dark patterns you encounter on these sites. 


Hateful & Criminal Actors

Issues

Example Projects

  • Build a hate speech classifier. Does it work well enough to be useful? What have you learned about the complexity of this problem?
  • Estimate the percentage of Bitcoin transactions that are used for criminal activity.
  • Pick a platform or product. Come up with a plan to use it for criminal activity, including the security measures you would take.
