Ethical Computing in Practice: 2022-2023
Overview
This course is intended for students who want to understand how to integrate considerations of ethics and social responsibility into their own practice as computing practitioners. Students will learn (i) several general step-by-step methods for identifying, addressing, and communicating about the ethical dimensions of their own computing projects, and (ii) specific issues of algorithmic bias and algorithmic fairness.
Learning outcomes
1. Identify the ethical implications of a computing project they are themselves working on.
2. Address such implications.
3. Communicate about such implications.
4. Analyze and critically assess an algorithmic system for potential bias and unfairness.
The course is structured as follows, for a total of 16 lectures.
- Course overview. [1 lecture]
Overview of the course with a specific eye to orienting students to how the course fits within the field of “technology ethics.”
- Algorithmic bias. [2 lectures]
Introduce the concept of algorithmic bias and equip students to identify and mitigate it in a given algorithmic system.
- Algorithmic fairness. [3 lectures]
Introduce the concept of algorithmic fairness, with a specific eye to understanding the trade-offs between different formal definitions of fairness and the limits of formalizing fairness.
- Step-by-step methods for ethical computing. [8 lectures]
Teach students concrete methods for ethical computing, and have them practice using the methods with a computing project they are currently working on or would like to work on in the future.
- Topic of students’ choosing. [2 lectures]
Students will vote from a pre-determined menu of topics to cover.
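To make the algorithmic-bias lectures concrete, the kind of disparity described above (a system performing better for one demographic group than another) can be measured by computing a model's accuracy separately per group. The following is an illustrative sketch only, not course material; the function name and toy data are hypothetical.

```python
# Hypothetical sketch: measuring per-group performance disparity,
# in the spirit of the facial-recognition example. Toy data only.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return classification accuracy computed separately for each group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# A classifier that is right more often for group "A" than group "B".
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(accuracy_by_group(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.5}
```

A gap like the 1.0 vs 0.5 accuracy here is the starting point for the identification and mitigation exercises described above.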
In more detail, the course covers:
- A set of interrelated, concrete, step-by-step methods for identifying, addressing, and communicating about the ethical dimensions of a given computing technology. Some of these methods come from the field of Value-sensitive Design, others from the field of Responsible Innovation, and others from the Ethics Protocol, a methodology—developed by me (Milo Phillips-Brown) and Abby Jaques—that blends insights from Value-sensitive Design and Responsible Innovation.
- Algorithmic bias (e.g. how facial recognition software can perform better for white men than for darker-skinned women).
- Algorithmic fairness (i.e. formal definitions of fairness in algorithms).
- Topic of students’ choosing: this may include e.g. privacy, accessible design, stakeholder engagement, or intersections of computing and policy.
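The trade-offs between formal definitions of fairness mentioned above can be shown on a small example: two widely discussed criteria, demographic parity and equalized odds, can disagree on the very same predictions. This is an illustrative sketch only, not course material; all names and data are hypothetical.

```python
# Hypothetical sketch: two formal fairness criteria disagreeing on the
# same toy predictions, illustrating the trade-offs between definitions.

def selection_rate(y_pred, groups, group):
    """Fraction of a group receiving the positive prediction
    (the quantity demographic parity compares across groups)."""
    preds = [p for p, g in zip(y_pred, groups) if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(y_true, y_pred, groups, group):
    """Fraction of a group's true positives predicted positive
    (one component of equalized odds)."""
    hits = [p for t, p, g in zip(y_true, y_pred, groups) if g == group and t == 1]
    return sum(hits) / len(hits)

# Toy data: equal selection rates across groups, unequal true positive rates.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 1, 0, 0]
groups = ["A"] * 4 + ["B"] * 4

# Demographic parity holds: both groups are selected at rate 0.5 ...
print(selection_rate(y_pred, groups, "A"))       # 0.5
print(selection_rate(y_pred, groups, "B"))       # 0.5
# ... yet the true positive rates differ, so equalized odds fails.
print(true_positive_rate(y_true, y_pred, groups, "A"))  # 1.0
print(true_positive_rate(y_true, y_pred, groups, "B"))  # 0.5
```

Results like this, where satisfying one criterion forces violating another, motivate the course's attention to the limits of formalizing fairness.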
The lectures, classes, and practicals will be supported by course materials (slides and video lectures), supplemented by additional papers and book excerpts from a range of disciplines including value-sensitive design, philosophy, science and technology studies, and responsible innovation. These readings will be made available for download from the course web page. A sample of readings may include:
- Barocas, S. and Selbst, A. (2016). “Big data’s disparate impact.” California Law Review. 104: 671-732.
- Costanza-Chock, S. (2020). Design justice. MIT Press.
- Friedman, B. and Hendry, D. (2019). Value-sensitive design. MIT Press.
- Hellman, D. (2019). “Measuring algorithmic fairness.” Virginia Public Law and Legal Theory Research Paper. 2019-39.
- Hedden, B. (2021). “On statistical criteria of algorithmic fairness.” Philosophy and Public Affairs. 49(2): 209-231.
- Stilgoe, J., Owen, R., and Macnaghten, P. (2013). “Developing a framework for responsible innovation.” Research Policy. 42(9): 1568-1580.
- Suresh, H. and Guttag, J. (ms). “A framework for understanding unintended consequences of machine learning.” arxiv.org/abs/1901.10002
Students are formally asked for feedback at the end of the course. Students can also submit feedback at any point here. Feedback received here will go to the Head of Academic Administration, and will be dealt with confidentially when being passed on further. All feedback is welcome.