Detecting Bias: Does an Algorithm Have to Be Transparent in Order to Be Fair?

William Seymour


The most commonly cited solution to problems surrounding algorithmic fairness is increased transparency. But how do we reconcile this point of view with the state of the art? Many of the most effective modern machine learning methods (such as neural networks) can have millions of parameters, defying human understanding. This paper decomposes the quest for transparency and examines two of the available options using technical examples. By considering some of the current uses of machine learning and using human decision making as a null hypothesis, I suggest that pursuing transparent outcomes is the way forward, while the quest for transparent algorithms is a lost cause.
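The scale the abstract alludes to is easy to demonstrate. The following sketch (not from the paper; the layer sizes are hypothetical) counts the trainable parameters of a small fully connected network, showing how even a modest architecture exceeds what a human reviewer could plausibly inspect:

```python
# Illustrative sketch: counting trainable parameters in a small
# multilayer perceptron. Each pair of adjacent layers contributes a
# weight matrix (n_in * n_out entries) plus a bias vector (n_out).
def mlp_param_count(layer_sizes):
    """Total weights and biases for a fully connected network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# A modest network: 1024 inputs, two hidden layers of 2048, 10 outputs.
print(mlp_param_count([1024, 2048, 2048, 10]))  # prints 6316042
```

Even this toy configuration has over six million parameters; production models are often orders of magnitude larger, which is why per-parameter inspection is not a viable route to transparency.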

BIAS 2018