University of Surrey


Counterfactual Fairness

Kusner, MJ, Loftus, J, Russell, C and Silva, R (2017) Counterfactual Fairness. In: 31st Conference on Neural Information Processing Systems (NIPS 2017), 4 - 9 December 2017, Long Beach, CA, USA.

6995-counterfactual-fairness.pdf - Accepted Version (Manuscript)



Machine learning can impact people with legal or ethical consequences when it is used to automate decisions in areas such as insurance, lending, hiring, and predictive policing. In many of these scenarios, previous decisions have been made that are unfairly biased against certain subpopulations, for example those of a particular race, gender, or sexual orientation. Since this past data may be biased, machine learning predictors must account for this to avoid perpetuating or creating discriminatory practices. In this paper, we develop a framework for modeling fairness using tools from causal inference. Our definition of counterfactual fairness captures the intuition that a decision is fair towards an individual if it is the same in (a) the actual world and (b) a counterfactual world where the individual belonged to a different demographic group. We demonstrate our framework on a real-world problem of fair prediction of success in law school.
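The intuition in the abstract can be sketched in code. The following toy example is not the paper's law-school model; it is a minimal, hypothetical structural causal model in which a latent background variable U and a protected attribute A both influence an observed feature X. A counterfactual is generated by holding U fixed and intervening to flip A. A predictor that depends only on U (a non-descendant of A) gives the same prediction in the actual and counterfactual worlds, while a predictor that uses X does not:

```python
# Toy sketch of counterfactual fairness (illustrative model, not the paper's).
import random

random.seed(0)

def observe():
    """Sample one individual: protected attribute A, latent cause U,
    and an observed feature X = 2*A + U that depends on both."""
    a = random.choice([0, 1])
    u = random.gauss(0, 1)
    x = 2.0 * a + u
    return a, x, u

def counterfactual_x(u, a_cf):
    """Recompute X under the intervention A := a_cf, holding the
    latent background U fixed (abduction, action, prediction)."""
    return 2.0 * a_cf + u

def fair_predictor(u):
    """Uses only U, a non-descendant of A: counterfactually fair."""
    return 1 if u > 0 else 0

def unfair_predictor(x):
    """Uses X, a descendant of A: not counterfactually fair."""
    return 1 if x > 1.0 else 0

fair_flips = unfair_flips = 0
for _ in range(1000):
    a, x, u = observe()
    x_cf = counterfactual_x(u, 1 - a)  # flip the protected attribute
    # U is unchanged by the intervention, so the fair prediction cannot move.
    fair_flips += fair_predictor(u) != fair_predictor(u)
    unfair_flips += unfair_predictor(x) != unfair_predictor(x_cf)

print(fair_flips)    # 0: identical prediction in both worlds
print(unfair_flips)  # positive: flipping A changes the prediction
```

The design point is the one the abstract makes: fairness is judged per individual by comparing the prediction across the actual and counterfactual worlds, which requires a causal model rather than the observed data alone.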

Item Type: Conference or Workshop Item (Conference Poster)
Divisions : Faculty of Engineering and Physical Sciences > Electronic Engineering
Authors :
Kusner, MJ
Loftus, J
Russell, C
Silva, R
Date : 4 December 2017
Funders : EPSRC
Copyright Disclaimer : Copyright 2017 MIT Press
Related URLs :
Depositing User : Melanie Hughes
Date Deposited : 28 Nov 2017 12:32
Last Modified : 11 Dec 2018 11:23





© The University of Surrey, Guildford, Surrey, GU2 7XH, United Kingdom.
+44 (0)1483 300800