Fairness in Classification

NB: This repository is a fork of Zafar's fair-classification code, mildly extended to support running Zafar et al.'s algorithms on arbitrary datasets. (The original README follows.)

This repository provides a Python implementation of logistic regression for the fair classification mechanisms introduced in our AISTATS'17, WWW'17, and NIPS'17 papers.

Specifically:

  1. The AISTATS'17 paper [1] proposes mechanisms to make classification outcomes free of disparate impact, that is, to ensure that similar fractions of people from different demographic groups (e.g., males and females) are accepted (classified as positive) by the classifier. More discussion of the disparate impact notion can be found in Sections 1 and 2 of the paper. (A measurement sketch for this notion and the next appears after this list.)

  2. The WWW'17 paper [2] focuses on making classification outcomes free of disparate mistreatment, that is, on ensuring that the misclassification rates for different demographic groups are similar. We discuss this fairness notion in detail, and contrast it with the disparate impact notion, in Sections 1, 2, and 3 of the paper.

  3. The NIPS'17 paper [3] focuses on making classification outcomes adhere to preferred treatment and preferred impact fairness criteria. For more details on these fairness notions, and how they compare to existing fairness notions used in ML, see Sections 1 and 2 of the paper.
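To make the first two notions concrete, here is a minimal measurement sketch in plain numpy. It is not part of this repository's API; the helper name fairness_report and the data conventions (binary labels in {0, 1}, a group-label array z) are assumptions for illustration.

    import numpy as np

    def fairness_report(y_true, y_pred, z):
        # Hypothetical helper: per-group acceptance and error rates.
        # y_true, y_pred: arrays in {0, 1}; z: sensitive group labels.
        for g in np.unique(z):
            in_g = z == g
            acceptance = y_pred[in_g].mean()                 # P(y_hat = 1 | z = g): the disparate impact quantity
            fpr = y_pred[in_g & (y_true == 0)].mean()        # false positive rate within group g
            fnr = 1.0 - y_pred[in_g & (y_true == 1)].mean()  # false negative rate within group g
            print(f"group {g}: acceptance={acceptance:.3f} FPR={fpr:.3f} FNR={fnr:.3f}")

Under [1], the acceptance rates should be close across groups (e.g., satisfy the 80% rule); under [2], the gaps in FPR and/or FNR should be small.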

Dependencies

  1. numpy, scipy, and matplotlib, if you are only using the mechanisms introduced in [1].
  2. If you are also using the mechanisms introduced in [2] or [3], you additionally need CVXPY and DCCP.
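Assuming a standard pip setup, the dependencies can typically be installed along these lines (package names on PyPI are lowercase):

    pip install numpy scipy matplotlib
    pip install cvxpy dccp    # only needed for the mechanisms in [2] and [3]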

Using the code

  1. If you want to use the code related to [1], please navigate to the directory "disparate_impact". (A minimal sketch of the underlying mechanism appears after this list.)
  2. If you want to use the code related to [2], please navigate to the directory "disparate_mistreatment".
  3. If you want to use the code related to [3], please navigate to the directory "preferential_fairness".
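For orientation, the following is a minimal sketch of the core mechanism from [1]: logistic regression trained under a bound on the covariance between the sensitive attribute and the signed distance to the decision boundary. It uses plain numpy/scipy rather than this repository's API, and the data conventions (labels y in {-1, +1}, a numeric sensitive attribute z, a covariance threshold c) are assumptions for illustration.

    import numpy as np
    from scipy.optimize import minimize

    def log_loss(theta, X, y):
        # Average logistic loss with labels y in {-1, +1}.
        return np.mean(np.log1p(np.exp(-y * (X @ theta))))

    def train_fair_lr(X, y, z, c=0.01):
        # Enforce |Cov(z, theta^T x)| <= c, split into two inequality
        # constraints so that SLSQP can handle them.
        zc = z - z.mean()
        cov = lambda theta: np.mean(zc * (X @ theta))
        constraints = [
            {"type": "ineq", "fun": lambda th: c - cov(th)},  # cov <= c
            {"type": "ineq", "fun": lambda th: cov(th) + c},  # cov >= -c
        ]
        result = minimize(log_loss, np.zeros(X.shape[1]), args=(X, y),
                          method="SLSQP", constraints=constraints)
        return result.x  # learned weight vector

Shrinking c trades accuracy for fairness; a large c recovers unconstrained logistic regression.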

Please cite the corresponding paper when using the code.

References

  1. Fairness Constraints: Mechanisms for Fair Classification
    Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi.
    20th International Conference on Artificial Intelligence and Statistics (AISTATS), Fort Lauderdale, FL, April 2017.

  2. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
    Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi.
    26th International World Wide Web Conference (WWW), Perth, Australia, April 2017.

  3. From Parity to Preference-based Notions of Fairness in Classification
    Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi, Adrian Weller.
    31st Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, December 2017.
