FEAT: Fairness of exposure #959

Open
bram49 opened this issue Sep 21, 2021 · 2 comments · May be fixed by #974
Labels
enhancement New feature or request


@bram49 (Contributor) commented Sep 21, 2021

Fairness of exposure

I would like to extend Fairlearn with tools to deal with fairness in rankings (#945). I think an intuitive and useful metric for fairness in rankings is exposure, from the paper "Fairness of Exposure in Rankings" by Ashudeep Singh and Thorsten Joachims.

Where exposure is defined as:

$$\text{Exposure}(d_i \mid P) = \sum_{j} P_{i,j}\, v_j, \qquad v_j = \frac{1}{\log_2(1 + j)}$$

with document $d_i$, probabilistic ranking $P$ (where $P_{i,j}$ is the probability of placing document $d_i$ at position $j$), and logarithmic discount $v_j$ to deal with position bias (higher positions receive exponentially more attention).
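As a rough sketch of how this could look in code (assuming NumPy; the `position_discounts` and `document_exposure` names are hypothetical, and `P` is a matrix with `P[i, j]` the probability of showing document `i` at position `j`):

```python
import numpy as np

def position_discounts(n_positions):
    """Logarithmic discount v_j = 1 / log2(1 + j) for positions j = 1..n."""
    positions = np.arange(1, n_positions + 1)
    return 1.0 / np.log2(1.0 + positions)

def document_exposure(P):
    """Exposure(d_i | P) = sum_j P[i, j] * v_j for each document i.

    P is an n x n matrix where P[i, j] is the probability that
    document i is placed at position j.
    """
    P = np.asarray(P, dtype=float)
    return P @ position_discounts(P.shape[1])

# Example: three documents in a deterministic ranking d0 > d1 > d2
print(document_exposure(np.eye(3)))  # ≈ [1.0, 0.63, 0.5]
```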

With exposure, all kinds of fairness metrics can be constructed, such as:

1. Allocation harm

Where you equalize the average exposure of documents across groups (a code sketch of both harm metrics follows after point 2).
Denoting the average exposure in group $G_k$ with:

$$\text{Exposure}(G_k \mid P) = \frac{1}{|G_k|} \sum_{d_i \in G_k} \text{Exposure}(d_i \mid P)$$

and denoting the demographic parity constraint with:

$$\text{Exposure}(G_0 \mid P) = \text{Exposure}(G_1 \mid P)$$

2. Quality-of-service harm

Where you try to keep the exposure of the items proportional to their relevance: small differences in relevance between candidates can lead to huge differences in exposure. The corresponding constraint is:

$$\frac{\text{Exposure}(G_0 \mid P)}{U(G_0 \mid q)} = \frac{\text{Exposure}(G_1 \mid P)}{U(G_1 \mid q)}$$

where $U(G_k \mid q)$ is the average utility of a group, and utility is the relevance score on which the documents are ranked.
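A minimal sketch of both metrics, reusing the hypothetical `document_exposure` helper from above (function names are placeholders, not an existing Fairlearn API; `groups` encodes group membership per document):

```python
import numpy as np

def group_exposure(P, group_mask):
    """Average exposure of the documents selected by a boolean mask."""
    return document_exposure(P)[group_mask].mean()

def exposure_parity_difference(P, groups):
    """Allocation harm: difference in average exposure between two groups."""
    groups = np.asarray(groups)
    return group_exposure(P, groups == 0) - group_exposure(P, groups == 1)

def exposure_utility_difference(P, groups, utilities):
    """Quality-of-service harm: difference of exposure/utility ratios."""
    groups = np.asarray(groups)
    utilities = np.asarray(utilities, dtype=float)
    ratios = [
        group_exposure(P, groups == g) / utilities[groups == g].mean()
        for g in (0, 1)
    ]
    return ratios[0] - ratios[1]

# Documents 0 and 1 belong to group 0, document 2 to group 1;
# relevance scores are nearly identical, but exposure is not.
P = np.eye(3)
print(exposure_parity_difference(P, [0, 0, 1]))
print(exposure_utility_difference(P, [0, 0, 1], [0.81, 0.80, 0.79]))
```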

Problem

The problem with this metric is that you need a given probabilistic ranking P, which you can only construct when you have multiple rankings over the same documents. To work around this and make the metric work for a single ranking, I thought about defining a group's exposure as the (averaged) sum of the logarithmic discounts of its documents in ranking $\tau$. This would define demographic parity as:

$$\frac{1}{|G_0|} \sum_{d_i \in G_0} v_{\tau(d_i)} = \frac{1}{|G_1|} \sum_{d_i \in G_1} v_{\tau(d_i)}, \qquad v_{\tau(d_i)} = \frac{1}{\log_2\bigl(1 + \tau(d_i)\bigr)}$$

where $\tau(d_i)$ is the position of document $d_i$ in ranking $\tau$.

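A minimal sketch of this deterministic variant (again with hypothetical names; `ranking[i]` is assumed to be the 1-based position of document `i`):

```python
import numpy as np

def deterministic_exposure(ranking):
    """Exposure of each document as its discount 1 / log2(1 + position)."""
    ranking = np.asarray(ranking)
    return 1.0 / np.log2(1.0 + ranking)

def deterministic_parity_difference(ranking, groups):
    """Difference in average discount between two groups for a single ranking tau."""
    exposure = deterministic_exposure(ranking)
    groups = np.asarray(groups)
    return exposure[groups == 0].mean() - exposure[groups == 1].mean()

# Document i is placed at position ranking[i]
print(deterministic_parity_difference([1, 3, 2], [0, 0, 1]))
```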
Conclusion

I think that with this adjustment, exposure is an effective way to quantify fairness in rankings. Please let me know what you think.

@hildeweerts (Contributor)

Thank you for opening this issue @bram49!

IMO the deterministic version of exposure makes sense, but I'd love to hear other people's thoughts on this as well @fairlearn/fairlearn-maintainers. Could you give an example of how utility would be defined in scenario (2)?

As a side note, I think we need to be careful with overloading terminology. Demographic parity/disparate treatment* are typically used in the context of classification and regression problems. To avoid confusion, I think it would make sense to instead focus on the types of harm that are being measured. E.g., differences in exposure can be seen as a measure of allocation harm in a ranking scenario, whereas differences in exposure/utility are an indicator of quality-of-service harm.

[*In the Fairlearn community we generally try to avoid the term "disparate treatment" which originates from US laws on employment discrimination. Using that term may suggest compliance with US law even when it's not the case. First of all, most applications will fall outside of the employment domain. Moreover, considering only the output of the model (i.e., disregarding how it's used, by whom, etc.) is a very narrow frame.]

@bram49 (Contributor, Author) commented Sep 23, 2021

Thank you for the reply @hildeweerts.
Indeed, it makes more sense to define the methods by the types of harm they try to measure. I will change the headings to the corresponding harms.
An example of a utility score would be the SAT score used to rank the top-k students admitted into a numerus fixus program, or any other type of relevance score computed to rank results for a query.
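For illustration, a toy version of that example (made-up numbers, reusing the hypothetical `deterministic_parity_difference` sketch above) could look like:

```python
import numpy as np

# Toy example: utility is the SAT score on which candidates are ranked.
sat_scores = np.array([1520, 1510, 1500, 1490])   # utility / relevance scores
groups = np.array([0, 0, 1, 1])                   # group membership

# Rank candidates by descending SAT score; ranking[i] is the 1-based
# position of candidate i.
order = np.argsort(-sat_scores)
ranking = np.empty_like(order)
ranking[order] = np.arange(1, len(order) + 1)

print(deterministic_parity_difference(ranking, groups))
```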

@romanlutz added the enhancement (New feature or request) label Sep 23, 2021
@bram49 linked a pull request Oct 6, 2021 that will close this issue