2.4 right to be free from discrimination based on data #155

Open
IreneKnapp opened this issue May 7, 2022 · 2 comments

@IreneKnapp

Section 2.4 enumerates various rights people may have pertaining to their data. People from marginalized backgrounds often have a concern to the effect of: if I provide this entity with my personal data, will they use it to discover my marginalized status and discriminate against me? There are relevant laws in some jurisdictions, and it's also a moral right people might assert even in the absence of a legal framework. It seems like that should be captured here.

I see room for discussion as to what types of discrimination should be covered. Regardless of what I might wish, there are types of discrimination that it probably isn't realistic to expect the W3C to condemn outright. For example, a credit bureau cannot do its job without discriminating on the basis of income.

This is distinct from freedom from automated decision-making (which the current draft already addresses) in that it also applies to decisions made by humans.

This idea came up during a Twitter conversation, and I'm writing this issue in an attempt to capture the thought. Please feel free to ask clarifying questions and I'll try to elaborate. I also take no strong position on exactly what form the proposed right should take; I'm just trying to flag a gap that I see in what's there now.

@jyasskin (Collaborator)

Also see my worries about "automated" in #136 (comment).

I'm not sure how to phrase this, though. We usually think of "discrimination" as meaning the inappropriate kind, but it's also discrimination to show a hotel ad to the person who just searched for a plane flight but not to the person who just searched for sofas. And it's discrimination to show an Ebony ad to the person you inferred is Black. So what can we say to divide the harmful kind from the probably-beneficial kind?

Maybe it hinges on the fact that nobody is required to exercise their rights, so this becomes a right to know when discrimination of any sort is happening, so that the person can decide whether they wanted that instance? But that could be overwhelming...

@npdoty (Collaborator) commented May 11, 2022

Thanks for raising this, and thanks to Jeffrey for providing some of the context.

There are perhaps two slightly different ways that discrimination against marginalized people is relevant to our privacy principles.

First, we might see the ability of marginalized people to be freer from discrimination as a justification for why we care about protecting privacy. We want users to have privacy because there are well-known downstream harms that come from data being collected, shared, and subsequently used in discriminatory ways. In this sense, yes, there are nuanced questions about which kinds of distinction are discriminatory, or which are the harmful kind of discrimination, but those might not be privacy questions; the fact that we know this discrimination happens is itself a reason why we want privacy. I think we should add this to the early sections of the document (1 and 1.1) about autonomy and the importance of privacy; there is abstract language about this, but it would be good to be more direct about it.

Second, we might see a specific privacy right about data as the ability to be free from some kinds of decision-making. Historically, privacy in the computer age has included specific objections to automated decision-making, and that has more recently been translated into specific laws, including the GDPR. I think that's separate from a justification, because it's more of a procedural right to exempt oneself from some processing, as part of being able to access, correct, or withdraw consent regarding data about oneself. Maybe the "automated" part of automated decision-making here really is only a historical quirk, and we should drop it from the document. But I also think we shouldn't try to enumerate here every kind of unjust or harmful decision-making. We all have human rights to be treated fairly, but not all of those are best enumerated as privacy rights.

@darobin darobin added the agenda+ Add to the next call's agenda. label Jun 27, 2022
@darobin darobin self-assigned this Oct 19, 2022
@darobin darobin added backburner and removed agenda+ Add to the next call's agenda. labels Jun 21, 2023