
Add Policy on Synthetic and Manipulated Media Tools #926

Merged
merged 2 commits into main from add-policy-on-deepfake-tools on May 22, 2024

Conversation

@jessephus (Contributor) commented Apr 18, 2024

Hi! 👋🏽 Today we are proposing a new addition to our Acceptable Use Policies on Misinformation and Disinformation to address the development of synthetic and manipulated media tools for the creation of non-consensual intimate imagery (NCII) and disinformation.

You can review our proposed Acceptable Use Policy addition here. We invite all stakeholders to comment for the 30-day period from now until May 20, and look forward to learning from and engaging with the community on this important topic.

Read more about this change on the GitHub Blog.

@Yardanico commented Apr 19, 2024

Hello! I'm really confused by this policy change. If I'm reading it right, it implies that any tool that can be used to create deepfakes and intimate images will not be allowed. Does that mean Stable Diffusion and the tools created to work with it (stable-diffusion-webui, ComfyUI, and many others) will no longer be allowed on GitHub either? Training LoRAs (and other types of models) on top of Stable Diffusion is a common technique, often used to create small models that reproduce a specific object, style, or person, which can then be used to generate any images of them, including intimate imagery.

UPD: this part seems to partly address my confusion:

Through enforcing this policy, we will differentiate between harmful misuse and legitimate dual or general use and consider proportionate responses to possible abuse.

But wouldn't that make the policy largely ineffective? Stable Diffusion would fall under legitimate dual/general use, and yet it is one of the most common tools used to create deepfakes.

In my opinion, such a policy would be too vague and open to interpretation, and wouldn't actually help prevent the creation of deepfakes.

@FelixMildon

I think you're overestimating your monopoly, GitHub (Microsoft), and feeling emboldened to become authoritarian. How about no rules? And who decides what counts as "misinformation"? Porn, OK, fine, ban that; not everyone is a pervert... but everyone IS a liar.

@ZzZombo commented Apr 20, 2024

I don't understand. Several years ago, I and some other people I know tried to report and take down repositories of cheating tools like aimbots, and you responded that nothing would be done about them, despite the software being intended only to harm the targeted games and their communities, that is, only to do evil and absolutely no good. Now you have suddenly decided to take a stance on AI tools being misused/abused in a certain way, preventively even. Either expand the scope to include all repositories containing code intended only for nefarious purposes, or scrap this altogether.

@limdingwen commented Apr 20, 2024

Personally, I don't like how vague this rule is. Targeting the tools themselves doesn't seem productive; as others have said, how often must a tool be used to create this type of media before it counts? What if the tool simply markets itself as something else? Think of sex toys being marketed as massage wands online (not that that's bad). I think it's less confusing to target the final images, not the tools used to make them.

Given the vagueness of what counts as abuse, I also think this is a complex question that our governments need to deal with, not GitHub. Of course, this is a private website, but given GitHub's global influence, I think it's important to be careful about what sorts of policies are introduced here.

@mlinksva (Contributor)

Hi all, GitHub colleague of @jessephus here. Thank you for your comments so far.

To clarify, this policy would not impact general-purpose tools that can generate synthetic media (like Stable Diffusion and other general-purpose tools created to work with it). Tools created specifically for generating non-consensual intimate imagery (NCII) or disinformation (or encouraging it) would not be allowed.

This approach is informed by our policy on malware and exploits; while we allow security research projects on the platform, we do not allow projects that are being used for active attacks. In enforcing this policy, we are careful to pay attention to the context of the project, how its intended use is described, and other factors.

Disallowing projects that are specifically designed to create NCII and disinformation is a change that will significantly limit the availability of tools configured for harm to a mass audience. For more context on this, and on the other side of the coin (protecting research and avoiding security/safety through obscurity), please read the linked blog post announcing this proposal. For even deeper context on why working to protect benefits and mitigate harms simultaneously will only become more pressing, not specific to this proposal, see our recent response to a US government consultation on open-weight AI models.

Further questions or suggestions (this is a pull request!) are most welcome.

@I-AM-ENGINEER commented Apr 21, 2024

This is terrible. Is an AI assistant with face reproduction a deepfake tool? What if the assistant can also mimic your speech and offers avatar customization? Who would be the judge? If users want deepfakes, they will find them elsewhere; that isn't too hard. But many legitimate uses could be put in danger.

A knife is a good weapon for killing people; should we ban kitchen tools?

@Leokratis

This approach is informed by our policy on malware and exploits; while we allow security research projects on the platform

Would uncensored AI models be affected by this (and/or techniques to uncensor a model)? There are valid reasons to host such models on GitHub (detecting bias in AI models, composability, etc.), not to mention different types of alignment that would allow "unethical" acts in the context of storytelling.

If this policy covers only NCII (of humans), then it's perfectly fine. My only issue would be if it affected research-oriented projects.

@mlinksva (Contributor)

Thank you for your question @Leokratis.

In general, this policy would not prohibit uncensored models, or code and documentation for uncensoring models, for research purposes such as the ones you highlighted. But if these projects are specifically for generating NCII or disinformation, then they would be prohibited under this policy.

We understand that there can be grey areas in moderating projects shared for research purposes. We aim to have a developer-first approach to content moderation, giving users the chance to appeal and providing an opportunity to refute and/or address violations to get their accounts or content reinstated.

TL;DR: projects directed toward malicious ends are not allowed; research projects shared in good faith are welcome.

@jlf305 commented May 13, 2024

  • Is there a way for people to report misinformation, wrong information, or NCII/CSAM? And if you want the ability to take content down, you may want to add a section on that under "Content Removal Policies."
  • Require synthetic media to be labeled so people know what they're looking at.
  • You could consider a three-strikes policy: after so many violations of these terms, you get IP-blocked and can no longer use the services for X amount of time, and you offer to cooperate with investigations (thinking here of CSAM/NCII material).
  • The Limitation of Liability section may also want to say that GitHub is not liable for your violation of these Terms of Use (i.e., if you violate our Acceptable Use Policies, we aren't liable).
  • I may have just missed where it says this, but it should say somewhere, if it doesn't already, that using the platform means you agree to these terms.
  • Think about ignorance not being an excuse: how would you build deterrence measures or features into the tech stack so people don't post misinformation or obscene things?
  • Build in audit and compliance programs or governance mechanisms so that people are periodically looking at this issue internally and there is some accountability.

@ZzZombo commented May 16, 2024

When can I expect a comment from you on my feedback?

@jessephus (Contributor, Author)

@jlf305 - Thank you for your suggestions. To answer your first question, users can report content that they suspect violates community guidelines and terms using the same process as reporting abuse or spam.

@ZzZombo - Your feedback has been received. We are not making any further policy changes at this time.

@iperov commented May 21, 2024

My repo https://github.com/iperov/DeepFaceLab has no direct links to intimate content (or the links don't work), but the repo will be removed in 3 days 😲

Looks like I need to look for a new Git platform.

@iperov commented May 21, 2024

By removing such repos from public view, you will drive deepfake researchers and developers underground, and then there will be nothing to train deepfake detectors on.

I've been developing a new enhancement for face replacement for several years now, and I intended to make it public, similar to DeepFaceLab, so that other researchers could reference it in their work.

But with your policy, I won't release it to the public and will only provide it to VIP customers.

@jessephus (Contributor, Author) commented May 21, 2024

@iperov - It is certainly not our intent to remove this type of research from the public sphere. This is why we reached out proactively in the spirit of cooperation. We hope you will keep it public and open source with a few modifications consistent with our new policy. (And let us know if you need a little more time.)

Our Trust and Safety team will provide you with more details through the open support ticket. However, generally speaking, from this point forward we will expect projects hosted on GitHub not to include links to sites that promote and distribute non-consensual intimate imagery.

@mlinksva (Contributor)

Thanks to everyone for your feedback on our proposed Acceptable Use Policy update to address the use of synthetic and manipulated media tools for non-consensual intimate imagery and disinformation. The 30-day notice-and-comment period has closed, and the policy update is now in effect.

We appreciate your engagement on our site policies, especially as we consider how to foster AI-driven innovation while minimizing harms. This policy is intended both to provide clarity on disallowed uses of synthetic media tools and to enable valuable research. For more information on our content moderation practices, check out our Transparency Center. If you have additional questions or feedback on this policy update or other site policies, you may open an issue.

@mlinksva mlinksva merged commit 0588dbb into main May 22, 2024
@mlinksva mlinksva deleted the add-policy-on-deepfake-tools branch May 22, 2024 00:42
@iperov commented May 22, 2024

OK, support wrote to me that I should remove the links to mrdeepfakes.com, but those links only lead to the forum part of the site.
Apparently you only strike at large repos like mine.

Today you remove forum links; tomorrow someone will sue you in the US for not enforcing the wording of a policy that even Photoshop falls under.

There is no guarantee that the repo won't be removed a year from now under new policies.

The witch hunt has already begun.

@ZzZombo commented May 23, 2024

Your feedback has been received. We are not making any further policy changes at this time.

Sorry, but this is a non-answer. What exactly sets the two cases apart? Why is all the attention on AI only?
