
feat: nsfw/watermark updates #6260

Closed
wants to merge 6 commits

Conversation

psychedelicious
Collaborator

@psychedelicious psychedelicious commented Apr 23, 2024

Summary

Currently, NSFW checking doesn't work on new installs due to a catch-22. See #6252.

I had an idea to move NSFW checking and watermarking to config settings, then do the check/watermark in the invocation API as images are saved. The NSFW check and watermarking are thus fully automatic and transparent, and work on workflows too, without any user changes. This was really easy to implement.
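
To illustrate the shape of this (a minimal sketch with hypothetical names - `config.nsfw_check`, `checker`, `watermarker` are stand-ins, not our actual services): the image-saving path consults two config flags and post-processes every image before writing it, so individual nodes and workflows need no changes.

```python
from PIL import Image


def save_image(image: Image.Image, path: str, config, checker, watermarker) -> Image.Image:
    """Post-process and persist an image. Runs for every save, including intermediates."""
    if config.nsfw_check and checker.is_nsfw(image):
        # Blur the image and overlay the caution symbol before saving.
        image = checker.blur_with_caution(image)
    if config.watermark:
        # Apply the (invisible) watermark to the final pixels.
        image = watermarker.apply(image)
    image.save(path)
    return image
```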

It works well, but there is a problem on canvas where a graph does many image-saving operations.

For example, an inpaint graph does at least 6x image saves: 4x resize, 1x VAE decode and 1x paste-back. Each time, the image is checked - possibly blurred - and watermarked.

Watermark changes the image

Watermarking subtly changes the image each time it is saved, and this introduces some chaos which impacts how the model handles the images. The final output is markedly different from what you would have gotten without any watermarking.
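
For a sense of why this happens, here's a standalone illustration using the invisible-watermark package (`imwatermark`) - the DWT-DCT encoder rewrites frequency coefficients, so pixel values shift a little on every pass:

```python
import numpy as np
from imwatermark import WatermarkEncoder

encoder = WatermarkEncoder()
encoder.set_watermark("bytes", b"InvokeAI")

# A flat gray BGR test image - any content would do.
bgr = np.full((512, 512, 3), 127, dtype=np.uint8)
marked = encoder.encode(bgr, "dwtDct")

# Even on a flat image, the encoded output differs from the input,
# so every save/re-save cycle perturbs the pixels the model later sees.
print(np.abs(marked.astype(int) - bgr.astype(int)).mean())  # small but nonzero
```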

Early NSFW detection borks the rest of the generation

If an early image in the "chain" is NSFW, it still gets passed along. There are two possible outcomes:

  1. The final image is still NSFW, and is the result of blurring the image, adding the caution symbol, blurring that image again and adding the caution symbol again, and so on. The result is a super-blurred image with blurred caution symbols and then a sharp caution symbol on top.
  2. At some point the image was determined to no longer be NSFW (because it was blurred and a caution symbol put on it, and this was enough to change the determination). Then that gets used as the input to canvas, and the final result is this black and yellow smudge that somewhat follows the original prompt.

Other ideas

Ok, so this isn't viable. Some other ideas:

  1. Remove both NSFW check and watermark entirely.
  2. Just fix the model download and leave NSFW and watermark as part of the graphs, and a UI setting.
  3. Raise an NSFWImageError on NSFW detection - IMO, this is how it should work anyways. The user would get a toast with the error.
  4. Only do the check and watermark on terminal/leaf nodes (a sketch of leaf detection follows this list).
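
For idea 4, here's a sketch of leaf detection under an assumed minimal graph shape - a set of node IDs plus `(from, to)` edges, not our actual graph classes:

```python
def leaf_node_ids(nodes: set[str], edges: set[tuple[str, str]]) -> set[str]:
    """A node is terminal/leaf if no edge consumes its output."""
    sources = {src for src, _dst in edges}
    return nodes - sources


# Example: in an inpaint-style chain, only the final paste-back is a leaf,
# so only its image would be checked and watermarked.
nodes = {"resize", "denoise", "vae_decode", "paste_back"}
edges = {("resize", "denoise"), ("denoise", "vae_decode"), ("vae_decode", "paste_back")}
assert leaf_node_ids(nodes, edges) == {"paste_back"}
```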

Related Issues / Discussions

Closes #6252
Closes #6092

QA Instructions

n/a for now

Merge Plan

n/a

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • Documentation added / updated (if applicable)

@github-actions github-actions bot added the api, python, backend, services, and frontend labels Apr 23, 2024
@lstein
Collaborator

lstein commented Apr 25, 2024

@psychedelicious If you like I'd be happy to work on this after getting the model manager API updates in. Both the NSFW and watermarking features do have real use cases, even if they aren't used all that frequently.

@psychedelicious
Collaborator Author

@lstein Sure thing.

After thinking about it more, I'm leaning towards:

  • If watermark is enabled in config, watermark every image. This is already done in the PR. As described, the watermarking can change outputs, but I think this is reasonable. The expectation is that watermarking applies to all outputs, so we can't really get around it changing images. And this is the only sane way to support watermarking in the workflow editor.
  • If nsfw_check is enabled in the config, raise an NSFWImageDetectedError to immediately fail the graph. My thinking is, if you want to check for NSFW and NSFW is detected, there's no point in continuing with that generation - it should immediately stop. (A sketch of this behavior follows below.)

If that makes sense, there would only be some minor changes needed for this PR.
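
Roughly, the save-time behavior would become (hypothetical names again; NSFWImageDetectedError doesn't exist yet):

```python
class NSFWImageDetectedError(Exception):
    """Raised to abort the graph; the UI would surface it as an error toast."""


def process_on_save(image, config, checker, watermarker):
    if config.nsfw_check and checker.is_nsfw(image):
        # Fail the whole graph immediately - no point continuing the generation.
        raise NSFWImageDetectedError("NSFW content detected; generation stopped")
    if config.watermark:
        image = watermarker.apply(image)
    return image
```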

@lstein
Collaborator

lstein commented Apr 28, 2024


This sounds reasonable. I'll see if I can get this working later this week.

@psychedelicious
Collaborator Author

After thinking through things, this approach isn't viable. We need NSFW checking and watermarking to be user-configurable via the UI; they cannot be something set in the config file.

This is superseded by #6360.

Labels
api, backend, frontend, python, services
Development

Successfully merging this pull request may close these issues.

[bug]: NSFW checker is not working
[bug]: Image data not saved