Protect against ludicrous image sizes #2832

Open
Fennecai opened this issue May 2, 2024 · 4 comments · May be fixed by #2845
Labels
enhancement New feature or request

Comments

@Fennecai

Fennecai commented May 2, 2024

Motivation
Sometimes when upscaling images in a batch, resizing them, etc., it's possible to crash the entire computer because I unwittingly ran an operation that would result in an 8192+ pixel wide/tall image, and processing it takes up 100% of both the CPU and GPU memory. Somewhat related to #2671.

Description
When an operation will result in an image being more than 4K resolution at any point, pop up a confirmation window saying something like, "Warning: the current configuration will likely freeze your entire computer due to massive image sizes. Do you wish to continue?"

Alternatives
Nodes could visibly turn bright orange with caution stripes or something if they detect an overly large image, and hovering over them would show tooltip-esque text explaining the potential memory overuse. Here's a very rough mockup:
[screenshot: mockup of a node highlighted with a warning indicator]

Another alternative, which deals with the root of the issue, would be a setting that hard-limits how much memory chaiNNer is allowed to use at any given time, so I could set it to half my installed system memory, or half of my GPU memory, for example. The following screenshot is from a Minecraft launcher and controls Java memory limits, but communicates the same concept:
[screenshot: Minecraft launcher memory-limit setting]

@joeyballentine
Member

Thank you for the suggestion.

> an image being more than 4K resolution

This is heavily dependent on your computer and how much RAM you have. We would need some kind of on-the-fly estimate of RAM usage per image, and of what max size makes sense per device, in order to make this meaningful. 4K is just fine for my computer, but not for a crappy potato laptop, so we'd need a way to automatically determine what is acceptable for the individual device. I can't think of a way we could realistically do something like that.

> another alternative, which deals with the root of the issue, would be a setting that hard-limits how much memory chaiNNer is allowed to use at any given time

After a quick search, it appears Python does have a way to place a hard cap on memory usage. The problem is that hitting the cap will just produce a bunch of errors and most likely crash chaiNNer's backend. While not ideal, that's still better than your entire PC freezing, so it's probably worthwhile to implement. But it won't behave like Minecraft, where things just get really laggy when you reach your memory cap; it'll completely break.
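
(For reference, a minimal sketch of what such a cap could look like on Unix-like systems, using the standard-library `resource` module. This is an assumption about the mechanism, not code from chaiNNer; `RLIMIT_AS` caps the process's virtual address space, and Windows would need a different approach.)

```python
import resource

def cap_memory(max_bytes: int) -> None:
    """Hard-cap this process's address space (Unix only).

    Once the cap is reached, further allocations raise MemoryError
    instead of exhausting system RAM.
    """
    _soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, hard))

# Example: limit the backend to 8 GiB.
cap_memory(8 * 1024**3)
```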

@Fennecai
Author

Fennecai commented May 8, 2024

Those are great points. I don't have any Python experience, so I wouldn't have known this was basically impossible to achieve smoothly, and I'm on a pretty good PC myself, so the specs of slower/older computers didn't cross my mind... but I'm glad my suggestion at least gave food for thought. Thanks for being patient with me in your reply. :)

@RunDevelopment
Member

RunDevelopment commented May 9, 2024

Couldn't we use a conservative heuristic? We know exactly how much memory every image takes up (w*h*c*4 bytes), so we could just enforce that an image can be at most totalRAM / 2 bytes. We often need to make copies of images, so we need enough memory to hold 2 copies of an image, hence the / 2.

Note that / 2 is not enough to prevent the RAM from being filled. Obviously, the OS and other programs also need some RAM. The purpose of the heuristic isn't to prevent chaiNNer from using more memory than physical RAM, but to prevent "ludicrous image sizes" (most likely due to mis-inputs). So very large images might still spill into swap, but that's okay.

This would prevent the worst at runtime and give us a condition to check in the UI to prevent such chains from being executed. What do you think @joeyballentine?


In code, this can simply be implemented by calling a function at the start of a node to check whether the output image fits into RAM. So e.g. the Resize node could look like this:

```python
import numpy as np

# ... (get_h_w_c, resize, and the proposed assert_image_size are chaiNNer helpers)

def resize_node(img: np.ndarray, new_width: int, new_height: int) -> np.ndarray:
    _, _, c = get_h_w_c(img)
    # Check whether the output image fits into RAM *before* doing the operation.
    assert_image_size((new_height, new_width, c))
    # Perform the actual operation.
    return resize(img, new_width, new_height)
```
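
(For illustration, here is one way the proposed `assert_image_size` helper could be written. This is a hedged sketch of the heuristic above, not the implementation from #2845; `psutil` is assumed as a way to query total physical RAM, and the 4 bytes per value matches the w*h*c*4 estimate.)

```python
import psutil  # assumed dependency for querying total physical RAM

def assert_image_size(shape: tuple[int, int, int]) -> None:
    """Raise if an image of the given (height, width, channels) shape
    would exceed the conservative totalRAM / 2 heuristic."""
    h, w, c = shape
    required = h * w * c * 4  # 4 bytes per float32 value
    limit = psutil.virtual_memory().total // 2  # leave room for ~2 copies
    if required > limit:
        raise ValueError(
            f"A {w}x{h}x{c} image needs {required / 2**30:.2f} GiB, "
            f"exceeding the {limit / 2**30:.2f} GiB limit (half of total RAM)."
        )
```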

@joeyballentine
Member

Sure, that would probably be good enough.

@RunDevelopment RunDevelopment linked a pull request May 12, 2024 that will close this issue
@joeyballentine joeyballentine added the enhancement New feature or request label May 19, 2024