
[bug]: trouble converting SDXL safetensors and ckpt models to Diffusers #6353

Open
1 task done
MorpheusXX0 opened this issue May 11, 2024 · 8 comments
@MorpheusXX0

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

RTX 4060

GPU VRAM

8GB

Version number

4.2

Browser

Edge 124.0.2478.80

Python dependencies

No response

What happened

I’m having trouble converting SDXL models to Diffusers. My computer uses all available RAM, crashing Invoke and some other programs in the process. Is there anything I can do to prevent the conversion from consuming all the system memory? I was able to convert a smaller SDXL model to Diffusers and run it without any issues, and models already in Diffusers format also run fine; this problem only happens when converting from safetensors or ckpt to Diffusers.

What you expected to happen

I would expect the model to be converted properly.

How to reproduce the problem

No response

Additional context

No response

Discord username

nemesis_noctis

@MorpheusXX0 MorpheusXX0 added the bug Something isn't working label May 11, 2024
@hipsterusername
Member

How much RAM do you have?

@MorpheusXX0
Author

16 GB

@psychedelicious
Collaborator

Which model is causing the crash? And to confirm, it's RAM and not VRAM that's getting maxed out?

@MorpheusXX0
Author

Any SDXL model. I was only able to convert one model without this problem, DreamShaper SDXL Turbo; all the others had it. And yes, the problem is RAM, not VRAM. If I run an SDXL model that is already in Diffusers format, it can generate images without any problems; this only happens when I try to convert from ckpt or safetensors to Diffusers. SD 1.5 converts without any problems.

@psychedelicious
Collaborator

Please let us know a specific model that has the problem so we can attempt to reproduce the issue exactly.

@MorpheusXX0
Author

Some examples are Juggernaut XL version X (both the normal and the Hyper version) and Kohaku XL Epsilon. I tried other models too, but these are some of them.

@psychedelicious
Collaborator

Thanks. I've done a brief test with Juggernaut X and can confirm conversion uses an excessive amount of RAM.

I had 12GB free and OOM'd when attempting to convert using the convert button. However, I can generate with the safetensors model, the automatic conversion there works without issue. I suspect there is something in the conversion endpoint that isn't set up correctly. @lstein , can you please take a look?

@MorpheusXX0
Author

For me, even when I try to generate without converting first, the automatic conversion runs out of memory. So if I use a safetensors or ckpt model, I cannot use SDXL at all; I have to use a model that is already in the Diffusers format. Every way of converting the model, whether through the model manager or through the automatic conversion, runs out of memory. And I think there is no way to use a safetensors or ckpt model without converting, right? I can use a Space on Hugging Face to convert to Diffusers, but that requires a lot of work.
