Dev please fix requirements.txt and models.yaml, and also here is a script to load all your custom models into the models.yaml. #102
New problem: everything works with the custom models, but any time you want to use it offline it errors out with this:
We should be able to use custom models offline. It seems like it's still trying to use the sd_xl_base.yaml from the huggingface .cache when using other SDXL models.
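Not from this thread, but as a general sketch of the relevant behavior: the huggingface libraries document environment variables that force all resolution to the local cache, which makes any missed-cache lookup (like the sd_xl_base.yaml fetch described above) fail fast instead of hitting the network.

```shell
# Hedged general workaround (documented HF env vars, not this repo's fix):
# force huggingface_hub / diffusers / transformers to resolve everything
# from the local cache and never touch the network.
export HF_HUB_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
# then launch the app as usual, e.g.:
# python app.py    # hypothetical entry point
```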
Apologies for the late response; I've been overwhelmed with other tasks. I'll attempt to address the aforementioned issue within this week.
Firstly, the Gradio share should really be
demo.launch(server_name="0.0.0.0", share=False)
by default, not share=True, especially when uploading personal pictures of ourselves to test this.
I had a lot of issues getting this to run with custom models (or run at all, due to an outdated requirements.txt). In the end I had to run
pip install torch==2.2.1 torchvision==0.17.1+cu118 -f https://download.pytorch.org/whl/cu118/torch_stable.html
and then
pip install -r requirements.txt
again, which loaded xformers, and it somehow started working despite that not making sense. I still get this error popup every time, though it doesn't seem to affect anything. Then I used this .py script (made with ChatGPT 4o) to automatically add all of my .safetensors model directory and my diffusers model directory to models.yaml:
import os
import yaml
First you must do
pip install pyyaml
(PS: you'll have to modify the paths and the os.listdir / os.path calls to make this work for you; just use ChatGPT 4o to do it real fast and give it your directories.) All of the models showed up in the list. The problem was that they would error out on the last step (this may have been before I installed xformers, by the way; it seems it might be needed for custom models to work). The error:
Then I had to update load_models.py; ChatGPT 4o said this file had to be changed. In addition to this, I needed xformers version 0.0.20 to get the custom .safetensors models to work. Here is the modified load_models.py I used to get this working:
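The modified load_models.py itself isn't reproduced above, so this is only a hedged sketch of the kind of dispatch such a change might introduce: picking a loader per models.yaml entry so that local .safetensors checkpoints and diffusers folders both load with local_files_only, rather than falling back to the huggingface .cache. The function name, entry schema, and keyword arguments are all assumptions, not the author's actual fix:

```python
# Hypothetical helper for a load_models.py: decide how each models.yaml
# entry should be loaded so offline use never triggers a network/.cache
# lookup (e.g. for sd_xl_base.yaml). Names and kwargs are assumptions.
def resolve_load_plan(entry):
    """Return (loader_name, kwargs) for one models.yaml entry.

    `entry` is a dict like {"path": ..., "format": "checkpoint"|"diffusers"}.
    The caller would then do something like:
        StableDiffusionXLPipeline.from_single_file(path, **kwargs)  # checkpoint
        StableDiffusionXLPipeline.from_pretrained(path, **kwargs)   # diffusers
    """
    path = entry["path"]
    if entry.get("format") == "checkpoint" or path.endswith(".safetensors"):
        # single-file .safetensors checkpoint: load it fully locally
        return "from_single_file", {"local_files_only": True,
                                    "use_safetensors": True}
    # diffusers-format folder (contains model_index.json)
    return "from_pretrained", {"local_files_only": True}
```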
Hope this helps someone.