
Dev please fix requirements.txt and models.yaml, and also here is a script to load all your custom models into the models.yaml. #102

Open
brentjohnston opened this issue May 19, 2024 · 2 comments
Labels: bug (Something isn't working)

brentjohnston commented May 19, 2024

Firstly, the Gradio launch should really default to `demo.launch(server_name="0.0.0.0", share=False)`, not `share=True`, especially when we are uploading personal pictures of ourselves to test this.

I had a lot of issues getting this to run with custom models (or run at all, due to the outdated requirements.txt). In the end I had to run:

```
pip install torch==2.2.1 torchvision==0.17.1+cu118 -f https://download.pytorch.org/whl/cu118/torch_stable.html
pip install -r requirements.txt
```

The second install pulled in xformers and it somehow started working, despite that not making much sense. I still get an error popup every time (screenshot not shown), though it doesn't seem to affect anything.

Then I used this Python script (made with ChatGPT-4o) to automatically add every model in my .safetensors directory and my diffusers directory to models.yaml. You must `pip install pyyaml` first, and you will need to edit the three directory paths below to match your own setup (ChatGPT-4o can adapt the paths and `os.listdir` calls for you quickly):

```python
import os
import yaml

# Define the directories (examples; edit these for your setup)
safetensors_dir = "D:/stable-diffusion-webui-master/models/Stable-diffusion"
diffusers_dir = "F:/Comfyui-portable/Comfyui/models/diffusers"
models_yaml_path = "C:/StoryDiffusionNewer/StoryDiffusion/config/models.yaml"

# Collect all model filenames from the safetensors directory
safetensors_files = [f for f in os.listdir(safetensors_dir) if os.path.isfile(os.path.join(safetensors_dir, f))]
# Collect all model directory names from the diffusers directory
diffusers_dirs = [d for d in os.listdir(diffusers_dir) if os.path.isdir(os.path.join(diffusers_dir, d))]

# Load the existing models.yaml file
if os.path.exists(models_yaml_path):
    with open(models_yaml_path, 'r') as file:
        models_config = yaml.safe_load(file) or {}
else:
    models_config = {}

# Update models.yaml with new model paths without modifying existing entries
new_models_count = 0

# Add safetensors files
for model_file in safetensors_files:
    model_name = os.path.splitext(model_file)[0]
    model_path = os.path.join(safetensors_dir, model_file).replace("\\", "/")

    # Only add the model if it doesn't already exist in the YAML file
    if model_name not in models_config:
        models_config[model_name] = {
            'path': model_path,
            'single_files': True,
            'use_safetensors': model_file.endswith('.safetensors')
        }
        new_models_count += 1

# Add diffusers directories
for model_dir in diffusers_dirs:
    model_name = model_dir
    model_path = os.path.join(diffusers_dir, model_dir).replace("\\", "/")

    # Only add the model if it doesn't already exist in the YAML file
    if model_name not in models_config:
        models_config[model_name] = {
            'path': model_path,
            'single_files': False,
            'use_safetensors': False
        }
        new_models_count += 1

# Save the updated models.yaml file
with open(models_yaml_path, 'w') as file:
    yaml.safe_dump(models_config, file, default_flow_style=False)

print(f"Added {new_models_count} models to {models_yaml_path}")
```
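For reference, here is roughly what one generated entry looks like in models.yaml (the model name and path are made-up examples). Note that the script does not write a `model_type` key, which the repo's loader reads, so either the loader needs to tolerate its absence or you can add `model_type: original` to each entry by hand:

```yaml
RealVisXL_V4:
  path: D:/stable-diffusion-webui-master/models/Stable-diffusion/RealVisXL_V4.safetensors
  single_files: true
  use_safetensors: true
```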

All of the models showed up in the list, but they would error out on the last step (this may have been before I installed xformers, by the way; it seems to be needed for custom models to work). The error:

\Users\NewPC\Downloads\StoryDiffusionNewer\StoryDiffusion\utils\gradio_utils.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
  hidden_states = F.scaled_dot_product_attention(
C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\diffusers\utils\torch_utils.py:106: UserWarning: ComplexHalf support is experimental and many operators don't support it yet. (Triggered internally at ..\aten\src\ATen\EmptyTensor.cpp:31.)
  x_freq = fftn(x, dim=(-2, -1))
100%|█████████████████████████████████████████████████████████████| 20/20 [00:05<00:00,  3.57it/s]
[<PIL.Image.Image image mode=RGB size=1024x1024 at 0x13B22FDFA30>]
1
[0, 2, 3]
[] [Me] at home, reading a news paper
Traceback (most recent call last):
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\queueing.py", line 566, in process_events
    response = await route_utils.call_process_api(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\route_utils.py", line 270, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\blocks.py", line 1847, in process_api
    result = await self.call_function(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\blocks.py", line 1445, in call_function
    prediction = await utils.async_iteration(iterator)
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\utils.py", line 629, in async_iteration
    return await iterator.__anext__()
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\utils.py", line 622, in __anext__
    return await anyio.to_thread.run_sync(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\utils.py", line 605, in run_sync_iterator_async
    return next(iterator)
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\utils.py", line 788, in gen_wrapper
    response = next(iterator)
  File "C:\Users\NewPC\Downloads\StoryDiffusionNewer\StoryDiffusion\gradio_app_sdxl_specific_id_low_vram.py", line 924, in process_generation
    input_id_images_dict[cur_character[0]]
IndexError: list index out of range

Then I had to update load_models.py. ChatGPT-4o said this file had to be changed because of refactoring: the functions get_models_dict() and load_models() were added to modularize loading models from the models.yaml file.

In addition to this, I used xformers version 0.0.20 to get the custom .safetensors models to work. Here is the modified load_models.py I used:

```python
import os

import torch
import yaml
from diffusers import StableDiffusionXLPipeline
from utils import PhotoMakerStableDiffusionXLPipeline


def get_models_dict():
    with open('config/models.yaml', 'r') as stream:
        try:
            return yaml.safe_load(stream)
        except yaml.YAMLError as exc:
            print(exc)
            return None


def load_models(model_info, device, photomaker_path=None):
    path = model_info["path"]
    single_files = model_info["single_files"]
    use_safetensors = model_info["use_safetensors"]
    # Entries generated by the models.yaml script above have no model_type key,
    # so default to "original" instead of raising a KeyError.
    model_type = model_info.get("model_type", "original")

    if model_type == "original":
        if single_files:
            pipe = StableDiffusionXLPipeline.from_single_file(path, torch_dtype=torch.float16)
        else:
            pipe = StableDiffusionXLPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=use_safetensors)
        pipe = pipe.to(device)
    elif model_type == "Photomaker":
        if single_files:
            pipe = PhotoMakerStableDiffusionXLPipeline.from_single_file(path, torch_dtype=torch.float16)
        else:
            pipe = PhotoMakerStableDiffusionXLPipeline.from_pretrained(path, torch_dtype=torch.float16, use_safetensors=use_safetensors)
        pipe = pipe.to(device)
        if photomaker_path:
            pipe.load_photomaker_adapter(
                os.path.dirname(photomaker_path),
                subfolder="",
                weight_name=os.path.basename(photomaker_path),
                trigger_word="img"
            )
            pipe.fuse_lora()
    else:
        raise NotImplementedError(f"You should choose between original and Photomaker! But you chose {model_type}")
    return pipe
```

Hope this helps someone.

brentjohnston (Author) commented:

New problem: everything works with the custom models, but any time you try to use it offline it errors out with this:

To create a public link, set `share=True` in `launch()`.
C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
Traceback (most recent call last):
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\urllib3\connection.py", line 198, in _new_conn
    sock = connection.create_connection(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\urllib3\util\connection.py", line 60, in create_connection
    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\socket.py", line 955, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\urllib3\connectionpool.py", line 793, in urlopen
    response = self._make_request(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\urllib3\connectionpool.py", line 491, in _make_request
    raise new_e
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\urllib3\connectionpool.py", line 467, in _make_request
    self._validate_conn(conn)
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\urllib3\connectionpool.py", line 1099, in _validate_conn
    conn.connect()
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\urllib3\connection.py", line 616, in connect
    self.sock = sock = self._new_conn()
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\urllib3\connection.py", line 205, in _new_conn
    raise NameResolutionError(self.host, self, e) from e
urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPSConnection object at 0x0000012C970CA6B0>: Failed to resolve 'raw.githubusercontent.com' ([Errno 11001] getaddrinfo failed)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\requests\adapters.py", line 486, in send
    resp = conn.urlopen(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\urllib3\connectionpool.py", line 847, in urlopen
    retries = retries.increment(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\urllib3\util\retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /Stability-AI/generative-models/main/configs/inference/sd_xl_base.yaml (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x0000012C970CA6B0>: Failed to resolve 'raw.githubusercontent.com' ([Errno 11001] getaddrinfo failed)"))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\queueing.py", line 501, in call_prediction
    output = await route_utils.call_process_api(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\route_utils.py", line 258, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\blocks.py", line 1710, in process_api
    result = await self.call_function(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\blocks.py", line 1262, in call_function
    prediction = await utils.async_iteration(iterator)
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\utils.py", line 517, in async_iteration
    return await iterator.__anext__()
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\utils.py", line 510, in __anext__
    return await anyio.to_thread.run_sync(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\anyio\_backends\_asyncio.py", line 2144, in run_sync_in_worker_thread
    return await future
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\utils.py", line 493, in run_sync_iterator_async
    return next(iterator)
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\gradio\utils.py", line 676, in gen_wrapper
    response = next(iterator)
  File "C:\Users\NewPC\Downloads\StoryDiffusionNewer\StoryDiffusion\gradio_app_sdxl_specific_id_low_vram.py", line 763, in process_generation
    pipe = load_models(model_info, device=device, photomaker_path=photomaker_path)
  File "C:\Users\NewPC\Downloads\StoryDiffusionNewer\StoryDiffusion\utils\load_models_utils.py", line 29, in load_models
    pipe = PhotoMakerStableDiffusionXLPipeline.from_single_file(path, torch_dtype=torch.float16)
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\diffusers\loaders\single_file.py", line 263, in from_single_file
    pipe = download_from_original_stable_diffusion_ckpt(
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\diffusers\pipelines\stable_diffusion\convert_from_ckpt.py", line 1319, in download_from_original_stable_diffusion_ckpt
    original_config_file = BytesIO(requests.get(config_url).content)
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\requests\api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\requests\api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\requests\sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\requests\sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "C:\Users\NewPC\anaconda3\envs\storydiffusion\lib\site-packages\requests\adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /Stability-AI/generative-models/main/configs/inference/sd_xl_base.yaml (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x0000012C970CA6B0>: Failed to resolve 'raw.githubusercontent.com' ([Errno 11001] getaddrinfo failed)"))

We should be able to use custom models offline. It seems diffusers is still trying to download sd_xl_base.yaml from GitHub, rather than using a cached copy, whenever it loads other SDXL models from single files.
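One possible workaround, assuming your installed diffusers version still accepts the `original_config_file` argument on `from_single_file` (check the signature in your version): download sd_xl_base.yaml from the Stability-AI/generative-models repo once while online, then point the loader at the local copy so it never contacts raw.githubusercontent.com. The paths below are examples:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Local copy of sd_xl_base.yaml, downloaded once from the
# Stability-AI/generative-models repo while online (example path).
LOCAL_CONFIG = "C:/StoryDiffusionNewer/StoryDiffusion/config/sd_xl_base.yaml"

pipe = StableDiffusionXLPipeline.from_single_file(
    "D:/stable-diffusion-webui-master/models/Stable-diffusion/my_model.safetensors",
    torch_dtype=torch.float16,
    original_config_file=LOCAL_CONFIG,  # avoid the network fetch of the config
)
```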

Z-YuPeng added the bug label May 21, 2024

Z-YuPeng (Collaborator) commented:

Apologies for the late response due to being overwhelmed with other tasks. I'll attempt to address the aforementioned issue within this week.
