
[bug]: T2I-Adapter in tiled-upscaling-sample nodes-workflow isn't working #5370

Closed
1 task done
lukas-frml opened this issue Jan 1, 2024 · 3 comments · Fixed by #6342
Labels
bug Something isn't working nodes / workflows

Comments

@lukas-frml

lukas-frml commented Jan 1, 2024

Is there an existing issue for this?

  • I have searched the existing issues

OS

Windows

GPU

cuda

VRAM

8GB

What version did you experience this issue on?

3.5.1

What happened?

While testing whether the Canny T2I-Adapter could be used in place of a ControlNet in the new tiled-upscaling sample nodes-workflow, the following error arises at the beginning of the denoising process (note that I'm using SDXL instead of SD 1.5):

[2024-01-01 20:32:29,684]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "C:\Users\source\Git\Invokeai.venv\lib\site-packages\invokeai\app\services\invocation_processor\invocation_processor_default.py", line 104, in __process
outputs = invocation.invoke_internal(
File "C:\Users\source\Git\Invokeai.venv\lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 669, in invoke_internal
output = self.invoke(context)
File "C:\Users\source\Git\Invokeai.venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\source\Git\Invokeai.venv\lib\site-packages\invokeai\app\invocations\latent.py", line 772, in invoke
) = pipeline.latents_from_embeddings(
File "C:\Users\source\Git\Invokeai.venv\lib\site-packages\invokeai\backend\stable_diffusion\diffusers_pipeline.py", line 381, in latents_from_embeddings
latents, attention_map_saver = self.generate_latents_from_embeddings(
File "C:\Users\source\Git\Invokeai.venv\lib\site-packages\invokeai\backend\stable_diffusion\diffusers_pipeline.py", line 454, in generate_latents_from_embeddings
step_output = self.step(
File "C:\Users\source\Git\Invokeai.venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\source\Git\Invokeai.venv\lib\site-packages\invokeai\backend\stable_diffusion\diffusers_pipeline.py", li
uc_noise_pred, c_noise_pred = self.invokeai_diffuser.do_unet_step(
File "C:\Users\source\Git\Invokeai.venv\lib\site-packages\invokeai\backend\stable_diffusion\diffusion\shared_invokeai_diffusion.py", line 267, in do_unet_step
) = self._apply_standard_conditioning(
File "C:\Users\source\Git\Invokeai.venv\lib\site-packages\invokeai\backend\stable_diffusion\diffusion\shared_invokeai_diffusion.py", line 380, in _apply_standard_conditioning
both_results = self.model_forward_callback(
File "C:\Users\source\Git\Invokeai.venv\lib\site-packages\invokeai\backend\stable_diffusion\diffusers_pipeline.py", line 664, in _unet_forward
return self.unet(
File "C:\Users\source\Git\Invokeai.venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\source\Git\Invokeai.venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\source\Git\Invokeai.venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 1087, in forward
sample += down_intrablock_additional_residuals.pop(0)
RuntimeError: The size of tensor a (57) must match the size of tensor b (56) at non-singleton dimension 3

[2024-01-01 20:32:29,695]::[InvokeAI]::ERROR --> Error while invoking:
The size of tensor a (57) must match the size of tensor b (56) at non-singleton dimension 3
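The numbers in the error are consistent with a width-rounding mismatch. A minimal sketch (an assumption for illustration: that 57 and 56 are latent-space widths, and that the UNet halves its feature maps with ceiling division at each downsampling stage) of how a one-unit difference propagates and keeps the adapter residuals from lining up:

```python
import math

def feature_widths(latent_width: int, levels: int = 3) -> list[int]:
    """Width of the feature map after each 2x downsampling stage."""
    widths = [latent_width]
    for _ in range(levels):
        widths.append(math.ceil(widths[-1] / 2))
    return widths

# A latent width of 57 halves to 29, 15, 8, while residuals computed
# for a width of 56 halve to 28, 14, 7 -- the additive residual in
# `sample += down_intrablock_additional_residuals.pop(0)` never matches.
print(feature_widths(57))  # [57, 29, 15, 8]
print(feature_widths(56))  # [56, 28, 14, 7]
```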

Screenshots

No response

Additional context

No response

Contact Details

No response

@lukas-frml lukas-frml added the bug Something isn't working label Jan 1, 2024
@lukas-frml lukas-frml changed the title [bug]: T2I-Adapter in tiled upscaling sample nodes-workflow isn't working [bug]: T2I-Adapter in tiled-upscaling-sample nodes-workflow isn't working Jan 2, 2024
@Adreitz

Adreitz commented Jan 4, 2024

What resolution are you trying to generate at? In my experience, T2I-Adapters require each dimension to be a multiple of 32, even though the rest of SD works with multiples of 8. I received similar errors in my SDXL upscale workflow when taking a 1280x800 initial generation and trying to upscale it by 2.5x: even though the initial dimensions were multiples of 32, the upscaled height of 2000 is not. I needed to find an upscale multiplier that produced multiples of 32 for both dimensions simultaneously; 2.4x and 3x worked for me. Alternatively, you can accept a bit of nonuniform stretching of the image and just choose the closest multiple of 32 for your existing dimensions.

[Edit] Reading again, it seems like you're using a tiled upscaling workflow. I haven't tried to do this myself, but I presume you will need to consider the dimensions of your tiles rather than of the overall upscaled image and ensure they are all multiples of 32. [/Edit]
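The workaround above can be sketched as a small helper (hypothetical names, not InvokeAI code) that snaps the upscaled dimensions to the nearest multiple of 32, accepting the slight stretch the comment describes:

```python
def snap_to_multiple(value: int, multiple: int = 32) -> int:
    """Round a dimension to the nearest multiple (never below one multiple)."""
    return max(multiple, round(value / multiple) * multiple)

def snapped_size(width: int, height: int, scale: float,
                 multiple: int = 32) -> tuple[int, int]:
    """Upscaled size with both dimensions snapped to a multiple of `multiple`."""
    return (snap_to_multiple(round(width * scale), multiple),
            snap_to_multiple(round(height * scale), multiple))

# 1280x800 at 2.5x gives 3200x2000; 2000 is not a multiple of 32 and
# gets snapped. At 2.4x both dimensions (3072x1920) already divide by 32.
print(snapped_size(1280, 800, 2.5))  # (3200, 1984)
print(snapped_size(1280, 800, 2.4))  # (3072, 1920)
```

The same check applies per-tile in a tiled workflow: it is each tile's dimensions, not the overall image's, that must satisfy the multiple-of-32 constraint.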

@lukas-frml
Author

I'll try that, thank you.

@psychedelicious
Collaborator

@RyanJDick Can you suggest any workarounds for T2I Adapter's image dimension requirements?

@joshistoast Here's a GH issue for the problem you reported in https://discord.com/channels/1020123559063990373/1149506274971631688/1238333845648969789

We had some special handling that, when you selected a T2I-Adapter, would force the image dimensions to a multiple of 64. That didn't make it into Control Layers, in which Control Adapter handling was reworked.

For now, I will:

  • Add a check to the Invoke button, disabling it if a T2I Adapter is in use and the image sizes aren't a multiple of 64 (or is it 32?).
  • Add something to the Control Adapter layer model select to indicate if you have a ControlNet or T2I Adapter selected.
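The first check could look roughly like this (a sketch with hypothetical names, not the actual InvokeAI implementation; the required multiple is 64 per the comment above, though it may be 32 depending on the adapter):

```python
def t2i_adapter_dims_valid(width: int, height: int, multiple: int = 64) -> bool:
    """Pre-flight check for the Invoke button: when a T2I-Adapter is in
    use, both image dimensions must be a multiple of `multiple`."""
    return width % multiple == 0 and height % multiple == 0

# Example: 1024x1024 passes; 1280x2000 would disable the Invoke button.
print(t2i_adapter_dims_valid(1024, 1024))  # True
print(t2i_adapter_dims_valid(1280, 2000))  # False
```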
