When I choose live mode and start a live preview, the loading process is very slow.
It seems to pause after the backend prints "got prompt" and "model_type EPS".
Several minutes later, it starts executing the remaining steps.
Could anyone help with this issue? Thanks!
got prompt
model_type EPS
Using xformers attention in VAE
Using xformers attention in VAE
clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
Requested to load SDXLClipModel
Loading 1 new model
E:\software\ComfyUI-aki-v1.1\comfy\ldm\modules\attention.py:345: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
Requested to load AutoencoderKL
Loading 1 new model
Requested to load SDXL
Loading 1 new model
Prompt executed in 198.61 seconds
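The warning in the log says this PyTorch build was not compiled with flash attention, so `scaled_dot_product_attention` may fall back to a slower kernel. As a diagnostic, you can check which SDPA backends your PyTorch 2.x install reports as enabled (a minimal sketch, assuming PyTorch 2.x with CUDA support; these are stock `torch.backends.cuda` queries, not ComfyUI APIs):

```python
import torch

# Report which scaled-dot-product-attention backends PyTorch will consider.
# If flash SDP is disabled/unavailable, attention falls back to the
# memory-efficient or plain math kernel, which can be noticeably slower.
print("torch version:", torch.__version__)
print("flash SDP enabled:", torch.backends.cuda.flash_sdp_enabled())
print("mem-efficient SDP enabled:", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math SDP enabled:", torch.backends.cuda.math_sdp_enabled())
```

If only the math backend is enabled, installing a PyTorch build with flash-attention support (or relying on xformers, which the log shows is already used for the VAE) may reduce the stall.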