first time run but facing bug #3443

Open
waldolin opened this issue May 11, 2024 · 0 comments


waldolin commented May 11, 2024

This is my first time running ComfyUI and I hit a bug.
I click Queue Prompt, but execution fails at the CLIP Text Encode (Prompt) node.
I recreated the venv and reinstalled the requirements, but the error is the same.
I also noticed that the models/clip folder is empty.

Error occurred when executing CLIPTextEncode:

CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

File "D:\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI\nodes.py", line 58, in encode
cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
File "D:\ComfyUI\comfy\sd.py", line 135, in encode_from_tokens
self.load_model()
File "D:\ComfyUI\comfy\sd.py", line 155, in load_model
model_management.load_model_gpu(self.patcher)
File "D:\ComfyUI\comfy\model_management.py", line 453, in load_model_gpu
return load_models_gpu([model])
File "D:\ComfyUI\comfy\model_management.py", line 447, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
File "D:\ComfyUI\comfy\model_management.py", line 304, in model_load
raise e
File "D:\ComfyUI\comfy\model_management.py", line 300, in model_load
self.real_model = self.model.patch_model(device_to=patch_model_to, patch_weights=load_weights)
File "D:\ComfyUI\comfy\model_patcher.py", line 270, in patch_model
self.model.to(device_to)
File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1152, in to
return self._apply(convert)
File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 802, in _apply
module._apply(fn)
File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 802, in _apply
module._apply(fn)
File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 802, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 825, in _apply
param_applied = fn(param)
File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1150, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
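A minimal way to follow the debugging hint in the error message above. This is a sketch: it assumes ComfyUI is started with `python main.py` from the install directory, and shows the POSIX form of the environment variable with the Windows cmd equivalent in a comment:

```shell
# Make CUDA kernel launches synchronous so the stack trace points at the
# call that actually failed, as the error message recommends.
export CUDA_LAUNCH_BLOCKING=1   # Windows cmd: set CUDA_LAUNCH_BLOCKING=1
# python main.py                # then relaunch ComfyUI (entry point assumed)
```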

** ComfyUI start up time: 2024-05-11 11:28:53.663484

Prestartup times for custom nodes:
0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Manager

Total VRAM 49136 MB, total RAM 130883 MB
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon PRO W7900 [ZLUDA] : cudaMallocAsync
VAE dtype: torch.bfloat16
Using pytorch cross attention
Adding extra search path checkpoints D:\Users\lin\stable-diffusion-webui-directml/models/Stable-diffusion
Adding extra search path configs D:\Users\lin\stable-diffusion-webui-directml\configs
Adding extra search path vae D:\Users\lin\stable-diffusion-webui-directml/models/VAE
Adding extra search path loras D:\Users\lin\stable-diffusion-webui-directml/models/Lora
Adding extra search path loras D:\Users\lin\stable-diffusion-webui-directml/models/LyCORIS
Adding extra search path upscale_models D:\Users\lin\stable-diffusion-webui-directml/models/ESRGAN
Adding extra search path upscale_models D:\Users\lin\stable-diffusion-webui-directml/models/RealESRGAN
Adding extra search path upscale_models D:\Users\lin\stable-diffusion-webui-directml/models/SwinIR
Adding extra search path embeddings D:\Users\lin\stable-diffusion-webui-directml/embeddings
Adding extra search path hypernetworks D:\Users\lin\stable-diffusion-webui-directml/models/hypernetworks
Adding extra search path controlnet D:\Users\lin\stable-diffusion-webui-directml\extensions\sd-webui-controlnet
Adding extra search path clip D:\ComfyUI\models/clip/
Adding extra search path clip_vision D:\ComfyUI\models/clip_vision/
Adding extra search path gligen D:\ComfyUI\models/gligen
Adding extra search path custom_nodes D:\ComfyUI\custom_nodes
Traceback (most recent call last):
File "D:\ComfyUI\nodes.py", line 1867, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\ComfyUI\custom_nodes\ComfyUI-Gemini\__init__.py", line 49, in <module>
from .GeminiAPINode import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
File "D:\ComfyUI\custom_nodes\ComfyUI-Gemini\GeminiAPINode.py", line 6, in <module>
import google.generativeai as genai
ModuleNotFoundError: No module named 'google.generativeai'

Cannot import D:\ComfyUI\custom_nodes\ComfyUI-Gemini module for custom nodes: No module named 'google.generativeai'
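The Gemini node fails because the `google.generativeai` module is missing from ComfyUI's venv; it is published on PyPI as `google-generativeai`. A sketch of the fix, assuming the venv path shown in the log:

```shell
# Activate ComfyUI's venv first (path taken from the log above):
#   D:\ComfyUI\venv\Scripts\activate
# then install the missing dependency for the ComfyUI-Gemini custom node.
pip install google-generativeai
```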

Loading: ComfyUI-Impact-Pack (V4.30.3)

Loading: ComfyUI-Impact-Pack (Subpack: V0.5)

Traceback (most recent call last):
File "D:\ComfyUI\nodes.py", line 1867, in load_custom_node
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes\__init__.py", line 1, in <module>
from inference_core_nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
ModuleNotFoundError: No module named 'inference_core_nodes'

Cannot import D:\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes module for custom nodes: No module named 'inference_core_nodes'
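For the Inference-Core-Nodes failure, the usual ComfyUI convention is that a custom node ships a `requirements.txt`, and installing it into the venv is the standard fix. This is a sketch under that assumption; I have not verified that this particular repo provides one:

```shell
# Inside ComfyUI's venv, install the custom node's declared dependencies.
# The path comes from the log; the presence of requirements.txt is assumed.
pip install -r "D:\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes\requirements.txt"
```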

Loading: ComfyUI-Manager (V1.0.1)

ComfyUI Revision: 2171 [4f63ee9] | Released on '2024-05-10'

Loading: ComfyUI-Workflow-Component (V0.43.3) !! WARN: This is an experimental extension. Extremely unstable. !!

[START] ComfyUI AlekPet Nodes

Node -> ArgosTranslateNode [Loading]
Node -> DeepTranslatorNode [Loading]
Node -> ExtrasNode [Loading]
Node -> GoogleTranslateNode [Loading]
Node -> PainterNode [Loading]
Node -> PoseNode [Loading]

[END] ComfyUI AlekPet Nodes


Import times for custom nodes:
0.0 seconds: D:\ComfyUI\custom_nodes\websocket_image_save.py
0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_SeeCoder
0.0 seconds: D:\ComfyUI\custom_nodes\AIGODLIKE-COMFYUI-TRANSLATION
0.0 seconds: D:\ComfyUI\custom_nodes\AIGODLIKE-COMFYUI-TRANSLATION
0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_TiledKSampler
0.0 seconds: D:\ComfyUI\custom_nodes\websocket_image_save.py
0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_JPS-Nodes
0.0 seconds (IMPORT FAILED): D:\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes
0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_TiledKSampler
0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
0.0 seconds (IMPORT FAILED): D:\ComfyUI\custom_nodes\ComfyUI-Inference-Core-Nodes
0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_JPS-Nodes
0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI_SeeCoder
0.0 seconds (IMPORT FAILED): D:\ComfyUI\custom_nodes\ComfyUI-Gemini
0.0 seconds (IMPORT FAILED): D:\ComfyUI\custom_nodes\ComfyUI-Gemini
0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Workflow-Component
0.0 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Workflow-Component
0.5 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Manager
0.6 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
0.9 seconds: D:\ComfyUI\custom_nodes\ComfyUI-Manager
5.1 seconds: D:\ComfyUI\custom_nodes\ComfyUI_Custom_Nodes_AlekPet
5.2 seconds: D:\ComfyUI\custom_nodes\ComfyUI_Custom_Nodes_AlekPet

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
Requested to load SD1ClipModel
Loading 1 new model
!!! Exception during processing!!! CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Traceback (most recent call last):
File "D:\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI\nodes.py", line 58, in encode
cond, pooled = clip.encode_from_tokens(tokens, return_pooled=True)
File "D:\ComfyUI\comfy\sd.py", line 135, in encode_from_tokens
self.load_model()
File "D:\ComfyUI\comfy\sd.py", line 155, in load_model
model_management.load_model_gpu(self.patcher)
File "D:\ComfyUI\comfy\model_management.py", line 453, in load_model_gpu
return load_models_gpu([model])
File "D:\ComfyUI\comfy\model_management.py", line 447, in load_models_gpu
cur_loaded_model = loaded_model.model_load(lowvram_model_memory)
File "D:\ComfyUI\comfy\model_management.py", line 304, in model_load
raise e
File "D:\ComfyUI\comfy\model_management.py", line 300, in model_load
self.real_model = self.model.patch_model(device_to=patch_model_to, patch_weights=load_weights)
File "D:\ComfyUI\comfy\model_patcher.py", line 270, in patch_model
self.model.to(device_to)
File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1152, in to
return self._apply(convert)
File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 802, in _apply
module._apply(fn)
File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 802, in _apply
module._apply(fn)
File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 802, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 825, in _apply
param_applied = fn(param)
File "D:\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1150, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: operation not supported
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Prompt executed in 1.05 seconds
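The failure can be reproduced outside ComfyUI: moving a tensor to `cuda:0` is the same operation the trace shows failing when `model_patcher.patch_model` calls `self.model.to(device_to)`. A small guarded check (run it inside the venv; it fails soft when torch is not importable):

```shell
python - <<'PY'
try:
    import torch
except ImportError:
    print("torch not importable in this environment")
else:
    print("cuda available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))
        # Under ZLUDA this is the call that raised "operation not supported"
        print(torch.ones(1).to("cuda"))
PY
```

If this minimal script reproduces the error, the problem is in the torch/ZLUDA pairing rather than in ComfyUI or the workflow. Note that the log above shows the cudaMallocAsync allocator active; ZLUDA guides commonly recommend starting ComfyUI with its `--disable-cuda-malloc` option because that allocator is typically unsupported there. This is a suggestion to verify against the ZLUDA fork's own instructions, not a confirmed fix.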
