Issue with Xformers? #15

Open
Dude045 opened this issue Aug 23, 2023 · 2 comments

Comments

@Dude045

Dude045 commented Aug 23, 2023

Hi, to start, I'm no coding expert; I barely understand any of this and mostly follow guides online, so thanks for your understanding. I was able to install MusicGen, but when I click on submit, it always ends in an error. It looks like a problem with xFormers. I tried uninstalling and reinstalling it, but that changed nothing. I'm really interested in the 'song to continue' feature of this build, so any help would be great. Here is the command prompt output. Thanks!

G:\MusicGen\audiocraft-infinity-webui>python webui.py
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.0.1+cu118 with CUDA 1108 (you have 2.0.1+cpu)
Python 3.10.11 (you have 3.10.11)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Loading model large
seed: 1532176914
Iterations required: 2
Sample rate: 32000
Traceback (most recent call last):
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\routes.py", line 437, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 1346, in process_api
result = await self.call_function(
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 1074, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
File "G:\MusicGen\audiocraft-infinity-webui\webui.py", line 212, in generate
wav = initial_generate(melody_boolean, MODEL, text, melody, msr, continue_file, duration, cf_cutoff, sc_text)
File "G:\MusicGen\audiocraft-infinity-webui\webui.py", line 143, in initial_generate
wav = MODEL.generate(descriptions=[text], progress=False)
File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\musicgen.py", line 163, in generate
return self._generate_tokens(attributes, prompt_tokens, progress)
File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\musicgen.py", line 309, in _generate_tokens
gen_tokens = self.lm.generate(
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\lm.py", line 490, in generate
next_token = self._sample_next_token(
File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\lm.py", line 354, in _sample_next_token
all_logits = model(
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in call_impl
return forward_call(*args, **kwargs)
File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\lm.py", line 253, in forward
out = self.transformer(input_, cross_attention_src=cross_attention_input)
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 657, in forward
x = self._apply_layer(layer, x, *args, **kwargs)
File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 614, in _apply_layer
return layer(*args, **kwargs)
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 508, in forward
self._sa_block(self.norm1(x), src_mask, src_key_padding_mask))
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\transformer.py", line 599, in _sa_block
x = self.self_attn(x, x, x,
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in call_impl
return forward_call(*args, **kwargs)
File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 367, in forward
x = ops.memory_efficient_attention(q, k, v, attn_mask, p=p)
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\xformers\ops\fmha_init
.py", line 193, in memory_efficient_attention
return memory_efficient_attention(
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\xformers\ops\fmha_init
.py", line 291, in _memory_efficient_attention
return memory_efficient_attention_forward(
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\xformers\ops\fmha_init
.py", line 307, in _memory_efficient_attention_forward
op = _dispatch_fw(inp, False)
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\xformers\ops\fmha\dispatch.py", line 96, in _dispatch_fw
return _run_priority_list(
File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\xformers\ops\fmha\dispatch.py", line 63, in _run_priority_list
raise NotImplementedError(msg)
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
query : shape=(2, 1, 32, 64) (torch.float32)
key : shape=(2, 1, 32, 64) (torch.float32)
value : shape=(2, 1, 32, 64) (torch.float32)
attn_bias : <class 'NoneType'>
p : 0
decoderF is not supported because:
device=cpu (supported: {'cuda'})
attn_bias type is <class 'NoneType'>
operator wasn't built - see python -m xformers.info for more info
flshattF@0.0.0 is not supported because:
device=cpu (supported: {'cuda'})
dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
operator wasn't built - see python -m xformers.info for more info
tritonflashattF is not supported because:
device=cpu (supported: {'cuda'})
dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
operator wasn't built - see python -m xformers.info for more info
triton is not available
cutlassF is not supported because:
device=cpu (supported: {'cuda'})
operator wasn't built - see python -m xformers.info for more info
smallkF is not supported because:
max(query.shape[-1] != value.shape[-1]) > 32
operator wasn't built - see python -m xformers.info for more info
unsupported embed per head: 64

@diffractometer

Running into the same thing. Might be my smooth brain, but I assume something isn't running on the GPU correctly from the command line. Will post results.
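
A quick way to confirm whether torch can see a GPU at all (a minimal check, assuming an NVIDIA card is present):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

In the log above this would print 2.0.1+cpu False: the installed PyTorch is the CPU-only build, so none of xFormers' CUDA kernels can run, which is exactly what the NotImplementedError lists (device=cpu, supported: {'cuda'}).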

@Mozoloa

Mozoloa commented Sep 27, 2023

Here's what I usually do for this:

From within your env and install folder, run:

pip uninstall torch
pip uninstall xformers
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install xformers
python webui.py
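
Afterwards, a quick sanity check before relaunching (exact output will vary with your setup):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python -m xformers.info

The torch version should now end in +cu118 instead of +cpu, torch.cuda.is_available() should print True, and xformers.info should report its attention operators as available. This assumes an NVIDIA GPU with a CUDA 11.8-compatible driver; on a CPU-only machine the memory-efficient attention path won't work regardless.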
