1aienthusiast / audiocraft-infinity-webui

License: GNU Affero General Public License v3.0

Language: Python (100%)
Topics: agplv3, artificial-intelligence, audiocraft, generation, machine-learning, ml, music, music-generation, musicgen, open-source

audiocraft-infinity-webui's Introduction

Audiocraft Infinity WebUI

  • Adds generation of songs longer than 30 seconds.
  • Adds the ability to continue songs.
  • Adds a seed option.
  • Adds the ability to load locally downloaded models.
  • Adds training (thanks to chavinlo's repo: https://github.com/chavinlo/musicgen_trainer).
  • Adds macOS support.
  • Adds a queue (on the main-queue branch: https://github.com/1aienthusiast/audiocraft-infinity-webui/tree/main-queue).
  • Adds batching (run webuibatch.py instead of webui.py).
  • Disables (hopefully) the Gradio analytics.

Note! The project is currently not actively maintained, but PRs are accepted.

Installation

Python 3.9 is recommended.

  1. Clone the repo: git clone https://github.com/1aienthusiast/audiocraft-infinity-webui.git
  2. Install pytorch: pip install 'torch>=2.0'
  3. Install the requirements: pip install -r requirements.txt
  4. Clone my fork of the Meta audiocraft repo and chavinlo's MusicGen trainer inside the repositories folder:
cd repositories
git clone https://github.com/1aienthusiast/audiocraft
git clone https://github.com/chavinlo/musicgen_trainer
cd ..

Note!

If you already cloned the Meta audiocraft repo, remove it and clone the provided fork instead; otherwise the seed option will not work.

cd repositories
rm -rf audiocraft/
git clone https://github.com/1aienthusiast/audiocraft
git clone https://github.com/chavinlo/musicgen_trainer
cd ..

Usage

python webui.py
python webuibatch.py (with batching support)

Updating

Run git pull inside the root folder to update the webui, and the same command inside repositories/audiocraft to update audiocraft.

Models

Meta provides four pre-trained models:

  • small - 300M parameters, text-to-music only
  • medium - 1.5B parameters, text-to-music only
  • melody - 1.5B parameters, text-to-music and melody-guided generation
  • large - 3.3B parameters, text-to-music only

Needs a GPU!

I recommend 12GB of VRAM for the large model.
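For reference, here is a minimal sketch (assuming the bundled audiocraft fork's Python API; the prompt text is a placeholder) of what loading and running one of these models amounts to. The web UI does all of this for you:

    # Sketch only: load a pretrained MusicGen model via the audiocraft fork in repositories/.
    import torch
    from audiocraft.models import MusicGen

    device = 'cuda' if torch.cuda.is_available() else 'cpu'  # a GPU is strongly recommended
    model = MusicGen.get_pretrained('large', device=device)  # 'small' / 'medium' / 'melody' / 'large'
    model.set_generation_params(duration=30)                 # seconds to generate per pass
    wav = model.generate(descriptions=['placeholder prompt'], progress=False)  # (batch, channels, samples)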

Training

Dataset Creation

Create a folder and place your audio and caption files in it. They must be in WAV and TXT format, respectively.

Place the folder in training/datasets/.

Important: Split your audio into 35-second chunks. Only the first 30 seconds of each chunk will be processed, and no chunk may be shorter than 30 seconds.

For example, segment_000.txt contains the caption "jazz music, jobim" for the WAV file segment_000.wav.
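A hypothetical helper (not part of this repo; the function name and the choice to drop any trailing partial chunk are assumptions) showing one way to produce such WAV/TXT pairs with torchaudio:

    # Split one long WAV into 35-second chunks and write a matching caption .txt per chunk.
    import os
    import torchaudio

    def split_for_training(src_wav, caption, out_dir, chunk_seconds=35):
        os.makedirs(out_dir, exist_ok=True)
        wav, sr = torchaudio.load(src_wav)        # wav: (channels, samples)
        chunk = chunk_seconds * sr
        for i in range(wav.shape[1] // chunk):    # drops any tail shorter than 35 s
            segment = wav[:, i * chunk:(i + 1) * chunk]
            torchaudio.save(f"{out_dir}/segment_{i:03d}.wav", segment, sr)
            with open(f"{out_dir}/segment_{i:03d}.txt", "w") as f:
                f.write(caption)                  # e.g. "jazz music, jobim"

    split_for_training("my_song.wav", "jazz music, jobim", "training/datasets/my_dataset")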

Options

  • dataset_path - path to your dataset of WAV and TXT pairs.
  • model_id - MusicGen model to fine-tune. Can be small/medium/large. Default: small
  • lr - float, learning rate. Default: 1e-4 (0.0001)
  • epochs - integer, epoch count. Default: 5
  • use_wandb - integer, 1 to enable wandb logging, 0 to disable it. Default: 0 (disabled)
  • save_step - integer, number of steps between checkpoint saves. Default: None
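If you prefer to call the trainer directly rather than through the web UI, a hypothetical invocation might look like the sketch below. It assumes the trainer's train() function accepts keyword arguments named after the options above (the webui imports it as "from train import train"); check repositories/musicgen_trainer for the exact signature:

    # Hypothetical direct call to chavinlo's trainer; argument names mirror the options above.
    import sys
    sys.path.append("repositories/musicgen_trainer")  # assumption: make the trainer importable
    from train import train

    train(
        dataset_path="training/datasets/my_dataset",  # folder of WAV/TXT pairs
        model_id="small",     # base model to fine-tune: small / medium / large
        lr=1e-4,              # learning rate
        epochs=5,             # epoch count
        use_wandb=0,          # 1 to enable wandb logging, 0 to disable
        save_step=None,       # save a checkpoint every N steps; None saves only at the end
    )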

Models

Once training finishes, the model (and checkpoints) will be available under the models/ directory.

Loading the finetuned models

The model is saved to models/ as lm_final.pt.

  1. Place it in models/DIRECTORY_NAME/.
  2. In the Inference tab, choose custom as the model and enter DIRECTORY_NAME into the input field.
  3. In the Inference tab, also choose the base model it was fine-tuned on.
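Under the hood, loading a fine-tuned checkpoint amounts to restoring the language-model weights on top of the base model. A rough sketch (an assumption, not the webui's exact code; it presumes lm_final.pt holds a plain LM state dict):

    # Sketch: load fine-tuned LM weights on top of the base MusicGen model.
    import torch
    from audiocraft.models import MusicGen

    model = MusicGen.get_pretrained('small')  # the base model it was fine-tuned on
    state = torch.load('models/DIRECTORY_NAME/lm_final.pt', map_location='cpu')
    model.lm.load_state_dict(state)           # swap in the fine-tuned language model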

Colab

For Google Colab you need to use the --share flag.

License

  • The code in this repository is released under the AGPLv3 license as found in the LICENSE file.

audiocraft-infinity-webui's People

Contributors

1aienthusiast, jeffdbeats, mooggsentry, oobabooga


audiocraft-infinity-webui's Issues

Error immediately when trying to generate

I get these errors after installing PyTorch from https://pytorch.org/get-started/locally/.
I had to install it from there because it previously gave me errors about having the CPU build instead of CUDA.

Traceback (most recent call last):
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\routes.py", line 437, in run_predict
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1346, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\gradio\blocks.py", line 1074, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\webui.py", line 212, in generate
    wav = initial_generate(melody_boolean, MODEL, text, melody, msr, continue_file, duration, cf_cutoff, sc_text)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\webui.py", line 143, in initial_generate
    wav = MODEL.generate(descriptions=[text], progress=False)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\musicgen.py", line 163, in generate
    return self._generate_tokens(attributes, prompt_tokens, progress)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\musicgen.py", line 309, in _generate_tokens
    gen_tokens = self.lm.generate(
                 ^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\lm.py", line 490, in generate
    next_token = self._sample_next_token(
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\lm.py", line 354, in _sample_next_token
    all_logits = model(
                 ^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\lm.py", line 253, in forward
    out = self.transformer(input_, cross_attention_src=cross_attention_input)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 657, in forward
    x = self._apply_layer(layer, x, *args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 614, in _apply_layer
    return layer(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 508, in forward
    self._sa_block(self.norm1(x), src_mask, src_key_padding_mask))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\transformer.py", line 599, in _sa_block
    x = self.self_attn(x, x, x,
        ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "F:\Programme\Local AI\AudioCraft\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 367, in forward
    x = ops.memory_efficient_attention(q, k, v, attn_mask, p=p)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
    return _memory_efficient_attention(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\xformers\ops\fmha\__init__.py", line 306, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp)
         ^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\xformers\ops\fmha\dispatch.py", line 94, in _dispatch_fw
    return _run_priority_list(
           ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ghoul\AppData\Local\Programs\Python\Python311\Lib\site-packages\xformers\ops\fmha\dispatch.py", line 69, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(2, 1, 16, 64) (torch.float32)
     key         : shape=(2, 1, 16, 64) (torch.float32)
     value       : shape=(2, 1, 16, 64) (torch.float32)
     attn_bias   : <class 'NoneType'>
     p           : 0
`flshattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
`tritonflashattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
    Operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
`cutlassF` is not supported because:
    device=cpu (supported: {'cuda'})
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    unsupported embed per head: 64

AttributeError after updating to latest version

I am getting this error after updating to the latest version:

Traceback (most recent call last):
  File "C:\audiocraft-infinity-webui\webui.py", line 17, in <module>
    from modules import shared, ui
  File "C:\audiocraft-infinity-webui\modules\ui.py", line 5, in <module>
    class ToolButton(gr.Button, gr.components.FormComponent):
AttributeError: module 'gradio.components' has no attribute 'FormComponent'
NativeCommandExitException: Program "python.exe" ended with non-zero exit code: 1.

[Feature Request] Queue Implementation for Prompts Submission

Hello and thank you "infinitely" for this tool :)

I am writing to suggest a new feature that could enhance user experience: the implementation of a queue system for prompts submission.

In the user interface of another project, audiocraft-webui, there is a feature that allows the queueing of prompts. When a user clicks on the "Submit" button multiple times, instead of overwriting the current prompt, the new prompts are added to a queue.

I believe that incorporating a similar feature into this project could significantly improve its usability, as it would allow users to submit multiple different prompts in a queue without having to wait for each to be processed before submitting the next.

Thank you again for all your work on this tool. I look forward to any discussion on this feature request!

Best regards,

Peace
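(Illustrative sketch only, not the project's main-queue implementation: one way the requested behaviour could work is a background worker that drains submitted prompts one at a time. generate() below is a placeholder for the webui's real generation function.)

    import queue
    import threading

    prompt_queue = queue.Queue()

    def generate(prompt):
        print(f"generating: {prompt}")   # placeholder for the actual MusicGen call

    def worker():
        while True:
            prompt = prompt_queue.get()  # blocks until a prompt is submitted
            try:
                generate(prompt)
            finally:
                prompt_queue.task_done()

    threading.Thread(target=worker, daemon=True).start()

    # Each "Submit" click enqueues instead of overwriting the current job:
    prompt_queue.put("jazz music, jobim")
    prompt_queue.put("placeholder prompt 2")
    prompt_queue.join()                  # wait for the queue to drain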

a couple of bugs

Hi!

Installed everything on python 3.11.4.

I had to make a couple of adjustments for it to work:

In set_seed, after line 51 I added:
    seed = np.uint32(seed).item()
to convert the seed back to a plain int; otherwise random.seed(seed) would throw an exception.

In generate, at line 203, I changed file_name = "" to file_name = None;
otherwise Gradio crashed when trying to load the generated file in the UI.
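(A sketch of what the first fix amounts to; only the np.uint32(seed).item() line comes from the report above, the surrounding set_seed body is assumed for illustration.)

    import random
    import numpy as np

    def set_seed(seed):
        seed = np.uint32(seed).item()  # wrap into uint32 range, then back to a plain Python int
        random.seed(seed)              # no longer raises on out-of-range / numpy seed values
        np.random.seed(seed)
        return seed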

What to use instead of musicgen_trainer?

Newbie here.

https://github.com/chavinlo/musicgen_trainer does not exist anymore. What would be a recommended replacement, if any?

Or would audiocraft work without that lib?

ModuleNotFoundError: No module named 'train'

I just git pulled the repo and the audiocraft repo in order to update to the latest version. Now I get the following when trying to start up the UI:

  File "E:\AI\audiocraft-infinity-webui\webui.py", line 27, in <module>
    from train import train
ModuleNotFoundError: No module named 'train'

Batch generation possible?

Is there a way to generate more than one track at once using the same prompt and settings? It would be nice to have if it's not available at present. Thanks in advance.

KeyError: 'dataset'

Tried to get this running on Windows in a venv with Python 3.9, but only got the error below.
Tried once more in WSL2 with Python 3.9 in a conda environment; same error.

Any suggestions on what might be going wrong?

ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/fastapi/applications.py", line 289, in __call__
    await super().__call__(scope, receive, send)
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/starlette/middleware/cors.py", line 83, in __call__
    await self.app(scope, receive, send)
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/fastapi/routing.py", line 273, in app
    raw_response = await run_endpoint_function(
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/fastapi/routing.py", line 192, in run_endpoint_function
    return await run_in_threadpool(dependant.call, **values)
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/starlette/concurrency.py", line 41, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/gradio/routes.py", line 289, in api_info
    return gradio.blocks.get_api_info(config, serialize)  # type: ignore
  File "/home/dude/miniconda3/envs/infinityaudio/lib/python3.9/site-packages/gradio/blocks.py", line 517, in get_api_info
    serializer = serializing.COMPONENT_MAPPING[type]()
KeyError: 'dataset'

Issue with Xformers?

Hi, to start, I'm no coding expert; I barely understand this and just follow guides online, so thanks for your understanding. I've been able to install MusicGen, but when I click on Submit it always ends in an error. It looks to be a problem with xformers. I tried uninstalling and reinstalling it, but that changed nothing. I'm really interested in the "song to continue" feature of this build, so if anyone could help it would be great. Here is the command prompt output. Thanks!

G:\MusicGen\audiocraft-infinity-webui>python webui.py
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.0.1+cu118 with CUDA 1108 (you have 2.0.1+cpu)
Python 3.10.11 (you have 3.10.11)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Loading model large
seed: 1532176914
Iterations required: 2
Sample rate: 32000
Traceback (most recent call last):
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\routes.py", line 437, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 1346, in process_api
    result = await self.call_function(
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\gradio\blocks.py", line 1074, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "G:\MusicGen\audiocraft-infinity-webui\webui.py", line 212, in generate
    wav = initial_generate(melody_boolean, MODEL, text, melody, msr, continue_file, duration, cf_cutoff, sc_text)
  File "G:\MusicGen\audiocraft-infinity-webui\webui.py", line 143, in initial_generate
    wav = MODEL.generate(descriptions=[text], progress=False)
  File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\musicgen.py", line 163, in generate
    return self._generate_tokens(attributes, prompt_tokens, progress)
  File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\musicgen.py", line 309, in _generate_tokens
    gen_tokens = self.lm.generate(
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\lm.py", line 490, in generate
    next_token = self._sample_next_token(
  File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\lm.py", line 354, in _sample_next_token
    all_logits = model(
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\models\lm.py", line 253, in forward
    out = self.transformer(input_, cross_attention_src=cross_attention_input)
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 657, in forward
    x = self._apply_layer(layer, x, *args, **kwargs)
  File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 614, in _apply_layer
    return layer(*args, **kwargs)
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 508, in forward
    self._sa_block(self.norm1(x), src_mask, src_key_padding_mask))
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\transformer.py", line 599, in _sa_block
    x = self.self_attn(x, x, x,
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "G:\MusicGen\audiocraft-infinity-webui\repositories\audiocraft\audiocraft\modules\transformer.py", line 367, in forward
    x = ops.memory_efficient_attention(q, k, v, attn_mask, p=p)
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\xformers\ops\fmha\__init__.py", line 193, in memory_efficient_attention
    return _memory_efficient_attention(
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\xformers\ops\fmha\__init__.py", line 291, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\xformers\ops\fmha\__init__.py", line 307, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\xformers\ops\fmha\dispatch.py", line 96, in _dispatch_fw
    return _run_priority_list(
  File "C:\Users\huard\AppData\Local\Programs\Python\Python310\lib\site-packages\xformers\ops\fmha\dispatch.py", line 63, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(2, 1, 32, 64) (torch.float32)
     key         : shape=(2, 1, 32, 64) (torch.float32)
     value       : shape=(2, 1, 32, 64) (torch.float32)
     attn_bias   : <class 'NoneType'>
     p           : 0
`decoderF` is not supported because:
    device=cpu (supported: {'cuda'})
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
[email protected] is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    device=cpu (supported: {'cuda'})
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
`cutlassF` is not supported because:
    device=cpu (supported: {'cuda'})
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 64

FileNotFoundError: Could not find module 'libtorchaudio.pyd' (or one of its dependencies). Try using the full path with constructor syntax.

It keeps popping up whenever I run python webui.py in Git Bash on Windows 10.
I have reinstalled both torch and torchaudio multiple times. Everything is fully installed, but I still cannot run the webui because of that same error. Not sure why.

Two torch versions keep getting installed, 2.0.0 and 2.1.0. Not sure if that has anything to do with it; I'm just including it in case it's relevant.
