
rvc-webui's Introduction

RVC-WebUI



Launch

Windows

Double-click webui-user.bat to start the webui.

Linux or Mac

Run webui.sh to start the webui.


Tested environment: Windows 10, Python 3.10.9, torch 2.0.0+cu118

Troubleshooting

error: Microsoft Visual C++ 14.0 or greater is required.

Microsoft C++ Build Tools must be installed.

Step 1: Download the installer

Download

Step 2: Install C++ Build Tools

Run the installer and select C++ Build Tools in the Workloads tab.


Credits

rvc-webui's People

Contributors

autumnmotor, ddpn08, dsanno, hetima, iamgoofball, litagin02, nadare881, terracottahaniwa, tylorshine, w-okada


rvc-webui's Issues

Errors in lines

Creating venv in directory C:\Users\User\Desktop\rvc-webui-main\rvc-webui-main\venv using python "C:\Users\User\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\python.exe"
venv "C:\Users\User\Desktop\rvc-webui-main\rvc-webui-main\venv\Scripts\Python.exe"
Python 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]
Commit hash:
Installing torch and torchvision
Installing requirements
Traceback (most recent call last):
File "C:\Users\User\Desktop\rvc-webui-main\rvc-webui-main\webui.py", line 3, in
from modules import cmd_opts, ui
File "C:\Users\User\Desktop\rvc-webui-main\rvc-webui-main\modules\ui.py", line 9, in
from . import models, shared
File "C:\Users\User\Desktop\rvc-webui-main\rvc-webui-main\modules\models.py", line 6, in
from fairseq import checkpoint_utils
File "C:\Users\User\Desktop\rvc-webui-main\rvc-webui-main\venv\Lib\site-packages\fairseq_init_.py", line 20, in
from fairseq.distributed import utils as distributed_utils
File "C:\Users\User\Desktop\rvc-webui-main\rvc-webui-main\venv\Lib\site-packages\fairseq\distributed_init_.py", line 7, in
from .fully_sharded_data_parallel import (
File "C:\Users\User\Desktop\rvc-webui-main\rvc-webui-main\venv\Lib\site-packages\fairseq\distributed\fully_sharded_data_parallel.py", line 10, in
from fairseq.dataclass.configs import DistributedTrainingConfig
File "C:\Users\User\Desktop\rvc-webui-main\rvc-webui-main\venv\Lib\site-packages\fairseq\dataclass_init_.py", line 6, in
from .configs import FairseqDataclass
File "C:\Users\User\Desktop\rvc-webui-main\rvc-webui-main\venv\Lib\site-packages\fairseq\dataclass\configs.py", line 1104, in
@DataClass
^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\Lib\dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\Lib\dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\Lib\dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.1264.0_x64__qbz5n2kfra8p0\Lib\dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'fairseq.dataclass.configs.CommonConfig'> for field common is not allowed: use default_factory

Wrong Auto Embedder Model used

Error displayed on webui:

Error: Traceback (most recent call last):
  File "D:\Software\RVC-webui\rvc-webui\modules\tabs\inference.py", line 107, in infer
    audio = model.single(
  File "D:\Software\RVC-webui\rvc-webui\modules\models.py", line 128, in single
    raise Exception(f"Not supported embedder: {embedder_model_name}")
Exception: Not supported embedder: hubert_base

This seems to have started after the japanese-hubert-base update; older models should automatically fall back to contentvec.
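A minimal sketch of the fallback being suggested, assuming a simple alias table is acceptable (the names here are hypothetical, not the project's actual code):

```python
# Hypothetical fallback: map the legacy "hubert_base" embedder name stored in
# older model metadata to "contentvec" before dispatching, so models trained
# before the japanese-hubert-base update keep working.
LEGACY_EMBEDDER_ALIASES = {"hubert_base": "contentvec"}

def resolve_embedder_name(embedder_model_name: str) -> str:
    # Unknown names fall through unchanged and may still raise the
    # "Not supported embedder" error later.
    return LEGACY_EMBEDDER_ALIASES.get(embedder_model_name, embedder_model_name)
```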

Extract features fails on Linux

Source

e79ffea

Environment 1

  • Ubuntu 20.04
  • Python 3.10.9

Environment 2

  • Windows 10
  • Python 3.10.9

Symptoms

When training is run in Environment 1, the error below is shown, no features are saved to the 3_feature256 folder, and the subsequent steps fail.
The issue does not reproduce in Environment 2, where training completes normally, so it appears to be a Linux-specific problem.

2023-04-24 14:47:05 | INFO | fairseq.tasks.hubert_pretraining | current directory is /home/user/rvc-webui
2023-04-24 14:47:05 | INFO | fairseq.tasks.hubert_pretraining | HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': 'metadata', 'fine_tuning': False, 'labels': ['km'], 'label_dir': 'label', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': False, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
2023-04-24 14:47:05 | INFO | fairseq.models.hubert.hubert | HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': default, 'encoder_layers': 12, 'encoder_embed_dim': 768, 'encoder_ffn_embed_dim': 3072, 'encoder_attention_heads': 12, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.1, 'attention_dropout': 0.1, 'activation_dropout': 0.0, 'encoder_layerdrop': 0.05, 'dropout_input': 0.1, 'dropout_features': 0.1, 'final_dim': 256, 'untie_final_proj': True, 'layer_norm_first': False, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.1, 'mask_length': 10, 'mask_prob': 0.8, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 10, 'mask_channel_prob': 0.0, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': False, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False}
Traceback (most recent call last):
  File "/home/user/rvc-webui/lib/rvc/preprocessing/extract_feature.py", line 26, in load_embedder
    embedder_model = embedder_model.to(device)
  File "/home/user/rvc-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1145, in to
    return self._apply(convert)
  File "/home/user/rvc-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/user/rvc-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/user/rvc-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/home/user/rvc-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/home/user/rvc-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/home/user/rvc-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 235, in _lazy_init
    raise RuntimeError(
RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
Error: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method /home/user/rvc-webui/models/checkpoint_best_legacy_500.pt
(After this, "Error: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method" keeps being printed.)

I would appreciate it if you could look into this.
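A minimal, self-contained sketch of the workaround named in the error message, assuming the feature-extraction workers are started with torch.multiprocessing (the worker function is a placeholder, not the project's actual extractor):

```python
import torch
import torch.multiprocessing as mp

def worker(device_str: str) -> None:
    # CUDA is initialized inside the spawned child instead of being inherited
    # from a forked parent, which avoids the re-initialization error.
    device = torch.device(device_str if torch.cuda.is_available() else "cpu")
    print("worker running on", device)

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # Linux defaults to "fork", which triggers the error
    p = ctx.Process(target=worker, args=("cuda:0",))
    p.start()
    p.join()
```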

ValueError: too many values to unpack (expected 9)

Running webui.bat, I get this output:

venv "C:\AI\rvc-webui\rvc-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: a4ee6b5
Installing requirements
2023-04-23 20:07:06 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-04-23 20:07:06 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-04-23 20:07:06 | INFO | faiss.loader | Loading faiss.
2023-04-23 20:07:06 | INFO | faiss.loader | Successfully loaded faiss.
C:\AI\rvc-webui\rvc-webui\venv\lib\site-packages\gradio\deprecation.py:43: UserWarning: You have unused kwarg parameters in Number, please remove them: {'step': 1}
warnings.warn(
Traceback (most recent call last):
File "C:\AI\rvc-webui\rvc-webui\webui.py", line 15, in
webui()
File "C:\AI\rvc-webui\rvc-webui\webui.py", line 5, in webui
app = ui.create_ui()
File "C:\AI\rvc-webui\rvc-webui\modules\ui.py", line 133, in create_ui
tab()
File "C:\AI\rvc-webui\rvc-webui\modules\ui.py", line 65, in call
return self.ui(outlet)
File "C:\AI\rvc-webui\rvc-webui\modules\tabs\merge.py", line 277, in ui
(
ValueError: too many values to unpack (expected 9)
Press any key to continue . . .

can't find Inferencing voice

I entered all the settings and used one-click training, and got:
"added_IVF985_Flat_nprobe_1_IbrahemHefny_v2.index
All processes have been completed!"
but I couldn't find the model in the "Inferencing voice" slider, even though I found the voice model in the log file.
Please help.

Training uses CPU instead of GPU

Training on the GPU worked up to 836d6ad, but after updating to the latest version (193d6e7), CPU usage goes to 100% and the GPU no longer seems to be used for training.
My GPU is an Nvidia 3060.

how to download models?

I have downloaded the webui, and I also downloaded a zip file that contains two .pth files and one json, but I don't understand how to add the model to the webui. I tried putting the files in the checkpoints folder, but nothing happens, even after I reload the page or press the recycle button.
What am I missing?

When reducing with a smaller index size, inference of silent parts becomes noisy.

When performing index reduction with a smaller index size, I feel that the inference of silent parts becomes noisy.
Shouldn't mute.npy be excluded from the k-means algorithm, given that mute.wav is a special file?


terracottahaniwa@1420aaa
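A hypothetical sketch of that suggestion (the folder layout and index factory string are assumptions, not the project's actual code): drop the features derived from mute.wav before training the reduced faiss index, so the silent vectors never become k-means centroids.

```python
from pathlib import Path

import faiss
import numpy as np

feature_dir = Path("3_feature256")  # assumed feature folder layout
files = [p for p in sorted(feature_dir.glob("*.npy")) if p.stem != "mute"]
features = np.concatenate([np.load(p) for p in files]).astype(np.float32)

index = faiss.index_factory(features.shape[1], "IVF256,Flat")
index.train(features)  # k-means training without the mute vectors
index.add(features)
faiss.write_index(index, "reduced.index")
```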

Model loading never finishes

I set up the environment exactly as described in README-ja.md.
I launch webui-user.bat and can access http://127.0.0.1:7861 without problems, but when I press the recycle icon next to the model selection box on the Inference tab, the models are never loaded, no matter how long I wait.
Of course, I did place the models in rvc-webui\models\training\models\checkpoints\ .

Below is the batch file output:
Creating venv in directory C:\RVC\rvc-webui\venv using python "C:\Users\rinay\AppData\Local\Programs\Python\Python310\python.exe"
venv "C:\RVC\rvc-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: fae3965
Installing torch and torchvision
Installing requirements
2023-07-11 23:49:07 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-07-11 23:49:07 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-07-11 23:49:07 | INFO | faiss.loader | Loading faiss.
2023-07-11 23:49:07 | INFO | faiss.loader | Successfully loaded faiss.
C:\RVC\rvc-webui\venv\lib\site-packages\gradio\deprecation.py:43: UserWarning: You have unused kwarg parameters in Checkbox, please remove them: {'disabled': False}
warnings.warn(
Running on local URL: http://127.0.0.1:7861

To create a public link, set share=True in launch().
2023-07-11 23:57:27 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7861/reset "HTTP/1.1 200 OK"

Below: the model loading that never finishes (screenshot).

Error when resuming training

When I resume training while intermediate progress files exist, the following error appears.

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "G:\venv\RVC\.venv\lib\site-packages\torch\multiprocessing\spawn.py", line 69, in _wrap
    fn(i, *args)
  File "G:\venv\RVC\rvc-webui\modules\training\train.py", line 506, in run
    lr=float(lr),
UnboundLocalError: local variable 'lr' referenced before assignment
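For reference, a hedged illustration of this class of bug (hypothetical names, not the actual train.py): if lr is only assigned on the fresh-training path, the resume path reaches lr=float(lr) with the variable still unbound.

```python
def pick_learning_rate(config_lr: float, resuming: bool, checkpoint: dict | None) -> float:
    # Assign unconditionally first so the resume branch can never hit
    # UnboundLocalError; optionally override from the checkpoint afterwards.
    lr = config_lr
    if resuming and checkpoint is not None:
        lr = checkpoint.get("learning_rate", lr)
    return float(lr)
```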

When I tried to launch RVC-webui, the following error message appeared

Error message

Python.3.7_qbz5n2kfra8p0\python.exe"
venv "D:\rikiba\rvc-webui-main (1)\rvc-webui-main\venv\Scripts\Python.exe"
Python 3.7.9 (tags/v3.7.9:13c94747c7, Aug 17 2020, 16:30:00) [MSC v.1900 64 bit (AMD64)]
Commit hash:
Installing torch and torchvision
Installing requirements
Traceback (most recent call last):
File "launch.py", line 138, in
prepare_environment()
File "launch.py", line 122, in prepare_environment
errdesc=f"Couldn't install requirements",
File "launch.py", line 36, in run
raise RuntimeError(message)
RuntimeError: Couldn't install requirements.
Command: "D:\rikiba\rvc-webui-main (1)\rvc-webui-main\venv\Scripts\python.exe" -m pip install -r requirements.txt
Error code: 1
stdout: Collecting gradio==3.28.3
Using cached gradio-3.28.3-py3-none-any.whl (17.3 MB)
Collecting tqdm==4.65.0
Using cached tqdm-4.65.0-py3-none-any.whl (77 kB)

stderr: ERROR: Could not find a version that satisfies the requirement numpy==1.23.5 (from -r requirements/main.txt (line 3)) (from versions: 1.3.0, 1.4.1, 1.5.0, 1.5.1, 1.6.0, 1.6.1, 1.6.2, 1.7.0, 1.7.1, 1.7.2, 1.8.0, 1.8.1, 1.8.2, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.10.0.post2, 1.10.1, 1.10.2, 1.10.4, 1.11.0, 1.11.1, 1.11.2, 1.11.3, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 1.13.3, 1.14.0, 1.14.1, 1.14.2, 1.14.3, 1.14.4, 1.14.5, 1.14.6, 1.15.0, 1.15.1, 1.15.2, 1.15.3, 1.15.4, 1.16.0, 1.16.1, 1.16.2, 1.16.3, 1.16.4, 1.16.5, 1.16.6, 1.17.0, 1.17.1, 1.17.2, 1.17.3, 1.17.4, 1.17.5, 1.18.0, 1.18.1, 1.18.2, 1.18.3, 1.18.4, 1.18.5, 1.19.0, 1.19.1, 1.19.2, 1.19.3, 1.19.4, 1.19.5, 1.20.0, 1.20.1, 1.20.2, 1.20.3, 1.21.0, 1.21.1, 1.21.2, 1.21.3, 1.21.4, 1.21.5, 1.21.6)
ERROR: No matching distribution found for numpy==1.23.5 (from -r requirements/main.txt (line 3))
WARNING: You are using pip version 20.1.1; however, version 23.1.2 is available.
You should consider upgrading via the 'D:\rikiba\rvc-webui-main (1)\rvc-webui-main\venv\Scripts\python.exe -m pip install --upgrade pip' command.

An error occurred while running webui-user.bat.

Environment: Windows 11, Python 3.11.3
Running webui-user.bat in this environment produced the following error.
venv "C:\tails.exe\rvc-webui\venv\Scripts\Python.exe"
Python 3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)]
Commit hash: a9572eb
Traceback (most recent call last):
File "C:\tails.exe\rvc-webui\launch.py", line 164, in
prepare_environment()
File "C:\tails.exe\rvc-webui\launch.py", line 136, in prepare_environment
run_python(
File "C:\tails.exe\rvc-webui\launch.py", line 84, in run_python
return run(f'"{python}" -c "{code}"', desc, errdesc)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\tails.exe\rvc-webui\launch.py", line 36, in run
raise RuntimeError(message)
RuntimeError: Error running command.
Command: "C:\tails.exe\rvc-webui\venv\Scripts\Python.exe" -c "import torch; assert torch.cuda.is_available(), 'Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'"
Error code: 1
stdout:
stderr: Traceback (most recent call last):
File "", line 1, in
Assertion Error: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check

Press any key to continue . . .

In this case, what should I fix?
If possible, I would also appreciate detailed steps.
Thank you very much.
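For reference, a minimal diagnostic sketch (run it inside the venv the webui created; this is an assumption about the setup, not project code) to check whether the installed torch build can see the GPU at all before resorting to --skip-torch-cuda-test:

```python
import torch

print(torch.__version__)          # a "+cpu" suffix means a CPU-only build was installed
print(torch.cuda.is_available())  # False is exactly what the launcher's assert is checking
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```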

An error occurs when set to harvest

Hello,
I am using だだっこぱんだ's Colab version of RVC-WEBUI on a MacBook Air (Ventura).
It is easy to set up and use, but there is one problem with the processing when converting vocals to a model's voice.

Whenever I set the pitch extraction to harvest, it always fails with the error
Something went wrong
Connection errored out.
and the conversion cannot complete.
I have tried many Wi-Fi environments, but it does not go away even when connected to the internet.
With auto the conversion works, but I really want to convert with harvest.
How can I get it to run to the end?

About the Dataset glob

Now that the dataset folder is specified as a glob pattern, there is more flexibility, but I suspect there is almost no case where a pattern other than /**/*.wav would ever be specified.
Wouldn't it be better to go back to specifying a folder and have the script append /**/*.wav itself (as sketched below), so that entering the input stays simple? What do you think?
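A minimal sketch of that proposal, assuming a small helper on the script side (hypothetical function, not the project's current code):

```python
import glob
import os

def list_dataset_wavs(dataset_dir: str) -> list[str]:
    # The UI would accept a plain folder path; the script appends the
    # recursive /**/*.wav pattern itself.
    pattern = os.path.join(dataset_dir, "**", "*.wav")
    return sorted(glob.glob(pattern, recursive=True))
```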

ValueError: mutable default <class 'fairseq.dataclass.configs.CommonConfig'> for field common is not allowed: use default_factory

Webui.bat and webui-user.bat are throwing this error:

venv "D:\audiogen\rvc-webui-main\venv\Scripts\Python.exe" Python 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] Commit hash: <none> Installing requirements Traceback (most recent call last): File "D:\audiogen\rvc-webui-main\webui.py", line 3, in <module> from modules import cmd_opts, ui File "D:\audiogen\rvc-webui-main\modules\ui.py", line 9, in <module> from . import models, shared File "D:\audiogen\rvc-webui-main\modules\models.py", line 6, in <module> from fairseq import checkpoint_utils File "D:\audiogen\rvc-webui-main\venv\Lib\site-packages\fairseq\__init__.py", line 20, in <module> from fairseq.distributed import utils as distributed_utils File "D:\audiogen\rvc-webui-main\venv\Lib\site-packages\fairseq\distributed\__init__.py", line 7, in <module> from .fully_sharded_data_parallel import ( File "D:\audiogen\rvc-webui-main\venv\Lib\site-packages\fairseq\distributed\fully_sharded_data_parallel.py", line 10, in <module> from fairseq.dataclass.configs import DistributedTrainingConfig File "D:\audiogen\rvc-webui-main\venv\Lib\site-packages\fairseq\dataclass\__init__.py", line 6, in <module> from .configs import FairseqDataclass File "D:\audiogen\rvc-webui-main\venv\Lib\site-packages\fairseq\dataclass\configs.py", line 1104, in <module> @dataclass ^^^^^^^^^ File "C:\Python311\Lib\dataclasses.py", line 1221, in dataclass return wrap(cls) ^^^^^^^^^ File "C:\Python311\Lib\dataclasses.py", line 1211, in wrap return _process_class(cls, init, repr, eq, order, unsafe_hash, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python311\Lib\dataclasses.py", line 959, in _process_class cls_fields.append(_get_field(cls, name, type, kw_only)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Python311\Lib\dataclasses.py", line 816, in _get_field raise ValueError(f'mutable default {type(f.default)} for field ' ValueError: mutable default <class 'fairseq.dataclass.configs.CommonConfig'> for field common is not allowed: use default_factory Press any key to continue . . .

help with separating vocals

This is the error I got while trying to separate vocals from a song: You Only Live Once - The Strokes - 2023 Cover Version (320 kbps).mp3.reformatted.wav ->
Traceback (most recent call last):
File "D:\voice\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\librosa\core\audio.py", line 155, in load
context = sf.SoundFile(path)
File "D:\voice\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\soundfile.py", line 655, in __init__
self._file = self._open(file, mode_int, closefd)
File "D:\voice\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\soundfile.py", line 1213, in _open
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening 'D:\voice\Retrieval-based-Voice-Conversion-WebUI\TEMP/You Only Live Once - The Strokes - 2023 Cover Version (320 kbps).mp3.reformatted.wav': System error.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "D:\voice\Retrieval-based-Voice-Conversion-WebUI\infer-web.py", line 370, in uvr
pre_fun.path_audio(
File "D:\voice\Retrieval-based-Voice-Conversion-WebUI\infer_uvr5.py", line 64, in path_audio
) = librosa.core.load( # 理论上librosa读取可能对某些音频有bug,应该上ffmpeg读取,但是太麻烦了弃坑
File "D:\voice\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\librosa\util\decorators.py", line 104, in inner_f
return f(**kwargs)
File "D:\voice\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\librosa\core\audio.py", line 174, in load
y, sr_native = __audioread_load(path, offset, duration, dtype)
File "D:\voice\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\librosa\core\audio.py", line 198, in __audioread_load
with audioread.audio_open(path) as input_file:
File "D:\voice\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\audioread\__init__.py", line 111, in audio_open
return BackendClass(path)
File "D:\voice\Retrieval-based-Voice-Conversion-WebUI\runtime\lib\site-packages\audioread\rawread.py", line 62, in __init__
self._fh = open(filename, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'D:\voice\Retrieval-based-Voice-Conversion-WebUI\TEMP/You Only Live Once - The Strokes - 2023 Cover Version (320 kbps).mp3.reformatted.wav'

Loading without an end in sight.

When changing any settings that require loading time or clicking the infer button, it loads without an end in sight.


Console output:

2023-07-13 00:30:05 | INFO | httpx | HTTP Request: POST http://ip:port/reset "HTTP/1.1 200 OK"
2023-07-13 00:37:50 | ERROR | asyncio | Task exception was never retrieved
future: <Task finished name='rjtxsra2mxc_0' coro=<Queue.process_events() done, defined at D:\rvc-webui-main\venv\lib\site-packages\gradio\queueing.py:343> exception=1 validation error for PredictBody
event_id
  Field required [type=missing, input_value={'fn_index': 0, 'data': [...on_hash': 'rjtxsra2mxc'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.1.2/v/missing>
Traceback (most recent call last):
  File "D:\rvc-webui-main\venv\lib\site-packages\gradio\queueing.py", line 347, in process_events
    client_awake = await self.gather_event_data(event)
  File "D:\rvc-webui-main\venv\lib\site-packages\gradio\queueing.py", line 220, in gather_event_data
    data, client_awake = await self.get_message(event, timeout=receive_timeout)
  File "D:\rvc-webui-main\venv\lib\site-packages\gradio\queueing.py", line 456, in get_message
    return PredictBody(**data), True
  File "D:\rvc-webui-main\venv\lib\site-packages\pydantic\main.py", line 150, in __init__
    __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
pydantic_core._pydantic_core.ValidationError: 1 validation error for PredictBody
event_id
  Field required [type=missing, input_value={'fn_index': 0, 'data': [...on_hash': 'rjtxsra2mxc'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.1.2/v/missing
2023-07-13 00:37:50 | INFO | httpx | HTTP Request: POST http://ip:port/reset "HTTP/1.1 200 OK"
2023-07-13 00:37:55 | INFO | httpx | HTTP Request: POST http://ip:port/reset "HTTP/1.1 200 OK"
2023-07-13 00:42:07 | ERROR | asyncio | Task exception was never retrieved
future: <Task finished name='rjtxsra2mxc_2' coro=<Queue.process_events() done, defined at D:\rvc-webui-main\venv\lib\site-packages\gradio\queueing.py:343> exception=1 validation error for PredictBody
event_id
  Field required [type=missing, input_value={'fn_index': 2, 'data': [...on_hash': 'rjtxsra2mxc'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.1.2/v/missing>
Traceback (most recent call last):
  File "D:\rvc-webui-main\venv\lib\site-packages\gradio\queueing.py", line 347, in process_events
    client_awake = await self.gather_event_data(event)
  File "D:\rvc-webui-main\venv\lib\site-packages\gradio\queueing.py", line 220, in gather_event_data
    data, client_awake = await self.get_message(event, timeout=receive_timeout)
  File "D:\rvc-webui-main\venv\lib\site-packages\gradio\queueing.py", line 456, in get_message
    return PredictBody(**data), True
  File "D:\rvc-webui-main\venv\lib\site-packages\pydantic\main.py", line 150, in __init__
    __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
pydantic_core._pydantic_core.ValidationError: 1 validation error for PredictBody
event_id
  Field required [type=missing, input_value={'fn_index': 2, 'data': [...on_hash': 'rjtxsra2mxc'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.1.2/v/missing

An error occurs during training

When I start the training process, the following error appears.

train_all: emb_name: contentvec█████████████████████████████████████████████████▉ | 1008/1036 [00:28<00:00, 46.31it/s]
Traceback (most recent call last):████████████████████████████████████████████████▊| 1033/1036 [00:28<00:00, 55.81it/s]
File "F:\ai\rvc-webui\venv\lib\site-packages\gradio\routes.py", line 395, in run_predict
output = await app.get_blocks().process_api(
File "F:\ai\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1193, in process_api
result = await self.call_function(
File "F:\ai\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 930, in call_function
prediction = await anyio.to_thread.run_sync(
File "F:\ai\rvc-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "F:\ai\rvc-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "F:\ai\rvc-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "F:\ai\rvc-webui\venv\lib\site-packages\gradio\utils.py", line 491, in async_iteration
return next(iterator)
File "F:\ai\rvc-webui\modules\tabs\training.py", line 183, in train_all
train_model(
File "F:\ai\rvc-webui\lib\rvc\train.py", line 199, in train_model
if device.type == "mps":
AttributeError: 'NoneType' object has no attribute 'type'

The settings are as follows.

Ignore cache: enabled
Speaker ID : 0
Target sampling rate: 40k
f0 Model: Yes
Using phone embedder: contentvec
Embedding channels: 256
GPU ID: 0, 1
Number of CPU processes: 8
Normalize audio volume when preprocess: Yes
Pitch extraction algorithm: harvest
Batch size: 14
Number of epochs: 50
Save every epoch: 10
Cache batch: On
FP16: On
Pre trained generator path: F:\ai\rvc-webui\models\pretrained\f0G40k256.pth
Pre trained discriminator path: F:\ai\rvc-webui\models\pretrained\f0D40k256.pth

I tried changing the parameters (such as Using phone embedder and Target sampling rate) a little at a time, but nothing improved.
I have two graphics cards installed.

What should I do?
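Without seeing the exact code path it is hard to say why device ends up as None here, but a defensive sketch of GPU-ID resolution (a hypothetical helper, not lib/rvc/train.py) would avoid dereferencing None at all:

```python
import torch

def resolve_device(gpu_ids: str) -> torch.device:
    # "0, 1" -> ["0", "1"]; fall back explicitly instead of returning None.
    ids = [s.strip() for s in gpu_ids.split(",") if s.strip()]
    if ids and torch.cuda.is_available():
        return torch.device(f"cuda:{ids[0]}")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")
```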

Error during Inference

I am running from Google Colab.
On the first run, an error is displayed but the expected file is still output.
On the second run (only the Source Audio was changed), no file is output at all.
The model has both the index folder and the pth file.

Error shown in both runs

(screenshot of the error)

First run

Log

load contentvec embedder
2023-05-12 16:43:16 | INFO | fairseq.tasks.hubert_pretraining | current directory is /content/rvc-webui
2023-05-12 16:43:16 | INFO | fairseq.tasks.hubert_pretraining | HubertPretrainingTask Config {'_name': 'hubert_pretraining', 'data': 'metadata', 'fine_tuning': False, 'labels': ['km'], 'label_dir': 'label', 'label_rate': 50.0, 'sample_rate': 16000, 'normalize': False, 'enable_padding': False, 'max_keep_size': None, 'max_sample_size': 250000, 'min_sample_size': 32000, 'single_target': False, 'random_crop': True, 'pad_audio': False}
2023-05-12 16:43:16 | INFO | fairseq.models.hubert.hubert | HubertModel Config: {'_name': 'hubert', 'label_rate': 50.0, 'extractor_mode': default, 'encoder_layers': 12, 'encoder_embed_dim': 768, 'encoder_ffn_embed_dim': 3072, 'encoder_attention_heads': 12, 'activation_fn': gelu, 'layer_type': transformer, 'dropout': 0.1, 'attention_dropout': 0.1, 'activation_dropout': 0.0, 'encoder_layerdrop': 0.05, 'dropout_input': 0.1, 'dropout_features': 0.1, 'final_dim': 256, 'untie_final_proj': True, 'layer_norm_first': False, 'conv_feature_layers': '[(512,10,5)] + [(512,3,2)] * 4 + [(512,2,2)] * 2', 'conv_bias': False, 'logit_temp': 0.1, 'target_glu': False, 'feature_grad_mult': 0.1, 'mask_length': 10, 'mask_prob': 0.8, 'mask_selection': static, 'mask_other': 0.0, 'no_mask_overlap': False, 'mask_min_space': 1, 'mask_channel_length': 10, 'mask_channel_prob': 0.0, 'mask_channel_selection': static, 'mask_channel_other': 0.0, 'no_mask_channel_overlap': False, 'mask_channel_min_space': 1, 'conv_pos': 128, 'conv_pos_groups': 16, 'latent_temp': [2.0, 0.5, 0.999995], 'skip_masked': False, 'skip_nomask': False, 'checkpoint_activations': False, 'required_seq_len_multiple': 2, 'depthwise_conv_kernel_size': 31, 'attn_type': '', 'pos_enc_type': 'abs', 'fp16': False}
Loaded /content/rvc-webui/models/checkpoints/hyde4_index/hyde4.0.index and /content/rvc-webui/models/checkpoints/hyde4_index/hyde4.0.big.npy

Second run

Log

Exception ignored in: <generator object Inference.ui.<locals>.infer at 0x7f54ff55cf20>
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 862, in run
    context, func, args, future = item
RuntimeError: generator ignored GeneratorExit
Loaded /content/rvc-webui/models/checkpoints/hyde4_index/hyde4.0.index and /content/rvc-webui/models/checkpoints/hyde4_index/hyde4.0.big.npy

Launch suddenly stopped working

Python 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0]
Commit hash: 3aed378
Installing requirements
Traceback (most recent call last):
File "/content/rvc-webui/webui.py", line 3, in
from modules import cmd_opts, ui
File "/content/rvc-webui/modules/ui.py", line 9, in
from . import models, shared
File "/content/rvc-webui/modules/models.py", line 6, in
from fairseq import checkpoint_utils
File "/opt/conda/lib/python3.11/site-packages/fairseq/init.py", line 20, in
from fairseq.distributed import utils as distributed_utils
File "/opt/conda/lib/python3.11/site-packages/fairseq/distributed/init.py", line 7, in
from .fully_sharded_data_parallel import (
File "/opt/conda/lib/python3.11/site-packages/fairseq/distributed/fully_sharded_data_parallel.py", line 10, in
from fairseq.dataclass.configs import DistributedTrainingConfig
File "/opt/conda/lib/python3.11/site-packages/fairseq/dataclass/init.py", line 6, in
from .configs import FairseqDataclass
File "/opt/conda/lib/python3.11/site-packages/fairseq/dataclass/configs.py", line 1104, in
@DataClass
^^^^^^^^^
File "/opt/conda/lib/python3.11/dataclasses.py", line 1230, in dataclass
return wrap(cls)
^^^^^^^^^
File "/opt/conda/lib/python3.11/dataclasses.py", line 1220, in wrap
return _process_class(cls, init, repr, eq, order, unsafe_hash,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/dataclasses.py", line 958, in _process_class
cls_fields.append(_get_field(cls, name, type, kw_only))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/dataclasses.py", line 815, in _get_field
raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'fairseq.dataclass.configs.CommonConfig'> for field common is not allowed: use default_factory

The output ends without a URL ever being printed.
I'm a PC beginner, so I would appreciate an easy-to-follow explanation of what to do.

About settings for low-end GPUs

I installed rvc-webui because I wanted to run it on my own PC (GTX 1660 Super, 6 GB VRAM), but it failed at the Inference stage with RuntimeError: CUDA out of memory.
Stable Diffusion and similar tools have the same problem, which I had worked around by setting set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.6, max_split_size_mb:128. With the same setting here, output is generated, but the generated file is silence of the same length as the song.
I would appreciate it if you could add settings for low-end GPUs. Thank you.
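For reference, a minimal sketch of the workaround described above, assuming the variable must be set before any CUDA allocation happens (it only reduces fragmentation and does not explain the silent output file):

```python
import os

# Same setting the author used for Stable Diffusion, applied from Python instead of `set`.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (
    "garbage_collection_threshold:0.6,max_split_size_mb:128"
)

import torch  # imported after the variable is set so the caching allocator sees it
print(torch.cuda.is_available())
```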

Running Launch webui now produces an ERROR.

I always use だだっこパンダ's WebUI.
Since yesterday, running the cell produces the error below.
I am using the Colab notebook here (https://github.com/ddPn08/rvc-webui-colab).
Could you tell me how to fix this so it can be used again?
The error output follows.


/bin/bash: line 2: /opt/conda/bin/conda: No such file or directory
Python 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0]
Commit hash: c4a12a8
Installing requirements
Traceback (most recent call last):
File "/content/rvc-webui/launch.py", line 138, in
prepare_environment()
File "/content/rvc-webui/launch.py", line 119, in prepare_environment
run(
File "/content/rvc-webui/launch.py", line 36, in run
raise RuntimeError(message)
RuntimeError: Couldn't install requirements.
Command: "/usr/bin/python3" -m pip install -r requirements.txt
Error code: 1
stdout: Collecting gradio==3.36.1 (from -r requirements/main.txt (line 1))
Downloading gradio-3.36.1-py3-none-any.whl (19.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 19.8/19.8 MB 41.9 MB/s eta 0:00:00
Requirement already satisfied: tqdm==4.65.0 in /usr/local/lib/python3.10/dist-packages (from -r requirements/main.txt (line 2)) (4.65.0)
Collecting numpy==1.23.5 (from -r requirements/main.txt (line 3))
Downloading numpy-1.23.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.1/17.1 MB 70.4 MB/s eta 0:00:00
Collecting faiss-cpu==1.7.3 (from -r requirements/main.txt (line 4))
Downloading faiss_cpu-1.7.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.0/17.0 MB 99.4 MB/s eta 0:00:00
Collecting fairseq==0.12.2 (from -r requirements/main.txt (line 5))
Downloading fairseq-0.12.2.tar.gz (9.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.6/9.6 MB 120.5 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: matplotlib==3.7.1 in /usr/local/lib/python3.10/dist-packages (from -r requirements/main.txt (line 6)) (3.7.1)
Collecting scipy==1.9.3 (from -r requirements/main.txt (line 7))
Downloading scipy-1.9.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (33.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 33.7/33.7 MB 54.3 MB/s eta 0:00:00
Collecting librosa==0.9.1 (from -r requirements/main.txt (line 8))
Downloading librosa-0.9.1-py3-none-any.whl (213 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 213.1/213.1 kB 21.6 MB/s eta 0:00:00
Collecting pyworld==0.3.2 (from -r requirements/main.txt (line 9))
Downloading pyworld-0.3.2.tar.gz (214 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 214.4/214.4 kB 25.6 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Requirement already satisfied: soundfile==0.12.1 in /usr/local/lib/python3.10/dist-packages (from -r requirements/main.txt (line 10)) (0.12.1)
Collecting ffmpeg-python==0.2.0 (from -r requirements/main.txt (line 11))
Downloading ffmpeg_python-0.2.0-py3-none-any.whl (25 kB)
Collecting pydub==0.25.1 (from -r requirements/main.txt (line 12))
Downloading pydub-0.25.1-py2.py3-none-any.whl (32 kB)
Requirement already satisfied: soxr==0.3.5 in /usr/local/lib/python3.10/dist-packages (from -r requirements/main.txt (line 13)) (0.3.5)
Collecting transformers==4.28.1 (from -r requirements/main.txt (line 14))
Downloading transformers-4.28.1-py3-none-any.whl (7.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.0/7.0 MB 128.9 MB/s eta 0:00:00
Collecting torchcrepe==0.0.20 (from -r requirements/main.txt (line 15))
Downloading torchcrepe-0.0.20-py3-none-any.whl (72.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 72.3/72.3 MB 12.8 MB/s eta 0:00:00
Collecting Flask==2.3.2 (from -r requirements/main.txt (line 16))
Downloading Flask-2.3.2-py3-none-any.whl (96 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 96.9/96.9 kB 14.2 MB/s eta 0:00:00
Requirement already satisfied: tensorboard in /usr/local/lib/python3.10/dist-packages (from -r requirements/main.txt (line 18)) (2.12.3)
Collecting tensorboardX (from -r requirements/main.txt (line 19))
Downloading tensorboardX-2.6.1-py2.py3-none-any.whl (101 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 101.6/101.6 kB 14.5 MB/s eta 0:00:00
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from -r requirements/main.txt (line 20)) (2.27.1)
Collecting aiofiles (from gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading aiofiles-23.1.0-py3-none-any.whl (14 kB)
Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from gradio==3.36.1->-r requirements/main.txt (line 1)) (3.8.4)
Requirement already satisfied: altair>=4.2.0 in /usr/local/lib/python3.10/dist-packages (from gradio==3.36.1->-r requirements/main.txt (line 1)) (4.2.2)
Collecting fastapi (from gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading fastapi-0.100.0-py3-none-any.whl (65 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 65.7/65.7 kB 8.7 MB/s eta 0:00:00
Collecting ffmpy (from gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading ffmpy-0.3.1.tar.gz (5.5 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting gradio-client>=0.2.7 (from gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading gradio_client-0.2.10-py3-none-any.whl (288 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 289.0/289.0 kB 34.3 MB/s eta 0:00:00
Collecting httpx (from gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading httpx-0.24.1-py3-none-any.whl (75 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 75.4/75.4 kB 10.1 MB/s eta 0:00:00
Collecting huggingface-hub>=0.14.0 (from gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading huggingface_hub-0.16.4-py3-none-any.whl (268 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 268.8/268.8 kB 33.2 MB/s eta 0:00:00
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from gradio==3.36.1->-r requirements/main.txt (line 1)) (3.1.2)
Requirement already satisfied: markdown-it-py[linkify]>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from gradio==3.36.1->-r requirements/main.txt (line 1)) (3.0.0)
Requirement already satisfied: markupsafe in /usr/local/lib/python3.10/dist-packages (from gradio==3.36.1->-r requirements/main.txt (line 1)) (2.1.3)
Collecting mdit-py-plugins<=0.3.3 (from gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading mdit_py_plugins-0.3.3-py3-none-any.whl (50 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.5/50.5 kB 7.1 MB/s eta 0:00:00
Collecting orjson (from gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading orjson-3.9.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (138 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 138.7/138.7 kB 16.6 MB/s eta 0:00:00
Requirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from gradio==3.36.1->-r requirements/main.txt (line 1)) (1.5.3)
Requirement already satisfied: pillow in /usr/local/lib/python3.10/dist-packages (from gradio==3.36.1->-r requirements/main.txt (line 1)) (8.4.0)
Requirement already satisfied: pydantic in /usr/local/lib/python3.10/dist-packages (from gradio==3.36.1->-r requirements/main.txt (line 1)) (1.10.11)
Requirement already satisfied: pygments>=2.12.0 in /usr/local/lib/python3.10/dist-packages (from gradio==3.36.1->-r requirements/main.txt (line 1)) (2.14.0)
Collecting python-multipart (from gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading python_multipart-0.0.6-py3-none-any.whl (45 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.7/45.7 kB 2.2 MB/s eta 0:00:00
Requirement already satisfied: pyyaml in /usr/local/lib/python3.10/dist-packages (from gradio==3.36.1->-r requirements/main.txt (line 1)) (6.0)
Collecting semantic-version (from gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading semantic_version-2.10.0-py2.py3-none-any.whl (15 kB)
Collecting uvicorn>=0.14.0 (from gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading uvicorn-0.23.1-py3-none-any.whl (59 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 59.5/59.5 kB 8.1 MB/s eta 0:00:00
Collecting websockets>=10.0 (from gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading websockets-11.0.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (129 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 129.9/129.9 kB 16.2 MB/s eta 0:00:00
Requirement already satisfied: cffi in /usr/local/lib/python3.10/dist-packages (from fairseq==0.12.2->-r requirements/main.txt (line 5)) (1.15.1)
Requirement already satisfied: cython in /usr/local/lib/python3.10/dist-packages (from fairseq==0.12.2->-r requirements/main.txt (line 5)) (0.29.36)
Collecting hydra-core<1.1,>=1.0.7 (from fairseq==0.12.2->-r requirements/main.txt (line 5))
Downloading hydra_core-1.0.7-py3-none-any.whl (123 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 123.8/123.8 kB 17.6 MB/s eta 0:00:00
Collecting omegaconf<2.1 (from fairseq==0.12.2->-r requirements/main.txt (line 5))
Downloading omegaconf-2.0.6-py3-none-any.whl (36 kB)
Requirement already satisfied: regex in /usr/local/lib/python3.10/dist-packages (from fairseq==0.12.2->-r requirements/main.txt (line 5)) (2022.10.31)
Collecting sacrebleu>=1.4.12 (from fairseq==0.12.2->-r requirements/main.txt (line 5))
Downloading sacrebleu-2.3.1-py3-none-any.whl (118 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 118.9/118.9 kB 16.9 MB/s eta 0:00:00
Requirement already satisfied: torch in /usr/local/lib/python3.10/dist-packages (from fairseq==0.12.2->-r requirements/main.txt (line 5)) (2.0.1+cu118)
Collecting bitarray (from fairseq==0.12.2->-r requirements/main.txt (line 5))
Downloading bitarray-2.7.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (273 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 273.6/273.6 kB 31.4 MB/s eta 0:00:00
Requirement already satisfied: torchaudio>=0.8.0 in /usr/local/lib/python3.10/dist-packages (from fairseq==0.12.2->-r requirements/main.txt (line 5)) (2.0.2+cu118)
Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib==3.7.1->-r requirements/main.txt (line 6)) (1.1.0)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib==3.7.1->-r requirements/main.txt (line 6)) (0.11.0)
Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib==3.7.1->-r requirements/main.txt (line 6)) (4.41.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib==3.7.1->-r requirements/main.txt (line 6)) (1.4.4)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib==3.7.1->-r requirements/main.txt (line 6)) (23.1)
Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib==3.7.1->-r requirements/main.txt (line 6)) (3.1.0)
Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.10/dist-packages (from matplotlib==3.7.1->-r requirements/main.txt (line 6)) (2.8.2)
Requirement already satisfied: audioread>=2.1.5 in /usr/local/lib/python3.10/dist-packages (from librosa==0.9.1->-r requirements/main.txt (line 8)) (3.0.0)
Requirement already satisfied: scikit-learn>=0.19.1 in /usr/local/lib/python3.10/dist-packages (from librosa==0.9.1->-r requirements/main.txt (line 8)) (1.2.2)
Requirement already satisfied: joblib>=0.14 in /usr/local/lib/python3.10/dist-packages (from librosa==0.9.1->-r requirements/main.txt (line 8)) (1.3.1)
Requirement already satisfied: decorator>=4.0.10 in /usr/local/lib/python3.10/dist-packages (from librosa==0.9.1->-r requirements/main.txt (line 8)) (4.4.2)
Collecting resampy>=0.2.2 (from librosa==0.9.1->-r requirements/main.txt (line 8))
Downloading resampy-0.4.2-py3-none-any.whl (3.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.1/3.1 MB 93.1 MB/s eta 0:00:00
Requirement already satisfied: numba>=0.45.1 in /usr/local/lib/python3.10/dist-packages (from librosa==0.9.1->-r requirements/main.txt (line 8)) (0.56.4)
Requirement already satisfied: pooch>=1.0 in /usr/local/lib/python3.10/dist-packages (from librosa==0.9.1->-r requirements/main.txt (line 8)) (1.6.0)
Requirement already satisfied: future in /usr/local/lib/python3.10/dist-packages (from ffmpeg-python==0.2.0->-r requirements/main.txt (line 11)) (0.18.3)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from transformers==4.28.1->-r requirements/main.txt (line 14)) (3.12.2)
Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers==4.28.1->-r requirements/main.txt (line 14))
Downloading tokenizers-0.13.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.8/7.8 MB 114.2 MB/s eta 0:00:00
Requirement already satisfied: Werkzeug>=2.3.3 in /usr/local/lib/python3.10/dist-packages (from Flask==2.3.2->-r requirements/main.txt (line 16)) (2.3.6)
Requirement already satisfied: itsdangerous>=2.1.2 in /usr/local/lib/python3.10/dist-packages (from Flask==2.3.2->-r requirements/main.txt (line 16)) (2.1.2)
Requirement already satisfied: click>=8.1.3 in /usr/local/lib/python3.10/dist-packages (from Flask==2.3.2->-r requirements/main.txt (line 16)) (8.1.4)
Collecting blinker>=1.6.2 (from Flask==2.3.2->-r requirements/main.txt (line 16))
Downloading blinker-1.6.2-py3-none-any.whl (13 kB)
Requirement already satisfied: absl-py>=0.4 in /usr/local/lib/python3.10/dist-packages (from tensorboard->-r requirements/main.txt (line 18)) (1.4.0)
Requirement already satisfied: grpcio>=1.48.2 in /usr/local/lib/python3.10/dist-packages (from tensorboard->-r requirements/main.txt (line 18)) (1.56.0)
Requirement already satisfied: google-auth<3,>=1.6.3 in /usr/local/lib/python3.10/dist-packages (from tensorboard->-r requirements/main.txt (line 18)) (2.17.3)
Requirement already satisfied: google-auth-oauthlib<1.1,>=0.5 in /usr/local/lib/python3.10/dist-packages (from tensorboard->-r requirements/main.txt (line 18)) (1.0.0)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.10/dist-packages (from tensorboard->-r requirements/main.txt (line 18)) (3.4.3)
Requirement already satisfied: protobuf>=3.19.6 in /usr/local/lib/python3.10/dist-packages (from tensorboard->-r requirements/main.txt (line 18)) (3.20.3)
Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.10/dist-packages (from tensorboard->-r requirements/main.txt (line 18)) (67.7.2)
Requirement already satisfied: tensorboard-data-server<0.8.0,>=0.7.0 in /usr/local/lib/python3.10/dist-packages (from tensorboard->-r requirements/main.txt (line 18)) (0.7.1)
Requirement already satisfied: wheel>=0.26 in /usr/local/lib/python3.10/dist-packages (from tensorboard->-r requirements/main.txt (line 18)) (0.40.0)
Collecting protobuf>=3.19.6 (from tensorboard->-r requirements/main.txt (line 18))
Downloading protobuf-4.23.4-cp37-abi3-manylinux2014_x86_64.whl (304 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 304.5/304.5 kB 33.7 MB/s eta 0:00:00
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->-r requirements/main.txt (line 20)) (1.26.16)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->-r requirements/main.txt (line 20)) (2023.5.7)
Requirement already satisfied: charset-normalizer~=2.0.0 in /usr/local/lib/python3.10/dist-packages (from requests->-r requirements/main.txt (line 20)) (2.0.12)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->-r requirements/main.txt (line 20)) (3.4)
Requirement already satisfied: entrypoints in /usr/local/lib/python3.10/dist-packages (from altair>=4.2.0->gradio==3.36.1->-r requirements/main.txt (line 1)) (0.4)
Requirement already satisfied: jsonschema>=3.0 in /usr/local/lib/python3.10/dist-packages (from altair>=4.2.0->gradio==3.36.1->-r requirements/main.txt (line 1)) (4.3.3)
Requirement already satisfied: toolz in /usr/local/lib/python3.10/dist-packages (from altair>=4.2.0->gradio==3.36.1->-r requirements/main.txt (line 1)) (0.12.0)
Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi->fairseq==0.12.2->-r requirements/main.txt (line 5)) (2.21)
Requirement already satisfied: cachetools<6.0,>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from google-auth<3,>=1.6.3->tensorboard->-r requirements/main.txt (line 18)) (5.3.1)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.10/dist-packages (from google-auth<3,>=1.6.3->tensorboard->-r requirements/main.txt (line 18)) (0.3.0)
Requirement already satisfied: six>=1.9.0 in /usr/local/lib/python3.10/dist-packages (from google-auth<3,>=1.6.3->tensorboard->-r requirements/main.txt (line 18)) (1.16.0)
Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.10/dist-packages (from google-auth<3,>=1.6.3->tensorboard->-r requirements/main.txt (line 18)) (4.9)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.10/dist-packages (from google-auth-oauthlib<1.1,>=0.5->tensorboard->-r requirements/main.txt (line 18)) (1.3.1)
Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from gradio-client>=0.2.7->gradio==3.36.1->-r requirements/main.txt (line 1)) (2023.6.0)
Requirement already satisfied: typing-extensions~=4.0 in /usr/local/lib/python3.10/dist-packages (from gradio-client>=0.2.7->gradio==3.36.1->-r requirements/main.txt (line 1)) (4.7.1)
Collecting antlr4-python3-runtime==4.8 (from hydra-core<1.1,>=1.0.7->fairseq==0.12.2->-r requirements/main.txt (line 5))
Downloading antlr4-python3-runtime-4.8.tar.gz (112 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 112.4/112.4 kB 13.7 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Requirement already satisfied: mdurl~=0.1 in /usr/local/lib/python3.10/dist-packages (from markdown-it-py[linkify]>=2.0.0->gradio==3.36.1->-r requirements/main.txt (line 1)) (0.1.2)
Collecting linkify-it-py<3,>=1 (from markdown-it-py[linkify]>=2.0.0->gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading linkify_it_py-2.0.2-py3-none-any.whl (19 kB)
INFO: pip is looking at multiple versions of mdit-py-plugins to determine which version is compatible with other requirements. This could take a while.
Collecting mdit-py-plugins<=0.3.3 (from gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading mdit_py_plugins-0.3.2-py3-none-any.whl (50 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.4/50.4 kB 6.3 MB/s eta 0:00:00
Downloading mdit_py_plugins-0.3.1-py3-none-any.whl (46 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 46.5/46.5 kB 5.3 MB/s eta 0:00:00
Downloading mdit_py_plugins-0.3.0-py3-none-any.whl (43 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 43.7/43.7 kB 4.8 MB/s eta 0:00:00
Downloading mdit_py_plugins-0.2.8-py3-none-any.whl (41 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 41.0/41.0 kB 4.5 MB/s eta 0:00:00
Downloading mdit_py_plugins-0.2.7-py3-none-any.whl (41 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 41.0/41.0 kB 4.5 MB/s eta 0:00:00
Downloading mdit_py_plugins-0.2.6-py3-none-any.whl (39 kB)
Downloading mdit_py_plugins-0.2.5-py3-none-any.whl (39 kB)
INFO: pip is looking at multiple versions of mdit-py-plugins to determine which version is compatible with other requirements. This could take a while.
Downloading mdit_py_plugins-0.2.4-py3-none-any.whl (39 kB)
Downloading mdit_py_plugins-0.2.3-py3-none-any.whl (39 kB)
Downloading mdit_py_plugins-0.2.2-py3-none-any.whl (39 kB)
Downloading mdit_py_plugins-0.2.1-py3-none-any.whl (38 kB)
Downloading mdit_py_plugins-0.2.0-py3-none-any.whl (38 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
Downloading mdit_py_plugins-0.1.0-py3-none-any.whl (37 kB)
Collecting markdown-it-py[linkify]>=2.0.0 (from gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading markdown_it_py-3.0.0-py3-none-any.whl (87 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 87.5/87.5 kB 11.5 MB/s eta 0:00:00
Downloading markdown_it_py-2.2.0-py3-none-any.whl (84 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 84.5/84.5 kB 11.6 MB/s eta 0:00:00
Requirement already satisfied: llvmlite<0.40,>=0.39.0dev0 in /usr/local/lib/python3.10/dist-packages (from numba>=0.45.1->librosa==0.9.1->-r requirements/main.txt (line 8)) (0.39.1)
Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->gradio==3.36.1->-r requirements/main.txt (line 1)) (2022.7.1)
Requirement already satisfied: appdirs>=1.3.0 in /usr/local/lib/python3.10/dist-packages (from pooch>=1.0->librosa==0.9.1->-r requirements/main.txt (line 8)) (1.4.4)
Collecting portalocker (from sacrebleu>=1.4.12->fairseq==0.12.2->-r requirements/main.txt (line 5))
Downloading portalocker-2.7.0-py2.py3-none-any.whl (15 kB)
Requirement already satisfied: tabulate>=0.8.9 in /usr/local/lib/python3.10/dist-packages (from sacrebleu>=1.4.12->fairseq==0.12.2->-r requirements/main.txt (line 5)) (0.8.10)
Collecting colorama (from sacrebleu>=1.4.12->fairseq==0.12.2->-r requirements/main.txt (line 5))
Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Requirement already satisfied: lxml in /usr/local/lib/python3.10/dist-packages (from sacrebleu>=1.4.12->fairseq==0.12.2->-r requirements/main.txt (line 5)) (4.9.3)
Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn>=0.19.1->librosa==0.9.1->-r requirements/main.txt (line 8)) (3.1.0)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch->fairseq==0.12.2->-r requirements/main.txt (line 5)) (1.11.1)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch->fairseq==0.12.2->-r requirements/main.txt (line 5)) (3.1)
Requirement already satisfied: triton==2.0.0 in /usr/local/lib/python3.10/dist-packages (from torch->fairseq==0.12.2->-r requirements/main.txt (line 5)) (2.0.0)
Requirement already satisfied: cmake in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch->fairseq==0.12.2->-r requirements/main.txt (line 5)) (3.25.2)
Requirement already satisfied: lit in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch->fairseq==0.12.2->-r requirements/main.txt (line 5)) (16.0.6)
Collecting h11>=0.8 (from uvicorn>=0.14.0->gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading h11-0.14.0-py3-none-any.whl (58 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 58.3/58.3 kB 8.9 MB/s eta 0:00:00
Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->gradio==3.36.1->-r requirements/main.txt (line 1)) (23.1.0)
Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp->gradio==3.36.1->-r requirements/main.txt (line 1)) (6.0.4)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /usr/local/lib/python3.10/dist-packages (from aiohttp->gradio==3.36.1->-r requirements/main.txt (line 1)) (4.0.2)
Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->gradio==3.36.1->-r requirements/main.txt (line 1)) (1.9.2)
Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp->gradio==3.36.1->-r requirements/main.txt (line 1)) (1.4.0)
Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp->gradio==3.36.1->-r requirements/main.txt (line 1)) (1.3.1)
Collecting starlette<0.28.0,>=0.27.0 (from fastapi->gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading starlette-0.27.0-py3-none-any.whl (66 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 67.0/67.0 kB 10.1 MB/s eta 0:00:00
Collecting httpcore<0.18.0,>=0.15.0 (from httpx->gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading httpcore-0.17.3-py3-none-any.whl (74 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 74.5/74.5 kB 10.3 MB/s eta 0:00:00
Requirement already satisfied: sniffio in /usr/local/lib/python3.10/dist-packages (from httpx->gradio==3.36.1->-r requirements/main.txt (line 1)) (1.3.0)
Requirement already satisfied: anyio<5.0,>=3.0 in /usr/local/lib/python3.10/dist-packages (from httpcore<0.18.0,>=0.15.0->httpx->gradio==3.36.1->-r requirements/main.txt (line 1)) (3.7.1)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.10/dist-packages (from jsonschema>=3.0->altair>=4.2.0->gradio==3.36.1->-r requirements/main.txt (line 1)) (0.19.3)
Collecting uc-micro-py (from linkify-it-py<3,>=1->markdown-it-py[linkify]>=2.0.0->gradio==3.36.1->-r requirements/main.txt (line 1))
Downloading uc_micro_py-1.0.2-py3-none-any.whl (6.2 kB)
Requirement already satisfied: pyasn1<0.6.0,>=0.4.6 in /usr/local/lib/python3.10/dist-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard->-r requirements/main.txt (line 18)) (0.5.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.10/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<1.1,>=0.5->tensorboard->-r requirements/main.txt (line 18)) (3.2.2)
Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch->fairseq==0.12.2->-r requirements/main.txt (line 5)) (1.3.0)
Requirement already satisfied: exceptiongroup in /usr/local/lib/python3.10/dist-packages (from anyio<5.0,>=3.0->httpcore<0.18.0,>=0.15.0->httpx->gradio==3.36.1->-r requirements/main.txt (line 1)) (1.1.2)
Building wheels for collected packages: fairseq, pyworld, antlr4-python3-runtime, ffmpy
Building wheel for fairseq (pyproject.toml): started
Building wheel for fairseq (pyproject.toml): finished with status 'done'
Created wheel for fairseq: filename=fairseq-0.12.2-cp310-cp310-linux_x86_64.whl size=11288391 sha256=4433d208ac38053842712a855af0ac1b09539bb8e5b6a3e3cddaf8f71aea4ada
Stored in directory: /root/.cache/pip/wheels/e4/35/55/9c66f65ec7c83fd6fbc2b9502a0ac81b2448a1196159dacc32
Building wheel for pyworld (pyproject.toml): started
Building wheel for pyworld (pyproject.toml): finished with status 'done'
Created wheel for pyworld: filename=pyworld-0.3.2-cp310-cp310-linux_x86_64.whl size=860747 sha256=6b27bae81e44db4bb36580f4e4c80888a2faaf313fa5ae76cd7538384e71cf86
Stored in directory: /root/.cache/pip/wheels/35/48/7e/e25bdd25fda4326d47010c157709436a6ee7a1423e18a24195
Building wheel for antlr4-python3-runtime (setup.py): started
Building wheel for antlr4-python3-runtime (setup.py): finished with status 'done'
Created wheel for antlr4-python3-runtime: filename=antlr4_python3_runtime-4.8-py3-none-any.whl size=141210 sha256=3742190ebad57422c802f9be39ebfca163a436b347aa1657ad64b0e7a83cd758
Stored in directory: /root/.cache/pip/wheels/a7/20/bd/e1477d664f22d99989fd28ee1a43d6633dddb5cb9e801350d5
Building wheel for ffmpy (setup.py): started
Building wheel for ffmpy (setup.py): finished with status 'done'
Created wheel for ffmpy: filename=ffmpy-0.3.1-py3-none-any.whl size=5579 sha256=b890f87d7ddd0d5c0cdf539dbcd56b9b4a761538f34cb339aeee21831961092f
Stored in directory: /root/.cache/pip/wheels/01/a6/d1/1c0828c304a4283b2c1639a09ad86f83d7c487ef34c6b4a1bf
Successfully built fairseq pyworld antlr4-python3-runtime ffmpy
Installing collected packages: tokenizers, pydub, ffmpy, faiss-cpu, bitarray, antlr4-python3-runtime, websockets, uc-micro-py, semantic-version, python-multipart, protobuf, portalocker, orjson, omegaconf, numpy, markdown-it-py, h11, ffmpeg-python, colorama, blinker, aiofiles, uvicorn, tensorboardX, starlette, scipy, sacrebleu, pyworld, mdit-py-plugins, linkify-it-py, hydra-core, huggingface-hub, httpcore, Flask, transformers, resampy, httpx, fastapi, librosa, gradio-client, gradio, torchcrepe, fairseq
Attempting uninstall: protobuf
Found existing installation: protobuf 3.20.3
Uninstalling protobuf-3.20.3:
Successfully uninstalled protobuf-3.20.3
Attempting uninstall: numpy
Found existing installation: numpy 1.22.4
Uninstalling numpy-1.22.4:
Successfully uninstalled numpy-1.22.4
Attempting uninstall: markdown-it-py
Found existing installation: markdown-it-py 3.0.0
Uninstalling markdown-it-py-3.0.0:
Successfully uninstalled markdown-it-py-3.0.0
Attempting uninstall: blinker
Found existing installation: blinker 1.4

stderr: ERROR: Cannot uninstall 'blinker'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.

UnboundLocalError: local variable 'loss_gen_all' referenced before assignment

I get the following error.
Win10, Python 3.10.11, torch 2.0.0+cu118

2023-04-18 19:18:49 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-04-18 19:18:49 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-04-18 19:18:49 | INFO | faiss.loader | Loading faiss.
2023-04-18 19:18:50 | INFO | faiss.loader | Successfully loaded faiss.
2023-04-18 19:18:55 | INFO | torch.distributed.distributed_c10d | Added key: store_based_barrier_key:1 to store for rank: 0
2023-04-18 19:18:55 | INFO | torch.distributed.distributed_c10d | Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 1 nodes.
gin_channels: 256 self.spk_embed_dim: 109
loaded pretrained C:\rvc\rvc-webui\models\pretrained\f0G40k.pth C:\rvc\rvc-webui\models\pretrained\f0D40k.pth
0it [00:00, ?it/s, epoch=1]
Traceback (most recent call last):
File "C:\rvc\rvc-webui\venv\lib\site-packages\gradio\routes.py", line 395, in run_predict
output = await app.get_blocks().process_api(
File "C:\rvc\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1193, in process_api
result = await self.call_function(
File "C:\rvc\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 930, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\rvc\rvc-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\rvc\rvc-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\rvc\rvc-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\rvc\rvc-webui\venv\lib\site-packages\gradio\utils.py", line 491, in async_iteration
return next(iterator)
File "C:\rvc\rvc-webui\modules\tabs\training.py", line 112, in train_all
train_model(
File "C:\rvc\rvc-webui\modules\training\train.py", line 182, in train_model
mp.spawn(
File "C:\rvc\rvc-webui\venv\lib\site-packages\torch\multiprocessing\spawn.py", line 239, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "C:\rvc\rvc-webui\venv\lib\site-packages\torch\multiprocessing\spawn.py", line 197, in start_processes
while not context.join():
File "C:\rvc\rvc-webui\venv\lib\site-packages\torch\multiprocessing\spawn.py", line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "C:\rvc\rvc-webui\venv\lib\site-packages\torch\multiprocessing\spawn.py", line 69, in _wrap
fn(i, *args)
File "C:\rvc\rvc-webui\modules\training\train.py", line 623, in training_runner
loss_g=float(loss_gen_all),
UnboundLocalError: local variable 'loss_gen_all' referenced before assignment
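
For context, a minimal reproduction of the pattern behind this error, assuming the training loop matches the traceback above: loss_gen_all is only assigned inside the per-batch loop, so when the data loader yields zero batches (note the 0it [00:00, ?it/s] line), the logging call afterwards references a local that was never set.

def training_epoch(batches):
    for batch in batches:
        loss_gen_all = sum(batch)  # stand-in for the real generator loss
    # With an empty loader the loop body never ran, so the next line raises
    # UnboundLocalError, matching the traceback above.
    return float(loss_gen_all)

training_epoch([])  # -> UnboundLocalError: local variable 'loss_gen_all' referenced before assignment

In practice this usually means preprocessing produced no training samples, so checking the dataset path and the generated folders under models/training is the first thing to verify.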

About train index

When I try to load the npy and index files generated by this program in VC Client, the error below appears and they cannot be used. I trained Zundamon's ROHAN4600 corpus for 100 epochs.

Error during v2 training with multiple GPUs

When I run training on a multi-GPU machine, an error occurs and it stops.

The training settings are as follows.

Model version: v2
Target sampling rate: 40k
f0 Model: Yes
Using phone embedder: contentvec
Embedding channels: 768
Embedding output layer: 12
GPU ID: 0, 1
Number of CPU processes: 8
Normalize audio volume when preprocess: Yes
Pitch extraction algorithm: harvest
Batch size: 14
Number of epochs: 40
Save every epoch: 10
Cache batch: Yes
FP16: Yes

The following error occurred.

2023-05-23 11:39:50 | INFO | torch.nn.parallel.distributed | Reducer buckets have been rebuilt in this iteration.
2023-05-23 11:39:50 | INFO | torch.nn.parallel.distributed | Reducer buckets have been rebuilt in this iteration.
0%|▏ | 1/440 [00:23<2:53:03, 23.65s/it, epoch=1]
0%|▏ | 1/440 [00:23<2:53:29, 23.71s/it, epoch=1]
Traceback (most recent call last):
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\gradio\routes.py", line 412, in run_predict
output = await app.get_blocks().process_api(
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1299, in process_api
result = await self.call_function(
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1035, in call_function
prediction = await anyio.to_thread.run_sync(
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\gradio\utils.py", line 491, in async_iteration
return next(iterator)
File "F:\ai\vc\rvc-webui\modules\tabs\training.py", line 221, in train_all
train_model(
File "F:\ai\vc\rvc-webui\lib\rvc\train.py", line 264, in train_model
mp.spawn(
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\torch\multiprocessing\spawn.py", line 239, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\torch\multiprocessing\spawn.py", line 197, in start_processes
while not context.join():
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\torch\multiprocessing\spawn.py", line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 1 terminated with the following error:
Traceback (most recent call last):
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\torch\multiprocessing\spawn.py", line 69, in _wrap
fn(i, *args)
File "F:\ai\vc\rvc-webui\lib\rvc\train.py", line 664, in training_runner
loss_mel = F.l1_loss(y_mel, y_hat_mel) * config.train.c_mel
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\torch\nn\functional.py", line 3264, in l1_loss
return torch._C._nn.l1_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction))
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu!

The machine is an AMD Ryzen 5800X with two GPUs installed.
Using both of them (0,1) produces the error above, but using only one (0) completes training without any error.
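
For reference, a minimal sketch of the failure mode in the traceback above: one operand of the L1 loss stayed on the CPU while the other lives on the second rank's GPU. Which tensor ends up on the wrong device inside lib/rvc/train.py (the cached batch is a plausible suspect) is an assumption here.

import torch
import torch.nn.functional as F

device = torch.device("cuda:1" if torch.cuda.device_count() > 1 else "cpu")
y_mel = torch.randn(1, 80, 100)                     # e.g. a target cached on the CPU
y_hat_mel = torch.randn(1, 80, 100, device=device)  # produced by the model on rank 1

# F.l1_loss(y_mel, y_hat_mel) would raise the same device-mismatch RuntimeError;
# moving both operands to the rank's device first avoids it.
loss_mel = F.l1_loss(y_mel.to(device), y_hat_mel)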

Also, if I use only GPU 1 (1), a different error occurs.

100%|█████████████████████████████████████████████████████████████████████████████| 423/423 [00:00<00:00, 16923.16it/s]
GPU 1 is not available | 0/423 [00:00<?, ?it/s]
Traceback (most recent call last):█████████████████████████████████████████████████████| 53/53 [00:18<00:00, 4.24it/s]
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\gradio\routes.py", line 412, in run_predict3 [00:17<00:00, 4.38it/s]
output = await app.get_blocks().process_api(
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1299, in process_api
result = await self.call_function(
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1035, in call_function00:13<00:02, 4.04it/s]
prediction = await anyio.to_thread.run_sync(
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\gradio\utils.py", line 491, in async_iteration
return next(iterator)
File "F:\ai\vc\rvc-webui\modules\tabs\training.py", line 210, in train_all
create_dataset_meta(training_dir, f0)
File "F:\ai\vc\rvc-webui\lib\rvc\train.py", line 112, in create_dataset_meta
names = set(list_data(gt_wavs_dir)) & set(list_data(co256_dir))
File "F:\ai\vc\rvc-webui\lib\rvc\train.py", line 106, in list_data
for subdir in os.listdir(dir):
File "F:\ai\vc\rvc-webui\webui.py", line 10, in listdir4mac
return [file for file in _list_dir(path) if not file.startswith(".")]
FileNotFoundError: [WinError 3] 指定されたパスが見つかりません。: 'F:\ai\vc\rvc-webui\models\training\models\test_v2_40k_cont_768_12_harv_14_30\3_feature256'

Also, once the error has occurred, setting the GPU to 0 only still produces a different error.
After restarting the webui, training with only GPU 0 worked.

100%|████████████████████████████████████████████████████████████████████████████████| 423/423 [00:14<00:00, 29.44it/s]
train_all: emb_name: contentvec█████████████████████████████████████████████████████▏| 419/423 [00:14<00:00, 37.24it/s]
Traceback (most recent call last):
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\gradio\routes.py", line 412, in run_predict
output = await app.get_blocks().process_api(
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1299, in process_api
result = await self.call_function(
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1035, in call_function
prediction = await anyio.to_thread.run_sync(
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\gradio\utils.py", line 491, in async_iteration
return next(iterator)
File "F:\ai\vc\rvc-webui\modules\tabs\training.py", line 221, in train_all
train_model(
File "F:\ai\vc\rvc-webui\lib\rvc\train.py", line 243, in train_model
training_runner(
File "F:\ai\vc\rvc-webui\lib\rvc\train.py", line 342, in training_runner
dist.init_process_group(
File "F:\ai\vc\rvc-webui\venv\lib\site-packages\torch\distributed\distributed_c10d.py", line 853, in init_process_group
raise RuntimeError("trying to initialize the default process group " "twice!")
RuntimeError: trying to initialize the default process group twice!

enhancement: add data augmentation

This is an experiment to see whether conversion accuracy can be improved by using net_g to convert the input phones into another speaker's voice before training.
The goal is to remove the speaker identity that remains in the phone embedding with jpHuBERT.

  • Phase 1: shuffle speakers and shift the pitch during pretraining (a rough illustration follows below)
  • Phase 2: prepare the pretrained net_g for augmentation during fine-tuning?
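
As a rough illustration of phase 1 only, a pitch-shift augmentation could look like the sketch below; librosa is already a dependency, and the file path and the ±2 semitone range are placeholders rather than the planned values.

import random
import librosa
import soundfile as sf

# Sketch: randomly pitch-shift a training clip before feature extraction.
wav, sr = librosa.load("dataset/sample.wav", sr=16000)
n_steps = random.uniform(-2.0, 2.0)
shifted = librosa.effects.pitch_shift(wav, sr=sr, n_steps=n_steps)
sf.write("dataset/sample_shifted.wav", shifted, sr)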

Error when training a model

Environment

  • OS: Windows 11
  • RAM: 32GB
  • CPU: AMD Ryzen 7 3700X 8-Core
  • GPU: RTX 2070 SUPER

What happened

When I try to train a model, the error below is displayed and training does not run.

Error log

E:\rvc-webui-main\venv\lib\site-packages\gradio\deprecation.py:43: UserWarning: You have unused kwarg parameters in Checkbox, please remove them: {'disabled': False}
  warnings.warn(
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:07 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:07 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:07 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-02 17:33:07 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:07 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:07 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:07 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:07 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:07 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:07 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:07 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:07 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:07 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:07 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:07 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:08 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-02 17:33:08 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-02 17:33:08 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:08 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:08 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:08 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:08 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:08 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:08 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-02 17:33:08 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:08 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:08 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:08 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:08 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:08 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:09 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:09 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:09 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:09 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:09 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:09 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:09 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:09 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:09 | INFO | faiss.loader | Loading faiss.
2023-05-02 17:33:09 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-02 17:33:09 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-02 17:33:09 | INFO | faiss.loader | Loading faiss.
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\multiprocessing\spawn.py", line 125, in _main
    prepare(preparation_data)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\multiprocessing\spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 289, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 96, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "E:\rvc-webui-main\webui.py", line 3, in <module>
    from modules import cmd_opts, ui
  File "E:\rvc-webui-main\modules\ui.py", line 9, in <module>
    from . import models, shared
  File "E:\rvc-webui-main\modules\models.py", line 13, in <module>
    from lib.rvc.pipeline import VocalConvertPipeline
  File "E:\rvc-webui-main\lib\rvc\pipeline.py", line 9, in <module>
    import scipy.signal as signal
  File "E:\rvc-webui-main\venv\lib\site-packages\scipy\signal\__init__.py", line 323, in <module>
    from ._filter_design import *
  File "E:\rvc-webui-main\venv\lib\site-packages\scipy\signal\_filter_design.py", line 16, in <module>
    from scipy import special, optimize, fft as sp_fft
  File "E:\rvc-webui-main\venv\lib\site-packages\scipy\__init__.py", line 225, in __getattr__
    return _importlib.import_module(f'scipy.{name}')
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "E:\rvc-webui-main\venv\lib\site-packages\scipy\optimize\__init__.py", line 421, in <module>
    from ._shgo import shgo
  File "E:\rvc-webui-main\venv\lib\site-packages\scipy\optimize\_shgo.py", line 9, in <module>
    from scipy import spatial
  File "E:\rvc-webui-main\venv\lib\site-packages\scipy\__init__.py", line 225, in __getattr__
    return _importlib.import_module(f'scipy.{name}')
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "E:\rvc-webui-main\venv\lib\site-packages\scipy\spatial\__init__.py", line 107, in <module>
    from ._qhull import *
ImportError: DLL load failed while importing _qhull: ページング ファイルが小さすぎるため、この操作を完了できません。

Second and subsequent training runs stop with an error

Once training has been run at least once, the next run fails with the following error. Restarting fixes it.
The commit hash is e56959bb016fd87d53874c0415fd1af12fd2f401.

Traceback (most recent call last):
  File "F:\rvc-webui\venv\lib\site-packages\gradio\routes.py", line 412, in run_predict
    output = await app.get_blocks().process_api(
  File "F:\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1299, in process_api
    result = await self.call_function(
  File "F:\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1035, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "F:\rvc-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "F:\rvc-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "F:\rvc-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "F:\rvc-webui\venv\lib\site-packages\gradio\utils.py", line 491, in async_iteration
    return next(iterator)
  File "F:\rvc-webui\modules\tabs\training.py", line 221, in train_all
    train_model(
  File "F:\rvc-webui\lib\rvc\train.py", line 243, in train_model
    training_runner(
  File "F:\rvc-webui\lib\rvc\train.py", line 342, in training_runner
    dist.init_process_group(
  File "F:\rvc-webui\venv\lib\site-packages\torch\distributed\distributed_c10d.py", line 865, in init_process_group
    raise RuntimeError("trying to initialize the default process group " "twice!")
RuntimeError: trying to initialize the default process group twice!
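
This matches the symptom that torch.distributed's default process group survives inside the webui process between runs, so the second call to init_process_group fails. A hedged sketch of a guard around the call (the actual arguments used in lib/rvc/train.py are not shown here):

import torch.distributed as dist

def safe_init_process_group(**kwargs):
    # Tear down a group left over from a previous run in the same process so
    # init_process_group() does not raise "trying to initialize the default
    # process group twice!".
    if dist.is_available() and dist.is_initialized():
        dist.destroy_process_group()
    dist.init_process_group(**kwargs)

Restarting the webui, as noted above, has the same effect because it starts a fresh process.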

An error occurs when running webui-user.bat

(Attached image: Screenshot (61))
When I run webui-user.bat, an error like the one in the screenshot occurs.
Could you tell me how to resolve it?
So far I have tried reinstalling gradio and updating pip, but it did not help.

RuntimeError when starting training

When I start training, a RuntimeError occurs as shown below, followed by an Internal Server Error.
I would appreciate guidance on the cause and how to resolve it.

2023-06-01 22:40:56 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
2023-06-01 22:47:03 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-06-01 22:47:03 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-06-01 22:47:03 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-06-01 22:47:03 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
2023-06-01 22:47:03 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
train_all: emb_name: contentvec
Traceback (most recent call last):
File "E:\AIVoice\rvc-webui\venv\lib\site-packages\gradio\routes.py", line 412, in run_predict
output = await app.get_blocks().process_api(
File "E:\AIVoice\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1299, in process_api
result = await self.call_function(
File "E:\AIVoice\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1035, in call_function
prediction = await anyio.to_thread.run_sync(
File "E:\AIVoice\rvc-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "E:\AIVoice\rvc-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "E:\AIVoice\rvc-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, *args)
File "E:\AIVoice\rvc-webui\venv\lib\site-packages\gradio\utils.py", line 491, in async_iteration
return next(iterator)
File "E:\AIVoice\rvc-webui\modules\tabs\training.py", line 246, in train_all
train_model(
File "E:\AIVoice\rvc-webui\lib\rvc\train.py", line 338, in train_model
training_runner(
File "E:\AIVoice\rvc-webui\lib\rvc\train.py", line 446, in training_runner
dist.init_process_group(
File "E:\AIVoice\rvc-webui\venv\lib\site-packages\torch\distributed\distributed_c10d.py", line 865, in init_process_group
raise RuntimeError("trying to initialize the default process group " "twice!")

RuntimeError: trying to initialize the default process group twice!
2023-06-01 22:47:03 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 500 Internal Server Error"
2023-06-01 22:47:03 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"

Missing-key error when training

Training settings

  • Model version: v2
  • Target sampling rate: 48k
  • f0 Model: Yes
  • Using phone embedder: hubert-base-japanese
  • Embedding channels: 768
  • Embedding output layer: 12
  • GPU ID: 0
  • Number of CPU processes: 12
  • Normalize audio volume when preprocess: Yes
  • Pitch extraction algorithm: harvest
  • Batch size: 4
  • Number of epochs: 100
  • Save every epoch: 10
  • Cache batch: Yes
  • FP16: Yes

Error that occurred

RuntimeError: Error(s) in loading state_dict for MultiPeriodDiscriminator:
        Missing key(s) in state_dict: "discriminators.7.convs.0.bias", "discriminators.7.convs.0.weight_g", 
"discriminators.7.convs.0.weight_v", "discriminators.7.convs.1.bias", "discriminators.7.convs.1.weight_g",
 "discriminators.7.convs.1.weight_v", "discriminators.7.convs.2.bias", "discriminators.7.convs.2.weight_g",
 "discriminators.7.convs.2.weight_v", "discriminators.7.convs.3.bias", "discriminators.7.convs.3.weight_g",
 "discriminators.7.convs.3.weight_v", "discriminators.7.convs.4.bias", "discriminators.7.convs.4.weight_g",
 "discriminators.7.convs.4.weight_v", "discriminators.7.conv_post.bias", "discriminators.7.conv_post.weight_g",
 "discriminators.7.conv_post.weight_v", "discriminators.8.convs.0.bias", "discriminators.8.convs.0.weight_g",
 "discriminators.8.convs.0.weight_v", "discriminators.8.convs.1.bias", "discriminators.8.convs.1.weight_g",
 "discriminators.8.convs.1.weight_v", "discriminators.8.convs.2.bias", "discriminators.8.convs.2.weight_g",
 "discriminators.8.convs.2.weight_v", "discriminators.8.convs.3.bias", "discriminators.8.convs.3.weight_g",
 "discriminators.8.convs.3.weight_v", "discriminators.8.convs.4.bias", "discriminators.8.convs.4.weight_g",
 "discriminators.8.convs.4.weight_v", "discriminators.8.conv_post.bias", "discriminators.8.conv_post.weight_g",
 "discriminators.8.conv_post.weight_v".

Question: How do you load and utilize RVC v2 models for inference?

I am curious how you are able to make use of RVC v2 models using this repo. I have tried placing the .pth and .index files under models/checkpoints as I would with the RVC V1 models, but am not having any luck finding documentation on V2. Is it supported? If so how do you load them? Thank you!

The GPU connection drops on its own while creating a model, and it never moves on to Model Training

This is a question about the Colab version of RVC.
While creating a model, an error appears and it does not move on to training the model, or a message appears saying the GPU was automatically disconnected because no activity was detected for a while, wasting both time and GPU usage.
As a countermeasure against disconnection I ran the following code:
%%javascript
function ClickConnect() {
    console.log("Working");
    document.querySelector("colab-toolbar-button#connect").click();
}
setInterval(ClickConnect, 60000);
but Launch immediately throws an error and stops.
Where should I have entered this code?

API error prevents progress & a feature request

2023-08-16 01:31:28 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 500 Internal Server Error" is shown and I cannot proceed. It seems to be some kind of API error.
It looks like a different error from #50.

The OS is Windows Server 2022,
the GPU is an RTX 3090 Ti,
and I am using VS2022 Build Tools LTSC or VS2022 Community.

Also, I would like a training option that uses existing RAM instead of pagefile.sys.

An error occurs with webui-user.bat.

When I ran webui-user.bat, the following error occurred.
It looks like an error caused by a dependency, but what should I do?
This happened after running update.bat.
Thank you in advance.

venv "C:\User\......\rvc-webui-main\venv\Scripts\Python.exe"
Python 3.11.3 (tags/v3.11.3:f3909b8, Apr  4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)]
Commit hash: c7aa393122803193f93e89a0739db415413f6340
Installing requirements
Traceback (most recent call last):
  File "C:\User\......\rvc-webui-main\webui.py", line 3, in <module>
    from modules import cmd_opts, ui
  File "C:\User\......\rvc-webui-main\modules\ui.py", line 9, in <module>
    from . import models, shared
  File "C:\User\......\rvc-webui-main\modules\models.py", line 6, in <module>
    from fairseq import checkpoint_utils
  File "C:\User\......\rvc-webui-main\venv\Lib\site-packages\fairseq\__init__.py", line 20, in <module>
    from fairseq.distributed import utils as distributed_utils
  File "C:\User\......\rvc-webui-main\venv\Lib\site-packages\fairseq\distributed\__init__.py", line 7, in <module>
    from .fully_sharded_data_parallel import (
  File "C:\User\......\rvc-webui-main\venv\Lib\site-packages\fairseq\distributed\fully_sharded_data_parallel.py", line 10, in <module>
    from fairseq.dataclass.configs import DistributedTrainingConfig
  File "C:\User\......\rvc-webui-main\venv\Lib\site-packages\fairseq\dataclass\__init__.py", line 6, in <module>
    from .configs import FairseqDataclass
  File "C:\User\......\rvc-webui-main\venv\Lib\site-packages\fairseq\dataclass\configs.py", line 1104, in <module>
    @dataclass
     ^^^^^^^^^
  File "C:\Python311\Lib\dataclasses.py", line 1223, in dataclass
    return wrap(cls)
           ^^^^^^^^^
  File "C:\Python311\Lib\dataclasses.py", line 1213, in wrap
    return _process_class(cls, init, repr, eq, order, unsafe_hash,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\dataclasses.py", line 958, in _process_class
    cls_fields.append(_get_field(cls, name, type, kw_only))
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\dataclasses.py", line 815, in _get_field
    raise ValueError(f'mutable default {type(f.default)} for field '
ValueError: mutable default <class 'fairseq.dataclass.configs.CommonConfig'> for field common is not allowed: use default_factory
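
This is the known incompatibility between fairseq 0.12.2's dataclass configs and Python 3.11; the traceback shows Python 3.11.3, while the tested environment is Python 3.10. A minimal sketch of the pattern 3.11 rejects and the default_factory spelling the error message asks for:

from dataclasses import dataclass, field

@dataclass
class CommonConfig:
    seed: int = 1

@dataclass
class TopConfig:
    # fairseq writes the field roughly like this, which Python 3.11 rejects
    # because a non-frozen dataclass instance counts as a mutable default:
    #   common: CommonConfig = CommonConfig()
    # The 3.11-compatible spelling uses default_factory instead:
    common: CommonConfig = field(default_factory=CommonConfig)

print(TopConfig())

In practice, running the webui under Python 3.10 avoids the error without patching fairseq.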

Request: rmvpe support

The original RVC and VC Client have introduced rmvpe as the pitch estimator for training and inference in place of harvest,
and it is better than harvest in both speed and quality.
It would be good if it were supported here as well.

An error about a pickle that cannot be loaded when trying to train

Training failed with an error, so I am reporting it.

===

train_all: emb_name: contentvec
2023-05-22 09:43:17 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-22 09:43:17 | INFO | faiss.loader | Successfully loaded faiss with AVX2 support.
2023-05-22 09:43:18 | INFO | torch.distributed.distributed_c10d | Added key: store_based_barrier_key:1 to store for rank: 0
2023-05-22 09:43:18 | INFO | torch.distributed.distributed_c10d | Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 1 nodes.
/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
gin_channels: 256 self.spk_embed_dim: 109 emb_channels: 256
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/gradio/routes.py", line 412, in run_predict
output = await app.get_blocks().process_api(
File "/opt/conda/lib/python3.10/site-packages/gradio/blocks.py", line 1299, in process_api
result = await self.call_function(
File "/opt/conda/lib/python3.10/site-packages/gradio/blocks.py", line 1035, in call_function
prediction = await anyio.to_thread.run_sync(
File "/opt/conda/lib/python3.10/site-packages/anyio/to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/opt/conda/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "/opt/conda/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 867, in run
result = context.run(func, *args)
File "/opt/conda/lib/python3.10/site-packages/gradio/utils.py", line 491, in async_iteration
return next(iterator)
File "/content/rvc-webui/modules/tabs/training.py", line 183, in train_all
train_model(
File "/content/rvc-webui/lib/rvc/train.py", line 227, in train_model
mp.spawn(
File "/opt/conda/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 239, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/opt/conda/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 197, in start_processes
while not context.join():
File "/opt/conda/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/content/rvc-webui/lib/rvc/train.py", line 396, in training_runner
net_g_state = torch.load(pretrain_g, map_location="cpu")["model"]
File "/opt/conda/lib/python3.10/site-packages/torch/serialization.py", line 815, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/opt/conda/lib/python3.10/site-packages/torch/serialization.py", line 1033, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, 'E'.

I suspect the pretrained generator model f0G40k256.pth is failing to load because of something like a torch version issue, but apologies if I am misunderstanding something.
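
For what it is worth, an "invalid load key" from torch.load usually means the file on disk is not a torch checkpoint at all (for example a failed or truncated download saved under the .pth name) rather than a torch version mismatch. A quick check, with the path assumed from the traceback:

with open("models/pretrained/f0G40k256.pth", "rb") as f:
    head = f.read(64)
# A valid checkpoint starts with a zip header (b"PK") or pickle bytes,
# not readable text such as an HTML error page.
print(head)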

Use mixed precision properly + add a bfloat16 option

RVC training uses mixed precision.
Mixed precision keeps the model parameters in float32 and performs only the arithmetic in float16, getting the best of float32 accuracy and float16 speed.
https://www.tensorflow.org/guide/mixed_precision?hl=ja
Looking at the current training code, the model itself is also cast to float16 when float16 is enabled, so I will fix that.

  • There is also bfloat16, which keeps float32's exponent width while shrinking the mantissa, so it covers the same range of magnitudes as float32; it tends to be more stable for transformers, so I will add bfloat16 as a selectable option as well (a rough sketch follows below).

It is not urgent, so I will take my time with it.
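
A rough sketch of what the corrected setup could look like, keeping the weights in float32 and selecting the autocast dtype from an option; the function and parameter names here are illustrative, not the actual training-code API.

import torch

def train_step(model, batch, optimizer, precision="fp16"):
    # Weights stay float32; only the forward/backward math runs in the
    # reduced precision chosen by `precision` ("fp16" or "bf16").
    dtype = torch.bfloat16 if precision == "bf16" else torch.float16
    # In real code the GradScaler would be created once; bf16 needs no loss scaling.
    scaler = torch.cuda.amp.GradScaler(enabled=(precision == "fp16"))
    with torch.cuda.amp.autocast(dtype=dtype):
        loss = model(batch).mean()
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    optimizer.zero_grad()
    return loss.detach()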

RuntimeError: Found no NVIDIA driver on your system.

I am using an AMD GPU, so I previously trained using the CPU, but after updating an error occurs and training no longer runs.
Is there a way to resolve this?

The gradio library throws an error

Environment: Windows 11, Python 3.9.13, torch 2.1.0.dev20230412+cu118.
Running webui-user.bat produces the following error.

(venv) PS C:\Users\melan\Documents\GitHub\rvc-webui> ./webui-user.bat
venv "C:\Users\melan\Documents\GitHub\rvc-webui\venv\Scripts\Python.exe"
Python 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)]
Commit hash: 7df7737
Installing requirements
2023-04-14 06:53:50 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-04-14 06:53:50 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-04-14 06:53:50 | INFO | faiss.loader | Loading faiss.
2023-04-14 06:53:50 | INFO | faiss.loader | Successfully loaded faiss.
gin_channels: 256 self.spk_embed_dim: 109
Traceback (most recent call last):
File "C:\Users\melan\Documents\GitHub\rvc-webui\webui.py", line 15, in
webui()
File "C:\Users\melan\Documents\GitHub\rvc-webui\webui.py", line 5, in webui
app = ui.create_ui()
File "C:\Users\melan\Documents\GitHub\rvc-webui\modules\ui.py", line 62, in create_ui
tab.tab()
File "C:\Users\melan\Documents\GitHub\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1285, in exit
self.config = self.get_config_file()
File "C:\Users\melan\Documents\GitHub\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1262, in get_config_file
"output": list(block.output_api_info()), # type: ignore
AttributeError: 'Dropdown' object has no attribute 'output_api_info'
続行するには何かキーを押してください . . .

It is a bit of a hack, but after commenting out the lines in the gradio library where the error occurs (\lib\site-packages\gradio\blocks.py, lines 1261-1262), the webui started.

Can't train or infer in macos // issue with ffmpeg

I saw there is a macOS version of the GUI and got it running.

I was able to run "train index" once and to train, but when trying to use "Inference" afterwards, it crashed with an ffmpeg error.

Error while running "train index":

Using MPS
Traceback (most recent call last):
  File "/Users/cptnray/rvc-webui/lib/rvc/utils.py", line 36, in load_audio
    ffmpeg.input(file, threads=0)
  File "/Users/cptnray/rvc-webui/venv/lib/python3.10/site-packages/ffmpeg/_run.py", line 313, in run
    process = run_async(
  File "/Users/cptnray/rvc-webui/venv/lib/python3.10/site-packages/ffmpeg/_run.py", line 284, in run_async                                  | 0/24 [00:00<?, ?it/s]
    return subprocess.Popen(
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 971, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/subprocess.py", line 1863, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/cptnray/rvc-webui/venv/lib/python3.10/site-packages/gradio/routes.py", line 439, in run_predict
    output = await app.get_blocks().process_api(
  File "/Users/cptnray/rvc-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1384, in process_api
    result = await self.call_function(
  File "/Users/cptnray/rvc-webui/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1103, in call_function
    prediction = await utils.async_iteration(iterator)
  File "/Users/cptnray/rvc-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 343, in async_iteration
    return await iterator.__anext__()
  File "/Users/cptnray/rvc-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 336, in __anext__
    return await anyio.to_thread.run_sync(
  File "/Users/cptnray/rvc-webui/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/Users/cptnray/rvc-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/Users/cptnray/rvc-webui/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Users/cptnray/rvc-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 319, in run_sync_iterator_async
    return next(iterator)
  File "/Users/cptnray/rvc-webui/venv/lib/python3.10/site-packages/gradio/utils.py", line 688, in gen_wrapper
    yield from f(*args, **kwargs)
  File "/Users/cptnray/rvc-webui/modules/tabs/training.py", line 74, in train_index_only
    split.preprocess_audio(
  File "/Users/cptnray/rvc-webui/lib/rvc/preprocessing/split.py", line 195, in preprocess_audio
    write_mute(mute_wav_path, speaker_id, waves_dir, waves16k_dir, sampling_rate)
  File "/Users/cptnray/rvc-webui/lib/rvc/preprocessing/split.py", line 65, in write_mute
    tmp_audio = load_audio(mute_wave_filename, sampling_rate)
  File "/Users/cptnray/rvc-webui/lib/rvc/utils.py", line 41, in load_audio
    raise RuntimeError(f"Failed to load audio: {e}")
RuntimeError: Failed to load audio: [Errno 2] No such file or directory: 'ffmpeg'
2023-07-24 20:52:26 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 500 Internal Server Error"
2023-07-24 20:52:26 | INFO | httpx | HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
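
The root cause is simply that the ffmpeg binary is not on the PATH of the process that launched the webui; installing it (for example via Homebrew on macOS) and restarting should fix both index training and inference. A quick check from Python:

import shutil
print(shutil.which("ffmpeg"))  # None means ffmpeg is not visible to this process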

Cannot apply an F0 curve file

When I upload a **.wav.npy file (only one can be selected) from C:\rvc-webui\models\training\models\kuruminoah_fp32\2a_f0 and run Inference,


Traceback (most recent call last): 
 File "C:\rvc-webui\lib\rvc\pipeline.py", line 288, in __call__  
inp_f0.append([float(i) for i in line.split(",")])   
 File "C:\rvc-webui\lib\rvc\pipeline.py", line 288, in <listcomp>
inp_f0.append([float(i) for i in line.split(",")]) 
ValueError: could not convert string to float: "哲UMPY\x01\x00v\x00{'descr': '<i4'"  


is shown in the terminal, and then

Error: Traceback (most recent call last):
  File "C:\rvc-webui\modules\tabs\inference.py", line 107, in infer
    audio = model.single(
  File "C:\rvc-webui\modules\models.py", line 151, in single
    audio_opt = self.vc(
  File "C:\rvc-webui\lib\rvc\pipeline.py", line 295, in __call__
    pitch, pitchf = self.get_f0(audio_pad, p_len, transpose, f0_method, inp_f0)
  File "C:\rvc-webui\lib\rvc\pipeline.py", line 99, in get_f0
    (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
TypeError: list indices must be integers or slices, not tuple

is shown in the Web UI and inference fails.
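
Judging from the line.split(",") call in the first traceback, the F0 curve input is parsed as comma-separated text lines, not as a raw NumPy dump, which is why uploading the training-time *.wav.npy file fails. A hedged conversion sketch; the exact per-line layout the pipeline expects (single f0 values or time/f0 pairs) is an assumption here:

import numpy as np

f0 = np.load("sample.wav.npy")          # placeholder for the file from 2a_f0
rows = f0.reshape(len(f0), -1)          # one frame per output line
np.savetxt("f0_curve.csv", rows, delimiter=",", fmt="%.3f")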

Compress embeddings with k-means + add GUI options

I hear that changing the embedding to 768 dimensions has made the index and npy files larger.
I plan to add a feature that compresses the embeddings with MiniBatch KMeans, with a corresponding option in the GUI (a rough sketch follows below).

Also, after training on huge multi-speaker data for pretraining and the like, I want to be able to skip index training, so I will add a skip option as well.

I plan to write the code over the weekend of 5/27-5/28 and open a PR. If this is not needed, please let me know.
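
A rough sketch of the planned compression, under assumed shapes and file names (the real PR may well differ):

import numpy as np
from sklearn.cluster import MiniBatchKMeans

feats = np.load("total_fea.npy")        # placeholder: all 768-dim frames
kmeans = MiniBatchKMeans(n_clusters=10000, batch_size=4096)
kmeans.fit(feats)
# Keep only the centroids; the faiss index and the npy shipped with the model
# then hold n_clusters rows instead of one row per frame.
np.save("total_fea_compressed.npy", kmeans.cluster_centers_.astype(np.float32))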

An error saying "The specified path cannot be found" occurs during training.

As the title says, the error occurs during the preparation stage of training.
My environment:
Windows 11
Python 3.10.9
4070 Ti + 13700F

I tried the countermeasures I could think of, such as keeping double-byte characters out of file names and placing the folder directly under C:\, but nothing improved, so I am posting this.

The command prompt output is below.

To create a public link, set `share=True` in `launch()`.
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-17 22:11:45 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss.
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-17 22:11:45 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss.
2023-05-17 22:11:45 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-17 22:11:45 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss.
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-17 22:11:45 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss.
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-17 22:11:45 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss.
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-17 22:11:45 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss.
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-17 22:11:45 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-17 22:11:45 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss.
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss.
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-17 22:11:45 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss.
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-17 22:11:45 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss.
2023-05-17 22:11:45 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-17 22:11:45 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss.
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss with AVX2 support.
2023-05-17 22:11:45 | INFO | faiss.loader | Could not load library with AVX2 support due to:
ModuleNotFoundError("No module named 'faiss.swigfaiss_avx2'")
2023-05-17 22:11:45 | INFO | faiss.loader | Loading faiss.
2023-05-17 22:11:46 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-17 22:11:46 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-17 22:11:47 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-17 22:11:47 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-17 22:11:47 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-17 22:11:47 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-17 22:11:47 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-17 22:11:47 | INFO | faiss.loader | Successfully loaded faiss.
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
0it [00:00, ?it/s]
2023-05-17 22:11:48 | INFO | faiss.loader | Successfully loaded faiss.
2023-05-17 22:11:49 | INFO | faiss.loader | Successfully loaded faiss.
Traceback (most recent call last):
  File "C:\rvc-webui\venv\lib\site-packages\gradio\routes.py", line 412, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1299, in process_api
    result = await self.call_function(
  File "C:\rvc-webui\venv\lib\site-packages\gradio\blocks.py", line 1035, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\rvc-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\rvc-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\rvc-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\rvc-webui\venv\lib\site-packages\gradio\utils.py", line 491, in async_iteration
    return next(iterator)
  File "C:\rvc-webui\modules\tabs\training.py", line 68, in train_index_only
    extract_f0.run(training_dir, num_cpu_process, pitch_extraction_algo)
  File "C:\rvc-webui\lib\rvc\preprocessing\extract_f0.py", line 128, in run
    for pathname in sorted(list(os.listdir(dataset_dir))):
  File "C:\rvc-webui\webui.py", line 10, in listdir4mac
    return [file for file in _list_dir(path) if not file.startswith(".")]
FileNotFoundError: [WinError 3] 指定されたパスが見つかりません。: 'C:\\rvc-webui\\models\\training\\models\\test\\1_16k_wavs'
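
The twelve 0it [00:00, ?it/s] bars mean the preprocessing step found no audio files, so 1_16k_wavs was never created and the later listdir fails. A quick sanity check of what the webui actually sees; the path below is a placeholder for the dataset folder entered in the training tab:

import glob
import os

dataset_dir = r"C:\path\to\dataset"     # placeholder for the dataset folder
print(os.path.isdir(dataset_dir))
print(glob.glob(os.path.join(dataset_dir, "**", "*.wav"), recursive=True)[:10])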
