
Comments (25)

chitsanfei avatar chitsanfei commented on August 28, 2024 1

[screenshot] How do I fix this error?

You may ignore this step. It won't affect your training. Perhaps... 🤔

34j avatar 34j commented on August 28, 2024 1

This is a memory error due to a recent change. There should be a new release in about 5 minutes; please restart your notebook and try again when it is released.

Lordmau5 avatar Lordmau5 commented on August 28, 2024 1

Can you try modifying the colab so it runs svc pre-resample -n 1 instead of just svc pre-resample and then try the rest again?

I had a similar issue locally, maybe that can also fix it for you
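
(For anyone following along, a minimal sketch of that cell change, on the assumption that the preprocessing cell simply shells out to the CLI and that -n is the parallel job count:)

!svc pre-resample -n 1   # run the resample step with a single worker to reduce memory pressure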

34j avatar 34j commented on August 28, 2024 1

I'm tired of it. It's a simple task and I'll ask someone else to do it. In addition, I am looking for a collaborator.

ruckusmattster avatar ruckusmattster commented on August 28, 2024 1

Just a last ask: a screen recording of the install and train process would be godly. No need for any talking, just show the steps in order, no cuts or edits 🙏

Feel free to just say no if it's a lot of work.

34j avatar 34j commented on August 28, 2024

This is an error that occurs when the checkpoint is an incomplete file for some reason.

ruckusmattster avatar ruckusmattster commented on August 28, 2024

checkpoint is an incomplete file for some reason.
I reckon that might be my fault; I messed up a couple of steps. So just restarting the notebook and running it again should solve it? Also, can I ask: do you run EVERY cell in order, or are some cells optional? (cluster train vs train)

34j avatar 34j commented on August 28, 2024

Just delete the corrupt file and resume training (svc train only). I believe this is due to insufficient drive space or an interruption while writing the checkpoint.
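
(A minimal sketch of that from a Colab cell, assuming the corrupt file is the newest checkpoint pair under the model directory shown in this thread's logs; the step number 1600 is purely hypothetical, delete whichever G_/D_ files failed to load:)

!rm drive/MyDrive/so-vits-svc-fork/logs/44k/G_1600.pth   # hypothetical step number
!rm drive/MyDrive/so-vits-svc-fork/logs/44k/D_1600.pth
# then re-run the training cell (svc train); it should resume from the newest intact checkpoint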

ruckusmattster avatar ruckusmattster commented on August 28, 2024

[screenshot] How do I fix this error?

ruckusmattster avatar ruckusmattster commented on August 28, 2024

Automatic preprocessing throws this error. Is local training any easier? I tried once but couldn't get it working, but I'd be willing to give it another go.

[12:40:53] Version: 1.4.2
Preprocessing:  71% 75/106 [01:18<01:29,  2.88s/it][12:42:14] exception calling callback for <Future at 0x7fa7aeaef760 state=finished raised TerminatedWorkerError>
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/joblib/externals/loky/_base.py", line 625, in _invoke_callbacks
    callback(self)
  File "/usr/local/lib/python3.9/dist-packages/tqdm_joblib/__init__.py", line 20, in __call__
    return super().__call__(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/joblib/parallel.py", line 360, in __call__
    self.parallel.dispatch_next()
  File "/usr/local/lib/python3.9/dist-packages/joblib/parallel.py", line 797, in dispatch_next
    if not self.dispatch_one_batch(self._original_iterator):
  File "/usr/local/lib/python3.9/dist-packages/joblib/parallel.py", line 864, in dispatch_one_batch
    self._dispatch(tasks)
  File "/usr/local/lib/python3.9/dist-packages/joblib/parallel.py", line 782, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)
  File "/usr/local/lib/python3.9/dist-packages/joblib/_parallel_backends.py", line 531, in apply_async
    future = self._workers.submit(SafeFunction(func))
  File "/usr/local/lib/python3.9/dist-packages/joblib/externals/loky/reusable_executor.py", line 177, in submit
    return super(_ReusablePoolExecutor, self).submit(
  File "/usr/local/lib/python3.9/dist-packages/joblib/externals/loky/process_executor.py", line 1115, in submit
    raise self._flags.broken
joblib.externals.loky.process_executor.TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker.

The exit codes of the workers are {SIGKILL(-9)}
Preprocessing:  74% 78/106 [01:18<00:28,  1.01s/it]
Traceback (most recent call last):
  File "/usr/local/bin/svc", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/__main__.py", line 420, in pre_resample
    preprocess_resample(
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/preprocess_resample.py", line 111, in preprocess_resample
    Parallel(n_jobs=n_jobs)(
  File "/usr/local/lib/python3.9/dist-packages/joblib/parallel.py", line 1061, in __call__
    self.retrieve()
  File "/usr/local/lib/python3.9/dist-packages/joblib/parallel.py", line 938, in retrieve
    self._output.extend(job.get(timeout=self.timeout))
  File "/usr/local/lib/python3.9/dist-packages/joblib/_parallel_backends.py", line 542, in wrap_future_result
    return future.result(timeout=timeout)
  File "/usr/lib/python3.9/concurrent/futures/_base.py", line 446, in result
    return self.__get_result()
  File "/usr/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.9/dist-packages/joblib/externals/loky/_base.py", line 625, in _invoke_callbacks
    callback(self)
  File "/usr/local/lib/python3.9/dist-packages/tqdm_joblib/__init__.py", line 20, in __call__
    return super().__call__(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/joblib/parallel.py", line 360, in __call__
    self.parallel.dispatch_next()
  File "/usr/local/lib/python3.9/dist-packages/joblib/parallel.py", line 797, in dispatch_next
    if not self.dispatch_one_batch(self._original_iterator):
  File "/usr/local/lib/python3.9/dist-packages/joblib/parallel.py", line 864, in dispatch_one_batch
    self._dispatch(tasks)
  File "/usr/local/lib/python3.9/dist-packages/joblib/parallel.py", line 782, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)
  File "/usr/local/lib/python3.9/dist-packages/joblib/_parallel_backends.py", line 531, in apply_async
    future = self._workers.submit(SafeFunction(func))
  File "/usr/local/lib/python3.9/dist-packages/joblib/externals/loky/reusable_executor.py", line 177, in submit
    return super(_ReusablePoolExecutor, self).submit(
  File "/usr/local/lib/python3.9/dist-packages/joblib/externals/loky/process_executor.py", line 1115, in submit
    raise self._flags.broken
joblib.externals.loky.process_executor.TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker.

The exit codes of the workers are {SIGKILL(-9)}

Ignoring this, the training cell gives:

[12:45:24] Version: 1.4.2
[12:45:28] Version: 1.4.2
[12:45:30] {'train': {'log_interval': 200, 'eval_interval': 800, 'seed': 1234, 'epochs': 10000, 'learning_rate': 0.0001, 'betas': [0.8, 0.99], 'eps': 1e-09, 'batch_size': 6, 'fp16_run': False, 'lr_decay': 0.999875, 'segment_size': 10240, 'init_lr_ratio': 1, 'warmup_epochs': 0, 'c_mel': 45, 'c_kl': 1.0, 'use_sr': True, 'max_speclen': 512, 'port': '8001', 'keep_ckpts': 3}, 'data': {'training_files': 'filelists/44k/train.txt', 'validation_files': 'filelists/44k/val.txt', 'max_wav_value': 32768.0, 'sampling_rate': 44100, 'filter_length': 2048, 'hop_length': 512, 'win_length': 2048, 'n_mel_channels': 80, 'mel_fmin': 0.0, 'mel_fmax': 22050}, 'model': {'inter_channels': 192, 'hidden_channels': 192, 'filter_channels': 768, 'n_heads': 2, 'n_layers': 6, 'kernel_size': 3, 'p_dropout': 0.1, 'resblock': '1', 'resblock_kernel_sizes': [3, 7, 11], 'resblock_dilation_sizes': [[1, 3, 5], [1, 3, 5], [1, 3, 5]], 'upsample_rates': [8, 8, 2, 2, 2], 'upsample_initial_channel': 512, 'upsample_kernel_sizes': [16, 16, 4, 4, 4], 'n_layers_q': 3, 'use_spectral_norm': False, 'gin_channels': 256, 'ssl_dim': 256, 'n_speakers': 200}, 'spk': {'matty': 0}, 'model_dir': 'drive/MyDrive/so-vits-svc-fork/logs/44k'}
2023-03-26 12:45:30.522644: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-26 12:45:31.947160: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-03-26 12:45:31.947346: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib64-nvidia
2023-03-26 12:45:31.947376: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
[12:45:33] NumExpr defaulting to 2 threads.
[12:45:33] Added key: store_based_barrier_key:1 to store for rank: 0
[12:45:33] Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 1 nodes.
[12:45:49] Loaded checkpoint 'drive/MyDrive/so-vits-svc-fork/logs/44k/G_0.pth' (iteration 1)
[12:46:00] Loaded checkpoint 'drive/MyDrive/so-vits-svc-fork/logs/44k/D_0.pth' (iteration 1)
[12:46:00] Start training
  0% 0/10000 [00:00<?, ?it/s][12:46:04] Version: 1.4.2
[12:46:04] Version: 1.4.2
  0% 0/10000 [00:06<?, ?it/s]
Traceback (most recent call last):
  File "/usr/local/bin/svc", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/__main__.py", line 96, in train
    train(config_path=config_path, model_path=model_path)
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/train.py", line 49, in train
    mp.spawn(
  File "/usr/local/lib/python3.9/dist-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/usr/local/lib/python3.9/dist-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
    while not context.join():
  File "/usr/local/lib/python3.9/dist-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException: 

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/train.py", line 158, in run
    train_and_evaluate(
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/train.py", line 200, in train_and_evaluate
    for batch_idx, items in enumerate(train_loader):
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.9/dist-packages/torch/_utils.py", line 543, in reraise
    raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/fetch.py", line 61, in fetch
    return self.collate_fn(data)
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/data_utils.py", line 145, in __call__
    spec_padded[i, :, : spec.size(1)] = spec
RuntimeError: expand(torch.FloatTensor{[2, 483, 483]}, size=[2, 483]): the number of sizes provided (2) must be greater or equal to the number of dimensions in the tensor (3)

chitsanfei avatar chitsanfei commented on August 28, 2024

@ruckusmattster

According to the log, the joblib parallel worker was abnormally terminated during preprocessing, at item 75/106. As the log indicates, it may have exceeded the memory limit, or there may be a deeper underlying error.

I can't give you more helpful information, but I suggest checking the memory usage chart and your dataset first. Try reducing the number of samples (for example, start from 20) and run it again to see whether it is a program issue. Most importantly, don't be discouraged.
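
(If it helps, a rough sketch of one way to test with a reduced dataset from a Colab cell. The dataset_raw location and the speaker name matty are assumptions based on the paths in this thread, and the backup folder is hypothetical:)

# Keep only the first 20 clips in the speaker folder; move the rest into a backup folder,
# then re-run the preprocessing steps on the smaller set.
from pathlib import Path
import shutil

speaker = Path("dataset_raw/matty")          # assumed raw-audio folder for this speaker
backup = Path("dataset_raw_backup/matty")    # hypothetical holding folder for the rest
backup.mkdir(parents=True, exist_ok=True)
for wav in sorted(speaker.glob("*.wav"))[20:]:
    shutil.move(str(wav), str(backup / wav.name))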

Ok, this will be solved. 😊 linked to #136

ruckusmattster avatar ruckusmattster commented on August 28, 2024

Sorry to keep chucking errors at you, but I've got:

svc pre-hubert:
File "/usr/local/lib/python3.9/dist-packages/joblib/externals/loky/process_executor.py", line 436, in _process_worker
    r = call_item()
  File "/usr/local/lib/python3.9/dist-packages/joblib/externals/loky/process_executor.py", line 288, in __call__
    return self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.9/dist-packages/joblib/_parallel_backends.py", line 595, in __call__
    return self.func(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/joblib/parallel.py", line 263, in __call__
    return [func(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/joblib/parallel.py", line 263, in <listcomp>
    return [func(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/preprocess_hubert_f0.py", line 67, in _process_batch
    _process_one(
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/preprocess_hubert_f0.py", line 38, in _process_one
    c = utils.get_hubert_content(hubert_model, wav_16k_tensor=wav16k)
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/utils.py", line 369, in get_hubert_content
    logits = hmodel.extract_features(**inputs)
  File "/usr/local/lib/python3.9/dist-packages/fairseq/models/hubert/hubert.py", line 535, in extract_features
    res = self.forward(
  File "/usr/local/lib/python3.9/dist-packages/fairseq/models/hubert/hubert.py", line 437, in forward
    features = self.forward_features(source)
  File "/usr/local/lib/python3.9/dist-packages/fairseq/models/hubert/hubert.py", line 392, in forward_features
    features = self.feature_extractor(source)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/fairseq/models/wav2vec/wav2vec2.py", line 895, in forward
    x = conv(x)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/container.py", line 204, in forward
    input = module(input)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/conv.py", line 313, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/conv.py", line 309, in _conv_forward
    return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Calculated padded input size per channel: (1). Kernel size: (2). Kernel size can't be greater than actual input size
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/bin/svc", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/__main__.py", line 519, in pre_hubert
    preprocess_hubert_f0(
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/preprocess_hubert_f0.py", line 96, in preprocess_hubert_f0
    Parallel(n_jobs=n_jobs)(
  File "/usr/local/lib/python3.9/dist-packages/joblib/parallel.py", line 1061, in __call__
    self.retrieve()
  File "/usr/local/lib/python3.9/dist-packages/joblib/parallel.py", line 938, in retrieve
    self._output.extend(job.get(timeout=self.timeout))
  File "/usr/local/lib/python3.9/dist-packages/joblib/_parallel_backends.py", line 542, in wrap_future_result
    return future.result(timeout=timeout)
  File "/usr/lib/python3.9/concurrent/futures/_base.py", line 446, in result
    return self.__get_result()
  File "/usr/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
    raise self._exception
RuntimeError: Calculated padded input size per channel: (1). Kernel size: (2). Kernel size can't be greater than actual input size
/usr/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 4 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '


svc train:
File "/usr/local/bin/svc", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/dist-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/__main__.py", line 96, in train
    train(config_path=config_path, model_path=model_path)
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/train.py", line 49, in train
    mp.spawn(
  File "/usr/local/lib/python3.9/dist-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/usr/local/lib/python3.9/dist-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
    while not context.join():
  File "/usr/local/lib/python3.9/dist-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException: 

-- Process 0 terminated with the following error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/train.py", line 158, in run
    train_and_evaluate(
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/train.py", line 200, in train_and_evaluate
    for batch_idx, items in enumerate(train_loader):
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
    return self._process_data(data)
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.9/dist-packages/torch/_utils.py", line 543, in reraise
    raise exception
FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/data_utils.py", line 102, in __getitem__
    return self.get_audio(self.audiopaths[index][0])
  File "/usr/local/lib/python3.9/dist-packages/so_vits_svc_fork/data_utils.py", line 67, in get_audio
    f0 = np.load(filename + ".f0.npy")
  File "/usr/local/lib/python3.9/dist-packages/numpy/lib/npyio.py", line 405, in load
    fid = stack.enter_context(open(os_fspath(file), "rb"))
FileNotFoundError: [Errno 2] No such file or directory: 'dataset/44k/matty/sample_050.wav.f0.npy'

ruckusmattster avatar ruckusmattster commented on August 28, 2024

Same error as before. Is the colab working okay for you?

Lordmau5 avatar Lordmau5 commented on August 28, 2024

Same error as before. Is the colab working okay for you?

I haven't touched the colab since I set it up locally to run on my own GPU, I'm afraid :(

chitsanfei avatar chitsanfei commented on August 28, 2024

It works OK for me using v1.4.1. You can temporarily use this version by editing the Install dependencies step to add ==1.4.1, as shown below, then re-running that step and giving it another try.

if BRANCH == "none":
    %pip install -U so-vits-svc-fork==1.4.1

🤔

34j avatar 34j commented on August 28, 2024

Can you try modifying the colab so it runs svc pre-resample -n 1 instead of just svc pre-resample and then try the rest again?

I had a similar issue locally, maybe that can also fix it for you

It works OK for me using v1.4.1. You can temporarily use this version by editing the Install dependencies step to add ==1.4.1, as shown below, then re-running that step and giving it another try.

if BRANCH == "none":
    %pip install -U so-vits-svc-fork==1.4.1

🤔

@ruckusmattster
It's a pre-hubert error, different issue.
Are there any unusually short audio files in dataset/44k?
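
(A quick way to check for that from a Colab cell; this script is not part of the repo, and the soundfile import and the 0.5-second threshold are assumptions:)

# List clips under dataset/44k that look suspiciously short; a clip shorter than the
# HuBERT feature extractor's first conv kernel can trigger the
# "Kernel size can't be greater than actual input size" error shown above.
from pathlib import Path
import soundfile as sf

for wav in sorted(Path("dataset/44k").rglob("*.wav")):
    info = sf.info(str(wav))
    duration = info.frames / info.samplerate   # length in seconds
    if duration < 0.5:                         # threshold is a guess; adjust as needed
        print(f"{wav}\t{duration:.3f}s")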

ruckusmattster avatar ruckusmattster commented on August 28, 2024

Ooh, actually, looking now there are a couple. I'll edit them out. Otherwise I'll try to fumble through running it locally; it's all set up, but the file structure etc. confuses me.

34j avatar 34j commented on August 28, 2024

[screenshot] We need this in pre-hubert.

chitsanfei avatar chitsanfei commented on August 28, 2024

Just a last ask: a screen recording of the install and train process would be godly. No need for any talking, just show the steps in order, no cuts or edits 🙏

Feel free to just say no if it's a lot of work.

Ooh, actually, looking now there are a couple. I'll edit them out. Otherwise I'll try to fumble through running it locally; it's all set up, but the file structure etc. confuses me.

Sorry, I can't give you a recording right now, but I can show you my file structure.

[screenshot of the file structure]

Hopefully it can help you. 😢

ruckusmattster avatar ruckusmattster commented on August 28, 2024

Alright, everything looks right to me. Local may just be the way to go, if I can figure that out 😆

ruckusmattster avatar ruckusmattster commented on August 28, 2024

[screenshot] Where do I find this in the local install?

ruckusmattster avatar ruckusmattster commented on August 28, 2024

[screenshot] Is this the correct location? Do I create that folder? Just drop a thumbs up if yes.

34j avatar 34j commented on August 28, 2024

[screenshot] Is this the correct location? Do I create that folder? Just drop a thumbs up if yes.

sovits/
If you have other questions, please ask them in Discussion.
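
(For reference, a rough sketch of the working-directory layout implied by the paths in this thread's logs; dataset_raw/ and configs/44k/ are assumptions, the rest comes from the log output above:)

sovits/                          # any working folder; run the svc commands from inside it
├── dataset_raw/<speaker>/       # raw .wav clips (input to svc pre-resample); assumed name
├── dataset/44k/<speaker>/       # resampled clips plus generated feature files (.f0.npy etc.)
├── configs/44k/config.json      # training configuration; assumed path
├── filelists/44k/               # train.txt / val.txt
└── logs/44k/                    # G_*.pth / D_*.pth checkpoints (model_dir)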

34j avatar 34j commented on August 28, 2024

@allcontributors add ruckusmattster bug

allcontributors avatar allcontributors commented on August 28, 2024

@34j

I've put up a pull request to add @ruckusmattster! 🎉
