auspicious3000 / autopst
Global Rhythm Style Transfer Without Text Transcriptions
License: MIT License
@auspicious3000 We tried reproducing results using your codebase and the dataset found here https://datashare.ed.ac.uk/handle/10283/3443 (the one you use), but unfortunately we were unable to. The outputs we have so far are extremely noisy (even when the source and target speakers are the same). Could you please share working code that might help us reproduce the results? I would greatly appreciate your input!
Hi @auspicious3000,
First, thanks for releasing this repository! I've been trying to compare AutoPST to some upcoming work, but I'm having an issue with the stop-token prediction when converting utterances longer than 1 or 2 seconds. I noticed that you clipped some of the VCTK files for your demo page (and in the test dictionary you provided) so that they're much shorter. How did you use the test utterances in your evaluations? Do you have any recommendations so that I can make as fair a comparison as possible?
Thanks,
Benjamin
Error "No module named 'synthesis'" while running demo.ipynb
I have not been able to run the final section of the vocoder code in AutoPST. In my conda environment, all dependencies are installed.
An error like "from synthesis cannot import build_model" always shows.
I want to ask: do I need to train AutoVC on my own speakers? I want to use it for my own recordings, in another language, not English.
I just want to clone the voice of one recording onto another, for different speakers.
Do these recordings need to be the same length, with the same sentences spoken, to be compared during training?
BTW, I have an RTX 3060, and this card is not supported by PyTorch 1.6.0. I first installed the onmt Python package, then PyTorch 1.7.0 with CUDA 11.
Thank you
Hi,
I am wondering how the "test_vctk.meta" is created in the demo file?
Thanks!
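For anyone else digging into this: a minimal sketch of writing and inspecting a file like test_vctk.meta, assuming it is a plain Python pickle. The per-entry structure shown here (speaker ID, utterance ID, path) is an illustrative assumption, not confirmed by the repository:

```python
import pickle

# Hypothetical structure: a list of per-utterance entries, each holding a
# speaker ID, an utterance ID, and a file path. Assumption for illustration.
demo_meta = [
    ["p231", "p231_001", "assets/p231_001.wav"],
    ["p270", "p270_001", "assets/p270_001.wav"],
]

with open("test_vctk.meta", "wb") as f:
    pickle.dump(demo_meta, f)

# Inspecting an existing .meta file works the same way in reverse:
with open("test_vctk.meta", "rb") as f:
    loaded = pickle.load(f)

print(len(loaded), loaded[0][0])
```

Loading the provided file this way and printing a few entries should reveal its actual layout.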
Hi,
I run into an error about onmt
ModuleNotFoundError Traceback (most recent call last)
      5 import torch.nn.functional as F
      6 from collections import OrderedDict
----> 7 from onmt.utils.misc import sequence_mask
      8 from model_autopst import Generator_2 as Predictor
      9 from hparams_autopst import hparams
ModuleNotFoundError: No module named 'onmt'
but I see folder onmt_modules exists
Then I installed onmt (pip install onmt) and noticed it installs torch 1.3.0, although the requirements say PyTorch == 1.6.0.
Could you help me with this issue? What is the best approach to solve this?
Hello, I'm reading D.6. SPEECHSPLIT Baseline in the paper.
Am I understanding this correctly that SpeechSplit performs better for rhythm transfer when converting to a seen speaker?
Hello.
I have referred to your paper.
Based on your experiment, I conducted an experiment on accent conversion using English accent data from different countries.
But the results are very unsatisfactory; I can't even hear the converted voice clearly.
I think there may be a problem in my process of training the SEA model.
But I don't know exactly where the problem is.
The images show my code for training SEA.
Could you help me with this issue? What is the best approach to solve this?
The mean and std I created are different from the values in mfcc_stats.pkl you provided.
Can you please check if I am doing something wrong?
I attached a simple code below.
thanks.
import pickle
import librosa
import numpy as np
import scipy.fftpack
import soundfile as sf
from tqdm import tqdm

mfcc_list = []
for path in tqdm(wav_path):  # wav_path: list of paths to the training wavs
    wav, sampling_rate = sf.read(path)
    mfcc = librosa.feature.mfcc(y=wav, sr=sampling_rate, n_mfcc=80,
                                n_fft=1024, hop_length=256)  # [80, T]
    mfcc_list.append(mfcc)

mfcc_all = np.concatenate(mfcc_list, axis=1)  # [80, total T]
mfcc_mean = mfcc_all.mean(axis=1)  # [80]
mfcc_std = mfcc_all.std(axis=1)    # [80]
dctmx = scipy.fftpack.dct(np.eye(80), type=2, axis=1, norm='ortho')  # [80, 80]

with open('assets/mfcc_stats.pkl', 'wb') as f:
    pickle.dump([mfcc_mean, mfcc_std, dctmx], f, pickle.HIGHEST_PROTOCOL)
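For reference, a minimal sketch of how such per-coefficient statistics are typically applied, namely z-normalizing each MFCC row across time. The normalization itself is standard practice; whether AutoPST applies it in exactly this form is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
mfcc = rng.normal(loc=3.0, scale=2.0, size=(80, 120))  # stand-in for a real [80, T] MFCC

mfcc_mean = mfcc.mean(axis=1)  # [80]
mfcc_std = mfcc.std(axis=1)    # [80]

# Broadcast the per-coefficient stats across the time axis.
mfcc_norm = (mfcc - mfcc_mean[:, None]) / mfcc_std[:, None]

print(np.allclose(mfcc_norm.mean(axis=1), 0.0))  # each row now has mean ~0
```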
Sorry, I'm not familiar with English grammar; please forgive me if I offend.
I tried to run this GitHub project, but it failed.
The only part of the program I changed is the following (because I don't have a GPU):
(prepare_train_data.py)
device = 'cuda:0' (Change to the following line)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
Run: python main_1.py
Problem description: RuntimeError: repeats has to be Long tensor
May I ask for help?
I will be grateful for any help you can provide.
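For what it's worth, this error usually means a tensor passed as the repeats argument of torch.repeat_interleave does not have dtype torch.long, and casting it fixes the call. A minimal sketch of the failure and the fix; exactly where the cast belongs inside prepare_train_data.py is an assumption:

```python
import torch

x = torch.tensor([10, 20, 30])
repeats = torch.tensor([1.0, 2.0, 3.0])  # float repeats trigger the RuntimeError

# Cast to long before calling repeat_interleave.
out = torch.repeat_interleave(x, repeats.long())
print(out.tolist())  # [10, 20, 20, 30, 30, 30]
```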
Hi there,
I am trying to run the code with my own dataset and could run main_1.py and main_2.py to get the xxx-A.ckpt and xxx-B.ckpt files.
Now I don't understand how to run the demo file, i.e. how to prepare the dictionary of specific speakers for conversion. Any help with a little more direction on the steps to follow is appreciated.
What is the license of this repository and model?
Why did the speech content of the voice converted with my own trained model change? Do you know the reason?
The pretrained model sea.ckpt only fits a dataset with 82 speakers. However, I have a huge dataset with at least 300 speakers. How could I train a corresponding SEA model?
Hi and thank you for this amazing project!
I was trying to create a notebook in colab that would allow me to input an audio file, then select the speaker and produce an output accordingly.
Here is the code; it works, but I am missing the part on how to change the speaker timbre.
Do you have any tips on that?
Thanks a lot in advance!
How to make 'mfcc_stats.pkl' and 'spk2emb_82.pkl'?
I want to test another dataset.
Thanks.
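A minimal sketch of building a speaker-to-embedding dictionary for a new dataset, assuming spk2emb_82.pkl maps speaker IDs to one-hot vectors. The one-hot assumption and the speaker IDs below are illustrative, not confirmed by the repository:

```python
import pickle
import numpy as np

speakers = ["spk001", "spk002", "spk003"]  # hypothetical speaker IDs

# One one-hot embedding per speaker; dimension equals the number of speakers.
spk2emb = {
    spk: np.eye(len(speakers), dtype=np.float32)[i]
    for i, spk in enumerate(speakers)
}

with open("spk2emb_custom.pkl", "wb") as f:
    pickle.dump(spk2emb, f, pickle.HIGHEST_PROTOCOL)

print(spk2emb["spk002"].tolist())  # [0.0, 1.0, 0.0]
```

Loading the provided spk2emb_82.pkl with pickle and printing one value would confirm whether the real embeddings are one-hot or learned vectors.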