
fastvocoder's Introduction

Fast (GAN Based Neural) Vocoder

Todo

  • Support Basis-MelGAN
  • Add more demos
  • Add pretrained models
  • Support NHV

Description

This repository includes Basis-MelGAN (paper: https://arxiv.org/pdf/2106.13419.pdf), MelGAN, HiFiGAN, and MultiBand-HiFiGAN, and may include Neural Homomorphic Vocoder in the future. It was developed on the BiaoBei dataset; modify conf and hparams.py to fit your own dataset and model.
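
For reference, the dataset-dependent settings live in hparams.py; the sketch below shows the kind of fields you would typically adjust. The field names here are illustrative assumptions (only fixed_length is confirmed by an issue below), so check the actual file:

# hparams.py -- illustrative sketch; field names are assumptions, check the real file
sample_rate = 22050    # sampling rate of your corpus
hop_size = 256         # mel frame shift; must match your text2mel front end
num_mels = 80          # number of mel bins
fixed_length = 140     # training segment length in mel frames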

Demo

RTF

  • Platform: MacBook Pro M1
  • HiFiGAN (large): NaN
  • HiFiGAN (light, baseline): 0.2424
  • MultiBand-HiFiGAN (large): 0.4956
  • MultiBand-HiFiGAN (light): 0.1591
  • Basis-MelGAN: 0.0498
  • Relative RTF (normalized so that Basis-MelGAN = 10): HiFiGAN (light) : MultiBand-HiFiGAN (large) : MultiBand-HiFiGAN (light) : Basis-MelGAN = 50 : 102 : 33 : 10
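
RTF here is synthesis time divided by the duration of the generated audio, so values below 1.0 are faster than real time. A minimal sketch of how such a number can be measured (the vocoder interface shown is a placeholder, not the repository's actual API):

import time

import torch

def measure_rtf(vocoder, mel, sample_rate):
    # mel: (1, num_mels, frames) tensor; vocoder: any mel-to-wav module
    with torch.no_grad():
        start = time.time()
        wav = vocoder(mel)                # assumed to return a (1, samples) tensor
        elapsed = time.time() - start
    audio_seconds = wav.size(-1) / sample_rate
    return elapsed / audio_seconds        # RTF: < 1.0 means faster than real time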

Usage (of Basis-MelGAN)

1. Abstract

Recent studies have shown that neural vocoders based on generative adversarial networks (GANs) can generate high-quality audio. While GAN-based neural vocoders have been shown to be computationally much more efficient than those based on autoregressive prediction, real-time generation of the highest-quality audio on a CPU is still a very challenging task. A major part of the computation in all GAN-based neural vocoders comes from the stacked upsampling layers, which are designed to match the temporal resolution of the output waveform. Meanwhile, the computational complexity of the upsampling network is closely correlated with the number of samples generated per window. To reduce the computation of the upsampling layers, we propose a new GAN-based neural vocoder called Basis-MelGAN, in which the raw audio samples are decomposed into a learned basis and their associated weights. As the prediction targets of Basis-MelGAN are the weight values associated with each learned basis vector instead of the raw audio samples, the upsampling layers in Basis-MelGAN can be designed with much simpler networks. Compared with other GAN-based neural vocoders, the proposed Basis-MelGAN produces comparably high-quality audio while significantly reducing the computational complexity, from HiFi-GAN V1's 17.74 GFLOPs to 7.95 GFLOPs.
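
To make the core idea concrete, here is a minimal sketch of reconstructing a waveform from predicted weights and a learned basis via overlap-and-add. Shapes and the frame step are illustrative assumptions; the paper and code define the exact configuration:

import torch

def reconstruct(weights, basis, frame_step):
    # weights: (frames, N), predicted by the network instead of raw samples
    # basis:   (N, L), learned; each audio window is a weighted sum of basis vectors
    frames = weights @ basis                                   # (frames, L) windows
    n_frames, L = frames.shape
    out = torch.zeros((n_frames - 1) * frame_step + L)
    for i in range(n_frames):
        out[i * frame_step : i * frame_step + L] += frames[i]  # overlap-and-add
    return out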

2. Prepare data

  • Refer to xcmyz: ConvTasNet4BasisMelGAN to build the dataset for Basis-MelGAN
  • Move ConvTasNet4BasisMelGAN/Basis-MelGAN-dataset to FastVocoder
  • Write the paths of the wav data to a list file, for example: cd dataset && python3 basismelgan.py (see the sketch after this list)
  • Run bash preprocess.sh dataset/basismelgan.txt Basis-MelGAN-dataset/processed dataset/audio dataset/mel
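
A minimal sketch of what such a path-list script might do (the directory layout and the one-path-per-line format are assumptions, not confirmed by the repository):

# dataset/basismelgan.py -- illustrative sketch only
import glob

wav_paths = sorted(glob.glob("../Basis-MelGAN-dataset/wavs/*.wav"))  # assumed layout
with open("basismelgan.txt", "w") as f:
    f.write("\n".join(wav_paths) + "\n")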

3. Train

  • command:
bash train.sh \
    <GPU ids> \
    /path/to/audio/train \
    /path/to/audio/valid \
    /path/to/mel/train \
    /path/to/mel/valid \
    <model name> \
    /path/to/configuration/file \
    <use scheduler (0 or 1)> \
    <use mixed-precision training (0 or 1)>
  • for example:
bash train.sh \
    0 \
    dataset/audio/train \
    dataset/audio/valid \
    dataset/mel/train \
    dataset/mel/valid \
    basis-melgan \
    conf/basis-melgan/light.yaml \
    0 0

4. Train from checkpoint

  • command:
bash train.sh \
    <GPU ids> \
    /path/to/audio/train \
    /path/to/audio/valid \
    /path/to/mel/train \
    /path/to/mel/valid \
    <model name> \
    /path/to/configuration/file \
    <use scheduler (0 or 1)> \
    <use mixed-precision training (0 or 1)> \
    /path/to/checkpoint \
    <step of checkpoint>

5. Synthesize

  • command:
bash synthesize.sh \
    /path/to/checkpoint \
    /path/to/mel \
    /path/for/saving/wav \
    <model name> \
    /path/to/configuration/file

Usage (of MelGAN, HiFiGAN, and MultiBand-HiFiGAN)

1. Prepare data

  • Write the paths of the wav data to a list file, for example: cd dataset && python3 biaobei.py
  • bash preprocess.sh <wav path file> <path to save processed data> dataset/audio dataset/mel
  • for example: bash preprocess.sh dataset/BZNSYP.txt processed dataset/audio dataset/mel

2. Train

  • command:
bash train.sh \
    <GPU ids> \
    /path/to/audio/train \
    /path/to/audio/valid \
    /path/to/mel/train \
    /path/to/mel/valid \
    <model name> \
    /path/to/configuration/file \
    <use scheduler (0 or 1)> \
    <use mixed-precision training (0 or 1)>
  • for example:
bash train.sh \
    0 \
    dataset/audio/train \
    dataset/audio/valid \
    dataset/mel/train \
    dataset/mel/valid \
    hifigan \
    conf/hifigan/light.yaml \
    0 0

3. Train from checkpoint

  • command:
bash train.sh \
    <GPU ids> \
    /path/to/audio/train \
    /path/to/audio/valid \
    /path/to/mel/train \
    /path/to/mel/valid \
    <model name> \
    /path/to/configuration/file \
    <use scheduler (0 or 1)> \
    <use mixed-precision training (0 or 1)> \
    /path/to/checkpoint \
    <step of checkpoint>

4. Synthesize

  • command:
bash synthesize.sh \
    /path/to/checkpoint \
    /path/to/mel \
    /path/for/saving/wav \
    <model name> \
    /path/to/configuration/file

fastvocoder's People

Contributors

miralan, xcmyz


fastvocoder's Issues

16 kHz sampling rate

Hi, thanks for your code.
I have a question: if the sampling rate is 16 kHz, how do I set the parameters?
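
A hedged sketch of the kind of consistent settings a 16 kHz setup needs: the key constraint is that the generator's total upsampling factor equals the mel hop size (field names and values below are assumptions, not the repository's exact keys):

# Illustrative 16 kHz configuration -- names and values are assumptions
sample_rate = 16000
hop_size = 200                    # 12.5 ms frame shift at 16 kHz, one common choice
upsample_scales = [5, 5, 4, 2]    # product must equal hop_size: 5 * 5 * 4 * 2 == 200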

Why set L=30?

Hello, I have a question: in the paper, the shape of the basis matrix is [32, 256], but in the code the shape is [30, 256].
And according to the function overlap_and_add, output_size = (frames - 1) * frame_step + frame_length, so if L=30, I think it cannot match the real waveform length.
For example, with hop_len=256 and mel.shape=[80, 140], the output waveform length should theoretically be 140*256=35840,
but according to the code, the output waveform length is 33600.

Thanks in advance.
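
For what it's worth, the quoted formula reproduces frames * hop only when frame_length equals frame_step; a quick check with placeholder numbers (not claimed to match the repository's internals):

def ola_length(n_frames, frame_step, frame_length):
    # output length of overlap-and-add, as quoted in the issue
    return (n_frames - 1) * frame_step + frame_length

print(ola_length(140, 256, 256))  # 35840 == 140 * 256: matches when frame_length == frame_step
print(ola_length(140, 256, 30))   # 35614: shorter when the window is only 30 samples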

Shape mismatch error on new dataset

Hi, thanks for your work!

The sampling rate of my dataset is 22050, and the hop size of the text2mel model is 256. I have changed hparams.py accordingly, but training results in an exception (preprocessing was fine, anyway):

  File "/home/user/speechlab/FastVocoder-main/model/loss/loss.py", line 23, in forward
    assert est_source_sub_band.size(1) == wav_sub_band.size(1)

I figured out that model inference still uses a hop size of 240. How can I make the code fully compatible with other datasets? It seems the code is somewhat hardcoded for the BiaoBei dataset.
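
A common source of this kind of length mismatch in GAN vocoders is a generator whose total upsampling factor differs from the mel hop size; a hedged sanity check (key names are illustrative, not the repository's config schema):

import numpy as np

# The product of the generator's upsample scales must equal the hop size, otherwise
# predicted and target waveforms differ in length. Names here are assumptions.
upsample_scales = [8, 8, 4]    # 8 * 8 * 4 == 256
hop_size = 256
assert np.prod(upsample_scales) == hop_size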

Random start index in WeightDataset

At this line:

start_index = random.randint(0, len_data - hparams.fixed_length - 1)

If the input mel is shorter than fixed_length, the randint call raises an error. I used a try/except to skip these short audios, but I wonder whether this is handled in collate.

More than that, the segment size I found in HiFiGAN is 32, but in Basis-MelGAN it (fixed_length) is set to 140. Is there any difference between the 140 used for BiaoBei and the value used for LJSpeech?
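
One hedged way to guard the quoted line against short utterances (a sketch, not the repository's fix): clamp the start index and let the caller pad the segment up to fixed_length:

import random

def sample_start(len_data, fixed_length):
    # Avoid a negative upper bound when the utterance is shorter than fixed_length
    if len_data <= fixed_length:
        return 0              # caller should then pad the segment to fixed_length
    return random.randint(0, len_data - fixed_length - 1)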

Link to Basis-MelGAN paper?

Hi Zhengxi, congratulations on your paper's acceptance at Interspeech 2021!

I got pretty interested in your paper while reading the abstract of Basis-MelGAN in the README, but I could not find any link to the paper. Though the Interspeech conference is only two months away, do you have any plans to publish the paper on arXiv in the near future?

Multiband Architecture

Hi author, I found the note "the generated audio has interference at a specific frequency" in this repo. I have encountered a straight line at a specific frequency when developing a similar multiband architecture, and I wonder whether that is the phenomenon you mentioned. Do you have any advice or solutions? Thanks.
(audio sample attached)

Transform Layer is not used

Is there any reason that the transform layer is not used in the default source code?

Have you found any inefficiency when using Transform?
