
Comments (12)

andi611 avatar andi611 commented on June 26, 2024

Hi,

You don't have to change anything in utility/audio.py.
For the pre-trained Mockingjay model, you need the following arguments:
python preprocess_any.py --feature_type=mel --delta=True --delta_delta=False --apply_cmvn=True
The other 80 dimensions come from the deltas: 80 mel + 80 delta = the final 160.
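The 160-dim feature layout can be sketched as follows. This is a conceptual NumPy illustration, not the repo's actual extraction code (the real pipeline lives in preprocess_any.py, and real delta computation uses a regression window rather than a plain frame difference): 80 mel bins, first-order deltas appended, then per-utterance CMVN.

```python
import numpy as np

def add_deltas_and_cmvn(mel):
    """mel: (T, 80) log-mel features. Returns (T, 160) = mel + delta, CMVN-applied.
    Conceptual sketch only; preprocess_any.py handles the real extraction."""
    # first-order delta: simple frame-to-frame difference
    # (real pipelines compute deltas over a regression window)
    delta = np.diff(mel, axis=0, prepend=mel[:1])      # (T, 80)
    feats = np.concatenate([mel, delta], axis=-1)      # (T, 160)
    # per-utterance cepstral mean and variance normalization
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
    return feats

mel = np.random.randn(100, 80)
feats = add_deltas_and_cmvn(mel)
print(feats.shape)  # (100, 160)
```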

from s3prl.

SenYan1999 avatar SenYan1999 commented on June 26, 2024

Hi,

You don't have to change anything in utility/audio.py.
For the pre-trained Mockingjay model, you need the following arguments:
python preprocess_any.py --feature_type=mel --delta=True --delta_delta=False --apply_cmvn=True
The other 80 dimensions come from the deltas: 80 mel + 80 delta = the final 160.

Thank you very much! Another question: when I load the Mockingjay model, is it correct to simply pass inp_dim=160, i.e. transformer = TRANSFORMER(options=options, inp_dim=160)?


andi611 avatar andi611 commented on June 26, 2024

Yes, inp_dim=160.
You also have to change the contents of the options dictionary depending on your needs:

options = {
    'ckpt_file'     : './result/result_transformer/your_path_to_ckpt/states-1000000.ckpt',
    'load_pretrain' : 'True',
    'no_grad'       : 'True',
    'dropout'       : 'default',
    'spec_aug'      : 'False',
    'spec_aug_prev' : 'True',
    'weighted_sum'  : 'False',
    'select_layer'  : -1,
}


SenYan1999 avatar SenYan1999 commented on June 26, 2024

Yes, inp_dim=160.
You also have to change the contents of the options dictionary depending on your needs:

options = {
    'ckpt_file'     : './result/result_transformer/your_path_to_ckpt/states-1000000.ckpt',
    'load_pretrain' : 'True',
    'no_grad'       : 'True',
    'dropout'       : 'default',
    'spec_aug'      : 'False',
    'spec_aug_prev' : 'True',
    'weighted_sum'  : 'False',
    'select_layer'  : -1,
}

OK, I see. Thanks for sharing such a great project; there actually aren't many pre-trained audio models available to download and use.


SenYan1999 avatar SenYan1999 commented on June 26, 2024

Another question: how should I preprocess the audio data?

  1. The audios' time steps differ. I have read TRANSFORMER.process_input_data(), but should I zero-pad along the time axis so that all audios in a batch have the same number of time steps? If T differs I cannot build the tensor, but if I pad, how do I pass the mask information to the model?

  2. I have found that my audio is very long; most utterances have around 80K time steps.

        # forward the whole sequence at once
        if self.max_input_length == 0 or input_len <= self.max_input_length:
            spec_stacked, pos_enc, attn_mask = self.process_input_data(x) # x shape: (B, T, D)
            x = self.model(spec_stacked, pos_enc, attn_mask, output_all_encoded_layers=self.weighted_sum or self.select_layer != -1) # (B, T, D) or # (N, B, T, D)
        # forward the sequence in chunks then concat
        else:
            chunks = torch.chunk(x, chunks=math.ceil(input_len / self.max_input_length), dim=1)
            x_ = []
            for chunk in chunks:
                spec_stacked, pos_enc, attn_mask = self.process_input_data(chunk) # x shape: (B, T, D)
                chunk = self.model(spec_stacked, pos_enc, attn_mask, output_all_encoded_layers=self.weighted_sum or self.select_layer != -1) # (B, T, D) or # (N, B, T, D)
                x_.append(torch.stack(chunk) if type(chunk) is list else chunk)
            x = torch.cat(x_, dim=2 if (self.weighted_sum or self.select_layer != -1) else 1)

Should I just use the chunking branch?


SenYan1999 avatar SenYan1999 commented on June 26, 2024

Another question: how should I preprocess the audio data?

  1. The audios' time steps differ. I have read TRANSFORMER.process_input_data(), but should I zero-pad along the time axis so that all audios in a batch have the same number of time steps? If T differs I cannot build the tensor, but if I pad, how do I pass the mask information to the model?
  2. I have found that my audio is very long; most utterances have around 80K time steps.
        # forward the whole sequence at once
        if self.max_input_length == 0 or input_len <= self.max_input_length:
            spec_stacked, pos_enc, attn_mask = self.process_input_data(x) # x shape: (B, T, D)
            x = self.model(spec_stacked, pos_enc, attn_mask, output_all_encoded_layers=self.weighted_sum or self.select_layer != -1) # (B, T, D) or # (N, B, T, D)
        # forward the sequence in chunks then concat
        else:
            chunks = torch.chunk(x, chunks=math.ceil(input_len / self.max_input_length), dim=1)
            x_ = []
            for chunk in chunks:
                spec_stacked, pos_enc, attn_mask = self.process_input_data(chunk) # x shape: (B, T, D)
                chunk = self.model(spec_stacked, pos_enc, attn_mask, output_all_encoded_layers=self.weighted_sum or self.select_layer != -1) # (B, T, D) or # (N, B, T, D)
                x_.append(torch.stack(chunk) if type(chunk) is list else chunk)
            x = torch.cat(x_, dim=2 if (self.weighted_sum or self.select_layer != -1) else 1)

Should I just use the chunking branch?

...after reading nn_transformer.py, I see that simply zero-padding to a fixed length is the right approach. Thanks.


andi611 avatar andi611 commented on June 26, 2024

Another question: how should I preprocess the audio data?

  1. The audios' time steps differ. I have read TRANSFORMER.process_input_data(), but should I zero-pad along the time axis so that all audios in a batch have the same number of time steps? If T differs I cannot build the tensor.

You can first sort the audios by length, then put audios of similar length in the same batch. Padding is also required when their lengths differ; this can be done with torch.nn.utils.rnn.pad_sequence.
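A minimal sketch of that batching strategy (NumPy for illustration; in practice torch.nn.utils.rnn.pad_sequence does the padding, and process_input_data derives the attention mask from the zero-padded frames):

```python
import numpy as np

def batch_with_padding(utterances):
    """utterances: list of (T_i, D) arrays.
    Returns a zero-padded (B, T_max, D) batch and a (B, T_max) mask."""
    # sort by length so similarly sized utterances end up in the same batch
    utterances = sorted(utterances, key=len, reverse=True)
    T_max = len(utterances[0])
    D = utterances[0].shape[1]
    batch = np.zeros((len(utterances), T_max, D))
    mask = np.zeros((len(utterances), T_max))   # 1 = real frame, 0 = padding
    for i, u in enumerate(utterances):
        batch[i, :len(u)] = u
        mask[i, :len(u)] = 1
    return batch, mask

utts = [np.random.randn(t, 160) for t in (50, 80, 65)]
batch, mask = batch_with_padding(utts)
print(batch.shape, mask.sum(axis=1))  # (3, 80, 160) [80. 65. 50.]
```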

But if I do this, how do I pass the mask information to the model?

At inference time, spectrogram masking is not used.

  1. I have found that my audio is very long; most utterances have around 80K time steps.
    Should I just use the chunking branch?

Yes, you can use chunking and feed the long input chunk by chunk.
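The chunked forward path quoted above reduces to the following sketch (NumPy stand-in with an identity "model" for illustration; the real code uses torch.chunk along dim=1 and re-concatenates along the time dimension, and np.array_split may size the chunks slightly differently than torch.chunk):

```python
import math
import numpy as np

def forward_in_chunks(x, max_input_length, model=lambda c: c):
    """x: (B, T, D). Split the time axis into ceil(T / max_input_length)
    chunks, run each through the model, then concatenate the results."""
    B, T, D = x.shape
    n_chunks = math.ceil(T / max_input_length)
    chunks = np.array_split(x, n_chunks, axis=1)   # analogous to torch.chunk(x, ..., dim=1)
    outputs = [model(chunk) for chunk in chunks]
    return np.concatenate(outputs, axis=1)

x = np.random.randn(2, 8000, 16)
y = forward_in_chunks(x, max_input_length=300)
print(np.allclose(y, x))  # True with the identity model
```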


SenYan1999 avatar SenYan1999 commented on June 26, 2024

Another question: how should I preprocess the audio data?

  1. The audios' time steps differ. I have read TRANSFORMER.process_input_data(), but should I zero-pad along the time axis so that all audios in a batch have the same number of time steps? If T differs I cannot build the tensor.

You can first sort the audios by length, then put audios of similar length in the same batch. Padding is also required when their lengths differ; this can be done with torch.nn.utils.rnn.pad_sequence.

But if I do this, how do I pass the mask information to the model?

At inference time, spectrogram masking is not used.

  1. I have found that my audio is very long; most utterances have around 80K time steps.
    Should I just use the chunking branch?

Yes, you can use chunking and feed the long input chunk by chunk.

It works! Thanks a lot!


aviasd avatar aviasd commented on June 26, 2024

Hey,
I am transferring code from the old Mockingjay to the new Tera.
I want to preprocess my audio into the 40-dim fMLLR features required by the pre-trained Tera models.
How can I do it without the use of Kaldi?
Is there a way to do it as easily as with the 160-dim mel features used in Mockingjay?


andi611 avatar andi611 commented on June 26, 2024

Hey,
I am transferring code from the old Mockingjay to the new Tera.
I want to preprocess my audio into the 40-dim fMLLR features required by the pre-trained Tera models.

Since fMLLR extraction requires likelihood maximization, to use the pre-trained Tera models you must download the original fMLLR data that we used during pre-training.
Instructions are provided here.

How can I do it without the use of Kaldi?
Is there a way to do it as easily as with the 160-dim mel features used in Mockingjay?

Kaldi is the best way I've found so far. To the best of my knowledge, there is no easier way to extract fMLLR.


aviasd avatar aviasd commented on June 26, 2024

Hey,
I am transferring code from the old Mockingjay to the new Tera.
I want to preprocess my audio into the 40-dim fMLLR features required by the pre-trained Tera models.

Since fMLLR extraction requires likelihood maximization, to use the pre-trained Tera models you must download the original fMLLR data that we used during pre-training.
Instructions are provided here.

I am trying to run the pretrained model on my own data (downstream).
Do I need to download your original training data for this?
I downloaded the checkpoint from fmllrBase460-F-N-K-libri.
By the way, what is the meaning of F, N, K in the model checkpoint name?


andi611 avatar andi611 commented on June 26, 2024

I am trying to run the pretrained model on my own data (downstream).

I suggest you pre-train your own model, since you are using it on your own data and the pre-trained models are trained on fMLLR features.

Do I need to download your original training data for this?
I downloaded the checkpoint from fmllrBase460-F-N-K-libri.
By the way, what is the meaning of F, N, K in the model checkpoint name?

-F: frequency alteration
-N: noise alteration
-K: this only means Kaldi; you can ignore it.
(Time alteration is always used, hence not specified here.)

