Comments (12)
Hi,
You don't have to change anything in utility/audio.py.
For the pre-trained Mockingjay model, you need the following arguments:
python preprocess_any.py --feature_type=mel --delta=True --delta_delta=False --apply_cmvn=True
The other 80 dims come from the delta features: 80 mel + 80 delta = the final 160 dims.
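For intuition, here is a minimal numpy sketch of how an 80-dim mel matrix becomes the 160-dim input. It is illustrative only: it uses a simple finite-difference delta and per-dimension CMVN, whereas the repo's preprocess_any.py may compute the delta with a regression window.

```python
import numpy as np

def make_160dim(mel, eps=1e-8):
    """mel: (T, 80) log-mel frames -> (T, 160) mel + delta, CMVN-normalized."""
    delta = np.gradient(mel, axis=0)             # first-order delta over time
    feat = np.concatenate([mel, delta], axis=1)  # (T, 160): mel 80 + delta 80
    # cepstral mean and variance normalization, per feature dimension
    return (feat - feat.mean(axis=0)) / (feat.std(axis=0) + eps)

mel = np.random.randn(200, 80)  # 200 hypothetical frames of 80-dim mel
feat = make_160dim(mel)
print(feat.shape)  # (200, 160)
```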
from s3prl.
Thank you very much! Another question: when I load the Mockingjay model, I just call transformer = TRANSFORMER(options=options, inp_dim=160), which sets inp_dim to 160. Is that right?
Yes, inp_dim=160.
You also have to change the contents of the options dictionary depending on your needs:
options = {
    'ckpt_file'     : './result/result_transformer/your_path_to_ckpt/states-1000000.ckpt',
    'load_pretrain' : 'True',
    'no_grad'       : 'True',
    'dropout'       : 'default',
    'spec_aug'      : 'False',
    'spec_aug_prev' : 'True',
    'weighted_sum'  : 'False',
    'select_layer'  : -1,
}
OK, I see. Thanks for sharing such a great project; there actually aren't many pre-trained audio models available to download and use.
Another question: how should I preprocess the audio data?
- The audios have different numbers of time steps. I have seen what you wrote in TRANSFORMER.process_input_data(), but should I zero-pad along the time axis so the audios in a batch have the same number of time steps? If T differs, I can't build the tensor. But if I do this, how do I give the mask information to the model?
- I have found that my audio is very long; most utterances have around 80K time steps.
# forward the whole sequence at once
if self.max_input_length == 0 or input_len <= self.max_input_length:
    spec_stacked, pos_enc, attn_mask = self.process_input_data(x)  # x shape: (B, T, D)
    x = self.model(spec_stacked, pos_enc, attn_mask, output_all_encoded_layers=self.weighted_sum or self.select_layer != -1)  # (B, T, D) or (N, B, T, D)
# forward the sequence in chunks then concat
else:
    chunks = torch.chunk(x, chunks=math.ceil(input_len / self.max_input_length), dim=1)
    x_ = []
    for chunk in chunks:
        spec_stacked, pos_enc, attn_mask = self.process_input_data(chunk)  # chunk shape: (B, T, D)
        chunk = self.model(spec_stacked, pos_enc, attn_mask, output_all_encoded_layers=self.weighted_sum or self.select_layer != -1)  # (B, T, D) or (N, B, T, D)
        x_.append(torch.stack(chunk) if type(chunk) is list else chunk)
    x = torch.cat(x_, dim=2 if (self.weighted_sum or self.select_layer != -1) else 1)
Should I just use the chunking?
...After reading nn_transformer.py, I see that simply zero-padding to a fixed length is correct. Thanks.
Another question: how should I preprocess the audio data?
- The audios have different numbers of time steps. I have seen what you wrote in TRANSFORMER.process_input_data(), but should I zero-pad along the time axis so the audios in a batch have the same number of time steps? If T differs, I can't build the tensor.
You can first sort the audios by length, then put audios of similar length in the same batch. Padding is also required when their lengths differ; this can be done with torch.nn.utils.rnn.pad_sequence.
But if I do this, how do I give the mask information to the model?
At inference time, spectrogram masking is not used.
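The batching advice above can be sketched like this (torch-based and illustrative; the mask convention your downstream code expects may differ from the boolean one used here):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

# three utterances of different lengths, each with 160-dim frames
feats = [torch.randn(t, 160) for t in (95, 80, 100)]
lengths = torch.tensor([f.size(0) for f in feats])

# zero-pad to the longest utterance in the batch: (B, T_max, 160)
batch = pad_sequence(feats, batch_first=True)

# boolean mask: True on real frames, False on the zero padding
attn_mask = torch.arange(batch.size(1))[None, :] < lengths[:, None]

print(batch.shape, attn_mask.shape)  # torch.Size([3, 100, 160]) torch.Size([3, 100])
```

Sorting utterances by length before batching, as suggested above, keeps T_max close to each utterance's true length, so less computation is wasted on padding.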
- I have found that my audio is very long; most utterances have around 80K time steps. Should I just use the chunking?
Yes, you can use the chunking and feed the long input chunk by chunk.
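Chunked inference as in the snippet quoted above can be exercised with a stand-in model (an identity function here; the real code calls self.model(...) with positional encodings and an attention mask, which this sketch omits):

```python
import math
import torch

def forward_in_chunks(x, max_input_length, model):
    """x: (B, T, D). Run `model` over time chunks of at most max_input_length."""
    input_len = x.size(1)
    if max_input_length == 0 or input_len <= max_input_length:
        return model(x)  # short enough: forward the whole sequence at once
    chunks = torch.chunk(x, chunks=math.ceil(input_len / max_input_length), dim=1)
    return torch.cat([model(c) for c in chunks], dim=1)  # concat over time

x = torch.randn(2, 10000, 160)                 # a long utterance batch
out = forward_in_chunks(x, 3000, lambda t: t)  # identity stands in for the model
print(out.shape)  # torch.Size([2, 10000, 160])
```

One caveat: self-attention then only sees within-chunk context, so frames near chunk boundaries lose some context relative to forwarding the whole sequence at once.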
It works! Thanks a lot!
Hey,
I am just porting code from the old Mockingjay to the new Tera.
I want to preprocess my audio into the 40-dim fMLLR features required by the pre-trained Tera models.
How can I do it without using Kaldi?
Is there a way to do it as easily as the 160-dim mel that was used in Mockingjay?
Hey,
I am just porting code from the old Mockingjay to the new Tera. I want to preprocess my audio into the 40-dim fMLLR features required by the pre-trained Tera models.
Since fMLLR extraction requires likelihood maximization, to use the pre-trained Tera models you must download the original fMLLR data that we used during pre-training. Instructions are provided here.
How can I do it without using Kaldi? Is there a way to do it as easily as the 160-dim mel that was used in Mockingjay?
Kaldi is the best way I've found so far. To the best of my knowledge, there is no easier way to extract fMLLR.
I am trying to run the pre-trained model on my own data (downstream). Do I need to download your original training data for this?
I downloaded the checkpoint fmllrBase460-F-N-K-libri. By the way, what is the meaning of F, N, and K in the model checkpoint name?
I am trying to run the pretrained model on my own data (downstream).
I suggest you pre-train your own model, since you are using it on your own data and the pre-trained models were trained on fMLLR features.
Do I need to download your original training data for this?
I downloaded the checkpoint from fmllrBase460-F-N-K-libri.
By the way, what is the meaning of F, N, and K in the model checkpoint name?
- F: frequency alteration
- N: noise alteration
- K: only means Kaldi; you can ignore this.
(Time alteration is always used, hence it is not specified in the name.)