Comments (7)
Hi, thank you for appreciating our work.
To answer your questions:
- In this work, we did not use any labels in the pre-training process. The reason is, as you mentioned, that the model can learn good representations from reconstructing frames alone. The purpose of this unsupervised learning framework is to let models take advantage of the large amount of easily acquired unlabeled data. However, you can always try adding labels in an attempt to feed more information to the model.
- Yes, when training on downstream tasks, we train classifiers from scratch on top of the pre-trained models, using only 0.1% of the labeled data (x: audio, y: phone) from the whole dataset (train-clean-360).
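The downstream setup described above can be sketched as a probing classifier trained on top of a frozen pre-trained encoder. This is a minimal illustration, not the repo's actual code: the encoder here is a stand-in random projection, and the dimensions and phone inventory size are assumed values, not s3prl's.

```python
# Hedged sketch: training a phone classifier from scratch on top of a
# frozen pre-trained encoder. The encoder below is a stand-in; in
# practice it would be the pre-trained Mockingjay model with frozen
# weights. FEAT_DIM, HIDDEN_DIM, and NUM_PHONES are assumed values.
import torch
import torch.nn as nn

FEAT_DIM = 160     # input acoustic feature dimension (assumed)
HIDDEN_DIM = 768   # encoder output dimension (assumed)
NUM_PHONES = 72    # phone inventory size (assumed)

class FrozenEncoder(nn.Module):
    """Stand-in for the pre-trained model; all parameters are frozen."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(FEAT_DIM, HIDDEN_DIM)
        for p in self.parameters():
            p.requires_grad = False

    def forward(self, x):          # x: (batch, time, FEAT_DIM)
        return self.proj(x)        # (batch, time, HIDDEN_DIM)

encoder = FrozenEncoder()
classifier = nn.Linear(HIDDEN_DIM, NUM_PHONES)   # trained from scratch

# Only the classifier's parameters are optimized.
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy step on the small labeled subset (x: audio features, y: phones).
x = torch.randn(4, 100, FEAT_DIM)
y = torch.randint(0, NUM_PHONES, (4, 100))

logits = classifier(encoder(x))                  # (4, 100, NUM_PHONES)
loss = criterion(logits.reshape(-1, NUM_PHONES), y.reshape(-1))
loss.backward()
optimizer.step()
```

Because the encoder's parameters have `requires_grad=False`, only the classifier receives gradients, which matches the "train classifiers from scratch on top of the pre-trained model" setup.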
But in preprocess.py, why use text? Why not just use voice datasets for training?
Thanks for your reply and your work. I will also try testing with different languages.
from s3prl.
The reason that preprocess.py includes text preprocessing is that in future work we plan to extend this project with a downstream ASR system. Currently, the text is not used.
You can simply remove that part of the code if you do not wish to download and preprocess the text data.
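If the text download is removed, all that remains is turning raw waveforms into spectral frames. The snippet below is a generic, audio-only sketch of that idea using plain PyTorch; the window and hop sizes are illustrative assumptions, not the values preprocess.py actually uses.

```python
# Hedged sketch of audio-only preprocessing: raw waveform -> log-magnitude
# spectrogram frames, with no text data involved. Window/hop sizes are
# illustrative, not the repo's actual settings.
import torch

def wav_to_frames(wav, n_fft=400, hop=160):
    """Waveform (samples,) -> log-magnitude spectrogram (frames, n_fft//2+1)."""
    window = torch.hann_window(n_fft)
    spec = torch.stft(wav, n_fft=n_fft, hop_length=hop,
                      window=window, return_complex=True)
    # spec: (freq_bins, frames); transpose to time-major and take log magnitude
    return torch.log(spec.abs() + 1e-6).T

wav = torch.randn(16000)          # 1 second of fake 16 kHz audio
frames = wav_to_frames(wav)       # (101, 201) with these settings
```

With `center=True` (the `torch.stft` default), a 16000-sample input and a hop of 160 yield 1 + 16000 // 160 = 101 frames, each with n_fft // 2 + 1 = 201 frequency bins.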
Text labels are required to train ASR, but this pre-trained model is unsupervised. Why would text labels be used in pre-training with a downstream ASR system? Shouldn't it all use voice data, like BERT and so on?
"Text labels are required to train ASR, but this pre-trained model is unsupervised."
That is correct.
"Why would text labels be used in pre-training with a downstream ASR system?"
Text labels are only for supervised training of an ASR system. We do not use text labels in the pre-training of the Mockingjay feature extraction model.
"Shouldn't it all use voice data, like BERT?"
Yes, only voice data is used for pre-training. We did not use any text labels in pre-training, hence it is like BERT.
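The BERT-like, voice-only pre-training objective can be sketched as masked frame reconstruction: mask a fraction of input frames and train the model to reconstruct them from context. This is a minimal illustration under stated assumptions; the tiny MLP encoder and the 15% masking ratio are stand-ins, as the actual Mockingjay model is a Transformer with its own masking scheme.

```python
# Hedged sketch of BERT-style pre-training on voice data only: mask a
# fraction of acoustic frames and reconstruct them with an L1 loss.
# The encoder and masking ratio are illustrative stand-ins.
import torch
import torch.nn as nn

FEAT_DIM = 160                      # acoustic feature dimension (assumed)
torch.manual_seed(0)

encoder = nn.Sequential(            # stand-in for the Transformer encoder
    nn.Linear(FEAT_DIM, 256), nn.ReLU(), nn.Linear(256, FEAT_DIM))

frames = torch.randn(2, 50, FEAT_DIM)      # unlabeled acoustic frames
mask = torch.rand(2, 50) < 0.15            # mask roughly 15% of frames
masked_input = frames.clone()
masked_input[mask] = 0.0                   # zero out the masked frames

pred = encoder(masked_input)
# Reconstruction loss is computed on the masked positions only, so the
# model must infer the missing frames from the surrounding context.
loss = nn.functional.l1_loss(pred[mask], frames[mask])
loss.backward()
```

No labels of any kind appear here: the training signal comes entirely from the audio itself, which is what makes the objective self-supervised in the same spirit as BERT's masked language modeling.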
thanks!