First, let me explain the distillation configuration above in case it is confusing.
BERT-base has 12 transformer layers, which we number 1 through 12; layer 0 denotes the embedding layer.
The example matches the student's layers to the teacher's layers evenly:
- teacher's 0th layer maps to student's 0th layer (the embedding layers)
- teacher's 3rd layer maps to student's 1st layer
- teacher's 6th layer maps to student's 2nd layer
- teacher's 9th layer maps to student's 3rd layer
- teacher's 12th layer (the teacher's last layer) maps to student's 4th layer (the student's last layer)
Since the hidden dimensions of the teacher and the student differ, we use a linear mapping to project the student's hidden states from 312 (the student's dimension) up to 768 (the teacher's dimension).
For each of the mappings above, we take the 'hidden' feature from the layer (which should be defined in the adaptor by the user; it is the user's responsibility to tell TextBrewer what 'hidden' is) and compute the 'hidden_mse' loss (defined in losses.py) between the student's and the teacher's features, as sketched below.
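As a concrete illustration, here is what that even mapping looks like as a config. This is a minimal sketch, assuming the 4-layer, 312-dimensional student described above; the 'proj' entries use the same ['linear', in_dim, out_dim] form as in the follow-up config further down.

from textbrewer import DistillationConfig

distill_config = DistillationConfig(
    intermediate_matches = [
        # teacher layer 0 (embeddings) -> student layer 0 (embeddings)
        {'layer_T': 0,  'layer_S': 0, 'feature': 'hidden', 'loss': 'hidden_mse', 'weight': 1, 'proj': ['linear', 312, 768]},
        # teacher layer 3 -> student layer 1
        {'layer_T': 3,  'layer_S': 1, 'feature': 'hidden', 'loss': 'hidden_mse', 'weight': 1, 'proj': ['linear', 312, 768]},
        # teacher layer 6 -> student layer 2
        {'layer_T': 6,  'layer_S': 2, 'feature': 'hidden', 'loss': 'hidden_mse', 'weight': 1, 'proj': ['linear', 312, 768]},
        # teacher layer 9 -> student layer 3
        {'layer_T': 9,  'layer_S': 3, 'feature': 'hidden', 'loss': 'hidden_mse', 'weight': 1, 'proj': ['linear', 312, 768]},
        # teacher layer 12 (last) -> student layer 4 (last); the 'hidden'
        # feature itself must be provided by the user-defined adaptor
        {'layer_T': 12, 'layer_S': 4, 'feature': 'hidden', 'loss': 'hidden_mse', 'weight': 1, 'proj': ['linear', 312, 768]},
    ]
)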
The following lines
{'layer_T' : [0,0], 'layer_S': [0,0], ....}
...
use a different loss, 'nst', which requires two similarity matrices.
For example, {'layer_T' : [0,0], 'layer_S': [0,0], ....} means:
- compute the similarity matrix between the 'hidden' feature from the teacher's 0th layer and the 'hidden' feature from the teacher's 0th layer (a self-similarity),
- compute the similarity matrix between the 'hidden' feature from the student's 0th layer and the 'hidden' feature from the student's 0th layer (a self-similarity),
- compute the 'nst' loss between the two similarity matrices above.
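To make that concrete, here is a rough sketch of the idea in PyTorch. This is my own illustration, not TextBrewer's exact implementation; it assumes 'nst' compares normalized self-similarity (Gram) matrices of the token hidden states with an MSE:

import torch
import torch.nn.functional as F

def nst_like_loss(hidden_S, hidden_T):
    # hidden_S: (batch, seq_len, dim_S), hidden_T: (batch, seq_len, dim_T).
    # The self-similarity matrices are both (batch, seq_len, seq_len)
    # regardless of dim_S and dim_T.
    gram_S = F.normalize(torch.bmm(hidden_S, hidden_S.transpose(1, 2)), dim=-1)
    gram_T = F.normalize(torch.bmm(hidden_T, hidden_T.transpose(1, 2)), dim=-1)
    return F.mse_loss(gram_S, gram_T)

This also shows why the 'nst' entries carry no 'proj': the similarity matrices have the same shape even when the teacher's and student's hidden sizes differ.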
For a three-layer, thinner BERT such as T3-small, you can map the layers 0-0, 4-1, 8-2, 12-3, and use 'proj': ['linear', 384, 768] to match the dimensions.
The lines that use the 'nst' loss can be removed if you want to keep the configuration simple.
Thank you so much for the detailed explanation. If you could add this to your docs, it would be super useful.
I am following up on the conll2003 example. I changed distill_config as follows. (I am using Transformers 4.17.0.)
distill_config = DistillationConfig(
    temperature = 8,
    # intermediate_matches = [{'layer_T': 10, 'layer_S': 3, 'feature': 'hidden', 'loss': 'hidden_mse', 'weight': 1}]
    intermediate_matches = [
        {'layer_T': 0,  'layer_S': 0, 'feature': 'hidden', 'loss': 'hidden_mse', 'weight': 1, 'proj': ['linear', 384, 768]},
        {'layer_T': 4,  'layer_S': 1, 'feature': 'hidden', 'loss': 'hidden_mse', 'weight': 1, 'proj': ['linear', 384, 768]},
        {'layer_T': 8,  'layer_S': 2, 'feature': 'hidden', 'loss': 'hidden_mse', 'weight': 1, 'proj': ['linear', 384, 768]},
        {'layer_T': 12, 'layer_S': 3, 'feature': 'hidden', 'loss': 'hidden_mse', 'weight': 1, 'proj': ['linear', 384, 768]},
    ]
)
The run_conll2003_distill_T3.sh file looks like the following:
export OUTPUT_DIR="resource/taggers/T3-small-bert-finetuned"
export BATCH_SIZE=32
export NUM_EPOCHS=3
export SAVE_STEPS=750
export SEED=42
export MAX_LENGTH=128
export BERT_MODEL_TEACHER="resource/taggers/bert-finetuned"
python run_ner_distill.py \
--data_dir english_dataset \
--model_type bert \
--labels label_prod.txt \
--model_name_or_path $BERT_MODEL_TEACHER \
--output_dir $OUTPUT_DIR \
--max_seq_length $MAX_LENGTH \
--num_train_epochs $NUM_EPOCHS \
--per_gpu_train_batch_size $BATCH_SIZE \
--num_hidden_layers 3 \
--save_steps $SAVE_STEPS \
--learning_rate 1e-4 \
--warmup_steps 0.1 \
--seed $SEED \
--do_distill \
--do_train \
--do_eval \
--do_predict
I am getting an index out of range error. Can you please check?
Traceback (most recent call last):
  File "/Users/akalia/Research_Projects/NER-EL-Evaluation/textbrewer_ner_distiller/run_ner_distill.py", line 531, in <module>
    main()
  File "/Users/akalia/Research_Projects/NER-EL-Evaluation/textbrewer_ner_distiller/run_ner_distill.py", line 460, in main
    train(args, train_dataset, model_T, model, tokenizer, labels, pad_token_label_id, predict_callback)
  File "/Users/akalia/Research_Projects/NER-EL-Evaluation/textbrewer_ner_distiller/run_ner_distill.py", line 147, in train
    distiller.train(optimizer, train_dataloader, args.num_train_epochs,
  File "/Users/akalia/Research_Projects/NER-EL-Evaluation/textbrewer_ner_distiller/textbrewer/distiller_basic.py", line 283, in train
    self.train_with_num_epochs(optimizer, scheduler, tqdm_disable, dataloader, max_grad_norm, num_epochs, callback, batch_postprocessor, **args)
  File "/Users/akalia/Research_Projects/NER-EL-Evaluation/textbrewer_ner_distiller/textbrewer/distiller_basic.py", line 212, in train_with_num_epochs
    total_loss, losses_dict = self.train_on_batch(batch, args)
  File "/Users/akalia/Research_Projects/NER-EL-Evaluation/textbrewer_ner_distiller/textbrewer/distiller_general.py", line 79, in train_on_batch
    total_loss, losses_dict = self.compute_loss(results_S, results_T)
  File "/Users/akalia/Research_Projects/NER-EL-Evaluation/textbrewer_ner_distiller/textbrewer/distiller_general.py", line 143, in compute_loss
    inter_S = inters_S[feature][layer_S]
IndexError: list index out of range
Did you set the model to return hidden states with model.config.output_hidden_states = True (if you distilled with hidden states)?
If it is still not working, would you please print the lengths of inters_S[feature] and inters_T[feature] by inserting a print statement at line 140 of textbrewer/distiller_general.py?
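For illustration, a minimal sketch of both suggestions (assuming model_T and model are the HuggingFace teacher and student objects from the training script):

# Make sure both models actually return hidden states; otherwise the
# 'hidden' list the adaptor builds is shorter than the layer indices in
# intermediate_matches expect (for the 3-layer student, layer_S=3 needs
# 4 entries: the embedding output plus 3 transformer layers).
model_T.config.output_hidden_states = True
model.config.output_hidden_states = True

# Temporary debug print, inserted near line 140 of
# textbrewer/distiller_general.py, just before the failing lookup:
print(len(inters_S[feature]), len(inters_T[feature]))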
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Closing the issue since no updates have been observed. Feel free to re-open if you need any further assistance.