1. We are working on NMT involving a very low-resource language for which no pretrained BERT or GPT2 checkpoints are available. How should we deal with this situation?
2. If we work with, e.g., EN-X (where X is a low-resource language), what should we do? Should we freeze either the encoder or the decoder and fine-tune the rest?
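To make question 2 concrete, here is a minimal sketch of the kind of freezing we have in mind, assuming a warm-started EncoderDecoderModel from transformers; the multilingual checkpoint name is only a placeholder, not a recommendation from the post:

```python
from transformers import EncoderDecoderModel

# Warm-start an encoder-decoder model from BERT checkpoints.
# "bert-base-multilingual-cased" is an illustrative placeholder only.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-multilingual-cased",  # encoder (source / EN side)
    "bert-base-multilingual-cased",  # decoder (target / X side)
)

# Freeze the encoder and fine-tune only the decoder; the opposite split
# works the same way through model.decoder.parameters().
for param in model.encoder.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```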
Thank you very much for your translation project. I have benefited a lot and learned a great deal from it. Could you adapt it to support various component combinations, such as bert2bert, gpt2gpt2, gpt2bert, and so on? I believe it would be a great piece of work.
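In case it helps clarify the request, here is a rough sketch of the kinds of combinations meant above, using transformers' EncoderDecoderModel; the checkpoint names are only illustrative assumptions:

```python
from transformers import EncoderDecoderModel

# bert2bert: BERT encoder warm-started together with a BERT decoder.
bert2bert = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

# bert2gpt2: BERT encoder with a GPT2 decoder.
bert2gpt2 = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "gpt2"
)

# gpt2-to-gpt2 or gpt2-to-bert would follow the same pattern, but GPT2 is a
# decoder-only model, so using it as the encoder is unusual and untested here.
```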