
[QUESTION] Memory footprint about comet (21 comments, closed)

vince62s commented on September 22, 2024
[QUESTION] Memory footprint


Comments (21)

vince62s commented on September 22, 2024

Hmm that seems like either the model is not converging or your ground truth is all the same scores.

It's the plain WMT 2020 DA csv file.

OK, with miniLM the learning_rate needs to be much higher; with that it works fine with the 2020 data.


ricardorei commented on September 22, 2024

You mean the DA's from WMT 22? Some years of WMT are known to have very noisy DA's. For WMT 22 I would not use them... For WMT you have the SQM data or the MQM from the metrics task. The DA's from WMT 2022 were collected only into English and are known to be noisy.


ricardorei commented on September 22, 2024

It's too big. I'll share it by email.


ricardorei commented on September 22, 2024

You can train an XLM-RoBERTa-large on a 24GB GPU if you keep the embeddings frozen. XLM-R embeddings take a lot of space, but keeping them frozen has no impact on performance and reduces memory a lot.
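For intuition, here is a minimal sketch (plain Hugging Face, not COMET's own training code) of what freezing the embedding layer of an XLM-R encoder looks like:

```python
# Sketch only: freezing the embedding layer of a plain XLM-R encoder.
# This is not COMET's code path; it just illustrates why the trick saves memory.
from transformers import XLMRobertaModel

encoder = XLMRobertaModel.from_pretrained("xlm-roberta-large")

# The embedding matrix is ~250k tokens x 1024 dims; freezing it means no
# gradients and no Adam moments for those parameters on the GPU.
for param in encoder.embeddings.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in encoder.parameters() if p.requires_grad)
total = sum(p.numel() for p in encoder.parameters())
print(f"trainable parameters: {trainable:,} / {total:,}")
```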


vince62s commented on September 22, 2024

How do I do that? Is it documented somewhere?
EDIT: found it, but they were frozen already ....

Also, is the exact same 1720-da.csv dataset downloadable somewhere? Because I am running tests independently with 17, 18, 19, but with 20 I am getting this error (encoder is miniLM):

Loading data/2020-da.csv.
Epoch 0:  30%|██████              | 3167/10555 [01:59<04:39, 26.45it/s, v_num=0]
Encoder model fine-tuning
Epoch 0: 100%|████████████████████| 10555/10555 [13:16<00:00, 13.26it/s, v_num=0]
████| 13/13 [00:00<00:00, 37.22it/s]
/home/vincent/miniconda3/envs/pt2.1.0/lib/python3.11/site-packages/scipy/stats/_stats_py.py:5445: ConstantInputWarning: An input array is constant; the correlation coefficient is not defined.
  warnings.warn(stats.ConstantInputWarning(warn_msg))
/home/vincent/miniconda3/envs/pt2.1.0/lib/python3.11/site-packages/scipy/stats/_stats_py.py:4781: ConstantInputWarning: An input array is constant; the correlation coefficient is not defined.
  warnings.warn(stats.ConstantInputWarning(msg))
Epoch 0: 100%|████████████████████| 10555/10555 [13:18<00:00, 13.22it/s, v_num=0, val_kendall=nan.0, val_spearman=nan.0, val_pearson=nan.0]
Epoch 0, global step 1320: 'val_kendall' reached -inf (best -inf), saving model to '/home/vincent/nlp/COMET/lightning_logs/version_0/checkpoints/epoch=0-step=1320-val_kendall=nan.ckpt' as top 5


ricardorei commented on September 22, 2024

Also, you should set precision to 16.

To keep embeddings frozen, just keep this flag at true.


ricardorei commented on September 22, 2024

Hmm that seems like either the model is not converging or your ground truth is all the same scores.

You can find the data here


vince62s commented on September 22, 2024

Embeddings frozen is already set to true in the unified_metric.yaml, so that's not helping.
When I set precision: 16, I get a warning saying it's better to use 16-mixed for AMP.
I'll try 16-mixed, but I think I got an error with plain 16.


dmar1n commented on September 22, 2024

Hi @vince62s,

The precision value I currently use to avoid the warning is 16-mixed (following this). Also, you might want to try with nr_frozen_epochs: 1.0 and a bigger value for accumulate_grad_batches.

Hope this helps.
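For reference, the hyper-parameters mentioned in this thread map onto the training YAML roughly like this (a hedged sketch; check the unified_metric.yaml shipped with your COMET version for the exact layout and key names):

```yaml
# Sketch of the relevant knobs, following the discussion above; verify the
# exact structure against the config file in your COMET checkout.
unified_metric:
  init_args:
    nr_frozen_epochs: 1.0        # keep the whole encoder frozen for the first epoch
    keep_embeddings_frozen: True # never unfreeze the (huge) embedding matrix
trainer:
  precision: 16-mixed            # AMP, as suggested by the Lightning warning
  accumulate_grad_batches: 8     # bigger effective batch without more VRAM
```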


vince62s commented on September 22, 2024

Hmm that seems like either the model is not converging or your ground truth is all the same scores.

It's the plain WMT 2020 DA csv file.


vince62s commented on September 22, 2024

Hi @vince62s,

The precision value I currently use to avoid the warning is 16-mixed (following this). Also, you might want to try with nr_frozen_epochs: 1.0 and a bigger value for accumulate_grad_batches.

Hope this helps.

The memory issue appears as soon as the encoder is no longer frozen, so to test (to avoid waiting) I put nr_frozen_epochs=0.0 so that I see right away whether things fit in the VRAM.
With precision: 16 / batch_size 4 we are at the very limit of 24GB; it would be a pity if it crashes. There could be two nice options: 1) a filter-too-long check to exclude examples that are very long and trigger this, and 2) a try/except when it goes OOM so that it can discard the batch and continue (see the sketch below).
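Neither safeguard exists in COMET; a rough sketch of what both could look like (the helper names and the 512-token cap are made up for illustration):

```python
# Sketch only: the two safeguards proposed above, outside of COMET.
import torch

MAX_TOKENS = 512  # hypothetical cap on src + mt + ref length

def filter_too_long(samples, tokenizer):
    """Option 1: drop triplets whose combined token count exceeds MAX_TOKENS."""
    kept = []
    for s in samples:
        n_tokens = sum(len(tokenizer.encode(s[key])) for key in ("src", "mt", "ref") if key in s)
        if n_tokens <= MAX_TOKENS:
            kept.append(s)
    return kept

def safe_training_step(model, batch, optimizer):
    """Option 2: skip a batch that does not fit in VRAM instead of crashing."""
    try:
        loss = model(batch)                    # assumed to return a scalar loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    except torch.cuda.OutOfMemoryError:
        optimizer.zero_grad(set_to_none=True)  # drop any partial gradients
        torch.cuda.empty_cache()               # release cached blocks and move on
        print("OOM on one batch, skipping it")
```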


vince62s commented on September 22, 2024

You can find the data here

@ricardorei, can you share the script that computes those csv files? I would like to redo the same but exclude some specific systems. Or do you have the same files with the system name as a column?
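If the csv did carry a system column, excluding systems would be a few lines of pandas (the column and system names below are assumptions, not the actual schema):

```python
# Hypothetical sketch: exclude specific systems from a DA csv, assuming it has
# a "system" column. Column and system names are placeholders.
import pandas as pd

df = pd.read_csv("data/2020-da.csv")
excluded = {"online-B", "Human-A.0"}  # made-up system names
if "system" in df.columns:
    df = df[~df["system"].isin(excluded)]
df.to_csv("data/2020-da.filtered.csv", index=False)
```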


ricardorei commented on September 22, 2024

I actually found the notebooks I used... but I did not save the data, just the raw notebooks. They should help you redo the data.

Archive.zip


ricardorei commented on September 22, 2024

They also point to the previous WMT websites where you can download the data.


vince62s commented on September 22, 2024

Thanks, in the meantime I managed to do it for WMT 2021. I was able to exclude one system, but it gives me the same results.

I still have an issue with the WMT 22 data: whatever the learning rate, when training only on that data it does not converge.


vince62s commented on September 22, 2024

But here: https://huggingface.co/datasets/RicardoRei/wmt-da-human-evaluation
it has some 2022 data; is it DA or something else?
I trained on the 2022 extract from there, so it must be DA.
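A quick way to check is to pull that year out of the linked Hugging Face dataset and inspect it (the year column name is an assumption; print the column names first):

```python
# Sketch: load the linked dataset and keep only the 2022 rows.
from datasets import load_dataset

ds = load_dataset("RicardoRei/wmt-da-human-evaluation", split="train")
print(ds.column_names)  # confirm which columns (and year field) are available

wmt22 = ds.filter(lambda row: row.get("year") == 2022)  # "year" is an assumed column name
print(len(wmt22), "rows from 2022")
```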


ricardorei commented on September 22, 2024

Yes, exactly. It's those DA's from WMT 22.


ricardorei commented on September 22, 2024

Usually I only use DA's from 2017 to 2020. Even those from 2021 I don't trust too much


vince62s commented on September 22, 2024

But do you have the exact dataset used for wmt23-cometkiwi-da-xl and for wmt22-cometkiwi-da?


ricardorei commented on September 22, 2024

Yes, I do. Let me download it and I'll share it here.

It's basically WMT 17 to 20 + MLQE-PE data.
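Rebuilding a comparable training set from the public csvs would then be a simple concatenation, assuming the yearly DA files and MLQE-PE share compatible columns (the filenames below are placeholders):

```python
# Sketch: stitch WMT 17-20 DA data and MLQE-PE into one training csv.
import pandas as pd

parts = [pd.read_csv(f"data/{year}-da.csv") for year in (2017, 2018, 2019, 2020)]
parts.append(pd.read_csv("data/mlqe-pe.csv"))  # hypothetical filename
train = pd.concat(parts, ignore_index=True)
train.to_csv("data/1720-da.csv", index=False)
```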


vince62s commented on September 22, 2024

Closing this, but training with XLM-RoBERTa large or XL is still an issue with 24GB of VRAM.
Maybe using LoRA would help and be the solution.
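COMET has no built-in LoRA support, but wrapping the encoder with PEFT would look roughly like this (an exploratory sketch, not a tested recipe):

```python
# Exploratory sketch: LoRA adapters on an XLM-R encoder via PEFT.
from peft import LoraConfig, get_peft_model
from transformers import XLMRobertaModel

encoder = XLMRobertaModel.from_pretrained("xlm-roberta-large")  # or facebook/xlm-roberta-xl
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query", "value"],  # attention projections in (XLM-)RoBERTa
    lora_dropout=0.05,
)
encoder = get_peft_model(encoder, lora_config)
encoder.print_trainable_parameters()  # only the low-rank adapters remain trainable
```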

