Comments (5)

parasj commented on July 3, 2024

Hi @Vitvicky and @kambehmw! Thanks for trying out our code.

It takes quite a bit of time to preprocess the data, so we provide pre-augmented datasets as part of this dataset folder: https://drive.google.com/file/d/1YfPTPPOv4evldpN-n_4QBDWDWFImv7xO/view?usp=sharing.
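If you want to fetch that file programmatically, a minimal sketch using the gdown package (an assumption on tooling; gdown keeps Drive's original filename when no output path is given):

import gdown  # pip install gdown

# File id copied from the Drive link above.
gdown.download("https://drive.google.com/uc?id=1YfPTPPOv4evldpN-n_4QBDWDWFImv7xO", quiet=False)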

However, if you want to modify some of the augmentations or add a new transform, here is an example set of commands we used to generate the augmented, pre-processed dataset:

# Split the training set into ~137 equal-sized shards with 3-digit numeric suffixes.
$ split -l$((`wc -l < javascript_dedupe_definitions_nonoverlap_v2_train.jsonl`/136)) javascript_dedupe_definitions_nonoverlap_v2_train.jsonl javascript_dedupe_definitions_nonoverlap_v2_train.split.jsonl -da 3
$ mkdir -p $OUTDIR
# Sanity-check the shard line counts, then run the compiler transforms over all shards in parallel.
$ find . -name "*split.jsonl*" | parallel --files -I% --max-args 1 -j137 wc -l %
$ find . -name "*split.jsonl*" | parallel --files -I% --max-args 1 -j137 node node_src/transform_jsonl.js % ../augmented/%.augmented
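Each shard's output lands in ../augmented/. If you need a single file afterwards, a minimal sketch for recombining the shards, assuming each .augmented output is plain JSONL (the gzipped output name is a placeholder):

import glob
import gzip

# Concatenate the per-shard outputs written by transform_jsonl.js above.
with gzip.open("javascript_augmented.jsonl.gz", "wt") as out:
    for shard in sorted(glob.glob("../augmented/*.augmented")):
        with open(shard) as f:
            for line in f:
                out.write(line)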

kambehmw commented on July 3, 2024

Thank you for sharing this code. I would also like to know how to perform the pre-processing and compiler transforms. I would appreciate it if you could update the README with the procedure.

QZH-eng commented on July 3, 2024

Hi @parasj, thanks for publishing your code. Could you share the specifications of the hardware used in your experiments? I found that 16 GB of memory is not enough when using the javascript_augmented.pickle.gz file.

parasj commented on July 3, 2024

Hi @QZH-eng -- thank you for trying our repository!

Pretraining is memory-hungry, as contrastive learning benefits from large batch sizes (see https://arxiv.org/abs/2002.05709). Moreover, the Transformer backbone we use consumes significantly more memory than typical image-classification architectures.

We generally performed pretraining on 2-4 16 GB V100 GPUs. Due to the high cost of pretraining, we provide pretrained checkpoints. Finetuning is much cheaper and was performed on a single V100 GPU.

Some recommendations to reduce memory consumption:
(1) reducing the sequence length for the Transformer encoder
(2) decreasing the hidden dimension size of our model
(3) adding checkpoint annotations for gradient checkpointing (e.g. PyTorch's torch.utils.checkpoint); see the sketch after this list
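For (3), a minimal sketch of gradient checkpointing with torch.utils.checkpoint; the layer sizes here are illustrative, not our exact configuration:

import torch
from torch.utils.checkpoint import checkpoint

class CheckpointedEncoder(torch.nn.Module):
    """Stack of Transformer encoder layers with per-layer checkpointing."""

    def __init__(self, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.layers = torch.nn.ModuleList(
            [torch.nn.TransformerEncoderLayer(d_model, nhead) for _ in range(num_layers)]
        )

    def forward(self, x):  # x: (seq_len, batch, d_model)
        for layer in self.layers:
            # Activations for this layer are discarded during the forward pass
            # and recomputed during backward, trading compute for memory.
            x = checkpoint(layer, x)
        return x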

parasj commented on July 3, 2024

@QZH-eng I created a new issue #17 -- please use this for further discussion
