Comments (4)
Hi @kambehmw,
Thank you for trying our repository! Are you trying to extend ContraCode to a new programming language?
Pretraining is memory hungry as contrastive learning benefits from large batch sizes (see https://arxiv.org/abs/2002.05709). Moreover, the transformer backbone we leverage uses significantly more memory than typical image classification architectures.
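For context on why batch size drives memory here: an InfoNCE/NT-Xent-style contrastive objective (as in the SimCLR paper linked above) compares every augmented view in a batch against every other, so it materializes a (2B x 2B) similarity matrix. A minimal sketch of such a loss, assuming normalized projection outputs; this is an illustration, not the repository's actual implementation, and the function name and temperature are placeholders:

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.07):
    """NT-Xent (InfoNCE) loss over a batch of paired embeddings.

    z1, z2: (batch, dim) projections of two augmented views of the
    same programs. The (2B x 2B) similarity matrix below is why
    contrastive pretraining benefits from, and pays memory for,
    large batch sizes.
    """
    b = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2B, dim)
    sim = z @ z.t() / temperature                       # (2B, 2B)
    # Exclude self-similarity from the softmax.
    sim.fill_diagonal_(float("-inf"))
    # The positive for sample i is its other view at index (i + B) mod 2B.
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)
```

Halving the batch size quarters the similarity matrix, but typically hurts contrastive learning because each example sees fewer negatives.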
We generally performed pretraining on four 16 GB V100 GPUs, two 48 GB RTX 8000 GPUs, or four 24 GB RTX 6000 GPUs. Because pretraining is so expensive, we provide pretrained checkpoints.
Given the lower memory capacity of the RTX 2080 Ti, I would recommend (1) reducing the sequence length for the Transformer encoder, (2) decreasing the hidden dimension size of our model, and (3) adding checkpoint annotations for gradient checkpointing (e.g. PyTorch gradient checkpointing).
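Suggestion (3) can be sketched with `torch.utils.checkpoint`; the module below is a toy encoder, not ContraCode's actual architecture, and the layer sizes are placeholders:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class CheckpointedEncoder(nn.Module):
    """Toy Transformer encoder using gradient checkpointing: each
    layer's activations are recomputed during backward instead of
    being stored, trading extra compute for a smaller memory footprint.
    Dimensions here are illustrative, not ContraCode's configuration."""

    def __init__(self, d_model=128, n_layers=4, n_heads=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=256)
             for _ in range(n_layers)]
        )

    def forward(self, x):
        for layer in self.layers:
            if self.training:
                # Activations inside `layer` are discarded after the
                # forward pass and recomputed in backward.
                x = checkpoint(layer, x, use_reentrant=False)
            else:
                x = layer(x)
        return x
```

Checkpointing every encoder layer roughly doubles forward compute during training but can cut activation memory enough to fit larger batches or sequence lengths on an 11 GB card.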
Thanks,
Paras
from contracode.
Thanks for your reply.
I am tinkering with your source code to try to understand the methodology of your paper.
If I can come up with an idea, I will try to extend ContraCode to a new programming language.
Thanks for the memory-saving advice as well. I'll also consider renting higher-spec GPUs in the cloud.
Thanks again.
@kambehmw Thanks! Happy to discuss further as well; my email is in my profile.
@parasj
Let me ask one more question.
We ran pretraining in the following GCP environment:
- n1-standard-16 (vCPU x 16, 60 GB memory)
- V100 x 4
- HDD 512GB
As in the README, we ran the following command:
python representjs/pretrain_distributed.py pretrain_lstm2l_hidden \
--num_epochs=200 --batch_size=512 --lr=1e-4 --num_workers=4 \
--subword_regularization_alpha 0.1 --program_mode contrastive --label_mode contrastive --save_every 5000 \
--train_filepath=data/codesearchnet_javascript/javascript_augmented.pickle.gz \
--spm_filepath=data/codesearchnet_javascript/csnjs_8k_9995p_unigram_url.model \
--min_alternatives 2 --dist_url tcp://localhost:10001 --rank 0 \
--encoder_type lstm --lstm_project_mode hidden --n_encoder_layers 2
The data/codesearchnet_javascript/javascript_augmented.pickle.gz file takes a very long time to load: after about 1-2 hours it still had not finished. How long did loading take when you ran it? (Is it because of our GCP environment?)
Also, do you have any ideas for reducing the load time?