Comments (3)
Hi, I've been planning to train this model. I have a TPU pod (v3-128) through TRC, which should equate to ~5 TB of RAM and 2 TB of VRAM. I had a few questions about how to begin training the model.
- What would be the appropriate dataset to train on? I am currently considering using The Pile for pre-training, but gathering human feedback for RLHF still seems like a challenge.
- How large of a model would you recommend?
- I saw you mentioned Flash Attention. Are there any drawbacks to using it? It seems to be practically the best attention implementation available.
Thanks for all of your implementations; they have been really helpful to learn from.
I am currently doing a distributed training run and will be open-sourcing all of the weights for a PaLM-rlhf-pytorch model. I will open a PR when training finishes.
To answer these questions:
- The Pile, or its deduplicated version, is currently the best bet for a large open-source dataset. The Pile with the NeoX tokenizer is over ~300B tokens. I am releasing a massive instruction FLAN dataset soon with the help of the original Google authors; you could combine both of these datasets. There are also a few different human feedback datasets, such as the HH dataset by Anthropic on the Hugging Face Hub (see the dataset-loading sketch after this list).
- The size of the model depends entirely on the amount of compute you have and how many tokens you want to train on. You would have to estimate how long it would take to train a model of a given size, with a given batch size, on your TPU pod, factoring in the sequence length you plan on using as well as the TFLOPs you can actually sustain (see the compute-estimate sketch after this list).
- You can't use Flash Attention with TPUs.
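As a minimal sketch of pulling one of the human feedback datasets mentioned above, assuming the Hugging Face `datasets` library and the `Anthropic/hh-rlhf` dataset id on the Hub (the dataset id and column names here are assumptions for illustration, not something specified in this thread):

```python
# Hypothetical example: loading Anthropic's HH feedback data from the Hub.
# The dataset id and column names are assumptions; check the Hub page for details.
from datasets import load_dataset

hh = load_dataset("Anthropic/hh-rlhf", split="train")
print(hh[0]["chosen"][:200])  # each row is assumed to hold a "chosen" and a "rejected" dialogue
```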
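For the compute question, here is a rough back-of-envelope sketch. The 6·N·D FLOPs rule of thumb, the per-chip peak, and the utilization figure are all assumptions for illustration, not numbers from this thread; plug in whatever you measure on your own pod.

```python
# Back-of-envelope training-time estimate (a sketch, not a definitive calculator).
def training_days(n_params, n_tokens, chips=64, peak_tflops_per_chip=105.0, mfu=0.35):
    """Estimate wall-clock training time in days.

    n_params             -- model parameters (e.g. 1e9 for a 1B model)
    n_tokens             -- tokens to train on (e.g. 300e9 for The Pile)
    chips                -- a v3-128 pod has 64 chips (128 cores)
    peak_tflops_per_chip -- assumed bf16 peak per TPU v3 chip
    mfu                  -- assumed model FLOPs utilization (hardware/impl dependent)
    """
    total_flops = 6.0 * n_params * n_tokens              # ~6*N*D forward+backward approximation
    sustained_flops_per_s = chips * peak_tflops_per_chip * 1e12 * mfu
    return total_flops / sustained_flops_per_s / 86400

# e.g. a 1B-parameter model on ~300B tokens:
print(f"{training_days(1e9, 300e9):.1f} days")
```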
I will update as training progresses.
Best,
Enrico
Hi Enrico,
Thanks for your response. On the note of Flash Attention not being possible on TPUs, does this imply that TPU context size/efficiency will be substantially behind GPUs for the foreseeable future? Or is there an alternative way to get similar improvements on TPUs? Memory management and partitioning have been what I struggle with the most when trying to train on TPUs with JAX.
Thanks,
Aakash
TPUs are already highly efficient at scale. You can benchmark Flash Attention against a plain JAX attention implementation if you are interested in exact speed comparisons.
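If you want to run that comparison yourself, here is a minimal sketch of timing a vanilla attention forward pass in JAX. The shapes, dtype, and timing loop are illustrative assumptions, and the Flash Attention side of the comparison would be measured separately on a GPU with its own harness.

```python
# A minimal JAX timing sketch for standard O(n^2) attention (assumed shapes/dtype).
import time
import jax
import jax.numpy as jnp

def vanilla_attention(q, k, v):
    # Standard scaled dot-product attention.
    scale = q.shape[-1] ** -0.5
    scores = jnp.einsum("bhid,bhjd->bhij", q, k) * scale
    weights = jax.nn.softmax(scores, axis=-1)
    return jnp.einsum("bhij,bhjd->bhid", weights, v)

attention = jax.jit(vanilla_attention)

batch, heads, seq_len, dim = 8, 16, 2048, 64
key = jax.random.PRNGKey(0)
q, k, v = (jax.random.normal(key, (batch, heads, seq_len, dim), dtype=jnp.bfloat16)
           for _ in range(3))

# Warm up once to trigger compilation, then time steady-state runs.
attention(q, k, v).block_until_ready()
start = time.perf_counter()
for _ in range(10):
    out = attention(q, k, v)
out.block_until_ready()
print(f"avg forward: {(time.perf_counter() - start) / 10 * 1e3:.2f} ms")
```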