Comments (6)
Hi,
Thanks for the question! I think the default is outdated and not optimal. I would recommend following the README instructions and running with end-to-end training by adding the argument --app "--predict_xstart True --training_mode e2e".
For the randomly initialized embedding, I find that larger embedding dimensions would hurt performance, especially if you don't set --predict_xstart True. My intuition is that the model would need to spend too much modeling effort memorizing the large embeddings.
In general, if you use the command in the README (i.e., --app "--predict_xstart True --training_mode e2e") to train, the dimension size is more of a hyperparameter. For harder datasets, it's important to set the dimension size larger (e.g. 128 or 256), though further expanding the embedding size can lead to diminishing returns. For easier datasets, it's fine to set the dimension even smaller.
The other factor you asked about is vocab size. I think a smaller vocab size is compatible with modeling via smaller embedding dimensions; enlarging the vocab size also requires making the embedding dimension larger. I have tried some preliminary experiments using the full BERT vocabulary, and it also seems to generate fluent text.
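Since the relationship between embedding dimension and vocab size comes up repeatedly in this thread, here is a minimal, hypothetical PyTorch sketch (not the repo's actual code) of the embed-then-round setup being discussed; the numbers 821 and 16 are just the E2E-sized values mentioned above.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: tokens are mapped into a d-dimensional embedding
# space where diffusion runs, and continuous vectors are rounded back to
# the nearest embedding row. Every extra dimension is more that the
# end-to-end objective has to learn/memorize.

vocab_size, dim = 821, 16            # E2E-sized vocab, small embedding dim
emb = nn.Embedding(vocab_size, dim)

def round_to_tokens(x: torch.Tensor) -> torch.Tensor:
    """Nearest-neighbour lookup of continuous vectors against the embedding table."""
    dists = torch.cdist(x, emb.weight)   # (seq_len, vocab_size) distances
    return dists.argmin(dim=-1)          # token ids

tokens = torch.randint(0, vocab_size, (64,))
x0 = emb(tokens)                         # continuous x_0, shape (64, 16)
recovered = round_to_tokens(x0)          # noiseless vectors round back exactly
print((recovered == tokens).all())
```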
Hope it helps!
Thanks for the reply!
To clarify, you're suggesting removing the --vocab_size 821/11043 argument from the README commands? Does that keep the vocab larger? (I trained the two models for E2E and ROC with the commands in the README as given.) Also, since you used spaCy, you have whitespace tokenization (relevant to a later comment).
Cool. So I'm clearly not a reviewer, otherwise I wouldn't be chatting with you directly at this time, but I guess if I were, a core question/request I'd have would be a better understanding of the relationship between the diffusion random variable dimensionality, say 16, and the Transformer hidden state dimensionality, say 128. It's not obvious to me what choices you'd need to make about the expressiveness of the diffusion space versus the expressiveness of the Transformer's hidden space, e.g. whether you'd need strong upscaling to > 2x the diffusion vector size.
Your figure 4 does give an ablation of learned versus fixed embeddings, as well as …
Based on the dynamics of your ROC Stories experiment (I decoded checkpoints at 10k-step increments), it takes a while for the larger-vocabulary/dimensionality model to move away from repeating the same words and structures and start generating variety, grammar, and appropriate padding. I'm curious to see whether your Diffusion-LM method scales beyond these vocabularies to full-scale BPE/subword tokenization schemes, language-modeling dataset sizes, less "homogeneous" sentence structures than the restaurants and ROC data, and longer sequence lengths.
Inspired to maybe look into some of these myself 🙂
re1: No, I am not saying that you should remove --vocab_size xxx. If you remove it, you would get a runtime error about a dimension-size mismatch. You would probably need to go into the code and change the tokenizer setup (i.e., the UNK thresholding) to adjust the vocab size.
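For illustration, a hedged sketch of the kind of UNK-thresholding vocabulary setup being described; the function, special tokens, and cutoff here are assumptions, not the repo's exact code:

```python
from collections import Counter

# Words below a frequency cutoff collapse to UNK, which is what determines
# the effective --vocab_size. Names and special tokens are illustrative.

def build_vocab(sentences, min_freq=10):
    counts = Counter(tok for sent in sentences for tok in sent.split())
    vocab = {"PAD": 0, "UNK": 1, "START": 2, "END": 3}
    for word, freq in counts.most_common():
        if freq >= min_freq:
            vocab[word] = len(vocab)
    return vocab

# Lowering min_freq grows the vocab (and the required --vocab_size);
# raising it shrinks the vocab toward more UNKs.
print(len(build_vocab(["the food was good", "the food was bad"], min_freq=1)))
```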
re2: I didn't try adjusting the dimension of the Transformer block. You are right, one could try adjusting the output dimension of the Transformer model, and this could be an interesting ablation study. For all my experiments, I use the same Transformer architecture, so I don't have much intuition about whether you need "strong upscaling to > 2x the diffusion vector size".
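To make the question concrete, here is a hypothetical PyTorch sketch of what such up/down projections could look like, diffusing in 16 dims while the Transformer runs at hidden size 128 (timestep conditioning omitted for brevity; this is not the repo's actual module):

```python
import torch
import torch.nn as nn

class ProjectedDenoiser(nn.Module):
    """Diffusion in a small space; Transformer computation in a larger one."""

    def __init__(self, diffusion_dim=16, hidden_dim=128):
        super().__init__()
        self.up = nn.Linear(diffusion_dim, hidden_dim)    # x_t -> hidden space
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=6)
        self.down = nn.Linear(hidden_dim, diffusion_dim)  # hidden -> x_0 prediction

    def forward(self, x_t):                               # (batch, seq, diffusion_dim)
        h = self.transformer(self.up(x_t))
        return self.down(h)                               # same shape as x_t

model = ProjectedDenoiser()
x_t = torch.randn(4, 64, 16)
x0_hat = model(x_t)                                       # (4, 64, 16)
```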
- 👍 yup, that's what I thought
- Yea of course, there's always a long tail of things to try, no worries
Hi again,
I was wondering whether you could provide any more insight into the parameter settings you used when you "tried some preliminary experiments using full BERT vocabulary, and it also seems to generate fluent text."
I'm working on a slight reimplementation of your setup, and what doesn't seem to work is a standard BERT tokenizer (30k vocab) together with the standard embedding/hidden dimension for the model. Basically a standard HF model configured like so:
"hidden_size": 768, # model and embedding dim
"intermediate_size": 3072,
"max_position_embeddings": 512, # max seq len
"num_attention_heads": 12,
"num_hidden_layers": 12,
Without your up-projection and down-projection layers, and at the standard BERT max sequence length, this would mean that the diffusion is occurring in a (512, 768)-dim space, i.e. this is the shape of the "image" you're passing to whatever diffusion utility library you're using to implement noising and schedules.
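For concreteness, a minimal sketch of that forward-noising step over the (512, 768) "image" of word vectors, with an assumed linear schedule; the names are illustrative, not any particular library's API:

```python
import torch

T = 2000                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # linear schedule (an assumption)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) for a batch of 'images' of word vectors."""
    a_bar = alphas_cumprod[t]
    noise = torch.randn_like(x0)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

x0 = torch.randn(8, 512, 768)              # batch of embedded sequences
x_t = q_sample(x0, t=100)                  # shape stays (8, 512, 768) throughout
```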
Did you ever experiment on this scale?
Also, the datasets you used have a somewhat more repetitive structure than standard LM data like WikiText or C4. Did you ever attempt this with data with more structural diversity?
Hi,
I have tried using the BERT tokenizer (30k vocab) but with a smaller embedding dimension of 128, and it seems to be working. So I would recommend trying that first, before scaling to dim=768.
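A hedged sketch of that suggested starting point: the full BERT vocab, but a 128-dim diffusion embedding rather than BERT's 768. The variable names are illustrative and the actual flag in the repo may differ:

```python
import torch.nn as nn
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
embedding_dim = 128                               # small, as recommended above
word_emb = nn.Embedding(tokenizer.vocab_size, embedding_dim)

ids = tokenizer("the staff was friendly", return_tensors="pt")["input_ids"]
x0 = word_emb(ids)                                # (1, seq_len, 128): diffusion space
print(x0.shape)
```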