microsoft / LoRA

Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"

Home Page: https://arxiv.org/abs/2106.09685

License: MIT License

Python 100.00%
gpt-2 adaptation language-model gpt-3 low-rank pytorch deep-learning roberta deberta lora

lora's Introduction

LoRA: Low-Rank Adaptation of Large Language Models

This repo contains the source code of the Python package loralib and several examples of how to integrate it with PyTorch models, such as those in Hugging Face. We only support PyTorch for now. See our paper for a detailed description of LoRA.

LoRA: Low-Rank Adaptation of Large Language Models
Edward J. Hu*, Yelong Shen*, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
Paper: https://arxiv.org/abs/2106.09685
Video explainer: https://www.youtube.com/watch?v=DhRoTONcyZE

Update 2/2023: LoRA is now supported by the State-of-the-art Parameter-Efficient Fine-Tuning (PEFT) library by Hugging Face.

LoRA reduces the number of trainable parameters by learning pairs of rank-decomposition matrices while freezing the original weights. This vastly reduces the storage requirement for large language models adapted to specific tasks and enables efficient task-switching during deployment, all without introducing inference latency. LoRA also outperforms several other adaptation methods, including adapters, prefix-tuning, and fine-tuning.
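As a rough illustration of the idea (a minimal sketch only, not loralib's actual implementation; the class name and initialization below are made up for this example), the pretrained weight W0 stays frozen while a small pair of matrices B and A is trained, so the adapted layer computes W0 x + (alpha/r) * B A x:

import torch
import torch.nn as nn

class LowRankAdaptedLinear(nn.Module):
    """Illustrative only: y = W0 x + (alpha/r) * B A x, with W0 frozen."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight W0
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

Only lora_A and lora_B need to be trained and stored per task, which is where the storage savings and cheap task-switching come from.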

We obtain results comparable to or better than full fine-tuning on the GLUE benchmark using RoBERTa (Liu et al., 2019) base and large and DeBERTa (He et al., 2020) XXL 1.5B, while only training and storing a fraction of the parameters. Click the numbers below to download the RoBERTa and DeBERTa LoRA checkpoints.

| | RoBERTa base (Fine-tune) | RoBERTa base (LoRA) | DeBERTa XXL (Fine-tune) | DeBERTa XXL (LoRA) |
| # of Trainable Params. | 125M | 0.8M | 1.5B | 4.7M |
| MNLI (m-Acc/mm-Acc) | 87.6 | 87.5±.3/86.9±.3 | 91.7/91.9 | 91.9±.1/91.9±.2 |
| SST-2 (Acc) | 94.8 | 95.1±.2 | 97.2 | 96.9±.2 |
| MRPC (Acc) | 90.2 | 89.7±.7 | 92.0 | 92.6±.6 |
| CoLA (Matthews Corr) | 63.6 | 63.4±1.2 | 72.0 | 72.4±1.1 |
| QNLI (Acc) | 92.8 | 93.3±.3 | 96.0 | 96.0±.1 |
| QQP (Acc) | 91.9 | 90.8±.1 | 92.7 | 92.9±.1 |
| RTE (Acc) | 78.7 | 86.6±.7 | 93.9 | 94.9±.4 |
| STS-B (Pearson/Spearman Corr) | 91.2 | 91.5±.2/91.3±.2 | 92.9/92.6 | 93.0±.2/92.9±.3 |
| Average | 86.40 | 87.24 | 91.06 | 91.32 |

Note: You still need the original pre-trained checkpoint from Hugging Face to use the LoRA checkpoints.

Fine-tuning numbers are taken from Liu et al. (2019) and He et al. (2020). We include confidence intervals on results from our experiments. Please follow the instructions in examples/NLU/ to reproduce our results.

On GPT-2, LoRA compares favorably to both full finetuning and other efficient tuning methods, such as adapter (Houlsby et al., 2019) and prefix tuning (Li and Liang, 2021). We evaluated on E2E NLG Challenge, DART, and WebNLG:

| Method | # of Trainable Params | E2E (BLEU) | DART (BLEU) | WebNLG (BLEU-U/S/A) |
| GPT-2 M (Fine-Tune) | 354.92M | 68.2 | 46.0 | 30.4/63.2/47.6 |
| GPT-2 M (Adapter) | 0.37M | 66.3 | 42.4 | 45.1/54.5/50.2 |
| GPT-2 M (Prefix) | 0.35M | 69.7 | 45.7 | 44.1/63.1/54.4 |
| GPT-2 M (LoRA) | 0.35M | 70.4±.1 | 47.1±.2 | 46.7±.4/62.1±.2/55.3±.2 |
| GPT-2 L (Fine-Tune) | 774.03M | 68.5 | 46.5 | 41.7/64.6/54.2 |
| GPT-2 L (Adapter) | 0.88M | 69.1±.1 | 45.7±.1 | 49.8±.0/61.1±.0/56.0±.0 |
| GPT-2 L (Prefix) | 0.77M | 70.3 | 46.5 | 47.0/64.2/56.4 |
| GPT-2 L (LoRA) | 0.77M | 70.4±.1 | 47.5±.1 | 48.4±.3/64.0±.3/57.0±.1 |

Non-LoRA baselines, except for adapter on GPT-2 large, are taken from Li and Liang (2021). We include confidence intervals on results from our experiments.

Download the GPT-2 LoRA checkpoints:

Please follow the instructions in examples/NLG/ to reproduce our results.

Repository Overview

(The initial release of this repo has been archived in the branch "snapshot-9-15-2021")

There are several directories in this repo:

  • loralib/ contains the source code for the package loralib, which needs to be installed to run the examples we provide;
  • examples/NLG/ contains an example implementation of LoRA in GPT-2 using our package, which can be used to reproduce the result in our paper;
  • examples/NLU/ contains an example implementation of LoRA in RoBERTa and DeBERTa using our package, which produces competitive results on the GLUE benchmark;
  • See how we use loralib in GPT-2, RoBERTa, and DeBERTa v2

Quickstart

  1. Installing loralib is simply
pip install loralib
# Alternatively
# pip install git+https://github.com/microsoft/LoRA
  2. You can choose to adapt some layers by replacing them with counterparts implemented in loralib. We only support nn.Linear, nn.Embedding, and nn.Conv2d for now. We also support a MergedLinear for cases where a single nn.Linear represents more than one layer, such as in some implementations of the attention qkv projection (see Additional Notes for more).
# ===== Before =====
# layer = nn.Linear(in_features, out_features)

# ===== After ======
import loralib as lora
# Add a pair of low-rank adaptation matrices with rank r=16
layer = lora.Linear(in_features, out_features, r=16)
  3. Before the training loop begins, mark only LoRA parameters as trainable.
import loralib as lora
model = BigModel()
# This sets requires_grad to False for all parameters without the string "lora_" in their names
lora.mark_only_lora_as_trainable(model)
# Training loop
for batch in dataloader:
   ...
  4. When saving a checkpoint, generate a state_dict that only contains LoRA parameters.
# ===== Before =====
# torch.save(model.state_dict(), checkpoint_path)
# ===== After =====
torch.save(lora.lora_state_dict(model), checkpoint_path)
  5. When loading a checkpoint using load_state_dict, be sure to set strict=False.
# Load the pretrained checkpoint first
model.load_state_dict(torch.load('ckpt_pretrained.pt'), strict=False)
# Then load the LoRA checkpoint
model.load_state_dict(torch.load('ckpt_lora.pt'), strict=False)

Now training can proceed as usual.
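Putting the steps above together, a minimal end-to-end loop might look like the sketch below (BigModel, dataloader, and the loss computation are placeholders; only the loralib calls shown in the steps above are used):

import torch
import loralib as lora

model = BigModel()  # placeholder: your model, with lora.* layers substituted in
model.load_state_dict(torch.load('ckpt_pretrained.pt'), strict=False)
lora.mark_only_lora_as_trainable(model)

# Only LoRA parameters require gradients, so only they are handed to the optimizer.
optimizer = torch.optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)

for batch in dataloader:  # placeholder dataloader
    loss = model(**batch)  # placeholder: however your model computes the loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Save only the (small) LoRA weights for this task
torch.save(lora.lora_state_dict(model), 'ckpt_lora.pt')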

Additional Notes

  1. While our examples focus on a simple yet effective setup, namely adapting only the q and v projections in a Transformer, LoRA can be applied to any subset of pre-trained weights. We encourage you to explore different configurations, such as adapting the embedding layer by replacing nn.Embedding with lora.Embedding and/or adapting the MLP layers, as sketched below. It's very likely that the optimal configuration varies for different model architectures and tasks.
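For instance, adapting the token embedding could look something like this (a sketch; lora.Embedding is assumed to mirror nn.Embedding with the extra rank argument r, analogous to lora.Linear above):

import loralib as lora

vocab_size, d_model = 50257, 768  # example sizes
# ===== Before =====
# embedding = nn.Embedding(vocab_size, d_model)
# ===== After =====
embedding = lora.Embedding(vocab_size, d_model, r=16)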

  2. Some Transformer implementations use a single nn.Linear for the projection matrices of query, key, and value. If one wishes to constrain the rank of the updates to the individual matrices, one has to either break it up into three separate matrices or use lora.MergedLinear. Make sure to modify the checkpoint accordingly if you choose to break up the layer.

# ===== Before =====
# qkv_proj = nn.Linear(d_model, 3*d_model)
# ===== After =====
# Break it up (remember to modify the pretrained checkpoint accordingly)
q_proj = lora.Linear(d_model, d_model, r=8)
k_proj = nn.Linear(d_model, d_model)
v_proj = lora.Linear(d_model, d_model, r=8)
# Alternatively, use lora.MergedLinear (recommended)
qkv_proj = lora.MergedLinear(d_model, 3*d_model, r=8, enable_lora=[True, False, True])
  3. Training bias vectors in tandem with LoRA might be a cost-efficient way to squeeze out extra task performance (if you tune the learning rate carefully). While we did not study its effect thoroughly in our paper, we make it easy to try in loralib. You can mark some biases as trainable by passing "all" or "lora_only" to bias= when calling mark_only_lora_as_trainable. Remember to pass the corresponding bias= argument to lora_state_dict when saving a checkpoint.
# ===== Before =====
# lora.mark_only_lora_as_trainable(model) # Not training any bias vectors
# ===== After =====
# Training all bias vectors associated with modules we apply LoRA to 
lora.mark_only_lora_as_trainable(model, bias='lora_only')
# Alternatively, we can train *all* bias vectors in the model, including LayerNorm biases
lora.mark_only_lora_as_trainable(model, bias='all')
# When saving a checkpoint, use the same bias= ('all' or 'lora_only')
torch.save(lora.lora_state_dict(model, bias='all'), checkpoint_path)
  4. Calling model.eval() will trigger the merging of LoRA parameters with the corresponding pretrained ones, which eliminates additional latency for subsequent forward passes. Calling model.train() again will undo the merge. This can be disabled by passing merge_weights=False to LoRA layers.
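For example, a sketch of the behavior described above (the exact effect depends on each layer's merge_weights flag, which defaults to True):

import loralib as lora

layer = lora.Linear(768, 768, r=16)  # merge_weights defaults to True
layer.eval()   # the update B @ A (times the scaling) is folded into the frozen weight
layer.train()  # the merge is undone so that A and B can keep training
no_merge = lora.Linear(768, 768, r=16, merge_weights=False)  # never merges, even in eval mode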

Contact

Please contact us or post an issue if you have any questions.

For questions related to the package loralib:

The GPT-2 example:

The RoBERTa/DeBERTa example:

Acknowledgements

We thank in alphabetical order Jianfeng Gao, Jade Huang, Jiayuan Huang, Lisa Xiang Li, Xiaodong Liu, Yabin Liu, Benjamin Van Durme, Luis Vargas, Haoran Wei, Peter Welinder, and Greg Yang for providing valuable feedback.

Citation

@inproceedings{
hu2022lora,
title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=nZeVKeeFYf9}
}

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

lora's People

Contributors

ar-kareem, danielzgtg, dannyadkins, dependabot[bot], edwardjhu, eltociear, hongwen-sun, infrared1029, luw315, microsoft-github-policy-service[bot], msft-edward, nivibilla


lora's Issues

Replicating Result on WebNLG

Thanks for your nice work.

I am trying to replicate the results on WebNLG, but the final checkpoint is only at 11270, not 20000. This results in a significant difference in accuracy between my reproduction and your reported results.

Here is my command:

python -m torch.distributed.launch --nproc_per_node=1 src/gpt2_ft.py \
    --train_data ./data/webnlg_challenge_2017/train.jsonl \
    --valid_data ./data/webnlg_challenge_2017/valid.jsonl \
    --train_batch_size 8 \
    --grad_acc 1 \
    --valid_batch_size 4 \
    --seq_len 512 \
    --model_card gpt2.md \
    --init_checkpoint ./pretrained_checkpoints/gpt2-medium-pytorch_model.bin \
    --platform local \
    --clip 0.0 \
    --lr 0.0002 \
    --weight_decay 0.01 \
    --correct_bias \
    --adam_beta2 0.999 \
    --scheduler linear \
    --warmup_step 500 \
    --max_epoch 5 \
    --save_interval 1000 \
    --lora_dim 4 \
    --lora_alpha 32 \
    --lora_dropout 0.1 \
    --label_smooth 0.1 \
    --work_dir ./trained_models/GPT2_M/webnlgv9 \
    --random_seed 110

Different hyper-parameter between the paper and the code? (lora_alpha and a global batch size)

Hello, thank you for sharing the source code. While trying to reproduce the SST-2 result with the RoBERTa-base model, I ran into some questions about the hyper-parameters lora_alpha and the global batch size, since the hyper-parameter settings in the paper and the reproduction script that does both training and evaluation (examples/NLU/roberta_base_sst2.sh) conflict in places.

First of all, is the reproducing script the actual script that you used for creating the numbers for the paper?


  1. lora-alpha (8 or 16?)
    I'd like to know the exact lora-alpha that you used in training.
    In Appendix D, lora_alpha is 8. However, in examples/NLU/roberta_base_sst2.sh, it is written that lora-alpha is 16.


https://github.com/microsoft/LoRA/blob/70ca1efd17b6ca4a45bbdba98554d5b312a8d48c/examples/NLU/roberta_base_sst2.sh#L24

When I tried evaluation, lora_alpha=16 gave the better result.

Maybe lora_alpha was 8 in training but 16 in evaluation, or something else entirely; it's a little confusing.

  2. global batch size while training (16? 64? 128? or something else?)
    In Appendix D, the batch size is 16, so I assumed 16 was the global batch size during training. However, in examples/NLU/roberta_base_sst2.sh, per_device_train_batch_size is 16 and the number of GPUs is 8 (so the global batch size should be 128). Moreover, the explanation in https://github.com/microsoft/LoRA/tree/main/examples/NLU#adapting-to-the-glue-benchmark says that the number of GPUs used is 4 (so the global batch size should be 64).

When the global batch size was 128, my reproduction result was lower than in the paper (94.5 accuracy). Thanks.

  3. weight decay of the AdamW optimizer
    The weight decay hyper-parameter is in the script examples/NLU/roberta_base_sst2.sh, but is not mentioned in the paper (for the GLUE tasks).
    Did you use weight decay?

I wrote down my understanding of your hyper-parameter settings and would appreciate a confirmation of the exact values.

A question about the implementation of LoRA in the GLUE Benchmark

In the scripts for the GLUE benchmark, for instance "roberta_base_mrpc.sh" and "roberta_base_rte.sh", you include the argument "--lora_path roberta_base_lora_mnli.bin", and the final result is quite high. But without initializing the LoRA layers from "roberta_base_lora_mnli.bin", the result goes down. I wonder why we need to initialize the LoRA layers with it.

Clarifying questions about the paper

Hi,
Thank you for sharing the source code. I really enjoy the work you propose.

While reading the paper and reproducing the results I got a couple of questions:

  1. In Table 3, the row with the results for GPT-2 M (AdapterH) is written without an asterisk, but I couldn't find any source code that implements GPT-2 M with AdapterH. Is this a typo?
  2. Regarding the computation of METEOR for the WebNLG and DART datasets: I can't reproduce this metric with the script you propose from the GenerationEval repo. I wrote my own script that evaluates WebNLG and DART using the Hugging Face evaluate library and got the same (very close) results. So how did you obtain your METEOR scores?

LoRA/loralib/layers.py line 151-155 nn.Linear.forward()

Hi,

For LoRA/loralib/layers.py lines 151-155, why does the forward implementation of the Linear layer first go through the original layer and then add the LoRA term to the result? This is different from the implementation of the Conv layer, where the LoRA update is added to the weight before the forward pass.
Did I make a mistake, or is there no difference between the two implementations for the Linear layer?

Thank you!
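For what it's worth, and assuming the usual scaling factor $\alpha/r$ and no dropout, the two formulations are algebraically the same, since the low-rank term distributes over the input:

$(W_0 + \frac{\alpha}{r} B A)\,x = W_0 x + \frac{\alpha}{r} B A x$

Adding $BA$ to the weight before the matmul or adding $BAx$ to the output afterwards therefore gives identical results; the output-side form simply avoids materializing the dense update.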

Is the data correct in LoRA/examples/NLG/data?

I tried to run the code using

python -m torch.distributed.launch --nproc_per_node=1 src/gpt2_ft.py \
    --train_data ./data/e2e/train.jsonl \
    --valid_data ./data/e2e/valid.jsonl \
    --train_batch_size 8 \
    --grad_acc 1 \
    --valid_batch_size 4 \
    --seq_len 512 \
    --model_card gpt2.md \
    --init_checkpoint ./pretrained_checkpoints/gpt2-medium-pytorch_model.bin \
    --platform local \
    --clip 0.0 \
    --lr 0.0002 \
    --weight_decay 0.01 \
    --correct_bias \
    --adam_beta2 0.999 \
    --scheduler linear \
    --warmup_step 500 \
    --max_epoch 5 \
    --save_interval 1000 \
    --lora_dim 4 \
    --lora_alpha 32 \
    --lora_dropout 0.1 \
    --label_smooth 0.1 \
    --work_dir ./trained_models/GPT2_M/e2e \
    --random_seed 110

But it didn't work, and I noticed that the files in the data folder are e2e/train.txt, e2e/test.txt, ... Did you make any changes to these files?

Alternate implementation of Lora leveraging tensor subclasses and reparametrization.

I thought this might be interesting as an alternate implementation of LoRA leveraging tensor subclasses and reparametrization.

https://gist.github.com/Chillee/a8d2070b1b7b3f97d8c87bac3c366f8e

The main idea here is that we can leverage parametrization in order to transform our parameter in a manner that's composable with existing modules (i.e. we don't need to use a totally new layer).

Then, since LoRA also requires us to leverage special matrix structure for efficiency, we return a tensor subclass that has special handling when we encounter F.linear(x: Tensor, weight: LoraTensor, bias: Tensor). This tensor subclass composes with things like autograd and such, so we can still differentiate through our tensor.
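For readers unfamiliar with the mechanism, a bare-bones sketch of the parametrization half of this idea is shown below (illustrative only, not the gist's code, and without the tensor-subclass trick that recovers the low-rank efficiency):

import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class LoRAParametrization(nn.Module):
    def __init__(self, out_features, in_features, r=8, alpha=16):
        super().__init__()
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, W):
        # The module sees W + (alpha/r) * B @ A as its weight; W itself stays frozen.
        return W + self.scaling * (self.lora_B @ self.lora_A)

layer = nn.Linear(768, 768)
layer.weight.requires_grad_(False)  # freeze the pretrained weight
parametrize.register_parametrization(layer, "weight", LoRAParametrization(768, 768, r=8))

Note that this naive version rebuilds the dense weight on every forward pass, which is exactly the inefficiency the tensor subclass in the gist is meant to avoid.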

Questions about the experiment details

Hi, thanks for sharing the source code.

  1. In Table 2, are these reported numbers the results of the test split or the validation split?
  2. In Table 2, for RoBERTa-base (LoRA) on the RTE task, the reported result is 86.6. Is this a typo? It is much higher than the full fine-tuning result (a delta of 7.9).

Lora name

Hi,

Just wanted to point out that the name LoRA is already in use elsewhere and could cause confusion.

Is it possible to fine-tune a model to extend its token limit?

For now, most open LLMs have a context length of 2048 tokens. I need to expand it to 4096 or more.
Is it possible to extend the context length by fine-tuning with LoRA on longer texts that exceed 2048 tokens, or does it require re-training from scratch?
I searched for this but was unable to find the information.
LoRA freezes the original model weights and adds trainable matrices, so I feel it might be difficult.

Did the number of parameters take into account the parameters in the tunable classification head?

Thanks for releasing the code! I noticed that the reported number of parameters for the LoRA modules is 0.3M for roberta-base. After some experiments, I found that there are 0.5M tunable parameters in the sequence classification head (it's the same for all baselines, so I am not arguing about fairness). Am I correct about this setting? Were the numbers in the paper obtained while also tuning a classification head for the classification tasks?

gpt2_ft.py: error: unrecognized arguments: --local-rank=0

(gh_LoRA) ub2004@ub2004-B85M-A0:~/llm_dev/LoRA/examples/NLG$ python3 -m torch.distributed.launch --nproc_per_node=1 src/gpt2_ft.py --train_data ./data/e2e/train.jsonl --valid_data ./data/e2e/valid.jsonl --train_batch_size 8 --grad_acc 1 --valid_batch_size 4 --seq_len 512 --model_card gpt2.md --init_checkpoint ./pretrained_checkpoints/gpt2-medium-pytorch_model.bin --platform local --clip 0.0 --lr 0.0002 --weight_decay 0.01 --correct_bias --adam_beta2 0.999 --scheduler linear --warmup_step 500 --max_epoch 5 --save_interval 1000 --lora_dim 4 --lora_alpha 32 --lora_dropout 0.1 --label_smooth 0.1 --work_dir ./trained_models/GPT2_M/e2e --random_seed 110
/home/ub2004/.local/lib/python3.8/site-packages/torch/distributed/launch.py:181: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use-env is set by default in torchrun.
If your script expects --local-rank argument to be set, please
change it to read from os.environ['LOCAL_RANK'] instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions

warnings.warn(
usage: gpt2_ft.py [-h] [--platform PLATFORM] [--local_rank LOCAL_RANK] [--rank RANK] [--device DEVICE] [--world_size WORLD_SIZE] [--random_seed RANDOM_SEED] [--lr LR] [--weight_decay WEIGHT_DECAY]
[--correct_bias] [--adam_epislon ADAM_EPISLON] [--no_decay_bias] [--adam_beta1 ADAM_BETA1] [--adam_beta2 ADAM_BETA2] [--scheduler {cosine,inv_sqrt,dev_perf,constant,linear,cycle}]
[--max_step MAX_STEP] [--max_epoch MAX_EPOCH] [--warmup_step WARMUP_STEP] [--i_steps I_STEPS] [--i_lrs I_LRS] --train_data TRAIN_DATA --valid_data VALID_DATA
[--train_batch_size TRAIN_BATCH_SIZE] [--valid_batch_size VALID_BATCH_SIZE] [--grad_acc GRAD_ACC] [--clip CLIP] [--seq_len SEQ_LEN] [--model_card {gpt2.sm,gpt2.md,gpt2.lg}]
[--init_checkpoint INIT_CHECKPOINT] [--fp16] [--log_interval LOG_INTERVAL] [--eval_interval EVAL_INTERVAL] [--save_interval SAVE_INTERVAL] [--work_dir WORK_DIR] [--lora_dim LORA_DIM]
[--lora_alpha LORA_ALPHA] [--obj {jlm,clm}] [--lora_dropout LORA_DROPOUT] [--label_smooth LABEL_SMOOTH] [--roll_interval ROLL_INTERVAL] [--roll_lr ROLL_LR] [--roll_step ROLL_STEP]
[--eval_epoch EVAL_EPOCH]
gpt2_ft.py: error: unrecognized arguments: --local-rank=0
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 2) local_rank: 0 (pid: 50826) of binary: /usr/bin/python3
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/ub2004/.local/lib/python3.8/site-packages/torch/distributed/launch.py", line 196, in
main()
File "/home/ub2004/.local/lib/python3.8/site-packages/torch/distributed/launch.py", line 192, in main
launch(args)
File "/home/ub2004/.local/lib/python3.8/site-packages/torch/distributed/launch.py", line 177, in launch
run(args)
File "/home/ub2004/.local/lib/python3.8/site-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/home/ub2004/.local/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/ub2004/.local/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

src/gpt2_ft.py FAILED

Failures:
<NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
time : 2023-05-25_21:34:15
host : ub2004-B85M-A0
rank : 0 (local_rank: 0)
exitcode : 2 (pid: 50826)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

(gh_LoRA) ub2004@ub2004-B85M-A0:~/llm_dev/LoRA/examples/NLG$

Why Linear and MergedLinear?

Hi,

Thank you for this really nice paper.

This is not an issue but a general question, why is there a Linear and MergedLinear class?

Thank you,

Maxime.

Inference benefits

Hi, I'm curious if LoRA can provide any benefits in reducing model size or latency during inference? Could this or related techniques help make deploying LLMs to edge devices more feasible?

My current understanding is that this mostly benefits training rather than inference, because the model is reconstructed to full size after the low-rank adaptation.

It also seems to provide benefit when it's desired to have multiple models for different tasks. However if the desire is to have a single LLM deployed for inference more efficiently on an edge device, are there any benefits to be had in this case?

I appreciate any clarification you can provide, thanks!

Current implementation can't be converted to ONNX

Please avoid using .T

RuntimeError: Exporting the operator numpy_T to ONNX opset version 12 is not supported. Please open a bug to request ONNX export support for the missing operator.

Thanks :)
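One possible workaround, assuming the offending op is the 2-D transpose inside the LoRA layers: for a 2-D tensor, .T is equivalent to .transpose(0, 1), which the ONNX exporter handles as a regular Transpose op.

import torch

w = torch.randn(16, 768)
# Identical result, but transpose(0, 1) avoids the unsupported numpy_T operator
assert torch.equal(w.T, w.transpose(0, 1))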

get state dict OOM

I am training LLaMA-13B on 8x RTX 3090 GPUs with LoRA. The model can run forward and backward passes, but when I get the model's state dict, the GPU runs out of memory.

What does `lora_moe` mean?

Good job! I really like LoRA. After a short glimpse at the code, I found some configuration related to lora_moe in model.py, but I did not see any arguments related to lora_moe in gpt2_ft.py. Can you give more details about lora_moe? Is it designed for models trained with MoE, or is it just a deprecated feature of LoRA?

Some questions regarding the label shift in model training and the evaluation hyperparameters for WebNLG

Hi,

I really enjoy this work! While studying the paper and the code, I have a question regarding the implementation of GPT2LMModel's forward function:

loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), lm_labels.view(-1)).view(_batch, _len)

I notice the labels and logits are not shifted the way other GPT-2 implementations do:

shift_logits = lm_logits[..., :-1, :].contiguous(); shift_labels = labels[..., 1:].contiguous()

May I ask whether the shift is necessary in your code, or in which part you have implemented the shift?

Besides, I also fail to obtain the expected result of LoRA on WebNLG (Table 14 in the paper, LoRA 0.35M) with the checkpoint provided in this repo. The script and the hyper-parameters I use are:

python3 -m torch.distributed.launch --nproc_per_node=1 src/gpt2_beam.py \
    --data ./data/webnlg_challenge_2017/test.jsonl \
    --batch_size 1 \
    --seq_len 512 \
    --eval_len 64 \
    --model_card gpt2.md \
    --init_checkpoint ./trained_models/GPT2_M/webnlg/gpt2_md_lora_webnlg.pt \
    --platform local \
    --lora_dim 4 \
    --lora_alpha 32 \
    --beam 10 \
    --length_penalty 0.8 \
    --no_repeat_ngram_size 4 \
    --repetition_penalty 1.0 \
    --eos_token_id 628 \
    --work_dir ./trained_models/GPT2_M/webnlg \
    --output_file predict.lora.md.jsonl

python3 src/gpt2_decode.py \
    --vocab ./vocab \
    --sample_file ./trained_models/GPT2_M/webnlg/predict.lora.md.jsonl \
    --input_file ./data/webnlg_challenge_2017/test_formatted.jsonl \
    --ref_type webnlg \
    --ref_num 6 \
    --output_ref_file eval/GenerationEval/data/references_webnlg \
    --output_pred_file eval/GenerationEval/data/hypothesis_webnlg \
    --tokenize --lower

Do the hyper-parameters I use seem right? The final metrics I got are:

BLEU Seen: 59.66
BLEU Unseen: 45.47
BLEU All: 53.27
METEOR Seen: 0.43
METEOR Unseen: 0.38
METEOR All: 0.41
TER Seen: 0.40
TER Unseen: 0.52
TER All: 0.45

(I modified gpt2_beam.py a little (see below) to first load the parameters from "./pretrained_checkpoints/gpt2-medium-pytorch_model.bin" and then from "gpt2_md_lora_webnlg.pt", the provided checkpoint. Is this modification sensible, or how would you recommend loading the model?

original:

lm_net = GPT2LMModel(config)

new:
lm_net = GPT2LMModel(config)

cp = torch.load("./pretrained_checkpoints/gpt2-medium-pytorch_model.bin", map_location=torch.device('cpu'))
lm_net.load_weight(cp)

if args.init_checkpoint is not None:
    print('loading model pretrained weight.')
    cp = torch.load(args.init_checkpoint, map_location=torch.device('cpu'))
    lm_net.load_weight(cp)
lm_net = lm_net.cuda()

)

Questions about Frobenius norm (Table 7)

Hi~

Thanks for your excellent work.

I have a question about Table 7, where you compute Frobenius norms. In my view, setting the rank to 4 or 64 only affects $\Delta W_q$ and does not affect $W_q$. The values at (a) and (b) in that table depend only on $W_q$, since $U$, $V$, and $W_q$ all come from $W_q$. Therefore, the values at (a) and (b) should be the same, and I could not figure out why they are different.


Is there anything I understood wrong?

How does MergedLinear work?

I understand why we need MergedLinear, but is there a simple example of how the forward pass works for MergedLinear? Specifically this line: https://github.com/microsoft/LoRA/blob/main/loralib/layers.py#L248. I'm struggling to understand what the 1-D conv is doing here.

I would also appreciate a mathematical explanation. For the Linear case, I understand the simple matrix multiplication of deltaW * x = B * A * x. But for MergedLinear, what would be the equation for deltaW?
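One way to read it (my understanding, not an authoritative description of loralib's internals): MergedLinear keeps one $(A_i, B_i)$ pair per enabled slice of the fused projection, and the grouped 1-D convolution is just a batched way of computing $B_i (A_i x)$ for every enabled slice at once, with zero_pad scattering the results into the right rows of the fused output. For qkv_proj = MergedLinear(d, 3*d, r=r, enable_lora=[True, False, True]), the effective update would be the vertical stack

$\Delta W = \begin{bmatrix} B_q A_q \\ 0 \\ B_v A_v \end{bmatrix}, \qquad A_i \in \mathbb{R}^{r \times d}, \; B_i \in \mathbb{R}^{d \times r},$

so each enabled slice gets its own rank-$r$ update and the key slice is left untouched.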

Confused by README wrt supporting HuggingFace models

Thanks for the nice repo! Currently, the readme states:

This repo contains the source code of the Python package loralib and several examples of how to integrate it with PyTorch models, such as those in HuggingFace.

However, from the examples, it seems that in order to use loralib with a Hugging Face model, we need to actually change the code of each model, replacing each nn.Linear with its LoRA equivalent. If this is the case, I think it's a bit confusing to say the examples show integration with Hugging Face, because as far as I can tell, the examples use a re-implementation of GPT-2. I was hoping there might be some mechanism to do this automatically, e.g.:

import transformers, loralib

model = transformers.AutoModel.from_pretrained("gpt2")
lora_model = loralib.wrap(model)  # wrap all nn.Linear modules
lora_params = loralib.get_lora_only_params(lora_model)

Is this possible? Thanks a lot!

Conv2D and groups > 1

Conv2d seems broken with groups > 1. For example, with Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256, r=8) and an input of shape torch.Size([32, 256, 64, 64]), I get the following error:

  File "/home/miniconda3/envs/pytorch2/lib/python3.10/site-packages/loralib/layers.py", line 319, in forward
    self.weight + (self.lora_B @ self.lora_A).view(self.weight.shape) * self.scaling,
RuntimeError: shape '[256, 1, 3, 3]' is invalid for input of size 589824

I believe the first dimension should be calculated as out_channels//self.groups*kernel_size here: https://github.com/microsoft/LoRA/blob/main/loralib/layers.py#L280

Here's a full example:

import torch
import loralib as lora

conv = lora.Conv2d(256, 256, 3, groups = 1, r=8)
grouped_conv = lora.Conv2d(256, 256, 3, groups = 2, r=8)
input_tensor = torch.randn((8, 256, 32, 32))

conv(input_tensor)
grouped_conv(input_tensor)

Possible wrong implementation in loralib layers

Hi,

In loralib's layer modules,

def eval(self):

It seems like the eval() function, which merges W + BA into the weight, is never called.

This is because when the model is switched to evaluation mode by calling model.eval(), PyTorch calls module.train(mode=False) on every submodule, not module.eval():
https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module

I think this may be a bug, and the weights are never merged in evaluation mode.
Would it be possible to move the logic of the current eval() function into train(mode=False)?
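A minimal sketch of the fix proposed above (an assumption about how it could look, not the actual loralib code): perform the merge/unmerge in train(mode), which model.eval() reaches via train(False), instead of in a custom eval():

import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinearSketch(nn.Linear):
    def __init__(self, in_features, out_features, r=8, alpha=16, **kwargs):
        super().__init__(in_features, out_features, **kwargs)
        self.lora_A = nn.Parameter(torch.zeros(r, in_features))
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r
        self.merged = False
        self.weight.requires_grad = False  # freeze the pretrained weight

    def train(self, mode: bool = True):
        # model.eval() calls train(False) on every submodule, so merging here fires reliably.
        super().train(mode)
        if not mode and not self.merged:
            self.weight.data += (self.lora_B @ self.lora_A) * self.scaling
            self.merged = True
        elif mode and self.merged:
            self.weight.data -= (self.lora_B @ self.lora_A) * self.scaling
            self.merged = False
        return self

    def forward(self, x):
        result = F.linear(x, self.weight, self.bias)
        if not self.merged:
            result = result + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
        return result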

Fine-tuning 176B BLOOM with LoRA

The paper says that it only needs 350GB of VRAM to train the 175B GPT-3 with rank = 4. Can you elaborate on how this is done? For example, do you use Megatron-DeepSpeed?

In my experiments with bloom-3b, fine-tuning all parameters needs 29GB. Using LoRA with different settings, the number of trainable parameters ranges from 10M down to 0.8M, but all configurations still need around 20GB of VRAM. I find this a little weird.

MergedLinear bug?

Hi, I am running into an issue:
after_B = F.conv1d(
RuntimeError: Expected 2D (unbatched) or 3D (batched) input to conv1d, but got input of size: [50, 14, 16, 14]

result = F.linear(x, T(self.weight), bias=self.bias)
if self.r > 0:
    after_A = F.linear(self.lora_dropout(x), self.lora_A)
    after_B = F.conv1d(
        after_A.transpose(-2, -1),
        self.lora_B.unsqueeze(-1),
        groups=sum(self.enable_lora)
    ).transpose(-2, -1)
    result += self.zero_pad(after_B) * self.scaling
x.shape is [50, 14, 14, 768]
lora_A.shape is [16, 768]
after_A.shape is ([50, 14, 14, 16])

How can I fix this?

Can't reproduce the results for GLUE CoLA

My steps:

git clone https://github.com/microsoft/LoRA.git
cd LoRA
pip install -e .
cd examples/NLU
pip install -e .

Change export num_gpus=8 to export num_gpus=1 in roberta_large_cola.sh

Then CUDA_VISIBLE_DEVICES=0 bash roberta_large_cola.sh

Running on a single A100

Using:

  • datasets 2.6.1
  • python 3.9.13
  • PyTorch 1.13.0+cu117

During training, eval_matthews_correlation is stuck at 0 for all epochs. I actually had the same issue with the current transformers version; decreasing the learning rate and removing warmup helped regain OK-ish numbers during training, but nothing as good as 0.68.

Do you have an idea of what I could be doing wrong?

Update: using

export num_gpus=1
export CUBLAS_WORKSPACE_CONFIG=":16:8" # https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility
export PYTHONHASHSEED=0
export output_dir="./roberta_cola_custom_sh"
python -m torch.distributed.launch --nproc_per_node=$num_gpus \
examples/text-classification/run_glue.py \
--model_name_or_path roberta-large \
--task_name cola \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 8 \  # original: 4
--learning_rate 2e-5 \  # original: 3e-4
--num_train_epochs 20 \
--output_dir $output_dir/model \
--logging_steps 10 \
--logging_dir $output_dir/log \
--evaluation_strategy epoch \
--save_strategy epoch \
--warmup_ratio 0.0 \  # original: 0.06
--apply_lora \
--lora_r 8 \
--lora_alpha 16 \
--seed 0 \
--weight_decay 0.0  # original: 0.1

trains just fine; eval_matthews_correlation is no longer 0 during training.

Bug after the latest commit #63

Phenomenon

layers.py, line 269,

  File "/storage_fast/jzzhang/loralib/layers.py", line 308, in __init__
    super(Conv2d, self).__init__(nn.Conv2d, *args, **kwargs)
  File "/storage_fast/jzzhang/loralib/layers.py", line 269, in __init__
    self.conv.weight.new_zeros((out_channels//self.groups*kernel_size, r*kernel_size))
  File "/storage/jzzhang/miniconda3/envs/lora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 947, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Conv2d' object has no attribute 'groups'

Analysis

In the latest commit, the change at line 257 means lora.Conv2d no longer inherits from nn.Conv2d but from nn.Module, which does not have an attribute called groups.

Possible Solution

change line 269 to
self.conv.weight.new_zeros((out_channels//self.conv.groups*kernel_size, r*kernel_size))
