This repository is an implementation of the paper LoRA: Low-Rank Adaptation of Large Language Models by E. J. Hu et al., 2021.
LoRA is a parameter-efficient fine-tuning technique for large language models. It reduces the computational cost and memory requirements of fine-tuning by freezing the pretrained weights and injecting trainable low-rank decomposition matrices into selected layers, so only a small fraction of the model's parameters are updated.
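As a rough illustration of the idea (a minimal sketch, not the exact modules used in this repository; the class and parameter names are hypothetical), a LoRA layer can be written in PyTorch like this:

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha / r) * B A x."""

    def __init__(self, linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.linear = linear
        self.linear.weight.requires_grad_(False)  # freeze the pretrained weights
        if self.linear.bias is not None:
            self.linear.bias.requires_grad_(False)
        # A: (rank, in_features), B: (out_features, rank). B starts at zero,
        # so the adapted layer initially behaves exactly like the pretrained one.
        self.lora_a = nn.Parameter(torch.randn(rank, linear.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(linear.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)
```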
To get started with this repository, follow the steps below:
- Clone the repository:
>> git clone https://github.com/nikhil-chigali/Low-Rank-Adaptation-of-LLMs.git
- Navigate to the project directory:
>> cd Low-Rank-Adaptation-of-LLMs
- Install Poetry:
>> pip install poetry
- Install the required dependencies:
>> poetry install --no-root
- Run the training script:
>> python finetune_roberta_with_lora.py
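Note that since the dependencies are installed into Poetry's virtual environment, the script may need to be invoked through Poetry instead:
>> poetry run python finetune_roberta_with_lora.py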
Here are some examples of how to use the LoRA adaptation technique: [To be added]
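Until the repository's own examples are added, here is a hypothetical sketch of applying a LoRA wrapper (reusing the LoRALinear class above) to the attention projections of a RoBERTa model from Hugging Face transformers, mirroring the query/value setup used in the original paper; module paths assume the standard roberta-base architecture:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Freeze the whole model, then wrap the query/value projections with LoRA.
for param in model.parameters():
    param.requires_grad_(False)

for layer in model.roberta.encoder.layer:
    attn = layer.attention.self
    attn.query = LoRALinear(attn.query, rank=8, alpha=16.0)
    attn.value = LoRALinear(attn.value, rank=8, alpha=16.0)

# The classification head is typically left trainable as well.
for param in model.classifier.parameters():
    param.requires_grad_(True)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable}")
```

Only the low-rank matrices and the classifier head receive gradients, which is what makes LoRA fine-tuning cheap in both compute and memory.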
I would like to acknowledge the use of the following resources as references:
- Code LoRA from Scratch by Sebastian Raschka (Lightning AI Studio)
- LoRA: Low-Rank Adaptation of Large Language Models by E. J. Hu et al., 2021 (paper)