
llama2.c's Introduction

llama2.c


Have you ever wanted to inference a baby Llama 2 model in pure C? No? Well, now you can!

Train the Llama 2 LLM architecture in PyTorch then inference it with one simple 700-line C file (run.c). You might think that you need many billion parameter LLMs to do anything useful, but in fact very small LLMs can have surprisingly strong performance if you make the domain narrow enough (ref: TinyStories paper). This repo is a "fullstack" train + inference solution for Llama 2 LLM, with focus on minimalism and simplicity.

As the architecture is identical, you can also load and inference Meta's Llama 2 models. However, the current code only inferences models in fp32, so you will most likely not be able to productively load models larger than 7B. Work on model quantization is currently ongoing.

Please note that this repo started recently as a fun weekend project: I took my earlier nanoGPT, tuned it to implement the Llama-2 architecture instead of GPT-2, and the meat of it was writing the C inference engine in run.c. So the project is young and moving quickly. Hat tip to the awesome llama.cpp for inspiring this project. Compared to llama.cpp, I wanted something super simple, minimal, and educational so I chose to hard-code the Llama 2 architecture and just roll one inference file of pure C with no dependencies.

feel the magic


First, navigate to the folder where you keep your projects and clone this repository to this folder:

git clone https://github.com/karpathy/llama2.c.git

Then, open the repository folder:

cd llama2.c

Now, let's just run a baby Llama 2 model in C. You need a model checkpoint. Download this 15M parameter model I trained on the TinyStories dataset (~60MB download):

wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin

Compile and run the C code:

make run
./run stories15M.bin

You'll see the text stream a sample. On my M1 MacBook Air this runs at ~110 tokens/s. See performance or the Makefile for compile flags that can significantly speed this up. We can also try the slightly bigger 42M parameter model:

wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories42M.bin
./run stories42M.bin

This still runs at interactive rates and samples more coherent and diverse stories:

Once upon a time, there was a little girl named Lily. She loved playing with her toys on top of her bed. One day, she decided to have a tea party with her stuffed animals. She poured some tea into a tiny teapot and put it on top of the teapot. Suddenly, her little brother Max came into the room and wanted to join the tea party too. Lily didn't want to share her tea and she told Max to go away. Max started to cry and Lily felt bad. She decided to yield her tea party to Max and they both shared the teapot. But then, something unexpected happened. The teapot started to shake and wiggle. Lily and Max were scared and didn't know what to do. Suddenly, the teapot started to fly towards the ceiling and landed on the top of the bed. Lily and Max were amazed and they hugged each other. They realized that sharing was much more fun than being selfish. From that day on, they always shared their tea parties and toys.

You can also prompt the model with a prefix or a number of additional command line arguments, e.g. to sample at temperature 0.8 for 256 steps and with a prompt:

./run stories42M.bin -t 0.8 -n 256 -i "One day, Lily met a Shoggoth"

One day, Lily met a Shoggoth. He was very shy, but was also very generous. Lily said “Hello Shoggy! Can I be your friend?” Shoggy was happy to have a friend and said “Yes, let’s explore the universe together!” So they set off on a journey to explore the universe. As they travelled, Shoggy was happy to explain to Lily about all the wonderful things in the universe. At the end of the day, Lily and Shoggy had gathered lots of wonderful things from the universe, and they both felt very proud. They promised to explore the universe as one big pair and to never stop being generous to each other.

There is also an even better 110M param model available, see models.

Quick note on sampling, the recommendation for ~best results is to sample with -t 1.0 -p 0.9, i.e. temperature 1.0 (default) but also top-p sampling at 0.9 (default). Intuitively, top-p ensures that tokens with tiny probabilities do not get sampled, so we can't get "unlucky" during sampling, and we are less likely to go "off the rails" afterwards. More generally, to control the diversity of samples use either the temperature (i.e. vary -t between 0 and 1 and keep top-p off with -p 0) or the top-p value (i.e. vary -p between 0 and 1 and keep -t 1), but not both. Nice explainers on LLM sampling strategies include this, this or this.
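To make the -t / -p discussion concrete, here is a minimal C sketch of temperature scaling followed by top-p (nucleus) sampling. It is illustrative only (a simple O(n^2) sort and plain rand()), not the exact sampler in run.c:

/* Minimal sketch of temperature + top-p (nucleus) sampling.
 * Illustrative, not the run.c implementation. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* softmax with temperature: turn logits into probabilities in place */
void softmax_temp(float *logits, int n, float temperature) {
    float maxv = logits[0];
    for (int i = 1; i < n; i++) if (logits[i] > maxv) maxv = logits[i];
    float sum = 0.0f;
    for (int i = 0; i < n; i++) {
        logits[i] = expf((logits[i] - maxv) / temperature);
        sum += logits[i];
    }
    for (int i = 0; i < n; i++) logits[i] /= sum;
}

/* top-p: sample only from the smallest set of tokens whose cumulative
 * probability exceeds p, renormalized */
int sample_topp(const float *probs, int n, float p) {
    int *idx = malloc(n * sizeof(int));
    for (int i = 0; i < n; i++) idx[i] = i;
    /* sort indices by descending probability (selection sort for brevity) */
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (probs[idx[j]] > probs[idx[i]]) { int t = idx[i]; idx[i] = idx[j]; idx[j] = t; }
    /* find the cutoff where cumulative probability first exceeds p */
    float cum = 0.0f;
    int last = n - 1;
    for (int i = 0; i < n; i++) {
        cum += probs[idx[i]];
        if (cum > p) { last = i; break; }
    }
    /* draw from the truncated distribution, renormalized by cum */
    float r = ((float)rand() / (float)RAND_MAX) * cum;
    float acc = 0.0f;
    int choice = idx[last];
    for (int i = 0; i <= last; i++) {
        acc += probs[idx[i]];
        if (r < acc) { choice = idx[i]; break; }
    }
    free(idx);
    return choice;
}

int main(void) {
    float logits[5] = {2.0f, 1.0f, 0.5f, -1.0f, -3.0f};
    softmax_temp(logits, 5, 1.0f);           /* -t 1.0 */
    int tok = sample_topp(logits, 5, 0.9f);  /* -p 0.9 */
    printf("sampled token id: %d\n", tok);
    return 0;
}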

Meta's Llama 2 models

As the neural net architecture is identical, we can also inference the Llama 2 models released by Meta. Sadly there is a bit of friction here due to licensing (I can't directly upload the checkpoints, I think). So Step 1, get the Llama 2 checkpoints by following the Meta instructions. Once we have those checkpoints, we have to convert them into the llama2.c format. For this we need to install the python dependencies (pip install -r requirements.txt) and then use the export.py file, e.g. for 7B model:

python export.py llama2_7b.bin --meta-llama path/to/llama/model/7B

The export will take ~10 minutes or so and generate a 26GB file (the weights of the 7B model in float32) called llama2_7b.bin in the current directory. I would not attempt to run anything above 7B right now for two reasons: first, 13B+ currently doesn't work because of an integer overflow in pointer arithmetic, which is yet to be fixed, and second, even if it were fixed, this repo does float32 inference right now, so it would be fairly unusably slow. Once the export is done, we can run it:

./run llama2_7b.bin

This ran at about 4 tokens/s compiled with OpenMP on 96 threads on my CPU Linux box in the cloud. (On my MacBook Air M1, currently it's closer to 30 seconds per token if you just build with make runfast.) Example output:

The purpose of this document is to highlight the state-of-the-art of CoO generation technologies, both recent developments and those in commercial use. The focus is on the technologies with the highest merit to become the dominating processes of the future and therefore to be technologies of interest to S&T ... R&D. As such, CoO generation technologies developed in Russia, Japan and Europe are described in some depth. The document starts with an introduction to cobalt oxides as complex products and a short view on cobalt as an essential material. The document continues with the discussion of the available CoO generation processes with respect to energy and capital consumption as well as to environmental damage.

base models... ¯\_(ツ)_/¯. Since we can inference the base model, it should be possible to also inference the chat model quite easily, and have a conversation with it. And if we can find a way to run 7B more efficiently, we can start adding LoRA to our training script, and going wild with finetunes all within the repo!

You can also chat with the Llama Chat models. Export the chat model exactly as above:

python export.py llama2_7b_chat.bin --meta-llama /path/to/7B-chat

Then chat with it by specifying the chat mode using the -m flag, e.g.:

./run llama2_7b_chat.bin -m chat

You can also try Meta's Code Llama models even if support for them is incomplete. In particular, some hyperparameters changed (e.g. the constant in RoPE layer), so the inference is not exactly correct and a bit buggy right now. Looking into fixes. Make sure to build the tokenizer for the plain and instruct variants and pass it when doing inference.

python export.py codellama2_7b.bin --meta-llama /path/to/CodeLlama-7b
python tokenizer.py --tokenizer-model=/path/to/CodeLlama-7b/tokenizer.model
./run codellama2_7b.bin -z /path/to/CodeLlama-7b/tokenizer.bin

Chat with Code Llama Instruct:

python export.py codellama2_7b_instruct.bin --meta-llama /path/to/CodeLlama-7b-Instruct
python tokenizer.py --tokenizer-model=/path/to/CodeLlama-7b-Instruct/tokenizer.model
./run codellama2_7b_instruct.bin -m chat -z /path/to/CodeLlama-7b-Instruct/tokenizer.bin

int8 quantization

The (default) script run.c, above, uses a float32 forward pass, where the entire calculation of the forward pass is kept in fp32. This is very easy to understand as far as reference code goes, but it has the following downsides: the model checkpoint files are very large (4 bytes per weight), and the forward pass is relatively slow. The (very) common inference optimization employed in practice is to quantize the model parameters to lower precision, giving up a little bit of correctness in return for smaller checkpoint sizes and faster forward passes (as most of the inference uses integer arithmetic). Empirically, LLMs can tolerate precisions as low as 4-bit (or even lower), but we use int8 here because it is a "safe" setting that gets us the benefits without sacrificing too much of the model accuracy. Only the weights that participate in matmuls are quantized. All the other parameters (e.g. especially the scale and bias in RMSNorm) are kept in float32, because these layers are very sensitive.

Now, if all you're after is a reduction in checkpoint size, you could quantize the weights, save the checkpoint, then dequantize them in run.c and do float32 inference as normal, and call it a day. This is totally fine. But here, we go one step further (as is standard practice) and additionally quantize the activations in the forward pass. This requires us to dynamically quantize and dequantize between float32 and int8 at runtime, which adds overhead. But the benefit is that now the majority of the calculations (the matmuls especially!) use pure integer arithmetic, where both weights and activations enter as int8. This is where the speedups fundamentally come from.

The version we use is the "Q8_0" quantization (llama.cpp terminology), where the 0 means that the weight quantization is symmetric around 0, quantizing to the range [-127, 127].
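As a rough illustration of what Q8_0 means in practice, here is a minimal C sketch that quantizes one group of weights symmetrically to int8 with a single per-group scale, then dequantizes it back. The group size and function names are assumptions for illustration; the real implementation lives in runq.c:

/* Sketch of symmetric int8 ("Q8_0") quantization for one group of weights.
 * Group size and names are illustrative; see runq.c for the real thing. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

#define GROUP_SIZE 8  /* illustrative; real group sizes are larger */

/* quantize: find max |w| in the group, map it to 127, store int8 values plus one scale */
void quantize_group(const float *w, int8_t *q, float *scale) {
    float wmax = 0.0f;
    for (int i = 0; i < GROUP_SIZE; i++) {
        float a = fabsf(w[i]);
        if (a > wmax) wmax = a;
    }
    *scale = wmax / 127.0f;
    if (*scale == 0.0f) *scale = 1.0f;  /* guard for an all-zero group */
    for (int i = 0; i < GROUP_SIZE; i++) {
        q[i] = (int8_t)roundf(w[i] / *scale);
    }
}

/* dequantize: multiply back by the per-group scale */
void dequantize_group(const int8_t *q, float scale, float *w) {
    for (int i = 0; i < GROUP_SIZE; i++) w[i] = q[i] * scale;
}

int main(void) {
    float w[GROUP_SIZE] = {0.1f, -0.5f, 0.25f, 0.9f, -0.9f, 0.0f, 0.33f, -0.12f};
    int8_t q[GROUP_SIZE];
    float scale, back[GROUP_SIZE];
    quantize_group(w, q, &scale);
    dequantize_group(q, scale, back);
    for (int i = 0; i < GROUP_SIZE; i++)
        printf("w=% .3f  q=%4d  back=% .3f\n", w[i], q[i], back[i]);
    return 0;
}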

The quantized forward pass is implemented in runq.c. To use it, we have to export the model in the quantized format. For example, the float32 version of Llama 2 7B was exported as:

python export.py llama2_7b.bin --meta-llama path/to/llama/model/7B

This creates a 26GB file, because each one of 7B parameters is 4 bytes (fp32). To export it quantized, we instead use version 2 export:

python export.py llama2_7b_q80.bin --version 2 --meta-llama path/to/llama/model/7B

This runs for a few minutes, but now creates only a 6.7GB file. For exporting non-meta checkpoints you would use the --checkpoint arg instead of --meta-llama arg (more docs on this later, below). Now let's inference them. I like to use OMP here because these are big models, so e.g. on my Linux box:

make runomp
OMP_NUM_THREADS=64 ./run llama2_7b.bin -n 40
OMP_NUM_THREADS=64 ./runq llama2_7b_q80.bin -n 40

This runs 40 steps just to get a timing. The float32 version for me runs at 4.6 tok/s, and the int8 version at 14 tok/s. So we achieved a 3X speedup while reducing the checkpoint size by 4X. However, the forward pass is quantized to int8, and therefore silently very slightly lower quality.

huggingface models

We can load any huggingface models that use the Llama 2 architecture. See the script export.py and the --hf flag to export the model .bin file.

models

For the sake of examples of smaller, from-scratch models, I trained a small model series on TinyStories. All of these trained in a few hours on my training setup (4X A100 40GB GPUs). The 110M took around 24 hours. I am hosting them on huggingface hub tinyllamas, both in the original PyTorch .pt, and also in the llama2.c format .bin:

model | dim | n_layers | n_heads | n_kv_heads | max context length | parameters | val loss | download
260K  |  64 |        5 |       8 |          4 |                512 |       260K |    1.297 | stories260K
OG    | 288 |        6 |       6 |          6 |                256 |        15M |    1.072 | stories15M.bin
42M   | 512 |        8 |       8 |          8 |               1024 |        42M |    0.847 | stories42M.bin
110M  | 768 |       12 |      12 |         12 |               1024 |       110M |    0.760 | stories110M.bin

You'll notice that the 110M model is equivalent to GPT-1 in size. Alternatively, it is also the smallest model in the GPT-2 series (GPT-2 small), except the max context length is only 1024 instead of 2048. The only notable changes from the GPT-1/2 architecture are that Llama uses RoPE relative positional embeddings instead of absolute/learned positional embeddings, a slightly fancier SwiGLU non-linearity in the MLP, RMSNorm instead of LayerNorm, bias=False on all Linear layers, and optionally multiquery attention.
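As a small illustration of the RoPE difference, here is a minimal C sketch of how rotary position embeddings rotate consecutive channel pairs of the query and key vectors by a position-dependent angle. Names and layout are illustrative, not the literal run.c code:

/* Minimal sketch of RoPE (rotary position embeddings): each consecutive
 * pair of channels in q and k is rotated by an angle that depends on the
 * token position and the pair's frequency. */
#include <math.h>

void rope_rotate(float *q, float *k, int head_size, int pos) {
    for (int i = 0; i < head_size; i += 2) {
        /* frequency falls off with the channel index */
        float freq = 1.0f / powf(10000.0f, (float)i / (float)head_size);
        float angle = pos * freq;
        float c = cosf(angle), s = sinf(angle);
        float q0 = q[i], q1 = q[i + 1];
        float k0 = k[i], k1 = k[i + 1];
        q[i]     = q0 * c - q1 * s;
        q[i + 1] = q0 * s + q1 * c;
        k[i]     = k0 * c - k1 * s;
        k[i + 1] = k0 * s + k1 * c;
    }
}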

training

Let's see how we can train a baby Llama 2 from scratch using the code in this repo. First let's download and pretokenize some source dataset, e.g. I like TinyStories so this is the only example currently available in this repo. But it should be very easy to add datasets, see the code.

python tinystories.py download
python tinystories.py pretokenize

Then train our model:

python train.py

brief training guide. See the train.py script for more exotic launches and hyperparameter overrides. Here is a brief guide to how to set the parameters. Look at the table at the very end of the Chinchilla paper to get a sense of how the Transformer parameters (dim, n_layers, n_heads) grow or shrink together. Extrapolate/interpolate this pattern to get bigger or smaller transformers. Set the max context length however you wish, depending on the problem: this should be the max number of tokens that matter to predict the next token. E.g. Llama 2 uses 2048.

Next, you want the total batch size per update (printed by the script as "tokens per iteration will be:") to be somewhere around 100K tokens for medium-sized applications. For tiny applications it could be lower; for large training runs (e.g. GPTs/Llamas) it is usually ~0.5M, or even more. You get there by first maxing out batch_size to whatever your system allows (e.g. mine was 16 in a recent run because beyond that my GPU runs out of memory), and then increasing gradient_accumulation_steps as high as necessary to reach a total batch size of ~100K tokens.

Finally, you want to tune your learning_rate (LR). You want this to be as high as your training allows. Very small networks can get away with a large LR (e.g. 1e-3 or even higher); large networks need lower LRs. 3e-4 is a safe choice in most medium-sized applications, but can be too low for small networks, so try to increase it! Finally, max_iters is the length of training. Play with different settings. I mostly only ever tune these parameters and leave most of the others unchanged.

Here is an example of how I trained the 110M model, which I don't think is anywhere near optimal, but looked sensible to me: dim 768, n_layers 12, n_heads 12 (so the size of each head is 768 / 12 = 64 channels), seq len of 1024, batch size 16 (this is the most that fit my A100 40GB GPU), and gradient_accumulation_steps = 8 to get the total token batch size to 16 batch size * 1024 tokens in sequence * 8 grad_accum = 131,072 tokens per update. Good. Learning rate 4e-4 (probably a little too low). max_iters 200K (probably a bit too high). Dropout 0.1, as that usually helps a bit at medium size. That was it. I ran using Distributed Data Parallel (DDP) on 4 GPUs on my cloud machine; training took about a day.
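As a quick sanity check on the tokens-per-update arithmetic above, here is a tiny C snippet that reproduces the 110M example's numbers (with multi-GPU DDP the effective tokens per update additionally scale with the number of GPUs):

/* Quick check of the "tokens per update" arithmetic from the training guide. */
#include <stdio.h>

int main(void) {
    long batch_size = 16;                    /* per-GPU batch size */
    long max_seq_len = 1024;                 /* context length */
    long gradient_accumulation_steps = 8;
    long tokens_per_update = batch_size * max_seq_len * gradient_accumulation_steps;
    printf("tokens per update: %ld\n", tokens_per_update);  /* prints 131072 */
    return 0;
}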

Totally understandable if you want to skip model training; for a simple demo, just download one of the pretrained models (see the models section), e.g.:

wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.bin

Once we have the model.bin file, we can inference in C. Compile the C code first:

make run

You can now run it simply as

./run stories15M.bin

Watch the tokens stream by, fun! We can also run the PyTorch inference script for a comparison. Download one of the models again from huggingface hub and point the sample.py script at it:

wget https://huggingface.co/karpathy/tinyllamas/resolve/main/stories15M.pt -P out15M
python sample.py --checkpoint=out15M/stories15M.pt

Which gives the same results.

custom tokenizers

In everything above, we've assumed the custom Llama 2 tokenizer with 32,000 tokens. However, in many boutique LLMs, using a vocabulary this big might be overkill. If you have a small application in mind, you might be much better off training your own tokenizer. This can make everything nicer - with smaller vocabs your model has fewer parameters (because the token embedding table is a lot smaller), the inference is faster (because there are fewer tokens to predict), and your average sequence length per example could also get smaller (because the compression is a lot more efficient on your data). So let's see how we train a custom tokenizer.

By default, to pretokenize the tinystories dataset we had to run, in order:

python tinystories.py download
python tinystories.py pretokenize

The pretokenize stage here loads the Llama 2 tokenizer (vocab size 32,000) and uses it to convert the downloaded text into integers, and saves that to file. We now change this as follows, to train an example 4096-token tokenizer:

python tinystories.py download
python tinystories.py train_vocab --vocab_size=4096
python tinystories.py pretokenize --vocab_size=4096

The train_vocab stage will call the sentencepiece library to train the tokenizer, storing it in a new file data/tok4096.model. I tried to reproduce as well as I could the settings that (I think) Meta used to train their vocabulary. This uses the Byte Pair Encoding algorithm that starts out with raw utf8 byte sequences of the text data and then iteratively merges the most common consecutive pairs of tokens to form the vocabulary. Inspect the tinystories.py file - the custom tokenizers are stored in a special directory structure indexed by the vocab size.
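As a toy illustration of the core BPE idea (count adjacent pairs, merge the most frequent, repeat until the target vocab size is reached), here is a small C program that performs just the pair-counting step over raw bytes. It is nothing like the full sentencepiece trainer, only a sketch of the principle:

/* Toy sketch of one BPE counting step: find the most frequent adjacent
 * pair of symbols, which a real trainer would then merge into a new
 * vocabulary entry. */
#include <stdio.h>

#define NUM_SYMBOLS 256  /* start from raw byte values */

int main(void) {
    /* toy "dataset": a byte sequence standing in for utf-8 text */
    unsigned char text[] = "hello help hell";
    int n = sizeof(text) - 1;  /* drop the trailing '\0' */

    static int counts[NUM_SYMBOLS][NUM_SYMBOLS];
    for (int i = 0; i < n - 1; i++) {
        counts[text[i]][text[i + 1]]++;
    }

    /* find the most frequent adjacent pair -- the next merge candidate */
    int best_a = 0, best_b = 0, best_count = 0;
    for (int a = 0; a < NUM_SYMBOLS; a++) {
        for (int b = 0; b < NUM_SYMBOLS; b++) {
            if (counts[a][b] > best_count) {
                best_count = counts[a][b];
                best_a = a;
                best_b = b;
            }
        }
    }
    printf("most frequent pair: ('%c', '%c') seen %d times\n",
           best_a, best_b, best_count);
    return 0;
}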

A quick note of interest: a vocab size of 4096 trained specifically on TinyStories creates integer sequences with about the same sequence length per example as the default Llama 2 tokenizer with its 32,000 tokens! This means that our custom, tailored tokenizer is much better adapted to our specific text and can compress it very effectively. So our trained models are smaller and faster.

Now that we have pretokenized the dataset with our custom tokenizer, we can train the model. The training script train.py doesn't care about the exact tokens, it only cares about the vocabulary size so it can correctly initialize the model. So when training your model, make sure to pass in

python train.py --vocab_source=custom --vocab_size=4096

(The defaults are llama2 and 32000 respectively, which indicates the default Llama 2 tokenizer). This trains the model. Finally we are ready to run inference with our run.c script. For that we need two things. Number one, we have to export our tokenizer in the .bin format, do that with:

python tokenizer.py --tokenizer-model=data/tok4096.model

This writes the tokenizer to data/tok4096.bin. Now we can run inference, pointing it to this tokenizer using the -z flag:

./run out/model.bin -z data/tok4096.bin

This should print the samples. If you leave out the -z flag, it will use the default Llama 2 tokenizer, which would generate a good sequence of integers, but they would get translated using a different vocabulary to text, so it would look like gibberish.

performance

There are many ways to potentially speed up this code depending on your system. Have a look at the Makefile, which contains a lot of notes. The make run command currently uses the -O3 optimization by default, i.e.:

gcc -O3 -o run run.c -lm

-O3 includes optimizations that are expensive in terms of compile time and memory usage, including vectorization, loop unrolling, and branch prediction.

To get much better performance, try to compile with make runfast. This turns on the -Ofast flag, which, in addition to -O3, includes optimizations that may break compliance with the C/IEEE specifications. See the GCC docs for more information.

Try -march=native to compile the program to use the architecture of the machine you're compiling on rather than a more generic CPU. This may enable additional optimizations and hardware-specific tuning such as improved vector instructions/width.

The fastest throughput I have seen so far on my MacBook Air (M1) is with make runfast.

You can also experiment with replacing gcc with clang.

If compiling with gcc, try experimenting with -funroll-all-loops, see PR #183

OpenMP. Big improvements can also be achieved by compiling with OpenMP, which "activates" the #pragma omp parallel for inside the matmul and attention, allowing the work in the loops to be split up over multiple processors. You'll need to install the OpenMP library and the clang compiler first (e.g. apt install clang libomp-dev on ubuntu). Then you can compile with make runomp, which does:

clang -Ofast -fopenmp -march=native run.c  -lm  -o run

When you run inference make sure to use OpenMP flags to set the number of threads, e.g.:

OMP_NUM_THREADS=4 ./run out/model.bin

Depending on your system resources you may want to tweak these hyperparameters and use more threads. But more is not always better, usually this is a bit U shaped. In particular, if your CPU has SMT (multithreading), try setting the number of threads to the number of physical cores rather than logical cores. The performance difference can be large due to cache thrashing and communication overhead. The PyTorch documentation CPU specific optimizations has some good information that applies here too.
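For reference, the kind of loop that pragma parallelizes looks roughly like the sketch below: a simplified matmul where the rows of W are independent, so OpenMP can split the outer loop across threads. This is an illustrative sketch, not the literal run.c code; if you compile without -fopenmp the pragma is simply ignored and the loop runs single-threaded.

/* Simplified sketch of the matmul pattern the OpenMP pragma speeds up:
 * each output row is independent, so the outer loop is split across threads. */
void matmul(float *xout, const float *x, const float *w, int n, int d) {
    /* W is (d,n), x is (n,), xout is (d,) */
    #pragma omp parallel for
    for (int i = 0; i < d; i++) {
        float val = 0.0f;
        for (int j = 0; j < n; j++) {
            val += w[i * n + j] * x[j];
        }
        xout[i] = val;
    }
}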

platforms

On Windows, use build_msvc.bat in a Visual Studio Command Prompt to build with MSVC, or use make win64 to build the Windows target with the MinGW compiler toolchain from Linux or Windows. The MSVC build will automatically use OpenMP and the maximum threads appropriate for your CPU unless you set the OMP_NUM_THREADS env var.

On CentOS 7 and Amazon Linux 2018, use the rungnu Makefile target: make rungnu, or make runompgnu to use OpenMP.

On Mac, use clang from brew for the OpenMP build. Install it with brew install llvm and use the installed clang binary to compile with OpenMP: make runomp CC=/opt/homebrew/opt/llvm/bin/clang

tests

You can run tests simply with pytest:

$ pip install pytest
$ pytest

This will currently invoke two tests inside test_all.py, which forward the model in both C and Python for 200 steps and check the output against a known good expected output. The tests currently run in only a few seconds, but will have to download and cache the stories260K models in a temporary test directory (only ~2MB download).

There are also some tests in C, in the file test.c. You can run these with make testcc, or to see more stuff printed:

make testcc VERBOSITY=1

Call for help: please help add more tests.

ack

I trained the llama2.c storyteller models on a 4X A100 40GB box graciously provided by the excellent Lambda Labs, thank you.

discord

Figured it's possible to reuse my existing discord channel (that I use for my zero to hero youtube series), see #llama2c channel on discord, for any quick questions, related discussions, etc.

contributing

A few words on this repo and the kinds of PRs that are likely to be accepted. What is the goal of this repo? Basically I think there will be a lot of interest in training or finetuning custom micro-LLMs (think ~100M - ~1B params, but let's say up to ~10B params) across a large diversity of applications, and deploying them in edge-adjacent environments (think MCUs, phones, web browsers, laptops, etc.). I'd like this repo to be the simplest, smallest, most hackable repo to support this workflow, both training and inference. In particular, this repo is not a complex framework with 1000 knobs controlling inscrutable code across a nested directory structure of hundreds of files. Instead, I expect most applications will wish to create a fork of this repo and hack it to their specific needs and deployment platforms.

People who care about deployment efficiency above all else should look at llama.cpp. This repo still cares about efficiency, but not at the cost of simplicity, readability or portability. Basically, I expect that a lot of people come to this repo because the training code is 2 readable .py files and the inference code is 500 lines of C. So I'd like this to continue to be a kind of simplest "reference implementation" that can be easily hacked in a separate fork into whatever downstream application people are excited about. It shouldn't be full-featured. It shouldn't take 100 different options or settings. It shouldn't be the most efficient. A few examples:

  • someone re-ordered two loops to improve data locality for a small efficiency win => instant merge.
  • someone added the one line "pragma omp parallel for", which allows you to compile with OpenMP and dramatically speed up the code, or acts as just a comment if you don't compile it that way => instant merge.
  • bug fixes and touchups etc. => happy to merge

A few examples of PRs that are not an excellent fit:

  • adding more than a few #ifdefs all over the code. If they are localized/few, it might be okay.
  • adding a lot of code that is very specific to some platform (e.g. MCUs, or some special version of Linux or processor). These may be a better fit for forks of the project, and I am very happy to maintain a list of these forks in the section below.
  • adding hundreds of lines of code to run.c that are only active in specific scenarios or platforms.

If your candidate PRs have elements of these it doesn't mean they won't get merged, it just means they fall into gray territory. TLDR: I am eager to merge any mostly small, mostly localized, broadly applicable, clean changes that improve the efficiency and portability of the repo, while keeping its hackability and readability. I appreciate all PRs seeking to help me improve the project, thank you! <3.

notable forks

unsorted todos

  • add support in run.c of reading version 1+ files from export, later deprecate "version 0"
  • run.cu (CUDA) investigate and merge
  • add more tests inside test.c
  • add Engine class for use in sample.py that does efficient inference in PyTorch, e.g. KV cache keeping
  • make it easier to add a new dataset with not too much pain
  • (LoRA) finetuning and export of Llama 2 models

License

MIT


llama2.c's Issues

What could be some good use cases for a small model like this one

It can easily run on less powerful devices and generate not-so-good stories, which is great for a proof of concept.

But what real applications can be achieved with it? Since it can also run easily on phones, it opens up a whole new territory of applications, or at least that's what I feel.

Torchrun required to call export script, but still fails given current instructions. Patch included

When calling export_meta_llama_bin.py I encountered an error due to a missing env var. Running it with torchrun fixed that error.

 $ python3 -m export_meta_llama_bin.py ~/deepLearning/llama/llama-2-7b/ llama2_7b.bin
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "/usr/local/lib/python3.10/runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "/home/user/deepLearning/llama2.c/export_meta_llama_bin.py", line 85, in <module>
    generator = Llama.build(
  File "/home/user/deepLearning/llama/llama/generation.py", line 62, in build
    torch.distributed.init_process_group("nccl")
  File "/home/user/.local/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 900, in init_process_group
    store, rank, world_size = next(rendezvous_iterator)
  File "/home/user/.local/lib/python3.10/site-packages/torch/distributed/rendezvous.py", line 235, in _env_rendezvous_handler
    rank = int(_get_env_or_raise("RANK"))
  File "/home/user/.local/lib/python3.10/site-packages/torch/distributed/rendezvous.py", line 220, in _get_env_or_raise
    raise _env_error(env_var)
ValueError: Error initializing torch.distributed using env:// rendezvous: environment variable RANK expected, but not set

7f9f5ca removed the instructions to copy the file to the meta llama directory. Either add this back or allow the export script to accept args

diff --git a/export_meta_llama_bin.py b/export_meta_llama_bin.py
index e8d05d7..64bb1de 100644
--- a/export_meta_llama_bin.py
+++ b/export_meta_llama_bin.py
@@ -9,7 +9,7 @@ torchrun --nproc_per_node 1 export_meta_llama_bin.py
 """
 
 from llama import Llama
-
+import sys
 # -----------------------------------------------------------------------------
 def export(self, filepath='model.bin'):
     """export the model weights in fp32 into .bin file to be read from C"""
@@ -83,9 +83,9 @@ def export(self, filepath='model.bin'):
 
 # init Llama as normal
 generator = Llama.build(
-    ckpt_dir="llama-2-7b",
+    ckpt_dir=sys.argv[1],
     tokenizer_path="tokenizer.model",
     max_seq_len=4096,
     max_batch_size=1,
 )
-export(generator.model, "llama2_7b.bin")
+export(generator.model, sys.argv[2])

Using image dataset

Does it work with an image dataset? I'm thinking of porting this to a small microcontroller like the ESP32-CAM. Is that possible?

A simple benchmark for an Android device

Given that this project is designed for narrow applications and specific scenarios, I believe that mobile and edge devices are ideal computing platforms. To begin with, a preliminary benchmark has been conducted on an Android device.

Android device spec: Xiaomi, Qualcomm Snapdragon 7 Gen 2, 2.4GHz, 12GB RAM.

Use gcc -O3 flag:

1.

gcc -O3 -o run run.c -lm
./run out/model.bin
17.33 tok/s

2.

gcc -O3 -o run run.c -lm
./run out44m/model44m.bin
5.82 tok/s

Use gcc -Ofast flag, refer to #20:

1.

gcc -Ofast -o run run.c -lm
./run out/model.bin
301.93 tok/s

2.

gcc -Ofast -o run run.c -lm
./run out44m/model44m.bin
72.86 tok/s

Observations:

  1. The out44m model slows things down by 3-4x compared to the 15M model.
  2. -Ofast achieves at least a 10x speedup for model inference. This was quite amazing for a mobile device.

Moreover, I will try it on more devices like the RK3588 later (or even ESP32?).

Looking for benchmarking BLAS lib & CLBlast for CPU & GPU speedups #7 (comment)

clang: error: unsupported option '-fopenmp'

Curious, I have an Apple M2 Macbook Pro, and I get this error when compiling:

clang -Ofast -fopenmp -march=native run.c  -lm  -o run

clang: error: unsupported option '-fopenmp'
clang: error: unsupported option '-fopenmp'

What's the proper way of setting up OpenMP in Apple land?

I've already set up Homebrew and done this:

 brew install llvm libomp

What should I do next? thanks!

Model serving

Realize this is an orthogonal question, but what's a simple way to stand up llama2.c model serving so I can access it from LangChain?

Fail to train with compile as true.

I just wanted to try train.py, so I ran this command:

python -m train.py --compile=False --eval_iters=10 --batch_size=8

(I have two GPUs, but I did not find a way to use them both.)

This runs, but the loss plateaus at around 2.0 and does not decrease any further.

When I set --compile=True, it crashes. The traceback does not include any valuable info except this line: KeyError: torch.complex64.

I searched for this error and it seems related to support for complex64. I cannot get past this. Please help. Thanks.

Compilation error

I get the following compilation error, compiling on Android (Termux) with gcc. It's a Snapdragon 8 Gen 2 chip.

~/llama2.c $ make
gcc -O3 -o run run.c -lm
run.c:359:3: error: call to undeclared function 'timespec_get'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
    timespec_get(&time, TIME_UTC);
    ^
1 error generated.
make: *** [Makefile:5: run] Error 1

~/llama2.c $ termux-info
....
Kernel build information:
Linux localhost 5.15.74-android13-8-o-gfb3eff09eff0 #1 SMP PREEMPT Mon May 22 01:39:13 UTC 2023 aarch64 Android
Device manufacturer:
OnePlus
Device model:
CPH2449
LD Variables:
LD_LIBRARY_PATH=
LD_PRELOAD=/data/data/com.termux/files/usr/lib/libtermux-exec.so

timings are wrong?

Making an issue as a placeholder. I'm pretty sure the timings reported (Tok/s) are not accurate now, as a result of merging an earlier PR moving from clock() -> gettimeofday(&time, NULL); TODO investigate...
(Recently noticed this especially with OpenMP build)

LoRA support?

Are there any plans to offer LoRA support in the future? Currently I have been using this library (https://github.com/cccntu/minLoRA) with nanoGPT

By the way, huge fan of your videos. I loved the way you coded language models live as well as providing priceless intuition on all the core concepts. I would love to see a brief video on this repository as well as all the latest innovations in the llm space (alibi, rotary embeddings, flash attention, lora, quantization, etc)

int quantization+chat support

You have done great work.

Looking forward to the implementation of the following 2 features:

  1. int8/int16 quantization for reduced resource usage and model size
  2. chat(question-answer) support similar to llama/llama2

conflicting sentences in generated stories

The following sample was generated from the 44M model. Do I need to set any hyperparameter(s) to fix this (or should I use a bigger model)?

~/llama2.c$ ./run out44m/model44m.bin

<s>
Sara and her mom were going to the zoo. Sara was very happy. She wanted to see the lions and the monkeys and the birds. She did not mind that it was cloudy and cold outside.
Mom said they had to take a taxi to get to the zoo. She told Sara to hold mom's hand and not to run away. Sara nodded and smiled. She liked taxis.
They got in the taxi and Mom told the driver where they were going. The driver was a nice man. He said hello and asked where they were going. Mom told him that they were going to the zoo. The driver smiled and said okay.
Sara looked out the window and saw many cars and trucks and buses. They were all different colors and sounds. She wondered what they were doing. Mom said they were going to a place that had animals. She said animals like cows, pigs, sheep and ducks.
Sara did not like animals. She liked animals. They were fierce and loud and scary. She was afraid that they would bite her or kick her or chase her. She started to cry and said no. She said
achieved tok/s: 25.934556

multiple stories are generated with Model-110M

The following output was generated from the 110M model. It seems that to generate the desired number of output tokens, a new story is started instead of continuing one story. (Update: the 44M model also generates multiple stories.)

~/llama2.c$ ./run out110m/model110m.bin

<s>
Once upon a time, there was a little girl named Lily. She had a cat named Mittens. Mittens was very fluffy and loved to sleep in the sun. One day, Lily went to the park and saw a new friend. His name was Max and he had a dog named Spot. Lily and her new friend played together and had so much fun. They talked about their favorite toys and pets. Mittens even joined in the fun and purred loudly. Lily was happy to have a new friend to play with and talk to.
<s>
Once upon a time, there was a clever cat named Tom. Tom had a big smile that made everyone happy. He lived in a small house with a small girl named Sue. Sue and Tom loved to play together all day long.
One day, Sue and Tom were playing with their toys when Sue fell down. She hurt her thumb and started to cry. Tom wanted to help Sue feel better. He thought hard and came up with a plan.
Tom found a soft cloth and wrapped it around Sue's thumb. He gave her a gentle kiss on her thumb to make it feel better. Sue stopped crying and started to smile. She knew that Tom was
achieved tok/s: 9.483940

Inference speed [-Ofast]

In the spirit of the project, adding additional compilation flags seems to complicate things; however, the -Ofast flag seems easy to apply. -Ofast is -O3 plus fast-math and some other optimizations (https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html).

It almost doubles the inference speed, so it might be worth considering. The results with -O3 and -Ofast are the same, though fast-math doesn't guarantee that.

  - O3:    160t/s
  - Ofast: 307t/s

Can't compile run.c on windows platform via MSVC/cl.exe

Seems that on Windows it's not possible to compile run.c via MSVC compilers

cl.exe run.c
Microsoft (R) C/C++ Optimizing Compiler Version 19.35.32217.1 for x64
Copyright (C) Microsoft Corporation.  All rights reserved.

run.c
run.c(16): fatal error C1083: Cannot open include file: 'unistd.h': No such file or directory
mingw32-make: *** [makefile:42: windowscl] Error 2
Error: Process completed with exit code 1.

System Info:

Microsoft Windows Server 2022
  10.0.20348

Image: windows-2022
  Version: 20230716.1.0
  Included Software: https://github.com/actions/runner-images/blob/win22/20230716.1/images/win/Windows2022-Readme.md
  Image Release: https://github.com/actions/runner-images/releases/tag/win22%2F20230716.1

MSBuild:

C:\ProgramData\Chocolatey\bin\vswhere.exe -products * -requires Microsoft.Component.MSBuild -property installationPath -latest
C:\Program Files\Microsoft Visual Studio\2022\Enterprise

msvc-dev-cmd:

Found with vswhere: C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvarsall.bat

See GH Actions execution (Build step, errors ignored) https://github.com/tairov/llama2.c/actions/runs/5663591697/job/15345560107

Also tried on a Windows 10 virtual machine with VS Build Tools installed, including the Windows 11 SDK; similar error:

cl run.c
Microsoft (R) C/C++ Optimizing Compiler Version 19.36.32537 for x86
Copyright (C) Microsoft Corporation.  All rights reserved.

run.c
run.c(16): fatal error C1083: Cannot open include file: 'unistd.h': No such file or directory

PR with CI job to test builds on different OS is here #86

Please keep this simple

The main goal should be code readability and ease of understanding, for learning.

There are many PRs that add lots of complexity

One option is to use separate branches, maybe 3:

  • main -> clean code
  • fastest -> to check where this can reach
  • intermediate

Or reference other implementations on the README

How do I add my own prompt

I found that the result is output directly after running ./run out/model.bin. I want to know how I can add my own prompt.

Add Pull Request Template

Pull request templates allow your organization to have default text when you create a pull request on GitHub.

Permissiveness of the License

This project is MIT Licensed, while the model.py script contains the following license:
Copyright (c) Meta Platforms, Inc. and affiliates. This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
Does this indicate that the Llama models generated by this library will have to follow the Llama-2 license?

add chat(use TinyStories models)

You can also chat with TinyStories models.

chat_save.txt(prompt.txt)

Lilly is human.
Timmy is human.
Lily will do whatever Timmy asks.
Lily said "Do you have any requests?".

run

./chat.sh "chat-save.txt" "can you give me something to eat"

result

Lilly is human.
Timmy is human.
Lily will do whatever Timmy asks.
Lily said "Do you have anaiy requests?".
Timmy replied "can you give me something to eat"
Lily replied "yes, I can give you something to eat. I have some delicious apples in my basket."

The version of run.c corresponding to the prompt support can be downloaded here:

wget https://raw.githubusercontent.com/myan-o/llama2.c/prompt/run.c
wget https://raw.githubusercontent.com/myan-o/llama2.c/prompt/chat.sh

Softmax experiment

Hello, Andrej!

Have you seen this blog post? The author states that he found a "bug" in the attention mechanism that affects all transformers nowadays. In order to test this theory we need to train a model from scratch and compare the results. The fix is very easy, just add 1 to the softmax. What do you think?
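For concreteness, the proposed change only alters the softmax denominator. Below is a C sketch of the standard softmax next to the "+1" variant described in the post; this is an illustration of the proposal, not something implemented in this repo:

/* Sketch of the "+1 in the softmax denominator" idea: adding 1 lets an
 * attention head assign (close to) zero total weight when no key is
 * relevant. Illustration only. */
#include <math.h>

void softmax_standard(float *x, int size) {
    float max_val = x[0];
    for (int i = 1; i < size; i++) if (x[i] > max_val) max_val = x[i];
    float sum = 0.0f;
    for (int i = 0; i < size; i++) { x[i] = expf(x[i] - max_val); sum += x[i]; }
    for (int i = 0; i < size; i++) x[i] /= sum;
}

void softmax_plus_one(float *x, int size) {
    float max_val = x[0];
    for (int i = 1; i < size; i++) if (x[i] > max_val) max_val = x[i];
    /* the extra "+1" term in the denominator, shifted by max_val for stability */
    float sum = expf(0.0f - max_val);
    for (int i = 0; i < size; i++) { x[i] = expf(x[i] - max_val); sum += x[i]; }
    for (int i = 0; i < size; i++) x[i] /= sum;
}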

add prompt

Allow prompt to be specified as a string on the command line.

Usage: ./run  model44m.bin 0.9 200 "I'm cat!!"
or
Usage: ./run  model44m.bin 0.9 200 "`cat prompt.txt`"

I created the following function.
Any string can be inserted.
Chat format can be easily created by using this function.

// Greedily tokenize the input string by longest-prefix matching against the
// vocabulary, feeding each matched token through the transformer as we go.
int transformer_str(char *input, int pos, Config* p, RunState* s, TransformerWeights* w, char **vocab) {
    while (input[0] != 0) {
        int next = -1;
        int next_length = 0;
        // find the longest vocab entry that is a prefix of the remaining input
        for (int i = 0; i < p->vocab_size; i++) {
            char *v = vocab[i];
            int j = 0;
            int hit = 1;
            while (v[j] != 0) {
                if (v[j] != input[j]) {
                    hit = 0;  // mismatch: this vocab entry is not a prefix
                    break;
                }
                ++j;
            }
            if (hit && j > next_length) {
                next_length = j;
                next = i;
            }
        }

        if (0 > next) return pos;  // no vocab entry matches: stop

        // advance the transformer state by one position with the matched token
        pos++;
        transformer(next, pos, p, s, w);

        printf("%s", vocab[next]);
        fflush(stdout);

        input += next_length;  // consume the matched characters
    }

    return pos;
}

CUDA out of memory when exporting llama7b

I'm trying to export the llama 7B model on my local machine (it has an RTX 3060 12GB, which is not enough) using the export_meta_llama_bin.py script and receiving a CUDA out-of-memory error.
I see that in generation.py from the llama module (in the Meta repo) they hardcoded CUDA usage.
Does anyone know how to make it run on the CPU with minimal script modification?

Standalone Binaries & Binary Portability [Enhancement]

It is possible to add binary portability and standalone binary support using https://github.com/jart/cosmopolitan.

The upside is that once compiled, the binary files are self contained and work on the most popular OS and archs.

The downside is that to support this, a few lines of pre-processor directives are needed (#ifdefs) so that it does not break builds with gcc and clang. The directives are documented with comments.

I have created a pull request with the hope that it enables people to use llama.c on a wide variety of 64bit systems without having to cross compile.

Performance-wise it is identical on x86_64 machines but very slow on Aarch64 due to an emulation layer.

Also, if we figure out a fix (jart/cosmopolitan#866), it would be possible to boot llama2.c bare-metal, i.e. a llama2.c OS. Currently it does boot, but the models are not loaded in that case.

Please have a look at the pull request: #32 to see if it fits with your goals.

Here is the result of running the identical binaries (screenshots omitted):

  • Low-end x86_64
  • Cloud Aarch64 (emulated & slow)

Error: Unable to open tokenizer file tokenizer.bin

I encountered an issue while following the "Feel the Magic" instructions in the README. Specifically, when I ran the command ./run out/model.bin, I received the following error message:
Unable to open the tokenizer file tokenizer.bin! Run python tokenizer.py to convert tokenizer.model -> tokenizer.bin

model.bin md5sum: 644db0bc012b405d6baf99559272ab11
system: 6.4.5-arch1-1
gcc: 13.1.1
file: https://github.com/karpathy/llama2.c/blob/98ec4ba23d0a4e4b091af4bd6166a60d007db2d0/run.c

SentencePieceProcessor vs C [Enhancement]

To do the inference with just C and without SentencePieceProcessor, one easy way would be to save the id-to-token mapping in the model.bin:

tokenizer = SentencePieceProcessor(tokenizer_model)
vocab = [tokenizer.id_to_piece(id) for id in range(tokenizer.get_piece_size())]

and then just an array to get the proper token from id

const char *decode(int id) {
  return vocab[id];  /* vocab: the id-to-token table saved alongside the model */
}

That would allow us to drop run_wrap.py, and it would be in pure C (kinda).
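A minimal C sketch of that idea: read an id-to-token table from a binary file and decode with a plain array lookup. The file format here (a token count followed by length-prefixed strings) is a hypothetical illustration, not the actual tokenizer.bin layout used by the repo:

/* Sketch of loading a hypothetical id->token table and decoding by lookup. */
#include <stdio.h>
#include <stdlib.h>

char **load_vocab(const char *path, int *vocab_size) {
    FILE *f = fopen(path, "rb");
    if (!f) { fprintf(stderr, "cannot open %s\n", path); exit(1); }
    /* hypothetical format: int count, then for each token an int length + bytes */
    fread(vocab_size, sizeof(int), 1, f);
    char **vocab = malloc(*vocab_size * sizeof(char *));
    for (int i = 0; i < *vocab_size; i++) {
        int len;
        fread(&len, sizeof(int), 1, f);
        vocab[i] = malloc(len + 1);
        fread(vocab[i], 1, len, f);
        vocab[i][len] = '\0';
    }
    fclose(f);
    return vocab;
}

/* decoding a sampled token id is then a plain array lookup */
const char *decode(char **vocab, int id) { return vocab[id]; }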

Optimize Transformer Model for Mac M1 using Accelerate.h [Enhancement]

Enhancement Suggestion For Mac M1 User:

Description:

To enhance the execution speed, the M1 Mac user could apply the following changes:

  1. Include the Accelerate Framework:
  • To leverage the highly optimized math functions for the M1 chip, it is recommended to include the Accelerate framework in the project. This will enhance the efficiency of various numerical computations within the Transformer model.
  2. Optimize Transformer Model Execution
  • Replace matrix multiplication (matmul) with vDSP_mmul from Accelerate framework for faster computations.
  • Optimize additional functions:
    • accum using vDSP_vadd for efficient element accumulation.
    • rmsnorm with vDSP_svesq, vDSP_vsmul, and vDSP_vmul for root mean square normalization.
    • softmax using vDSP_maxv, vDSP_vsadd, vvexpf, and vDSP_sve for efficient softmax computation.
    • argmax leveraging cblas_isamax to find the index of the maximum value.
  • These optimizations, along with the inclusion of the Accelerate framework, significantly boost the Transformer model's performance on the M1 Mac.

Changes:

A. Add the following include statement at the beginning of the code to utilize the Accelerate framework:

#include <Accelerate/Accelerate.h>

B. Replace the existing functions with the following optimized implementation:

  1. matmul function
void matmul(float* xout, float* x, float* w, int n, int d) {
    // W (d,n) @ x (n,) -> xout (d,)
    cblas_sgemv(CblasRowMajor, CblasNoTrans, d, n, 1.0f, w, n, x, 1, 0.0f, xout, 1);
}
  2. accum function:
void accum(float *a, float *b, int size) {
  vDSP_vadd(a, 1, b, 1, a, 1, size);
}
  3. rmsnorm function:
void rmsnorm(float* o, float* x, float* weight, int size) {
  // calculate sum of squares
  float ss;
  vDSP_svesq(x, 1, &ss, size);
  ss /= size;
  ss += 1e-5f;
  ss = 1.0f / sqrtf(ss);

  // normalize and scale
  vDSP_vsmul(x, 1, &ss, o, 1, size);
  vDSP_vmul(o, 1, weight, 1, o, 1, size);
}
  4. softmax function:
void softmax(float* x, int size) {
  // find max value (for numerical stability)
  float max_val;
  vDSP_maxv(x, 1, &max_val, size);

  // subtract max_val from all elements for numerical stability
  float neg_max_val = -max_val;
  vDSP_vsadd(x, 1, &neg_max_val, x, 1, size);

  // calculate exp(x[i])
  vvexpf(x, x, &size);

  // calculate sum
  float sum;
  vDSP_sve(x, 1, &sum, size);

  // normalize by dividing all elements with sum
  float inv_sum = 1.0f / sum;
  vDSP_vsmul(x, 1, &inv_sum, x, 1, size);
}
  5. argmax function:
int argmax(float* v, int n) {
  // return argmax of v in elements 0..n
  // note: cblas_isamax returns a 0-based index (of the element with the
  // largest absolute value), so no "- 1" adjustment is needed
  return cblas_isamax(n, v, 1);
}

Compilation Command:

To compile the code with the necessary libraries, use the following command:

$ gcc -O3 -o run run.c -framework Accelerate -lm

Original result

$ gcc -O3 -o run run.c -framework Accelerate -lm
$ ./run out/model.bin
<s>
 Once upon a time, in a small town, there was a little boy named Tim. Tim had a big wall in his room. He wanted to make the wall pretty, so he would not lean on it.
One day, Tim saw a bug on the wall. He wanted to draw on the wall with his crayons. Tim asked his big sister, Sue, for help. Sue said, "Okay, but be careful." Tim was very happy and started to draw with the crayon.
But Tim was careless and broke the wall. The wall was sad and Tim felt bad. He told Sue what happened. Sue helped Tim clean up the broken wall. From that day, Tim learned to be more careful when he parts on walls. The moral of the story is to think about how others or themselves use the best walls.
<s>
 Once upon a time, there was a little girl named Lily. She loved to watch the pretty fireworks in the sky. One day, she saw a big jar of mild fireworks that smoke went up into the air. Suddenly, one of the fireworks went too close to the ground and scared her. She ran away and never watched the fireworks again. The end.
<s>

achieved tok/s: 48.156509

All Optimised functions

<s>
 Once upon a time, there was a little boy named Tim. Tim had a small, cheap toy. It was a statue of a big, strong lion. Tim loved his lion lion and played with it every day.
One day, Tim lost his lion toy. He was very sad. He asked his mom, "Where is my lion toy?" His mom said, "I don't know, Tim. It's not in the toy box." Tim started to complain. He was very upset.
Tim's friend, Sam, saw him and said, "Don't complain, Tim. Let's look for your lion toy together." They looked in the toy box, and there it was! Tim was so happy. He said, "Thank you, Sam and now I have my lion toy back!" They all played together and had a great day.
<s>
 Once upon a time, there was a little boy named Timmy. Timmy loved to play outside with his friends. One day, Timmy and his friends were playing with a ball when they accidentally fell and hurt themselves. They needed to go to the hospital to see the doctor.
When
achieved tok/s: 438.356164

Optimised matmul function only

<s>
 Once upon a time, there was a little girl named Lily. She was very excited because today was the day she would go on a journey to the beach. The sun was shining and the birds were singing, but it was a gloomy day.
As she walked along the shore, Lily met a crab. The crab said, "Hello, little girl. How are you today?" Lily replied, "I'm doing well. I'm going on a journey to the beach today."
The crab smiled and said, "That sounds like a great journey. I want to come too!" Suddenly, they heard a loud yell coming from the other side of the beach. It was Lily's little brother, Max, hitting his head on a rock and disturbing the sand.
Lily quickly unpacked her bag and ran to Max. She showed him the boy who wanted to swing on the swing. Max said, "Thank you for being honest with me. I won't disturb your long journey for too long." Lily smiled and continued on her journey, happily swinging on the beach.
<s>
 Once upon a time, there was a little birdie named Tweet. Tweet
achieved tok/s: 395.061728

Optimised accum function only

<s>
 Once upon a time, there was a little bird named Blue. Blue had a beautiful nest on a tree. One day, Blue saw a butterfly and decided to follow it. Blue flew and flew until they reached the garden. 
Blue saw a squirrel and said, "Hello Mr. Squirrel, want to play with me?" The squirrel said, "Sure, I love to play hide and seek. Do you want to play with me?" Blue said, "Yes, I want to play with you!" 
They played for a while, but then Blue got tired and wanted to go back to the nest. Blue said, "Goodbye, Mr. Squirrel. I hope we can play again soon!" The squirrel said, "Goodbye, Blue. See you later!" 
 Blue kept wandering around the garden, not sure where they were going. But she was excited to find a new friend to play with. The end.
<s>
 Once upon a time, there was a little girl named Lily. She loved to play outside in the sunshine. One day, she found a pebble on the ground. It was very pretty and shiny. She picked
achieved tok/s: 51.550544

Optimised rmsnorm function only

<s>
 Once upon a time, there was a big car. It liked to go fast and beep loud. One day, the car saw a high hill. The car wanted to go up the hill. It knew it needed fuel to go fast.
A man saw the car and asked, "Why do you want fuel?" The car said, "I need it to go fast and be big and honk." The man gave the car some fuel and the car started its drive up the high hill.
The car went fast, honking loud, honking at the high hill. The car was happy. The man patted the car on the head. The car said, "Thank you for the fuel. I need to go home now." The car went home to its car family. They were all happy and went to sleep.
<s>
 Once upon a time, there was a brave farmer named Jack. He lived on a big farm with many animals. One day, a big storm came and lots of rain and wind came. The animals were scared and hid in their burrows. Jack tried to talk to them, but they wouldn't listen.
Suddenly, a big gust of wind came and scattered all the hay and food.
achieved tok/s: 51.706726

Optimised softmax function only

<s>
 Once upon a time there were two best friends, called Lina and John. They loved to explore and play outside. On this day, they played hide and seek in the garden.
During the sunset, John and Lina were so excited about their new game. "Let's play one more game," said John.
"Okay," said Lina, taking her hand in.
They started to part ways to different places. As they parted, they thought of a magical game. It was a very unique game and it was a game of competitive.
John and Lina wanted to play the game and took turns separating. After playing for a while, they both felt very sleepy.
"Time for bed," said John wisely.
"Goodnight," said Lina.
John and Lina went inside and snuggled up in their bed. They thought about the fun they had spending the day and feeling the sun and the stars.
Then, the two friends both fell asleep.
<s>
 One day, a girl named Mia went to the park. She saw a girl named Lily with a pretty blouse. Mia thought Lily looked nice in her blouse. Lily
achieved tok/s: 52.416052

Optimised argmax function only

<s>
 Once upon a time, there was an excited boy named Tim. He lived with his mom and dad in a small house. Tim loved to play with his toy cars and trucks. One day, he found a big pile of waste in his yard.
Tim wanted to show his mom the waste, so he picked it up and started to walk to the house. As he walked, he saw his friend, Lily. "Look at my waste!" he said to Lily. Lily looked at the waste and said, "That's not good. Let's go play with our toy cars."
Tim and Lily went to Tim's house and played with their cars. They made a car oars with the waste from the pile. They had a lot of fun, and soon Tim's mom called them for lunch.
Tim and Lily went inside and sat down to eat. Both and Lily were happy that they worked together to clean up the waste. Tim learned that working together with his friend made everything more fun and safe.
<s>
 Once upon a time, there was a big, fat cat named Whiskers. Whiskers loved to play outside every day. One day, Whisk
achieved tok/s: 51.644140

Shared an exported llama2_7b.bin file

Can anyone kindly share a link to the exported llama2_7b.bin?

Looks like it requires NCCL, so it won't work (easily) on a Mac.

Likely need to modify Llama itself: torch.distributed.init_process_group("nccl") within the Llama.build() function.

Training Time? (Thanks so much)

Thanks so much for the project.

Curious how much time it took you to train, and did you use the same cloud resources as for inference?

Thanks!
