
Comments (17)

MarkSchmidty avatar MarkSchmidty commented on August 27, 2024 7

122GB.

What would be interesting is to benchmark quality versus memory size, i.e. does, say, an fp16 13B model generate better output than an int4 60GB model?

The answer is no. At around 20B parameters you only need 3 bits to get about the same output quality as the same 20B parameter model in uncompressed fp16. As a rule of thumb, for every 4x increase in parameters you can drop one bit while still getting close to 16-bit quality.

So an 80B parameter model would have around the same quality in 2-bit as in 16-bit, and a 320B parameter model would have around the same quality in 1-bit as in 16-bit. Quantization below 1 bit is possible through various methods, such as re-using bins of bits across non-connected layers, but those are only applicable to massive models and will only maintain output quality at roughly 1T+ parameters.
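
To make that rule of thumb concrete, here is a rough illustrative calculation (my own arithmetic reading of the rule above, not something taken from the papers below): start at ~3 bits for 20B parameters, drop roughly one bit per 4x increase in parameters, and the weight footprint is then parameters × bits / 8.

```
import math

# Illustrative reading of the rule of thumb: ~3 bits at 20B parameters,
# minus roughly one bit per 4x increase in parameter count.
def rule_of_thumb_bits(n_params, base_params=20e9, base_bits=3.0):
    return base_bits - math.log(n_params / base_params, 4)

def weight_footprint_gb(n_params, bits):
    # Weights only; ignores activations, KV cache, and quantization scales.
    return n_params * bits / 8 / 1e9

for n in (7e9, 13e9, 20e9, 80e9, 320e9):
    bits = rule_of_thumb_bits(n)
    print(f"{n/1e9:5.0f}B params -> ~{bits:.1f} bits, ~{weight_footprint_gb(n, bits):.0f} GB of weights")
```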

I'm not going to list every source for this, but these papers are a good start:

  • GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers | Oct 2022
  • The case for 4-bit precision: k-bit Inference Scaling Laws | Dec 2022 (updated Feb 2023)
  • SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot | Jan 2023

Also, we're running empirical tests to validate this with LLaMA specifically over in #9, and so far they are turning out as expected. (No surprises there, since the same tests have already been done on half a dozen different models in a dozen sizes from 200M to over 500B parameters in the papers linked above.)

P.S. The only LLaMA which will see a quality benefit from 8-bit is 7B, and the benefit will be so small as to be insignificant. Even a minor amount of finetuning, worth $10 of compute, is enough to overcome the difference between 8-bit and 4-bit at 7B parameters.


riverzhou avatar riverzhou commented on August 27, 2024 6

Waiting for int8 quantization....


gjmulder avatar gjmulder commented on August 27, 2024 5

This issue is perhaps misnamed now, as 8-bit will likely improve quality over 4-bit, but not performance.

In summary:

  • Inference performance: 4bit > 8bit > fp16 (as the code looks to be primarily memory-bound: only a ~50% performance increase going from 8 to 16 cores on my 16-core / 32-thread Ryzen 1950X; see the back-of-the-envelope sketch after this list)
  • Precision quality: fp16 > 8bit > 4bit (as more precision improves inference quality)
  • Scaling quality: 65B > 30B > 13B > 7B (scaling of models improves inference quality significantly)
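
If the code really is memory-bound, a crude lower bound on per-token latency is just the bytes of weights read per token divided by memory bandwidth. A back-of-the-envelope sketch with assumed numbers (65B weights, ~40 GB/s of practical DDR4 bandwidth; these are assumptions, not measurements):

```
# Crude memory-bound estimate: assume each generated token streams through
# (roughly) all of the weights once. Bandwidth figure is an assumption.
N_PARAMS = 65e9
BANDWIDTH_GB_S = 40.0   # assumed practical dual-channel DDR4 bandwidth

for name, bits in (("fp16", 16), ("int8", 8), ("int4", 4)):
    weight_gb = N_PARAMS * bits / 8 / 1e9
    ms_per_token = weight_gb / BANDWIDTH_GB_S * 1000
    print(f"{name}: ~{weight_gb:.0f} GB of weights -> >= {ms_per_token:.0f} ms/token")
```

These lower bounds are in the same ballpark as the ~790 ms/token (4-bit) and ~2,400-3,100 ms/token (fp16) figures posted further down in the thread, which is consistent with the memory-bound reading.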

Which led me to wonder where the sweet spots are between quantization precision and model size for a given memory footprint.

Once the model can be loaded once and called repeatedly (issue #23) and the Python bindings are merged (issue #82 and https://github.com/thomasantony/llama.cpp/tree/feature/pybind), I can test all the permutations against, say, the SQuAD benchmark, and we can understand the impact of quantization versus model size.
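
In the meantime, a throwaway harness over the existing CLI can at least collect timings across the permutations; quality scoring against SQuAD would additionally need to parse the generated text. This is only a sketch assuming the ./main flags used elsewhere in this thread (-m, -t, -n) plus -p for the prompt, with placeholder model paths:

```
import itertools
import subprocess
import time

# Placeholder model paths; adjust to your local layout.
MODELS = {
    "7B-q4":   "./models/7B/ggml-model-q4_0.bin",
    "13B-q4":  "./models/13B/ggml-model-q4_0.bin",
    "65B-f16": "./models/65B/ggml-model-f16.bin",
}
THREADS = (8, 16)
PROMPT = "Question: What is the capital of France? Answer:"

for (name, path), threads in itertools.product(MODELS.items(), THREADS):
    start = time.time()
    result = subprocess.run(
        ["./main", "-m", path, "-t", str(threads), "-n", "128", "-p", PROMPT],
        capture_output=True, text=True, check=True)
    print(f"{name}, {threads} threads: {time.time() - start:.1f}s wall time")
    # result.stdout holds the generated text for any downstream quality scoring.
```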


ggerganov avatar ggerganov commented on August 27, 2024 2

No 8-bit support atm, but it can be added similarly to 4-bit.
I expect it will be slower because it will increase memory traffic, but it also depends on how efficiently the SIMD is implemented.
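
For anyone wondering what "similarly to 4-bit" means in practice: the existing 4-bit path is, roughly, blocked absmax quantization, where a small group of weights shares one float scale and each weight is stored as a low-bit signed integer; an 8-bit variant is the same idea with wider integers, hence twice the memory traffic per weight. A minimal numpy sketch of the idea (not ggml's actual data layout):

```
import numpy as np

def quantize_blocked(w, bits, block=32):
    # Blocked absmax quantization sketch (not ggml's actual format):
    # one float scale per block of `block` weights, signed integers per weight.
    # A real 4-bit kernel would also pack two 4-bit values into each byte.
    qmax = 2 ** (bits - 1) - 1              # 7 for 4-bit, 127 for 8-bit
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    scale[scale == 0] = 1.0                 # avoid division by zero on all-zero blocks
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale.astype(np.float32)

def dequantize_blocked(q, scale):
    return (q.astype(np.float32) * scale).reshape(-1)

w = np.random.randn(4096).astype(np.float32)
for bits in (4, 8):
    q, scale = quantize_blocked(w, bits)
    err = np.abs(dequantize_blocked(q, scale) - w).mean()
    print(f"{bits}-bit: mean abs reconstruction error {err:.4f}")
```

The 8-bit version halves the rounding error but doubles the bytes read per weight, which is why it is expected to be slower if memory bandwidth is the bottleneck.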


j-f1 avatar j-f1 commented on August 27, 2024 2

Some interesting use cases for 4GB inference include running at near-native speeds fully in a web browser on any device with WebAssembly, and running on the very popular 4GB Raspberry Pi. :)

Also newish iPhones, which allow up to 4080MB of memory use with the “Increased Memory Limit” entitlement!


gjmulder avatar gjmulder commented on August 27, 2024

I tried the intermediate fp16 and could get the model to run in 122GB of resident memory. With a Ryzen 1950X 16 Core CPU and slower memory than you:

4bit quantized:

main: mem per token = 71159620 bytes
main:     load time = 18022.09 ms
main:   sample time =   279.06 ms
main:  predict time = 139437.72 ms / 787.78 ms per token

fp16:

main: mem per token = 71159620 bytes
main:     load time = 136686.84 ms
main:   sample time =   372.38 ms
main:  predict time = 303936.28 ms / 2356.10 ms per token
main:    total time = 482714.19 ms


apollotsantos avatar apollotsantos commented on August 27, 2024

I tried the intermediate fp16 and could get the model to run in 122GB of resident memory. With a Ryzen 1950X 16 Core CPU and slower memory than you:

4bit quantized:

main: mem per token = 71159620 bytes
main:     load time = 18022.09 ms
main:   sample time =   279.06 ms
main:  predict time = 139437.72 ms / 787.78 ms per token

fp16:

main: mem per token = 71159620 bytes
main:     load time = 136686.84 ms
main:   sample time =   372.38 ms
main:  predict time = 303936.28 ms / 2356.10 ms per token
main:    total time = 482714.19 ms

How did you run with the fp16 version?


gjmulder avatar gjmulder commented on August 27, 2024

./main -m ./models/65B/ggml-model-f16.bin -t 16 -n 128


apollotsantos avatar apollotsantos commented on August 27, 2024


neuhaus avatar neuhaus commented on August 27, 2024

OK, I tried it with the fp16 model too; it only swapped a little bit (I have an 8-core Ryzen 7 3700X and 128GB RAM):

$ ./main -m models/65B/ggml-model-f16.bin -t 8 -n 128
main: mem per token = 70897348 bytes
main:     load time = 71429.04 ms
main:   sample time =   324.53 ms
main:  predict time = 402116.09 ms / 3117.18 ms per token
main:    total time = 483291.78 ms

I also tried using -t 16 (to take advantage of multithreading) but it ended up being slightly slower.

I'm still hoping that 8-bit could be faster than 4-bit - is it likely?


apollotsantos avatar apollotsantos commented on August 27, 2024


neuhaus avatar neuhaus commented on August 27, 2024

As of now "quantize" only knows how to do 4bit.


gjmulder avatar gjmulder commented on August 27, 2024

122GB.

What would be interesting is to benchmark quality versus memory size, i.e. does, say, an fp16 13B model generate better output than an int4 60GB model?

@apollotsantos are you in Lisboa? I'm in Carcavelos.


neuhaus avatar neuhaus commented on August 27, 2024

I believe I've noticed a significant quality increase going from 7B to 13B and from 13B to 30B (on GPU), and I've just started with 65B, which is a bit slow on my CPU.


apollotsantos avatar apollotsantos commented on August 27, 2024

@gjmulder Actually no, I'm in Brazil.


AGSaidi avatar AGSaidi commented on August 27, 2024

Arm has SMMLA instructions which, on newer Arm targets, should give another 4x over fp16.


MarkSchmidty avatar MarkSchmidty commented on August 27, 2024

Which led me to wonder where the sweet spots are between quantization precision and model size for a given memory footprint.

13B appears to have negligible quality difference at 3-bit.

So you'll want to run 13B-65B in 3-bit to save memory and run faster for effectively the same quality output, once it is implemented.

For 7B, 4-bit is practically always best. If you really want to run it in 4GB of memory, then 3-bit will make it fit (rough numbers below) at a reduced quality, but not so much as to make it unusable, especially with finetuning.
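
Rough weights-only arithmetic for the 4GB case (assumed back-of-the-envelope numbers; the KV cache, per-block scales, and runtime overhead also need room and are ignored here):

```
# Weights-only footprint for 7B; KV cache and runtime overhead are ignored.
N_PARAMS = 7e9
for bits in (4, 3):
    gib = N_PARAMS * bits / 8 / 2**30
    print(f"7B at {bits}-bit: ~{gib:.2f} GiB of weights")
```

So 4-bit weights alone are already around 3.3 GiB, which is why 3-bit is the more comfortable fit on a 4GB device.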

Some interesting use cases for 4GB inference include running at near-native speeds fully in a web browser on any device with WebAssembly, and running on the very popular 4GB Raspberry Pi. :)

