Comments (7)
This is great and the inference e2e integration looks like a good candidate for addition to https://github.com/pytorch/ao . Let us know if you are interested in contributing!
As far as an fp6 dtype in PT core goes, check out https://dev-discuss.pytorch.org/t/supporting-new-dtypes-in-pytorch/1833 for the current thinking on adding new dtypes. We do expect fp6 to get silicon support in the future, so it would be a good candidate to add when that silicon support is closer. We don't actually need an fp6 dtype in core to enable w6a16 as implemented in the code linked to this issue.
Keeping this open because we still need to do the subclass work and the end-to-end integration.
Tracker:
- FP16 act - FP6 weight linear CUDA kernel (#223)
- Improve FP32/FP16/BF16 <-> FP6 conversion (with CUDA support) (#248)
- Improve weight splitting (with CUDA support) (#279)
- User-friendly API (either Tensor subclass or FP6Linear module) (#279 #283)
- End2end benchmark
- Remove unnecessary code (e.g. `weight_quant.cu`, `weight_prepacking.cpp`)
Just to update people here on the progress: we have added a user API for FP6-LLM:
```python
from torchao.quantization.fp6_llm import convert_fp6_llm

convert_fp6_llm(model)  # convert model in-place, replacing nn.Linear modules with Fp6LlmLinear
```
Everything should work (in eager mode). Some local end2end testing by me and @Iron-Bound shows that it works as expected. We will probably close this issue once we have an LLM eval in this repo for uniform evaluation across quantization methods (there is also a small difference in how we handle FP16->FP6 quantization compared to the released code, so I want to make sure this difference is not significant).
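For a quick local sanity check, here is a minimal eager-mode sketch of the API above; the toy model and shapes are made up for illustration, and a CUDA device is assumed:

```python
import torch
import torch.nn as nn
from torchao.quantization.fp6_llm import convert_fp6_llm

# Hypothetical toy model; any model with nn.Linear layers should work the same way.
model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).half().cuda()

convert_fp6_llm(model)  # in-place: nn.Linear modules are replaced with Fp6LlmLinear
print(model)            # the printout should show Fp6LlmLinear modules

x = torch.randn(1, 1024, dtype=torch.half, device="cuda")
with torch.no_grad():
    y = model(x)        # eager-mode forward through the FP6 kernel
```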
Some known limitations:
- The kernel is for FP16 activation - FP6_E3M2 weight. If your model is BF16, it should still work, but you will spend some small overhead converting BF16 <-> FP16. (perhaps we can implement a BF16 version in a future PR? not sure how much work is required - only need to change weight dequant logic and call the correct tensor core instruction?)
- When tested with gpt-fast, torch.compile does not work for an FP6-LLM end2end model (it does work for small test cases though - we have CI for that). Need to debug this.
  - UPDATE: adding `torch._inductor.config.triton.cudagraph_trees = False` fixes the issue (see the sketch after this list).
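A minimal sketch of that workaround, assuming an already-converted FP6-LLM model (the toy model below is made up; only the config flag comes from the update above):

```python
import torch
import torch._inductor.config

# Workaround from the update above: disable CUDA graph trees in Inductor
# before compiling the FP6-LLM end2end model.
torch._inductor.config.triton.cudagraph_trees = False

# Hypothetical stand-in for the converted model.
model = torch.nn.Sequential(torch.nn.Linear(64, 64)).half().cuda()

compiled = torch.compile(model)
out = compiled(torch.randn(1, 64, dtype=torch.half, device="cuda"))
```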
Data from gpt-fast for `meta-llama/Llama-2-7b-chat-hf` on a 4080:
name | tokens/s |
---|---|
BF16 baseline (w/ compile) | 49.15 |
FP6-LLM (no compile) | 82.55 |
int8 (w/ compile) | 91.12 |
hellaswag eval (from https://github.com/EleutherAI/lm-evaluation-harness) for `meta-llama/Llama-2-7b-chat-hf` (credits to @Iron-Bound):
name | acc_norm |
---|---|
baseline | 75.50 |
FP6-LLM | 75.36 |
Seconded!
Just a nit on "User-friendly API (either Tensor subclass or FP6Linear module)". You can implement an FP6Linear module using a Tensor subclass-based fp6 dtype: just call `self.weight = nn.Parameter(to_fp6(self.weight))` within the `__init__` of your `nn.Linear` replacement. The FP6Linear module then is one way of injecting that code into the model. It seems like a very popular way of doing that, so it's reasonable to provide as a primitive. Pretty much I'm only pointing out that you don't duplicate work by doing both :) You can then also make it easier for people to add coverage as shown in our toy example (lines 24 to 36 at commit ad12663).
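To make the suggested pattern concrete, here is a rough sketch (not the actual ao implementation); `to_fp6` is assumed to be the fp6 subclass constructor referenced above and is not defined here:

```python
import torch
import torch.nn as nn

# NOTE: `to_fp6` stands for the fp6 tensor-subclass constructor mentioned above;
# it is an assumption here, so treat this as pseudocode wired to that assumption.
class FP6Linear(nn.Linear):
    def __init__(self, in_features, out_features, bias=True):
        super().__init__(in_features, out_features, bias=bias)
        # Wrap the existing weight in the fp6 subclass; F.linear on this module
        # then routes through the subclass's dispatch to the FP6 kernel.
        self.weight = nn.Parameter(to_fp6(self.weight), requires_grad=False)
```

The module itself is then just one way of injecting the subclass into a model, e.g. by swapping `nn.Linear` instances for `FP6Linear` during model surgery.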
Thank you for your feedback. Those are just a few suggested approaches discussed with @msaroufim; we haven't decided on the final API for FP6 yet.
Of course, if we have an FP6 subclass, we don't need FP6Linear anymore. But implementing a subclass is harder, and almost all ops, except F.linear, do not make sense for FP6. This is because in FP6-LLM, the weight is split and re-arranged in a certain way to optimize global memory access for tensor cores.
I tried implementing an FP6 subclass in #223 (and removed it in the end). Even implementing dispatch for aten.linear feels finicky because it seems PyTorch will dispatch aten.mm (or aten.addmm) instead, so I have to store a transposed flag, then set and check it correctly before calling the FP6-linear kernel (the CUDA kernel only works with A @ W.T, i.e. a Linear layer). To support other ops, we would need to (1) re-arrange the weight in natural order and (2) dequantize to FP32/FP16/BF16 (and reverse it back to FP6). It would be too expensive.
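To illustrate that dispatch behaviour, here is a small self-contained check (using `TorchDispatchMode` rather than a full subclass) that logs which aten ops `F.linear` actually reaches; on typical builds it should print `aten.t` / `aten.addmm` rather than `aten.linear`, which is why the subclass has to track a transposed flag itself:

```python
import torch
import torch.nn.functional as F
from torch.utils._python_dispatch import TorchDispatchMode

class LogAtenOps(TorchDispatchMode):
    """Print every aten op that reaches the Python dispatch layer."""
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        print(func)  # e.g. aten.t.default, aten.addmm.default
        return func(*args, **(kwargs or {}))

with LogAtenOps():
    x = torch.randn(2, 4)
    w = torch.randn(8, 4)
    b = torch.randn(8)
    F.linear(x, w, b)  # typically decomposes to aten.t + aten.addmm here
```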
So I think implementing a custom FP6Linear layer would be easier, since we don't need to guarantee anything about the weight, i.e. the weight itself is an internal implementation detail.
Just some of my thoughts from working on this. Once #248 is merged, I will work on adapting the weight splitting logic (currently it's a CPU-only C++ extension). Note that the original code does not have weight un-splitting logic.