Comments (5)
Glad you got it working! We'll be adding some better docs soon to make these parameters easier to find. Closing this issue for now.
Do you measure before or after warmup?
During startup the KV cache gets reserved; with a quantized model there is more memory left over for the cache, but the total used memory is the same.
You could limit the available memory with a launcher argument.
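An easy way to see when that reservation happens is to watch the GPU while the server starts up; this is plain nvidia-smi, nothing lorax-specific:

# Poll GPU memory once per second; memory.used jumps when the cache is reserved during startup/warmup.
nvidia-smi --query-gpu=timestamp,memory.used,memory.total --format=csv -l 1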
Hey @prd-tuong-nguyen, as @flozi00 said this is likely due to the warmup phase, where we allocate additional memory in advance for batching to avoid having to allocate it on the fly during inference.
For example, here's the memory usage reported by nvidia-smi when running with nf4 quantization using mistral-7b before warmup:
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-PCIE-40GB          On  | 00000000:11:00.0 Off |                    0 |
| N/A   27C    P0              54W / 250W |   5011MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
And here are the results after warmup:
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03             Driver Version: 535.129.03   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA A100-PCIE-40GB          On  | 00000000:11:00.0 Off |                    0 |
| N/A   27C    P0              54W / 250W |  38911MiB / 40960MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
As you can see, lorax will use as much memory as it can get away with in order to maximize the batch size.
@flozi00, @tgaddair Oh, thanks for your fast reply. I can see that memory usage before warmup is lower than after it.
Can I reduce the additional memory reserved for batching, so that I can serve multiple instances on the same GPU?
Resolved by setting cuda_memory_fraction, thank you <3
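For anyone who lands here later, a rough sketch of what that looks like when splitting one GPU between two lorax containers (the image tag, model, quantization, and ports below are placeholders based on this thread; the relevant part is the cuda_memory_fraction launcher flag):

# Cap this lorax server at roughly 45% of the GPU's memory so a second instance fits alongside it.
# Image tag, model, quantization and port are placeholders; --cuda-memory-fraction is the key flag.
docker run --gpus all --shm-size 1g -p 8080:80 ghcr.io/predibase/lorax:main \
  --model-id mistralai/Mistral-7B-Instruct-v0.1 \
  --quantize bitsandbytes-nf4 \
  --cuda-memory-fraction 0.45
# Then start a second container the same way on another host port (e.g. -p 8081:80).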
Related Issues (20)
- Combining multiple LoRA adapters HOT 1
- 10s latency of lora inference caused by None base_model_name_or_path in adapter_config
- [Question] Usage about the `adapter-memory-fraction` HOT 1
- Improve the latency of `load_batched_adapter_weights` HOT 1
- Fix PyTorch CUDA version in Docker
- Idefics2 and LLaVA
- Fallback to Flash Attention v1 for pre-Ampere GPUs HOT 1
- Private LORA Adapter Error - Server error: No valid adapter config file found: tried None and None HOT 1
- Llama3-8b-Instruct won't stop generating
- Speculative tokens fails during warmup in some scenarios HOT 1
- Batch inference endpoint (OpenAI compatible)
- Add HF authentication instructions to lorax-launcher docs HOT 6
- Improve async load for adapters to avoid main thread lockups in server
- Retrieve all lora models from Huggingface hub by base model setting.
- Add all launcher args as optional in the Helm charts
- AutoTokenzier.from_pretrains needs setting with `trust_remote_code` inside `load_module_map` HOT 2
- Ensure api_token is not included in the response on error HOT 3
- [QUESTION] How to change HuggingFace model download Path in Lorax When deployed to Kubernetes through HelmChart HOT 1
- Bug Report: lorax-launcher failed with --source "s3" for model_id "mistralai/Mistral-7B-Instruct-v0.2"
- Improve warmup checking for max new tokens when using speculative decoding