Comments (3)
@LysandreJik
Thank you for the feedback.
This works for me.
```diff
+    environment:
+      CUDA_VISIBLE_DEVICES: "0,1"
 ...
-              device_ids: ['0', '1']
+              count: all
```
I'm closing this ticket.
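For context, the two `+` hunks above land in different places in the compose file; a minimal sketch of the resulting service definition (the service name and image tag are taken from the compose file below, everything else is illustrative) looks like:

```yaml
services:
  inference-chat:
    image: ghcr.io/huggingface/text-generation-inference:2.0
    # Restrict which GPUs the container sees
    environment:
      CUDA_VISIBLE_DEVICES: "0,1"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              # `count: all` replaces the explicit device_ids list
              count: all
              capabilities: [gpu]
```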
from text-generation-inference.
Hey! I've tried reproducing it locally, but unfortunately I can't manage to do so. I don't have your model ID, but since it seems to fail on the Mistral import, I put together the following docker compose file with mistral-community/Mistral-7B-v0.2:
```yaml
version: '3.7'
services:
  inference-chat:
    image: ghcr.io/huggingface/text-generation-inference:2.0
    ports:
      - 8080:80
    volumes:
      - /home/ubuntu/data:/data
    command:
      - "--model-id"
      - "mistral-community/Mistral-7B-v0.2"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']
              capabilities: [gpu]
networks:
  default:
    driver: bridge
```
I see no difference between using raw docker and docker compose, and it doesn't fail at the flash attention step. I'm using an A10G GPU, which is the same generation as your A100.
Could you try with the checkpoint above to see if it fails for you? That would help us identify whether the problem comes from a specific architecture.
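For reference, once a stack like the one above is up, the server can be smoke-tested with a plain HTTP request against TGI's `/generate` endpoint; the prompt and generation parameters below are just placeholders (this assumes the compose file is in the current directory and a compatible NVIDIA GPU is available):

```shell
# Start the stack defined in the compose file
docker compose up -d

# TGI serves a /generate endpoint on the mapped host port (8080 above)
curl -s http://localhost:8080/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "What is deep learning?", "parameters": {"max_new_tokens": 32}}'
```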
Awesome 🙌