Comments (3)
The current ghcr.io/bentoml/openllm:latest image (sha256:1860863091163a8e8cb1225c99d6e1b0735c11871e14e8d8424a22a5ad6742fa) shows an error:
ValueError: The checkpoint you are trying to load has a model type of `cohere`, which Transformers does not recognize. This may be due to a problem with the checkpoint or an outdated version of Transformers.
when doing this:
docker run --rm --gpus all -p 3000:3000 -it ghcr.io/bentoml/openllm start CohereForAI/c4ai-command-r-v01 --backend vllm
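For context, Transformers only learned the `cohere` model type in a fairly recent release (4.39.0, to the best of my knowledge, so treat that cutoff as an assumption rather than something confirmed in this thread). Any image that pins an older Transformers will fail with exactly this `ValueError`. A minimal sketch of the version threshold:

```python
def supports_cohere(transformers_version: str) -> bool:
    """Return True if the given Transformers version is new enough to
    recognize the `cohere` model type (assumed added in 4.39.0)."""
    major, minor = (int(x) for x in transformers_version.split(".")[:2])
    return (major, minor) >= (4, 39)

print(supports_cohere("4.38.2"))  # False: raises the ValueError above
print(supports_cohere("4.39.0"))  # True
```

So the fix is simply to rebuild the image against a Transformers release at or above that threshold.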
Also, installing openllm[vllm] pulls in vLLM 0.2.7, even though the vLLM version pinned on the main branch is 0.4.0:
https://github.com/bentoml/OpenLLM/blob/main/openllm-core/pyproject.toml#L83 and https://github.com/bentoml/OpenLLM/blob/main/tools/dependencies.py#L157
from openllm.
I think this should use the same prompting system. CohereForAI/c4ai-command-r-plus is also available, and it would be nice to be able to run it too.
from openllm.
Should be supported on main now. Will release a new version soon.
from openllm.
Related Issues (20)
- bug: start chatglm-6b locally err
- I'm having trouble getting started with openllm, but I don't want to use conda and I have WSL2 HOT 1
- feat: support volta architecture GPUs for the vLLM backend
- Deploying LLM in On-Premises Server to Assist Users to Launch Locally in Work Laptop - Web Browser HOT 3
- Deprecation Warning for PyTorch Backend HOT 2
- FileNotFoundError: [Errno 2] No such file or directory: b'/root/bentoml/models/pt-google--gemma-7b-it/latest'
- feat: support Qwen1.5 HOT 1
- feat: any plan to support NPU HOT 1
- bug: An exception occurred while instantiating runner 'llm-mistral-runner' HOT 2
- bug: Not enough data for satisfy transfer length header HOT 1
- feat: Can you support llama3? HOT 3
- bug: WARNING: openllm 0.4.44 does not provide the extra 'gemma' HOT 1
- feat: support LMDeploy backend HOT 3
- bug: error coming up while install the vllm using pip install "openllm[vllm]"
- For AMD/GPU, how to use multi GPUS in the api_server.py HOT 1
- bug: pip package version issues
- feat: Multimodal LLMs?
- feat: support enforce_eager option from cli
- bug: Cannot Run an OpenLLM server regardless of where I try to get it from or what model I use HOT 6