Comments (4)
I noticed this behaviour too while working with LocalAI. I also tried setting the prompt_cache_all option to false, but this didn't change anything. Setting temperature, top_k, top_p, and seed to different values doesn't change anything either. The strangest part is that it keeps generating the same output even after recreating the whole container; only the ID is different.
I also tried modifying the prompt slightly, for example by adding an "!" at the end, but even then the answer is identical.
I'm using localai/localai:v2.9.0-cublas-cuda12
While I'm having a look at this: the options seem to be passed just fine up to the gRPC server and llama.cpp.
However, even when I set slot.params.seed = time(NULL); right before we configure the slot ready to be processed (LocalAI/backend/cpp/llama/grpc-server.cpp, line 879 in dc919e0), the output still doesn't change.
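For reference, a minimal standalone sketch (not the LocalAI code; candidate weights are made up) of what that reseeding experiment should produce: with a fresh time-based seed per request, the sampled pick varies between runs as long as there is more than one candidate to choose from:

#include <cstdint>
#include <cstdio>
#include <ctime>
#include <iterator>
#include <random>

int main() {
    const double weights[] = {0.6, 0.3, 0.1};  // three hypothetical candidate tokens
    for (int request = 0; request < 5; ++request) {
        // Fresh seed per "request", offset so back-to-back calls differ
        // even within the same second, mimicking seed = time(NULL).
        std::mt19937 rng(static_cast<std::uint32_t>(time(nullptr)) + request);
        std::discrete_distribution<int> pick(std::begin(weights), std::end(weights));
        std::printf("request %d -> candidate %d\n", request, pick(rng));
    }
    return 0;
}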
Also to note, that's the full JSON data printed out (and it indeed looks like seed, top_k, and top_p are set accordingly):
12:36PM DBG GRPC(c0c3c83d0ec33ffe925657a56b06771b-127.0.0.1:40341): stdout {"cache_prompt":false,"frequency_penalty":0.0,"grammar":"","ignore_eos":false,"mirostat":0,"mirostat_eta":0.0,"mirostat_tau":0.0,"n_keep":0,"n_predict":-1,"penalize_nl":false,"presence_penalty":0.0,"prompt":"Instruct: tell me a story about llamas\nOutput:\n","repeat_last_n":0,"repeat_penalty":0.0,"seed":-1,"stop":[],"stream":false,"temperature":0.20000000298023224,"tfs_z":0.0,"top_k":40,"top_p":0.949999988079071,"typical_p":0.0}
So it definitely looks like something is off in the llama.cpp implementation, as ours is just a gRPC wrapper on top of the HTTP server example (with a few edits to avoid bugs like #1333).
OK, tracing seed seems to have been just a red herring that dragged me in the wrong direction. Tracing back its usage, it looks like the real cause is that the sampler doesn't have enough candidates to select tokens from.
I've tried switching the sampler with phi-2 and finally got a non-deterministic result. It looks to me like it very much depends on the model/sampler strategy: mirostat keeps more candidates, while the temperature sampler has fewer to select from (so it is more deterministic).
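To illustrate the point, here is a minimal standalone sketch (not llama.cpp's sampler; logit values are made up) of how temperature plus top-p truncation can collapse the candidate pool: at a low temperature a peaked distribution often leaves a single candidate, so the seed no longer matters, while at temperature 1.0 several candidates survive:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// Count how many candidates survive softmax(logits / temp) + a top-p cutoff.
static int surviving_candidates(std::vector<float> logits, float temp, float top_p) {
    for (float &l : logits) l /= temp;
    float max_l = *std::max_element(logits.begin(), logits.end());
    std::vector<float> probs(logits.size());
    float sum = 0.0f;
    for (std::size_t i = 0; i < logits.size(); ++i) {
        probs[i] = std::exp(logits[i] - max_l);
        sum += probs[i];
    }
    for (float &p : probs) p /= sum;
    std::sort(probs.rbegin(), probs.rend());  // sort descending
    float cum = 0.0f;
    int kept = 0;
    for (float p : probs) {
        cum += p;
        ++kept;
        if (cum >= top_p) break;  // top-p keeps the smallest set reaching the mass
    }
    return kept;
}

int main() {
    std::vector<float> logits = {8.0f, 6.5f, 6.0f, 5.5f, 5.0f};  // peaked head of a distribution
    std::printf("temp=0.2, top_p=0.95 -> %d candidate(s)\n", surviving_candidates(logits, 0.2f, 0.95f));
    std::printf("temp=1.0, top_p=0.95 -> %d candidate(s)\n", surviving_candidates(logits, 1.0f, 0.95f));
    return 0;
}

With the low-temperature settings from the first comment, only a single candidate survives the cutoff, which would match the identical outputs observed regardless of seed.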
E.g. with phi-2:
name: phi-2
context_size: 2048
f16: true
gpu_layers: 90
mmap: true
trimsuffix:
- "\n"
parameters:
  model: huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
  temperature: 1.0
  top_k: 40
  top_p: 0.95
  mirostat: 2
  mirostat_eta: 1.0
  mirostat_tau: 1.0
  seed: -1
template:
  chat: &template |
    Instruct: {{.Input}}
    Output:
  completion: *template
usage: |
  To use this model, interact with the API (in another terminal) with curl for instance:
  curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
    "model": "phi-2",
    "messages": [{"role": "user", "content": "How are you doing?", "temperature": 0.1}]
  }'
There were inconsistencies with the docs; the samples were also updated in #1820. If the issue persists, feel free to re-open.