Comments (4)
Implementing Send for *mut c_void is a footgun, AFAICT. Well, you can see this commit. The main problem is moving the value from one position to another while you are prompting the llama. For me it caused several Redis errors (I'm using LLama in several fullstack apps). That's why, in my opinion, you should not request this feature.
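For context, the kind of change being debated usually has this shape (an illustrative sketch only; the actual commit may differ):

use std::ffi::c_void;

// Illustrative wrapper, similar in shape to what an FFI binding holds.
pub struct LLamaHandle {
    ctx: *mut c_void,
}

// The contested line: raw pointers are !Send by default, and this unsafe
// impl overrides that without any guarantee from the underlying C library.
unsafe impl Send for LLamaHandle {}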
But! You can solve your problem with this implementation:
use std::sync::mpsc;
use std::thread;

use llama_cpp_rs::{LLama, options::{ModelOptions, PredictOptions}};

// Spawn one dedicated OS thread that owns the LLama instance and serves
// prompts arriving over a std::sync::mpsc channel.
pub fn start_llama_thread(
    llama_channel_rx: mpsc::Receiver<(String, mpsc::Sender<String>)>,
) {
    thread::spawn(move || {
        // Created once, inside the thread: the model never crosses a
        // thread boundary, so it does not need to be Send.
        let llama = LLama::new(
            "../models/zephyr-7b-beta.gguf".into(),
            &ModelOptions {
                context_size: 2048,
                ..Default::default()
            },
        )
        .unwrap();

        // Serve prompts until every sender has been dropped.
        while let Ok((user_msg, response_tx)) = llama_channel_rx.recv() {
            let predict_options = PredictOptions {
                threads: 14,
                temperature: 0.7,
                penalty: 1.1,
                ..Default::default()
            };
            match llama.predict(
                format!("<|system|>\n</s>\n<|user|>\n{}</s>\n<|assistant|>", user_msg),
                predict_options,
            ) {
                Ok(result) => {
                    if let Err(e) = response_tx.send(result) {
                        log::warn!("{}", e);
                    }
                }
                Err(e) => log::warn!("{}", e),
            }
        }
    });
}
and, on the caller side:
let (response_tx, response_rx) = mpsc::channel::<String>();
if llama_channel_tx
    .send(("This is my own request to Zephyr! Hello! How are you?".into(), response_tx))
    .is_err()
{
    return Err("Cannot ask the model.".into());
}
log::info!("Sent a request to Zephyr-7b-β model.");
match response_rx.recv() {
    Err(_) => Err("Cannot receive answer from model.".into()),
    Ok(result) => Ok(result),
}
I think with the code you provided, llama_channel_rx.recv() should be llama_channel_rx.recv().await, which causes the issues with Send. The other thing I was looking into (haven't gotten it to work yet) is tokio::task::LocalSet, which lets you run !Send things. I'm still unsure whether that's the right move, because the main thing I'm trying to avoid is the startup time of LLama::new(): I just want to instantiate it once on boot and then call it throughout the run of my program over an mpsc channel.
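For reference, a minimal sketch of the LocalSet approach, using Rc as a stand-in for a !Send value such as the model handle (the current-thread runtime setup is an assumption, not something rust-llama.cpp prescribes):

use std::rc::Rc;
use tokio::task::LocalSet;

fn main() {
    // A current-thread runtime keeps every LocalSet task on this one thread.
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap();
    let local = LocalSet::new();
    local.block_on(&rt, async {
        // Rc is !Send; it stands in here for a !Send model handle.
        let model = Rc::new("pretend this is the LLama instance");
        tokio::task::spawn_local(async move {
            println!("using {model} inside the LocalSet");
        })
        .await
        .unwrap();
    });
}

Note that spawn_local tasks still interleave at .await points on the same thread, which is exactly the async context switching the next comment warns about.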
Edit: I tried the commit with the unsafe impl of Send and it works! I'm starting to think that in my specific context I won't run into any issues with the unsafe-ness. Since every prompt to the llama is passed in via an mpsc channel, the single consumer guarantees that all prompts are processed serially, which is what we want. Am I right about that?
Edit 2: every couple of queries I get segfaults and other memory errors like this: Incorrect checksum for freed object 0x7ff00f505bf0: probably modified after being freed. Corrupt value: 0x6e61206e616d2061
"I think with the code you provided, llama_channel_rx.recv() should be llama_channel_rx.recv().await"

It should not, because you have to use std::sync::mpsc, not tokio::sync::mpsc. That's why the example uses std::thread::spawn, not tokio::task::spawn.
UPD: Once more: "LLama is not Send" means "LLama must run on a single thread, with no async context switches, i.e. no async code at all". On a plain std thread, Rust's guarantees make this safe.
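To make that concrete, here is a sketch of how an async caller could talk to the dedicated thread without LLama ever becoming Send: keep the std channel, and push only the blocking recv() onto tokio's blocking pool. The ask_model helper and channel shapes are my assumptions, not part of rust-llama.cpp:

use std::sync::mpsc;

// Hypothetical async-side helper; llama_channel_tx is the std sender that
// feeds the dedicated thread from start_llama_thread above.
async fn ask_model(
    llama_channel_tx: mpsc::Sender<(String, mpsc::Sender<String>)>,
    prompt: String,
) -> Result<String, String> {
    let (response_tx, response_rx) = mpsc::channel::<String>();
    // send() on a std channel never blocks, so it is fine in async code.
    llama_channel_tx
        .send((prompt, response_tx))
        .map_err(|_| "Cannot ask the model.".to_string())?;
    // recv() does block, so hand it to tokio's blocking pool instead of
    // stalling an async worker thread.
    tokio::task::spawn_blocking(move || response_rx.recv())
        .await
        .map_err(|_| "Blocking task panicked.".to_string())?
        .map_err(|_| "Cannot receive answer from model.".to_string())
}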
ahh my bad. This is working for me now, thanks!