64bit / async-openai
Rust library for OpenAI
Home Page: https://docs.rs/async-openai
License: MIT License
I'm trying to cache OpenAI responses so that my tests will produce deterministic results, but the response types defined in this library don't implement Serialize, so I have to maintain a copy of them which does.
Would you be open to adding Serialize to all of the response types? This obviously increases the amount of generated code, but it makes consuming the data in cases like mine much easier.
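A minimal, stdlib-only sketch of the caching pattern this would enable. CachedResponse and its hand-rolled to_json are stand-ins for the library's response types and a serde Serialize derive, which is what the issue is asking for:

```rust
use std::collections::HashMap;

// Stand-in for a library response type; with `#[derive(Serialize)]`
// the hand-written `to_json` below would come from serde instead.
#[derive(Clone, Debug, PartialEq)]
struct CachedResponse {
    id: String,
    content: String,
}

impl CachedResponse {
    fn to_json(&self) -> String {
        format!(r#"{{"id":"{}","content":"{}"}}"#, self.id, self.content)
    }
}

// Cache keyed by prompt: tests hit the cache instead of the API,
// so repeated runs produce deterministic results.
struct ResponseCache {
    entries: HashMap<String, String>,
}

impl ResponseCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    fn get_or_insert_with<F>(&mut self, prompt: &str, fetch: F) -> String
    where
        F: FnOnce() -> CachedResponse,
    {
        self.entries
            .entry(prompt.to_string())
            .or_insert_with(|| fetch().to_json())
            .clone()
    }
}
```

The second lookup never invokes fetch, which is exactly the determinism the issue wants from serializable responses.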
Errors like the following occur in a non-reproducible way:
Error: failed to deserialize api response: expected value at line 1 column 1
For more context, these occur when the responses are awaited.
Just like examples/function-call, but using a streaming call.
Note: there is an open PR for this at the moment, but no way to test it: #83
GPT-4 Turbo supports multiple function calls, but I couldn't find any examples of this in async-openai yet. It took me a little while to work out how to do this, since it diverges a bit from single function calls, so maybe this minimal demonstration will be helpful to someone else:
let request = CreateChatCompletionRequestArgs::default()
    .model("gpt-4-1106-preview")
    .messages([
        ChatCompletionRequestUserMessageArgs::default()
            .content(prompt)
            .build()?
            .into(),
        ChatCompletionRequestSystemMessageArgs::default()
            .content(SYSTEM_PROMPT)
            .build()?
            .into(),
    ])
    .max_tokens(512u16)
    .stream(false)
    .tools([
        ChatCompletionToolArgs::default()
            .r#type(ChatCompletionToolType::Function)
            .function(
                // ... build ChatCompletionFunctions here ...
            )
            .build()?,
        // ...
    ])
    .build()?;

if let Some(tool_calls) = response_message.tool_calls {
    tool_calls
        .into_iter()
        .map(|tool_call| /* ... */)
        .collect()?;
}
In OpenAIConfig and AzureConfig, handle the API key via https://docs.rs/secrecy/latest/secrecy/
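What the secrecy crate buys here is that the key can be used on demand but can never leak through Debug or Display output. A stdlib-only sketch of the same idea, with ApiKey as a hypothetical newtype (the real crate provides Secret&lt;String&gt; and the ExposeSecret trait):

```rust
use std::fmt;

// Hypothetical newtype sketching what `secrecy::Secret<String>` provides:
// the key is accessible on demand but never leaks through Debug output.
struct ApiKey(String);

impl ApiKey {
    fn new(key: impl Into<String>) -> Self {
        ApiKey(key.into())
    }

    // Analogous to secrecy's `ExposeSecret::expose_secret`:
    // access is explicit and greppable.
    fn expose(&self) -> &str {
        &self.0
    }
}

impl fmt::Debug for ApiKey {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("ApiKey([REDACTED])")
    }
}
```

A config struct holding an ApiKey could then derive Debug safely, since accidental `{:?}` logging prints the redacted form.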
Hey!
I'm currently using this library to generate embeddings, but I'm facing an issue when generating them in batches with a batch size of 25:
let mut req_count = 0;
for chunk in descriptions_for_embeddings.chunks(25) {
    req_count += 1;
    info!("Request: {}", req_count);
    let embeddings_res = OPENAI_CLIENT
        .embeddings()
        .create(CreateEmbeddingRequest {
            model: "text-embedding-ada-002".to_string(),
            input: EmbeddingInput::StringArray(Vec::from(chunk.clone())),
            user: Some(ctx.task_owner.to_string()),
        })
        .await?;
    embeddings.extend(embeddings_res.data);
}
But after a few requests, anywhere between 12 and 16, this just fails for some reason. I'm definitely not hitting any rate limits either, so I'm not sure what is causing this.
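Whatever the root cause turns out to be, wrapping each batch in retry-with-backoff usually makes loops like the one above robust against transient failures. A stdlib-only sketch, where the fetch closure stands in for the embeddings call (real code would also inspect the error and only retry on transient failures such as 429s):

```rust
use std::thread::sleep;
use std::time::Duration;

// Retry a fallible call with exponential backoff. `fetch` stands in
// for one batched embeddings request.
fn retry_with_backoff<T, E, F>(mut fetch: F, max_attempts: u32) -> Result<T, E>
where
    F: FnMut() -> Result<T, E>,
{
    let mut delay = Duration::from_millis(100);
    let mut attempt = 1;
    loop {
        match fetch() {
            Ok(v) => return Ok(v),
            Err(e) if attempt >= max_attempts => return Err(e),
            Err(_) => {
                sleep(delay);
                delay *= 2; // 100ms, 200ms, 400ms, ...
                attempt += 1;
            }
        }
    }
}
```

In an async context the sleep would be tokio's, but the control flow is the same.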
Please add a Deserialize derive to OpenAIConfig and AzureConfig. I store my Azure configs in a JSON file for testing. Without Deserialize, I need to copy the same struct, add Deserialize to it, and convert it to AzureConfig, which is unnecessarily verbose.
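The verbose workaround described above looks roughly like the following. This is a stdlib-only sketch: AzureConfigFile and its field names are illustrative, and the hand-rolled parse stands in for serde_json::from_str, which a Deserialize derive on the real config would make unnecessary:

```rust
// Illustrative mirror of an Azure-style config; field names are
// assumptions. `parse` stands in for `serde_json::from_str` -- the
// boilerplate that a `#[derive(Deserialize)]` on the real type removes.
#[derive(Debug, PartialEq)]
struct AzureConfigFile {
    api_base: String,
    deployment_id: String,
}

impl AzureConfigFile {
    // Parse one "key=value" pair per line; real code would use serde_json.
    fn parse(text: &str) -> Option<Self> {
        let mut api_base = None;
        let mut deployment_id = None;
        for line in text.lines() {
            match line.split_once('=') {
                Some(("api_base", v)) => api_base = Some(v.to_string()),
                Some(("deployment_id", v)) => deployment_id = Some(v.to_string()),
                _ => {}
            }
        }
        Some(Self {
            api_base: api_base?,
            deployment_id: deployment_id?,
        })
    }
}
```

After parsing, a From impl would still be needed to convert this mirror into the real AzureConfig, which is exactly the duplication the issue wants to avoid.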
I am trying to understand how to upload a file with contents from memory and can't figure out how. It appears that file input only comes from a file on disk? Do I have to save my bytes to the file system just to read them back?
My overall goal is to upload a file my program receives from an incoming network request to an assistant thread.
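Until in-memory input is supported, one workaround is indeed to spill the bytes to a temporary file and hand the library that path. A stdlib-only sketch (the file name is the caller's choice; real code should pick a collision-free name):

```rust
use std::env;
use std::fs;
use std::io;
use std::path::PathBuf;

// Write in-memory bytes to a file in the OS temp directory and return
// the path, which can then be passed to a file-based upload API.
fn spill_to_temp_file(name: &str, bytes: &[u8]) -> io::Result<PathBuf> {
    let path = env::temp_dir().join(name);
    fs::write(&path, bytes)?;
    Ok(path)
}
```

The caller is responsible for deleting the file after the upload completes.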
Creating an embedding requires an API key, but the example gives no direction as to how to provide one.
This can be resolved by including .with_api_key("insert api key")
Hi,
I've been experimenting with a few libraries in different languages. I've got this working correctly, but the streamed response, even in your own examples, seems to stream by newlines rather than by token. This means the time until first output is very long, introducing a large delay and a much less responsive overall feel.
Your own example has this clearly shown: (see https://raw.githubusercontent.com/64bit/async-openai/assets/completions-stream/output.svg)
Correctly implemented, the streaming should function in the same way as ChatGPT's web API. If you would like a reference on a library that implements this correctly, you can find this here: https://github.com/OkGoDoIt/OpenAI-API-dotnet#streaming
I hope you can fix this, because I'd prefer to be using rust for this project than C#!
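For context on what arrives over the wire: OpenAI streaming responses are Server-Sent Events, where each token delta is its own `data: {...}` line terminated by `data: [DONE]`, so a client that flushes per event (rather than buffering up to a newline of output text) can show tokens as soon as they arrive. A stdlib-only sketch of splitting a chunk into per-event payloads (the delta payloads shown in the test are illustrative, not real API output):

```rust
// Split an SSE chunk into the payloads of its `data:` events,
// stopping at the `[DONE]` sentinel that ends an OpenAI stream.
fn sse_events(chunk: &str) -> Vec<&str> {
    chunk
        .lines()
        .filter_map(|line| line.strip_prefix("data: "))
        .take_while(|payload| *payload != "[DONE]")
        .collect()
}
```

Each returned payload is one JSON delta; rendering (and flushing) per payload gives the token-by-token feel of the ChatGPT web UI.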
Thank you for this excellent project!
I want to ask whether it supports conversation context? Looking through the code, that seems not to be supported, or maybe the API doesn't support it.
async-openai's existing types for the Chat stream were written from observation of OpenAI responses.
Recently (16th June 2023), official types were added for it, and they need to be used in async-openai as well.
Ref: https://github.com/openai/openai-openapi/pull/48/files
async-openai has used the type names from the spec for the majority of types, hence the ChatCompletionStreamResponseDelta and CreateChatCompletionStreamResponse names should be used.
Here's one example that only gives a JSONDeserialize error.
{
  "error": {
    "code": 429,
    "message": "Requests to the Creates a completion for the chat message Operation under Azure OpenAI API version 2023-05-15 have exceeded token rate limit of your current OpenAI S0 pricing tier. Please retry after 60 seconds. Please go here: https://aka.ms/oai/quotaincrease if you would like to further increase the default rate limit."
  }
}
JSONDeserialize(Error("missing field `type`"))
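The failure mode above: the Azure error body carries code and message but no type field, which (per the "missing field `type`" message) the crate's error type apparently requires, so deserialization fails and the useful 429 message is lost. A lenient fallback keeps the raw body when the shape doesn't match. Stdlib-only sketch, with a substring check standing in for serde:

```rust
// Sketch of lenient error handling: if the body doesn't look like the
// OpenAI error shape (approximated here by checking for a "type" key),
// keep the raw body instead of failing with a deserialize error.
#[derive(Debug, PartialEq)]
enum ApiFailure {
    Structured(String), // would be the parsed ApiError in real code
    Raw(String),        // body preserved verbatim for the caller
}

fn classify_error_body(body: &str) -> ApiFailure {
    if body.contains("\"type\"") {
        ApiFailure::Structured(body.to_string())
    } else {
        ApiFailure::Raw(body.to_string())
    }
}
```

A real fix would attempt the structured parse first and fall back to the raw body on error, so Azure-shaped errors still surface their message.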
Thanks for making this crate, it seems very useful :)
Currently the crate unconditionally depends on e.g. tokio, which means it can't be compiled to wasm for use in frontends (or serverless wasm workers like those on AWS/Cloudflare) that want to make OpenAI API requests.
It would be great if this crate could also be compiled to wasm.
Hello! I'm using your library and I noticed that there's no option to customize the internal HTTP client used for API requests when calling Client::new(). This makes it impossible to route all HTTP requests through a proxy.
I was wondering if it would be possible to add a customization option for the internal reqwest client, similar to what the chatgpt-api crate offers. This would allow for greater flexibility and more use cases for this library.
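One hypothetical shape for the requested API: a constructor that accepts a caller-built HTTP client. Stdlib-only sketch where the HttpClient trait stands in for reqwest::Client, and with_http_client is an assumed name, not the crate's actual API:

```rust
// `HttpClient` stands in for `reqwest::Client`; `with_http_client`
// is a hypothetical constructor, not async-openai's real API.
trait HttpClient {
    fn get(&self, url: &str) -> String;
}

struct DefaultClient;
impl HttpClient for DefaultClient {
    fn get(&self, url: &str) -> String {
        format!("default:{url}")
    }
}

// A caller-configured client, e.g. one routing through a proxy.
struct ProxiedClient {
    proxy: String,
}
impl HttpClient for ProxiedClient {
    fn get(&self, url: &str) -> String {
        format!("via {}:{url}", self.proxy)
    }
}

struct Client<H: HttpClient> {
    http: H,
}

impl Client<DefaultClient> {
    fn new() -> Self {
        Client { http: DefaultClient }
    }
}

impl<H: HttpClient> Client<H> {
    // The customization point the issue asks for.
    fn with_http_client(http: H) -> Self {
        Client { http }
    }

    fn request(&self, url: &str) -> String {
        self.http.get(url)
    }
}
```

With reqwest specifically, the caller would build the client once via reqwest::Client::builder().proxy(...) and hand it over, so every request in the library goes through the proxy.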
This Python example works fine for me:
import openai
import os
import json

def get_completion(prompt, model="gpt-3.5-turbo"):
    completion = openai.ChatCompletion.create(
        model=model,
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "what is 1 + 1?"},
        ],
        temperature=0.6,
    )
    return completion.choices[0].message

names = [obj["id"] for obj in openai.Model.list()["data"]]
json_names = json.dumps(names)
# print(openai.Model.list())
# print(json_names)
print(get_completion("What is 1 + 1?"))
But when I run examples/chat, I get this error:
❯ cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.30s
     Running `/Users/davirain/rust/async-openai/examples/target/debug/chat`
Error: ApiError(ApiError { message: "You exceeded your current quota, please check your plan and billing details.", type: "insufficient_quota", param: None, code: None })
Using FineTunes of async-openai with GPT-3.5-Turbo as the base model, the following response is returned:
Some("invalid_request_error"): gpt-3.5-turbo can only be fine-tuned on the new fine-tuning API (/fine_tuning/jobs). This API (/fine-tunes) is being deprecated. Please refer to our documentation for more information: https://platform.openai.com/docs/api-reference/fine-tunin
It seems that the async-openai API implementation needs to be updated before it can be used.
CreateFineTuneRequestArgs::default()
    .model("gpt-3.5-turbo") // https://docs.rs/async-openai/0.14.3/async_openai/types/struct.CreateFineTuneRequest.html#structfield.model
    .build()
and then request with the client: client.fine_tunes().create(fine_tune_request).await
Please see the documentation.
Form submissions are currently not retried when rate limited, because the Form object used is not clonable.
I recently tried to upgrade my application to the latest version in order to make use of some of the new features announced at Dev Day. I have a Telegram bot that uses teloxide along with the image and transcription endpoints. However, I ran into this error while compiling:
the trait bound `fn(teloxide::Bot, teloxide::prelude::Message) -> impl futures_util::Future<Output = Result<(), RequestError>> {respond_to_voice}: Injectable<_, _, _>` is not satisfied
the following other types implement trait `Injectable<Input, Output, FnArgs>`:
<Asyncify<Func> as Injectable<Input, Output, ()>>
<Asyncify<Func> as Injectable<Input, Output, (A, B)>>
<Asyncify<Func> as Injectable<Input, Output, (A, B, C)>>
<Asyncify<Func> as Injectable<Input, Output, (A, B, C, D)>>
<Asyncify<Func> as Injectable<Input, Output, (A, B, C, D, E)>>
<Asyncify<Func> as Injectable<Input, Output, (A, B, C, D, E, F)>>
<Asyncify<Func> as Injectable<Input, Output, (A, B, C, D, E, F, G)>>
<Asyncify<Func> as Injectable<Input, Output, (A, B, C, D, E, F, G, H)>>
and 2 others
Instead of trying to upgrade directly to the latest version from 0.12, I tried upgrading incrementally until discovering that the issue appeared with the upgrade to 0.14. It looks like it might be related to the changes around retrying for rate limiting, but I'm at a bit of a loss with how to fix it.
My implementation is pretty simple in terms of how I'm calling the library itself:
pub async fn get_ai_transcription(audio: &String) -> Result<String, CustomError> {
    let client = Client::new();
    let request = CreateTranscriptionRequestArgs::default()
        .file(audio)
        .model("whisper-1")
        .build()?;
    let response = client.audio().transcribe(request).await?;
    Ok(response.text)
}
but I was able to determine that it was occurring due to the let response = client.audio().transcribe(request).await?; line.
After trying a lot of debugging to even get the error message to change, I added this basic test
fn test() {
    fn assert_send(_: impl Send) {}
    assert_send(get_ai_transcription(&"".to_string()));
}
and finally got a more readable error that disappears in 0.13 and reappears around 0.14.
|
309 | assert_send(get_ai_transcription(&"".to_string()));
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ future returned by `get_ai_transcription` is not `Send`
|
= help: the trait `std::marker::Send` is not implemented for `dyn futures_util::Future<Output = Result<Form, OpenAIError>>`
note: required by a bound in `assert_send`
--> src/ai.rs:308:28
|
308 | fn assert_send(_: impl Send) {}
| ^^^^ required by this bound in `assert_send`
Unfortunately, now I'm a bit stuck. I'm pretty new to Rust, so debugging can still be a bit difficult. Do you have any suggestions or ideas for what I might be able to try?
Add a standalone fine tuning example runnable with the cargo run command.
Rust code continues to work because I already had these fields as Option<String>, but the following PR is confirmation that they are not needed.
Hello,
I've seen the Dev Day update for this and attempted a basic upgrade of my own project to enable the use of DALL-E 3 rather than DALL-E 2, but I've been unable to compile the changes due to errors coming from this library.
The only change I have made to my project is adding .model(ImageModel::DallE3) to the image request.
I've put the Cargo.toml and the output of cargo build below, and can confirm that I'm using rustc 1.73.0, which I updated today.
I wouldn't doubt that I did something wrong, but I can't figure out what it is.
Cargo.toml
[dependencies]
tokio = { version = "1.29.1", features = ["macros", "rt-multi-thread"] }
serenity = { default-features = false, features = ["client", "gateway", "model", "rustls_backend", "cache"], version = "0.11.5"}
async-openai = "0.16.0"
rusqlite = { version = "0.29.0", features = ["bundled"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
reqwest = { version = "0.11", features = ["blocking"]}
uuid = "1.4.1"
rand = "0.8.5"
[profile.release.package."*"]
strip = true
opt-level = "z"
[profile.release]
lto = true
cargo build output (username redacted from file paths)
Compiling async-openai v0.16.0
error: unknown serde variant attribute `untagged`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:59:13
|
59 | #[serde(untagged)]
| ^^^^^^^^
error: unknown serde variant attribute `untagged`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:309:13
|
309 | #[serde(untagged)]
| ^^^^^^^^
error: unknown serde variant attribute `untagged`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1364:13
|
1364 | #[serde(untagged)]
| ^^^^^^^^
error: unknown serde variant attribute `untagged`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1619:13
|
1619 | #[serde(untagged)]
| ^^^^^^^^
error: unknown serde variant attribute `untagged`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1630:13
|
1630 | #[serde(untagged)]
| ^^^^^^^^
error[E0277]: the trait bound `types::types::ImageModel: Serialize` is not satisfied
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:329:24
|
329 | #[derive(Debug, Clone, Serialize, Default, Builder, PartialEq)]
| ^^^^^^^^^ the trait `Serialize` is not implemented for `types::types::ImageModel`
...
340 | /// The model to use for image generation.
| ------------------------------------------ required by a bound introduced by this call
|
= help: the following other types implement trait `Serialize`:
&'a T
&'a mut T
()
(T0, T1)
(T0, T1, T2)
(T0, T1, T2, T3)
(T0, T1, T2, T3, T4)
(T0, T1, T2, T3, T4, T5)
and 289 others
= note: required for `std::option::Option<types::types::ImageModel>` to implement `Serialize`
note: required by a bound in `config::_::_serde::ser::SerializeStruct::serialize_field`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\ser\mod.rs:1901:12
|
1901 | T: Serialize;
| ^^^^^^^^^ required by this bound in `SerializeStruct::serialize_field`
error[E0277]: the trait bound `types::types::ChatCompletionToolChoiceOption: Serialize` is not satisfied
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1368:17
|
1368 | #[derive(Clone, Serialize, Default, Debug, Builder, Deserialize, PartialEq)]
| ^^^^^^^^^ the trait `Serialize` is not implemented for `types::types::ChatCompletionToolChoiceOption`
...
1454 | #[serde(skip_serializing_if = "Option::is_none")]
| - required by a bound introduced by this call
|
= help: the following other types implement trait `Serialize`:
&'a T
&'a mut T
()
(T0, T1)
(T0, T1, T2)
(T0, T1, T2, T3)
(T0, T1, T2, T3, T4)
(T0, T1, T2, T3, T4, T5)
and 289 others
= note: required for `std::option::Option<types::types::ChatCompletionToolChoiceOption>` to implement `Serialize`
note: required by a bound in `config::_::_serde::ser::SerializeStruct::serialize_field`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\ser\mod.rs:1901:12
|
1901 | T: Serialize;
| ^^^^^^^^^ required by this bound in `SerializeStruct::serialize_field`
error[E0277]: the trait bound `types::types::ChatCompletionFunctionCall: Serialize` is not satisfied
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1368:17
|
1368 | #[derive(Clone, Serialize, Default, Debug, Builder, Deserialize, PartialEq)]
| ^^^^^^^^^ the trait `Serialize` is not implemented for `types::types::ChatCompletionFunctionCall`
...
1461 | /// Controls how the model responds to function calls.
| ------------------------------------------------------ required by a bound introduced by this call
|
= help: the following other types implement trait `Serialize`:
&'a T
&'a mut T
()
(T0, T1)
(T0, T1, T2)
(T0, T1, T2, T3)
(T0, T1, T2, T3, T4)
(T0, T1, T2, T3, T4, T5)
and 289 others
= note: required for `std::option::Option<types::types::ChatCompletionFunctionCall>` to implement `Serialize`
note: required by a bound in `config::_::_serde::ser::SerializeStruct::serialize_field`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\ser\mod.rs:1901:12
|
1901 | T: Serialize;
| ^^^^^^^^^ required by this bound in `SerializeStruct::serialize_field`
error[E0277]: the trait bound `types::types::ChatCompletionToolChoiceOption: Deserialize<'_>` is not satisfied
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1455:22
|
1455 | pub tool_choice: Option<ChatCompletionToolChoiceOption>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Deserialize<'_>` is not implemented for `types::types::ChatCompletionToolChoiceOption`
|
= help: the following other types implement trait `Deserialize<'de>`:
&'a Path
&'a [u8]
&'a str
()
(T0, T1)
(T0, T1, T2)
(T0, T1, T2, T3)
(T0, T1, T2, T3, T4)
and 341 others
= note: required for `std::option::Option<types::types::ChatCompletionToolChoiceOption>` to implement `Deserialize<'_>`
note: required by a bound in `next_element`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\de\mod.rs:1729:12
|
1729 | T: Deserialize<'de>,
| ^^^^^^^^^^^^^^^^ required by this bound in `SeqAccess::next_element`
error[E0277]: the trait bound `types::types::ChatCompletionFunctionCall: Deserialize<'_>` is not satisfied
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1468:24
|
1468 | pub function_call: Option<ChatCompletionFunctionCall>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Deserialize<'_>` is not implemented for `types::types::ChatCompletionFunctionCall`
|
= help: the following other types implement trait `Deserialize<'de>`:
&'a Path
&'a [u8]
&'a str
()
(T0, T1)
(T0, T1, T2)
(T0, T1, T2, T3)
(T0, T1, T2, T3, T4)
and 341 others
= note: required for `std::option::Option<types::types::ChatCompletionFunctionCall>` to implement `Deserialize<'_>`
note: required by a bound in `next_element`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\de\mod.rs:1729:12
|
1729 | T: Deserialize<'de>,
| ^^^^^^^^^^^^^^^^ required by this bound in `SeqAccess::next_element`
error[E0277]: the trait bound `types::types::ChatCompletionToolChoiceOption: Deserialize<'_>` is not satisfied
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1455:22
|
1455 | pub tool_choice: Option<ChatCompletionToolChoiceOption>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Deserialize<'_>` is not implemented for `types::types::ChatCompletionToolChoiceOption`
|
= help: the following other types implement trait `Deserialize<'de>`:
&'a Path
&'a [u8]
&'a str
()
(T0, T1)
(T0, T1, T2)
(T0, T1, T2, T3)
(T0, T1, T2, T3, T4)
and 341 others
= note: required for `std::option::Option<types::types::ChatCompletionToolChoiceOption>` to implement `Deserialize<'_>`
note: required by a bound in `next_value`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\de\mod.rs:1868:12
|
1868 | V: Deserialize<'de>,
| ^^^^^^^^^^^^^^^^ required by this bound in `MapAccess::next_value`
error[E0277]: the trait bound `types::types::ChatCompletionFunctionCall: Deserialize<'_>` is not satisfied
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1468:24
|
1468 | pub function_call: Option<ChatCompletionFunctionCall>,
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Deserialize<'_>` is not implemented for `types::types::ChatCompletionFunctionCall`
|
= help: the following other types implement trait `Deserialize<'de>`:
&'a Path
&'a [u8]
&'a str
()
(T0, T1)
(T0, T1, T2)
(T0, T1, T2, T3)
(T0, T1, T2, T3, T4)
and 341 others
= note: required for `std::option::Option<types::types::ChatCompletionFunctionCall>` to implement `Deserialize<'_>`
note: required by a bound in `next_value`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\de\mod.rs:1868:12
|
1868 | V: Deserialize<'de>,
| ^^^^^^^^^^^^^^^^ required by this bound in `MapAccess::next_value`
error[E0277]: the trait bound `types::types::ChatCompletionToolChoiceOption: Deserialize<'_>` is not satisfied
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1454:5
|
1454 | #[serde(skip_serializing_if = "Option::is_none")]
| ^ the trait `Deserialize<'_>` is not implemented for `types::types::ChatCompletionToolChoiceOption`
|
= help: the following other types implement trait `Deserialize<'de>`:
&'a Path
&'a [u8]
&'a str
()
(T0, T1)
(T0, T1, T2)
(T0, T1, T2, T3)
(T0, T1, T2, T3, T4)
and 341 others
= note: required for `std::option::Option<types::types::ChatCompletionToolChoiceOption>` to implement `Deserialize<'_>`
note: required by a bound in `config::_::_serde::__private::de::missing_field`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\private\de.rs:22:8
|
22 | V: Deserialize<'de>,
| ^^^^^^^^^^^^^^^^ required by this bound in `missing_field`
error[E0277]: the trait bound `types::types::ChatCompletionFunctionCall: Deserialize<'_>` is not satisfied
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1461:5
|
1461 | /// Controls how the model responds to function calls.
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Deserialize<'_>` is not implemented for `types::types::ChatCompletionFunctionCall`
|
= help: the following other types implement trait `Deserialize<'de>`:
&'a Path
&'a [u8]
&'a str
()
(T0, T1)
(T0, T1, T2)
(T0, T1, T2, T3)
(T0, T1, T2, T3, T4)
and 341 others
= note: required for `std::option::Option<types::types::ChatCompletionFunctionCall>` to implement `Deserialize<'_>`
note: required by a bound in `config::_::_serde::__private::de::missing_field`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\private\de.rs:22:8
|
22 | V: Deserialize<'de>,
| ^^^^^^^^^^^^^^^^ required by this bound in `missing_field`
error[E0277]: the trait bound `SpeechModel: Serialize` is not satisfied
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1665:53
|
1665 | #[derive(Clone, Default, Debug, Builder, PartialEq, Serialize)]
| ^^^^^^^^^ the trait `Serialize` is not implemented for `SpeechModel`
...
1675 | /// One of the available [TTS models](https://platform.openai.com/docs/models/tts): `tts-1` or `tts-1-hd`
| --------------------------------------------------------------------------------------------------------- required by a bound introduced by this call
|
= help: the following other types implement trait `Serialize`:
&'a T
&'a mut T
()
(T0, T1)
(T0, T1, T2)
(T0, T1, T2, T3)
(T0, T1, T2, T3, T4)
(T0, T1, T2, T3, T4, T5)
and 289 others
note: required by a bound in `config::_::_serde::ser::SerializeStruct::serialize_field`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\ser\mod.rs:1901:12
|
1901 | T: Serialize;
| ^^^^^^^^^ required by this bound in `SerializeStruct::serialize_field`
error[E0277]: the trait bound `Voice: Serialize` is not satisfied
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\async-openai-0.16.0\src\types\types.rs:1665:53
|
1665 | #[derive(Clone, Default, Debug, Builder, PartialEq, Serialize)]
| ^^^^^^^^^ the trait `Serialize` is not implemented for `Voice`
...
1678 | /// The voice to use when generating the audio. Supported voices are `alloy`, `echo`, `fable`, `onyx`, `nova`, and `shimmer`.
| ----------------------------------------------------------------------------------------------------------------------------- required by a bound introduced by this call
|
= help: the following other types implement trait `Serialize`:
&'a T
&'a mut T
()
(T0, T1)
(T0, T1, T2)
(T0, T1, T2, T3)
(T0, T1, T2, T3, T4)
(T0, T1, T2, T3, T4, T5)
and 289 others
note: required by a bound in `config::_::_serde::ser::SerializeStruct::serialize_field`
--> C:\Users\[USERNAME]\.cargo\registry\src\github.com-1ecc6299db9ec823\serde-1.0.163\src\ser\mod.rs:1901:12
|
1901 | T: Serialize;
| ^^^^^^^^^ required by this bound in `SerializeStruct::serialize_field`
For more information about this error, try `rustc --explain E0277`.
error: could not compile `async-openai` due to 16 previous errors
ChatCompletionResponseFormat was introduced to support JSON output in the chat completions API, but I believe the r#type field should be public instead of private? Apologies if I'm missing something. Happy to open a PR if this is the case.
The base_url default is https://api.openai.com. How can I set a different base_url? Also, is a .env file supported?
Hi,
Could support for proxy services be added? Something like this:
async fn execute<O>(&self, request: reqwest::Request) -> Result<O, OpenAIError>
where
    O: DeserializeOwned,
{
    let client = match self.proxy {
        Some(p) => reqwest::Client::builder().proxy(reqwest::Proxy::all(p)?).build()?,
        None => reqwest::Client::new(),
    };
    // ...
}
The following code won't compile because CompletionResponseStream isn't marked as Send:
fn interpret_bool(token_stream: &mut CompletionResponseStream) -> BoxFuture<'_, bool> {
    async move {
        while let Some(response) = token_stream.next().await {
            match response {
                Ok(response) => {
                    let token_str = &response.choices[0].text.trim();
                    if !token_str.is_empty() {
                        return token_str.contains("yes") || token_str.contains("Yes");
                    }
                }
                Err(_) => panic!(),
            }
        }
        false
    }
    .boxed()
}
This limits the ability to integrate this crate into a project using a multi-threaded runtime. The issue isn't fundamental however, because the underlying Stream trait is Send. I have prototyped a change to address this. I will submit a PR.
Thank you.
Currently the Deserialize trait is not derived for CreateCompletionRequest: https://github.com/64bit/async-openai/blob/main/async-openai/src/types/types.rs#L53
Deriving it would make it easier to compose with this awesome crate.
https://github.com/64bit/async-openai/blob/main/examples/function-call/src/main.rs
The current function call example uses the now-deprecated ChatCompletionResponseMessage::function_call field. It was deprecated in favor of ChatCompletionResponseMessage::tool_calls. Could an example be provided using the new tool_calls field?
Works with turbo but not gpt-4. The error message doesn't help much.
error: stream failed: Invalid status code: 404 Not Found
Code
#[derive(Deserialize)]
struct ChatInput {
    content: String,
    model: Option<String>,
}

#[post("/stream", format = "json", data = "<chat_input>")]
async fn stream(chat_input: Json<ChatInput>) -> TextStream![String] {
    let client = Client::new();
    let model = chat_input.model.clone().unwrap_or_else(|| "gpt-3.5-turbo".to_string());
    println!("{}", model);
    let model = "gpt-4-32k".to_string();
    let request = CreateChatCompletionRequestArgs::default()
        .model(model)
        // .max_tokens(512u16)
        .messages([
            ChatCompletionRequestMessageArgs::default()
                .content(chat_input.content.clone())
                .role(Role::User)
                .build()
                .unwrap(),
        ])
        .build()
        .unwrap();
    dbg!(&request);
    let mut stream = client.chat().create_stream(request).await.unwrap();
    TextStream! {
        while let Some(result) = stream.next().await {
            match result {
                Ok(response) => {
                    for chat_choice in &response.choices {
                        if let Some(ref content) = chat_choice.delta.content {
                            yield content.clone();
                        }
                    }
                }
                Err(err) => {
                    yield format!("error: {}", err);
                }
            }
        }
    }
}
A self-contained example built using the Assistants API, which anyone can run by simply executing cargo run.
Referring to https://platform.openai.com/docs/api-reference/chat/create, these messages can have names, but I don't find them in ChatCompletionRequestUserMessage, ChatCompletionRequestAssistantMessage, or ChatCompletionRequestSystemMessage. This field is seldom used, but it counts toward tokens and (based on my experiments) is recognized by GPTs.
See #67 (comment)
https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo
I imagine GPT-4 Vision is going to have some API differences compared to normal language models, which take only text instead of files. Supporting it would probably be desirable to many people, however.
Hi @64bit!
TL;DR: async-openai seems to silently create unexpected, invalid requests because the OpenAI API expects the content field be present, even if it is null.
The content field seems to be required in situations that, AFAIK, is only mentioned en passant in the OpenAI API reference documentation:
content (string or null) Required
The contents of the message. content is required for all messages, and may be null for assistant messages with function calls.
(Emphases mine.) As such, I don't know if this should be addressed in async-openai. Depending on how you interpret the documentation, it might be the case that async-openai is in fact violating the API contract. In any case, I had such a bad time debugging that I think it's worth creating an issue.
The following fails by using curl
directly:
$ curl https://api.openai.com/v1/chat/completions -u :$OPENAI_API_KEY -H 'Content-Type: application/json' -d '{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"What is the weather like in Boston?"},{"role":"assistant","function_call":{"name":"get_current_weather","arguments":"{\"location\":\"Boston, MA\"}"}},{"role":"function","content":"{\"forecast\":[\"sunny\",\"windy\"],\"location\":\"Boston, MA\",\"temperature\":\"72\",\"unit\":null}","name":"get_current_weather"}],"functions":[{"name":"get_current_weather","description":"Get the current weather in a given location","parameters":{"properties":{"location":{"description":"The city and state, e.g. San Francisco, CA","type":"string"},"unit":{"enum":["celsius","fahrenheit"],"type":"string"}},"required":["location"],"type":"object"}}],"temperature":0.0,"max_tokens":null}'
{
"error": {
"message": "'content' is a required property - 'messages.1'",
"type": "invalid_request_error",
"param": null,
"code": null
}
}
The second message, {"role":"assistant","function_call":{"name":"get_current_weather","arguments":"{\"location\":\"Boston, MA\"}"}}, does not contain a content field. The solution is to insert one. Both "content":null and "content":"" work. So there is a difference between omitting the content field entirely and having "content":null.
I created the failing example above by serializing a CreateChatCompletionRequest directly. The issue lies in skipping serialization of the content field when it is None:
async-openai/src/types/types.rs, lines 724 to 727 (commit 3776935)
The current workaround is to set content to an empty string:
aot::ChatCompletionRequestMessageArgs::default()
    .role(aot::Role::Assistant)
    .function_call(aot::FunctionCall { name, arguments })
    .content("")
    .build()?
Again, depending on how you interpret the OpenAI API documentation, this behaviour might break the API contract. What do you think?
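To make the distinction concrete: an Option-typed field can reach the wire in three shapes, and the API rejects exactly one of them for assistant messages with function calls. A hand-rolled sketch (standing in for serde's skip_serializing_if = "Option::is_none") of the three shapes:

```rust
// Sketch of the three wire shapes a nullable field can take.
// `skip_if_none = true` mimics serde's `skip_serializing_if = "Option::is_none"`,
// which is the behaviour that produces the rejected request.
fn serialize_message(role: &str, content: Option<&str>, skip_if_none: bool) -> String {
    match content {
        Some(c) => format!(r#"{{"role":"{role}","content":"{c}"}}"#),
        None if skip_if_none => format!(r#"{{"role":"{role}"}}"#), // field omitted: rejected
        None => format!(r#"{{"role":"{role}","content":null}}"#), // explicit null: accepted
    }
}
```

The fix would be to emit "content":null (or require an explicit content) for this message variant instead of skipping the field.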
Would like to enable something like this:
fn use_client<C: Config>(client: Box<dyn CommonClientTrait<ConfigType = C>>) {
    // You can now call any method defined in the CommonClientTrait on the client
    let models = client.models();
    // ...
}

fn main() {
    let openai_config = OpenAIConfig::default();
    let openai_client = Client::with_config(openai_config);
    use_client(Box::new(openai_client));

    let azure_config = AzureConfig::default();
    let azure_client = Client::with_config(azure_config);
    use_client(Box::new(azure_client));
}
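A stdlib-only sketch of how such a common trait over differently-configured clients can work. One design note: erasing the config type behind the common trait (rather than exposing it as an associated type) keeps the trait object-safe, so one Box&lt;dyn ...&gt; type covers both clients. All type names here mirror the request but are illustrative:

```rust
// Stdlib-only sketch: a common trait over clients with different config
// types, usable through `Box<dyn CommonClientTrait>`. `api_base` stands
// in for the real per-config behaviour.
trait Config {
    fn api_base(&self) -> String;
}

struct OpenAIConfig;
impl Config for OpenAIConfig {
    fn api_base(&self) -> String {
        "https://api.openai.com/v1".to_string()
    }
}

struct AzureConfig;
impl Config for AzureConfig {
    fn api_base(&self) -> String {
        "https://example.openai.azure.com".to_string()
    }
}

// Object-safe common surface: no associated config type leaks through,
// so both clients share one trait-object type.
trait CommonClientTrait {
    fn models_url(&self) -> String;
}

struct Client<C: Config> {
    config: C,
}

impl<C: Config> CommonClientTrait for Client<C> {
    fn models_url(&self) -> String {
        format!("{}/models", self.config.api_base())
    }
}

fn use_client(client: Box<dyn CommonClientTrait>) -> String {
    client.models_url()
}
```

Both an OpenAI-configured and an Azure-configured client then flow through the same use_client function.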
It seems like the max_tokens field is missing in the args builder, but it is present in the documentation:
https://platform.openai.com/docs/api-reference/chat/create
Text to speech via the OpenAI API was announced at the OpenAI DevDay opening keynote:
https://platform.openai.com/docs/models/tts
https://platform.openai.com/docs/guides/text-to-speech
I'd be very grateful if this library would add support.
I noticed the latest release specifies, as minimum versions, the latest releases of all dependency crates. In some cases this crate requires a version of another crate released only a few days ago.
The problem is that some larger projects version-lock certain dependencies, and it can take them many months to migrate to the latest. This means it's impossible to integrate async-openai into a project alongside certain other frameworks.
Obviously, when functionality from the latest version of a dependency is needed, it's necessary to specify that dependency version; however, automatically advancing to the latest version of every crate can cause a lot of unnecessary problems.
Thank you.
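For reference, Cargo's default caret requirements already permit any later semver-compatible release, so a library only needs to state the oldest version whose API it actually uses. A sketch of the distinction (version numbers illustrative):

```toml
[dependencies]
# Caret requirement: "1.0.100 or any newer 1.x". Stating the oldest
# version that has the needed API lets downstream projects keep their
# own version locks.
serde = "1.0.100"

# Anti-pattern for libraries: bumping the minimum to last week's
# release forces every consumer to upgrade in lockstep.
# serde = "1.0.163"
```

Only when a feature from a newer release is genuinely used should the minimum be raised to that release.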
It's very strange: when I use SpeechResponseFormat::Flac, the file created is not valid and cannot be played. Something gets saved, but I'm not sure what it is (I cannot open it). (I did use the matching file extension for each of those formats.)
However, when I use SpeechResponseFormat::Aac or SpeechResponseFormat::Mp3 explicitly, it works fine. And leaving the format as None works too, as that defaults to mp3.
let request = CreateSpeechRequestArgs::default()
    .input("Today is a wonderful day to build something people love!")
    .voice(Voice::Alloy)
    .model(SpeechModel::Tts1)
    .response_format(SpeechResponseFormat::Flac) // Mp3, Aac will work fine
    .build()?;
let response = client.audio().speech(request).await?;
response.save("./data/audio.flac").await?;
Note: I checked via the OpenAI TypeScript API, and flac does work there.
Running `/Users/robert/src/rust/openai/async-openai/examples/target/debug/codex`
Error: ApiError(ApiError { message: "The model: `code-davinci-002` does not exist", type: "invalid_request_error", param: None, code: Some(String("model_not_found")) })
async-openai is great, but I found myself with a lot of function-call related logic to deal with, and much more still to come, and inlining JSON objects isn't why I came to the Rust party. openai-func-enums lets you compose descriptions of "functions" using enum types and enum variants to represent functions and their arguments, generate the function JSON, and deserialize into struct instances (types generated by the library) that have properties matching your enum field types, so you can match on the variants and get on with it. Just thought I'd share in case it's useful to anyone else.
Over time, src/types/types.rs and src/types/impls.rs have gotten huge.
It would be a quality improvement to refactor them along the lines of src/types/assistants.
Any plans for the new OpenAI Assistants/Threads API?
Whenever there are updates to this library, it causes compatibility issues with tiktoken-rs. It could be better to flip the script and have this library depend on tiktoken-rs instead, especially since this library often needs changes to keep up with OpenAI API updates.