
orhanerday / open-ai


OpenAI PHP SDK: the most downloaded, forked, and contributed-to PHP SDK for OpenAI GPT-3 and DALL·E, with a large supporting community. Works with Laravel, Symfony, Yii, CakePHP, or any other PHP framework, and supports ChatGPT-like streaming. (ChatGPT is supported.)

Home Page: https://orhanerday.gitbook.io/openai-php-api-1/

License: MIT License

PHP 100.00%
openai gpt-3 laravel php dall-e dalle2 openai-api cakephp symfony yii

open-ai's Introduction

Hi there 👋


You can Buy Me A Coffee.

"orhanerday" is the name of the open-source initiative led by Orhan Erday. The designation is valid worldwide, extending beyond the Republic of Türkiye (formerly known as Turkey) to all nations.

open-ai's People

Contributors

adetch, ali-wells, assert6, bashar94, cotrufoa, dependabot[bot], dougkulak, dsampaolo, fireqong, github-actions[bot], gouguoyin, joacir, johanvanhelden, mahadsprouttech, marcosegato, muchwat, mydnic, orhan-cmd, orhanerday, reply2future, slaffik, yakupseymen


open-ai's Issues

Open AI Ada text completion returns bad results in code but works in playground

Describe the bug

I'm trying to use OpenAI's Ada language model to summarize a piece of text. When I try their playground, it works and I get a summarization that makes sense and can be used by humans.
This is the cURL from the playground:

curl https://api.openai.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "text-ada-001",
    "prompt": "Please write a one paragraph professional synopsis:\n\nSome text",
    "temperature": 0,
    "max_tokens": 60,
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0
  }'

This is the code that I use in PHP:

$open_ai_key = 'xxx';
$open_ai = new OpenAi($open_ai_key);

$complete = $open_ai->completion([
    'model' => 'text-ada-001',
    'prompt' => 'Please write a one paragraph professional synopsis: ' . $text,
    'temperature' => 0,
    'max_tokens' => 60,
    'frequency_penalty' => 0,
    'presence_penalty' => 0,
]);

return $complete;

I have also tried both ada and davinci, and in both cases the result is nonsense. I say nonsense because the returned text is not something you could read and say, 'Hey, this is a professional synopsis.' Here is an example of a sentence I got in one of the iterations:

'It's not pretty and no I thought to myself, oh look IT'S NOT THAT REPUBLICAN kids would kno one of these things. OH IT'S A RESTRICTIOUS SCHOOL'.

I can assure you, there are no mentions of republicans or kids in the text that I'm processing.

My question is, am I doing something wrong? Does OpenAi work differently on their playground and in code?
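For what it's worth, the playground curl above includes 'top_p' => 1 and separates the instruction from the text with a blank line ("\n\n"), while the PHP call omits both. A minimal sketch (with $text as a placeholder) that builds the exact payload the curl sends, so the only remaining variable is the model itself:

```php
<?php
// Sketch: mirror the playground curl's payload exactly.
// $text stands in for the text being summarized.
$text = 'Some text';

$params = [
    'model' => 'text-ada-001',
    'prompt' => "Please write a one paragraph professional synopsis:\n\n" . $text,
    'temperature' => 0,
    'max_tokens' => 60,
    'top_p' => 1,                // present in the curl, missing in the PHP call
    'frequency_penalty' => 0,
    'presence_penalty' => 0,
];

// $complete = $open_ai->completion($params); // identical payload to the playground
```

This is a sketch, not a guaranteed fix: ada's output can still differ run to run, but aligning the parameters removes one variable from the comparison.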

To Reproduce

  1. Write a new PHP function that tries to create a professional synopsis using the ada model.
  2. Paste in a piece of text and make the request
  3. Compare the results in the OpenAI playground and in code and see that they are wildly different.

Example text that I'm using:

Alright then. Uh So welcome to another episode of E. M. S. On the Mountain. This one from the mountain. Oh snap. This is the show for those interested in Austrian rulers medicine. I'm Sean as always joined by my backcountry partner mike. And today's show we're gonna talk about one of our cases we had, I don't know, probably a couple of years ago now, while back. Yeah, so this was mike's so he's going to lead us through this party. So I'm guessing because I'm so bad at the editing and the getting things online at a reasonable time that this will be the first case review that we put up on the interwebs. But this is a new thing we're trying out. We're uh we're gonna do a few shows from the mountain. So normally I sit in my basement and Sean sits in his basement. We stare at each other across the magic of the interwebs. But today we are literally sitting across the table from each other in a old old building in the middle of the woods, attempting to record something that's lively and exciting for you folks. So let's see how this goes. So, so if you hear background noise like no kidding animals, birds and random people, radios, people, doors, it's all part of the part of the party and it could be a train wreck, but I have a feeling that at least one person and only one person johnny b will give us his feedback on what he thinks about this and maybe our wives will care. But yeah, all right, so let's talk about this. We had a case a couple of years ago that started out as a normal, it's a normal saturday afternoon in the woods. And uh, the tones drop. No, the tones did not drop. I apologize. There was no tone drop it. There was a, there was a call on the radio from a uh, from a report of it went out as a person with a little trouble breathing. That was an understatement in a certain respect. And it was suggested that perhaps we go kind of check it out, see what's going on. 
So we jump in our magic buggy and bop on over, luckily this patient wasn't too too far away from where we were at the time. We head down trail we found said person, they were 100 yards off of a fire road. And that fire road was maybe 5200 yards from a paved road. So they weren't real deep in the woods. But they were definitely hiking. They were definitely on a trail. They were definitely about £400. Yes, Yes. They were just want to make sure we paint that map properly. So we're, we're about 100 50 yards down a rocky trail. Maybe 200. I said I'd go between two and 300. Okay, a little bit further. That trail led to eventually an intersection with a fire road and that fire road about 100 yards or so from hard top if you will. So here we are with a £400 individual who's breathing at a rate of, I don't know, 800 or so. It's more like it was, it was more like between 35 and 40. They were uh yeah, this patient was panting like a champion. Now let's back up because I love storytelling and I like to do it and completely jacked up time order because I didn't make notes prior to proceeding down trail. After we stepped out of our trusty steed, we were greeted by the patient's family member and this family member immediately informed us that they were within two or three minutes. They informed us that their family member was just having a quote sugar problem because they had the diabetes and that she was in fact a paramedic at one point but did not want to keep that up because they were going to make her do si es She didn't know she had to do si es tu research so she lost it real quick. Like but she definitely was a paramedic and she was definitely convinced that her family member was experiencing a sugar problem. Yeah. So I'm not so convinced that she was ever a paramedic. I'm just going by what I was dog hey, it is what it is. Yeah. So we also learned as we proceeded down to link up with said patient that a number of bystanders. Another a number. Excuse me not bystanders. 
A number of other hikers that had come by had provided various snacks and sugar free products because it was believed that this individual is experiencing a hypoglycemic event and needed more sugar. So upon arrival on scene, by the way, just a little note here, we did bring a cardiac monitor with us because it was only 100 yards or so. It was a report of a person having trouble breathing. The one and only time mike and I have ever taken a life pack down a trail. There was one other time. We'll talk about that one. That's when I carried it all the way down. Dark hollow for the girl with the weird chest thing. A 30 year old whatever. And I decided I was never doing that again. I'll have to cut out the dark hollow part because that's kind of Yeah, whatever carried it down that trail. So upon arrival, one of the first things a good paramedics gonna do is assess the scene. And then since it was reported that this person is having a sugar problem, we assess the state of her sugar needs keep in mind breathing at 35 to 40. Diaphragmatic. Sitting on a log, leaning against another nice bystander. Yeah. Trying to hold her upright so we get a quick finger prick. I don't remember what her blood pressure was off the top of my head. It wasn't astronomical. No, it was maybe upper one hundred's very low 200. Remember being better than mine. It wasn't. Yeah, nothing that made you go, Oh, she's good at, I mean she's breathing at like 40 And she's diaphragmatic. She's been out hiking. And did I mention that she's not a shrinking flower? So we grab the sugar and I will never forget the sugar number 768. And I thought to myself, uh, probably doesn't need any more sugar. So we do grab a quick 12 lead Until

Code snippets

No response

OS

Linux

PHP version

PHP 7.4

Library version

openai v3.4

Bad gateway/cf_bad_gateway/null on long token request

Describe the bug

I'm working on a school project to test the differences in content created by GPT-3 vs. GPT-4, but to get the quantity needed for testing, I have to run a script that creates the content for me and saves it to a JSON file. GPT-4 doesn't seem to be handling big requests very well, though.

When I try to get a request from OpenAI with GPT-4, it seems to time out even when I've increased the timeout to 600 seconds.
I get "null" on the content, and oftentimes I also get "null" on the error message.
The few times I was able to produce an error message, I got "502 cf_bad_gateway".

To Reproduce

The title and the outline are generated by OpenAI with GPT-4 before this final request is made.

I run the below code with the following parameters:
keyword = expensive champagne
title = The Ultimate Guide to the Top 10 Most Expensive Champagnes: Is It Worth the Splurge?
outline = 1. Introduction to Expensive Champagne
1.1 - The allure of luxury champagne
1.2 - The role of expensive champagne in celebrations and special events

  2. Factors Contributing to the High Price of Champagne
    2.1 - Production process and limited availability
    2.2 - Prestigious brand reputation
    2.3 - Aging and maturation

  3. Top Expensive Champagne Brands
    3.1 - Dom Pérignon
    3.2 - Krug
    3.3 - Louis Roederer Cristal
    3.4 - Armand de Brignac
    3.5 - Moët & Chandon

  4. The Taste of Luxury: What Sets Expensive Champagne Apart
    4.1 - Flavor profile and complexity
    4.2 - Fine bubbles and mouthfeel
    4.3 - Pairing expensive champagne with food

  5. The Role of Expensive Champagne in Popular Culture
    5.1 - Iconic moments in movies and television
    5.2 - Celebrity endorsements and collaborations
    5.3 - Expensive champagne in music and lyrics

  6. Investing in Expensive Champagne
    6.1 - Collecting and storing champagne
    6.2 - The potential for appreciation in value
    6.3 - Risks and rewards of investing in champagne

  7. The Experience: Sipping on Expensive Champagne
    7.1 - Champagne etiquette and rituals
    7.2 - Best glassware for enjoying luxury champagne
    7.3 - Creating memorable moments with expensive champagne

  8. Conclusion: The Enduring Appeal of Expensive Champagne
    8.1 - The timeless nature of luxury
    8.2 - The joy of indulging in life's finer pleasures

Code snippets

function get_article_content($open_ai, $article_outline, $article_title, $keyword) {
    $model = 'gpt-4';
    $retry = false;
    $content_chat_decoded = null;
    $error_message = null;

    do {
        $open_ai->setTimeout(300);
        $content_chat = $open_ai->chat([
            'model' => $model,
            'messages' => [
                [
                    "role" => "system",
                    "content" => "You're AI article writer and SEO expert, you write the most amazing content.
            Each article you write should contain the following:
            - The article should be at least 1500 words, ideally 2000 words in total.
            - Start the article with an introduction. The introduction should not have a heading."
                ],
                [
                    "role" => "user",
                    "content" => "{$article_outline}
            Title: {$article_title}
            SEO keyword: {$keyword}
            Write the article:"
                ],
            ],
            'temperature' => 0.7,
            'max_tokens' => 4000,
            'frequency_penalty' => 0,
            'presence_penalty' => 0,
        ]);

        $content_chat_decoded_raw = json_decode($content_chat);

        // Check for an error (or an empty body) before reading choices,
        // otherwise a failed request triggers a property access on null
        if ($content_chat_decoded_raw === null || isset($content_chat_decoded_raw->error)) {
            $retry = true;
            $model = 'gpt-4-0314';

            // Store the error message, if the API returned one
            $error_message = $content_chat_decoded_raw->error->message ?? null;
        } else {
            $retry = false;
            $content_chat_decoded = $content_chat_decoded_raw->choices[0]->message->content;
        }

    } while ($retry);

    return [$content_chat_decoded, $error_message];
}

// 3. Write out the article, keeping in mind to SEO optimize for the keyword
list($content_chat_decoded, $error_message) = get_article_content($open_ai, $article_outline, $article_title, $keyword);
$article_content = $content_chat_decoded;
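For intermittent 502s on long requests, a pause between retries usually works better than an immediate retry. A small backoff helper (an addition of mine, not part of the library) that the do/while loop above could call before retrying:

```php
<?php
// Exponential backoff sketch: 1s, 2s, 4s, ... capped at $cap seconds.
function backoffSeconds(int $attempt, int $cap = 60): int
{
    return (int) min($cap, 2 ** $attempt);
}

// Hypothetical use inside the retry loop:
// $attempt++;
// sleep(backoffSeconds($attempt));
```

Capping the delay keeps a long run of failures from stalling the script indefinitely while still easing pressure on the gateway.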

OS

Ubuntu 20

PHP version

PHP 7.1.1-1

Library version

openai v3.0.1

Adding more capabilities

Hi there!

First of all, what a great package! Thank you for creating this.

I was wondering if you'd be open to any PR on the below areas:

  • Files
  • Fine tuning

I would like to extend the package so we could upload training files and ultimately fine-tune also.

Let me know :)

Chat GPT-4 model integration

Describe the feature or improvement you're requesting

I need to integrate the Chat GPT-4 model in this repo. How can I do this?
I was trying but didn't get any response from the API.
Please let me know when you are planning to integrate the latest GPT-4 model.
I would appreciate it if you responded.
Thanks

Additional context

No response

Markdown while typing possible?

Describe the feature or improvement you're requesting

Well, I got this beauty of a package working, but I ran into one problem I cannot seem to get my head around.
I am working with SSE and that works fine; I get all the bits and pieces and render them into a div.
But when ChatGPT is outputting markdown-styled content (like a table), I cannot apply markdown while it's still "typing".
I can only manage to do this after ChatGPT is done.

Can anybody shed some light on this?

Additional context

No response

Change BaseURL from https://api.openai.com/v1

Describe the feature or improvement you're requesting

There are several interesting analytics products like https://www.helicone.ai/ that allow for easy tracking of usage, and even metered billing. To make it work, we need to change the base URL from the standard OpenAI's API, to the middleman. It would be a nice feature to be able to define this as a constant or something like it.
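Until the library exposes such an option, the rewrite itself is just a prefix substitution on the endpoint URL. A sketch (the middleman base URL below is illustrative, not an endorsement of a specific value):

```php
<?php
// Rewrite an OpenAI endpoint URL to point at a middleman base URL.
function rewriteBase(string $url, string $newBase): string
{
    return preg_replace('#^https://api\.openai\.com/v1#', rtrim($newBase, '/'), $url);
}

// e.g. rewriteBase('https://api.openai.com/v1/chat/completions', 'https://oai.hconeai.com/v1/')
```

Baking this into the library as a constant or constructor argument, as the request suggests, would avoid patching vendor code.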

Additional context

No response

The model won't print/return any results.

Hi there, I'm trying to set up a working example using Laravel but when I dd($complete) I get false.

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Orhanerday\OpenAi\OpenAi;

class HomeController extends Controller
{
    public function index() {
        $open_ai = new OpenAi(env('OPEN_AI_API_KEY'));
        $engines = $open_ai->engines();

        $complete = $open_ai->complete([
            'engine' => 'davinci',
            'prompt' => 'Hello',
            'temperature' => 0.9,
            'max_tokens' => 150,
            'frequency_penalty' => 0,
            'presence_penalty' => 0.6,
        ]);

        $data = json_decode($complete, true);

        dd($engines);
    }
}

$open_ai returns a collection with engine, headers and contentTypes

$engines returns false as well, it's not even reading the library.

$data returns null

Request Error:Failed to connect to api.openai.com port 443 after 21066 ms: Timed out

Describe the bug

When I use a third-party Python library for OpenAI, it can return results normally. However, when I run PHP, it returns false. When I checked with curl_errno(), I found the error mentioned above. Both situations are under the same network conditions, and I have ruled out the issue of SSL certificate expiration. What could be the problem?
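One way to separate a network problem from a library problem is to probe the endpoint with plain cURL, outside the library, and inspect curl_errno()/curl_error() directly. A diagnostic sketch:

```php
<?php
// Probe a URL with plain cURL and report the transport error, if any.
function probe(string $url, int $timeout = 10): array
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_CONNECTTIMEOUT => $timeout,
        CURLOPT_TIMEOUT        => $timeout,
    ]);
    curl_exec($ch);
    $result = ['errno' => curl_errno($ch), 'error' => curl_error($ch)];
    curl_close($ch);
    return $result;
}

// probe('https://api.openai.com/v1'): errno 28 is a timeout, 7 a refused or
// blocked connection, 0 means the transport itself is fine.
```

If the probe also times out, the cause is the network path (firewall, proxy, DNS) rather than this package; a working Python client on the same machine may simply be configured with a proxy that the PHP process is not using.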

To Reproduce

(screenshot attached to the original issue)

Code snippets

No response

OS

Windows

PHP version

php 8.2

Library version

4.7

Maximum execution time of 60 seconds exceeded on File Upload

Hello again,

I'm trying the file upload feature but every single time it times out

Maximum execution time of 60 seconds exceeded

The file is less than 1kb

This is my controller

$file = $request->file('file');
Storage::putFileAs('files', $file, 'sample.jsonl');
$c_file = curl_file_create(\URL::to('storage/files/sample.jsonl'));
echo "[";
echo $open_ai->uploadFile([
    "purpose" => "answers",
    "file" => $c_file,
]);
echo ",";
echo $open_ai->listFiles();
echo "]";

And how do I interact with the file once it's uploaded? Like how do I give the AI commands based on my file contents?
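One likely culprit in the snippet above: curl_file_create() needs a local filesystem path, while \URL::to() produces an HTTP URL, which cURL cannot read as an upload (it waits and eventually times out). A sketch of the difference, using a temp file as a stand-in; in Laravel, Storage::path('files/sample.jsonl') would give the local path, assuming the local disk:

```php
<?php
// curl_file_create() wants a path on disk, not a URL.
$path = sys_get_temp_dir() . '/sample.jsonl';   // stand-in for Storage::path(...)
file_put_contents($path, '{"prompt":"p","completion":"c"}' . PHP_EOL);

$c_file = curl_file_create($path, 'application/jsonl', 'sample.jsonl');

// $open_ai->uploadFile(['purpose' => 'answers', 'file' => $c_file]);
```

With a real path, the multipart upload can stream the file contents instead of stalling on an HTTP fetch.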

Where are the error codes from the OpenAI documentation ?

Describe the feature or improvement you're requesting

Hello,

I would like to know if it's currently possible to get the real OpenAI API error codes, or if this feature will be added.

Currently, when I call the chat function and get an error, it looks, for example, like the following:

object(stdClass)#26338 (1) {
  ["error"]=>
  object(stdClass)#26339 (4) {
    ["message"]=>
    string(105) "Incorrect API key provided: 6. You can find your API key at https://platform.openai.com/account/api-keys."
    ["type"]=>
    string(21) "invalid_request_error"
    ["param"]=>
    NULL
    ["code"]=>
    string(15) "invalid_api_key"
  }
}

The obtained code is "invalid_api_key", which isn't the code documented on the OpenAI website (401) (https://platform.openai.com/docs/guides/error-codes) and isn't documented anywhere else.

It seems that the code should normally be at the root of the response, but that doesn't appear to be the case.

I thank you in advance for your reply.
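As a stopgap until the HTTP status is surfaced, the string codes in the body can be mapped back to the documented statuses by hand. The mapping below is an assumption of mine based on the error-codes guide, not something the library provides:

```php
<?php
// Map API error-code strings to the HTTP statuses OpenAI documents.
// This lookup is a hand-maintained workaround, not an official table.
function httpStatusForApiCode(?string $code): int
{
    $map = [
        'invalid_api_key'         => 401,
        'context_length_exceeded' => 400,
        'rate_limit_exceeded'     => 429,
    ];
    return $map[$code] ?? 500; // unknown codes: treat as a server-side error
}
```

The drawback is that the table must be kept in sync with the docs by hand, which is exactly why surfacing the real HTTP status from cURL would be the better fix.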

Additional context

No response

Is there any strategy for performing throttled parallel calls using this library?

Describe the feature or improvement you're requesting

I was just looking at this here https://github.com/openai/openai-cookbook/blob/main/examples/api_request_parallel_processor.py

It essentially lets you queue/batch multiple calls to run in parallel while observing the rate limits. Super useful if you need to run many calls at once and don't want to run into errors. Would something like this be feasible in PHP, I wonder?
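The scheduling half of that cookbook script is doable in plain PHP. The sketch below (my own, not a library feature) is a minimal fixed-window limiter that says how long to wait before the next request may start; the permitted requests could then be dispatched concurrently with curl_multi_*:

```php
<?php
// Minimal fixed-window rate limiter: at most $maxPerWindow request starts
// per $windowSec seconds.
final class Throttle
{
    private array $stamps = [];

    public function __construct(private int $maxPerWindow, private int $windowSec) {}

    /** Returns seconds to wait before the next request may start (0.0 = go now). */
    public function delayFor(float $now): float
    {
        // Forget request starts that have fallen out of the window.
        $this->stamps = array_values(array_filter(
            $this->stamps,
            fn ($t) => $t > $now - $this->windowSec
        ));
        if (count($this->stamps) < $this->maxPerWindow) {
            $this->stamps[] = $now;   // reserve a slot
            return 0.0;
        }
        return ($this->stamps[0] + $this->windowSec) - $now;
    }
}
```

A caller would typically pass microtime(true) as $now and usleep() for the returned delay; token-based limits (TPM) would need a second bucket keyed on estimated tokens rather than request count.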

Additional context

No response

RESOLVED: Method chat() undefined. Did not update library with composer correctly.

Describe the bug

Updated library with:

composer update orhanerday/open-ai

Receive following error

Fatal error: Uncaught Error: Call to undefined method Orhanerday\OpenAi\OpenAi::chat() in ...

when running example "Chat" endpoint code.

RESOLVED:

FYI: Other noobs, updating library requires following shell command ("composer require ...", not "composer update..."):

composer require orhanerday/open-ai

Maybe add a note in README with brief upgrade instructions for others new to composer? Thanks.

The event-Stream feature cannot catch any error.

Describe the feature or improvement you're requesting

I see that completion with stream cannot catch errors such as a wrong API key.
Completion with stream and a correct API key still works fine.

Additional context

No response

chat api sometimes unreasonably "exceeds" 4000 completion tokens

Describe the bug

I'm having an issue with the chat API saying I am exceeding the maximum length of 4000 tokens despite my request clearly not exceeding even 300 tokens. I tried the exact same request using Python's openai library and it worked fine. It seems that past a certain conversation length, the completion token count goes wrong.

To Reproduce

This is the error after running the code:
["error"]=>
object(stdClass)#28 (4) {
["message"]=>
string(189) "This model's maximum context length is 4097 tokens. However, you requested 4111 tokens (111 in the messages, 4000 in the completion). Please reduce the length of the messages or completion."
["type"]=>
string(21) "invalid_request_error"
["param"]=>
string(8) "messages"
["code"]=>
string(23) "context_length_exceeded"
}
}

Code snippets

require_once __DIR__ . '/vendor/autoload.php';
$dotenv = Dotenv\Dotenv::createImmutable(__DIR__);
$dotenv->load();
use Orhanerday\OpenAi\OpenAi;

$open_ai = new OpenAi($_ENV['OPENAI_API_KEY']);
$chat = $open_ai->chat([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        [
            "role"=>"system",
            "content"=>"summarize conversation within 40 words"
        ],
        [
            "role"=>"user",
            "content"=>" Anna: Hi my name is Anna User: hello how do i cook an egg? Anna: To cook an egg, bring a pot of water to boil, reduce heat, and gently add an egg for 3-5 minutes. User: what about a potato? Anna: To cook a potato, scrub it clean, poke a few holes in it, and bake in the oven at 400°F for about an hour, or until tender."
        ]
    ],
    'temperature' => 1.0,
    'max_tokens' => 4000,
    'frequency_penalty' => 0,
    'presence_penalty' => 0,
 ]);

$d = json_decode($chat);
var_dump($d);
// Get Content
echo($d->choices[0]->message->content);
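For what it's worth, the message quoted above is plain arithmetic rather than a miscount: the prompt tokens and 'max_tokens' must together fit the model's context window (4097 for gpt-3.5-turbo at the time), because 'max_tokens' reserves the completion budget up front. A sketch of the budget calculation:

```php
<?php
// Largest 'max_tokens' that still fits the context window after the prompt.
function completionBudget(int $promptTokens, int $contextWindow = 4097): int
{
    return max(0, $contextWindow - $promptTokens);
}

// 111 message tokens leave at most 3986 for the completion, so
// 'max_tokens' => 4000 overflows by the 14 tokens the error reports.
```

Lowering 'max_tokens' (or computing it from the prompt size as above) would make the request in this report succeed in both PHP and Python.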

OS

Windows

PHP version

PHP 8.2.4

Library version

openai V4.7

When I use stream, I sometimes encounter such a problem that the response content starts from a certain sentence and repeats continuously

Describe the bug

When I use stream, I sometimes encounter a problem where the response content starts from a certain sentence and then repeats continuously.

To Reproduce

1.The user enters a question to ask
2.use eventSource
3.php code see below

Code snippets

$opts = [
            'model' => 'text-davinci-003',
            'prompt' => '你看过《满江红》吗?',
            'temperature' => 0,
            'max_tokens' => 2000,
            'stream' => true,
        ];

        header('Content-type: text/event-stream');
        header('Cache-Control: no-cache');

        $open_ai->completion($opts, function ($curl_info, $data) {
            echo $data . "\n\n";
            echo PHP_EOL;
            ob_flush();
            flush();
            info($data);
            return strlen($data);
        });


The frontend uses EventSource.

OS

linux

PHP version

php 7.4

Library version

openai v3.0.1

HTML Format

Describe the feature or improvement you're requesting

Hello, I want to receive the incoming response in HTML format. Is this possible? thank you

Additional context

No response

Disabling SSL checking in cURL can solve getting "false" as a response

Describe the bug

This is not a bug per se, but I thought I would share it with other users and the dev team in case it helps.

In another thread, users were reporting that they were getting nothing or "false" back when they were sending a simple Completion like this:

    $open_ai = new OpenAi($open_ai_key);

    $response = $open_ai->completion([
        'model' => 'davinci',
        'prompt' => 'Hello',
        'temperature' => 0.9,
        'max_tokens' => 150,
        'frequency_penalty' => 0,
        'presence_penalty' => 0.6,
    ]);

The issue is now closed, but I found that if you are testing on a local host and are using something like Laragon (which is similar to XAMPP) to host your app or website, the certificates issued are very generic and can cause cURL to fail if it is set to verify the certificate.

To resolve it, you can go to the OpenAi.php file in the vendor folder and add
CURLOPT_SSL_VERIFYPEER => false,
to the sendRequest function like this:

    $curl_info = [
        CURLOPT_URL => $url,
        CURLOPT_RETURNTRANSFER => true,
        ...
        CURLOPT_HTTPHEADER => $this->headers,
        CURLOPT_SSL_VERIFYPEER => false,
    ];

and it no longer tries to verify the SSL certificate. It resolved my issue and so it may help others who encounter a similar issue.

You may want to consider adding it to the next release, but I am unsure what the implications may be from an SSL point of view, if any. Maybe you could add it as an optional $opts parameter for those testing on a localhost, which they could then remove in production.
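A safer variant of the same workaround is to keep verification enabled and point cURL at a current CA bundle instead, e.g. one downloaded from curl.se/docs/caextract.html. The path below is an assumption for a Laragon setup, not a path the library knows about:

```php
<?php
// Keep peer verification on, but supply an up-to-date CA bundle.
$curl_info = [
    // ... existing options from sendRequest() ...
    CURLOPT_SSL_VERIFYPEER => true,
    CURLOPT_CAINFO => 'C:/laragon/etc/ssl/cacert.pem', // hypothetical local path
];
```

This keeps the TLS guarantees intact on localhost, whereas CURLOPT_SSL_VERIFYPEER => false silently disables man-in-the-middle protection and is risky to ship.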

To Reproduce

Nothing to reproduce

Code snippets

No response

OS

Windows

PHP version

PHP 8.1.3

Library version

v3.3

Request batching

Describe the feature or improvement you're requesting

From the documentation, it seems there is no option for batching requests to avoid hitting the rate limit?

OpenAI refers to request batching and gives a code example in Python (https://platform.openai.com/docs/guides/rate-limits/error-mitigation). It would be useful to do something similar with this package.

When I try to include several messages in a single request it responds with a single “choice” that uses all the messages as context rather than one response per message.

The OpenAI example below is specifically for the completion endpoint rather than the chat endpoint. However, the chat endpoint is probably far more desirable for request batching, given its current limit of 3 RPM and a price 1/10 that of the completion endpoint.

UPDATE: After doing a bit of research, there don't seem to be viable options for batching on the ChatCompletion endpoint. At least that's what I've read on the OpenAI forums. There are some workarounds, though. Here are links to the posts:

Additional context

OpenAI official batching example for completion endpoint using Python:

import openai  # for making OpenAI API requests

num_stories = 10
prompts = ["Once upon a time,"] * num_stories

# batched example, with 10 story completions per request
response = openai.Completion.create(
    model="curie",
    prompt=prompts,
    max_tokens=20,
)

# match completions to prompts by index
stories = [""] * len(prompts)
for choice in response.choices:
    stories[choice.index] = prompts[choice.index] + choice.text

# print stories
for story in stories:
    print(story)

Question: Does anybody know a good token count package for PHP?

Describe the feature or improvement you're requesting

At this point we just divide the number of words by 0.75 to get the estimate, but a "real" tokenizer like tiktoken, for example, would be helpful. I did not come across one anywhere; does one exist?

If there is one, we're looking forward to using it, and perhaps it will help your package as well?
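For reference, the words-divided-by-0.75 heuristic described above fits in a few lines; it is only an approximation (about 0.75 English words per token), not a real tokenizer:

```php
<?php
// Rough token estimate: ~0.75 English words per token, rounded up.
// Only a heuristic; real counts require a BPE tokenizer like tiktoken.
function estimateTokens(string $text): int
{
    return (int) ceil(str_word_count($text) / 0.75);
}
```

Rounding up keeps the estimate conservative, which is the safer direction when warning end users about cost.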

Additional context

It would help give an estimate of tokens to our end users so they know what's about to get charged by OpenAI.

PHPStan shows parameter error

Describe the bug

PHPStan shows error:
Parameter #2 $stream of method Orhanerday\OpenAi\OpenAi::chat() expects null, Closure given.

The second parameter, $stream, of the chat() method is typed as null.

To Reproduce

  1. use PHPStan Level 5
  2. run it

Code snippets

$open_ai->chat($options, function ($curl_info, $data) {
    echo $data . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
    return strlen($data);
});

OS

macOS

PHP version

PHP 8.0

Library version

openai V4.7.1

Chunk

Describe the feature or improvement you're requesting

Hello,
How to disable chunk?

Additional context

No response

'answers' is not one of ['fine-tune'] - 'purpose'

Describe the bug

I am using:

$c_file = curl_file_create($file);
$result = $open_ai->uploadFile([
    "purpose" => "answers",
    "file"    => $c_file,
]);

Results :
'answers' is not one of ['fine-tune'] - 'purpose'

To Reproduce

  1. $c_file = curl_file_create($file);
  2. $result =$open_ai->uploadFile([
    "purpose" => "answers",
    "file" => $c_file,
    ]);
  3. Results :
    'answers' is not one of ['fine-tune'] - 'purpose'

Code snippets

No response

OS

win

PHP version

PHP 8

Library version

openai v3.0.1

Built in Tokenizer

Describe the feature or improvement you're requesting

It would be great if you or someone else implemented a tokenizer for the different models or one that allowed you to pass in a parameter like 'cl100k_base'.

This means we don't need to juggle different libraries that specialise in specific tokenizers.

Additional context

No response

GPT-3.5-Turbo logit_bias parameter

Describe the bug

Hi,

First of all thanks for your work on this.
I encounter a problem when I try to send an object of various tokenized words with their biases via the "logit_bias" parameter. (https://platform.openai.com/docs/api-reference/chat/create#chat/create-logit_bias)

To Reproduce

  1. Use your quickstart template
  2. Add the "logit_bias" parameter with any kind of JSON object, or even 'null', which according to the docs is the accepted default.
  3. The response is an error message stating that whatever we put as a value for "logit_bias" is not of type 'object' - 'logit_bias'

Code snippets

<?php

require __DIR__ . '/vendor/autoload.php'; // remove this line if you use a PHP Framework.

use Orhanerday\OpenAi\OpenAi;

$open_ai_key = getenv('OPENAI_API_KEY');
$open_ai = new OpenAi($open_ai_key);

$complete = $open_ai->chat([
   'model' => 'gpt-3.5-turbo',
   'messages' => [
       [
           "role" => "system",
           "content" => "You are a helpful assistant."
       ],
       [
           "role" => "user",
           "content" => "Who won the world series in 2020?"
       ],
       [
           "role" => "assistant",
           "content" => "The Los Angeles Dodgers won the World Series in 2020."
       ],
       [
           "role" => "user",
           "content" => "Where was it played?"
       ],
   ],
   'temperature' => 1.0,
   'max_tokens' => 4000,
   'frequency_penalty' => 0,
   'presence_penalty' => 0,
   'logit_bias' => null // or { 3789: 10 } or anything of the sort
]);

var_dump($complete);

OS

Windows

PHP version

PHP 8.1

Library version

open-ai v4.7.1

Call to any function that sends curl request throws exception in v4.5

Describe the bug

On calling any function which sends a cURL request (e.g. completions), we get an exception in v4.5:
curl_getinfo(): supplied resource is not a valid cURL handle resource
This is because, in sendRequest(), curl_getinfo() is called after curl_close(), which throws the exception and breaks everything. This makes version 4.5 not work at all.

To Reproduce

Call any endpoint, e.g. completions, using open-ai version 4.5

Code snippets

/**
     * @param string $url
     * @param string $method
     * @param array $opts
     * @return bool|string
     */
    private function sendRequest(string $url, string $method, array $opts = [])
    {
        $post_fields = json_encode($opts);

        if (array_key_exists('file', $opts) || array_key_exists('image', $opts)) {
            $this->headers[0] = $this->contentTypes["multipart/form-data"];
            $post_fields = $opts;
        } else {
            $this->headers[0] = $this->contentTypes["application/json"];
        }
        $curl_info = [
            CURLOPT_URL => $url,
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_ENCODING => '',
            CURLOPT_MAXREDIRS => 10,
            CURLOPT_TIMEOUT => $this->timeout,
            CURLOPT_FOLLOWLOCATION => true,
            CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
            CURLOPT_CUSTOMREQUEST => $method,
            CURLOPT_POSTFIELDS => $post_fields,
            CURLOPT_HTTPHEADER => $this->headers,
        ];

        if ($opts == []) {
            unset($curl_info[CURLOPT_POSTFIELDS]);
        }

        if (! empty($this->proxy)) {
            $curl_info[CURLOPT_PROXY] = $this->proxy;
        }

        if (array_key_exists('stream', $opts) && $opts['stream']) {
            $curl_info[CURLOPT_WRITEFUNCTION] = $this->stream_method;
        }

        $curl = curl_init();

        curl_setopt_array($curl, $curl_info);
        $response = curl_exec($curl);
        curl_close($curl);

        $info = curl_getinfo($curl);
        $this->curlInfo = $info;

        return $response;
    }
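The fix is simply to read the transfer info before releasing the handle. A minimal standalone sketch of the corrected ordering, using a local file:// URL so it runs without network access:

```php
<?php
// Demonstrates the fix: curl_getinfo() must run before curl_close().
// Uses a local file:// URL so no network or API key is needed.

$tmp = tempnam(sys_get_temp_dir(), 'curlfix');
file_put_contents($tmp, "hello");

$curl = curl_init();
curl_setopt_array($curl, [
    CURLOPT_URL => 'file://' . $tmp,
    CURLOPT_RETURNTRANSFER => true,
]);
$response = curl_exec($curl);

// Read the transfer info while the handle is still open...
$info = curl_getinfo($curl);

// ...and only then release the handle.
curl_close($curl);
unlink($tmp);

echo $response, PHP_EOL;   // hello
var_dump(is_array($info)); // bool(true)
```

In sendRequest itself the same reorder applies: move the `$info = curl_getinfo($curl);` line above `curl_close($curl);`.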

OS

Ubuntu 20.04 or any

PHP version

PHP 7.4.11 or any

Library version

openai v4.5

Function Calling?

Describe the feature or improvement you're requesting

Hi there! Glad to be using your wonderful project!
As you know for sure, OpenAI released some updates to the API, including Function Calling.
Is it possible to use Function Calling with the project right now?
If not, could you possibly add it?
Thanks anyway. Much appreciated work.
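Since chat() forwards its options array to the API unchanged, the Function Calling parameters should pass straight through even before explicit support lands (an assumption based on how sendRequest serializes the options; the field names below come from the OpenAI API, not from this library). A hedged sketch that builds the payload and would hand it to `$open_ai->chat($opts)`:

```php
<?php
// Sketch (assumption): chat() json_encodes the options array as-is, so the
// API's "functions" / "function_call" parameters can be passed through.

$opts = [
    'model' => 'gpt-3.5-turbo-0613',
    'messages' => [
        ['role' => 'user', 'content' => 'What is the weather in Boston?'],
    ],
    'functions' => [[
        'name' => 'get_current_weather',
        'description' => 'Get the current weather for a location',
        'parameters' => [
            'type' => 'object',
            'properties' => ['location' => ['type' => 'string']],
            'required' => ['location'],
        ],
    ]],
    'function_call' => 'auto',
];

// $complete = $open_ai->chat($opts); // requires an API key

echo $opts['functions'][0]['name'], PHP_EOL; // get_current_weather
```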

Additional context

No response

How to set header?

Describe the feature or improvement you're requesting

How to set header?

Additional context

No response

Variable $txt does not include initial parts of text

Describe the bug

I'm having an issue where the variable $txt is not including the initial parts of the text. Instead of "Hello world", it only contains "lo world". This is the code that I'm using:

header('Content-type: text/event-stream');
header('Cache-Control: no-cache');
$txt = "";
$complete = $open_ai->chat($opts, function ($curl_info, $data) use (&$role, &$txt, &$contentId) {
    if (($obj = json_decode($data)) && isset($obj->error->message) && $obj->error->message != "") {
        error_log(json_encode($obj->error->message));
    } else {
        echo $data;
        $clean = str_replace("data: ", "", $data);
        $arr = json_decode($clean, true);

        if ($data != "data: [DONE]\n\n" and isset($arr["choices"][0]["delta"]["content"])) {
            $txt .= $arr["choices"][0]["delta"]["content"];
        }
    }

    echo PHP_EOL;
    ob_flush();
    flush();
    return strlen($data);
});

I'm not sure why $txt is not including the initial parts of the text. Can someone help me with this issue?

To Reproduce

  1. Execute the code provided in the question.
    
  2. Ensure that the $open_ai object is properly instantiated and configured with the appropriate options.
    
  3. Trigger the function that calls the $open_ai->chat() method.
    

    Check the output and observe that the variable $txt does not include the initial parts of the text.
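One likely cause (an assumption, not confirmed from your logs): a single `$data` chunk handed to the callback can contain more than one `data: {...}` event, and the single str_replace + json_decode then fails for that chunk, silently dropping its content — which would explain losing the beginning of "Hello world". A hedged sketch that splits each chunk into individual events before decoding:

```php
<?php
// Sketch (assumption): split each streamed chunk into individual SSE events
// before decoding, instead of json_decode-ing the whole chunk at once.
function collectDeltas(string $data, string &$txt): void
{
    // One chunk may carry several "data: ..." events separated by blank lines.
    foreach (explode("\n\n", $data) as $event) {
        $event = trim($event);
        if ($event === '' || $event === 'data: [DONE]') {
            continue;
        }
        $json = substr($event, strlen('data: '));
        $arr  = json_decode($json, true);
        if (isset($arr['choices'][0]['delta']['content'])) {
            $txt .= $arr['choices'][0]['delta']['content'];
        }
    }
}

// Example: two events arriving in one chunk, as can happen on the first read.
$txt = '';
$chunk = 'data: {"choices":[{"delta":{"content":"Hel"}}]}' . "\n\n"
       . 'data: {"choices":[{"delta":{"content":"lo world"}}]}' . "\n\n";
collectDeltas($chunk, $txt);
echo $txt, PHP_EOL; // Hello world
```

Calling `collectDeltas($data, $txt)` inside the stream callback would replace the str_replace/json_decode lines.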

Code snippets

No response

OS

Ubuntu

PHP version

7.4.21

Library version

openai 4.7.1

$openai->completion (singular) works where OpenAI documentation specifies https://api.openai.com/v1/completions (plural)

Describe the bug

The README places links to OpenAI's reference documentation above the fold (at the very beginning), creating the impression that the library's methods share the names of the OpenAI endpoint paths. For example, it may be common for new users of the library to assume Completions are accessed by a method named "completions", such as:

$openai->completions() (spelled plural)

since the linked OpenAI reference documentation for "completions" endpoint describes endpoint as

POST https://api.openai.com/v1/completions (spelled plural)

Yet, the library's method is completion (spelled singular)

$openai->completion() (spelled singular)

This is made clear further down the README, but many new users may get sidetracked following links to OpenAI's reference and trying to use the endpoint paths as library method names. I suggest several possible fixes:

  1. At the top of the README, call out that the method names are documented further down.

  2. Create a prominent table mapping the OpenAI reference endpoints to the library's methods.

Thanks

To Reproduce

$openai->completion()

Code snippets

No response

OS

Linux

PHP version

PHP 7.4

Library version

openai v3

Fine-tune pricing as in Python CLI

Describe the feature or improvement you're requesting

When reading the docs and watching YouTube videos on fine-tuning your model, you see that the Python CLI displays the price of the model you're about to fine-tune.

Would this be possible with this package as well? That is, before actually starting the fine-tune, see an estimate of what it is going to cost, and then decide whether or not to continue with the actual fine-tune.

Additional context

No response

No output is coming from the openai library

Describe the bug

I have written this code, but it gives no output and var_dump($result) returns bool(false)

To Reproduce

Run the code in the snippet below; echo $result prints nothing and var_dump($result) shows bool(false).

Code snippets

<?php

    require __DIR__ . '/vendor/autoload.php';

    use Orhanerday\OpenAi\OpenAi;

    $open_ai_key = getenv('OPENAI_API_KEY');
    $open_ai = new OpenAi($open_ai_key);
    
    $result = $open_ai->completion([
        'model' => 'text-davinci-003',
        'prompt' => 'Make me a paragraph that summarizes this text for a second-grade student',
        'temperature' => 0.7,
        'max_tokens' => 64,
        "top_p" => 1.0,
        'frequency_penalty' => 0.0,
        'presence_penalty' => 0.0,
    ]);

    echo $result;
    // var_dump($result);
?>

OS

Windows

PHP version

PHP 8.1.0

Library version

openai v.3.0.1

How to get only content in output

Describe the bug

I write controller CI4:

 public function openAIPost()
    {
        $response = array();
        if ($this->request->getMethod() === 'post') {
            $question = $this->request->getPost('question');
            $open_ai_key = getenv('OPENAI_API_KEY');
            $open_ai = new OpenAi($open_ai_key);
            $complete = $open_ai->chat([
                'model' => 'gpt-3.5-turbo',
                'messages' => [
                    [
                        "role" => "user",
                        "content" => $question
                    ]
                ],
                'temperature' => 1.0,
                'max_tokens' => 10,
                'frequency_penalty' => 0,
                'presence_penalty' => 0,
            ]);
            $response['complete'] = $complete;
        }
        echo json_encode($response);
    }

But I get this output:

{"id":"chatcmpl-6xxhQ2FLm08VhE8KKHsTpilkvpMTS","object":"chat.completion","created":1679748856,"model":"gpt-3.5-turbo-0301","usage":{"prompt_tokens":9,"completion_tokens":10,"total_tokens":19},"choices":[{"message":{"role":"assistant","content":"This is a test of the AI language model."},"finish_reason":"length","index":0}]}

To Reproduce

Can you help us with how to get only the content from the response?
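The library returns the raw JSON body as a string, so decode it and index into the choices array. A sketch using the exact response shown above:

```php
<?php
// Decode the JSON string returned by the library and pull out just the
// assistant message content.
$complete = '{"id":"chatcmpl-6xxhQ2FLm08VhE8KKHsTpilkvpMTS","object":"chat.completion",'
    . '"created":1679748856,"model":"gpt-3.5-turbo-0301",'
    . '"usage":{"prompt_tokens":9,"completion_tokens":10,"total_tokens":19},'
    . '"choices":[{"message":{"role":"assistant","content":"This is a test of the AI language model."},'
    . '"finish_reason":"length","index":0}]}';

$decoded = json_decode($complete, true);
$content = $decoded['choices'][0]['message']['content'] ?? '';

echo $content, PHP_EOL; // This is a test of the AI language model.
```

In the controller, `$response['complete'] = $content;` would then return only the text.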

Code snippets

No response

OS

macOS

PHP version

8.2

Library version

openai v.3.0.1

Why is the answer so short? Is there any problem?

Describe the bug

Why is the answer so short? Is there any problem?

To Reproduce

$open_ai = new OpenAi($open_ai_key);

    $complete = $open_ai->completion([
        'model' => 'text-davinci-003',
        'prompt' => $prompt,
        'temperature' => 0.9,
        //'max_tokens' => 4096,
        'frequency_penalty' => 0,
        'presence_penalty' => 0.6,
    ]);

$prompt = "我家有只狗叫三万,你猜猜它为什么叫这个名字";
I expected a long answer, but the response is:
"{"id":"cmpl-.......","object":"text_completion","created":1676702955,"model":"text-davinci-003","choices":[{"text":"?\n\n这可能是你家","index":0,"logprobs":null,"finish_reason":"length"}],"usage":{"prompt_tokens":44,"completion_tokens":16,"total_tokens":60}}
"
Why is the answer so short? Is there any problem?
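The response itself points at the cause: `"finish_reason":"length"` means the completion was cut off at the token limit, and with `'max_tokens'` commented out the completions default of 16 applies — matching the 16 `completion_tokens` above. A sketch that detects the truncation (on a trimmed copy of the response), plus the fix:

```php
<?php
// "finish_reason":"length" means the output hit max_tokens. With
// 'max_tokens' commented out, the completions default (16) applies.
// The fix is to set it explicitly, e.g. 'max_tokens' => 1024, in the
// $open_ai->completion([...]) options.

$response = '{"id":"cmpl-x","object":"text_completion","created":1676702955,'
    . '"model":"text-davinci-003","choices":[{"text":"?","index":0,'
    . '"logprobs":null,"finish_reason":"length"}],'
    . '"usage":{"prompt_tokens":44,"completion_tokens":16,"total_tokens":60}}';

$arr = json_decode($response, true);
if (($arr['choices'][0]['finish_reason'] ?? '') === 'length') {
    echo "Truncated: raise max_tokens", PHP_EOL;
}
```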

Code snippets

No response

OS

centos7.2

PHP version

PHP 7.4

Library version

openai3

Chat API continuous thread

Describe the bug

When the script sends the next request to the chat API, it returns an unrelated response. How can we continue the same thread?

To Reproduce

Continue thread
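The chat API is stateless, so a "thread" is continued by resending the conversation history: append the assistant's reply (and the next user message) to the messages array before each call. A sketch of the bookkeeping:

```php
<?php
// The chat endpoint is stateless: to stay in the same conversation, resend
// the full history each time, appending each reply before the next call.
$messages = [
    ['role' => 'system', 'content' => 'You are a helpful assistant.'],
    ['role' => 'user', 'content' => 'Who won the world series in 2020?'],
];

// ...call $open_ai->chat(['model' => 'gpt-3.5-turbo', 'messages' => $messages, ...]),
// decode the response, then record the assistant's answer:
$assistantReply = 'The Los Angeles Dodgers won the World Series in 2020.';
$messages[] = ['role' => 'assistant', 'content' => $assistantReply];

// Next turn: append the follow-up question and send $messages again.
$messages[] = ['role' => 'user', 'content' => 'Where was it played?'];

echo count($messages), PHP_EOL; // 4
```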

Code snippets

No response

OS

MAC

PHP version

PHP latest

Library version

latest

Uncaught Error: Call to undefined function env()

Describe the bug

I receive this error. I'm running PHP 8.2

To Reproduce

require_once("vendor/autoload.php");
use Orhanerday\OpenAi\OpenAi;

$open_ai = new OpenAi(env('mykey'));
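env() is a Laravel/framework helper, not part of core PHP, which is why the call is undefined in a plain script. Use getenv() instead, as the library's own examples do:

```php
<?php
// env() is a Laravel helper, not core PHP. In a plain PHP script, read the
// key with getenv() instead.

// require_once 'vendor/autoload.php';   // then: $open_ai = new OpenAi($key);

putenv('OPENAI_API_KEY=sk-example');     // normally exported in your shell/server config
$key = getenv('OPENAI_API_KEY');

echo $key, PHP_EOL; // sk-example
```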

Code snippets

No response

OS

Hosting

PHP version

PHP 8.2

Library version

openai 3.1

return false in request

Describe the bug

When using this library in ThinkPHP 5, every call returns false — nothing else comes back. I have set "set OPENAI_API_KEY=sk-gjtv...."
I am sure there is nothing wrong with my code; if the library were not loaded at all, it would directly raise an error instead.

To Reproduce

1

Code snippets

<?php

namespace app\index\controller;

use think\Controller;
use Orhanerday\OpenAi\OpenAi;

class Index extends Controller
{
    public function index()
    {
        $open_ai_key = config("openai.OPENAI_API_KEY");
        $open_ai = new OpenAi($open_ai_key);
        //dump($open_ai_key); // have

        // $complete = $open_ai->completion([
        //     "model"   =>  "code-davinci-002",
        //     "prompt"  =>  "",
        //     "temperature" => 0,
        //     "max_tokens" => 64,
        //     "top_p" => 1,
        //     "frequency_penalty" => 0,
        //     "presence_penalty" => 0,
        //     "stop" => ["\"\"\""],
        // ]);
        $complete = $open_ai->completion([
            'model' => 'davinci',
            'prompt' => 'Hello',
            'temperature' => 0.9,
            'max_tokens' => 150,
            'frequency_penalty' => 0,
            'presence_penalty' => 0.6,
        ]);

        dump($complete); //that is "false"
        //return $this->fetch(["msg"=>$complete]);
    }
}

OS

win10

PHP version

7.4.3

Library version

3.4

I get nothing as response

Describe the bug

I get nothing in the response; when I var_dump it, I have a boolean false:

test.php:25:boolean false

To Reproduce

reproduced code

Code snippets

$prompt = "Hello";

$complete = $open_ai->completion([
    'model' => 'davinci',
    'prompt' => $prompt,
    'temperature' => 0.7,
    'max_tokens' => 150,
    'frequency_penalty' => 0,
    'presence_penalty' => 0,
]);

var_dump($complete);

OS

windows

PHP version

8.2

Library version

openai v3.5

The data was randomly merged

Describe the bug

When I've parsed the data and exported it to the front end, it should look something like this:
{"code":200,"message":"OK","data":"\u662f"}

Sometimes, however, it concatenates multiple results, which causes the front end to fail to parse the data:
{"code":200,"message":"OK","data":"\u662f"}
{"code":200,"message":"OK","data":"\u7684"}

To Reproduce

What happens is very random, if I add random sleep to the callback function, or if I add more PHP_EOL, the probability of it happening will decrease

Code snippets

$this->chat($option, function ($curl_info, string $data) {
                if (! str_contains($data, 'data: [DONE]')) {
                    $result = ChatStreamResponse::parse($data);

                    if ($result) {
                        $this->buffer[] = $result->content;
                        /**
                         * json encode the response and echo it
                         * {"code":200,"message":"OK","data":"\u662f"}
                         */
                        echo AbstractResponse::sendOK($result->content)->content();
                        echo PHP_EOL;
                        ob_flush();
                        flush();
                    }
                }
                return strlen($data);
            });
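A plausible explanation (an assumption, consistent with the sleep/PHP_EOL observation): output buffering or a proxy can coalesce several flushed events into one network read, so the front end must treat the stream as newline-delimited JSON and split before decoding, carrying any incomplete trailing line into the next read. A sketch of that receiver-side logic (in PHP here for consistency; a JS front end would do the same):

```php
<?php
// Assumption: flushes can be coalesced into one read, so the receiver must
// split on newlines and keep any incomplete trailing fragment for later.
function parseNdjson(string $read, string &$carry): array
{
    $carry .= $read;
    $lines = explode("\n", $carry);
    $carry = array_pop($lines); // last piece may be an incomplete line
    $out = [];
    foreach ($lines as $line) {
        $line = trim($line);
        if ($line !== '' && ($obj = json_decode($line, true)) !== null) {
            $out[] = $obj;
        }
    }
    return $out;
}

// Two events coalesced into one read:
$carry = '';
$read = '{"code":200,"message":"OK","data":"\u662f"}' . "\n"
      . '{"code":200,"message":"OK","data":"\u7684"}' . "\n";
$msgs = parseNdjson($read, $carry);
echo count($msgs), PHP_EOL; // 2
```

Server-side, always terminating each echoed event with exactly one newline gives the receiver a stable delimiter to split on.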

OS

macOS

PHP version

php8.1

Library version

openai v4.7.1

Completions requesting gpt-4 models timeout (Error: Gateway timeout.)

Describe the bug

Calls to $openai->chat with large messages and max_tokens (roughly 4K tokens each, ~8K total) often time out: the PHP script that makes the call exits after 10 minutes without receiving a response, sometimes receiving "Error: Gateway timeout." Calling the same script from a web browser fails even earlier, before any response to the endpoint call is received.

This does not happen every time, but has occurred almost every time $openai->chat is called with large context.

Is there:

  1. An alternative way to request large context completion that is less likely to fail in this manner?

  2. A way to request a timeout -- meaning a termination of the endpoint request along with explicit error response if OpenAI endpoint does not respond within a specified amount of time?

  3. A way to keep the API call and/or calling php script alive longer?

To Reproduce

$chat = $openai->chat([
	'model' => 'gpt-4-0314',
	'messages' => [/* messages totaling ~4K tokens */],
	'temperature' => 0,
	'max_tokens' => 4000,
	'frequency_penalty' => 0,
	'presence_penalty' => 0
]);
echo $chat;
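On questions 2 and 3: the library's sendRequest sets CURLOPT_TIMEOUT from an internal timeout property, so check whether your installed version exposes a public setter for it; and on the PHP side, set_time_limit() keeps the calling script alive longer. A hedged sketch:

```php
<?php
// Q3: keep the calling PHP script alive longer (argument is seconds).
set_time_limit(900);

// Q2 (assumption): sendRequest() passes an internal timeout property to
// CURLOPT_TIMEOUT; check your installed version for a public setter, e.g.:
// $open_ai->setTimeout(600); // hypothetical — verify it exists in your version

echo ini_get('max_execution_time'), PHP_EOL; // 900
```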

Code snippets

No response

OS

Linux

PHP version

PHP 7.6

Library version

openai v3

No way to list fine-tuned models

Hi!
I'm working with fine-tuned models and need to get a list of all that are available. However, the engines method only returns the base models and does not include the user's fine-tuned models.
Could we have a new models() method that returns all available models?
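The API's /v1/models endpoint returns base and fine-tuned models together, so a models listing method only needs to call it and let the caller filter by owner. A sketch of that filtering on a sample payload (the fine-tuned model id and owner below are hypothetical; if your installed version already provides listModels(), the commented line replaces the sample):

```php
<?php
// Sketch: /v1/models returns base models and your fine-tuned models together;
// filter by owner to isolate the fine-tuned ones.
//
// $result = json_decode($open_ai->listModels(), true); // if available in your version

$result = json_decode('{"object":"list","data":[
    {"id":"text-davinci-003","object":"model","owned_by":"openai"},
    {"id":"davinci:ft-acme-2023-01-01","object":"model","owned_by":"user-abc123"}
]}', true); // hypothetical sample payload

$fineTuned = array_values(array_filter(
    $result['data'],
    fn ($m) => !in_array($m['owned_by'], ['openai', 'system'], true)
));

echo $fineTuned[0]['id'], PHP_EOL; // davinci:ft-acme-2023-01-01
```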
