betalgo / openai

OpenAI .NET sdk - Azure OpenAI, ChatGPT, Whisper, and DALL-E

Home Page: https://betalgo.github.io/openai/

License: MIT License

Language: C# (100.00%)
csharp dall-e dotnet gpt-3 openai openai-api sdk chatgpt gpt-4 whisper

openai's Introduction

.NET SDK for OpenAI APIs

.NET SDK for OpenAI, a community library:

Betalgo.OpenAI

Install-Package Betalgo.OpenAI

Beta:

Assistant features are available only in the beta version.

To check available features, see the Feature Availability Table.

Beta version NuGet: v8.2.0-beta

Install-Package Betalgo.OpenAI -Version 8.2.0-beta

Enable beta features in any of these ways (a registration example follows):

  • "UseBeta": true in your config file
  • serviceCollection.AddOpenAIService(r => r.UseBeta = true); in your service registration
  • new OpenAiOptions { UseBeta = true } when constructing the service
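For example, a minimal registration sketch with beta features enabled (this assumes the dependency-injection setup shown later in this README; the environment variable name is illustrative):

var serviceCollection = new ServiceCollection();
serviceCollection.AddOpenAIService(options =>
{
    options.ApiKey = Environment.GetEnvironmentVariable("MY_OPEN_AI_API_KEY");
    options.UseBeta = true; // required for assistants, threads, messages and runs
});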

Experimental utilities library:

Betalgo.OpenAI.Utilities

Install-Package Betalgo.OpenAI.Utilities

Documentation and links:

Wiki Page | Feature Availability Table | Change Logs



Maintenance of this project is made possible by all the bug reporters, contributors, and sponsors.
💖 Sponsors:
@betalgo, Laser Cat Eyes

@tylerje, @oferavnery, @MayDay-wpf, @AnukarOP, @Removable


Sample Usages

The repository contains a sample project named OpenAI.Playground that you can refer to for a better understanding of how the library works. However, please exercise caution while experimenting with it, as some of the test methods may result in unintended consequences such as file deletion or fine tuning.

!! It is highly recommended that you use a separate account instead of your primary account while using the playground. This is because some test methods may add or delete your files and models, which could potentially cause unwanted issues. !!

Your API Key comes from here --> https://platform.openai.com/account/api-keys
Your Organization ID comes from here --> https://platform.openai.com/account/org-settings

Without using dependency injection:

var openAiService = new OpenAIService(new OpenAiOptions()
{
    ApiKey =  Environment.GetEnvironmentVariable("MY_OPEN_AI_API_KEY")
});

Using dependency injection:

secrets.json:

 "OpenAIServiceOptions": {
    //"ApiKey":"Your api key goes here"
    //,"Organization": "Your Organization Id goes here (optional)"
    //,"UseBeta": "true/false (optional)"
  },

(How to use user secrets: right-click your project name in Solution Explorer, then click "Manage User Secrets". This is a good way to keep your API keys out of your source code.)

For Beta Features:

  • Enable beta features with "UseBeta": true in your config file, serviceCollection.AddOpenAIService(r => r.UseBeta = true);, or new OpenAiOptions { UseBeta = true } in your service registration.

Program.cs

serviceCollection.AddOpenAIService();

OR
Use it like below, but do NOT put your API key directly in your source code.

Program.cs

serviceCollection.AddOpenAIService(settings => { settings.ApiKey = Environment.GetEnvironmentVariable("MY_OPEN_AI_API_KEY"); });

After registering the service, you can retrieve it from the service provider:

var openAiService = serviceProvider.GetRequiredService<IOpenAIService>();

You can set a default model (optional):

openAiService.SetDefaultModelId(Models.Davinci);
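Putting the registration steps together, here is a minimal end-to-end Program.cs sketch. The namespaces reflect recent Betalgo.OpenAI versions and the Microsoft.Extensions configuration/DI packages used by the Playground project; the ApiSettings.json file is illustrative, so adjust everything to your own setup:

using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using OpenAI.Extensions;
using OpenAI.Interfaces;
using OpenAI.ObjectModels;

// Configuration: user secrets (or an optional JSON file) supply OpenAIServiceOptions.
var configuration = new ConfigurationBuilder()
    .AddJsonFile("ApiSettings.json", optional: true)
    .AddUserSecrets<Program>()
    .Build();

var serviceCollection = new ServiceCollection();
serviceCollection.AddScoped<IConfiguration>(_ => configuration);

// Registers IOpenAIService using the configuration above.
serviceCollection.AddOpenAIService();

var serviceProvider = serviceCollection.BuildServiceProvider();
var openAiService = serviceProvider.GetRequiredService<IOpenAIService>();
openAiService.SetDefaultModelId(Models.Gpt_3_5_Turbo);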

ChatGPT Sample

var completionResult = await openAiService.ChatCompletion.CreateCompletion(new ChatCompletionCreateRequest
{
    Messages = new List<ChatMessage>
    {
        ChatMessage.FromSystem("You are a helpful assistant."),
        ChatMessage.FromUser("Who won the world series in 2020?"),
        ChatMessage.FromAssistant("The Los Angeles Dodgers won the World Series in 2020."),
        ChatMessage.FromUser("Where was it played?")
    },
    Model = Models.ChatGpt3_5Turbo,
    MaxTokens = 50//optional
});
if (completionResult.Successful)
{
   Console.WriteLine(completionResult.Choices.First().Message.Content);
}
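The chat endpoint can also be streamed. The following is not one of the official samples; it is a minimal sketch that follows the same CreateCompletionAsStream pattern used for completions later in this README and prints partial message content as it arrives:

var streamingResult = openAiService.ChatCompletion.CreateCompletionAsStream(new ChatCompletionCreateRequest
{
    Messages = new List<ChatMessage>
    {
        ChatMessage.FromSystem("You are a helpful assistant."),
        ChatMessage.FromUser("Tell me a short story about a laser cat.")
    },
    Model = Models.ChatGpt3_5Turbo,
    MaxTokens = 150 // optional
});

await foreach (var completion in streamingResult)
{
    if (completion.Successful)
    {
        // Each streamed chunk carries a partial piece of the assistant message.
        Console.Write(completion.Choices.FirstOrDefault()?.Message.Content);
    }
    else if (completion.Error != null)
    {
        Console.WriteLine($"{completion.Error.Code}: {completion.Error.Message}");
    }
}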

Function Sample

var fn1 = new FunctionDefinitionBuilder("get_current_weather", "Get the current weather")
    .AddParameter("location", PropertyDefinition.DefineString("The city and state, e.g. San Francisco, CA"))
    .AddParameter("format", PropertyDefinition.DefineEnum(new List<string> { "celsius", "fahrenheit" }, "The temperature unit to use. Infer this from the user's location."))
    .Validate()
    .Build();

var fn2 = new FunctionDefinitionBuilder("get_n_day_weather_forecast", "Get an N-day weather forecast")
    .AddParameter("location", new() { Type = "string", Description = "The city and state, e.g. San Francisco, CA" })
    .AddParameter("format", PropertyDefinition.DefineEnum(new List<string> { "celsius", "fahrenheit" }, "The temperature unit to use. Infer this from the user's location."))
    .AddParameter("num_days", PropertyDefinition.DefineInteger("The number of days to forecast"))
    .Validate()
    .Build();

var fn3 = new FunctionDefinitionBuilder("get_current_datetime", "Get the current date and time, e.g. 'Saturday, June 24, 2023 6:14:14 PM'")
    .Build();

var fn4 = new FunctionDefinitionBuilder("identify_number_sequence", "Get a sequence of numbers present in the user message")
    .AddParameter("values", PropertyDefinition.DefineArray(PropertyDefinition.DefineNumber("Sequence of numbers specified by the user")))
    .Build();

var tools = new List<ToolDefinition>()
{
    new ToolDefinition() { Function = fn1 },
    new ToolDefinition() { Function = fn2 },
    new ToolDefinition() { Function = fn3 },
    new ToolDefinition() { Function = fn4 },
};

ConsoleExtensions.WriteLine("Chat Function Call Test:", ConsoleColor.DarkCyan);
var completionResult = await sdk.ChatCompletion.CreateCompletion(new ChatCompletionCreateRequest
{
    Messages = new List<ChatMessage>
    {
        ChatMessage.FromSystem("Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous."),
        ChatMessage.FromUser("Give me a weather report for Chicago, USA, for the next 5 days.")
    },
    Tools = tools,
    MaxTokens = 50,
    Model = Models.Gpt_3_5_Turbo
});

if (completionResult.Successful)
{
    var choice = completionResult.Choices.First();
    Console.WriteLine($"Message:        {choice.Message.Content}");

    var fn = choice.Message.FunctionCall;
    if (fn != null)
    {
        Console.WriteLine($"Function call:  {fn.Name}");
        foreach (var entry in fn.ParseArguments())
        {
            Console.WriteLine($"  {entry.Key}: {entry.Value}");
        }
    }
}
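To act on the returned call, dispatch on the function name and its parsed arguments. The sketch below continues the sample above; GetCurrentWeather is a hypothetical local method standing in for your own implementation:

var functionCall = completionResult.Choices.First().Message.FunctionCall;
if (functionCall?.Name == "get_current_weather")
{
    string? location = null;
    string? format = "celsius";

    // ParseArguments() yields the key/value pairs supplied by the model.
    foreach (var entry in functionCall.ParseArguments())
    {
        if (entry.Key == "location") location = entry.Value?.ToString();
        if (entry.Key == "format") format = entry.Value?.ToString();
    }

    Console.WriteLine($"Model asked for the weather in {location} ({format}).");
    // var report = GetCurrentWeather(location, format); // call your own code here
}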

Completions Stream Sample

var completionResult = openAiService.Completions.CreateCompletionAsStream(new CompletionCreateRequest()
{
    Prompt = "Once upon a time",
    MaxTokens = 50
}, Models.Davinci);

await foreach (var completion in completionResult)
{
    if (completion.Successful)
    {
        Console.Write(completion.Choices.FirstOrDefault()?.Text);
    }
    else
    {
        if (completion.Error == null)
        {
            throw new Exception("Unknown Error");
        }

        Console.WriteLine($"{completion.Error.Code}: {completion.Error.Message}");
    }
}

Console.WriteLine("Complete");

DALL·E Sample

var imageResult = await openAiService.Image.CreateImage(new ImageCreateRequest
{
    Prompt = "Laser cat eyes",
    N = 2,
    Size = StaticValues.ImageStatics.Size.Size256,
    ResponseFormat = StaticValues.ImageStatics.ResponseFormat.Url,
    User = "TestUser"
});


if (imageResult.Successful)
{
    Console.WriteLine(string.Join("\n", imageResult.Results.Select(r => r.Url)));
}
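When you request URLs (as above), the generated images are hosted temporarily by OpenAI. The sketch below is not part of the SDK; it simply downloads each returned URL to a local file, and the file names are illustrative:

if (imageResult.Successful)
{
    using var httpClient = new HttpClient();
    var index = 0;
    foreach (var url in imageResult.Results.Select(r => r.Url))
    {
        // Download the generated image and save it next to the executable.
        var bytes = await httpClient.GetByteArrayAsync(url);
        await File.WriteAllBytesAsync($"laser-cat-{index++}.png", bytes);
    }
}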

VISION Sample

var completionResult = await sdk.ChatCompletion.CreateCompletion(
    new ChatCompletionCreateRequest
    {
        Messages = new List<ChatMessage>
        {
            ChatMessage.FromSystem("You are an image analyzer assistant."),
            ChatMessage.FromUser(
                new List<MessageContent>
                {
                    MessageContent.TextContent("What is on the picture in details?"),
                    MessageContent.ImageUrlContent(
                        "https://www.digitaltrends.com/wp-content/uploads/2016/06/1024px-Bill_Cunningham_at_Fashion_Week_photographed_by_Jiyang_Chen.jpg?p=1",
                        ImageStatics.ImageDetailTypes.High
                    )
                }
            ),
        },
        MaxTokens = 300,
        Model = Models.Gpt_4_vision_preview,
        N = 1
    }
);

if (completionResult.Successful)
{
    Console.WriteLine(completionResult.Choices.First().Message.Content);
}

VISION Sample using Base64 encoded image

const string fileName = "image.png";
var binaryImage = await FileExtensions.ReadAllBytesAsync(fileName);

var completionResult = await sdk.ChatCompletion.CreateCompletion(
    new ChatCompletionCreateRequest
    {
        Messages = new List<ChatMessage>
        {
            ChatMessage.FromSystem("You are an image analyzer assistant."),
            ChatMessage.FromUser(
                new List<MessageContent>
                {
                    MessageContent.TextContent("What is on the picture in details?"),
                    MessageContent.ImageBinaryContent(
                        binaryImage,
                        ImageStatics.ImageFileTypes.Png,
                        ImageStatics.ImageDetailTypes.High
                    )
                }
            ),
        },
        MaxTokens = 300,
        Model = Models.Gpt_4_vision_preview,
        N = 1
    }
);

if (completionResult.Successful)
{
    Console.WriteLine(completionResult.Choices.First().Message.Content);
}

Notes:

This library was formerly known as Betalgo.OpenAI.GPT3; it now has a new package ID, Betalgo.OpenAI.

Please note that due to time constraints, I was unable to thoroughly test all of the methods or fully document the library. If you encounter any issues, please do not hesitate to report them or submit a pull request - your contributions are always appreciated.

I initially developed this SDK for my personal use and later decided to share it with the community. As I have not maintained any open-source projects before, any assistance or feedback would be greatly appreciated. If you would like to contribute in any way, please feel free to reach out to me with your suggestions.

I will always be using the latest libraries, and future releases will frequently include breaking changes. Please take this into consideration before deciding to use the library. I want to make it clear that I cannot accept any responsibility for any damage caused by using the library. If you feel that this is not suitable for your purposes, you are free to explore alternative libraries or the OpenAI Web-API.

If I forgot your name in change logs, please accept my apologies and let me know so I can add it to the list.

Changelog

8.2.0-beta

  • Added support for beta features such as assistants, threads, messages, and runs. Some endpoints are still missing, but good progress has been made. See the complete list here: Feature Availability Table. Thanks to @CongquanHu, @alistein, @hucongquan.
  • Enable with "UseBeta": true in your config file, serviceCollection.AddOpenAIService(r => r.UseBeta = true);, or new OpenAiOptions { UseBeta = true } in your service registration.

8.1.1

  • Fixed incorrect mapping for batch API error response.

8.1.0

  • Added support for Batch API

8.0.1

  • Added support for the new models gpt-4-turbo and gpt-4-turbo-2024-04-09, thanks to @ChaseIngersol

8.0.0

  • Added support for .NET 8.0, thanks to @BroMarduk
  • Utilities library updated to target .NET 8.0 only

7.4.7

  • Fixed a bug that caused binary images to be sent as Base64 strings, thanks to @yt3trees
  • Fixed a bug that blocked CreateCompletionAsStream on some platforms. #331
  • Fixed a bug that caused an error with multiple tool calls; the index parameter is now handled. #493, thanks to @David-Buyer

openai's People

Contributors

aghimir3, almis90, belaszalontai, betalgoup, bromarduk, chaseingersol, choshinyoung, congquanhu, copypastedeveloper, david-buyer, digitalvir, dogdie233, doggy8088, gotmike, gspentzas1991, kayhantolga, kosmonikos, mac8005, pdcruze, qbm5, robertlyson, rzubek, samuelnygaard, sarilouis, shanepowell, simontjell, swimburger, szabe74, weihanli, yt3trees


openai's Issues

JsonException

Describe the bug
I started getting a JsonException when calling CreateCompletion as of this morning with no change of code on my end. Wonder if OpenAI changed something?

Your code piece

 try {
                await GPTSemaphore.WaitAsync();
                CompletionCreateResponse result = await api.Completions.CreateCompletion(completionRequest, "text-davinci-002");
                if (result.Successful) {
                    var resultStrings = result.Choices.Select(c => c.Text).ToList();
                    foreach (var resultString in resultStrings) {
                        prompt.responses.Add(Util.CleanListString(resultString));

                        if (RuntimeConfig.settings.displayOptions.showResults) {
                            Util.WriteLineToConsole("---- RESULT ----", ConsoleColor.DarkCyan);
                            Util.WriteLineToConsole(resultString, ConsoleColor.Cyan);
                        }
                    }
                    GPTRequestsTotal++;
                    return resultStrings.Count;
                } else {
                    // TODO: Handle failures in a smarter way
                    if (result.Error is not null) {
                        Console.WriteLine($"{result.Error.Code}: OpenAI = {result.Error.Message}");
                    }
                }
                throw new Exception("API Failure");
            }

Result
Exception has occurred: CLR/System.Text.Json.JsonException
An unhandled exception of type 'System.Text.Json.JsonException' occurred in System.Private.CoreLib.dll: ''<' is an invalid start of a value. Path: $ | LineNumber: 0 | BytePositionInLine: 0.'
Inner exceptions found, see $exception in variables window for more details.
Innermost exception System.Text.Json.JsonReaderException : '<' is an invalid start of a value. LineNumber: 0 | BytePositionInLine: 0.
at System.Text.Json.ThrowHelper.ThrowJsonReaderException(Utf8JsonReader& json, ExceptionResource resource, Byte nextByte, ReadOnlySpan1 bytes) at System.Text.Json.Utf8JsonReader.ConsumeValue(Byte marker) at System.Text.Json.Utf8JsonReader.ReadFirstToken(Byte first) at System.Text.Json.Utf8JsonReader.ReadSingleSegment() at System.Text.Json.Utf8JsonReader.Read() at System.Text.Json.Serialization.JsonConverter1.ReadCore(Utf8JsonReader& reader, JsonSerializerOptions options, ReadStack& state)

Expected behavior
Get a result back


Desktop (please complete the following information):
Windows, C#, 6.5.0 (and 6.4.0)

Best model for advanced machine translation?

Hi,

I'm looking for the best way to use GPT for text translation. Can you please advise me how, or which endpoint to use for that? I would also like to pass instructions for context and a glossary.

Thank you for your help.

Client is not configurable

Describe the bug
The HTTP client is not exposed and cannot be configured, so audio transcription with Whisper is limited to small files even though the service supports up to 1 hour of audio; the call is interrupted after the default 100-second timeout, which is not enough. Small files are processed correctly, though.

Your code piece

var audioCreateTranscriptionResponse = await _iOpenAIService.Audio.CreateTranscription(audioCreateTranscriptionRequest);

Result
Task is cancelled

Expected behavior
To be able to configure the client timeout.
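Until the client is configurable through the options, one possible workaround is to supply your own HttpClient with a longer timeout. This sketch assumes the OpenAIService constructor overload that accepts an HttpClient; check that your package version exposes it:

var httpClient = new HttpClient
{
    Timeout = TimeSpan.FromMinutes(10) // long enough for large Whisper transcriptions
};

var openAiService = new OpenAIService(new OpenAiOptions
{
    ApiKey = Environment.GetEnvironmentVariable("MY_OPEN_AI_API_KEY")
}, httpClient);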

CancellationToken support

Hi 👋

Is your feature request related to a problem? Please describe.
I would like to be able to pass CancellationToken to IOpenAIService endpoints.

Describe the solution you'd like
For example, when calling the completions endpoint it would be great to pass the token like this:

var result = await _service.Completions.CreateCompletion(new CompletionCreateRequest    
    {
        ..                                                                                   
    },                                                                                  
    cancellationToken: cancellationToken);                                              

Describe alternatives you've considered

x

Additional context

x

Maybe someone has already thought about this and you already have thoughts or opinions? Happy to prepare a PR for this.

Thanks!

Fine tuning

Hi,

And thanks for sharing your work. Does this support fine-tuning as well?

The ChatCompletionCreateResponse Usage is null in CreateCompletionAsStream

Describe the bug
When I use the CreateCompletionAsStream() method, the response does not include the Usage object, but CreateCompletion() does.

Your code piece

var chatResponse = openAiService.ChatCompletion.CreateCompletionAsStream(new ChatCompletionCreateRequest
{
    Messages = new List<ChatMessage>
    {
        ChatMessage.FromSystem("You are a helpful assistant."),
        ChatMessage.FromUser(inputStr),
    },
    Model = Models.ChatGpt3_5Turbo
});

var tokens = 0;
await foreach (var chat in chatResponse)
{
    if (chat.Successful)
    {
        tokens += chat.Usage?.TotalTokens ?? 0;
        var choice = chat.Choices.First();
        Console.Write(choice.Message.Content);
    }
}

Result
chat.Usage is null

Expected behavior
chat.Usage is not null and includes token counts.

Screenshots

The CreateCompletionAsStream() response and the CreateCompletion() response (screenshots omitted).

Desktop (please complete the following information):

  • OS: Linux
  • Language C#
  • Version 7.0.202

openAIService.Audio.CreateTranscription Returns completionResult.Successful even on invalid_request_error

Describe the bug
Send a CreateTranscription with a faulty key

Your code piece

          var completionResult = await _openAIService.Audio.CreateTranscription(new AudioCreateTranscriptionRequest
            {
                Model = Models.WhisperV1,
                ResponseFormat = StaticValues.AudioStatics.ResponseFormat.Text,
                File = fileBytes,
                FileName = inmp3.Name,
            });

Result
completionResult.Successful = true
completionResult.Text =
{
"error": {
"message": "Incorrect API key provided: sk-QoVWP***************************************m111. You can find your API key at https://platform.openai.com/account/api-keys.",
"type": "invalid_request_error",
"param": null,
"code": "invalid_api_key"
}
}

Expected behavior
completionResult.Successful should be false


Desktop (please complete the following information):

  • Windows
  • C#
  • 6.8.0

Cannot install into my project.

Could you please help solve it:

Severity Code Description Project File Line Suppression State
Error Could not install package 'Betalgo.OpenAI.GPT3 6.7.2'. You are trying to install this package into a project that targets '.NETFramework,Version=v4.7.2', but the package does not contain any assembly references or content files that are compatible with that framework. For more information, contact the package author.

TokenizerGpt3.Encode incorrect

Describe the bug
TokenizerGpt3.Encode incorrect

Your code piece

 text = """
                Many words map to one token, but some don't: indivisible.

                Unicode characters like emojis may be split into many tokens containing the underlying bytes: 🤚🏾

                Sequences of characters commonly found next to each other may be grouped together: 1234567890
                """;
            int n=TokenizerGpt3.Encode(text).Count;

Result
= 260


Desktop (please complete the following information):

  • OS: [Windows]
  • Language [c#]
  • Version [6.7.0]


Question: get started with AzureOpenAI

Many thanks for the project. I'm trying to use it with the Azure OpenAI service, but I don't know how to find the DeploymentId. Could you please help? Thanks.
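For reference, a hedged sketch of Azure OpenAI configuration: the property names (ProviderType, ResourceName, DeploymentId) follow the Azure setup described in the project wiki, and the DeploymentId is the deployment name you chose in Azure OpenAI Studio. Verify the exact names against your library version:

var openAiService = new OpenAIService(new OpenAiOptions
{
    ProviderType = ProviderType.Azure,
    ApiKey = Environment.GetEnvironmentVariable("MY_AZURE_OPENAI_KEY"),
    ResourceName = "my-azure-openai-resource", // the Azure resource name
    DeploymentId = "my-gpt35-deployment"       // the model deployment name from Azure OpenAI Studio
});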

Install-Package: Dependency loop detected for package 'Betalgo.OpenAI.GPT3'

Getting this error when installing

Install-Package Betalgo.OpenAI.GPT3

The package(s) come(s) from a package source that is not marked as trusted.
Are you sure you want to install software from 'nuget.org'?
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "N"): A
Install-Package: Dependency loop detected for package 'Betalgo.OpenAI.GPT3'.

Result
It runs really slowly, and then fails.

Expected behavior
Install the package from the CLI

Desktop (please complete the following information):

  • OS: Windows 11
  • Language: Powershell Core 7.3.1
  • Version: 7.3.1

Including logprobs in request causes an error

When I include logprobs in a request I get an error:
var builder = new ConfigurationBuilder()
    .AddJsonFile("ApiSettings.json")
    .AddUserSecrets<Program>();

IConfiguration configuration = builder.Build();
var serviceCollection = new ServiceCollection();
serviceCollection.AddScoped(_ => configuration);

serviceCollection.AddOpenAIService();
var serviceProvider = serviceCollection.BuildServiceProvider();
var sdk = serviceProvider.GetRequiredService<IOpenAIService>();

var openAiService = serviceProvider.GetRequiredService<IOpenAIService>();

var completionResult = await openAiService.Completions.CreateCompletion(new CompletionCreateRequest()
{
    Prompt = "I never go to bed after ten o'clock.",
    LogProbs = 3,
    MaxTokens = 50
},
Models.Davinci);

ERROR MESSAGE:
{"The JSON value could not be converted to System.Nullable`1[System.Int32]. Path: $.choices[0].logprobs | LineNumber: 0 | BytePositionInLine: 344."}

Put Models in specialized collections

Is your feature request related to a problem? Please describe.
I sometimes choose the wrong model for the type of request I make

Describe the solution you'd like
I would like to type "EditModel." and get only models that are valid for an edit endpoint, or "ImageModel." and get valid Image models, etc.

Describe alternatives you've considered
I see they're labeled, but it'd be nice if it weren't possible to supply an invalid model to my request object.

Additional context
I just started playing with the library today, it's very helpful in general. If this request is approved, I'd be happy to help implement it.

Dalle Image Request Issue

When I try the code below:

var imageResult = await openAiService.Image.CreateImage(new ImageCreateRequest
{
    Prompt = "Laser cat eyes",
    N = 2,
    Size = StaticValues.ImageStatics.Size.Size256,
    ResponseFormat = StaticValues.ImageStatics.ResponseFormat.Url,
    User = "TEST"
});

I get an error for the image response like:

unhandled error: '<' is an invalid start of a value. Path: $ | LineNumber: 0 | BytePositionInLine: 0.

It should be able to get an image result.

Support proxy server configuration

Is your feature request related to a problem? Please describe.
I have encountered a very important issue. My environment does not have direct access to api.openai.com, and I need a proxy.

Describe the solution you'd like
Support configuring proxy servers in the options so that API requests can go through the proxy, for example:

builder.Services.AddOpenAIService(settings => {
    settings.ProxyUrl = "proxy serve url";
    settings.ProxyPort = "proxy serve port";
});
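Until such options exist, a possible workaround is to route the SDK's traffic through a proxy by passing in a pre-configured HttpClient. This sketch assumes the OpenAIService constructor overload that accepts an HttpClient, and the proxy address is illustrative:

var handler = new HttpClientHandler
{
    Proxy = new WebProxy("http://my-proxy-host:8080"), // your proxy server
    UseProxy = true
};

var openAiService = new OpenAIService(new OpenAiOptions
{
    ApiKey = Environment.GetEnvironmentVariable("MY_OPEN_AI_API_KEY")
}, new HttpClient(handler));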

The SSL connection could not be established, see inner exception.

Describe the bug
The SSL connection could not be established, see inner exception.
SocketException: An existing connection was forcibly closed by the remote host.

Your code piece

        public async Task GenerateResponse2()
        {
            var openAiService = new OpenAIService(new OpenAiOptions()
            {
                ApiKey = apiKey
            });

            var completionResult = await openAiService.ChatCompletion.CreateCompletion(new ChatCompletionCreateRequest
            {
                Messages = new List<ChatMessage>
                {
                    ChatMessage.FromSystem("You are a helpful assistant."),
                    ChatMessage.FromUser("Who won the world series in 2020?"),
                    ChatMessage.FromAssistant("The Los Angeles Dodgers won the World Series in 2020."),
                    ChatMessage.FromUser("Where was it played?")
                },
                Model = Models.ChatGpt3_5Turbo,
                MaxTokens = 50//optional
            });
            if (completionResult.Successful)
            {
                Console.WriteLine(completionResult.Choices.First().Message.Content);
            }
        }

Support Chinese characters in the tokenizer.

Is your feature request related to a problem? Please describe.
When I use the tokenizer to count Chinese characters, the result is not correct.
Describe the solution you'd like
Correct token counts for Chinese characters.

Describe alternatives you've considered
N/A
Additional context
N/A

Example for Moderation API has the test backward

The documentation for the Moderation API has the test backward. It should say "!= false" instead of "!= true", because "Flagged" is set to false if the content is OK, and true if the content is unacceptable.

I tested this with an actual call to the service, just to make sure I understood how it works.
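For illustration, a sketch of the check as it should read. The CreateModeration call and result shape follow the library's moderation object model (Results entries with a Flagged property); verify the exact names against your version:

var moderationResult = await openAiService.Moderation.CreateModeration(new CreateModerationRequest
{
    Input = "some user supplied text"
});

// Flagged is false when the content is OK and true when it violates the policy,
// so "flagged" handling must trigger on true, not on false.
if (moderationResult.Results.First().Flagged == true)
{
    Console.WriteLine("Content was flagged by the moderation endpoint.");
}
else
{
    Console.WriteLine("Content is OK.");
}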

[Feature Request] Support for ready-to-use GPT3.5

Irrelevant Remarks

Great library by the way, and as someone who does not like to deal with REST APIs, having it exposed as native methods makes my code cleaner.

Is your feature request related to a problem? Please describe.

According to the official API doc and Engadget news, ChatGPT's model is officially available as of today (2023.03.01). It would be nice if this library could support the corresponding model names for the completion API.

Describe the solution you'd like

I think there should be minimal API change - essentially adding an enumeration. This feature request only concerns the completion API.

Describe alternatives you've considered

I tried to make a fork but the ModelNameBuilder programming model is a bit confusing to me so I will leave this feature implementation to the original author😂


Dependency injection is not working properly

Describe the bug
Dependency injection is not working for Azure Functions with this overload serviceCollection.AddOpenAIService();
It only works with the overload serviceCollection.AddOpenAIService(settings => { settings.ApiKey = "APIKEY" });
It seems that the constructor of the OpenAIService is not called in the expected order, so the OpenAiOptions parameter has an empty key.

Your code piece
https://github.com/elias-rod/ChatGptPoc/tree/di-error
(branch di-error)

My secrets.json is

{
  "OpenAiOptions": {
    "ApiKey": "xxx"
  }
}

Error repro steps

  1. Download repo
  2. Checkout to branch branch di-error
  3. Create secrets.json
  4. Execute program


Desktop (please complete the following information):

  • OS: Windows 10
  • Language c#
  • Version Visual Studio 2022 17.5

HandshakeFailure

Would you please help check the following error? Thanks.

Describe the bug
Failed to run RunSimpleCompletionTest()
Got an exception: "| InnerException | {"Authentication failed because the remote party sent a TLS alert: 'HandshakeFailure'."} | System.Exception {System.Security.Authentication.AuthenticationException}"

Your code piece

using ChatGPTClient;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using OpenAI.GPT3.Extensions;
using OpenAI.GPT3.Interfaces;

var builder = new ConfigurationBuilder()
    .AddJsonFile("ApiSettings.json").AddUserSecrets<Program>();
IConfiguration configuration = builder.Build();

var serviceCollection = new ServiceCollection();
serviceCollection.AddScoped(_ => configuration);


serviceCollection.AddOpenAIService();
var serviceProvider = serviceCollection.BuildServiceProvider();
var sdk = serviceProvider.GetRequiredService<IOpenAIService>();
await CompletionTestHelper.RunSimpleCompletionTest(sdk);
Console.ReadLine();

Desktop (please complete the following information):

  • OS: Win7
  • Language C#
  • Version .net 6.0

JSON value could not be converted to System.Boolean

I got the following error when I try to call the await EngineTestHelper.FetchEnginesTest(sdk); method.

I have tested some other methods and they seemed to work, but this one does not. I do not know what the issue is.

OpenAIService does not gracefully close when parent thread / Transient dep injected service closes

Describe the bug
Exceptions are thrown when a transient service exists that leverages OpenAIService for a group of calls and the thread goes unused and naturally closes.

Your code piece

        private readonly IOpenAIService? _openAIService;
        private readonly ILocalSettings settingsService;
        private readonly ILoggingService Logger;
        private string _apikey;
        private bool _disposed;

        public OpenAIAPIService(ILocalSettings settingsService, ILoggingService logger)
        {
            this.settingsService = settingsService;
            this.Logger = logger;
            _apikey = settingsService.Load<string>("ApiKey");

            if (String.IsNullOrEmpty(_apikey))
            {
                _apikey = "Api Key Is Null or Empty";
                Logger.LogError("_apikey");
            }

            _openAIService = new OpenAIService(new OpenAiOptions()
            {
                ApiKey =  _apikey
            });
        }

Result
A series of socket exceptions occurs (the only socket code in this app is OpenAIService).

Expected behavior
OpenAIService needs to gracefully close or be IDisposable


Desktop (please complete the following information):

  • OS: [Windows]
  • Language [C#]
  • Version [6.8.0]

Additional context

Missing best_of parameter in ChatCompletionCreateRequest

Please add best_of parameter to the ChatCompletionCreateRequest.

More info: https://platform.openai.com/docs/api-reference/completions/create#completions/create-best_of

best_of (integer, Optional, Defaults to 1)

Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed.

When used with n, best_of controls the number of candidate completions and n specifies how many to return – best_of must be greater than n.

Copilot Proxy Support

The copilot proxy exposed by the Github Copilot CLI is https://copilot-proxy.githubusercontent.com/

The completion URI follows a similar structure, and I imagine it will work fine here:
https://copilot-proxy.githubusercontent.com/v1/engines/copilot-labs-codex/completions

I'm pretty sure it just proxies an Azure ML model based on the headers, so I propose adding a new provider type that is either:

  1. Copilot Specific
  2. Allow specifying an arbitrary base domain that isn't Azure-based.

CreateImageEdit Throw Exception if Mask is null

Describe the bug
When calling the CreateImageEdit method and passing an ImageEditCreateRequest object, if the optional parameter "Mask" is not set, it throws an exception:
Value cannot be null. (Parameter 'content')|System.ArgumentNullException: Value cannot be null. (Parameter 'content') at System.Net.Http.ByteArrayContent..ctor(Byte[] content) at OpenAI.GPT3.Managers.OpenAIService.CreateImageEdit(ImageEditCreateRequest imageEditCreateRequest)

Your code piece

var imageCreateResponse = await _openAIService.Image.CreateImageEdit(new ImageEditCreateRequest()
{
    Size = "512x512",
    N = 1,
    ImageName = "123456",
    Image = fileBytes,
    User = Cryptography.SHA1Hash("123456789", true),
    Prompt = "Robot, AI"

});

From the SDK source code, there is no check on the Mask property before adding it to the request:

multipartContent.Add(new ByteArrayContent(imageEditCreateRequest.Mask), "mask", imageEditCreateRequest.MaskName);

public async Task<ImageCreateResponse> CreateImageEdit(ImageEditCreateRequest imageEditCreateRequest)
    {
        var multipartContent = new MultipartFormDataContent();
        if (imageEditCreateRequest.User != null) multipartContent.Add(new StringContent(imageEditCreateRequest.User), "user");
        if (imageEditCreateRequest.ResponseFormat != null) multipartContent.Add(new StringContent(imageEditCreateRequest.ResponseFormat), "response_format");
        if (imageEditCreateRequest.Size != null) multipartContent.Add(new StringContent(imageEditCreateRequest.Size), "size");
        if (imageEditCreateRequest.N != null) multipartContent.Add(new StringContent(imageEditCreateRequest.N.ToString()!), "n");

        multipartContent.Add(new StringContent(imageEditCreateRequest.Prompt), "prompt");
        multipartContent.Add(new ByteArrayContent(imageEditCreateRequest.Image), "image", imageEditCreateRequest.ImageName);
        multipartContent.Add(new ByteArrayContent(imageEditCreateRequest.Mask), "mask", imageEditCreateRequest.MaskName);

        return await _httpClient.PostFileAndReadAsAsync<ImageCreateResponse>(_endpointProvider.ImageEditCreate(), multipartContent);
    }
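The guard the reporter suggests would look roughly like this (illustrative only, not the library's actual patch): only attach the mask part when one was provided.

if (imageEditCreateRequest.Mask != null)
{
    multipartContent.Add(new ByteArrayContent(imageEditCreateRequest.Mask), "mask", imageEditCreateRequest.MaskName);
}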

Tokenizer test failing on windows

Describe the bug
The Tokenizer test fails on Windows, due to receiving a different number of tokens than expected (68 instead of 64, unlike https://platform.openai.com/tokenizer ). The issue is that when reading TokenizerSample.txt, newlines are read as \r\n, resulting in 2 tokens each instead of 1.


Desktop

  • OS: Windows

On the relevance between problems

hi there:
I found that there is no correlation between the questions submitted using the API interface; I can't chat with it continuously. This is different from the behavior of the ChatGPT web system.
Is the method I am using incorrect?
Thank you!

Typo in OpenAI.GPT3.ObjectModels.ResponseModels.Categories

Describe the bug
I'm trying to build an OpenAI integration with chat moderation features, and I found a small naming inconsistency: Categories uses Categories.Sexualminors, while CategoryScores, for the same JSON field 'sexualminors', uses CategoryScores.SexualMinors.

Your code piece

public static IEnumerable<float> GetScores(this CategoryScores scores)
    {
        yield return scores.Hate;
        yield return scores.HateThreatening;
        yield return scores.Selfharm;
        yield return scores.Sexual;
        yield return scores.SexualMinors;
        yield return scores.Violence;
        yield return scores.Violencegraphic;
    }

    public static IEnumerable<bool> GetCategories(this Categories category)
    {
        yield return category.Hate;
        yield return category.HateThreatening;
        yield return category.Selfharm;
        yield return category.Sexual;
        yield return category.Sexualminors;
        yield return category.Violence;
        yield return category.Violencegraphic;
    }

Result

Expected behavior
The naming should be consistent, either Sexualminors or SexualMinors, but I prefer SexualMinors.

Desktop (please complete the following information):

  • OS: Windows 10 22H2
  • Language: C#
  • Version: v6.8.0

Additional context

Whisper

Count this as a vote to add Whisper to the library.

Count of TokenizerGpt3.Encode

Hello, I would like to request an addition to TokenizerGpt3.Encode that would just return the number of tokens, without creating unnecessary lists. In addition, there are a couple of further optimizations:

  • Encode should return IEnumerable<int> and use yield; there is no need to create a new list just to return a sequence (see the sketch after this list).
  • BytePairEncoding(token).Split(' ').Select(x => TokenizerGpt3Settings.Encoder[x]).ToList(); - there is also no need for .ToList(), since AddRange works just as well without it.
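A sketch of the requested shape: BytePairEncoding and TokenizerGpt3Settings.Encoder are the internals named above, and SplitIntoTokens stands in for the tokenizer's existing splitting step, so this is illustrative rather than the library's real implementation.

public static IEnumerable<int> EncodeLazy(string text)
{
    foreach (var token in SplitIntoTokens(text)) // hypothetical: the tokenizer's existing split step
    {
        foreach (var piece in BytePairEncoding(token).Split(' '))
        {
            yield return TokenizerGpt3Settings.Encoder[piece];
        }
    }
}

// Counting tokens then allocates no intermediate lists:
// var count = EncodeLazy(text).Count();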

TLS alert: 'HandshakeFailure'

Hi, this is the code piece:

public async Task<string> GetAnswer(string text)
{
    string answer = "";

    OpenAIService service = new OpenAIService(new OpenAiOptions() { ApiKey = OpenAIToken });

    CompletionCreateRequest createRequest = new CompletionCreateRequest()
    {
        Prompt = text,
        Temperature = 0.3f,
        MaxTokens = 1000
    };

    var res = await service.Completions.CreateCompletion(createRequest, Models.TextDavinciV3);

    if (res.Successful)
    {
        answer = res.Choices.FirstOrDefault().Text;
    }

    return answer;
}

When I run it on my server, it returns an error: Authentication failed because the remote party sent a TLS alert: 'HandshakeFailure'.

When I run it on my local PC, it runs OK.
I have checked the TLS settings on the server and the local PC; they are the same.
By the way, I can get a normal response from openai.com's playground on the server.

thank you.

Fine Tuning Model/Engine error

Currently, we have an issue with model (engine) usage. It prevents the use of fine-tuned models.

I am planning to fix this next week (16-22 January). Please follow this issue for notifications.

Answers, Classification, Search Endpoint Deprecation

Describe the bug
Today, we’re announcing that we will be deprecating the Answers, Classifications, and Search endpoints.

Since releasing these endpoints, we’ve developed new methods that achieve better results for these tasks. As a result, we’ll be removing the Answers, Classifications, and Search endpoints from our documentation and removing access to these endpoints on December 3, 2022.

We encourage anyone using these endpoints to switch over to newer techniques which produce better results, and have developed transition guides to help with this process.

Your code piece

Exception message:
Org org-OqrhlCeqzLz6NBDkPuE1o5IY does not have access to the answers endpoint, likely because it is deprecated. Please see https://community.openai.com/t/answers-classification-search-endpoint-deprecation/18532 for more information and reach out to [email protected] if you have any questions.

Result
If you are currently using these endpoints, this change will not immediately impact you. Prior to December 3, you will still be able to make requests and access the endpoint documentation by directly navigating to these pages:

Additional context
Link to the official message.
