s-fletcher / llm-api

This project forked from dzhng/llm-api


Fully typed & consistent chat APIs for OpenAI, Anthropic, and Azure chat models, for browser, edge, and Node environments.

Home Page: https://www.npmjs.com/package/llm-api

License: MIT License

Shell 0.12% JavaScript 2.03% TypeScript 97.86%

llm-api's Introduction

✨ LLM API


Fully typed chat APIs for OpenAI, Anthropic, and Azure chat models, for browser, edge, and Node environments.

👋 Introduction

  • Clean interface for text and chat completion for OpenAI, Anthropic, and Azure models
  • Catch token overflow errors automatically on the client side
  • Handle rate limits and other API errors as gracefully as possible (e.g. exponential backoff on rate limits)
  • Support for browser, edge, and node environments
  • Works great with zod-gpt for outputting structured data
import { OpenAIChatApi } from 'llm-api';

const openai = new OpenAIChatApi({ apiKey: 'YOUR_OPENAI_KEY' });

const resText = await openai.textCompletion('Hello');

const resChat = await openai.chatCompletion({
  role: 'user',
  content: 'Hello world',
});

🔨 Usage

Install

This package is hosted on npm:

npm i llm-api
yarn add llm-api

Model Config

To configure a new model endpoint:

const openai = new OpenAIChatApi(params: OpenAIConfig, config: ModelConfig);

These model configs map directly to OpenAI's request parameters; see the docs: https://platform.openai.com/docs/api-reference/chat/create

interface ModelConfig {
  model?: string;
  contextSize?: number;
  maxTokens?: number;
  temperature?: number;
  topP?: number;
  stop?: string | string[];
  presencePenalty?: number;
  frequencyPenalty?: number;
  logitBias?: Record<string, number>;
  user?: string;

  // use stream mode for the API response; the streamed tokens will be sent to `events` in `ModelRequestOptions`
  stream?: boolean;
}
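For illustration, here is a self-contained config object conforming to this interface. The interface is reproduced inline so the snippet runs standalone; the specific values are arbitrary examples, not recommendations:

```typescript
// ModelConfig reproduced from the docs above so this snippet is standalone.
interface ModelConfig {
  model?: string;
  contextSize?: number;
  maxTokens?: number;
  temperature?: number;
  topP?: number;
  stop?: string | string[];
  presencePenalty?: number;
  frequencyPenalty?: number;
  logitBias?: Record<string, number>;
  user?: string;
  stream?: boolean;
}

const config: ModelConfig = {
  model: 'gpt-4-0613',
  contextSize: 8192,
  temperature: 0.2, // low temperature for more deterministic output
  stop: ['\n\n'],   // stop generation at a blank line
  stream: false,
};
```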

Request

To send a completion request to a model:

const text: ModelResponse = await openai.textCompletion(api: CompletionApi, prompt: string, options: ModelRequestOptions);

const completion: ModelResponse = await openai.chatCompletion(api: CompletionApi, messages: ChatCompletionRequestMessage, options: ModelRequestOptions);

// respond to existing chat session, preserving the past messages
const response: ModelResponse = await completion.respond(message: ChatCompletionRequestMessage, options: ModelRequestOptions);

options: You can override the default request options via this parameter. A request is automatically retried on a rate-limit or server error.

type ModelRequestOptions = {
  // set to automatically add system message (only relevant when using textCompletion)
  systemMessage?: string | (() => string);

  // send a prefix to the model response so the model can continue generating from there, useful for steering the model towards certain output structures.
  // the response prefix WILL be appended to the model response.
  // for Anthropic's models ONLY
  responsePrefix?: string;

  // function related parameters are for OpenAI's models ONLY
  functions?: ModelFunction[];
  // force the model to call the following function
  callFunction?: string;

  // default: 3
  retries?: number;
  // default: 30s
  retryInterval?: number;
  // default: 60s
  timeout?: number;

  // the minimum number of tokens to allocate for the response. if the request is predicted to not have enough tokens, it will automatically throw a 'TokenError' without sending the request
  // default: 200
  minimumResponseTokens?: number;

  // the maximum number of tokens to use for the response
  // NOTE: for OpenAI models, setting this option also requires contextSize in ModelConfig to be set
  maximumResponseTokens?: number;
};
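The retry behavior described by retries and retryInterval can be sketched as a generic exponential-backoff wrapper. This is a simplified illustration of the documented behavior, not the library's actual implementation; retryInterval is assumed here to be the base delay in milliseconds:

```typescript
// Simplified sketch of retry with exponential backoff.
// NOT the library's actual code - an illustration of the documented defaults.
async function withRetries<T>(
  fn: () => Promise<T>,
  retries = 3,            // default: 3 (matches ModelRequestOptions)
  retryInterval = 30_000, // default: 30s base delay
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (e) {
      lastError = e;
      if (attempt === retries) break;
      // back off exponentially: base, 2x base, 4x base, ...
      const delay = retryInterval * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

In this sketch, a persistently failing request with the defaults would wait roughly 30s, 60s, then 120s before giving up.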

Response

Completion responses are in the following format:

interface ModelResponse {
  content?: string;

  // used to parse function responses
  name?: string;
  arguments?: JsonValue;

  usage?: {
    promptTokens: number;
    completionTokens: number;
    totalTokens: number;
  };

  // function to send another message in the same 'chat', this will automatically append a new message to the messages array
  respond: (
    message: ChatCompletionRequestMessage,
    opt?: ModelRequestOptions,
  ) => Promise<ModelResponse>;
}
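The respond function works because the session keeps appending each exchange to its message history. A minimal sketch of that accumulation pattern (hypothetical ChatSession and Message types, not llm-api internals; the model call is replaced by a stub):

```typescript
// Minimal sketch of how a chat session accumulates history so each
// respond() call sees all prior messages. Hypothetical, NOT llm-api internals.
type Message = { role: 'user' | 'assistant' | 'system'; content: string };

class ChatSession {
  private messages: Message[] = [];

  async respond(message: Message): Promise<Message> {
    this.messages.push(message);
    // stub standing in for the real model call; it reports how much
    // history the "model" would have seen at this point
    const reply: Message = {
      role: 'assistant',
      content: `seen ${this.messages.length} message(s)`,
    };
    this.messages.push(reply);
    return reply;
  }
}
```

Each call grows the shared history by two entries (the user message and the reply), which is why later responses can reference earlier turns.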

📃 Token Errors

A common class of errors with LLM APIs involves token usage: you can only fit a certain amount of data in the model's context window.

If you set a contextSize key, llm-api will automatically determine whether the request would breach the token limit BEFORE sending it to the model provider (e.g. OpenAI). This saves a network round trip and lets you handle these types of errors responsively.

const openai = new OpenAIChatApi(
  { apiKey: 'YOUR_OPENAI_KEY' },
  { model: 'gpt-4-0613', contextSize: 8192 },
);

try {
  const res = await openai.textCompletion(...);
} catch (e) {
  if (e instanceof TokenError) {
    // handle token errors...
  }
}
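Conceptually, the pre-flight check compares the estimated prompt tokens plus the minimum response allocation against contextSize. The sketch below uses a crude ~4-characters-per-token heuristic in place of a real tokenizer (the library itself uses an actual tokenizer), and a hypothetical assertFitsContext helper:

```typescript
// Rough sketch of a client-side token-overflow check.
// The ~4 chars/token estimate is a crude heuristic for illustration only;
// llm-api uses a real tokenizer for this.
class TokenError extends Error {}

function assertFitsContext(
  prompt: string,
  contextSize: number,
  minimumResponseTokens = 200, // default from ModelRequestOptions
): void {
  const estimatedPromptTokens = Math.ceil(prompt.length / 4);
  if (estimatedPromptTokens + minimumResponseTokens > contextSize) {
    throw new TokenError(
      `prompt (~${estimatedPromptTokens} tokens) leaves no room for a ` +
        `${minimumResponseTokens}-token response in a ${contextSize}-token context`,
    );
  }
}
```

Because the check runs before any network call, oversized requests fail immediately on the client instead of after a provider round trip.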

🔷 Azure

llm-api also comes with support for Azure's OpenAI models. The Azure version is usually much faster and more reliable than OpenAI's own API endpoints. To use the Azure endpoints, you must include two Azure-specific options when initializing the OpenAI model, azureDeployment and azureEndpoint. The apiKey field will then hold the Azure API key.

You can find the Azure API key and endpoint in the Azure Portal. The Azure Deployment must be created under the Azure AI Portal.

Note that the model parameter in ModelConfig will be ignored when using Azure, because in the Azure system the model is selected when the deployment is created, not at runtime.

const openai = new OpenAIChatApi({
  apiKey: 'AZURE_OPENAI_KEY',
  azureDeployment: 'AZURE_DEPLOYMENT_NAME',
  azureEndpoint: 'AZURE_ENDPOINT',

  // optional, defaults to 2023-06-01-preview
  azureApiVersion: 'YYYY-MM-DD',
});

🔶 Anthropic

Anthropic's models have the unique advantage of a large 100k context window and extremely fast performance. If no explicit model is specified, llm-api will default to the claude-instant-1 model.

const anthropic = new AnthropicChatApi(params: AnthropicConfig, config: ModelConfig);

โ– Amazon Bedrock

const conf = {
  accessKeyId: 'AWS_ACCESS_KEY',
  secretAccessKey: 'AWS_SECRET_KEY',
};

const bedrock = new AnthropicBedrockChatApi(params: BedrockConfig, config: ModelConfig);

🤓 Debugging

llm-api uses the debug module for logging & error messages. To run in debug mode, set the DEBUG env variable:

DEBUG=llm-api:* yarn playground

You can also specify different logging types via:

DEBUG=llm-api:error yarn playground
DEBUG=llm-api:log yarn playground

llm-api's People

Contributors: dzhng, oscarmyepes, olsenbudanur
