deniska83 / window.ai

This project is forked from alexanderatallah/window.ai

Home Page: https://windowai.io

License: MIT License

Window: use your own AI models on the web

Window is a browser extension that lets you use model-polymorphic AI apps.

  • For developers: free from API costs and limits - just use the injected window.ai library

  • For users: use your preferred model, whether it's external (like OpenAI), proxied, or local, to protect privacy.

More about why this was made here.

Below, you'll find out how to install, how to find apps, how to make apps, and how to connect custom models.

📺 Demo

demo.mp4

โ„น๏ธ Contents

โญ๏ธ Main features

  • Configure keys: set all your API keys in one place and forget about them. They are only stored locally.

  • User-controlled models: use external, proxied, and local models of your choice.

  • Prompt history: save your prompt history across apps (and maybe train your own models with it).

โš™๏ธ How it works

  1. You configure your keys and models just once (see demo above).

  2. Apps can request permission to send prompts to your chosen model via the injected window.ai library (see the simple docs).

  3. You maintain visibility on what's being asked and when.

It works with external models (like OpenAI), proxied models, and local models.
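Because the extension injects window.ai into the page after load, apps typically wait for it before making requests. A minimal sketch, assuming a polling approach — the `waitForWindowAI` helper, its timeout values, and the simplified `WindowAI` type are all illustrative, not part of the library:

```typescript
// Hypothetical helper: poll until an extension-injected `ai` object appears.
// The WindowAI type is simplified; interval and timeout are illustrative choices.
type WindowAI = {
  getCurrentModel: () => Promise<string>
  getCompletion: (input: unknown, options?: unknown) => Promise<unknown>
}

function waitForWindowAI(
  getWindow: () => { ai?: WindowAI },
  timeoutMs = 3000,
  intervalMs = 100
): Promise<WindowAI> {
  return new Promise((resolve, reject) => {
    const start = Date.now()
    const poll = () => {
      const ai = getWindow().ai
      if (ai) return resolve(ai)
      if (Date.now() - start >= timeoutMs) {
        return reject(new Error("window.ai not found - is the extension installed?"))
      }
      setTimeout(poll, intervalMs)
    }
    poll()
  })
}
```

In a real page you would call this as `waitForWindowAI(() => window)` and fall back to an install prompt on rejection.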

📥 Installation

This extension is in beta and not on stores yet. For now, you can join the #beta-testing channel on Discord to get access to a downloadable extension that you can load into Chrome.

👀 Find apps

Better ways of doing this are coming soon, but today, you can use the Discord #app-showcase channel to discover new window.ai-compatible apps, or you can browse user-submitted ones on aggregators.

📄 Docs

This section shows why and how to get started, followed by a reference of window.ai methods.

Why should I build with this?

As a developer, one of the primary reasons to use window.ai instead of direct API calls is reducing your infrastructure burden: no more model API costs, timeouts, rate limits, or server billing.

Plus, depending on what you build, you may not need any code changes when new models come out, like GPT-4, or when users want to switch between them.

Lastly, now you can build privacy-conscious apps that just talk to the user's choice of model, and you have less liability for the model's output.

Getting started

To leverage user-managed models in your app, simply call await window.ai.getCompletion with your prompt and options.

Example:

const response: Output = await window.ai.getCompletion({
  messages: [{ role: "user", content: "Who are you?" }]
})

console.log(response.message.content) // "I am an AI language model"

All public types, including error messages, are documented in this file. Input, for example, allows you to use both simple strings and ChatML.

Example of streaming GPT-4 results to the console:

await window.ai.getCompletion({
  messages: [{role: "user", content: "Who are you?"}]
}, {
  temperature: 0.7,
  maxTokens: 800,
  model: ModelID.GPT4,
  onStreamResult: (res) => console.log(res.message.content)
})

Note that getCompletion will return an array, Output[], if you specify numOutputs > 1.
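One way to cope with both return shapes is to normalize the result before use. A small sketch — the `Output` type is simplified from the documented one, and `asOutputs` is a hypothetical helper, not part of the library:

```typescript
// Simplified shapes from the docs; the real Output type has more fields.
type ChatMessage = { role: string; content: string }
type Output = { message: ChatMessage }

// Normalize the single-output and multi-output cases into one shape.
function asOutputs(result: Output | Output[]): Output[] {
  return Array.isArray(result) ? result : [result]
}
```

With this, `asOutputs(await window.ai.getCompletion(input, options))` always yields an array, whether or not you passed numOutputs > 1.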

Reference

Better version coming soon. In the meantime, all public types, including error messages, are documented in this file. There are just two functions in the library:

Current model: get the user's currently preferred model ID.

window.ai.getCurrentModel(): Promise<ModelID> 

Get completion: get or stream a completion from the specified (or preferred) model.

window.ai.getCompletion(
  input: Input,
  options: CompletionOptions = {}
): Promise<Output | Output[]>

Input is either a { prompt: string } or a { messages: ChatMessage[] }. For examples, see getting started above.
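As a sketch of that union, here is a hypothetical helper that normalizes either Input shape into ChatML messages — the choice of the "user" role for plain prompts is an assumption, not something the spec mandates:

```typescript
type ChatMessage = { role: string; content: string }
type Input = { prompt: string } | { messages: ChatMessage[] }

// Convert either Input shape into a ChatML message list.
// Plain prompts are assigned the "user" role (an illustrative choice).
function toMessages(input: Input): ChatMessage[] {
  return "messages" in input
    ? input.messages
    : [{ role: "user", content: input.prompt }]
}
```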

🧠 Local model setup

You can configure any local model to work with Window-compatible apps by writing a simple HTTP server.

Here are instructions for setting up an Alpaca server locally with FastAPI and Uvicorn: Alpaca Turbo.

Server API Spec

Types

  • ChatMessage: {"role": string, "content": string}

POST /completions

This endpoint accepts a request body containing the following parameters:

  • model: A string identifier for the model type to use.
  • prompt: The prompt(s) to generate completions for, encoded as a string. Alternatively, use ChatML format via messages.
  • messages: An array of ChatMessage objects.
  • max_tokens: The maximum number of tokens to generate in the completion.
  • temperature: The sampling temperature to use, between 0 and 2.
  • stop_sequences: A string or array of strings where the API will stop generating further tokens. The returned text will not contain the stop sequence.
  • stream: Whether to stream generated tokens, sent as data-only server-sent events as they become available. Defaults to false.
  • num_generations: How many choices to generate (should default to 1).
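The request body above can be described with a TypeScript sketch like the following. The field names come from the spec; the `CompletionRequest` interface and the validation rules in `validateRequest` are illustrative, not a normative part of it:

```typescript
interface ChatMessage {
  role: string
  content: string
}

// Request body for POST /completions, per the spec above.
interface CompletionRequest {
  model: string
  prompt?: string
  messages?: ChatMessage[]
  max_tokens?: number
  temperature?: number
  stop_sequences?: string | string[]
  stream?: boolean
  num_generations?: number
}

// Return an error string for an invalid body, or null if it looks usable.
// These checks are illustrative; a real server may enforce more.
function validateRequest(body: CompletionRequest): string | null {
  if (!body.prompt && !body.messages) {
    return "either prompt or messages is required"
  }
  if (
    body.temperature !== undefined &&
    (body.temperature < 0 || body.temperature > 2)
  ) {
    return "temperature must be between 0 and 2"
  }
  return null
}
```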

Note: apps like windowai.io will ask to stream, so your local server might not work with them until you support streaming.
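Data-only server-sent events are just `data: <payload>` frames separated by blank lines. A minimal formatting sketch — `toSSE` is a hypothetical helper, and the chunk shape mirrors the /completions return value:

```typescript
// Format one generated chunk as a data-only server-sent event frame.
// SSE frames are "data: <payload>" followed by a blank line.
function toSSE(chunk: { choices: { text: string }[] }): string {
  return `data: ${JSON.stringify(chunk)}\n\n`
}
```

A streaming server would write one such frame per generated token (with a `Content-Type: text/event-stream` response header) instead of a single JSON body.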

Return value:

This endpoint should return an object that looks like: { "choices": Array<{ text: string }> }.

More WIP thinking here.

Demo comparing Alpaca with GPT-4

Demo context

WindowAlpacaRecordingApr4.mp4

๐Ÿค Contributing

This is a Turborepo monorepo containing:

  1. A Plasmo extension project.
  2. A web app serving windowai.io.
  3. Upcoming packages to help developers (see Discord for more info).

To run the extension and the web app in parallel:

pnpm dev

To build them both:

pnpm build

After building, load the appropriate development build into your browser as an unpacked extension. For example, if you are developing for Chrome with manifest v3, use: build/chrome-mv3-dev.

Contributors

alexanderatallah, cfortuner
