This repo implements the OpenAI APIs on top of open source models: open source LLMs for chat, Whisper for audio, SDXL for images, intfloat/e5-large-v2 for embeddings, and so on. With this repo, you can interact with these models through the openai library or through LangChain.
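For instance, because the server exposes an OpenAI-compatible endpoint, LangChain's `ChatOpenAI` wrapper can simply be pointed at it. A minimal sketch, assuming the development server below is running on `localhost:8000` and that `Baichuan-13B-Chat` is one of the loaded models:

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Point LangChain's OpenAI client at the local server instead of api.openai.com.
chat = ChatOpenAI(
    openai_api_base="http://localhost:8000/api/v1",
    openai_api_key="none",  # the server does not validate the key
    model_name="Baichuan-13B-Chat",
)

reply = chat([HumanMessage(content="Which mountain is the second highest one in the world?")])
print(reply.content)
```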
Install the dependencies:

```shell
make install
```

Copy the example environment file and modify the `.env` file to your needs:

```shell
cp .env.example .env
```

Then start the development server:

```shell
make run
```
Note: models can be loaded at startup or on the fly.
| Services | API | Status | Description |
|---|---|---|---|
| Authorization | | | |
| Models | List models | ✅ Done | |
| Models | Retrieve model | | |
| Chat | Create chat completion | Partially done | Supports multiple LLMs |
| Completions | Create completion | | |
| Images | Create image | ✅ Done | |
| Images | Create image edit | | |
| Images | Create image variation | | |
| Embeddings | Create embeddings | ✅ Done | Supports multiple models |
| Audio | Create transcription | ✅ Done | |
| Audio | Create translation | ✅ Done | |
| Files | List files | ✅ Done | |
| Files | Upload file | ✅ Done | |
| Files | Delete file | ✅ Done | |
| Files | Retrieve file | ✅ Done | |
| Files | Retrieve file content | ✅ Done | |
| Fine-tunes | Create fine-tune | | |
| Fine-tunes | List fine-tunes | | |
| Fine-tunes | Retrieve fine-tune | | |
| Fine-tunes | Cancel fine-tune | | |
| Fine-tunes | List fine-tune events | | |
| Fine-tunes | Delete fine-tune model | | |
| Moderations | Create moderation | | |
| Edits | Create edit | | |
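The Files endpoints marked done above can be exercised with the legacy `openai` client as well. A minimal sketch; the file name `mydata.jsonl` and the `purpose` value are illustrative assumptions, not repo requirements:

```python
import openai

openai.api_base = "http://localhost:8000/api/v1"
openai.api_key = "none"

# Upload a file, list files, then fetch the record and its content back.
uploaded = openai.File.create(file=open("mydata.jsonl", "rb"), purpose="fine-tune")
print(openai.File.list())
print(openai.File.retrieve(uploaded.id))
print(openai.File.download(uploaded.id))
openai.File.delete(uploaded.id)
```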
| Model | Embedding Dim. | Sequence Length | Checkpoint link |
|---|---|---|---|
| gte-large | 1024 | 512 | thenlper/gte-large |
| e5-large-v2 | 1024 | 512 | intfloat/e5-large-v2 |
| Model | Response Format | Checkpoint link |
|---|---|---|
| stable-diffusion-xl-base-1.0 | b64_json | stabilityai/stable-diffusion-xl-base-1.0 |
| stable-diffusion-xl-base-0.9 | b64_json | stabilityai/stable-diffusion-xl-base-0.9 |
| Model | #Params | Checkpoint link |
|---|---|---|
| whisper-1 | 1550 M | alias for whisper-large-v2 |
| whisper-large-v2 | 1550 M | large-v2 |
| whisper-medium | 769 M | medium |
| whisper-small | 244 M | small |
| whisper-base | 74 M | base |
| whisper-tiny | 39 M | tiny |
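The translation endpoint marked done above (non-English speech to English text) is called the same way as transcription in the legacy client. A minimal sketch, assuming a local recording named `speech.webm` (a hypothetical file):

```python
import openai

openai.api_base = "http://localhost:8000/api/v1"
openai.api_key = "none"

# Translate speech in another language to English text.
with open("speech.webm", "rb") as audio_file:
    translation = openai.Audio.translate("whisper-1", audio_file)
print(translation.text)
```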
```python
import openai

openai.api_base = "http://localhost:8000/api/v1"
openai.api_key = "none"

# Stream the response chunk by chunk.
for chunk in openai.ChatCompletion.create(
    model="Baichuan-13B-Chat",
    messages=[{"role": "user", "content": "Which mountain is the second highest one in the world?"}],
    stream=True,
):
    if hasattr(chunk.choices[0].delta, "content"):
        print(chunk.choices[0].delta.content, end="", flush=True)
```
```python
import openai

openai.api_base = "http://localhost:8000/api/v1"
openai.api_key = "none"

resp = openai.ChatCompletion.create(
    model="Baichuan-13B-Chat",
    messages=[{"role": "user", "content": "Which mountain is the second highest one in the world?"}],
)
print(resp.choices[0].message.content)
```
```python
import openai

openai.api_base = "http://localhost:8000/api/v1"
openai.api_key = "none"

embeddings = openai.Embedding.create(
    model="gte-large",
    input="The food was delicious and the waiter...",
)
print(embeddings)
```
```python
import openai

openai.api_base = "http://localhost:8000/api/v1"
openai.api_key = "none"

print(openai.Model.list())
```
```python
import openai
from base64 import b64decode
from IPython.display import Image

openai.api_base = "http://localhost:8000/api/v1"
openai.api_key = "none"

response = openai.Image.create(
    prompt="An astronaut riding a green horse",
    n=1,
    size="1024x1024",
    response_format="b64_json",
)

# Decode the base64 payload and render the image inline in the notebook.
b64_json = response["data"][0]["b64_json"]
image = b64decode(b64_json)
Image(image)
```
```python
# Cell 1: set up the openai client
import openai

openai.api_base = "http://localhost:8000/api/v1"
openai.api_key = "none"
```

```python
# Cell 2: create a recorder in the notebook
# Prerequisites:
#   sudo apt install ffmpeg
#   pip install torchaudio ipywebrtc notebook
#   jupyter nbextension enable --py widgetsnbextension
from ipywebrtc import AudioRecorder, CameraStream

camera = CameraStream(constraints={"audio": True, "video": False})
recorder = AudioRecorder(stream=camera)
recorder
```

```python
# Cell 3: transcribe the recording
temp_file = "/tmp/recording.webm"
with open(temp_file, "wb") as f:
    f.write(recorder.audio.value)

with open(temp_file, "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)
print(transcript.text)
```
This project draws on the code of many great developers, such as @xusenlinzy's api-for-open-llm and @hiyouga's LLaMA-Efficient-Tuning. Many thanks to them.