
shipbit / slickgpt Goto Github PK

Stars: 447 · Watchers: 14 · Forks: 94 · Size: 2.48 MB

SlickGPT is a light-weight "use-your-own-API-key" web client for the OpenAI API written in Svelte. It offers GPT-4 integration, a userless share feature and other superpowers.

Home Page: https://slickgpt.vercel.app

License: MIT License

JavaScript 0.68% HTML 0.58% CSS 0.51% Svelte 69.82% TypeScript 28.40%
chatgpt chatgpt-api openai svelte sveltekit

slickgpt's People

Contributors

gschurck · ratcha9 · schroedi · shackless · th8m0z · thenbe · timokorinth · xmoiduts


slickgpt's Issues

Model does not exist

When I try to use gpt-4 or gpt-4-32k, sending a query fails with a "model does not exist" error.

[Feature] Fuzzy search all messages in all chats

The title is self-explanatory. I wish ChatGPT had this, so I'll add it to this project: a command palette or a search bar that lets you search through all your chats would be very useful. I'll work on an initial PR.
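A command palette needs only a small matcher to get started. A minimal sketch (the `ChatMessage` shape and function names are assumptions, not SlickGPT's actual types): a subsequence-based fuzzy match applied to every message in every chat.

```typescript
export interface ChatMessage {
  chatId: string;
  content: string;
}

// True when every character of `query` appears in `text`, in order.
export function fuzzyMatch(query: string, text: string): boolean {
  let i = 0;
  const lower = text.toLowerCase();
  for (const ch of query.toLowerCase()) {
    i = lower.indexOf(ch, i);
    if (i === -1) return false;
    i++;
  }
  return true;
}

// Filter all messages across all chats by a fuzzy query.
export function searchMessages(messages: ChatMessage[], query: string): ChatMessage[] {
  return messages.filter((m) => fuzzyMatch(query, m.content));
}
```

A real palette would likely also rank results (e.g. by match tightness or recency) rather than just filter.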

Cancel completions

SlickGPT users should be able to cancel the generation of completions like on chat.openai.com:

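One way to wire this up, sketched under assumptions (names here are invented, not SlickGPT's actual streaming code): pass an AbortController's signal into the stream-reading loop, and let a "Stop generating" button call `controller.abort()`.

```typescript
// Hypothetical sketch: read a completion stream chunk by chunk, stopping as
// soon as the AbortSignal fires, so already-received text is kept.
export async function readUntilAborted(
  stream: ReadableStream<string>,
  signal: AbortSignal
): Promise<string[]> {
  const chunks: string[] = [];
  const reader = stream.getReader();
  try {
    while (!signal.aborted) {
      const { done, value } = await reader.read();
      if (done) break;
      if (value !== undefined) chunks.push(value);
    }
  } finally {
    reader.releaseLock();
  }
  return chunks;
}
```

The same signal can also be passed to `fetch(url, { signal })` so the underlying HTTP request is torn down, not just the UI loop.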

How to deploy it on vercel?

Hello! This project is very 👍
I want to know how to deploy it with Vercel.
I forked your project and created a project on Vercel, but when I clicked deploy there were many errors.

This is my deploy setting:

This is the error information:

[22:57:21.186] Running build in San Francisco, USA (West) – sfo1
[22:57:21.257] Cloning github.com/ddy-ddy/my-personal-chatgpt (Branch: master, Commit: 1b19c0f)
[22:57:21.389] Previous build cache not available
[22:57:22.355] Cloning completed: 1.098s
[22:57:22.480] Running "vercel build"
[22:57:22.913] Vercel CLI 28.18.5
[22:57:23.468] Installing dependencies...
[22:57:36.361] 
[22:57:36.361] added 411 packages in 13s
[22:57:36.361] 
[22:57:36.361] 34 packages are looking for funding
[22:57:36.361]   run `npm fund` for details
[22:57:36.387] Detected `package-lock.json` generated by npm 7+...
[22:57:36.388] Running "npm run build"
[22:57:36.678] 
[22:57:36.679] > @shipbit/[email protected] build
[22:57:36.679] > vite build
[22:57:36.679] 
[22:57:37.536] 
[22:57:37.541] vite v4.2.1 building SSR bundle for production...
[22:57:37.576] transforming...
[22:57:39.127] 
[22:57:39.127] warn - As of Tailwind CSS v3.3, the `@tailwindcss/line-clamp` plugin is now included by default.
[22:57:39.127] warn - Remove it from the `plugins` array in your configuration to eliminate this warning.
[22:57:43.242] ✓ 831 modules transformed.
[22:57:43.259] 2:57:43 PM [vite-plugin-svelte] ssr compile done.
[22:57:43.259] package          	files	  time	   avg
[22:57:43.259] @shipbit/slickgpt	  645	 1.62s	 2.5ms
[22:57:43.259] @sveltejs/kit    	    1	24.4ms	24.4ms
[22:57:43.260] ✓ built in 5.72s
[22:57:43.260] "FIREBASE_APIKEY" is not exported by "$env/static/private", imported by "src/misc/firebase.ts".
[22:57:43.260] file: /vercel/path0/src/misc/firebase.ts:4:1
[22:57:43.260] 2: import { getDatabase, get, ref } from 'firebase/database';
[22:57:43.260] 3: import {
[22:57:43.260] 4:   FIREBASE_APIKEY,
[22:57:43.260]      ^
[22:57:43.260] 5:   FIREBASE_AUTHDOMAIN,
[22:57:43.260] 6:   FIREBASE_PROJECTID,
[22:57:43.262] error during build:
[22:57:43.263] RollupError: "FIREBASE_APIKEY" is not exported by "$env/static/private", imported by "src/misc/firebase.ts".
[22:57:43.263]     at error (file:///vercel/path0/node_modules/rollup/dist/es/shared/node-entry.js:2105:30)
[22:57:43.263]     at Module.error (file:///vercel/path0/node_modules/rollup/dist/es/shared/node-entry.js:13174:16)
[22:57:43.263]     at Module.traceVariable (file:///vercel/path0/node_modules/rollup/dist/es/shared/node-entry.js:13559:29)
[22:57:43.263]     at ModuleScope.findVariable (file:///vercel/path0/node_modules/rollup/dist/es/shared/node-entry.js:12061:39)
[22:57:43.263]     at Identifier.bind (file:///vercel/path0/node_modules/rollup/dist/es/shared/node-entry.js:7933:40)
[22:57:43.263]     at CallExpression.bind (file:///vercel/path0/node_modules/rollup/dist/es/shared/node-entry.js:5722:28)
[22:57:43.263]     at CallExpression.bind (file:///vercel/path0/node_modules/rollup/dist/es/shared/node-entry.js:9470:15)
[22:57:43.263]     at ExpressionStatement.bind (file:///vercel/path0/node_modules/rollup/dist/es/shared/node-entry.js:5726:23)
[22:57:43.264]     at Module.bindReferences (file:///vercel/path0/node_modules/rollup/dist/es/shared/node-entry.js:13170:18)
[22:57:43.294] Error: Command "npm run build" exited with 1
[22:57:43.657] Deployment completed
[22:57:43.613] BUILD_UTILS_SPAWN_1: Command "npm run build" exited with 1

GPT 4

Is it possible to switch the model to GPT-4 if we have API access?

I can't access the app from my private vps

The app works fine locally, but when I run it on my server and access vps-ip-address:5173/, the app doesn't load at all.

I'm no expert in node or npm; what am I missing here?

[enhancement] Tutorial on dialogue sync or import/export

When users have multiple devices or are about to retire old ones, they prefer to transfer their data before deprecating the old device. It would be helpful to provide a dialogue import/export mechanism or a tutorial, or, one step further, a syncing mechanism between all of a user's devices.

Editable messages & chat branching

On chat.openai.com you can edit user prompts which "branches" the chat:

image

image

  • All prompts and completions after the edited one are still accessible and can be switched in the UI
  • The edited prompt creates a branch in the chat
  • Whenever you continue prompting in a branch, only the prompts and completions in your current branch are sent to the ChatGPT API (to save tokens and keep a clean context). Our token cost estimation has to reflect this correctly.
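One possible data model for this, sketched under assumed names (none of these types exist in SlickGPT today): each message keeps a list of child messages plus the index of the currently selected child, and only the active path from the root is sent to the API.

```typescript
export interface BranchMessage {
  content: string;
  children: BranchMessage[];
  activeChild: number; // index into children selecting the current branch
}

// Walk the active branch from the root, collecting contents in order.
// This is exactly the message list that would be sent to the API.
export function activeBranch(root: BranchMessage): string[] {
  const out: string[] = [];
  let node: BranchMessage | undefined = root;
  while (node) {
    out.push(node.content);
    node = node.children[node.activeChild];
  }
  return out;
}
```

Switching branches in the UI then reduces to changing `activeChild` on the edited message; the token cost estimate should run over `activeBranch(...)` only.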

HowTo: Host on other providers than Vercel

SlickGPT is set up to use Vercel as its hosting provider. It uses @sveltejs/adapter-vercel and configures the endpoints in the /api dir to run as Edge functions. Create some documentation and/or branches to show how to run it on:

  • Netlify
  • Azure
  • ?

I already tried with Netlify and encountered an unexpected issue where Netlify was not able to access env vars from $env/static/private. See this issue for more info and a workaround. Another problem was that longer prompts timed out because the edge/serverless functions ran too long on Netlify.

Use basic encryption to encrypt the openai key in browser local storage.

I think the cryptr module on npm would be an excellent option for this. There'd need to be an extra env variable for storing the server-side key; the FIREBASE_APIKEY variable could be a fallback, since all that's needed is a random string. I think it would make the software seem more secure. I can work on a PR to add this.
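As a sketch of the idea using only Node's built-in crypto module instead of the cryptr package (every name below, including the salt and how the secret is supplied, is an assumption): AES-256-GCM with a key derived from a server-side secret. The ciphertext is what would land in local storage; decryption happens server-side where the secret lives.

```typescript
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "node:crypto";

// Encrypt a key with a server-side secret (e.g. from an env variable).
// Output format: hex(iv).hex(authTag).hex(ciphertext)
export function encryptKey(plain: string, secret: string): string {
  const key = scryptSync(secret, "slickgpt-salt", 32); // salt is an assumption
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const enc = Buffer.concat([cipher.update(plain, "utf8"), cipher.final()]);
  return [iv, cipher.getAuthTag(), enc].map((b) => b.toString("hex")).join(".");
}

export function decryptKey(token: string, secret: string): string {
  const [iv, tag, enc] = token.split(".").map((h) => Buffer.from(h, "hex"));
  const key = scryptSync(secret, "slickgpt-salt", 32);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM authenticates as well as encrypts
  return Buffer.concat([decipher.update(enc), decipher.final()]).toString("utf8");
}
```

As the issue notes, this is more obfuscation than hard security, since anything decryptable for use in the browser is ultimately reachable there too.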

FIREFOX: Height attribute on chatInput textarea compressing below 1 row size

I have not been able to find what's applying an inline height: 20px on the textarea, but it's breaking things.
Firefox, latest, mobile and desktop.

Expectation
no height css

Reality
20px height css

In Chrome it seems to work fine and a height of 42px is assigned (desktop), so I'm guessing some JS or post-processor I don't know about is doing it; hence I'm posting here.

GPT-4-vision-preview support

Are there currently any plans to support the new GPT-4-vision model for uploading images in chats?

Can't select and copy text for SlickGpt text frames

I'm trying to select text, copy it and paste it into my editor....
It's impossible for some reason; the selection doesn't seem to work.
It's the same for the frame where I enter my text, for frames with my previous input and the GPT output, and for the context editing frame.

I'm running Chromium Version 111.0.5563.146 (Official Build) snap (64-bit)
Under Ubuntu 18.04

Premature AI reply display under old branch upon branching

Editing an existing message (①) should create a new branch first, and then the reply can be generated under the new branch like (②).

In the current version, editing a message and requesting completion results in the reply being generated under the current branch rather than the new one, which looks like the chat not being branched properly (③); the reply is then moved to the new branch after generation.


Hide API Key once entered

Hide or make it unreadable, once the API key is entered. Makes it easier to record videos or show this to someone else without spoiling the key ;-)
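A minimal sketch (the function and parameter names are invented): mask everything but a short prefix and suffix once the key has been saved.

```typescript
// Show only the first and last few characters of a stored API key so
// screen recordings and over-the-shoulder viewers don't leak it.
export function maskApiKey(key: string, visible = 4): string {
  if (key.length <= visible * 2) return "*".repeat(key.length);
  return key.slice(0, visible) + "…" + key.slice(-visible);
}
```

An "eye" toggle could still reveal the full key on demand, mirroring how password inputs behave.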

iPhone 7, iOS 15.7.3 - Safari browser

Streaming responses are not working on an iPhone 7 (iOS 15.7.3) in the Safari browser.

It may work with a JSON response.

I have tried to test it using my own Python code; it doesn't work either.
from fastapi import FastAPI, HTTPException, Request
from fastapi.responses import StreamingResponse
import json
import requests

app = FastAPI()

@app.post('/chatgpt-proxy')
async def chatgpt_proxy(request: Request):
    try:
        data = await request.json()
    except json.JSONDecodeError:
        raise HTTPException(status_code=400, detail="Invalid or empty JSON request body")

    headers = {
        'Authorization': f'Bearer {OPENAI_API_KEY}',
        'Content-Type': 'application/json',
    }
    print(data)
    print(headers)
    # Proxy the request to OpenAI and stream the response back to the client
    r = requests.post(OPENAI_API_URL, headers=headers, json=data, stream=True)
    return StreamingResponse(r.iter_content(chunk_size=8192), status_code=r.status_code)

HowTo: Self-host without Firebase / Share feature

The share feature in SlickGPT requires a database because we somehow have to pass the Chat object from client A to client B.

The project is currently set up to use a Firebase Realtime database as easy "JSON dump storage".

Create a branch and some documentation where the Share feature can easily be disabled and all Firebase dependencies (mostly the env vars) are removed from SlickGPT so that devs can run their own instance quicker and with less hassle in case they don't need to share chats.

Also, document more clearly which code devs would have to edit to use a database provider other than Firebase (which is basically just the share endpoint).

Wrong max token limits for GPT4-Turbo-1106

Currently the Max Tokens slider allows values up to 128,000 for the model gpt-4-1106-preview. However, per the OpenAI documentation, the model returns at most 4,096 completion tokens, making most of the slider range useless. Setting a value above 4096 results in an error when requesting a reply.

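A sketch of the fix (the limits table is an assumption based on this issue and the OpenAI docs at the time; SlickGPT's actual model config may differ): clamp the slider to the model's completion limit, not its context window.

```typescript
// Maximum *completion* tokens per model, distinct from the context window.
const MAX_COMPLETION_TOKENS: Record<string, number> = {
  "gpt-4-1106-preview": 4096, // 128k context, but only 4096 output tokens
  "gpt-4": 8192,
  "gpt-3.5-turbo": 4096,
};

// Clamp a requested Max Tokens value into the model's valid range.
export function clampMaxTokens(model: string, requested: number): number {
  const limit = MAX_COMPLETION_TOKENS[model] ?? 4096; // conservative fallback
  return Math.min(Math.max(requested, 1), limit);
}
```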

iOS/Safari - Lost chat in background

Whenever I send some input and put my device in standby or the app in the background while the answer is being written, both my input and the GPT answer get erased from the chat history.

Likely a bug?

I'd prefer that my input not be lost, and that the received data remain even if the response is incomplete.

iOS 16.4.1 (a) - iPhone 13 Pro Max

Self-hosted, behind nginx proxy: CORS and JSON errors

Hi, I managed to run slickgpt on my own server using @sveltejs/adapter-node, behind an nginx proxy with Let's Encrypt SSL.

When I try to use SuggestTitleModal.svelte I get some errors. They only show when I connect via my domain; going through IP:port directly, everything works fine.

JSON:

Uncaught (in promise) SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data SuggestTitleModal.svelte:573:2
    handleSuggestTitle SuggestTitleModal.svelte:573
    AsyncFunctionThrow self-hosted:856
    (Async: async)
    listen index.mjs:463
    listen_dev index.mjs:2325
    mount SuggestTitleModal.svelte:122
    mount SuggestTitleModal.svelte:459
    m svelte-hooks.js:291
    mount_component index.mjs:2097
    mount @skeletonlabs_skeleton.js:25390
    mount @skeletonlabs_skeleton.js:26472
    mount @skeletonlabs_skeleton.js:25276
    update @skeletonlabs_skeleton.js:26709
    update index.mjs:1347
    flush index.mjs:1307
    (Async: promise callback)
    schedule_update index.mjs:1258
    make_dirty index.mjs:2133
    ctx index.mjs:2171
    instance32 @skeletonlabs_skeleton.js:26759
    set index.mjs:34
    update index.mjs:42
    trigger stores.js:10
    showModalComponent shared.ts:129
    handleEditTitle Toolbar.svelte:13
    (Async: EventListener.handleEvent)
    listen index.mjs:463
    listen_dev index.mjs:2325
    mount Toolbar.svelte:80
    mount Toolbar.svelte:203
    m svelte-hooks.js:291
    mount_component index.mjs:2097
    mount +page.svelte:132
    mount +page.svelte:1315
    m svelte-hooks.js:291
    mount_component index.mjs:2097
    update root.svelte:292
    update_slot_base index.mjs:100
    update +layout.svelte:89
    update_slot_base index.mjs:100
    update @skeletonlabs_skeleton.js:5920
    update index.mjs:1347
    flush index.mjs:1307
    (Async: promise callback)
    schedule_update index.mjs:1258
    make_dirty index.mjs:2133
    ctx index.mjs:2171
    $$set root.svelte:619
    get proxy.js:83
    $set index.mjs:2272
    key proxy.js:46
    navigate client.js:1096
    InterpretGeneratorResume self-hosted:1455
    AsyncFunctionNext self-hosted:852
    (Async: async)
    _start_router client.js:1569
    (Async: EventListener.handleEvent)
    _start_router client.js:1485
    start start.js:27
    <anonymous> (index):30
    (Async: promise callback)
    <anonymous> (index):29

also in GET:

NS_ERROR_WEBSOCKET_CONNECTION_REFUSED

and in POST:

Status: 403 Forbidden
Version: HTTP/2
Transferred: 405 B (46 B size)
Referrer Policy: strict-origin-when-cross-origin
Request Priority: Highest
DNS Resolution: System

Create an "in-app" changelog

It would be cool if SlickGPT users could see the latest changes within the app. Maybe on the Dashboard?
A UI and technical concept is needed: where would this info come from, and where would we display it?

Input is laggy on mobile or slow clients

Reported by several users:
Typing on mobile or slow clients leads to a noticeable delay, probably as a result of the live token cost calculation.
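One common mitigation, sketched generically (the wait time below is an arbitrary assumption): debounce the expensive estimate so it runs only after the user pauses typing, instead of on every keystroke.

```typescript
// Generic debounce: each call resets the timer; `fn` fires only after
// `waitMs` of silence, with the most recent arguments.
export function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  waitMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

The input field itself stays bound directly; only the token-cost recomputation goes through the debounced wrapper, so typing stays responsive.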

Don't blink/fade text while GPT is typing

I am currently using GPT-4 to generate responses to my requests. However, as the generation process is slow, I find myself staring at a blinking textbox for minutes at a time. As a result, I am unable to read the output that has already been generated.

pls add copy code button

slick indeed!
It would be cool if you could add a "copy code" button when an answer contains generated code.

token tooltip flickering

If you scroll a chat box roughly two screens out of view and then bring it back, the "Approximate Token Cost" tooltip text appears very briefly, flickering, when you hover over the token count.

Integrate Moderation API

Quote OpenAI:

The moderation endpoint is a tool you can use to check whether content complies with OpenAI's usage policies. Developers can thus identify content that our usage policies prohibits and take action, for instance by filtering it.

See OpenAI Docs

This endpoint is free, and it would be nice if SlickGPT integrated this API.
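A sketch of what the integration could look like (the request shape follows OpenAI's public moderation docs; the helper name is an assumption). The built request would be passed to `fetch` before a prompt is forwarded to the chat endpoint.

```typescript
// Build a request for OpenAI's moderation endpoint.
export function buildModerationRequest(input: string, apiKey: string) {
  return {
    url: "https://api.openai.com/v1/moderations",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ input }),
    },
  };
}
```

The response's `results[0].flagged` boolean could then gate whether the prompt is sent on, or just trigger a warning in the UI.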

Ability to set default Model

It would be nice not to have to go in and change from 3.5-turbo to 4 every time I make a new thread.

(Great app btw!)

Auto-suggest chat titles with ChatGPT

chat.openai.com has this neat feature where the AI sets a title for your chat based on the first prompt and completion in the chat. People are bad at naming things ;)

Condition: if (chat.slug === chat.title) and there is at least one question and answer in the chat

Ideas:

  • either do this automatically as soon as the first answer arrived OR
  • Ask the user to name the chat or offer a button to let ChatGPT pick one. Do this when the user closes a chat and above conditions are met. This could save some tokens. Bonus: Estimate how many tokens this will roughly cost.

Technical:
it probably makes sense to set stream: false for this (and other) calls to the OpenAI API. This would require passing stream as a param to the endpoint in the Ask API and adding a new handler in the client that receives the answer. Currently stream is always hardcoded to true and the handler in ChatInput.svelte always treats the answer as an EventStream.
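The trigger condition above can be sketched as a small guard (the Chat shape here is an assumption, not SlickGPT's actual type):

```typescript
export interface Chat {
  slug: string;
  title: string;
  messages: { role: "user" | "assistant"; content: string }[];
}

// Suggest a title only if the user hasn't renamed the chat yet
// (slug === title) and at least one question and answer exist.
export function shouldSuggestTitle(chat: Chat): boolean {
  const hasQuestion = chat.messages.some((m) => m.role === "user");
  const hasAnswer = chat.messages.some((m) => m.role === "assistant");
  return chat.slug === chat.title && hasQuestion && hasAnswer;
}
```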

Unable to scroll up while chatgpt is typing

It'd be nice to scroll up and view the conversation history while GPT is "typing". This is currently not possible because of auto scroll. It's more noticeable when using GPT-4 because it's noticeably slower than older models (at the time of writing, at least).

Possible solutions:

  • (best) only auto scroll if the user is at the bottom of the convo
  • manually toggle auto scroll on/off
  • nothing, at the rate we're moving gpt4 speed will soon be a non-issue
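The first option reduces to one geometric check (the names and the pixel tolerance are assumptions): auto scroll only while the viewport already sits at the bottom of the conversation.

```typescript
// True when the scroll position is (within `tolerance` px of) the bottom.
// The tolerance absorbs fractional-pixel scroll values on some browsers.
export function isAtBottom(
  scrollTop: number,
  clientHeight: number,
  scrollHeight: number,
  tolerance = 8
): boolean {
  return scrollHeight - (scrollTop + clientHeight) <= tolerance;
}
```

In the chat component, each incoming chunk would call `isAtBottom(...)` on the scroll container before forcing a scroll, so a user who scrolled up stays where they are.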

Cannot select/copy text during response

Currently I cannot select/copy text during an ongoing response.
Issue? My selected text is immediately unselected.

Why would I want an incomplete selection?
Example: Big code request where the ai gives multiple code blocks and may have already finished one.

Why fix? This saves time as I would not need to wait for the response to finish before I can select and copy the text.

Would be very appreciated.

Browsers: msedge/117, chrome/114

Long code-formatted text won't wrap during generation

When generating long code-formatted text, even text that does contain newlines, the response bubble won't wrap and exceeds the screen width.

The properly wrapped format only appears after generation is complete.

Improve token cost calculation (+performance)

SlickGPT uses gpt3-tokenizer, which is one of the few libs I found that "just runs" in the browser. It's close enough but not perfect: the calculations in

export function estimateChatCost(chat: Chat): ChatCost {

are a bit "off" compared to the official OpenAI Tokenizer.

Another, bigger problem is performance. gpt3-tokenizer has a huge payload and is pretty slow. Other solutions use advanced features like Node Buffer structures and are much faster, but they don't run easily in the browser without Node.

Any ideas how to calculate the tokens per Chat (context, prompts, completions) more accurately and faster?
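Not an answer to the accuracy question, but as a fast dependency-free fallback sketch (the 4-characters-per-token ratio is OpenAI's published rule of thumb for English text; the per-message overhead constant is an assumption):

```typescript
// Rough token estimate: ~4 characters of English text per token.
export function roughTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}

// Estimate a whole chat; a few extra tokens per message approximate
// the chat-format overhead (role markers, separators).
export function roughChatTokens(messages: string[]): number {
  return messages.reduce((sum, m) => sum + roughTokenCount(m) + 4, 0);
}
```

A hybrid approach could use this cheap estimate live while typing and run the real tokenizer only on send.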
