
enricoros / big-agi

Generative AI suite powered by state-of-the-art models and providing advanced AI/AGI functions. It features AI personas, AGI functions, multi-model chats, text-to-image, voice, response streaming, code highlighting and execution, PDF import, presets for developers, much more. Deploy on-prem or in the cloud.

Home Page: https://big-agi.com

License: MIT License

JavaScript 45.86% TypeScript 53.00% CSS 1.09% Dockerfile 0.05%
chatgpt generative-ai ui chatgpt-ui agi large-language-models stable-diffusion gpt gpt-4 openai

big-agi's People

Contributors

aj47, ashesh3, dandv, defifofum, dogmatic69, edmondop, enricoros, felixclements, fredliubojin, g1ibby, harlanlewis, jacksongoode, joriskalz, justmrphoenix, koganei, konsila, kursad-k, llegomark, mludvig, nilshulth, penagwin, privtec, ptrckaraujo, ranfysvalle02, rossman22590, seven4x, shinkawk, smileynet, tboydston, typpo


big-agi's Issues

Load PDFs

Load PDFs in the UI and transform them to LLM input. This will enable answering questions / summarizing / paraphrasing / writing code / etc.

UX options:

  1. The existing file-load button will parse the PDF (right now text files are inserted as raw text in a ``` section)
  2. When pasting a PDF URL in the Composer, it should download the PDF, parse it, and queue it up as an attachment to the message
  3. If the size of the PDF exceeds the available context window, detail should be removed (references, appendix) or the text summarized before attaching the file

There seem to be many TS/JS libraries, including:

  1. https://www.npmjs.com/package/pdf2json
  2. https://www.npmjs.com/package/pdfreader
  3. more

Please vote / contribute ideas!
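Option 3 above could be sketched as a section-dropping pass over the parsed PDF. The `PdfSection` shape and the 4-characters-per-token estimate are illustrative assumptions, not part of the app:

```typescript
// Hypothetical sketch of UX option 3: drop low-value sections (references,
// appendix) until the parsed PDF text fits the model's context budget.

interface PdfSection { title: string; text: string; }

// crude token estimate: ~4 characters per token
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

const DROPPABLE = ['references', 'appendix', 'bibliography'];

function fitToContext(sections: PdfSection[], maxTokens: number): string {
  const kept = [...sections];
  // walk backwards, dropping droppable sections until the budget fits
  for (let i = kept.length - 1; i >= 0; i--) {
    const total = estimateTokens(kept.map(s => s.text).join('\n'));
    if (total <= maxTokens) break;
    if (DROPPABLE.includes(kept[i].title.toLowerCase()))
      kept.splice(i, 1);
  }
  return kept.map(s => s.text).join('\n');
}
```

A real implementation would use the model's tokenizer instead of the character heuristic, and fall back to summarization when dropping sections is not enough.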

In-line Free Scroll toggling

A common UX affordance is to enable free scroll and add a “⬇️ scroll to bottom” floating action button that disables free scroll. Optionally, this new behavior could be toggled off with another setting/env var.
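The toggle could key off a simple pinned-to-bottom predicate. This is a sketch with an assumed pixel threshold, not the app's actual scroll logic:

```typescript
// Minimal sketch: decide whether the chat view is pinned to the bottom; if
// not, show the "scroll to bottom" FAB and stop auto-scrolling until clicked.

const PIN_THRESHOLD_PX = 8; // small tolerance for sub-pixel scroll positions

function isPinnedToBottom(scrollTop: number, clientHeight: number, scrollHeight: number): boolean {
  return scrollHeight - (scrollTop + clientHeight) <= PIN_THRESHOLD_PX;
}
```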

Feature: export chats

Something I wished for in the official ChatGPT is the ability to save and export chat sessions: e.g. format-preserving HTML files, or a checkbox against each input/output message so we can bulk-select which messages to save to HTML files.

I guess it would be related to #14. If you had S3 storage support, you could also save chat conversations to S3 and recall them for view/review in the app - maybe loaded into the scratch pad notes #17 ? :)
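A first cut of the HTML export could look like this sketch. The `ChatMessage` shape is an assumption, not the app's actual message type:

```typescript
// Hypothetical sketch: serialize selected messages to a standalone HTML
// string that could be saved as a file or pushed to S3.

interface ChatMessage { role: 'user' | 'assistant' | 'system'; text: string; }

const escapeHtml = (s: string) =>
  s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');

function exportChatToHtml(title: string, messages: ChatMessage[]): string {
  const body = messages
    .map(m => `<div class="${m.role}"><b>${m.role}</b>: ${escapeHtml(m.text)}</div>`)
    .join('\n');
  return `<!DOCTYPE html><html><head><meta charset="utf-8"><title>${escapeHtml(title)}</title></head><body>\n${body}\n</body></html>`;
}
```

Formatting (code blocks, markdown) would need a renderer on top of this, but escaping and a per-message wrapper are the minimum for a readable export.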

Code run html

When I click the run button, I get a bunch of errors in the little sections running the code. How can I fix the code sandbox?

sourceMappingURL=data:application/json;base64,eyJ2ZXJzaW9uIjozLCJmaWxlIjoiaW5kZXguanMiLCJzb3VyY2VSb290IjoiIiwic291cmNlcyI6WyJpbmRleC50cyJdLCJuYW1lcyI6W10sIm1hcHBpbmdzIjoiO0FBQVEsSUFBSSxDQUFBO0FBQUMsUUFBUSxHQUFDLEdBQUcsQ0FBQTtBQUFDLEdBQUcsR0FBQyxRQUFRLENBQUEiLCJzb3VyY2VzQ29udGVudCI6WyI8YnV0dG9uIHR5cGU9XCJidXR0b25cIj5CdXkgTm93PC9idXR0b24+Il19'
at https://2-1-9-sandpack.codesandbox.io/static/js/sandbox.d52cf7871.js:1:258176
at e.value (https://2-1-9-sandpack.codesandbox.io/static/js/sandbox.d52cf7871.js:1:258295)
at e.value (https://2-1-9-sandpack.codesandbox.io/static/js/sandbox.d52cf7871.js:1:256570)
at Worker. (https://2-1-9-sandpack.codesandbox.io/static/js/sandbox.d52cf7871.js:1:257164)

Reasoning Systems 🧩

Implementing reasoning systems that are more sophisticated than the current turn-based chat (default).

Reasoning systems

  • default - turn-based chat; primitive, and okay
  • ReAct - "ReAct: Synergizing Reasoning and Acting in Language Models" (PDF)
  • DEPS - "Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents" (PDF)
  • Reflexion - "Reflexion: an autonomous agent with dynamic memory and self-reflection" (PDF)
  • Your epic idea 💡 - ...

Techniques

  • Early experiments with GPT-4 - "Sparks of Artificial General Intelligence"
  • HuggingGPT - "Solving AI Tasks with ChatGPT and its Friends in HuggingFace"
  • Task-driven Autonomous Agent (Yohei) - "Task-driven Autonomous Agent Utilizing GPT-4, Pinecone, and LangChain" (Tweet)

If you have feedback on which to pick first, UX, etc., comment below.

Project should have an identifiable name

Should this project have a more identifiable name than "NextJS ChatGPT App"?

GPT-4 suggested these:

  1. "ChatGPT Nexus": The term "Nexus" reflects the convergence of diverse features and customizations, while maintaining an academic tone. This name highlights the central role of ChatGPT as the core technology and suggests a hub where developers, scientists, and other users can collaborate and explore the app's diverse functionalities.

  2. "ChatGPT Synergy": "Synergy" suggests the interaction and cooperation of multiple components to create a combined effect greater than the sum of their individual effects. This name accentuates the app's integrated nature, where various features like chat streaming, code highlighting, and code execution work together to provide an enhanced user experience.

  3. "ChatGPT Codex": A "Codex" is an ancient manuscript that typically contains multiple texts. This name alludes to the project's rich collection of features and presets, signifying a comprehensive resource for developers and users to access and contribute to. Moreover, it conveys an academic tone, making it suitable for a project with diverse user types, including scientists and executives.

And then we can make the title of the application reflect that name, along with the current model from the settings (if my PR gets merged).

Anthropic API

It is similar to the OpenAI API:

        conversations = conversations.concat(`Human: ${text}\n\nAssistant: `);
        var response: ClaudeResponse = await fetcher(URL.Completion, {
            method: 'POST',
            headers: {
                "content-type": "application/json",
                "x-api-key": this.key,
            },
            body: {
                "prompt": conversations,
                "model": options ? options.model : 'claude-v1',
                "max_tokens_to_sample": options?.max_tokens_to_sample ?? 512,
                "stream": options?.stream,
                "stop_sequences": options?.stop_sequences,
                "temperature": options?.temperature,
                "top_k": options?.top_k,
                "top_p": options?.top_p,
            }
        });
        if (!response) throw new Error('Error parsing response body');
        conversations = conversations + response.completion + '\n\n';
        await fsAsync.writeFile(`data/${conversationId}.txt`, conversations, { encoding: 'utf-8' });
        return {
            response: response.completion,
            conversationId: conversationId,
        };
    }

I have access to anthropic.com and I have code we can test with. Please let me know what you think.

https://console.anthropic.com/docs

Context utilization UI & UX (8k, 4k, ..)

Measure the token usage of a chat, to know when we're about to hit the limit, and offer ways to deal with it.

UX could be a progress bar with utilization and remaining capacity when composing a new message. A red bar would hint at removing some older chat messages before being able to compose a new message. An alternative would be to automatically prune messages from the start of the conversation (but keeping the system message).

Tech note: context windows vary by model - GPT-4 8k has the largest until the 32k variant arrives.

Note: originally mentioned by @bytesuji in #2 (comment)
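The auto-pruning alternative might look like this sketch. The `Msg` shape and the 4-characters-per-token estimate are assumptions, not big-agi's actual store or tokenizer:

```typescript
// Hypothetical sketch: drop the oldest non-system messages until the
// estimated token total fits the model's context window.

interface Msg { role: 'system' | 'user' | 'assistant'; text: string; }

const approxTokens = (text: string): number => Math.ceil(text.length / 4);

function pruneToFit(messages: Msg[], maxTokens: number): Msg[] {
  const pruned = [...messages];
  const total = () => pruned.reduce((sum, m) => sum + approxTokens(m.text), 0);
  // index 1 skips the system message at the head; keep at least one chat turn
  while (total() > maxTokens && pruned.length > 2)
    pruned.splice(1, 1);
  return pruned;
}
```

The same estimate could also drive the proposed progress bar: utilization is `total() / maxTokens`, and the bar turns red as it approaches 1.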

Feature: paste grids/tables as markdown

When pasting (ctrl+v, or the "smart paste" button), tabular data should be converted into safe markdown.
This will give GPT a better way to relate to the data (and visually, it will be rendered better).

Rank City Country Population
1 Tokyo Japan 37,833,000
2 Delhi India 28,514,000
3 Shanghai China 25,582,000
4 São Paulo Brazil 21,650,000

Example above - see raw source
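A conversion along these lines could be sketched as a minimal helper. This is an assumed implementation, not the app's actual paste handler, and it handles plain tab-separated rows only:

```typescript
// Hypothetical sketch: convert tab-separated rows (e.g. pasted from a
// spreadsheet) into a GitHub-flavored markdown table.

function tsvToMarkdown(tsv: string): string {
  const rows = tsv.trim().split('\n').map(line => line.split('\t'));
  if (rows.length === 0) return '';
  const [header, ...body] = rows;
  const line = (cells: string[]) => `| ${cells.join(' | ')} |`;
  const separator = `| ${header.map(() => '---').join(' | ')} |`;
  return [line(header), separator, ...body.map(line)].join('\n');
}
```

The paste handler would apply this only when the clipboard content actually contains tab-delimited rows, falling back to plain text otherwise.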

Remember shared pages

Remember in-app the URLs that were generated from sharing (paste.gg), their expiration and deletion key.
Low priority.

Suggestion: Sans font for text, Mono for code

Hello,

I've modified my client so that the assistant uses the same font as the user for prose, mono for code.

image

Curious if you think this is desired. I think it enhances readability. If you'd like to keep assistant text fully mono, would you be open to a PR that adds a toggle?

ligatures in code

Loving the incredible progress on this repo!

Are you open to removing JetBrains Mono ligatures in code blocks? Heavily subjective opinion - I appreciate the legibility they add in sentence form, but find the transformation of !=, >=, => etc. quite distracting. The non-transformative kind (i.e. just whitespace balancing) of [||] etc. is fine.

Should this be done with a font-family switch, or CSS font-variant-ligatures?
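If the CSS route is chosen, a sketch of the override might look like this. The `codeBlockSx` object is hypothetical; `font-variant-ligatures` is a standard CSS property:

```typescript
// Hypothetical sx-style object: keep JetBrains Mono for code, but turn off
// the contextual ligatures that merge !=, >=, => into single glyphs.

const codeBlockSx = {
  fontFamily: '"JetBrains Mono", monospace',
  fontVariantLigatures: 'none', // or 'no-contextual' to keep some ligatures
};
```

This avoids shipping a second font file, which a font-family switch would require.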

Multiple chats & local history - conversation sidebar

ChatGPT-like conversations sidebar to have multiple chats - locally stored in localStorage/IndexedDB in your browser. This will make conversations instantly resumable and reusable.

Can have a transparent UX via left-side hamburger menu, and enable other features such as clearing the conversation, 'forking' a conversation, etc.

Originally posted by @bytesuji in #2 (comment)

UX - Increase chat bubbles' horizontal margin

I find centered chat views a better choice as they are more user-friendly and easier to read.

Having whitespace on both sides helps users focus on the conversation without feeling overwhelmed. Consistent line length is also maintained, making it easier to follow what's being said.

Is that something worth considering, @enricoros?

Thanks!

Error: the model `gpt-4` does not exist

Context

I have an issue while using your demo app. It seems like the "gpt-4" model does not exist when asking for answers from the chat app. Could you look into it, please?

Steps to reproduce

  1. open https://nextjs-chatgpt-app-enricoros.vercel.app/
  2. create an api key on openai.com
  3. feed it to the nextjs-chatgpt-app
  4. choose "Developer" in the dropdown
  5. ask the chat to give you the code for a pacman game (could be anything else TBH)

Result

image

Intended result

A valid answer from GPT-4 written in the chat.

RFC: Whisper integration

I propose integrating OpenAI's Whisper Automatic Speech Recognition (ASR) system [GitHub]. Whisper is designed to convert spoken language into written text.

Is it something that might be of interest?

AI Purpose: support list of role/content, not just system.

Really enjoying playing with this, I think you have the beginnings of a really useful product.

  1. For my purposes, I would prefer the AI purpose dropdown to be visible along with the chat window.
  2. I would like to be able to change the AI purpose without refreshing the page (which item 1 would make possible).
  3. The API allows for more context to be submitted with the prompt. Currently you support the 'system' role, but the API also supports 'user' and 'assistant' roles in this style:
    {"role": "system", "content": "You are the assistant to a Dungeon Master running the game 'Dungeons and Dragons'."},
    {"role": "user", "content": "How much damage does magic missile do when cast from a 5th level spell slot?"},
    {"role": "assistant", "content": "When cast at 5th level, magic missile deals 5d4 + 5 force damage,"},
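Point 3 could be sketched as a small message-builder. The function and types below are hypothetical, not the app's current API:

```typescript
// Hypothetical sketch: build the messages array from a persona's system
// prompt plus optional few-shot user/assistant examples, then append the
// live user input.

type Role = 'system' | 'user' | 'assistant';
interface ApiMessage { role: Role; content: string; }

function buildMessages(systemPrompt: string, fewShot: ApiMessage[], userInput: string): ApiMessage[] {
  return [
    { role: 'system', content: systemPrompt },
    ...fewShot, // pre-seeded user/assistant turns that shape the persona
    { role: 'user', content: userInput },
  ];
}
```

The few-shot pairs would be stored alongside the persona's system prompt, so each "purpose" carries its own examples.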

UI - File dialog

A file dialog button is easier to use and more intuitive than drag-and-drop.

How does it sound?

Expose API Base URL option

There are projects like Helicone which provide a wrapper around the OpenAI API for tracking prompt/response quality over time, error rates, pricing and token usage, etc. Additionally, some people might implement OpenAI-API-compatible open-source model servers, e.g. based on LLaMA. So it would be great if the user could change the base URL in the settings to use other services.

From the Helicone docs, the implementation should be pretty easy:

import { Configuration, OpenAIApi } from "openai";

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
  basePath: "https://oai.hconeai.com/v1",
});

const openai = new OpenAIApi(configuration);

Tabular data or tables

When the bot makes tables, it doesn't render them nicely with CSS; it produces a code-like structure using lines and raw, unformatted text.

Feature - User defined task list

Defining tasks and saving them in the browser's storage would be great. In a way, expand on the already existing "developer", "scientist" and so on.

Auto Personas - Meta-analysis of chats and baby steps to AGI: Hyperlink in chat to launch new agents for specialized tasks.

Use-case:

In the course of consulting ChatGPT in many specialized fields, it will urge you to consult with a professional, e.g. a lawyer, a doctor, an electrician, peers, etc. If we are able to analyze portions of the bot's output for instructions to consult with a specialist, we can offer to launch a new chat with a new system prompt generated from that consultation instruction.

Example:

User: I want to start a business.
ChatGPT: Choose a business structure: .... Consult with a legal expert or business advisor to determine the best option for your startup.

Where clicking Consult with a legal expert, etc., would open a new tab with an editable custom system prompt.

More discussion needed:

ChatGPT's official interface, with plugins enabled, detects plugin-able requests, and this would be architected similarly. Except this would be open for custom plugins eventually!

Criticisms?
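The detection step could be prototyped with a simple pattern match. The regex and prompt template below are illustrative assumptions; a real implementation would more likely ask the LLM itself to extract the specialist:

```typescript
// Hypothetical sketch: scan assistant output for "consult (with) a
// <specialist>" phrases and derive a system prompt for a new, specialized
// chat tab.

const CONSULT_RE = /consult(?:ing)? (?:with )?(?:a |an )?([a-z][a-z ]+?)(?: or [a-z ]+)?(?=[.,;]|$)/i;

function specialistSystemPrompt(assistantText: string): string | null {
  const match = CONSULT_RE.exec(assistantText);
  if (!match) return null;
  const specialist = match[1].trim();
  return `You are a ${specialist}. Answer the user's questions in your professional capacity.`;
}
```

In the UI, the matched phrase would become the clickable hyperlink that opens the new chat with this editable system prompt.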

Feature: Scratch pad for notes and/or text splitting?

@enricoros Love your work, it was very easy to set up using Cloudflare Pages https://developers.cloudflare.com/pages/framework-guides/deploy-a-nextjs-site/ once you set NODE_VERSION https://developers.cloudflare.com/pages/platform/build-configuration/ 😀

The first suggestion: how about a scratch pad for notes on an expand/collapse sidebar column? Sometimes you work with a text that is larger than the token limit, so the whole text needs to be placed somewhere.

Even better, add support for the notepad to allow full-text input and count characters/tokens; users could then define how many characters or tokens to split the text by and get neat snippets displayed, ready to be copied and pasted. I made a standalone text-splitter demo of what I mean at https://slicer.centminmod.com/ 😃
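The splitting idea could be sketched as a character-based chunker. Names and the break-at-newline heuristic are assumptions for illustration:

```typescript
// Hypothetical sketch: cut a long text into chunks no larger than maxChars,
// preferring to break at the last newline inside each window.

function splitText(text: string, maxChars: number): string[] {
  const chunks: string[] = [];
  let rest = text;
  while (rest.length > maxChars) {
    const window = rest.slice(0, maxChars);
    const breakAt = window.lastIndexOf('\n');
    const cut = breakAt > 0 ? breakAt + 1 : maxChars; // fall back to a hard cut
    chunks.push(rest.slice(0, cut));
    rest = rest.slice(cut);
  }
  if (rest.length > 0) chunks.push(rest);
  return chunks;
}
```

A token-based variant would substitute the model's tokenizer for the character count, at the cost of bundling it client-side.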

cheers

George

Forking of conversations

Explore alternative realities. I'd rather fork and go back between convos than use a tree-like variants approach.

Domain-specific document/workspace uploading

For example, I often want the Developer purpose to understand the context of the project I am working on. It would be great if there was an easier way to do so across multiple files, a la gpt-repository-loader

I think this could expand the "Paste code" option. Perhaps it could be selected during Purpose selection, and the repository could be preloaded with some prompt engineering around it.

Care would need to be taken to avoid going over token limits.

What other similar domains and use-cases would there be to work on specific documents/projects that may be split over many files, and updated during the chat session?

GPT -4 Error

When I'm using the GPT-4 models, this error always occurs:
OpenAI API error: 404 Not Found {"error":{"message":"The model: gpt-4 does not exist","type":"invalid_request_error","param":null,"code":"model_not_found"}}

Feature: http proxy

Can we expose an option to set a system proxy in the nextjs-chatgpt-app project? Currently, I found that the traffic does not go through the system-level proxy.

Feature - Clear chat

It would be nice to have a button to reset the chat, thus starting a new session.

Failed to build on Cloudflare Pages - functions were not configured to run with the Edge Runtime: publish.func

@enricoros updated my fork centminmod@55490d9 to the latest build, and it seems I can no longer build the app on Cloudflare Pages? Checking https://nextjs.org/docs/api-routes/edge-api-route gives a 404 not found?

Maybe something specific to Vercel? I never used Vercel and in the past built straight with Cloudflare Pages, though they have https://developers.cloudflare.com/pages/migrations/migrating-from-vercel/

The build errors are related to "the following functions were not configured to run with the Edge Runtime: publish.func":

06:36:44.042 | ▲ Detected Next.js version: 13.3.0
-- | --
06:36:44.047 | ▲ Detected `package-lock.json` generated by npm 7+...
06:36:44.048 | ▲ Running "npm run build"
06:36:44.579 | ▲ npm
06:36:44.579 | ▲ WARN
06:36:44.579 | ▲ config tmp This setting is no longer used.  npm stores temporary files in a special
06:36:44.579 | ▲ npm WARN
06:36:44.580 | ▲ config location in the cache, and they are managed by
06:36:44.580 | ▲ npm WARN config     [`cacache`](http://npm.im/cacache).
06:36:44.597 | ▲ > [email protected] build
06:36:44.598 | ▲ > next build
06:36:45.367 | ▲ info  - Linting and checking validity of types...
06:36:54.264 | ▲ info  - Creating an optimized production build...
06:37:09.793 | ▲ info  - Compiled successfully
06:37:09.794 | ▲ info  - Collecting page data...
06:37:27.991 | ▲ info  - Generating static pages (0/3)
06:37:29.285 | ▲ info  - Generating static pages (3/3)
06:37:29.306 | ▲ info  - Finalizing page optimization...
06:37:29.314 | ▲
06:37:29.325 | ▲ Route (pages)                              Size     First Load JS
06:37:29.326 | ▲ ┌ ○ /                                      152 kB          251 kB
06:37:29.327 | ▲ ├   └ css/86ad80c02bda2636.css             648 B
06:37:29.327 | ▲ ├   /_app                                  0 B            99.5 kB
06:37:29.327 | ▲ ├ ○ /404                                   182 B          99.7 kB
06:37:29.327 | ▲ ├ ℇ /api/openai/models                     0 B            99.5 kB
06:37:29.327 | ▲ ├ ℇ /api/openai/stream-chat                0 B            99.5 kB
06:37:29.328 | ▲ └ λ /api/publish                           0 B            99.5 kB
06:37:29.328 | ▲ + First Load JS shared by all              105 kB
06:37:29.328 | ▲ ├ chunks/framework-2c79e2a64abdb08b.js   45.2 kB
06:37:29.328 | ▲ ├ chunks/main-4dcb7f9b52833aba.js        27.2 kB
06:37:29.329 | ▲ ├ chunks/pages/_app-02967b3b3aa6bbc9.js  24.7 kB
06:37:29.329 | ▲ ├ chunks/webpack-62a659d736ca0b7b.js     2.44 kB
06:37:29.330 | ▲ └ css/a90f819cc1334814.css               5.27 kB
06:37:29.330 | ▲ ℇ  (Streaming)  server-side renders with streaming (uses React 18 SSR streaming or Server Components)
06:37:29.330 | ▲ λ  (Server)     server-side renders at runtime (uses getInitialProps or getServerSideProps)
06:37:29.330 | ▲ ○  (Static)     automatically rendered as static HTML (uses no initial props)
06:37:34.300 | ▲ Traced Next.js server files in: 4.880s
06:37:35.313 | ▲ Created all serverless functions in: 1.011s
06:37:35.417 | ▲ Collected static files (public/, static/, .next/static): 17.165ms
06:37:37.116 | ▲ Build Completed in .vercel/output [54s]
06:37:37.255 | ⚡️ Building Completed.
06:37:37.441 | ⚡️ ERROR: Failed to produce a Cloudflare Pages build from the project.
06:37:37.441 | ⚡️
06:37:37.441 | ⚡️ The following functions were not configured to run with the Edge Runtime:
06:37:37.441 | ⚡️  - publish.func
06:37:37.441 | ⚡️
06:37:37.442 | ⚡️ If this is a Next.js project:
06:37:37.442 | ⚡️
06:37:37.442 | ⚡️ - you can read more about configuring Edge API Routes here: https://nextjs.org/docs/api-routes/edge-api-route
06:37:37.442 | ⚡️
06:37:37.442 | ⚡️ - you can try enabling the Edge Runtime for a specific page by exporting the following from your page:
06:37:37.442 | ⚡️
06:37:37.442 | ⚡️         export const config = { runtime: 'edge' };
06:37:37.442 | ⚡️
06:37:37.443 | ⚡️ - or you can try enabling the Edge Runtime for all pages in your project by adding the following to your 'next.config.js' file:
06:37:37.443 | ⚡️
06:37:37.443 | ⚡️         const nextConfig = { experimental: { runtime: 'edge'} };
06:37:37.443 | ⚡️
06:37:37.443 | ⚡️ You can read more about the Edge Runtime here: https://nextjs.org/docs/advanced-features/react-18/switchable-runtime
06:37:37.477 | Failed: build command exited with code: 1
06:37:38.454 | Failed: error occurred while running build command

The basics of the Cloudflare Pages steps for building the app, which worked previously, were as follows:

For setting up your nextjs app on Cloudflare Pages you can follow Cloudflare developer docs at https://developers.cloudflare.com/pages/framework-guides/deploy-a-nextjs-site/ and https://developers.cloudflare.com/pages/platform/build-configuration/

The only difference is I took these steps below:

  1. I forked your repo to my own GitHub repo
  2. On the Cloudflare Pages section, click the Create a project button > Connect To Git and give Cloudflare Pages either all-GitHub-account repo access or selected-repo access. I use selected-repo access and select the forked repo from step 1
  3. Once you select the forked GitHub repo, click the Begin Setup button to set up builds and deployments. On this page you set your Project name, Production branch (i.e. main) and your Build settings, where you select Next.js from the Framework preset dropdown menu. Leave the preset-filled Build command and Build output directory at their defaults. You'd want to set Environment variables (advanced) on this page to configure some variables as follows:
VARIABLE VALUE
GO_VERSION 1.16
NEXT_TELEMETRY_DISABLED 1
NODE_VERSION 17
PHP_VERSION 7.4
PYTHON_VERSION 3.7
RUBY_VERSION 2.7.1
  4. Click the Save and Deploy button
  5. Then watch the process initialize your build environment, clone the GitHub repo, build the application and deploy to the Cloudflare network; once that is done, proceed to the project you created
  6. The Custom domains tab allows you to set up your domain via CNAME
  7. The Settings page has 2 settings you want to enable: Access Policy, to restrict preview deployments to members of your Cloudflare account via one-time pin and to restrict the primary *.YOURPROJECT.pages.dev domain (see https://developers.cloudflare.com/pages/platform/known-issues/#enabling-access-on-your-pagesdev-domain), and Web Analytics
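Based on the build error, a possible fix is to opt the failing API route into the Edge Runtime, as the Cloudflare message suggests. The file path `pages/api/publish.ts` is an assumption about where `publish.func` comes from; this is a sketch, not a verified fix:

```typescript
// pages/api/publish.ts (assumed location of the failing function) -
// hypothetical sketch of the Edge Runtime opt-in for a Next.js API route.
export const config = { runtime: 'edge' };

export default async function handler(req: Request): Promise<Response> {
  // ... existing publish logic would go here ...
  return new Response(JSON.stringify({ ok: true }), {
    headers: { 'content-type': 'application/json' },
  });
}
```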

Fetch Failed

When I execute a chat, I get a load of HTML in the response.
I can see there is a fetch failure:
,"buildId":"development","isFallback":false,"err":{"name":"TypeError","source":"edge-server","message":"fetch failed","stack":"TypeError: fetch failed\n at context.fetch (file://C:\\git\\nextjs-chatgpt-app\\node_modules\\next\\dist\\server\\web\\sandbox\\context.js:167:38)\n at OpenAIStream

How might I investigate further to get a more helpful error?

Create Plugins-Option // RFC

Give users the possibility to extend the chat functionality with their own plugins. For example, I would like to create a plugin to upload a PDF and then ask questions about it. Just like in this OpenAI example: https://github.com/openai/openai-cookbook/tree/main/apps/file-q-and-a
Or a plugin which visits a web page and then answers questions about its content. Like here: https://github.com/openai/openai-cookbook/tree/main/apps/web-crawl-q-and-a

You wrote amazing software, truly made with love, keep it up!

Personas: Add custom, dynamic, and persisted

🚀 Dynamic Personas support 🌟

Requirements for the personas:

  • Phase 1: Still client-side, but some support for server-side

    • Dynamic persona support, remove hardcoding
    • Stored in a state store, persisted in localStorage
    • Personas in chats: reference? or copy?
    • Uuids for personas, and versioning
    • Add attributes: #198
    • Import from GPTs
    • Make extensible
  • Phase 2: Mixed Client and Server side?

    • Server-side personas may not have full visibility on the client - undecided on this yet
    • Shall be seeded from the server side
    • Shall be updated when the server side updates/adds/removes personas
    • App is shipped without preloaded personas by default?

GPT3.5 continuous conversations?

Great work @enricoros on the latest updates! I'm trying to understand why the app's OpenAI token-limit handling works the way it does. When we hit the 4097-token limit we need to remove previous messages; but why doesn't the https://chat.openai.com/chat implementation of GPT-3.5 need to do so? Is their method possible for your app?

[OpenAI Issue] Error: 400 · Bad Request · {"error":{"message":"This model's maximum context length is 4097 tokens. However, your messages resulted in 6412 tokens. Please reduce the length of the messages.","type":"invalid_request_error","param":"messages","code":"context_length_exceeded"}}

image

Code Run

Can you make me a branch with this feature? I tried to implement it, but something is up with my CSS; I have included the needed libraries in package.json and the lock file.

The resulting code works, but the CSS is weird - look at the code:

import * as React from 'react';

import { Sandpack, SandpackFiles } from '@codesandbox/sandpack-react';

import Prism from 'prismjs';
import 'prismjs/themes/prism.css';
import 'prismjs/components/prism-bash';
import 'prismjs/components/prism-java';
import 'prismjs/components/prism-javascript';
import 'prismjs/components/prism-json';
import 'prismjs/components/prism-markdown';
import 'prismjs/components/prism-python';
import 'prismjs/components/prism-typescript';

import { Alert, Avatar, Box, Button, IconButton, ListDivider, ListItem, ListItemDecorator, Menu, MenuItem, Stack, Textarea, Tooltip, Typography, useTheme } from '@mui/joy';
import { SxProps, Theme } from '@mui/joy/styles/types';
import ClearIcon from '@mui/icons-material/Clear';
import ContentCopyIcon from '@mui/icons-material/ContentCopy';
import EditIcon from '@mui/icons-material/Edit';
import Face6Icon from '@mui/icons-material/Face6';
import FastForwardIcon from '@mui/icons-material/FastForward';
import MoreVertIcon from '@mui/icons-material/MoreVert';
import PlayArrowOutlinedIcon from '@mui/icons-material/PlayArrowOutlined';
import SettingsSuggestIcon from '@mui/icons-material/SettingsSuggest';
import SmartToyOutlinedIcon from '@mui/icons-material/SmartToyOutlined';
import StopOutlinedIcon from '@mui/icons-material/StopOutlined';

import { DMessage } from '@/lib/store-chats';
import { Link } from './util/Link';
import { cssRainbowColorKeyframes } from '@/lib/theme';


/// Utilities to parse messages into blocks of text and code

type Block = TextBlock | CodeBlock;
type TextBlock = { type: 'text'; content: string; };
type CodeBlock = { type: 'code'; content: string; language: string | null; complete: boolean; code: string; };

const inferCodeLanguage = (markdownLanguage: string, code: string): string | null => {
  // we have a hint
  if (markdownLanguage) {
    // no dot: assume it is the syntax-highlight name
    if (!markdownLanguage.includes('.'))
      return markdownLanguage;

    // dot: there's probably a file extension
    const extension = markdownLanguage.split('.').pop();
    if (extension) {
      const languageMap: { [key: string]: string } = {
        cs: 'csharp', html: 'html', java: 'java', js: 'javascript', json: 'json', jsx: 'javascript',
        md: 'markdown', py: 'python', sh: 'bash', ts: 'typescript', tsx: 'typescript', xml: 'xml',
      };
      const language = languageMap[extension];
      if (language)
        return language;
    }
  }

  // based on how the code starts, return the language
  if (code.startsWith('<!DOCTYPE')) return 'html';
  if (code.startsWith('<')) return 'xml';
  if (code.startsWith('from ')) return 'python';
  if (code.startsWith('import ') || code.startsWith('export ')) return 'typescript'; // or python
  if (code.startsWith('interface ') || code.startsWith('function ')) return 'typescript'; // ambiguous
  if (code.startsWith('package ')) return 'java';
  if (code.startsWith('using ')) return 'csharp';
  return null;
};

/**
 * FIXME: expensive function, especially as it's not used in an incremental fashion
 */
const parseBlocks = (forceText: boolean, text: string): Block[] => {
  if (forceText)
    return [{ type: 'text', content: text }];

  const codeBlockRegex = /`{3,}([\w\\.+]+)?\n([\s\S]*?)(`{3,}|$)/g;
  const result: Block[] = [];

  let lastIndex = 0;
  let match;

  while ((match = codeBlockRegex.exec(text)) !== null) {
    const markdownLanguage = (match[1] || '').trim();
    const code = match[2].trim();
    const blockEnd: string = match[3];

    // Load the specified language if it's not loaded yet
    // NOTE: this is commented out because it inflates the size of the bundle by 200k
    // if (!Prism.languages[language]) {
    //   try {
    //     require(`prismjs/components/prism-${language}`);
    //   } catch (e) {
    //     console.warn(`Prism language '${language}' not found, falling back to 'typescript'`);
    //   }
    // }

    const codeLanguage = inferCodeLanguage(markdownLanguage, code);
    const highlightLanguage = codeLanguage || 'typescript';
    const highlightedCode = Prism.highlight(
      code,
      Prism.languages[highlightLanguage] || Prism.languages.typescript,
      highlightLanguage,
    );

    result.push({ type: 'text', content: text.slice(lastIndex, match.index) });
    result.push({ type: 'code', content: highlightedCode, language: codeLanguage, complete: blockEnd.startsWith('```'), code });
    lastIndex = match.index + match[0].length;
  }

  if (lastIndex < text.length) {
    result.push({ type: 'text', content: text.slice(lastIndex) });
  }

  return result;
};


/// Renderers for the different types of message blocks

type SandpackConfig = { files: SandpackFiles, template: 'vanilla-ts' | 'vanilla' };

const runnableLanguages = ['html', 'javascript', 'typescript'];

function RunnableCode({ codeBlock, theme }: { codeBlock: CodeBlock, theme: Theme }): JSX.Element | null {
  let config: SandpackConfig;
  switch (codeBlock.language) {
    case 'html':
      config = {
        template: 'vanilla',
        files: { '/index.html': codeBlock.code, '/index.js': '' },
      };
      break;
    case 'javascript':
    case 'typescript':
      config = {
        template: 'vanilla-ts',
        files: { '/index.ts': codeBlock.code },
      };
      break;
    default:
      return null;
  }
  return (
    <Sandpack {...config} theme={theme.palette.mode === 'dark' ? 'dark' : 'light'}
              options={{ showConsole: true, showConsoleButton: true, showTabs: true, showNavigator: false }} />
  );
}

function RenderCode({ codeBlock, theme, sx }: { codeBlock: CodeBlock, theme: Theme, sx?: SxProps }) {
  const [showSandpack, setShowSandpack] = React.useState(false);

  const handleCopyToClipboard = () =>
    copyToClipboard(codeBlock.code);

  const handleToggleSandpack = () =>
    setShowSandpack(!showSandpack);

  const showRunIcon = codeBlock.complete && !!codeBlock.language && runnableLanguages.includes(codeBlock.language);

  return <Box component='code' sx={{
    position: 'relative', ...(sx || {}), mx: 0, p: 1.5,
    display: 'block', fontWeight: 500, background: theme.vars.palette.background.level1,
    '&:hover > button': { opacity: 1 },
  }}>
    <Tooltip title='Copy Code' variant='solid'>
      <IconButton variant='plain' color='primary' onClick={handleCopyToClipboard} sx={{ position: 'absolute', top: 0, right: 0, zIndex: 10, p: 0.5, opacity: 0, transition: 'opacity 0.3s' }}>
        <ContentCopyIcon />
      </IconButton>
    </Tooltip>
    {showRunIcon && (
      <Tooltip title='Try it out' variant='solid'>
        <IconButton variant='plain' color='primary' onClick={handleToggleSandpack} sx={{ position: 'absolute', top: 0, right: 50, zIndex: 10, p: 0.5, opacity: 0, transition: 'opacity 0.3s' }}>
          {showSandpack ? <StopOutlinedIcon /> : <PlayArrowOutlinedIcon />}
        </IconButton>
      </Tooltip>
    )}
    {/* this is the highlighted code */}
    <Box dangerouslySetInnerHTML={{ __html: codeBlock.content }} />
    {showRunIcon && showSandpack && <RunnableCode codeBlock={codeBlock} theme={theme} />}
  </Box>;
}

const RenderText = ({ textBlock, onDoubleClick, sx }: { textBlock: TextBlock, onDoubleClick: (e: React.MouseEvent) => void, sx?: SxProps }) =>
  <Typography
    level='body1' component='span'
    onDoubleClick={onDoubleClick}
    sx={{ ...(sx || {}), mx: 1.5 }}
  >
    {textBlock.content}
  </Typography>;


function copyToClipboard(text: string) {
  if (typeof navigator !== 'undefined')
    navigator.clipboard.writeText(text)
      .then(() => console.log('Message copied to clipboard'))
      .catch((err) => console.error('Failed to copy message: ', err));
}

function prettyBaseModel(model: string | undefined): string {
  if (!model) return '';
  if (model.startsWith('gpt-4')) return 'gpt-4';
  if (model.startsWith('gpt-3.5-turbo')) return '3.5 Turbo';
  return model;
}

function explainErrorInMessage(text: string, isAssistant: boolean, modelId?: string) {
  let errorMessage: JSX.Element | null = null;
  const isAssistantError = isAssistant && (text.startsWith('Error: ') || text.startsWith('OpenAI API error: '));
  if (isAssistantError) {
    if (text.startsWith('OpenAI API error: 429 Too Many Requests')) {
      // TODO: retry at the api/chat level a few times instead of showing this error
      errorMessage = <>
        The model appears to be overloaded at the moment. You can switch to <b>GPT-3.5 Turbo</b>,
        or retry by selecting <b>Run again</b> from the message menu.
      </>;
    } else if (text.includes('"model_not_found"')) {
      // note that "model_not_found" is different from the "The model `gpt-xyz` does not exist" message
      errorMessage = <>
        Your API key appears to be unauthorized for {modelId || 'this model'}. You can switch to <b>GPT-3.5 Turbo</b>
        and, in the meantime, <Link noLinkStyle href='https://openai.com/waitlist/gpt-4-api' target='_blank'>request
        access</Link> to the desired model.
      </>;
    } else if (text.includes('"context_length_exceeded"')) {
      // TODO: propose to summarize or split the input?
      const pattern: RegExp = /maximum context length is (\d+) tokens.+resulted in (\d+) tokens/;
      const match = pattern.exec(text);
      const usedText = match ? <b>{parseInt(match[2] || '0').toLocaleString()} tokens &gt; {parseInt(match[1] || '0').toLocaleString()}</b> : '';
      errorMessage = <>
        This thread <b>exceeds the maximum size</b> allowed for {modelId || 'this model'}. {usedText}.
        Consider removing some earlier messages, starting a new conversation,
        choosing a model with a larger context window, or submitting a shorter message.
      </>;
    } else if (text.includes('"invalid_api_key"')) {
      errorMessage = <>
        The API key appears to be invalid or to have expired.
        Please <Link noLinkStyle href='https://openai.com/account/api-keys' target='_blank'>check your API key</Link> and
        update it in the <b>Settings</b> menu.
      </>;
    }
  }
  return { errorMessage, isAssistantError };
}
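As a standalone sketch, the context-length parsing used by `explainErrorInMessage` can be exercised outside the component. The sample error string below is illustrative, modeled on OpenAI's wording; it is not a captured API response:

```typescript
// Same pattern as in explainErrorInMessage: capture the model's token limit
// and the token count the request actually produced.
const contextLengthPattern: RegExp = /maximum context length is (\d+) tokens.+resulted in (\d+) tokens/;

function parseContextLengthError(text: string): { limit: number; used: number } | null {
  const match = contextLengthPattern.exec(text);
  if (!match) return null;
  return { limit: parseInt(match[1], 10), used: parseInt(match[2], 10) };
}

// Illustrative error text (not a real captured response)
const sample = "OpenAI API error: This model's maximum context length is 4097 tokens. However, your messages resulted in 5000 tokens.";
console.log(parseContextLengthError(sample)); // → { limit: 4097, used: 5000 }
```

Note that `parseInt(..., 10)` is used defensively; the capture groups are guaranteed to be digit runs by the pattern itself.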


/**
 * The Message component is a customizable chat message UI component that supports
 * different roles (user, assistant, and system), text editing, syntax highlighting,
 * and code execution using Sandpack for TypeScript, JavaScript, and HTML code blocks.
 * The component also provides options for copying code to clipboard and expanding
 * or collapsing long user messages.
 */
export function ChatMessage(props: { message: DMessage, disableSend: boolean, onDelete: () => void, onEdit: (text: string) => void, onRunAgain: () => void }) {
  const theme = useTheme();
  const {
    text: messageText,
    sender: messageSender,
    avatar: messageAvatar,
    typing: messageTyping,
    role: messageRole,
    // purposeId: messagePurposeId,
    originLLM: messageModelId,
    tokenCount: messageTokenCount,
    updated: messageUpdated,
  } = props.message;
  const fromAssistant = messageRole === 'assistant';
  const fromSystem = messageRole === 'system';
  const fromUser = messageRole === 'user';
  const wasEdited = !!messageUpdated;

  // viewing
  const [forceExpanded, setForceExpanded] = React.useState(false);

  // editing
  const [isHovering, setIsHovering] = React.useState(false);
  const [menuAnchor, setMenuAnchor] = React.useState<HTMLElement | null>(null);
  const [isEditing, setIsEditing] = React.useState(false);
  const [editedText, setEditedText] = React.useState('');


  const closeOperationsMenu = () => setMenuAnchor(null);

  const handleMenuCopy = (e: React.MouseEvent) => {
    copyToClipboard(messageText);
    e.preventDefault();
    closeOperationsMenu();
  };

  const handleMenuEdit = (e: React.MouseEvent) => {
    if (!isEditing)
      setEditedText(messageText);
    setIsEditing(!isEditing);
    e.preventDefault();
    closeOperationsMenu();
  };

  const handleMenuRunAgain = (e: React.MouseEvent) => {
    if (!props.disableSend) {
      props.onRunAgain();
      e.preventDefault();
      closeOperationsMenu();
    }
  };


  const handleEditTextChanged = (e: React.ChangeEvent<HTMLTextAreaElement>) =>
    setEditedText(e.target.value);

  const handleEditKeyPressed = (e: React.KeyboardEvent<HTMLTextAreaElement>) => {
    if (e.key === 'Enter' && !e.shiftKey && !e.altKey) {
      e.preventDefault();
      setIsEditing(false);
      props.onEdit(editedText);
    }
  };

  const handleEditBlur = () => {
    setIsEditing(false);
    if (editedText !== messageText && editedText?.trim())
      props.onEdit(editedText);
  };


  const handleExpand = () => setForceExpanded(true);


  // soft error handling
  const { isAssistantError, errorMessage } = explainErrorInMessage(messageText, fromAssistant, messageModelId);


  // theming
  let background = theme.vars.palette.background.body;
  let textBackground: string | undefined = undefined;
  switch (messageRole) {
    case 'system':
      // background = theme.vars.palette.background.body;
      // textBackground = wasEdited ? theme.vars.palette.warning.plainHoverBg : theme.vars.palette.neutral.plainHoverBg;
      background = wasEdited ? theme.vars.palette.warning.plainHoverBg : theme.vars.palette.background.popup;
      break;
    case 'user':
      background = theme.vars.palette.primary.plainHoverBg;
      break;
    case 'assistant':
      background = (isAssistantError && !errorMessage) ? theme.vars.palette.danger.softBg : theme.vars.palette.background.body;
      break;
  }


  // avatar
  const avatarEl: JSX.Element = React.useMemo(
    () => {
      if (typeof messageAvatar === 'string' && messageAvatar)
        return <Avatar alt={messageSender} src={messageAvatar} />;
      switch (messageRole) {
        case 'system':
          return <SettingsSuggestIcon sx={{ width: 40, height: 40 }} />;  // https://em-content.zobj.net/thumbs/120/apple/325/robot_1f916.png
        case 'assistant':
          return <SmartToyOutlinedIcon sx={{ width: 40, height: 40 }} />; // https://mui.com/static/images/avatar/2.jpg
        case 'user':
          return <Face6Icon sx={{ width: 40, height: 40 }} />;            // https://www.svgrepo.com/show/306500/openai.svg
      }
      return <Avatar alt={messageSender} />;
    }, [messageAvatar, messageRole, messageSender],
  );

  // text box css
  const chatFontCss = {
    my: 'auto',
    fontFamily: fromAssistant ? theme.fontFamily.code : theme.fontFamily.body,
    fontSize: fromAssistant ? '14px' : '16px',
    lineHeight: 1.75,
  };

  // user message truncation
  let collapsedText = messageText;
  let isCollapsed = false;
  if (fromUser && !forceExpanded) {
    const lines = messageText.split('\n');
    if (lines.length > 10) {
      collapsedText = lines.slice(0, 10).join('\n');
      isCollapsed = true;
    }
  }


  return (
    <ListItem sx={{
      display: 'flex', flexDirection: !fromAssistant ? 'row-reverse' : 'row', alignItems: 'flex-start',
      gap: 1, px: { xs: 1, md: 2 }, py: 2,
      background,
      borderBottom: '1px solid',
      borderBottomColor: `rgba(${theme.vars.palette.neutral.mainChannel} / 0.2)`,
      position: 'relative',
      '&:hover > button': { opacity: 1 },
    }}>

      {/* Author */}
      <Stack sx={{ alignItems: 'center', minWidth: { xs: 50, md: 64 }, textAlign: 'center' }}
             onMouseEnter={() => setIsHovering(true)} onMouseLeave={() => setIsHovering(false)}
             onClick={event => setMenuAnchor(event.currentTarget)}>

        {isHovering ? (
          <IconButton variant='soft' color='primary'>
            <MoreVertIcon />
          </IconButton>
        ) : (
          avatarEl
        )}

        {fromAssistant && (
          <Tooltip title={messageModelId || 'unk-model'} variant='solid'>
            <Typography level='body2' sx={messageTyping
              ? { animation: `${cssRainbowColorKeyframes} 5s linear infinite`, fontWeight: 500 }
              : { fontWeight: 500 }
            }>
              {prettyBaseModel(messageModelId)}
            </Typography>
          </Tooltip>
        )}

      </Stack>


      {/* Edit / Blocks */}
      {!isEditing ? (

        <Box sx={{ ...chatFontCss, flexGrow: 0, whiteSpace: 'break-spaces' }}>

          {fromSystem && wasEdited && <Typography level='body2' color='warning' sx={{ mt: 1, mx: 1.5 }}>modified by user - auto-update disabled</Typography>}

          {parseBlocks(fromSystem, collapsedText).map((block, index) =>
            block.type === 'code'
              ? <RenderCode key={'code-' + index} codeBlock={block} theme={theme} sx={chatFontCss} />
              : <RenderText key={'text-' + index} textBlock={block} onDoubleClick={handleMenuEdit} sx={textBackground ? { ...chatFontCss, background: textBackground } : chatFontCss} />,
          )}

          {errorMessage && <Alert variant='soft' color='warning' sx={{ mt: 1 }}><Typography>{errorMessage}</Typography></Alert>}

          {isCollapsed && <Button variant='plain' onClick={handleExpand}>... expand ...</Button>}

        </Box>

      ) : (

        <Textarea variant='soft' color='warning' autoFocus minRows={1}
                  value={editedText} onChange={handleEditTextChanged} onKeyDown={handleEditKeyPressed} onBlur={handleEditBlur}
                  sx={{ ...chatFontCss, flexGrow: 1 }} />

      )}


      {/* Copy message */}
      {!fromSystem && !isEditing && (
        <Tooltip title={fromAssistant ? 'Copy response' : 'Copy input'} variant='solid'>
          <IconButton
            variant='plain' color='primary' onClick={handleMenuCopy}
            sx={{
              position: 'absolute', ...(fromAssistant ? { right: { xs: 12, md: 28 } } : { left: { xs: 12, md: 28 } }), zIndex: 10,
              opacity: 0, transition: 'opacity 0.3s',
            }}>
            <ContentCopyIcon />
          </IconButton>
        </Tooltip>
      )}


      {/* Message Operations menu */}
      {!!menuAnchor && (
        <Menu
          variant='plain' color='neutral' size='lg' placement='bottom-end' sx={{ minWidth: 280 }}
          open anchorEl={menuAnchor} onClose={closeOperationsMenu}>
          <MenuItem onClick={handleMenuCopy}>
            <ListItemDecorator><ContentCopyIcon /></ListItemDecorator>
            Copy
          </MenuItem>
          <MenuItem onClick={handleMenuEdit}>
            <ListItemDecorator><EditIcon /></ListItemDecorator>
            {isEditing ? 'Discard' : 'Edit'}
            {!isEditing && <span style={{ opacity: 0.5, marginLeft: '8px' }}> (double-click)</span>}
          </MenuItem>
          <ListDivider />
          <MenuItem onClick={handleMenuRunAgain} disabled={!fromUser || props.disableSend}>
            <ListItemDecorator><FastForwardIcon /></ListItemDecorator>
            Run again
          </MenuItem>
          <MenuItem onClick={props.onDelete} disabled={false /*fromSystem*/}>
            <ListItemDecorator><ClearIcon /></ListItemDecorator>
            Delete
          </MenuItem>
        </Menu>
      )}

    </ListItem>
  );
}
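The small pure helpers in this file can be exercised outside of React. A minimal sketch follows, re-declaring `prettyBaseModel` and extracting the user-message truncation logic from `ChatMessage` so the snippet is self-contained (`collapseUserText` is a hypothetical name for illustration; the component inlines this logic):

```typescript
// Mirrors prettyBaseModel above: map a full model id to a short display name.
function prettyBaseModel(model: string | undefined): string {
  if (!model) return '';
  if (model.startsWith('gpt-4')) return 'gpt-4';
  if (model.startsWith('gpt-3.5-turbo')) return '3.5 Turbo';
  return model;
}

// Mirrors the user-message truncation in ChatMessage: keep the first
// maxLines lines and flag whether anything was cut.
function collapseUserText(text: string, maxLines = 10): { text: string; collapsed: boolean } {
  const lines = text.split('\n');
  if (lines.length <= maxLines) return { text, collapsed: false };
  return { text: lines.slice(0, maxLines).join('\n'), collapsed: true };
}

console.log(prettyBaseModel('gpt-4-0314'));         // → 'gpt-4'
console.log(prettyBaseModel('gpt-3.5-turbo-0301')); // → '3.5 Turbo'
const longMessage = Array.from({ length: 12 }, (_, i) => `line ${i}`).join('\n');
console.log(collapseUserText(longMessage).collapsed); // → true
```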

Here is what I mean:

[Screenshot: nextjs-chatgpt deployment on Vercel, 2023-04-07]
