chat-llamaindex's Introduction



LlamaIndex Chat Logo

LlamaIndex Chat

Create chat bots that know your data

LlamaIndex Chat Screen

Welcome to LlamaIndex Chat. You can create and share LLM chatbots that know your data (PDF or text documents).

Getting started with LlamaIndex Chat is a breeze. Visit https://chat.llamaindex.ai - a hosted version of LlamaIndex Chat with no user authentication that provides an immediate start.

🚀 Features

LlamaIndex Chat is an example chatbot application for LlamaIndexTS. You can:

  • Create bots using prompt engineering and share them with other users.
  • Modify the demo bots by using the UI or directly editing the ./app/bots/bot.data.ts file.
  • Integrate your data by uploading documents or generating new data sources.

โšก๏ธ Quick start

Local Development

Requirement: Node.js 18

  • Clone the repository
git clone https://github.com/run-llama/chat-llamaindex
cd chat-llamaindex
  • Prepare the project
pnpm install
pnpm run create-llama

Note: The last step copies the chat UI component and file server route from the create-llama project, see ./create-llama.sh.

  • Set the environment variables

Edit environment variables in .env.development.local. Especially check your OPENAI_API_KEY.
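As a minimal sketch (assuming the OpenAI key is the only setting you need to change for local development; see .env.template for the full template), .env.development.local can be as small as:

# Your OpenAI API key (required); the value below is a placeholder
OPENAI_API_KEY=sk-xxxx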

  • Run the dev server
pnpm dev

๐Ÿณ Docker

You can use Docker for development and deployment of LlamaIndex Chat.

Building the Docker Image

docker build -t chat-llamaindex .

Running in a Docker Container

docker run -p 3000:3000 --env-file .env.development.local chat-llamaindex

Docker Compose

For those preferring Docker Compose, we've included a docker-compose.yml file. To run using Docker Compose:

docker compose up

Go to http://localhost:3000 in your web browser.

Note: By default, the Docker Compose setup maps the cache and datasources directories from your host machine to the Docker container, ensuring data persistence and accessibility between container restarts.
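For illustration only, a docker-compose.yml along these lines would provide the mapping described above. The repository ships its own file, so treat this purely as a sketch; the service name and container paths are assumptions based on the Dockerfile's /usr/src/app working directory:

services:
  chat-llamaindex:
    build: .
    ports:
      - "3000:3000"
    env_file:
      - .env.development.local
    volumes:
      - ./cache:/usr/src/app/cache             # persists the VectorStoreIndex storage
      - ./datasources:/usr/src/app/datasources # persists the raw data files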

Vercel Deployment

Deploying to Vercel is simple; click the button below and follow the instructions:

Deploy with Vercel

If you're deploying to a Vercel Hobby account, change the running time to 10 seconds, as this is the limit for the free plan.
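As a sketch (where exactly this project configures the running time is an assumption), a Next.js App Router route handler such as ./app/api/llm/route.ts can set the limit via the maxDuration route segment config:

// Limit the serverless function to 10 seconds to stay within the Vercel Hobby plan
export const maxDuration = 10;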

If you want to use the sharing functionality, then you need to create a Vercel KV store and connect it to your project. Just follow this step from the quickstart. No further configuration is necessary as the app automatically uses a connected KV store.

🔄 Sharing

LlamaIndex Chat supports the sharing of bots via URLs. Demo bots are read-only and can't be shared. But you can create new bots (or clone and modify a demo bot) and call the share functionality in the context menu. It will create a unique URL that you can share with others. Opening the URL, users can directly use the shared bot.

📀 Data Sources

The app uses a ChatEngine for each bot with a VectorStoreIndex attached. The cache folder in the root directory is used as storage for each VectorStoreIndex.

Each subfolder in the cache folder contains the data for one VectorStoreIndex. To set which VectorStoreIndex is used for a bot, use the subfolder's name as the datasource attribute in the bot's data, as in the sketch below.
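A minimal sketch of such a bot entry (field names other than datasource are assumptions; check the demo bots in ./app/bots/bot.data.ts for the exact shape):

// Hypothetical bot entry in the style of ./app/bots/bot.data.ts
export const MY_DOCS_BOT = {
  name: "My Documents Expert", // display name (assumed field)
  datasource: "my-docs",       // must match a subfolder of the cache folder holding the VectorStoreIndex
  // ...further bot settings such as the system prompt and model (assumed)
};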

Note: To use the changed bots, you have to clear your local storage; otherwise, the old bots are still used. You can clear your local storage by opening the developer tools, running localStorage.clear() in the console, and reloading the page.

Generate Data Sources

To generate a new data source, create a new subfolder in the datasources directory and add the data files (e.g., PDFs). Then, create the `VectorStoreIndex` for the data source by running the following command:

pnpm run generate <datasource-name>

Where <datasource-name> is the name of the subfolder with your data files.

Note: On Windows, use pnpm run generate:win <datasource-name> instead.
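For example, to build an index for a hypothetical data source called my-docs:

mkdir -p datasources/my-docs
cp /path/to/your/files/*.pdf datasources/my-docs/   # add your PDF or text documents
pnpm run generate my-docs                           # writes the VectorStoreIndex to the cache folder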

๐Ÿ™ Thanks

Thanks go to @Yidadaa for his ChatGPT-Next-Web project, which was used as a starter template for this project.

chat-llamaindex's People

Contributors

dependabot[bot], ekaone, gappc, himself65, joshuasundance-swca, jwandekoken, marcusschiesser, thucpn, tolgayan, yisding


chat-llamaindex's Issues

[Feature] Local LLM Support

I would like to be able to run this with local LLM stacks like LiteLLM or Ollama.

Could you provide a parameter to specify the LLM and base URL?

[Bug] Warning: filter "Crypt" not supported yet

Not an error. When I run the script to generate a new data source, I get a whole set of the following warning messages. The source contains PDFs. What is the reason? How would I know if all of my PDFs got processed?

Warning: filter "Crypt" not supported yet
Warning: Could not find a preferred cmap table.
Warning: Required "glyf" table is not found -- trying to recover.
Warning: TT: undefined function: 32

[Bug] First try to add a URL gives an error

To Reproduce
Steps to reproduce the behavior:

  1. Go to the text input
  2. Paste a URL and press enter
  3. See error

The second time it works without any errors.
Screenshots
image

Desktop (please complete the following information):

  • OS: Windows 11
  • Browser: Edge
  • Version: latest

Supported LLM: Azure OpenAI?

You have indicated that the ChatGPT-Next-Web project was used as a starter template for this project. Can you please confirm whether LlamaIndex Chat supports Azure OpenAI?

If yes, please provide the instructions to switch to Azure OpenAI.
If no, will this be treated as a feature enhancement? Is there a quick way to make this switch to use Azure OpenAI?

Content of .env.development.local file

# Your OpenAI API key (required)

OPENAI_API_KEY=sk-xxxx

[Bug] TypeError: text.match is not a function

Describe the bug
TypeError: text.match is not a function

To Reproduce
Steps to reproduce the behavior:

  1. Create a GPT4V model
  2. Upload an image
  3. Prompt the model to "explain this picture"
  4. An error is generated

Expected behavior
Proper response from model

Deployment

  • Vercel

Desktop (please complete the following information):

  • OS: Windows 10
  • Browser: Chromium
  • Version: latest

[Feature] Add PDF OCR Support

As a user, I want to be able to upload PDF documents to LlamaIndex Chat and have the text content of those PDFs extracted via OCR so that OpenAI can easily process the text data. Many PDF files are scanned copies and not truly searchable PDFs.

[Feature] and [Bug] Project is named LlamaIndex but doesn't support Llama

Is your feature request related to a problem? Please describe.

I went to use this project and found that it doesn't seem to actually use or support llama despite the name.

It appears to be locked into only using OpenAI's proprietary SaaS product.

e.g. https://github.com/run-llama/chat-llamaindex/blob/main/.env.template#L1

Describe the solution you'd like

  • Support for local / self-hosted LLMs such as llama.
  • There should be configuration where you provide the API endpoint for your LLM.
    • This could be an OpenAI style API and if so I would highly recommend using LiteLLM for this as it's a quick and easy solution that's being widely adopted.
    • Alternative options include adding support for the Text Generation Web UI native API.

Describe alternatives you've considered

Maybe rename the project to chat-openai-index or similar if it hasn't got anything to do with Llama as it may confuse folks.

Additional context
N/A

[Bug] Deployment on AWS Amplify is not working properly

Describe the bug
In the local setup it works fine, but when I deploy it to AWS Amplify, the API call returns an internal server error (500).

Now it's calling this API: https://develop.d2tnt2s5bwrvl6.amplifyapp.com/api/llm
instead of: http://localhost:3000/api/llm

To Reproduce
Steps to reproduce the behavior:
Deploy it to AWS Amplify

Expected behavior
Should call api/llm successfully
Screenshots
If applicable, add screenshots to help explain your problem.

Deployment

  • aws amplify

Desktop (please complete the following information):

  • OS: [e.g. windows]
  • Browser [chrome]


[Bug] Error getting OPENAI_API_KEY from .env.development.local: ENOENT: no such file or directory

On Windows 10. I'm trying to generate a new data source, and when I run pnpm run generate <datasource-name> I get the following error. The OpenAI key is set in .env.development.local. The app works, but not the data source generation. Related to #23, which was closed as addressed.

Error getting OPENAI_API_KEY from .env.development.local: ENOENT: no such file or directory, open 'C:\C:\Users\xxx\Documents\WORKSPACES\GenerativeAI\chat-llamaindex\.env.development.local'
ELIFECYCLE Command failed with exit code 1.

[Bug] Upload button not working now.

After the commit below, the upload button only works for CSV and image files.

feat: use create-llama chat session (https://github.com/run-llama/chat-llamaindex/pull/94[)](https://github.com/run-llama/chat-llamaindex/commit/5c16a343eab3daaefa363a8d450df972757f64ff)

I'm not sure why, but after this commit, the application is using the file-uploader located in the "cl" directory instead of its own. This seems to be where the issue lies.
personal (Workspace)/e:\Workspace\AI\llm\rag\chat-llamaindex\app\components\ui\file-uploader.tsx:61

const handleUpload = async (file: FileWrap) => {

personal (Workspace)\Workspace\AI\llm\rag\chat-llamaindex\cl\app\components\ui\file-uploader.tsx:62

  const handleUpload = async (file: File) => {
    const onFileUploadError = onFileError || window.alert;
    const fileExtension = file.name.split(".").pop() || "";
    const extensionFileError = checkExtension(fileExtension);
    if (extensionFileError) {
      return onFileUploadError(extensionFileError);
    }

    if (isFileSizeExceeded(file)) {
      return onFileUploadError(
        `File size exceeded. Limit is ${fileSizeLimit / 1024 / 1024} MB`,
      );
    }

    await onFileUpload(file);
  };

Add a Roadmap on project's README

It would be great to be able to see which features are currently being developed and which are planned for the future.

For example, I'm wondering if you're aiming for feature parity with OpenAI anytime soon, or when you are going to actually support running open-source models.
I'm sure I'm not alone, so again it would be great to have access to that information!

[Bug] Error: Set OpenAI Key in OPENAI_API_KEY env variable

I'm trying to generate a new data source, and when I run pnpm run generate <datasource-name> I get the following error. The OpenAI key is set in .env.development.local. The app works, but not the data source generation.

chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:470
      throw new Error("Set OpenAI Key in OPENAI_API_KEY env variable");
            ^
Error: Set OpenAI Key in OPENAI_API_KEY env variable
    at new OpenAISession (~\WORKSPACES\GenerativeAI\chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:470:13)
    at getOpenAISession (~\Documents\WORKSPACES\GenerativeAI\chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:486:15)
    at new OpenAI2 (~\Documents\WORKSPACES\GenerativeAI\chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:606:81)
    at serviceContextFromDefaults (~\Documents\WORKSPACES\GenerativeAI\chat-llamaindex\node_modules\.pnpm\[email protected][email protected]\node_modules\llamaindex\dist\index.js:2075:71)
    at file:///~/Documents/WORKSPACES/GenerativeAI/chat-llamaindex/scripts/generate.mjs:54:26
    at file:///~/Documents/WORKSPACES/GenerativeAI/chat-llamaindex/scripts/generate.mjs:61:3
    at ModuleJob.run (node:internal/modules/esm/module_job:193:25)
    at async Promise.all (index 0)
    at async ESMLoader.import (node:internal/modules/esm/loader:530:24)
    at async loadESM (node:internal/process/esm_loader:91:5)
    at async handleMainPromise (node:internal/modules/run_main:65:12)

Node.js v18.12.1
ELIFECYCLE Command failed with exit code 1.


[Bug] Addition of Sentry seems to have broken the docker build

Describe the bug
I cloned the main branch and followed the Docker build instructions. Instead of a working Docker container, I get the following error.

To Reproduce
Steps to reproduce the behavior:

  1. Clone: git clone https://github.com/run-llama/chat-llamaindex
  2. Copy: cp .env.template .env.development.local
  3. Build: docker build -t chat-llamaindex .
  4. See error
 => [build 2/6] WORKDIR /usr/src/app                                                                                                 0.8s
 => [runtime 2/6] WORKDIR /usr/src/app                                                                                               0.8s
 => [build 3/6] COPY package.json pnpm-lock.yaml ./                                                                                  0.0s
 => [build 4/6] RUN npm install -g pnpm &&     pnpm install                                                                         14.6s
 => [build 5/6] COPY . .                                                                                                             0.3s
 => ERROR [build 6/6] RUN pnpm build                                                                                                39.5s
------
 > [build 6/6] RUN pnpm build:
0.674
0.674 > chat-llamaindex@ build /usr/src/app
0.674 > next build
0.674
1.363 Attention: Next.js now collects completely anonymous telemetry regarding usage.
1.363 This information is used to shape Next.js' roadmap and prioritize features.
1.363 You can learn more, including how to opt-out if you'd not like to participate in this anonymous program, by visiting the following URL:
1.363 https://nextjs.org/telemetry
1.363
1.419   ▲ Next.js 14.2.1
1.419
1.475    Creating an optimized production build ...
1.810 warn  - It seems like you don't have a global error handler set up. It is recommended that you add a global-error.js file with Sentry instrumentation so that React rendering errors are reported to Sentry. Read more: https://docs.sentry.io/platforms/javascript/guides/nextjs/manual-setup/#react-render-errors-in-app-router
38.00 Failed to compile.
38.00
38.00 Sentry CLI Plugin: Command failed: /usr/src/app/node_modules/.pnpm/@[email protected][email protected]/node_modules/@sentry/cli/sentry-cli releases new VRbevAbU_2mYJxqEuyKqu
38.00 error: API request failed
38.00   caused by: [60] SSL peer certificate or SSH remote key was not OK (SSL certificate problem: unable to get local issuer certificate)
38.00
38.00 Add --log-level=[info|debug] or export SENTRY_LOG_LEVEL=[info|debug] to see more output.
38.00 Please attach the full debug log to all bug reports.
38.00
38.00 Sentry CLI Plugin: Command failed: /usr/src/app/node_modules/.pnpm/@[email protected][email protected]/node_modules/@sentry/cli/sentry-cli releases new VRbevAbU_2mYJxqEuyKqu
38.00 error: API request failed
38.00   caused by: [60] SSL peer certificate or SSH remote key was not OK (SSL certificate problem: unable to get local issuer certificate)
38.00
38.00 Add --log-level=[info|debug] or export SENTRY_LOG_LEVEL=[info|debug] to see more output.
38.00 Please attach the full debug log to all bug reports.
38.00
38.01
38.01 > Build failed because of webpack errors
38.22 ELIFECYCLE Command failed with exit code 1.
------
Dockerfile:18
--------------------
  16 |
  17 |     # Build the application for production
  18 | >>> RUN pnpm build
  19 |
  20 |     # ---- Production Stage ----
--------------------
ERROR: failed to solve: process "/bin/sh -c pnpm build" did not complete successfully: exit code: 1

Expected behavior
The docker build should finish without error
Deployment

  • [ x ] Docker
  • Vercel
  • Server

Desktop (please complete the following information):

  • OS: Ubuntu 23.10

[Bug] - Vercel Blob token required for local usage

Describe the bug
I want to understand if it's possible to use this app without connecting to Vercel when using it locally. When I try to upload an image, I see the following error:
[Upload] BlobError: Vercel Blob: No token found. Either configure the BLOB_READ_WRITE_TOKEN environment variable, or pass a token option to your calls.

To Reproduce
Steps to reproduce the behavior:

  1. Open the vision preview bot
  2. Upload an image

Expected behavior
Not sure if this supported but can we use this project locally without requiring Vercel tokens?

Screenshots
image

Deployment

  • Docker
  • Vercel
  • Server

Desktop (please complete the following information):

  • OS: macOS


[Bug] Hit Token limit when using the generate command

Describe the bug
Ran into this when running the pnpm run generate <datasource-name> command.

BadRequestError: 400 This model's maximum context length is 8192 tokens, however you requested 18039 tokens (18039 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
  error: {
    message: "This model's maximum context length is 8192 tokens, however you requested 18039 tokens (18039 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.",
    type: 'invalid_request_error',
    param: null,
    code: null
  },

I expected it to split the documents for me?

[Bug] Emoji not loading

Describe the bug
The emojis are not loading.

To Reproduce
Steps to reproduce the behavior:

  1. Open the app, e.g. https://chat-llamaindex.vercel.app/
  2. There are no emojis

Expected behavior
The expectation is that the emojis will be loaded and displayed.

Screenshots
image

Deployment

  • Vercel (tested)
  • local development (tested)

The issue likely exists everywhere, the CDN used to load the emojis (cdn.staticfile.org) appears to no longer host emoji files.

A fix is provided in PR #57. That PR updates the CDN to Cloudflare and incorporates the latest emoji version.

[Feature] support for OpenAI-like mock servers & OpenAI proxy servers

Currently, when I want to use OpenAI-like mock servers or proxy servers, there's no apparent way to manually modify openai.api_base or add headers to the openai Completion/ChatCompletion requests.

The mock server requires changing openai.api_base and specifying the model name.
The proxy server requires changing openai.api_base, providing openai.api_key, specifying the model name, and adding custom headers to the request.

[Bug] Docker Build is not working

Describe the bug
The docker build command described in the README is not working.

To Reproduce
Steps to reproduce the behavior:

  1. Just run the docker build


Deployment

  • [ X] Docker Desktop - v4.17.0
  • Vercel
  • Server

Desktop (please complete the following information):

  • OS: macOS

Additional Logs
=> ERROR [build 6/8] RUN npm install -g pnpm && pnpm install 36.8s

[build 6/8] RUN npm install -g pnpm && pnpm install:
#11 2.817
#11 2.817 added 1 package in 3s
#11 2.817
#11 2.817 1 package is looking for funding
#11 2.817 run npm fund for details
#11 2.819 npm notice
#11 2.819 npm notice New minor version of npm available! 10.7.0 -> 10.8.1
#11 2.819 npm notice Changelog: https://github.com/npm/cli/releases/tag/v10.8.1
#11 2.819 npm notice To update run: npm install -g [email protected]
#11 2.819 npm notice
#11 3.464 Lockfile is up to date, resolution step is skipped
#11 3.590 Progress: resolved 1, reused 0, downloaded 0, added 0
#11 3.984 Packages: +1239
#11 3.984 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#11 4.603 Progress: resolved 1239, reused 0, downloaded 10, added 8
#11 5.604 Progress: resolved 1239, reused 0, downloaded 56, added 56
#11 6.605 Progress: resolved 1239, reused 0, downloaded 91, added 91
#11 7.605 Progress: resolved 1239, reused 0, downloaded 132, added 131
#11 8.631 Progress: resolved 1239, reused 0, downloaded 179, added 179
#11 9.631 Progress: resolved 1239, reused 0, downloaded 239, added 239
#11 10.63 Progress: resolved 1239, reused 0, downloaded 302, added 302
#11 11.63 Progress: resolved 1239, reused 0, downloaded 335, added 335
#11 12.63 Progress: resolved 1239, reused 0, downloaded 344, added 344
#11 13.64 Progress: resolved 1239, reused 0, downloaded 363, added 363
#11 14.64 Progress: resolved 1239, reused 0, downloaded 405, added 405
#11 15.64 Progress: resolved 1239, reused 0, downloaded 440, added 440
#11 16.64 Progress: resolved 1239, reused 0, downloaded 468, added 468
#11 17.64 Progress: resolved 1239, reused 0, downloaded 495, added 494
#11 18.64 Progress: resolved 1239, reused 0, downloaded 527, added 527
#11 19.65 Progress: resolved 1239, reused 0, downloaded 556, added 556
#11 20.65 Progress: resolved 1239, reused 0, downloaded 586, added 586
#11 21.65 Progress: resolved 1239, reused 0, downloaded 616, added 616
#11 22.65 Progress: resolved 1239, reused 0, downloaded 659, added 659
#11 23.65 Progress: resolved 1239, reused 0, downloaded 718, added 718
#11 24.65 Progress: resolved 1239, reused 0, downloaded 782, added 782
#11 25.65 Progress: resolved 1239, reused 0, downloaded 816, added 816
#11 26.65 Progress: resolved 1239, reused 0, downloaded 837, added 836
#11 27.65 Progress: resolved 1239, reused 0, downloaded 873, added 873
#11 28.65 Progress: resolved 1239, reused 0, downloaded 949, added 949
#11 29.66 Progress: resolved 1239, reused 0, downloaded 993, added 993
#11 30.66 Progress: resolved 1239, reused 0, downloaded 1079, added 1079
#11 31.66 Progress: resolved 1239, reused 0, downloaded 1166, added 1166
#11 32.72 Progress: resolved 1239, reused 0, downloaded 1206, added 1205
#11 33.71 Progress: resolved 1239, reused 0, downloaded 1237, added 1237
#11 34.07 Progress: resolved 1239, reused 0, downloaded 1239, added 1239, done
#11 34.63 .../node_modules/protobufjs postinstall$ node scripts/postinstall
#11 34.67 .../node_modules/bufferutil install$ node-gyp-build
#11 34.67 .../node_modules/utf-8-validate install$ node-gyp-build
#11 34.67 .../node_modules/@sentry/cli install$ node ./scripts/install.js
#11 34.73 .../[email protected]/node_modules/esbuild postinstall$ node install.js
#11 34.77 .../node_modules/protobufjs postinstall: Done
#11 34.91 .../node_modules/protobufjs postinstall$ node scripts/postinstall
#11 34.95 .../node_modules/@sentry/cli install: [sentry-cli] Downloading from https://downloads.sentry-cdn.com/sentry-cli/1.77.3/sentry-cli-Linux-aarch64
#11 35.01 .../[email protected]/node_modules/esbuild postinstall: Done
#11 35.03 .../node_modules/protobufjs postinstall: Done
#11 35.03 .../node_modules/utf-8-validate install: gyp info it worked if it ends with ok
#11 35.03 .../node_modules/utf-8-validate install: gyp info using [email protected]
#11 35.03 .../node_modules/utf-8-validate install: gyp info using [email protected] | linux | arm64
#11 35.07 .../[email protected]/node_modules/sharp install$ (node install/libvips && node install/dll-copy && prebuild-install) || (node install/can-compile && node-gyp rebuild && node install/dll-copy)
#11 35.11 .../node_modules/bufferutil install: gyp info it worked if it ends with ok
#11 35.11 .../node_modules/bufferutil install: gyp info using [email protected]
#11 35.11 .../node_modules/bufferutil install: gyp info using [email protected] | linux | arm64
#11 35.17 .../node_modules/utf-8-validate install: gyp ERR! find Python
#11 35.18 .../node_modules/utf-8-validate install: gyp ERR! find Python Python is not set from command line or npm configuration
#11 35.18 .../node_modules/utf-8-validate install: gyp ERR! find Python Python is not set from environment variable PYTHON
#11 35.18 .../node_modules/utf-8-validate install: gyp ERR! find Python checking if "python3" can be used
#11 35.18 .../node_modules/utf-8-validate install: gyp ERR! find Python - executable path is ""
#11 35.18 .../node_modules/utf-8-validate install: gyp ERR! find Python - "" could not be run
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python checking if "python" can be used
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python - executable path is ""
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python - "" could not be run
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python **********************************************************
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python You need to install the latest version of Python.
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python Node-gyp should be able to find and use Python. If not,
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python you can try one of the following options:
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python - Use the switch --python="/path/to/pythonexecutable"
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python (accepted by both node-gyp and npm)
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python - Set the environment variable PYTHON
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python - Set the npm configuration variable python:
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python npm config set python "/path/to/pythonexecutable"
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python For more information consult the documentation at:
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python https://github.com/nodejs/node-gyp#installation
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python **********************************************************
#11 35.19 .../node_modules/utf-8-validate install: gyp ERR! find Python
#11 35.20 .../node_modules/utf-8-validate install: gyp ERR! configure error
#11 35.20 .../node_modules/utf-8-validate install: gyp ERR! stack Error: Could not find any Python installation to use
#11 35.20 .../node_modules/utf-8-validate install: gyp ERR! stack at PythonFinder.fail (/usr/local/lib/node_modules/pnpm/dist/node_modules/node-gyp/lib/find-python.js:306:11)
#11 35.20 .../node_modules/utf-8-validate install: gyp ERR! stack at PythonFinder.findPython (/usr/local/lib/node_modules/pnpm/dist/node_modules/node-gyp/lib/find-python.js:164:17)
#11 35.20 .../node_modules/utf-8-validate install: gyp ERR! stack at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
#11 35.20 .../node_modules/utf-8-validate install: gyp ERR! stack at async configure (/usr/local/lib/node_modules/pnpm/dist/node_modules/node-gyp/lib/configure.js:27:18)
#11 35.20 .../node_modules/utf-8-validate install: gyp ERR! stack at async run (/usr/local/lib/node_modules/pnpm/dist/node_modules/node-gyp/bin/node-gyp.js:81:18)
#11 35.20 .../node_modules/utf-8-validate install: gyp ERR! System Linux 5.15.49-linuxkit
#11 35.20 .../node_modules/utf-8-validate install: gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/pnpm/dist/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
#11 35.20 .../node_modules/utf-8-validate install: gyp ERR! cwd /usr/src/app/node_modules/.pnpm/[email protected]/node_modules/utf-8-validate
#11 35.21 .../node_modules/utf-8-validate install: gyp ERR! node -v v18.20.3
#11 35.21 .../node_modules/utf-8-validate install: gyp ERR! node-gyp -v v10.1.0
#11 35.21 .../node_modules/utf-8-validate install: gyp ERR! not ok
#11 35.25 .../node_modules/utf-8-validate install: Failed
#11 35.25 ELIFECYCLE Command failed with exit code 1.


executor failed running [/bin/sh -c npm install -g pnpm && pnpm install]: exit code: 1

[Feature] Connect bot to data source

It seems intuitive that, once a user creates a data source, he should be able to query it somehow. It would be great if there were a field in the 'create bot' window to connect the bot to an existing data source.

It's entirely possible I'm missing something, but I can't see how to make that connection at the moment.

Thank you very much,
Adam

[Bug] Docker install failed

Describe the bug
Running docker build -t chat-llamaindex . fails.

Deployment

  • [X ] Docker
  • Vercel
  • Server

Desktop (please complete the following information):
  • OS: Windows 11

Additional Logs

=> ERROR [build 9/9] RUN pnpm build 31.0s

[build 9/9] RUN pnpm build:
1.110
1.110 > chat-llamaindex@ build /usr/src/app
1.110 > npm run create-llama && next build
1.110
1.303
1.303 > create-llama
1.303 > bash create-llama.sh
1.303
1.316
1.316 Adding sources from create-llama...
18.83 npm warn skipping integrity check for git dependency ssh://[email protected]/watson/ci-info.git
30.77 node:internal/modules/cjs/loader:1143
30.77 throw err;
30.77 ^
30.77
30.77 Error: Cannot find module '/usr/src/app/node_modules/next/dist/bin/next'
30.77 at Module._resolveFilename (node:internal/modules/cjs/loader:1140:15)
30.77 at Module._load (node:internal/modules/cjs/loader:981:27)
30.77 at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:128:12)
30.77 at node:internal/main/run_main_module:28:49 {
30.77 code: 'MODULE_NOT_FOUND',
30.77 requireStack: []
30.77 }
30.77
30.77 Node.js v18.20.4
30.79 ELIFECYCLE Command failed with exit code 1.


Dockerfile:27

25 |
26 | # Build the application for production
27 | >>> RUN pnpm build
28 |
29 | # ---- Production Stage ----

ERROR: failed to solve: process "/bin/sh -c pnpm build" did not complete successfully: exit code: 1

[Bug] Generating datasource not working anymore on the latest update

Describe the bug
I've been using this project as a base for my site since last month and have just tried upgrading to the newest update (llamaindex edge).
Everything else seems fine, but generating a datasource (pnpm run generate) doesn't seem to be working anymore. (Tested on the original repository by cloning the latest update.)
The error says: "Cannot find package 'llamaindex' imported from..."

Additional bug:
https://chat.llamaindex.ai/
There is an error with the bots based on data sources. (The Red Hat Linux Expert, Apple Watch Genius, and German Basic Law Expert bots are not working.)

To Reproduce
Steps to reproduce the behavior:

  1. Clone repository

  2. Follow steps to create new data source (add folder to datasources folder -> add files to folder)

  3. Run "pnpm run generate " in terminal

  4. See error

  5. Go to Chat Llamaindex

  6. Select "German Basic Law Expert" bot

  7. Ask any question

  8. See error

Expected behavior
A new VectorStoreIndex is created for the data source (a new data source folder with data in the cache folder).
Normal chat experience at the Chat LlamaIndex site.

Screenshots
์Šคํฌ๋ฆฐ์ƒท 2024-03-21 112228
์Šคํฌ๋ฆฐ์ƒท 2024-03-21 132648

Deployment

  • Docker
  • Vercel
  • Server

Desktop (please complete the following information): Not applicable

Smartphone (please complete the following information): Not applicable

Thank you!!

[Bug] May I know why there is a BadRequestError: 400 when I run "npm run generate"?

I get this output when I run npm run generate:

BadRequestError: 400 This model's maximum context length is 8192 tokens, however you requested 23869 tokens (23869 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
    at APIError.generate (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected][email protected]/node_modules/openai/error.mjs:41:20)
    at OpenAI.makeStatusError (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected][email protected]/node_modules/openai/core.mjs:256:25)
    at OpenAI.makeRequest (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected][email protected]/node_modules/openai/core.mjs:299:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async OpenAIEmbedding.getOpenAIEmbedding (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/embeddings/OpenAIEmbedding.js:82:26)
    at async OpenAIEmbedding.getTextEmbeddings (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/embeddings/OpenAIEmbedding.js:93:16)
    at async OpenAIEmbedding.getTextEmbeddingsBatch (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/embeddings/types.js:32:36)
    at async VectorStoreIndex.getNodeEmbeddingResults (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:89:28)
    at async VectorStoreIndex.insertNodes (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:189:34)
    at async VectorStoreIndex.buildIndexFromNodes (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:109:9)
    at async VectorStoreIndex.init (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:55:13)
    at async VectorStoreIndex.fromDocuments (file:///C:/chat-llama/chat-llamaindex/node_modules/.pnpm/[email protected]_@[email protected][email protected][email protected]/node_modules/llamaindex/dist/indices/vectorStore/index.js:132:16)
    at async file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:37:5
    at async getRuntime (file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:22:3)
    at async generateDatasource (file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:30:14)
    at async file:///C:/chat-llama/chat-llamaindex/scripts/generate.mjs:86:3

Unclear where to add datasource for bots created in UI

First of all, thanks for the great solution.

Everything is running fine locally, but I'm not clear where to edit the bots created from the UI. When I go to app/bots/bot.data.ts I do not see the bot I created, and when I edit one of the demo bots in that file I don't see the changes in the UI.

Deactivated GPT4 on chat.llamaindex.ai

Dear friends, I regret to inform you that chat.llamaindex.ai no longer returns responses; requests fail with:

{
"error": true,
"message": "There was an error calling the OpenAI API. Please try again later."
}

[Feature] Python version

Dear all,
Thanks for this great contribution to the LLM community.
Are you considering a chat-llamaindex implementation based on Python instead of TypeScript?
