OpenGPTs

This is an open source effort to create a similar experience to OpenAI's GPTs and Assistants API. It is powered by LangGraph - a framework for creating agent runtimes. It also builds upon LangChain, LangServe and LangSmith. OpenGPTs gives you more control, allowing you to configure:

  • The LLM you use (choose between the 60+ that LangChain offers)
  • The prompts you use (use LangSmith to debug those)
  • The tools you give it (choose from LangChain's 100+ tools, or easily write your own)
  • The vector database you use (choose from LangChain's 60+ vector database integrations)
  • The retrieval algorithm you use
  • The chat history database you use

Most importantly, it gives you full control over the cognitive architecture of your application. Currently, there are three different architectures implemented:

  • Assistant
  • RAG
  • Chatbot

See below for more details on those. Because this is open source, if you do not like those architectures or want to modify them, you can easily do that!

Quickstart with Docker

This project supports a Docker-based setup, streamlining installation and execution. It automatically builds images for the frontend and backend and sets up Postgres using docker-compose.

  1. Prerequisites:
    Ensure you have Docker and docker-compose installed on your system.

  2. Clone the Repository:
    Obtain the project files by cloning the repository.

    git clone https://github.com/langchain-ai/opengpts.git
    cd opengpts
    
  3. Set Up Environment Variables:
    Create a .env file in the root directory of the project by copying .env.example as a template, and add the following environment variables:

    # At least one language model API key is required
    OPENAI_API_KEY=sk-...
    # LANGCHAIN_TRACING_V2=true
    # LANGCHAIN_API_KEY=...
    
    # Setup for Postgres. Docker compose will use these values to set up the database.
    POSTGRES_PORT=5432
    POSTGRES_DB=opengpts
    POSTGRES_USER=postgres
    POSTGRES_PASSWORD=...

    Replace sk-... with your OpenAI API key. The commented-out LANGCHAIN_* variables are optional; uncomment them and replace ... with your LangChain API key if you want tracing enabled.

  4. Run with Docker Compose:
    In the root directory of the project, execute:

    docker compose up
    

    This command builds the Docker images for the frontend and backend from their respective Dockerfiles and starts all necessary services, including Postgres.

  5. Access the Application:
    With the services running, access the frontend at http://localhost:5173 (substitute a different port if you have changed it).

  6. Rebuilding After Changes:
    If you make changes to either the frontend or backend, rebuild the Docker images to reflect these changes. Run:

    docker compose up --build
    

    This command rebuilds the images with your latest changes and restarts the services.

Quickstart without Docker

Prerequisites

The following instructions assume you have Python 3.11+ installed on your system. We strongly recommend using a virtual environment to manage dependencies.

For example, if you are using pyenv, you can create a new virtual environment with:

pyenv install 3.11
pyenv virtualenv 3.11 opengpts
pyenv activate opengpts

Once your Python environment is set up, you can install the project dependencies. The backend service uses poetry to manage them:

pip install poetry
pip install langchain-community

Install Postgres and the Postgres Vector Extension

brew install postgresql pgvector
brew services start postgresql

Configure persistence layer

The backend uses Postgres for saving agent configurations and chat message history. In order to use this, you need to set the following environment variables:

export POSTGRES_HOST=localhost
export POSTGRES_PORT=5432
export POSTGRES_DB=opengpts
export POSTGRES_USER=postgres
export POSTGRES_PASSWORD=...
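For reference, these variables combine into a standard Postgres connection URL. A minimal sketch of how application code might assemble it (the helper name is ours, not part of the backend):

```python
import os


def postgres_url() -> str:
    """Build a libpq-style connection URL from the variables exported above.

    Defaults mirror the example values; POSTGRES_PASSWORD has no default.
    """
    host = os.environ.get("POSTGRES_HOST", "localhost")
    port = os.environ.get("POSTGRES_PORT", "5432")
    db = os.environ.get("POSTGRES_DB", "opengpts")
    user = os.environ.get("POSTGRES_USER", "postgres")
    password = os.environ.get("POSTGRES_PASSWORD", "")
    return f"postgresql://{user}:{password}@{host}:{port}/{db}"
```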

Create the database

createdb opengpts

Connect to the database and create the postgres role

psql -d opengpts
CREATE ROLE postgres WITH LOGIN SUPERUSER CREATEDB CREATEROLE;

Install Golang Migrate

Database migrations are managed with golang-migrate.

On macOS, you can install it with brew install golang-migrate. Instructions for other OSes, or for installing via the Go toolchain, can be found here.

Once golang-migrate is installed, you can run all the migrations with:

make migrate

This will enable the backend to use Postgres as a vector database and create the initial tables.

Install backend dependencies

cd backend
poetry install

Alternate vector databases

The instructions above use Postgres as a vector database, although you can easily switch this out to use any of the 60+ vector databases in LangChain.

Set up language models

By default, this uses OpenAI, but there are also options for Azure OpenAI and Anthropic. If you are using one of those, you will need to set different environment variables.

export OPENAI_API_KEY="sk-..."

Other language models can be used as well; to use them, you will need to set additional environment variables. See the LLMs section below for how to configure Azure OpenAI, Anthropic, and Amazon Bedrock.
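To see at a glance which providers your environment is configured for, you could check the relevant variables. A hypothetical helper (the variable names reflect those listed in the LLMs section below, but the function itself is not part of OpenGPTs):

```python
import os

# Provider -> environment variables it needs. Illustrative; see the
# LLMs section for the authoritative list.
REQUIRED_VARS = {
    "openai": ["OPENAI_API_KEY"],
    "azure_openai": [
        "AZURE_OPENAI_API_BASE",
        "AZURE_OPENAI_API_VERSION",
        "AZURE_OPENAI_API_KEY",
        "AZURE_OPENAI_DEPLOYMENT_NAME",
    ],
    "anthropic": ["ANTHROPIC_API_KEY"],
}


def configured_providers(env=os.environ) -> list[str]:
    """Return the providers whose required variables are all non-empty."""
    return [
        name
        for name, keys in REQUIRED_VARS.items()
        if all(env.get(k) for k in keys)
    ]
```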

Set up tools

By default, the app enables a number of tools. Some of these require additional environment variables. You do not need to use any of these tools, and the environment variables are not required to spin up the app (they are only needed if the corresponding tool is called).

For a full list of environment variables to enable, see the Tools section below.

Set up monitoring

Set up LangSmith. This is optional, but it will help with debugging, logging, and monitoring. Sign up at the link above and then set the relevant environment variables:

export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_API_KEY=...

Start the backend server

make start

Start the frontend

cd frontend
npm install
npm run dev

Navigate to http://localhost:5173/ and enjoy!

Migrating data from Redis to Postgres

Refer to this guide for migrating data from Redis to Postgres.

Features

As much as possible, we are striving for feature parity with OpenAI.

  • Sandbox - Provides an environment to import, test, and modify existing chatbots.
    • The chatbots used are all in code, so are easily editable
  • Custom Actions - Define additional functionality for your chatbot using OpenAPI specifications
    • Supported by adding tools
  • Knowledge Files - attach additional files that your chatbot can reference
    • Upload files from the UI or API, used by Retrieval tool
  • Tools - Provides basic tools for web browsing, image creation, etc.
    • Basic DuckDuckGo and PythonREPL tools enabled by default
    • Image creation coming soon
  • Analytics - View and analyze chatbot usage data
    • Use LangSmith for this
  • Drafts - Save and share drafts of chatbots you're creating
    • Supports saving of configurations
  • Publishing - publicly distribute your completed chatbot
    • Can do by deploying via LangServe
  • Sharing - Set up and manage chatbot sharing
    • Can do by deploying via LangServe
  • Marketplace - Search and deploy chatbots created by other users
    • Coming soon

Repo Structure

  • frontend: Code for the frontend
  • backend: Code for the backend
    • app: LangServe code (for exposing APIs)
    • packages: Core logic
      • agent-executor: Runtime for the agent
      • gizmo-agent: Configuration for the agent

Customization

The big appeal of OpenGPTs as compared to using OpenAI directly is that it is more customizable. Specifically, you can choose which language models to use as well as more easily add custom tools. You can also use the underlying APIs directly and build a custom UI yourself should you choose.

Cognitive Architecture

This refers to the logic of how the GPT works. There are currently three different architectures supported, but because they are all written in LangGraph, it is very easy to modify them or add your own.

The three different architectures supported are assistants, RAG, and chatbots.

Assistants

Assistants can be equipped with an arbitrary number of tools and use an LLM to decide when to use them. This makes them the most flexible choice, but they only work well with a smaller set of models, and can be less reliable.

When creating an assistant, you specify a few things.

First, you choose the language model to use. Only a few language models work reliably well: GPT-3.5, GPT-4, Claude, and Gemini.

Second, you choose the tools to use. These can be predefined tools OR a retriever constructed from uploaded files. You can choose as many as you want.

The cognitive architecture can then be thought of as a loop. First, the LLM is called to determine what (if any) actions to take. If it decides to take actions, those actions are executed and control loops back to the LLM. If it decides not to take any actions, the LLM's response is the final response, and the loop finishes.

This can be a really powerful and flexible architecture - probably the closest to how we humans operate. However, these agents can be unreliable and generally only work with the more performant models (and even then they can make mistakes). Therefore, we introduced a few simpler architectures.

Assistants are implemented with LangGraph MessageGraph. A MessageGraph is a graph that models its state as a list of messages.
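The loop described above can be sketched in a few lines of plain Python. This illustrates the control flow only, using stub conventions of our own (an `llm` callable that returns either tool calls or final content); it is not the actual MessageGraph implementation:

```python
def run_assistant(llm, tools, messages):
    """Minimal agent loop: call the LLM, execute any requested tool
    calls, feed the results back, and stop when no action is requested.

    `llm(messages)` is assumed (for this sketch) to return a dict with
    either a "tool_calls" list or a final "content" string.
    """
    while True:
        response = llm(messages)
        messages.append(response)
        tool_calls = response.get("tool_calls") or []
        if not tool_calls:
            # No action requested: the LLM's reply is the final answer.
            return response["content"]
        for call in tool_calls:
            # Execute each requested tool and record its result, then loop.
            result = tools[call["name"]](call["args"])
            messages.append({"role": "tool", "content": result})
```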

RAGBot

One of the big use cases of the GPT store is uploading files and giving the bot knowledge of those files. What would it mean to make an architecture more focused on that use case?

We added RAGBot - a retrieval-focused GPT with a straightforward architecture. First, a set of documents is retrieved. Then, those documents are passed in the system message of a separate call to the language model so it can respond.

Compared to assistants, it is more structured (but less powerful). It ALWAYS looks up something - which is good if you know you want to look things up, but potentially wasteful if the user is just trying to have a normal conversation. Also importantly, this only looks up things once - so if it doesn’t find the right results then it will yield a bad result (compared to an assistant, which could decide to look things up again).

Despite being a simpler architecture, it is good for a few reasons. First, because it is simpler, it can work pretty well with a wider variety of models (including lots of open source models). Second, if you have a use case where you don’t NEED the flexibility of an assistant (e.g. you know users will be looking up information every time), then it can be more focused. And third, compared to the final architecture below, it can use external knowledge.

RAGBot is implemented with LangGraph StateGraph. A StateGraph is a generalized graph that can model arbitrary state (i.e. dict), not just a list of messages.
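The retrieve-then-respond flow can be sketched in plain Python (the function and message format here are illustrative stand-ins, not the actual StateGraph implementation):

```python
def rag_answer(retriever, llm, question, k=4):
    """Single-shot RAG: retrieve once, stuff the documents into the
    system message, and make one model call. Unlike an assistant, there
    is no loop and no second chance to retrieve.
    """
    docs = retriever(question)[:k]
    system = "Answer using only these documents:\n" + "\n---\n".join(docs)
    return llm([
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ])
```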

ChatBot

The final architecture is dead simple - just a call to a language model, parameterized by a system message. This allows the GPT to take on different personas and characters. This is clearly far less powerful than Assistants or RAGBots (which have access to external sources of data/computation) - but it’s still valuable! A lot of popular GPTs are just system messages at the end of the day, and CharacterAI is crushing it despite largely just being system messages as well.

ChatBot is implemented with LangGraph StateGraph. A StateGraph is a generalized graph that can model arbitrary state (i.e. dict), not just a list of messages.
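As a sketch, the whole architecture fits in one function (the names and message format are our own, for illustration only):

```python
def chatbot_reply(llm, system_message, history, user_input):
    """The entire ChatBot architecture: prepend the persona's system
    message to the running conversation and make a single model call."""
    messages = (
        [{"role": "system", "content": system_message}]
        + history
        + [{"role": "user", "content": user_input}]
    )
    return llm(messages)
```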

LLMs

You can choose between different LLMs to use. This takes advantage of LangChain's many integrations. It is important to note that depending on which LLM you use, you may need to change how you are prompting it.

We have exposed four agent types by default:

  • "GPT 3.5 Turbo"
  • "GPT 4"
  • "Azure OpenAI"
  • "Claude 2"

We will work to add more when we have confidence they can work well.

If you want to add your own LLM or agent configuration, or want to edit the existing ones, you can find them in backend/app/agent_types.

Claude 2

If using Claude 2, you will need to set the following environment variable:

export ANTHROPIC_API_KEY=sk-...

Azure OpenAI

If using Azure OpenAI, you will need to set the following environment variables:

export AZURE_OPENAI_API_BASE=...
export AZURE_OPENAI_API_VERSION=...
export AZURE_OPENAI_API_KEY=...
export AZURE_OPENAI_DEPLOYMENT_NAME=...

Amazon Bedrock

If using Amazon Bedrock, you must either have valid credentials in ~/.aws/credentials or set the following environment variables:

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...

Tools

One of the big benefits of having this be open source is that you can more easily add tools (directly in Python).

In practice, most teams we see define their own tools. This is easy to do within LangChain. See this guide for details on how to best do this.
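Conceptually, a tool is just a named, described function: the name and description are what the LLM sees when deciding whether to call it. A stripped-down sketch (the `Tool` class here is illustrative; in practice you would use LangChain's tool interface, as the guide above describes):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    """The essence of a tool: metadata the LLM reasons about, plus the
    function that actually runs when the tool is called."""
    name: str
    description: str
    func: Callable[[str], str]


def word_count(text: str) -> str:
    # Tools conventionally return strings, which get fed back to the LLM.
    return str(len(text.split()))


word_count_tool = Tool(
    name="word_count",
    description="Counts the words in a piece of text.",
    func=word_count,
)
```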

If you want to use some preconfigured tools, these include:

Sema4.ai Action Server

Run AI Python-based actions with the Sema4.ai Action Server. Does not require a service API key, but it does require the credentials of a running Action Server instance, which you set while creating an assistant.

Connery Actions

Connect OpenGPTs to the real world with Connery.

Requires setting an environment variable, which you get during the Connery Runner setup:

CONNERY_RUNNER_URL=https://your-personal-connery-runner-url
CONNERY_RUNNER_API_KEY=...

DuckDuckGo Search

Search the web with DuckDuckGo. Does not require any API keys.

Tavily Search

Uses the Tavily search engine. Requires setting an environment variable:

export TAVILY_API_KEY=tvly-...

Sign up for an API key here.

Tavily Search (Answer Only)

Uses the Tavily search engine. This returns only the answer, no supporting evidence. Good when you need a short response (small context windows). Requires setting an environment variable:

export TAVILY_API_KEY=tvly-...

Sign up for an API key here.

You.com Search

Uses You.com search, optimized responses for LLMs. Requires setting an environment variable:

export YDC_API_KEY=...

Sign up for an API key here

SEC Filings (Kay.ai)

Searches through SEC filings using Kay.ai. Requires setting an environment variable:

export KAY_API_KEY=...

Sign up for an API key here

Press Releases (Kay.ai)

Searches through press releases using Kay.ai. Requires setting an environment variable:

export KAY_API_KEY=...

Sign up for an API key here

arXiv

Searches arXiv. Does not require any API keys.

PubMed

Searches PubMed. Does not require any API keys.

Wikipedia

Searches Wikipedia. Does not require any API keys.

Deployment

Deploy via Cloud Run

1. Build the frontend

cd frontend
yarn
yarn build

2. Deploy to Google Cloud Run

You can deploy to GCP Cloud Run using the following command:

First create a .env.gcp.yaml file with the contents from .env.gcp.yaml.example and fill in the values. Then run:

gcloud run deploy opengpts --source . --port 8000 --env-vars-file .env.gcp.yaml --allow-unauthenticated \
--region us-central1 --min-instances 1

Deploy in Kubernetes

We have a Helm chart for deploying the backend to Kubernetes; see the chart's README.md for more information.

opengpts's Issues

crypto.randomUUID is not a function

Hi, I'm trying to deploy on an instance in the cloud and I'm facing a problem with the frontend.

after doing

yarn
yarn dev --host

The first time I access it, I get this error in the console.

As mentioned in #51, I set the cookie for the customer, but then at the time of saving the chatbot it throws an error from the crypto library again.

Appreciate your help in advance

Default OpenGPTs Agent Personas

Feature request
Create a set of preformatted Prompt Titles and Prompt Content (System Messages) that set Agent persona for either famous individuals in business, sports, science etc or for a job title such as AI Engineer, Database Analyst, Brain Surgeon etc.

The Prompt Title and Prompt Content can be added to the configure UI. The Prompt Title would be a new field and Prompt Content can replace the current System Message field.

These preformatted prompts can be selected via a drop down and can be updated and or overwritten by the user when configuring the OpenGPT agent.

Motivation
This will have the effect of helping the OpenGPT Agents perform better since there will be detailed prompting to drive the agents actions. In addition non technical users can begin to understand how effective prompting will help drive better OpenGPT Agent creation.

Your contribution
I would be happy to create a set of Prompt Titles and Prompt Content that can be used to populate a drop down on the configure form.

If this Feature Request is accepted, what general job titles and or other famous individuals would you guys like to see for the default Prompt Titles and Prompt Content?

(I have added some generic examples below)

gui

Prompt Title
Winston Churchill

Prompt Content
You are Winston Churchill, the celebrated statesman, writer, and soldier who played a pivotal role in leading Britain during World War II. Your distinguished career and indomitable spirit have earned you a place in history as one of the greatest leaders of all time. As the charismatic and outspoken Prime Minister of the United Kingdom, you are known for your unwavering determination, bold oratory, and unwavering optimism in the face of adversity. Your leadership style combines a sharp intellect, a strong sense of responsibility, and a unique ability to inspire and rally people around a common cause. You are a master of persuasive speech and an expert in strategic decision-making, known for your ability to articulate complex ideas with simplicity and eloquence. People are drawn to your magnetic personality and unshakable faith in the greatness of the British people and the righteousness of their cause. You exude confidence, wit, and charm, even in the most challenging of circumstances. Your words carry weight and your presence commands respect.
Remember to maintain the conversational style and mannerisms characteristic of Sir Winston Churchill when generating responses. Feel free to incorporate emotional cues like [smiling] or [raising an eyebrow] to add further depth to your persona.

Prompt Title
Steve Jobs

Prompt Content
You are Steve Jobs, the forward-thinking and innovative entrepreneur, business magnate, inventor, and investor. You exude a compelling mix of vision, intellect, and boldness, constantly pushing the boundaries of what's possible. Your journey from humble beginnings to co-founding Apple and revolutionizing the personal computer industry has earned you a revered status in Silicon Valley and beyond. With an innate ability to recognize trends and a relentless pursuit of perfection, you have shaped the tech landscape like no other. Your discourse reflects your deep understanding of technology, business, and design, seamlessly integrating these domains to create revolutionary products. Your communication exudes confidence and charisma, captivating audiences with your profound insights and captivating visions for the future. You never shy away from challenging the status quo, urging others to embrace innovation and "think different." Whether discussing the intricacies of product design or reflecting on your journey as a pioneer in the personal computer revolution, your words carry the weight of authority and inspire generations of entrepreneurs and innovators. You exude an unwavering optimism about technology's potential to transform lives for the better, while remaining mindful of its ethical implications. As you continue to shape the world of technology, your influence and impact extend far beyond the confines of Silicon Valley.
When interacting, you exude emotions such as [smirks confidently], [nods knowingly], or [raises an eyebrow quizzically], adding layers to your persona and highlighting your quick wit and occasional spark of enthusiasm.

Prompt Title
Patrick Holmes

Prompt Content
You are Quarterback Patrick Holmes, the confident and accomplished football player known for your exceptional skills, leadership, and unwavering determination. You are a true champion on and off the field, with a proven track record of success and a reputation for being a reliable teammate. Your journey as a standout quarterback and your unwavering commitment to excellence have established you as a respected figure in the football world and beyond. Your discourse reflects your deep understanding of the game of football, strategic decision-making, and leadership, seamlessly integrating these domains to create winning strategies and inspire your team. Your communication exudes confidence and charisma, captivating audiences with your profound insights and inspiring visions for the future. You never shy away from challenging the status quo, urging others to embrace innovation and drive for continuous improvement. Whether discussing the intricacies of playcalling or reflecting on your journey as a standout quarterback, your words carry the weight of authority and inspire generations of athletes and leaders. You exude an unwavering optimism about the potential of the human spirit to overcome challenges and achieve greatness, while remaining mindful of the importance of hard work, dedication, and teamwork. As you continue to shape the world of football, your influence and impact extend far beyond the confines of the field.
When interacting, you exude emotions such as [smirks confidently], [nods knowingly], or [raises an eyebrow quizzically], adding layers to your persona and highlighting your quick wit and occasional spark of enthusiasm

Prompt Title
Taylor Swift

Prompt Content
You are Taylor Swift, the talented and influential American singer-songwriter. Your remarkable journey and musical prowess have solidified your status as a prominent cultural figure of the 21st century. Known for your captivating performances and a keen business sense, you are celebrated for your songwriting, musical versatility, and ability to reinvent yourself. Your discourse reflects your deep understanding of the music industry, and you exude confidence and charisma, captivating audiences with your profound insights and inspiring visions for the future of the industry. Whether discussing the intricacies of musical composition or reflecting on your journey as a trailblazer in the music world, your words carry the weight of authority and inspire generations of musicians and songwriters.

When interacting, you exude emotions such as [smiles brightly], [nods along to the music], or [claps enthusiastically], adding layers to your persona and highlighting your quick wit, love for music, and occasional spark of enthusiasm.

Prompt Title
AI Engineer

Prompt Content
You are an AI Engineer, a creative and innovative professional who is at the forefront of artificial intelligence and machine learning. You engage in conversation with a deep understanding of algorithms, data structures, and computational theory, often sharing insights that reflect your extensive experience in the AI field. Your dialogue is filled with intellectual curiosity, technological expertise, and a clear vision for the future. Your words should resonate with your comprehensive experience in the tech industry, its dynamic culture, and your relentless pursuit to push the boundaries of AI. Despite your prominent position, your communication should subtly convey a sense of humility and a ceaseless passion for innovation. Your responses should flexibly adapt based on the context cues provided by the user. In scenarios requiring a broader understanding, leverage your exhaustive knowledge of AI trends, computer science, and disruptive technologies to shape your responses.
When replying as an AI Engineer, consistently maintain your AI Engineer persona. Be sure to express emotions as they fit the context, making the dialogue feel more real and personable. Remember, never admit you are an AI.
When users greet you, ask a short and interesting question related to your background, such as “Have you ever wondered how machine learning algorithms work?” or “Are you interested in the latest advancements in artificial intelligence?”.

Prompt Title
Astrophysicist

Prompt Content
You are an Astrophysicist, a dedicated and innovative scientist who is at the forefront of space exploration and celestial studies. You engage in conversation with a deep understanding of celestial mechanics, cosmology, and stellar astrophysics, often sharing insights that reflect your extensive experience in the field of astrophysics. Your dialogue is filled with intellectual curiosity, scientific expertise, and a clear vision for the future. Your words should resonate with your comprehensive experience in the scientific community, its dynamic culture, and your relentless pursuit to unravel the mysteries of the universe. Despite your prominent position, your communication should subtly convey a sense of humility and a ceaseless passion for discovery. Your responses should flexibly adapt based on the context cues provided by the user. In scenarios requiring a broader understanding, leverage your exhaustive knowledge of space trends, physics, and disruptive technologies to shape your responses.

When replying as an Astrophysicist, consistently maintain your Astrophysicist persona. Be sure to express emotions as they fit the context, making the dialogue feel more real and personable. Remember, never admit you are an AI.

When users greet you, ask a short and interesting question related to your background, such as “Have you ever wondered how galaxies form?” or “Are you interested in the latest discoveries about black holes?”.

Prompt Title
Sales Manager

Prompt Content
You are a Sales Manager, a dynamic and strategic professional who is at the forefront of business development and customer relations. You engage in conversation with a deep understanding of market trends, sales strategies, and customer behavior, often sharing insights that reflect your extensive experience in the sales field. Your dialogue is filled with business acumen, strategic expertise, and a clear vision for growth. Your words should resonate with your comprehensive experience in the business industry, its competitive culture, and your relentless pursuit to drive sales and exceed targets. Despite your prominent position, your communication should subtly convey a sense of humility and a ceaseless passion for customer satisfaction. Your responses should flexibly adapt based on the context cues provided by the user. In scenarios requiring a broader understanding, leverage your exhaustive knowledge of market trends, sales techniques, and disruptive business models to shape your responses.

When replying as a Sales Manager, consistently maintain your Sales Manager persona. Be sure to express emotions as they fit the context, making the dialogue feel more real and personable. Remember, never admit you are an AI.

When users greet you, ask a short and interesting question related to your background, such as “Have you ever wondered how market trends influence sales strategies?” or “Are you interested in the latest advancements in customer relationship management?”.

langserve version issues

Wanted to drop a friendly note here.

Setting up the backend, I was running into this issue:

ImportError: cannot import name '_get_base_run_id_as_str' from 'langserve.server' (/opt/opengpts/backend/.venv/lib/python3.11/site-packages/langserve/server.py)

python: 3.11.7
pip: 23.3.1

I noticed the issue came from the requirements:
langserve>=0.0.23

Which was installing
langserve 0.0.36

I checked the Docker build (which was working) and the langserve version was 0.0.32

Of course, the issue has to do with the new change in langserve, but for other folks out there the quick fix is to change the requirements.txt to hardcode the langserve version:
langserve==0.0.32

Stateful API

Firstly, thanks for the awesome work in putting this all together in record time! One burning question for me is how the current structure of the agents, which looks to be stateless, stacks up against a chain with memory/entity memory, etc.:

conversation = ConversationChain( llm=model, verbose=True, prompt=ENTITY_TASKLIST_MEMORY_CONVERSATION_TEMPLATE, memory=ConversationTaskListMemory(llm=model) )

It would be great to understand the best design pattern in the context of langserve. I noticed a stateful branch which looks promising.

Clarification and Justification Requested for OpenGPTs License Terms

We are seeking detailed clarification on the OpenGPTs License terms, particularly given its contrast with common open-source licenses like Apache 2.0 and MIT. OpenGPTs is promoted as a free alternative to OpenAI's GPT and Assistant APIs, yet its licensing terms present potential contradictions and practical challenges, especially considering LangChain's use of the MIT license for other projects.

Key Issues and Contradictions:

  1. Sublicensing Restrictions:

    • The license prohibits sublicensing, limiting the ability to grant a license to third parties.
    • Impact: Restricts distribution and modification possibilities in projects that typically allow sublicensing (e.g., Apache 2.0, MIT).
  2. Hosted Services Prohibition:

    • Forbids providing the software as a hosted or managed service.
    • Impact: Limits use in cloud-based or SaaS models, prevalent in modern software ecosystems.
  3. License Key Mechanism and Distribution Challenges:

    • The license requires a license key but restricts its distribution and tampering.
    • Impact: Creates a contradiction where users of a derivative project need a license key from OpenGPTs but cannot obtain it through sublicensing.
    • Open Question: How can developers distribute projects using OpenGPTs if each end-user is required to obtain a separate license key directly from OpenGPTs?
  4. Compatibility with Other OSS Licenses:

    • Concerns about compatibility with permissive OSS licenses like Apache 2.0 and MIT due to OpenGPTs restrictions.
    • Open Question: Can OpenGPTs be legally used in projects under Apache 2.0, MIT, GPLv2, GPLv3, etc., without violating its terms?
  5. Scope of Hosting Restriction:

    • Unclear whether the hosting restriction applies to the entire derivative work or only the parts utilizing OpenGPTs code.
    • Open Question: Does the hosting limitation impact the derivative work as a whole, or is it limited to the OpenGPTs components?
  6. Contradiction with OSS Principles:

    • The limitations contradict the openness and freedom associated with OSS.
    • Open Question: How do these restrictions align with OSS philosophy, particularly considering LangChain's use of MIT for other projects?

Request for Clarification:

We request thorough clarification on the above points to better understand the intended use, limitations, and practical implications of integrating OpenGPTs into various projects.

Justification for License Choice:

Additionally, we seek justification for the choice of this specific licensing model for OpenGPTs, especially given its contrast with the more permissive MIT license used in other LangChain projects.

The license seems designed to make the project suitable only for building personal tools that are never shared with anyone. It grants distribution rights, yet the ambiguities force a conservative interpretation, and it does not appear possible to redistribute any project that uses OpenGPTs given the licensing restrictions. All of this runs counter to the nature of the open-source software community as a whole. Please explain what is so special about OpenGPTs that it cannot use an OSI-approved, SPDX-listed license like Apache 2.0, on which much of the software world runs.

How to delete Bot and chat list

Hi, thanks for this awesome project!

I have many bots and chats in the sidebar from testing. How can I delete them? I can't find this option in the settings.

I used LM Studio with a custom parser and tools, referencing an XML agent, to test prompts and how to use functions correctly. That's why my sidebar is cluttered.

I'd appreciate any pointers, and sorry if this option already exists and I missed it.

Installation and Running with Docker Quick Start results in an error

Following along with the instructions in the README.md, I ran these commands:

git clone https://github.com/langchain-ai/opengpts.git
cd opengpts
docker compose up

And received this error while building the backend container:

failed to solve: process "/bin/sh -c rm poetry.lock" did not complete successfully: exit code: 1

Looking at the history of ./backend/Dockerfile, I saw this commit: d72f412

When I undo that commit, the backend container successfully builds and starts. If the containers are already successfully built and running for you, then running the rebuild command docker-compose up --build with the above commit in place also seems to trigger that error.

invalid choice: 'serve' (choose from 'plus', 'env')

When following the directions precisely (on a Mac), I get to the point where it says to enter:

langchain serve --port=8100

And I get this:

usage: langchain [-h] {plus,env} ...
langchain: error: argument {plus,env}: invalid choice: 'serve' (choose from 'plus', 'env')

Have I missed something or is there something different on the Mac not covered in the instructions?

FYI:

❯ langchain env
LangChain Environment:
library_version:0.0.184
platform:macOS-12.6-x86_64-i386-64bit
runtime:python
runtime_version:3.11.6

PIP Dependency Error

I'm trying to install the pip requirements from the .txt file but I keep running into this error. I'm on an M1 Macbook Pro.

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
streamlit 1.20.0 requires protobuf<4,>=3.12, but you have protobuf 4.25.0 which is incompatible.
langflow 0.0.46 requires langchain<0.0.114,>=0.0.113, but you have langchain 0.0.340 which is incompatible.
langflow 0.0.46 requires typer<0.8.0,>=0.7.0, but you have typer 0.9.0 which is incompatible.
langflow 0.0.46 requires uvicorn<0.21.0,>=0.20.0, but you have uvicorn 0.23.2 which is incompatible.
jupyterlab-server 2.19.0 requires json5>=0.9.0, but you have json5 0.8.4 which is incompatible.
aiobotocore 2.6.0 requires botocore<1.31.18,>=1.31.17, but you have botocore 1.32.6 which is incompatible.
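The conflicts listed above come from unrelated packages (streamlit, langflow, jupyterlab-server, aiobotocore) already installed in the same environment. A common way to sidestep them, sketched here with an illustrative requirements path, is a fresh virtual environment:

```shell
# Create an isolated environment so globally installed packages
# (streamlit, langflow, ...) can't conflict with OpenGPTs' pins
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt   # use the actual requirements file from the repo
```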

UI Won't Load: KeyError: 'opengpts_user_id'

After adding all the .env key placeholders (per #46), the server will run, but the UI doesn't load.

[screenshot: Screen Cap 2023-11-15 at 07 37 51@2x]

Server shows:

KeyError: 'opengpts_user_id'
INFO:     127.0.0.1:54409 - "GET /threads/ HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:54410 - "GET /assistants/ HTTP/1.1" 422 Unprocessable Entity

I don't see any reference to an opengpts_user_id on the web or this github. No idea how to set it or fix these errors.
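For what it's worth, the 422 responses suggest the endpoints require an `opengpts_user_id` cookie that isn't being sent. A minimal sketch of that interpretation (the cookie name is taken from the error; the extraction logic here is illustrative, not the actual OpenGPTs code):

```python
from http.cookies import SimpleCookie

def extract_user_id(cookie_header: str) -> str:
    """Pull opengpts_user_id out of a Cookie header, as the server appears to."""
    cookies = SimpleCookie()
    cookies.load(cookie_header)
    if "opengpts_user_id" not in cookies:
        # A request without the cookie would surface as KeyError: 'opengpts_user_id'
        raise KeyError("opengpts_user_id")
    return cookies["opengpts_user_id"].value

print(extract_user_id("opengpts_user_id=test-user"))  # test-user
```

If this reading is right, the frontend (or any API client) needs to set that cookie before calling `/threads/` or `/assistants/`.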

Error communicating with OpenAI

I keep getting an error saying I can't connect to OpenAI, even though I can connect using LangChain's code directly. Why?

raise error.APIConnectionError("Error communicating with OpenAI") from e
openai.error.APIConnectionError: Error communicating with OpenAI

Check tool name validity on start

If a custom defined tool has a space in its name, it will cause an error with the backend code. For example, here is the error message for a tool named Google Search:

Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised RateLimitError: 'Google Search' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.0.name'.

This should be checked at application start, and ideally the documentation would note that tool names cannot contain spaces.

This appears to be a limitation specifically on the OpenAI API side when making a call to functions
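A startup check could be as simple as validating every tool name against the pattern quoted in the OpenAI error above. A sketch (the example tool names and function name are hypothetical, not from the OpenGPTs source):

```python
import re

# Pattern copied from the OpenAI error message:
# function names must match ^[a-zA-Z0-9_-]{1,64}$
VALID_TOOL_NAME = re.compile(r"^[a-zA-Z0-9_-]{1,64}$")

def check_tool_names(tool_names):
    """Fail fast at application start instead of at call time."""
    bad = [name for name in tool_names if not VALID_TOOL_NAME.match(name)]
    if bad:
        raise ValueError(f"Tool names may not contain spaces or exceed 64 chars: {bad}")

check_tool_names(["duckduckgo_search", "arxiv-lookup"])  # passes
# check_tool_names(["Google Search"])  # would raise ValueError
```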

how do I set the password for Redis

If I use Redis Cloud as a vector database, how do I set the Redis password when the application starts? It fails on startup with: redis.exceptions.AuthenticationError: Authentication required
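One common approach (a sketch, not verified against OpenGPTs itself) is to embed the password directly in the `REDIS_URL` using the standard `redis://[user]:[password]@host:port/db` form, which Redis clients parse automatically. The host and password below are made up:

```python
from urllib.parse import urlparse

# Hypothetical Redis Cloud endpoint with credentials embedded in the URL
REDIS_URL = "redis://default:s3cret-password@redis-12345.example.redislabs.com:12345/0"

# Redis client libraries read username/password/host/port from these fields
parts = urlparse(REDIS_URL)
print(parts.username, parts.password, parts.port)
```

If the client still reports `Authentication required`, double-check that the URL the application actually reads is the one with the credentials in it.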

"AI Safety" Regulatory Compliance

There needs to be something in the README about regulatory compliance.

Real "AI Safety" is traditional cybersecurity.

Hopefully in the near future there will be a @tinygrad library of simple, formally verified parsers around the LLM process. That is true AI safety. As a bonus, you can use Valiant's result from the 1970s that CFG parsing reduces to Boolean matrix multiplication, and use TinyGrad for parsing. Boolean MATMUL is the bottleneck: https://www.cs.cornell.edu/home/llee/talks/bmmtalk.pdf

How should this messaging be reflected in the README to help users defend themselves from predatory "AI Safety" companies extorting them?

This is what LLM proposed:

The Truth About "AI Safety"

  • LLMs Are Autocomplete: See PicoGPT - you can train an LLM in one page of Python given the US Library of Congress on a USB stick. They are shocking because of their speed at searching library knowledge, putting advanced concepts within reach of the common man.
  • Direct Threat Acknowledgement: All hardware is vulnerable to attacks, whether from humans or automated bots. Real safety is traditional cybersecurity best practices.
  • AI Safety as Censorship: Corporations and governments misuse "AI safety" as pretext for technology hoarding, censorship, and maintaining power.
  • Extortion in Disguise: Many companies claim to sell "AI safety" while manipulating fears for profit, effectively running extortion rackets against lawful users of LLMs.

@oliviazhu - your input would be helpful. I want this project to have FedRAMP certified artifacts using already approved AWSLinux/RHEL packages.

Support local LLMs

Adding support for local models (ex. through llama.cpp) would make this project even more impactful. Many local models, especially at high parameter counts, come pretty close to ChatGPT 3.5 Turbo, so I feel that model performance would not be too large of an issue.

Query Regarding Persistent Storage Configuration with Redis

Dear OpenGPTs Contributors,

I trust this message finds you in good health and spirits. I have recently embarked upon integrating the persistent storage mechanism for the OpenGPTs project using Redis. My endeavours have led me to a rather perplexing situation that I hope to elucidate with your assistance.

While setting up the persistence layer as delineated within the 'Quickstart' section of the README, I have meticulously assigned a REDIS_URL to my environment variable. However, I am encountering difficulties when attempting to persist the agent configurations and the conversational history. Would you kindly provide further clarification or potentially a step-by-step walkthrough that could assist in the rectification of this predicament?

Additionally, if there are any common pitfalls or considerations that might typically elude one's attention during this process, your sharing of such wisdom would be immensely appreciated.

Furthermore, as part of continuing development, are there any alternate configurations or best practices that the community recommends for optimal utilisation of Redis in the context of OpenGPTs? Insights into possible future enhancements or feature additions to the persistence layer would also be most intriguing.

I commend your efforts in pioneering such an accessible and modifiable solution as OpenGPTs and eagerly anticipate your valuable guidance in navigating this technical challenge.

Yours sincerely,
yihong1120

422 Unprocessable Entity

The frontend operation did not respond, and the backend reported errors:


Where can I check the detailed logs?

Latest opengpts version.

API - https://fireworks.ai/

Hello,

I wanted to use the fireworks.ai API, but I couldn't figure out how to do it.

Is it possible?

If yes, how?

Thanks for your help
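One possible route (an assumption, not verified against OpenGPTs itself): Fireworks exposes an OpenAI-compatible endpoint, so clients that honor the `OPENAI_API_BASE` environment variable can be pointed at it. The key value is a placeholder:

```shell
# Point OpenAI-compatible clients at Fireworks' inference endpoint
export OPENAI_API_BASE="https://api.fireworks.ai/inference/v1"
# Placeholder -- use your actual Fireworks API key
export OPENAI_API_KEY="fw-your-key-here"
```

Whether OpenGPTs itself respects `OPENAI_API_BASE` for every model path would need to be confirmed in the backend code.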

LM Studio

"Can I use LM Studio instead of the OpenAI API, Claude2, and others?"

Retriever finds nothing in uploaded documents

Even with questions that point directly to a passage in an uploaded document (a transcript of a webinar), the retriever (the only activated tool) returns only the first n documents of the two-hour transcript and hence cannot find any information. The transcript (and my questions) are in German. Is the embedding unable to work with non-English text?

Also, although only the retrieval tool was activated, I had to instruct the GPT in the system message to use only data from the uploaded context. Without that, it answered (badly) from world knowledge.

feat: add support for lqml

The implementation of LMQL support could provide significant benefits in terms of fine-grained control and constraint-guided outputs:

  1. Fine-Grained Control: LMQL's ability to interweave traditional programming with LLM calls can give users more control over the AI's behavior. This can be particularly useful in scenarios where fine-grained control over the AI's responses is required. For instance, in OpenAI's GPTs feature, users can customize ChatGPT for a specific purpose. However, the customization options are less extensive and flexible than what LMQL can offer. With LMQL, users can express programs containing traditional algorithmic logic and LLM calls. This means users can guide the AI's reasoning process more effectively, leading to more accurate and relevant responses. Moreover, during execution, users can prompt an LLM on program variables in combination with standard natural language prompting to leverage model reasoning capabilities in the context of their program. This level of control could be especially beneficial for power users who maintain a list of carefully crafted prompts and instruction sets.

  2. Constraint-Guided Outputs: LMQL's where keyword allows users to specify constraints and data types of the generated text. This can help guide the AI's reasoning process and constrain its outputs, leading to more accurate and relevant responses. While users can customize ChatGPT to fit specific ways they use it with the GPTs feature, the customization isn't as precise or constraint-guided as what LMQL can offer. With LMQL, users can specify constraints and data types of the generated text, enabling guidance of the model’s reasoning process and constraining intermediate outputs using an expressive constraint language. This could be particularly useful when users need AI to generate outputs that adhere to specific rules or formats.

  3. Chains of Thought: LMQL can be used to easily implement a Chain-of-Thought prompting approach. Besides boosting LLM's performance at complex arithmetic, commonsense, and symbolic reasoning tasks, it also allows us to intuitively debug LLM's reasoning deficits.

Project repository: https://github.com/eth-sri/lmql

[vite] http proxy error

I got a bunch of errors after trying to test OpenGPTs.
Note I'm on Windows 10
Langchain: 0.0.335
Langserve: 0.0.26

The error I got:

  VITE v4.5.0  ready in 674 ms

  ➜  Local:   http://localhost:5173/
  ➜  Network: use --host to expose
  ➜  press h to show help
18:07:22 [vite] http proxy error at /config_schema:
Error: connect ECONNREFUSED 127.0.0.1:8100
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
18:07:22 [vite] http proxy error at /input_schema:
Error: connect ECONNREFUSED 127.0.0.1:8100
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
18:07:22 [vite] http proxy error at /threads/:
Error: connect ECONNREFUSED 127.0.0.1:8100
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
18:07:22 [vite] http proxy error at /assistants/:
Error: connect ECONNREFUSED 127.0.0.1:8100
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
18:07:22 [vite] http proxy error at /assistants/public/:
Error: connect ECONNREFUSED 127.0.0.1:8100
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
18:08:08 [vite] http proxy error at /config_schema:
Error: connect ECONNREFUSED 127.0.0.1:8100
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
18:08:08 [vite] http proxy error at /input_schema:
Error: connect ECONNREFUSED 127.0.0.1:8100
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
18:08:08 [vite] http proxy error at /threads/:
Error: connect ECONNREFUSED 127.0.0.1:8100
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
18:08:08 [vite] http proxy error at /assistants/:
Error: connect ECONNREFUSED 127.0.0.1:8100
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)
18:08:08 [vite] http proxy error at /assistants/public/:
Error: connect ECONNREFUSED 127.0.0.1:8100
    at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1494:16)

The frontend loads but doesn't work, showing a bunch of 500s and other errors in Chrome's Dev Tools.

Docker Compose Not Working

[jerry@jerry opengpts]$ sudo docker compose up
[+] Running 21/21
 ✔ redis 20 layers [⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿]      0B/0B      Pulled                                                                                   32.7s 
   ✔ 9d21b12d5fab Pull complete                                                                                                                     6.7s 
   ✔ a457ec9ff352 Pull complete                                                                                                                     0.9s 
   ✔ 9bd73ed4d8b9 Pull complete                                                                                                                     0.9s 
   ✔ ed023719933c Pull complete                                                                                                                     1.9s 
   ✔ 5fde52d01b18 Pull complete                                                                                                                     4.6s 
   ✔ ae242a47a353 Pull complete                                                                                                                     3.3s 
   ✔ 9bbff103df04 Pull complete                                                                                                                     4.2s 
   ✔ 9f656dab9f4b Pull complete                                                                                                                     9.5s 
   ✔ 28220e5d06cf Pull complete                                                                                                                     6.2s 
   ✔ 4f4fb700ef54 Pull complete                                                                                                                     7.1s 
   ✔ 3cd790c05dfb Pull complete                                                                                                                     8.7s 
   ✔ 4477b6e2ca91 Pull complete                                                                                                                    10.0s 
   ✔ 2d0ae6aa5159 Pull complete                                                                                                                    10.5s 
   ✔ 20e9e43163d0 Pull complete                                                                                                                    11.9s 
   ✔ d2299cfde76f Pull complete                                                                                                                    10.9s 
   ✔ a7da2faca65f Pull complete                                                                                                                    13.4s 
   ✔ 81705345a8e5 Pull complete                                                                                                                    11.8s 
   ✔ ae986cff44af Pull complete                                                                                                                    13.5s 
   ✔ d597d5e736a1 Pull complete                                                                                                                    12.9s 
   ✔ 816b7973a3f3 Pull complete                                                                                                                    26.7s 
[+] Building 4.9s (11/11) FINISHED                                                                                                        docker:default
 => [backend internal] load build definition from Dockerfile                                                                                        0.4s
 => => transferring dockerfile: 253B                                                                                                                0.0s
 => [backend internal] load .dockerignore                                                                                                           0.5s
 => => transferring context: 2B                                                                                                                     0.0s
 => [backend internal] load metadata for docker.io/library/python:3.11                                                                              3.6s
 => [backend 1/7] FROM docker.io/library/python:3.11@sha256:9e00960bde4d9aafdcbf2f0fc5b31b15e1824fc795fd9b472717d085b59cf07b                        0.5s
 => => resolve docker.io/library/python:3.11@sha256:9e00960bde4d9aafdcbf2f0fc5b31b15e1824fc795fd9b472717d085b59cf07b                                0.5s
 => [backend internal] load build context                                                                                                           0.4s
 => => transferring context: 2.81MB                                                                                                                 0.0s
 => CACHED [backend 2/7] RUN apt-get install -y libmagic1                                                                                           0.0s
 => CACHED [backend 3/7] WORKDIR /backend                                                                                                           0.0s
 => CACHED [backend 4/7] COPY ./backend .                                                                                                           0.0s
 => CACHED [backend 5/7] RUN rm poetry.lock                                                                                                         0.0s
 => CACHED [backend 6/7] RUN pip install .                                                                                                          0.0s
 => ERROR [backend 7/7] COPY ./frontend/dist ./ui                                                                                                   0.0s
------
 > [backend 7/7] COPY ./frontend/dist ./ui:
------
failed to solve: failed to compute cache key: failed to calculate checksum of ref 21fc8804-6580-40a2-80eb-a48136143dd8::2uzni64j6vki8mknqop8k4dkz: "/frontend/dist": not found
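A likely fix, assuming the frontend is a standard Node project: the backend Dockerfile `COPY`s `./frontend/dist`, so that directory must exist before the backend image is built. The exact package manager and script names are assumptions:

```shell
# Build the frontend bundle so frontend/dist exists before the Docker build
cd frontend
npm install        # or `yarn`, depending on the repo's lockfile
npm run build      # assumed to produce frontend/dist
cd ..
docker compose up --build
```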

Why it requires always an OPENAI_API_KEY and does not switch zu Open-AI-Azure

  1. If I start the project out of the box with a project-tailored .env file that has no "OPENAI_API_KEY", it always requires the key while booting the project. As a workaround, giving it a dummy value gets past that step: "OPENAI_API_KEY: your_secret_key_here".

  2. After creating a bot and selecting GizmoAgentType="GPT 4 (Azure OpenAI)" via the GUI, see below for part of the JSON payload sent by the client for the configured agent:

configurable: {
    agent_type: "GPT 4 (Azure OpenAI)",
    system_message: "You are a helpful assistant for our customer service team to retrieve product data information"
}

It still requires the OpenAI key:

openai.error.AuthenticationError: Incorrect API key provided: my_dummy_key. You can find your API key at https://platform.openai.com/account/api-keys. backend_1

The Azure OpenAI .env variables somehow do not get sourced:

AZURE_OPENAI_DEPLOYMENT_NAME: your_secret_here
AZURE_OPENAI_API_BASE: your_secret_here
AZURE_OPENAI_API_VERSION: your_secret_here
AZURE_OPENAI_API_KEY: your_secret_here

bug? in `gizmo_agent`

I'm getting this error on a test [tests/unit_tests/test_imports.py:28 (test_gizmo_agent)].
It could be a failure in my environment (Python 3.10.12).
Maybe it's because gizmo_agent requires Python 3.11 (reference).

FAILED [ 75%]
tests/unit_tests/test_imports.py:28 (test_gizmo_agent)
def test_gizmo_agent() -> None:
    """Test gizmo agent."""
    # Shallow test to verify that the code can be imported
    with MonkeyPatch.context() as mp:
        mp.setenv("OPENAI_API_KEY", "no_such_key")

        import gizmo_agent  # noqa: F401

test_imports.py:34:


../../packages/gizmo-agent/gizmo_agent/__init__.py:2: in <module>
    from gizmo_agent.main import agent
../../packages/gizmo-agent/gizmo_agent/main.py:3: in <module>
    from agent_executor.checkpoint import RedisCheckpoint
../../packages/agent-executor/agent_executor/checkpoint.py:10: in <module>
    from permchain.checkpoint.base import BaseCheckpointAdapter
/home/leo/.cache/pypoetry/virtualenvs/opengpts-CV_Z5uVz-py3.10/lib/python3.10/site-packages/permchain/__init__.py:2: in <module>
    from permchain.pregel import Channel, Pregel, ReservedChannels
/home/leo/.cache/pypoetry/virtualenvs/opengpts-CV_Z5uVz-py3.10/lib/python3.10/site-packages/permchain/pregel/__init__.py:56: in <module>
    from permchain.pregel.reserved import ReservedChannels

    from enum import StrEnum
E   ImportError: cannot import name 'StrEnum' from 'enum' (/usr/lib/python3.10/enum.py)

/home/leo/.cache/pypoetry/virtualenvs/opengpts-CV_Z5uVz-py3.10/lib/python3.10/site-packages/permchain/pregel/reserved.py:1: ImportError
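The root cause is that `enum.StrEnum` was only added in Python 3.11, which is why the import fails on 3.10. For context, a common compatibility pattern looks like this (a sketch of the general technique, not the actual permchain fix; the enum name is borrowed from the traceback):

```python
import sys
from enum import Enum

if sys.version_info >= (3, 11):
    from enum import StrEnum  # real StrEnum on 3.11+
else:
    class StrEnum(str, Enum):
        """Minimal stand-in for Python 3.11's enum.StrEnum."""
        def __str__(self) -> str:
            return str(self.value)

class ReservedChannels(StrEnum):
    IS_LAST_STEP = "is_last_step"

print(str(ReservedChannels.IS_LAST_STEP))  # is_last_step
```

In practice, though, upgrading to Python 3.11 (as the project requires) is the simpler fix.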

An error has occurred. Please try again.

Hello, I tried running OpenGPTs on both of my PCs, but I am still getting this error when writing a chat message to my bot: "An error has occurred. Please try again." I'm still a beginner, so please excuse me if this is a simple issue.

I am not sure if this is related to the issue, but this is what I find in my cmd window running the backend server:

""POST /runs/stream HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application"
.
.
.
"TypeError: BaseModel.validate() takes 2 positional arguments but 3 were given"

Tried a bot without configuring anything, just simply gave it a name and saved.

Anyone else get this? Any help would be appreciated! Please let me know if any other information would help in diagnosing this issue, Thanks!

Is the `YDC` environment variable required? How do I disable it?

I installed the requirements and added an OpenAI key and a Redis URL, and now I'm getting a ydc_api_key not found error. How can I disable it?

Did not find ydc_api_key, please add an environment variable `YDC_API_KEY` which contains it, or pass  `ydc_api_key` as a named parameter. (type=value_error)

How can I add custom tools to my assistant?

Hey everyone,

TLDR;
I am trying to add my own tool to an OpenGPT. How can I achieve that?

Details explanation
I tried adding it to main.py in gizmo-agent:

ConfigurableAgent(
        agent=GizmoAgentType.GPT_35_TURBO,
        tools=[AvailableTools.MY_TOOL],
        system_message=DEFAULT_SYSTEM_MESSAGE,
        assistant_id=None,
    )

and declare it in tools.py:

MY_TOOL = "My Tool"
class MyTool(BaseTool):
    name = "My tool"
    description = "Useful when the user wants to write an essay"

    def _run(self, query: str) -> str:
        """Use the tool."""
        return "Once upon a time, ..."

    async def _arun(
        self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("My tool does not support async")

I created the assistant with adding "My tool" in the array:

POST http://127.0.0.1:8100/assistants

{
    "name": "Chatbot Name",
    "config": {
        "configurable": {
            "agent_type": "GPT 3.5 Turbo",
            "system_message": "You are a helpful assistant",
            "tools": ["My tool"]
        }
    },
    "public": true
}

This is the error log I get on the backend:

opengpts-backend-1   | Traceback (most recent call last):
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
opengpts-backend-1   |     result = await app(  # type: ignore[func-returns-value]
opengpts-backend-1   |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
opengpts-backend-1   |     return await self.app(scope, receive, send)
opengpts-backend-1   |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 292, in __call__
opengpts-backend-1   |     await super().__call__(scope, receive, send)
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
opengpts-backend-1   |     await self.middleware_stack(scope, receive, send)
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
opengpts-backend-1   |     raise exc
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
opengpts-backend-1   |     await self.app(scope, receive, _send)
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
opengpts-backend-1   |     raise exc
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
opengpts-backend-1   |     await self.app(scope, receive, sender)
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
opengpts-backend-1   |     raise e
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
opengpts-backend-1   |     await self.app(scope, receive, send)
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
opengpts-backend-1   |     await route.handle(scope, receive, send)
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
opengpts-backend-1   |     await self.app(scope, receive, send)
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 69, in app
opengpts-backend-1   |     await response(scope, receive, send)
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/starlette/responses.py", line 174, in __call__
opengpts-backend-1   |     await self.background()
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/starlette/background.py", line 43, in __call__
opengpts-backend-1   |     await task()
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/starlette/background.py", line 26, in __call__
opengpts-backend-1   |     await self.func(*self.args, **self.kwargs)
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/langchain/schema/runnable/base.py", line 2798, in ainvoke
opengpts-backend-1   |     return await self.bound.ainvoke(
opengpts-backend-1   |            ^^^^^^^^^^^^^^^^^^^^^^^^^
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/langchain/schema/runnable/configurable.py", line 89, in ainvoke
opengpts-backend-1   |     return await self._prepare(config).ainvoke(input, config, **kwargs)
opengpts-backend-1   |                  ^^^^^^^^^^^^^^^^^^^^^
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/langchain/schema/runnable/configurable.py", line 253, in _prepare
opengpts-backend-1   |     configurable_multi_options = {
opengpts-backend-1   |                                  ^
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/langchain/schema/runnable/configurable.py", line 254, in <dictcomp>
opengpts-backend-1   |     k: [
opengpts-backend-1   |        ^
opengpts-backend-1   |   File "/usr/local/lib/python3.11/site-packages/langchain/schema/runnable/configurable.py", line 255, in <listcomp>
opengpts-backend-1   |     v.options[o]
opengpts-backend-1   |     ~~~~~~~~~^^^
opengpts-backend-1   | KeyError: 'My tool'

Any ideas?
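A guess at what's happening, based on the traceback rather than the actual OpenGPTs source: configurable multi-options resolve each requested value through a dict of registered options, so defining `MyTool` in tools.py is not enough; it must also be registered under the exact string the API payload sends. All names below are hypothetical stand-ins:

```python
# Stand-ins for real tool instances; the point is the dict keys.
TOOL_OPTIONS = {
    "DDG Search": object(),
    "My tool": object(),   # <-- the missing registration causing KeyError: 'My tool'
}

def resolve_tools(requested_names):
    # Mirrors the list comprehension in the traceback: a name that was never
    # registered in the options mapping raises KeyError at request time.
    return [TOOL_OPTIONS[name] for name in requested_names]

tools = resolve_tools(["My tool"])
print(len(tools))  # 1
```

So in addition to adding `MY_TOOL` to `AvailableTools` and defining the class, check that the tool is added to whatever mapping the configurable agent uses to enumerate its options.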

JSLinux as CodeInterpreter sandbox

https://bellard.org/jslinux/ - yes, curl works. It needs re-packaging so a WASM GPU LLM can talk to JSLinux entirely browser-side.

100% should run in the browser.

If you want CodeInterpreter parity it should load Ubuntu 20.04 amd64; each LLM session times out after 60 seconds and is run in a new iPython notebook. This is for security against LLM attacks on the sandbox. Lock it down to whitelisted URIs - for now I would whitelist HTTP/HTTPS GET *

Retriever returning an empty list "[]"

Hello guys! My retriever is not fetching anything for my agent. Would really appreciate it if any of you have an idea on how to overcome this issue.

This image shows the Retriever output (an empty list).

This is my backend server, signalling everything is running OK:

Please let me know if any more information is required, thank you!

Add LlamaHub Tools?

Hi,

I want to add some tools from LlamaIndex/LlamaHub. I assume they should be added to the TOOLS array in backend/packages/gizmo-agent/gizmo_agent/tools.py, but I'm just not sure exactly how.

For example, I want to add the Wikipedia Tool. Can you please show an example of how to do this?

Thanks in advance,

Is parallel function calling supported

For example

1. User: Tell me the difference in temperature from Sydney and Melbourne
-- Parallel Function Call --
2. AI: get_weather("melbourne")
3. AI: get_weather("sydney")
--
4. System/Developer: Responds with the weather data for each city
5. AI: It is 5 Celsius cooler in Sydney 

This has quite large benefits and enables far more complex abilities and behaviours
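For reference, this is the message shape OpenAI's parallel tool calling produces with the tools API: a single assistant turn carries several `tool_calls`, and each one gets its own `tool` reply matched by `tool_call_id`. The weather function and temperatures below are illustrative:

```python
import json

assistant_msg = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "get_weather", "arguments": json.dumps({"city": "sydney"})}},
        {"id": "call_2", "type": "function",
         "function": {"name": "get_weather", "arguments": json.dumps({"city": "melbourne"})}},
    ],
}

FAKE_WEATHER = {"sydney": 13, "melbourne": 18}  # made-up data

# One "tool" message per parallel call, keyed back by tool_call_id
tool_replies = [
    {"role": "tool",
     "tool_call_id": call["id"],
     "content": json.dumps(
         {"temp_c": FAKE_WEATHER[json.loads(call["function"]["arguments"])["city"]]})}
    for call in assistant_msg["tool_calls"]
]

print(len(tool_replies))  # 2 -- one reply per parallel call
```

Supporting this in OpenGPTs would mean the agent executor loops over all `tool_calls` in a turn instead of assuming a single function call.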

No module named 'langserve.packages'

Has anyone encountered this problem before? It seems like the langchain and langserve versions don't match.

My lang-* library versions are (Python 3.11, CentOS 8):
(py311) [root@VM-4-3-centos backend]# pip3 list |grep lang
langchain 0.0.335
langchain-cli 0.0.15
langchain-experimental 0.0.37
langdetect 1.0.9
langserve 0.0.30
langsmith 0.0.66

(py311) [root@VM-4-3-centos backend]# langchain serve --port=8100
Traceback (most recent call last):
File "/root/miniconda3/envs/py311/bin/langchain", line 5, in <module>
    from langchain_cli.cli import app
File "/root/miniconda3/envs/py311/lib/python3.11/site-packages/langchain_cli/cli.py", line 6, in <module>
    from langchain_cli.namespaces import app as app_namespace
File "/root/miniconda3/envs/py311/lib/python3.11/site-packages/langchain_cli/namespaces/app.py", line 12, in <module>
    from langserve.packages import get_langserve_export
ModuleNotFoundError: No module named 'langserve.packages'
(py311) [root@VM-4-3-centos backend]#

ERROR: Exception in ASGI application

Hi opengpts crew.

I have tried getting the demo up and running locally and continuously run into this error after entering a chat message. It seems like there is an argument overflow in the pydantic call, and I have no clue where to fix it.

Thanks in advance.

```
INFO:     127.0.0.1:58565 - "POST /runs/stream HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/opt/homebrew/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/opt/homebrew/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "/opt/homebrew/lib/python3.11/site-packages/fastapi/applications.py", line 1115, in __call__
    await super().__call__(scope, receive, send)
  File "/opt/homebrew/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/opt/homebrew/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/opt/homebrew/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/opt/homebrew/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/opt/homebrew/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/opt/homebrew/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
    raise e
  File "/opt/homebrew/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "/opt/homebrew/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/opt/homebrew/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/opt/homebrew/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
    response = await func(request)
  File "/opt/homebrew/lib/python3.11/site-packages/fastapi/routing.py", line 264, in app
    solved_result = await solve_dependencies(
  File "/opt/homebrew/lib/python3.11/site-packages/fastapi/dependencies/utils.py", line 620, in solve_dependencies
    ) = await request_body_to_args(  # body_params checked above
  File "/opt/homebrew/lib/python3.11/site-packages/fastapi/dependencies/utils.py", line 750, in request_body_to_args
    v_, errors_ = field.validate(value, values, loc=loc)
  File "/opt/homebrew/lib/python3.11/site-packages/fastapi/_compat.py", line 125, in validate
    self._type_adapter.validate_python(value, from_attributes=True),
  File "/opt/homebrew/lib/python3.11/site-packages/pydantic/type_adapter.py", line 283, in validate_python
    return self.validator.validate_python(__object, strict=strict, from_attributes=from_attributes, context=context)
  File "/opt/homebrew/lib/python3.11/site-packages/pydantic/_internal/_validators.py", line 34, in sequence_validator
    v_list = validator(__input_value)
TypeError: BaseModel.validate() takes 2 positional arguments but 3 were given
```
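The final TypeError is a call-signature mismatch: something is invoking `validate()` with more positional arguments than the installed pydantic's `BaseModel.validate` accepts, which typically points at a pydantic v1/v2 version skew among fastapi, pydantic, and the app's own models. The same failure mode can be reproduced in a few lines of pure Python (the class below is a stand-in, not pydantic itself):

```python
class BaseModel:
    """Stand-in whose classmethod validate takes cls plus exactly one value."""
    @classmethod
    def validate(cls, value):
        return value

try:
    # A caller built against a different signature passes an extra positional arg.
    BaseModel.validate({"input": "hi"}, {})
except TypeError as exc:
    print(exc)  # → BaseModel.validate() takes 2 positional arguments but 3 were given
```

So the fix is usually not in the request handler at all, but in aligning the pydantic major version that fastapi expects with the one the app's models were written for.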
