
archytas's Introduction

Archytas: A Tools Interface for AI Agents

Implementation of the ReAct (Reason & Action) framework for Large Language Model (LLM) agents. Mainly targeting OpenAI's GPT-4.

Easily create tools from simple Python functions or classes with the @tool decorator. A tools list can then be passed to the ReActAgent, which will automagically generate a prompt for the LLM containing usage instructions for each tool, as well as manage the ReAct decision loop while the LLM performs its task.
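For example, a minimal sketch of a custom tool (the import path is an assumption, and the function, its name, and its docstring are purely illustrative):

from archytas.tool_utils import tool  # import path is an assumption

@tool()
def fahrenheit_to_celsius(temp_f: float) -> float:
    """
    Convert a temperature from Fahrenheit to Celsius.

    Args:
        temp_f (float): Temperature in degrees Fahrenheit.

    Returns:
        float: Temperature in degrees Celsius.
    """
    return (temp_f - 32) * 5 / 9

A list containing this function could then be passed to the ReActAgent via its tools argument, just like the built-in tools.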

Tools can be anything from internet searches to custom interpreters for your domain. Archytas provides a few built-in demo tools, e.g. a datetime tool, Fibonacci numbers, and a simple calculator.

Demo

A short demo of using the PythonTool to download a COVID-19 dataset and perform some basic processing, visualization, and analysis.

Click to watch the original demo video on YouTube.

Quickstart

# make sure poetry is installed
pip install poetry

# clone and install
git clone git@github.com:jataware/archytas.git
cd archytas
poetry install

# make sure OPENAI_API_KEY var is set
# or pass it in as an argument to the agent
export OPENAI_API_KEY="sk-..."

# run demo
poetry run chat-repl

Simple Usage

Import pre-made tools from the tools module

from archytas.react import ReActAgent, FailedTaskError
from archytas.tools import PythonTool

from easyrepl import REPL

# create the agent with the tools list
some_tools = [PythonTool]  # plus any other tools you want the agent to have
agent = ReActAgent(tools=some_tools, verbose=True)

# REPL to interact with agent
for query in REPL():
    try:
        answer = agent.react(query)
        print(answer)
    except FailedTaskError as e:
        print(f"Error: {e}")

Documentation

See the wiki docs for details.


archytas's Issues

Rename 'master' branch to 'main'

It would be better to stay consistent with the majority of projects and with developer expectations. Using master gives this repo an antiquated/outdated feel.

Summarize last tool's output as part of react loop

As the ReAct loop "thoughts" are being shared, the agent could summarize the previous tool's output as part of its thought and return that summary along with the thoughts. That would really help with transparency without cluttering up the human chat with too many details.
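A rough sketch of what this could look like inside the loop (summarize_tool_output, the agent.query call, and the thought format below are all hypothetical, not the current archytas implementation):

# hypothetical helper: produce a one-line summary of a tool's output for the thought stream
def summarize_tool_output(agent, tool_name: str, output: str) -> str:
    # 'agent.query' is a placeholder for whatever internal LLM call archytas exposes
    prompt = f"In one sentence, summarize what the '{tool_name}' tool returned:\n{output}"
    return agent.query(prompt)

# inside the react loop, after a tool runs (sketch only):
#   summary = summarize_tool_output(self, tool_name, tool_output)
#   thought = f"The '{tool_name}' tool returned: {summary}"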

Reliably use tools defined inside of Agents

When a tool is defined inside of an agent, it seems to be called incorrectly.

The specific case I ran into:

 Error while handling message llm_request in function llm_request
jupyter-1   | Traceback (most recent call last):
jupyter-1   |   File "/home/jupyter/.local/lib/python3.10/site-packages/archytas/react.py", line 204, in react_async
jupyter-1   |     tool_fn = self.tools[tool_name]
jupyter-1   | KeyError: 'search_installed_packages'
jupyter-1   | 
jupyter-1   | During handling of the above exception, another exception occurred:
jupyter-1   | 
jupyter-1   | Traceback (most recent call last):
jupyter-1   |   File "/home/jupyter/.local/lib/python3.10/site-packages/beaker_kernel/lib/utils.py", line 30, in wrapper
jupyter-1   |     result = await fn(self, message)
jupyter-1   |   File "/home/jupyter/.local/lib/python3.10/site-packages/beaker_kernel/kernel.py", line 440, in llm_request
jupyter-1   |     result = await self.context.agent.react_async(request, react_context={"message": message})
jupyter-1   |   File "/home/jupyter/.local/lib/python3.10/site-packages/archytas/react.py", line 206, in react_async
jupyter-1   |     action_str = await self.error(f'Unknown tool "{tool_name}"\nAvailable tools: {", ".join(self.tools.keys())}')
jupyter-1   |   File "/home/jupyter/.local/lib/python3.10/site-packages/archytas/react.py", line 302, in error
jupyter-1   |     raise FailedTaskError(f"Too many errors during task. Last error: {mesg}")
jupyter-1   | archytas.react.FailedTaskError: Too many errors during task. Last error: Unknown tool "search_installed_packages"
jupyter-1   | Available tools: Agent.ask_user, Agent.get_available_functions, Agent.get_functions_docstring, Agent.retrieve_documentation_for_module, Agent.search_installed_packages, Agent.submit_code

This agent class has a tool defined inside of it like this:

@tool()
def search_installed_packages(self, code:str) -> str:
....

and it shows up in the list of available functions. However, the agent is unable to call the tool in the action loop.
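From the traceback, the tool appears to be registered under a class-prefixed key (Agent.search_installed_packages) while the LLM calls it by its bare name (search_installed_packages). A minimal sketch of a lookup that would tolerate that mismatch (the dictionary below mirrors the names in the traceback; the fallback logic is hypothetical, not current archytas behavior):

# tool registry keyed the way the traceback reports it
tools = {"Agent.search_installed_packages": lambda code: "..."}

def lookup_tool(tools: dict, tool_name: str):
    # exact match first (this is effectively the current behavior)
    if tool_name in tools:
        return tools[tool_name]
    # hypothetical fallback: accept a bare name if exactly one key ends with it
    matches = [key for key in tools if key.split(".")[-1] == tool_name]
    if len(matches) == 1:
        return tools[matches[0]]
    raise KeyError(tool_name)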

Allow use of gpt-4 turbo.

gpt-4 turbo outputs JSON wrapped in markdown given the current prompt. We can either filter the response or use OpenAI's official response-format enforcement:

completion = openai.chat.completions.create(
    model='gpt-4-1106-preview',
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python? Answer in json",
        },
    ],
)

completion.choices[0].message.content
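For the filtering alternative, a minimal sketch of stripping a markdown code fence from the model's reply before JSON parsing (the assumption here is that the wrapping looks like a ```json fenced block):

import json
import re

def strip_json_fences(text: str) -> str:
    # remove a ```json ... ``` (or plain ``` ... ```) fence if one is present
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    return match.group(1) if match else text

raw = '```json\n{"files": ["a.py", "b.py"]}\n```'
parsed = json.loads(strip_json_fences(raw))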

Runtime prompt/manual lookup to save context length

Problem

There needs to be some way for the LLM to look up documentation for a particular tool at runtime rather than including all the information in the initial prompt. When there are lots of tools, or the tools have complex behavior with long prompts, this takes up a lot of the limited input context available in GPT-4 (currently 8k tokens).

The current approach of stuffing everything into the initial prompt has the following drawbacks:

  • expensive (OpenAI charges per token)
  • can confuse the LLM. If there are too many tools with long descriptions, the LLM may be less likely to hone in on the section it needs to complete its current task
  • decreases the number of interactions the user can have with the agent. GPT-4/etc. have a finite input length, so the longer the initial prompt, the less space is left for the user's actual conversation with the LLM

Approaches / Open Questions

  • have a manual tool that the LLM can call with the name of a tool it wants more info on (see the sketch after this list)
    • where does the more info live?
    • how does the prompt generation know to add a note that more info can be looked up? (don't want to require the user to manually say that more info can be looked up)
  • what about the PythonTool or similar? We want to be able to pass in local variables whose values the model could presumably look up. This is semi-handled by the pyman tool: https://github.com/jataware/archytas/blob/master/archytas/tools.py#L134, though the prompt generated for the PythonTool doesn't update to indicate that the locals exist, or that information about them can be looked up
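A minimal sketch of the manual tool mentioned in the first bullet (the tool name, the TOOL_DOCS registry, and the import path are all hypothetical; archytas does not currently ship this):

from archytas.tool_utils import tool  # import path is an assumption

# hypothetical registry mapping tool names to their full documentation strings,
# populated at prompt-generation time instead of being inlined into the prompt
TOOL_DOCS = {
    "PythonTool.run": "Runs python code in a persistent python environment. ...",
}

@tool()
def manual(tool_name: str) -> str:
    """
    Look up the full documentation for a tool by name.

    Args:
        tool_name (str): Name of the tool to look up.

    Returns:
        str: The tool's full documentation, or a not-found message.
    """
    return TOOL_DOCS.get(tool_name, f"No documentation found for '{tool_name}'.")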

Runtime prompt modification

Problem

Presently, the prompts generated for an agent are static, and based solely on the docstrings/type signatures of the tools passed into the ReActAgent class. E.g.

from archytas.tools import PythonTool
from archytas.react import ReActAgent

agent = ReActAgent(tools=[PythonTool], verbose=True)
print(agent.prompt)

This generates the following (abridged) prompt:

You are the ReAct (Reason & Action) assistant. ...

# Tools
You have access to the following tools which can help you in your job:

PythonTool (class):
    Tool for running python code. If the user asks you to write code, you can run it here.

    methods:
        run:
            Runs python code in a python environment.

            The environment is persistent between runs, so any variables created will be available in subsequent runs.
            The only visible effects of this tool are from output to stdout/stderr. If you want to view a result, you MUST print it.

            _input_: (str) The code to run
            _output_: (str) The stdout output of the code

ask_user:
    Ask the user a question and get their response. 

    You should ask the user a question if you do not have enough information to complete the task, and there is no suitable tool to help you.

    _input_: (str) The question to ask the user
    _output_: (str) The user's response

final_answer:
    the final_answer tool is used to indicate that you have completed the task. You should use this tool to communicate the final answer to the user.
    _input_: the final answer to the user's task

fail_task
    the fail_task tool is used to indicate that you have failed to complete the task. You should use this tool to communicate the reason for the failure to the user. Do not call this tool unless you have given a good effort to complete the task.
    _input_: the reason for the failure


<other prompt information...>

This prompt gets generated when the agent is instantiated, and there is no way to adjust it after the fact. For the PythonTool specifically, it would be useful to be able to modify the description to indicate that prelude code was run or that extra local variables are available.

# make a tool instance with prelude code and variables
python = PythonTool(
    # code to run at the beginning
    prelude='import numpy as np\nfrom matplotlib import pyplot as plt',
    # variables to include in the environment 
    locals={'fib_n': fib_n, 'ModelSimulation': ModelSimulation, 'jackpot': jackpot, 'pyman': pyman},
)

agent = ReActAgent(tools=[python], verbose=True)

At present, there is no way to update the PythonTool's description to reflect that the environment has extra variables and has already run some code. The same mechanism is also relevant to managing context for a given tool.

There needs to be a good approach for adjusting a tool's prompt.

Possible approaches

  • a method that the user can call at any time to overwrite the current prompt. Perhaps the method gets attached to the @tool/@toolset wrappers that get generated, and sets a _prompt_override variable on the wrapper.
    • The system prompt could be regenerated on every call to the OpenAI API, and if the _prompt_override variable is present, it is used instead of the generated prompt
    • or an extra message gets inserted into the chat history that is sent to the LLM indicating the change.
  • @toolsets could allow for an optional generate_prompt method that would be called when prompts are being generated, and override the default generated prompt for the toolset (a sketch follows below).
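A hedged sketch of the generate_prompt idea from the last bullet (the @toolset decorator name comes from the bullets above; the generate_prompt hook, the import paths, and everything else here are hypothetical, not an existing archytas API):

from archytas.tool_utils import tool, toolset  # import paths are assumptions

@toolset()
class PythonTool:
    def __init__(self, prelude: str = "", locals: dict | None = None):
        self.prelude = prelude
        self.locals = locals or {}

    def generate_prompt(self) -> str:
        # hypothetical hook: called during prompt generation in place of the
        # default docstring-derived description for this toolset
        base = "Tool for running python code in a persistent environment."
        if self.prelude:
            base += f"\nThe following prelude code has already been run:\n{self.prelude}"
        if self.locals:
            base += f"\nThese variables are already defined: {', '.join(self.locals)}"
        return base

    @tool()
    def run(self, code: str) -> str:
        """
        Runs python code in a python environment.

        Args:
            code (str): The code to run

        Returns:
            str: The stdout output of the code
        """
        ...  # implementation elided in this sketch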
