phidatahq / phidata

Build AI Assistants with memory, knowledge and tools.

Home Page: https://docs.phidata.com

License: Mozilla Public License 2.0

Languages: Python 99.61%, Shell 0.28%, Batchfile 0.11%
Topics: developer-tools, python, aws, ai, llm, llmops, gpt-4

phidata's Introduction

phidata

Build AI Assistants with memory, knowledge and tools


What is phidata?

Phidata is a framework for building Autonomous Assistants (aka Agents) that have long-term memory, contextual knowledge and the ability to take actions using function calling.

Why phidata?

Problem: LLMs have limited context and cannot take actions.

Solution: Add memory, knowledge and tools.

  • Memory: Stores chat history in a database and enables LLMs to have long-term conversations.
  • Knowledge: Stores information in a vector database and provides LLMs with business context.
  • Tools: Enable LLMs to take actions like pulling data from an API, sending emails or querying a database.
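Taken together, these map to Assistant constructor parameters. A minimal composite sketch using only parameters demonstrated later on this page (the DuckDuckGo tool, PgVector knowledge base and Postgres storage are illustrative choices, not requirements):

from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo
from phi.knowledge.pdf import PDFUrlKnowledgeBase
from phi.vectordb.pgvector import PgVector2
from phi.storage.assistant.postgres import PgAssistantStorage

db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

assistant = Assistant(
    # Tools: functions the LLM can call to take actions
    tools=[DuckDuckGo()],
    # Knowledge: a vector db the LLM can search for context
    knowledge_base=PDFUrlKnowledgeBase(
        urls=["https://phi-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
        vector_db=PgVector2(collection="recipes", db_url=db_url),
    ),
    search_knowledge=True,
    # Memory: chat history persisted to a database (table name is illustrative)
    storage=PgAssistantStorage(table_name="assistant_runs", db_url=db_url),
)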

How it works

  • Step 1: Create an Assistant
  • Step 2: Add Tools (functions), Knowledge (vectordb) and Storage (database)
  • Step 3: Serve using Streamlit, FastAPI or Django to build your AI application (see the sketch below)
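As one hedged example of step 3, an Assistant can be wrapped in a FastAPI endpoint. The route and request shape here are illustrative; the only phidata API assumed is Assistant.run(..., stream=False) returning the full response, which is used the same way in the examples further down this page:

from fastapi import FastAPI
from pydantic import BaseModel
from phi.assistant import Assistant

app = FastAPI()
assistant = Assistant(description="You answer questions concisely.")

class Query(BaseModel):
    message: str

@app.post("/chat")
def chat(query: Query) -> dict:
    # stream=False returns the complete response instead of a generator
    response = assistant.run(query.message, stream=False)
    return {"response": response}

Run it with, for example, uvicorn app:app --reload (assuming the file is named app.py).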

Installation

pip install -U phidata

Quickstart: Assistant that can search the web

Create a file assistant.py

from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo

assistant = Assistant(tools=[DuckDuckGo()], show_tool_calls=True)
assistant.print_response("Whats happening in France?", markdown=True)

Install libraries, export your OPENAI_API_KEY and run the Assistant

pip install openai duckduckgo-search

export OPENAI_API_KEY=sk-xxxx

python assistant.py

Documentation and Support

Read the documentation at https://docs.phidata.com.

Examples

Assistant that can write and run python code


The PythonAssistant can achieve tasks by writing and running Python code.

  • Create a file python_assistant.py
from phi.assistant.python import PythonAssistant
from phi.file.local.csv import CsvFile

python_assistant = PythonAssistant(
    files=[
        CsvFile(
            path="https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
            description="Contains information about movies from IMDB.",
        )
    ],
    pip_install=True,
    show_tool_calls=True,
)

python_assistant.print_response("What is the average rating of movies?", markdown=True)
  • Install pandas and run the python_assistant.py file
pip install pandas

python python_assistant.py

Assistant that can analyze data using SQL


The DuckDbAssistant can perform data analysis using SQL.

  • Create a file data_assistant.py
import json
from phi.assistant.duckdb import DuckDbAssistant

duckdb_assistant = DuckDbAssistant(
    semantic_model=json.dumps({
        "tables": [
            {
                "name": "movies",
                "description": "Contains information about movies from IMDB.",
                "path": "https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
            }
        ]
    }),
)

duckdb_assistant.print_response("What is the average rating of movies? Show me the SQL.", markdown=True)
  • Install duckdb and run the data_assistant.py file
pip install duckdb

python data_assistant.py

Assistant that can generate pydantic models


One of our favorite LLM features is generating structured data (i.e. a pydantic model) from text. Use this feature to extract features, generate movie scripts, produce fake data, etc.

Let's create a Movie Assistant to write a MovieScript for us.

  • Create a file movie_assistant.py
from typing import List
from pydantic import BaseModel, Field
from rich.pretty import pprint
from phi.assistant import Assistant

class MovieScript(BaseModel):
    setting: str = Field(..., description="Provide a nice setting for a blockbuster movie.")
    ending: str = Field(..., description="Ending of the movie. If not available, provide a happy ending.")
    genre: str = Field(..., description="Genre of the movie. If not available, select action, thriller or romantic comedy.")
    name: str = Field(..., description="Give a name to this movie")
    characters: List[str] = Field(..., description="Name of characters for this movie.")
    storyline: str = Field(..., description="3 sentence storyline for the movie. Make it exciting!")

movie_assistant = Assistant(
    description="You help write movie scripts.",
    output_model=MovieScript,
)

pprint(movie_assistant.run("New York"))
  • Run the movie_assistant.py file
python movie_assistant.py
  • The output is an object of the MovieScript class; here's how it looks:
MovieScript(
│   setting='A bustling and vibrant New York City',
│   ending='The protagonist saves the city and reconciles with their estranged family.',
│   genre='action',
│   name='City Pulse',
│   characters=['Alex Mercer', 'Nina Castillo', 'Detective Mike Johnson'],
│   storyline='In the heart of New York City, a former cop turned vigilante, Alex Mercer, teams up with a street-smart activist, Nina Castillo, to take down a corrupt political figure who threatens to destroy the city. As they navigate through the intricate web of power and deception, they uncover shocking truths that push them to the brink of their abilities. With time running out, they must race against the clock to save New York and confront their own demons.'
)

PDF Assistant with Knowledge & Storage


Let's create a PDF Assistant that can answer questions from a PDF. We'll use PgVector for knowledge and storage.

Knowledge Base: information that the Assistant can search to improve its responses (uses a vector db).

Storage: provides long term memory for Assistants (uses a database).

  1. Run PgVector

Install Docker Desktop and run PgVector on port 5532 using:

docker run -d \
  -e POSTGRES_DB=ai \
  -e POSTGRES_USER=ai \
  -e POSTGRES_PASSWORD=ai \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v pgvolume:/var/lib/postgresql/data \
  -p 5532:5432 \
  --name pgvector \
  phidata/pgvector:16
  2. Create PDF Assistant
  • Create a file pdf_assistant.py
import typer
from rich.prompt import Prompt
from typing import Optional, List
from phi.assistant import Assistant
from phi.storage.assistant.postgres import PgAssistantStorage
from phi.knowledge.pdf import PDFUrlKnowledgeBase
from phi.vectordb.pgvector import PgVector2

db_url = "postgresql+psycopg://ai:ai@localhost:5532/ai"

knowledge_base = PDFUrlKnowledgeBase(
    urls=["https://phi-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
    vector_db=PgVector2(collection="recipes", db_url=db_url),
)
# Comment out after first run
knowledge_base.load()

storage = PgAssistantStorage(table_name="pdf_assistant", db_url=db_url)


def pdf_assistant(new: bool = False, user: str = "user"):
    run_id: Optional[str] = None

    if not new:
        existing_run_ids: List[str] = storage.get_all_run_ids(user)
        if len(existing_run_ids) > 0:
            run_id = existing_run_ids[0]

    assistant = Assistant(
        run_id=run_id,
        user_id=user,
        knowledge_base=knowledge_base,
        storage=storage,
        # Show tool calls in the response
        show_tool_calls=True,
        # Enable the assistant to search the knowledge base
        search_knowledge=True,
        # Enable the assistant to read the chat history
        read_chat_history=True,
    )
    if run_id is None:
        run_id = assistant.run_id
        print(f"Started Run: {run_id}\n")
    else:
        print(f"Continuing Run: {run_id}\n")

    # Runs the assistant as a cli app
    assistant.cli_app(markdown=True)


if __name__ == "__main__":
    typer.run(pdf_assistant)
  3. Install libraries
pip install -U pgvector pypdf "psycopg[binary]" sqlalchemy
  4. Run PDF Assistant
python pdf_assistant.py
  • Ask a question:
How do I make pad thai?
  • See how the Assistant searches the knowledge base and returns a response.

  • Message bye to exit, start the assistant again using python pdf_assistant.py and ask:

What was my last message?

See how the assistant now maintains chat history across sessions using storage.

  • Run the pdf_assistant.py file with the --new flag to start a new run.
python pdf_assistant.py --new

Check out the cookbook for more examples.

Next Steps

  1. Read the basics to learn more about phidata.
  2. Read about Assistants and how to customize them.
  3. Check out the cookbook for in-depth examples and code.

Demos

Check out the following AI Applications built using phidata:

  • PDF AI that summarizes and answers questions from PDFs.
  • ArXiv AI that answers questions about ArXiv papers using the ArXiv API.
  • HackerNews AI that summarizes stories and users, and shares what's new on HackerNews.

Tutorials

Building the LLM OS with gpt-4o

Autonomous RAG

Local RAG with Llama3

Llama3 Research Assistant powered by Groq

Looking to build an AI product?

We've helped many companies build AI products; the general workflow is:

  1. Build an Assistant with proprietary data to perform tasks specific to your product.
  2. Connect your product to the Assistant via an API.
  3. Monitor and Improve your AI product.

We also provide dedicated support and development; book a call to get started.

Contributions

We're an open-source project and welcome contributions. Please read the contributing guide for more information.

Request a feature

  • If you have a feature request, please open an issue or make a pull request.
  • If you have ideas on how we can improve, please create a discussion.

Roadmap

Our roadmap is available here. If you have a feature request, please open an issue/discussion.

phidata's People

Contributors

adieyal, anuragts, aravindmyd, ashpreetbedi, atharvk9, ayushmorbar, bmorphism, demeralde, eliaspereirah, eltociear, fanaperana, felixkamau, frankxia, ivanfioravanti, jacobgoldenart, jacobweiss2305, jeblister, joeychilson, justinqcai, kedar-1, kosiew, kpcofgs, mjcarbonell, neverbiasu, raghavanvijay33, raghavdixit99, shailensobhee, vishwajeetdabholkar, ysdinesh31, ysolanky


phidata's Issues

error: phi ws up

I attempted to create a Docker container by running phi ws up as shown below, but encountered an error. Docker itself is up and running without any issues.

> phi ws up                                                                  
Starting workspace: ai-app

--**-- Confirm resources to create:
  -+-> Network: ai
  -+-> Container: ai-db
  -+-> Container: ai-app

Network: ai
Total 3 resources

Confirm deploy [Y/n]: Y

-==+==- Network: ai
ERROR    Could not connect to docker. Please confirm docker is installed and running                    
ERROR    Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))    

How to switch OpenAI models

from phi.assistant import Assistant

assistant = Assistant(description="You help people with their health and fitness goals.")
assistant.print_response("Share a quick healthy breakfast recipe.", markdown=True)


In the above example, the GPT-4-turbo model is used by default. How do I modify the default model? Thank you.
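A hedged pointer rather than an official answer: based on the OpenAIChat usage quoted in the OpenAI Vision issue further down this page, the model can be switched by passing an explicit llm to the Assistant (the model id below is illustrative):

from phi.assistant import Assistant
from phi.llm.openai import OpenAIChat

assistant = Assistant(
    llm=OpenAIChat(model="gpt-3.5-turbo"),  # illustrative model id
    description="You help people with their health and fitness goals.",
)
assistant.print_response("Share a quick healthy breakfast recipe.", markdown=True)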

Question about OpenAILike usage

I saw this LM-Studio cookbook:

from phi.assistant import Assistant
from phi.llm.openai.like import OpenAILike
from phi.tools.duckduckgo import DuckDuckGo

assistant = Assistant(
    llm=OpenAILike(base_url="http://localhost:1234/v1"),
    tools=[DuckDuckGo()],
    show_tool_calls=True,
)
assistant.print_response("Whats happening in France? Summarize top stories with sources.", markdown=True)

Is this because the OpenAI-compatible API provided by LM-Studio supports tools?

Since I'm using Colab, I was wondering how OpenAILike works, and I tried it with a Groq endpoint: llm=OpenAILike(base_url="https://api.groq.com/openai/v1", api_key="gsk_xxxxxx", model="mixtral-8x7b-32768"),

it said: tools is not supported with this model. Just want to get my head around it.

And kudos on phidata; it really makes open-source models usable in a very clear way without many workarounds!

Functions with datetime arguments

Discussed in https://github.com/orgs/phidatahq/discussions/50

Originally posted by adieyal January 19, 2024
Thanks for an excellent library.

I was wondering what the best approach is for defining tools whose functions have datetime arguments. From my reading of the code, it seems that you need to create a custom Function since get_json_schema doesn't handle dates. e.g.

class SalesFunction(Function):
    ...
    parameters: Dict[str, Any] = {
        "type": "object",
        "properties": {
            "start_date": {
                "type": "string",
                "format": "date-time",
                "description": ...
            },
            "end_date": {
                "type": "string",
                "format": "date-time",
                "description": ...
            }
        }
    }

    entrypoint: Callable[[str, str], str] = ...

Perhaps an alternative would be to customise get_json_schema and then override from_callable to use the new function?

def from_callable(cls, c: Callable) -> "Function":
    from inspect import getdoc
    from phi.utils.json_schema import get_json_schema

    parameters = {"type": "object", "properties": {}}
    try:
        type_hints = get_type_hints(c)
        parameters = get_json_schema(type_hints)
        # logger.debug(f"Type hints for {c.__name__}: {type_hints}")
    except Exception as e:
        logger.warning(f"Could not parse args for {c.__name__}: {e}")
    return cls(
        name=c.__name__,
        description=getdoc(c),
        parameters=parameters,
        entrypoint=validate_call(c),
    )

sqlalchemy.exc.OperationalError: (psycopg.OperationalError) connection failed: FATAL: role "ai" does not exist

Hi, I'm new here, so this may be silly, but following the 'assistant knowledge cookbook' instructions leads only to the following error in the console, complaining about the lack of an 'ai' role.
sqlalchemy.exc.OperationalError: (psycopg.OperationalError) connection failed: FATAL: role "ai" does not exist (Background on this error at: https://sqlalche.me/e/20/e3q8)

Any idea why this might be happening? I'm looking to contribute, so I'd definitely like to get my local setup working.
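A hedged troubleshooting note, not from the original thread: this error usually means Postgres was initialized with credentials other than those the cookbook expects, for example because the pgvolume volume already existed with different settings. Assuming that is the case, removing the container and its volume and then re-running the documented command recreates the database with the ai role:

docker rm -f pgvector
docker volume rm pgvolume
# then re-run the docker run command from the "Run PgVector" step above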

Add skypilot as underlying infrastructure provider

I'm just exploring this very promising project.

IMHO, skypilot would be the perfect match for phidata & running ai_apps.

About
SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.

Basically, after writing a short deployment (YAML) once, an app can be run on any configured cloud, even locally on k8s.
SkyPilot searches for the currently cheapest prices for the configured resources and starts the app, including load balancer, scaling, tunnels, etc.

edit: fixed typos

Gradio demo on Spaces

Kudos for building Phidata. It would be great to have a hosted Gradio app on Spaces as well. Are there any plans?

Assistant fails to return response if tool returns large string

I have created the following program to test phidata's tools capabilities. I noticed that when my tool returns a larger string (for example, 4K in size), the program invokes the tool in an infinite loop. I also get the following warning:

WARNING Could not parse tool calls from response: {
"tool_calls": [
{
"name": "xvpinfo",
"arguments": {
"street": "Swan Lake",
"year": "2024"
}
}
]
}

When the tool returns a smaller string (for example, 500 bytes in size), everything works as expected.

Is there anything to tweak to make the program work with larger strings returned by the tool? Many thanks in advance.

import logging
from phi.assistant import Assistant
from phi.llm.ollama import Ollama
import requests
import sys
from phi.tools import Toolkit

temp = 0.3
models = ["mistral", "llama2", "mixtral"]
model = models[0]

class XvpTool(Toolkit):
    def __init__(self):
        super().__init__()

    def xvpinfo(self, street: str, year: str) -> str:
        """Retrieves list of residents living on a street in specific year.
        Args:
            street (str): Full or part of street name (e.g., 'SWAN LAKE', '930 MAIN').
            year (str): year (e.g., '2024').            
        Returns:
            str: A list of residents and their full address.
        """
        url = f"https://secret.com/api/search/address/{street}/{year}/"
        headers = {
            "API-Key": "******",
            "X-Version": "2.0"
        }

        try:
            response = requests.get(url, headers=headers)

            # Check if the request was successful (status code 200)
            if response.status_code == 200:
                result_set = response.json()

                if not result_set:
                    return "None"
                result_string = "\r\n".join([f"{entry.get('name', '')} {entry.get('locNameFull', '')}" if 'locNameFull' in entry else f"{entry.get('name', '')}" for entry in result_set])
                return result_string
            else:
                print(f"Error: {response.status_code} - {response.text}")
                return None

        except Exception as e:
            print(f"Error: {e}")
            return None

assistant = Assistant(
    llm=Ollama(model=model, options={"temperature": temp}),
    description="You are a helpful Assistant to retrieve list of residents living on specific street using tools", 
    instructions=[
            "Given a street name and year, retrieve a list of residents and their address sorted by owner name.",
            "At the end of the response, please include the number of residents found.",
            "At the end of the response, please provide resident count by city.", 
            "If no residents are found, please return 'None Found'."
    ],
    tools=[XvpTool().xvpinfo], 
    show_tool_calls=True,
    debug_mode=True,
    prevent_hallucinations=True,
)

assistant.print_response(f"Who lives at {street} in year {year}?", markdown=False)


Adding current date to custom instructions

Discussed in https://github.com/orgs/phidatahq/discussions/51

Originally posted by adieyal January 19, 2024
It could be useful to add a flag to the assistant's custom instructions to include the current date, enabling queries relative to time, e.g. Were there any notable trends in last month's sales?

I've added mine to extra_instructions, i.e.

extra_instructions=[f"The current date is {datetime.now()}"]

but it might be a common enough requirement to warrant adding as a bool on Assistant

class Assistant(BaseModel):
    ...
    add_current_time: bool = False

Returning a string instead of print response

How do you return the answer as a string instead of printing the response:

from phi.assistant.python import PythonAssistant
from phi.file.local.csv import CsvFile

def average_rating():
    python_assistant = PythonAssistant(
        files=[
            CsvFile(
                path="https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
                description="Contains information about movies from IMDB.",
            )
        ],
        pip_install=True,
        show_tool_calls=True,
    )

    python_assistant.print_response("What is the average rating of movies?", markdown=True)

Groq Support

Do you have any plans for integrating with Groq? Currently, it seems you depend on OpenAI's tools functionality. Adding a mechanism to detect tools using Groq could bring significant speed benefits for a more efficient assistant. Hope you're considering a plan. 🚀

OpenAI Vision Incomplete Response

I get an incomplete response when executing the following.

Code:

from dotenv import load_dotenv
from phi.assistant import Assistant
from phi.llm.openai import OpenAIChat

from spaceX.services import prompt

# ensure loading of .envs
_ = load_dotenv()

assistant = Assistant(
    description= "You are a brilliant invoices OCR.",
    #instructions=["Only provide the result, do not need to provide any additional information."],
    # prevent_prompt_injection=True,
    debug_mode=True,
    llm=OpenAIChat(model="gpt-4-vision-preview", ),
    
)


response = assistant.run( [
     {"type": "text", "text": prompt.PROMPT},
     {
          "type": "image_url",
          "image_url": "https://XXX.ngrok-free.app/test.jpeg",
    },
    ], stream=False)


print(response)

Trace:

❯ PYTHONPATH=. python capture/main.py
DEBUG    Debug logs enabled                                                                                                                            
DEBUG    *********** Run Start: 09c4f3e5-d8f5-4c1f-96e9-7dffd3b028cb ***********                                                                       
DEBUG    *********** Task 1 Start ***********                                                                                                          
DEBUG    *********** Task Start: 17444c7b-7c80-4742-a2aa-6cf29acfd132 ***********                                                                      
DEBUG    ---------- OpenAI Response Start ----------                                                                                                   
DEBUG    ============== system ==============                                                                                                          
DEBUG    You are a brilliant invoices OCR.                                                                                                             
                                                                                                                                                       
DEBUG    ============== user ==============                                                                                                            
DEBUG    [{'type': 'text', 'text': 'Please extract the following details from the provided invoice image onto JSON format:\n    Invoice number,\n      
         Date of the invoice,\n    Customer name and address,\n    Vendor name and address,\n    Line items for each item, retrieve the description,   
         quantity, unit price, and total price\n    Subtotals, tax rates, tax amounts, and total due amount,\n    Payment terms, including due date and
         any discounts for early payment,\n    Vendor and customer VAT identification numbers (if applicable),\n    Bank details, including bank name, 
         account number, IBAN, and SWIFT/BIC codes\n\nFormat the extracted data into a structured JSON object ensuring accuracy and consistency in the 
         naming convention and data types for each field."\n'}, {'type': 'image_url', 'image_url':                                                     
         'https://X.ngrok-free.app/test.jpeg'}]                                                                                       
DEBUG    Time to generate response: 8.4779s                                                                                                            
DEBUG    ============== assistant ==============                                                                                                       
DEBUG    Here is the extracted information from the provided invoice image formatted as a JSON object:                                                 
                                                                                                                                                       
                                                                                                                                                       
DEBUG    ---------- OpenAI Response End ----------                                                                                                     
DEBUG    *********** Task 1 End ***********                                                                                                            
Here is the extracted information from the provided invoice image formatted as a JSON object:

Settings:

phidata == 2.3.52
openai == 1.13.3

Another variation:
from dotenv import load_dotenv
from phi.assistant import Assistant
from phi.llm.openai import OpenAIChat

from capture.services import prompt

# ensure loading of .envs
_ = load_dotenv()

assistant = Assistant(
    description= "You are a brilliant invoices OCR.",
    instructions=[prompt.PROMPT],
    # prevent_prompt_injection=True,
    debug_mode=True,
    llm=OpenAIChat(model="gpt-4-vision-preview", ),
    
)


response = assistant.run( [
     {"type": "text", "text": "return all the values from the instructions"},
     {
          "type": "image_url",
          "image_url": " X.ngrok-free.app/test.jpeg",
    },
    ], stream=False)

print(response)

Trace

❯ PYTHONPATH=. python capture/main.py
DEBUG    Debug logs enabled                                                                                                                            
DEBUG    *********** Run Start: 429e7408-27ec-470f-870b-29632012f0c1 ***********                                                                       
DEBUG    *********** Task 1 Start ***********                                                                                                          
DEBUG    *********** Task Start: 409b8a8c-5012-4302-b0c6-cbe4cdb92177 ***********                                                                      
DEBUG    ---------- OpenAI Response Start ----------                                                                                                   
DEBUG    ============== system ==============                                                                                                          
DEBUG    You are a brilliant invoices OCR.                                                                                                             
         YOU MUST FOLLOW THESE INSTRUCTIONS CAREFULLY.                                                                                                 
         <instructions>                                                                                                                                
         1. Please extract the following details from the provided invoice image onto JSON format:                                                     
             Invoice number,                                                                                                                           
             Date of the invoice,                                                                                                                      
             Customer name and address,                                                                                                                
             Vendor name and address,                                                                                                                  
             Line items for each item, retrieve the description, quantity, unit price, and total price                                                 
             Subtotals, tax rates, tax amounts, and total due amount,                                                                                  
             Payment terms, including due date and any discounts for early payment,                                                                    
             Vendor and customer VAT identification numbers (if applicable),                                                                           
             Bank details, including bank name, account number, IBAN, and SWIFT/BIC codes                                                              
                                                                                                                                                       
         Format the extracted data into a structured JSON object ensuring accuracy and consistency in the naming convention and data types for each    
         field."                                                                                                                                       
                                                                                                                                                       
         </instructions>                                                                                                                               
DEBUG    ============== user ==============                                                                                                            
DEBUG    [{'type': 'text', 'text': 'return all the values from the instructions'}, {'type': 'image_url', 'image_url': '                                
         https://X.ngrok-free.app/test.jpeg'}]                                                                                         
DEBUG    Time to generate response: 16.1524s                                                                                                           
DEBUG    ============== assistant ==============                                                                                                       
DEBUG    ```json                                                                                                                                       
         {                                                                                                                                             
           "invoice_number": "90152935",                                                                                                               
           "                                                                                                                                           
DEBUG    ---------- OpenAI Response End ----------                                                                                                     
DEBUG    *********** Task 1 End ***********                                                                                                            
```json
{
  "invoice_number": "90152935",
  "

What am I doing wrong?
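A hedged note, not from the original thread: gpt-4-vision-preview is known to truncate output when no max_tokens limit is raised, which matches the cut-off JSON above. Assuming phidata's OpenAIChat accepts and forwards a max_tokens parameter (unverified here), something like this may help:

llm=OpenAIChat(model="gpt-4-vision-preview", max_tokens=2048),  # assumption: max_tokens is forwarded to the API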

RAG Design Pattern

Hello, is there a recommended RAG design pattern for a phi assistant and a vector database?

Do we need to pass tools, like cosine-similarity retrieval, as a Python function?
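A hedged answer based on the PDF Assistant example above: no hand-written retrieval tool appears to be needed. Attaching a knowledge_base backed by a vector db and setting search_knowledge=True lets the Assistant search it via function calling:

from phi.assistant import Assistant

# knowledge_base as constructed in the "PDF Assistant with Knowledge & Storage" example above
assistant = Assistant(
    knowledge_base=knowledge_base,
    # enables the assistant to search the knowledge base itself,
    # so a custom cosine-similarity function is not required
    search_knowledge=True,
)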

How to create own tool

I want to call a generate_report function with an app name parameter. How do I create my own function and configure it with an assistant?

Please give me some sample code or reference link.
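A hedged sketch based on the tool patterns shown elsewhere on this page (a plain function or a Toolkit method passed via tools=[...]); generate_report below is a hypothetical stand-in whose docstring describes its arguments to the LLM:

from phi.assistant import Assistant

def generate_report(app_name: str) -> str:
    """Generates a report for the given app.

    Args:
        app_name (str): Name of the app to report on.

    Returns:
        str: The report contents.
    """
    # hypothetical implementation; replace with real reporting logic
    return f"Report for {app_name}: all systems nominal."

assistant = Assistant(tools=[generate_report], show_tool_calls=True)
assistant.print_response("Generate a report for the billing app.", markdown=True)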

Support for different language model backends/APIs

I realise the simple answer is to replace the hostname in the openai module, but it might be nice to add more support for various backends (as in, for the LLM part). There are a couple of open source function calling models and they are able to use similar querying methods.
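For what it's worth, two backend swaps already appear on this page: OpenAILike(base_url=...) in the LM-Studio question above, and the Ollama class in the tool issue below. A minimal local sketch using the latter (the model name is illustrative):

from phi.assistant import Assistant
from phi.llm.ollama import Ollama

assistant = Assistant(llm=Ollama(model="mistral"))
assistant.print_response("Summarize the benefits of local LLMs in two sentences.", markdown=True)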

Assistant max function calling

Is there a way to cap how many steps the phidata Python assistant can take? I am trying to guard against infinite loops.

Assistant's save_and_run, run_files, and run_code functionalities not working

Describe the bug
The save_and_run, run_files, and run_code functionalities of the Assistant class in the phi.assistant module are not working as expected. When these options are set to True, the Assistant does not execute the code or save the changes.

To Reproduce
Steps to reproduce the behavior:

  1. Create an instance of the Assistant class with save_and_run=True, run_files=True, and run_code=True.
  2. Use the print_response method to generate some code.
  3. Observe that the code is not executed and the changes are not saved.

Expected behavior
When save_and_run, run_files, and run_code are set to True, the Assistant should execute the generated code and save any changes made to the files.

PythonAssistant - ERROR: Function multi_tool_use.parallel not found

Issue

  • Python Assistant will not pip install packages

  • pip_install=True, is causing an error "ERROR: Function multi_tool_use.parallel not found"

  • pip_install=False the code runs without error.

Phidata Version

2.3.36

Code

from phi.assistant.python import PythonAssistant
from phi.file.local.csv import CsvFile
from rich.pretty import pprint
from pydantic import BaseModel, Field


class AssistantResponse(BaseModel):
    result: str = Field(..., description="The result of the users question.")


def average_rating() -> AssistantResponse:
    python_assistant = PythonAssistant(
        files=[
            CsvFile(
                path="https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
                description="Contains information about movies from IMDB.",
            )
        ],
        instructions=[
            "Only provide the result, do not need to provide any additional information.",
        ],
        # This will make sure the output of this Assistant is an object of the `AssistantResponse` class
        output_model=AssistantResponse,
        # This will allow the Assistant to directly run python code, risky but fun
        run_code=True,
        # Uncomment the following line to let the assistant install python packages
        pip_install=True,
        # Uncomment the following line to show debug logs
        # debug_mode=True,
    )

    response: AssistantResponse = python_assistant.run("What is the average rating of movies?")  # type: ignore
    return response

Requirements

Package Version


annotated-types 0.6.0
anyio 4.3.0
asttokens 2.4.1
boto3 1.34.50
botocore 1.34.50
certifi 2024.2.2
charset-normalizer 3.3.2
click 8.1.7
colorama 0.4.6
comm 0.2.1
debugpy 1.8.1
decorator 5.1.1
distro 1.9.0
docker 7.0.0
executing 2.0.1
gitdb 4.0.11
GitPython 3.1.42
h11 0.14.0
httpcore 1.0.4
httpx 0.27.0
idna 3.6
ipykernel 6.29.3
ipython 8.22.1
jedi 0.19.1
jmespath 1.0.1
jupyter_client 8.6.0
jupyter_core 5.7.1
markdown-it-py 3.0.0
matplotlib-inline 0.1.6
mdurl 0.1.2
nest-asyncio 1.6.0
numpy 1.26.4
openai 1.12.0
packaging 23.2
pandas 2.2.1
parso 0.8.3
phidata 2.3.43
pip 22.3.1
platformdirs 4.2.0
prompt-toolkit 3.0.43
psutil 5.9.8
pure-eval 0.2.2
pydantic 2.6.2
pydantic_core 2.16.3
pydantic-settings 2.2.1
Pygments 2.17.2
python-dateutil 2.8.2
python-dotenv 1.0.1
pytz 2024.1
pywin32 306
PyYAML 6.0.1
pyzmq 25.1.2
requests 2.31.0
rich 13.7.0
s3transfer 0.10.0
setuptools 65.5.0
six 1.16.0
smmap 5.0.1
sniffio 1.3.1
stack-data 0.6.3
tomli 2.0.1
tornado 6.4
tqdm 4.66.2
traitlets 5.14.1
typer 0.9.0
typing_extensions 4.10.0
tzdata 2024.1
urllib3 2.0.7
wcwidth 0.2.13
