
daveebbelaar / langchain-experiments

859 stars · 552 forks · 5.68 MB

Building Apps with LLMs

Home Page: https://datalumina.com

License: MIT License

Languages: Python 26.87%, Jupyter Notebook 73.13%
Topics: ai, langchain, langchain-python, python, slack-bot

langchain-experiments's People

Contributors

alberto-codes, daveebbelaar


langchain-experiments's Issues

Issue with function calling using tools parameter

Hi Dave. I am using your openai_function_calling.py example to understand the function-calling functionality. Since it is a little outdated, I tried to update the code. Apart from changing ChatCompletion.create to client.chat.completions.create, the main change is the use of the tools parameter instead of functions (and tool_choice instead of function_call). It runs fine except for the part where the function result is fed back into the API call (the second_completion call towards the very end), where the message now uses tool as its role instead of function. Could you please point out what I am doing wrong? The updated code is as follows:

import json
from openai import OpenAI
from datetime import datetime, timedelta

client = OpenAI()

# --------------------------------------------------------------
# Ask ChatGPT a Question
# --------------------------------------------------------------

completion = client.chat.completions.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {
            "role": "user",
            "content": "When's the next flight from Amsterdam to New York?",
        },
    ],
)

output = completion.choices[0].message.content
print("\n")
print(output)
print("\n")

# --------------------------------------------------------------
# Use OpenAI’s Function Calling Feature
# --------------------------------------------------------------

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_flight_info",
            "description": "Get flight information between two locations",
            "parameters": {
                "type": "object",
                "properties": {
                    "loc_origin": {
                        "type": "string",
                        "description": "The departure airport, e.g. DUS",
                    },
                    "loc_destination": {
                        "type": "string",
                        "description": "The destination airport, e.g. HAM",
                    },
                },
                "required": ["loc_origin", "loc_destination"],
            },
        },
    }
]

user_prompt = "When's the next flight from Amsterdam to New York?"

completion = client.chat.completions.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": user_prompt}],
    # Add function calling
    tools=tools,
    tool_choice="auto",  # let the model decide whether to call a function
)

# It automatically fills the arguments with correct info based on the prompt
# Note: the function does not exist yet

output = completion.choices[0].message.tool_calls[0]
print("\n")
print(output)
print("\n")

# --------------------------------------------------------------
# Add a Function
# --------------------------------------------------------------


def get_flight_info(loc_origin, loc_destination):
    """Get flight information between two locations."""

    # Example output returned from an API or database
    flight_info = {
        "loc_origin": loc_origin,
        "loc_destination": loc_destination,
        "datetime": str(datetime.now() + timedelta(hours=2)),
        "airline": "KLM",
        "flight": "KL643",
    }

    return json.dumps(flight_info)


# Use the LLM output to manually call the function
# The json.loads function converts the string to a Python object

params = json.loads(output.function.arguments)
origin = params.get("loc_origin")
destination = params.get("loc_destination")

print("\n")
print(origin)
print(destination)
print(params)
print("\n")

# Call the function with arguments

# Note: eval on a model-chosen name is unsafe outside a demo;
# a dict lookup such as {"get_flight_info": get_flight_info} is safer
chosen_function = eval(output.function.name)
flight = chosen_function(**params)

print("\n")
print(flight)
print("\n")

# --------------------------------------------------------------
# Add function result to the prompt for a final answer
# --------------------------------------------------------------

# The key is to add the function output back to the messages with role: function
second_completion = client.chat.completions.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {"role": "user", "content": user_prompt},
        {"role": "tool", "tool_call_id": output.id, "content": flight}, # this line seems to be the problem
    ],
    tools=tools,
)
response = second_completion.choices[0].message.content
print("\n")
print(response)
print("\n")

The error I run into is as follows:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/Mac/openai-env/lib/python3.12/site-packages/openai/_utils/_utils.py", line 272, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Mac/openai-env/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 645, in create
    return self._post(
           ^^^^^^^^^^^
  File "/Users/Mac/openai-env/lib/python3.12/site-packages/openai/_base_client.py", line 1088, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/Mac/openai-env/lib/python3.12/site-packages/openai/_base_client.py", line 853, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/Users/Mac/openai-env/lib/python3.12/site-packages/openai/_base_client.py", line 930, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.", 'type': 'invalid_request_error', 'param': 'messages.[1].role', 'code': None}}

Thank you! Any help is really appreciated.
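For reference, the error message describes the requirement directly: a message with role "tool" is only valid when the preceding assistant message carries the matching tool_calls entry. Below is a minimal sketch of how the second messages list could be assembled under that reading (the "call_abc123" id and argument values are placeholders; in the script above they would come from output.id and output.function):

```python
# Sketch: echo the model's tool call back into the conversation as an
# assistant message, then answer it with a tool message that shares the
# same tool_call_id.

def build_second_messages(user_prompt, tool_call_id, function_name, arguments, flight):
    """Assemble the message list for the follow-up completion.

    tool_call_id, function_name, and arguments come from
    completion.choices[0].message.tool_calls[0]; flight is the
    JSON string returned by get_flight_info.
    """
    return [
        {"role": "user", "content": user_prompt},
        {
            "role": "assistant",
            "content": None,
            "tool_calls": [
                {
                    "id": tool_call_id,
                    "type": "function",
                    "function": {"name": function_name, "arguments": arguments},
                }
            ],
        },
        {"role": "tool", "tool_call_id": tool_call_id, "content": flight},
    ]


messages = build_second_messages(
    "When's the next flight from Amsterdam to New York?",
    "call_abc123",  # placeholder; use output.id in the real script
    "get_flight_info",
    '{"loc_origin": "AMS", "loc_destination": "JFK"}',
    '{"airline": "KLM", "flight": "KL643"}',
)
```

Passing this list as messages= to the second client.chat.completions.create call would give the tool message a preceding tool_calls entry with a matching id, which is exactly what the 400 error checks for.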

Generative Agents in LangChain

Hi,

I am working on AGI agents and facing the below issues.

  1. I need the agent to act as one of the electrical appliances, not as an individual. I tried the code snippet below for the agent, but it does not fully take on the required role; it still acts as an individual.

stevie = GenerativeAgent(
    name="Stevie",
    traits="talkative, helpful",  # You can add more persistent traits here
    status="Virtual Agent",  # When connected to a virtual world, we can have the characters update their status
    llm=LLM,
    daily_summaries=[
        (
            "Stevie is the autonomous AI agent in Tommie's smart home. "
            "It is equipped with advanced AI capabilities. It can control the TV, calendar, and the microwave in the house."
        )
    ],
    memory=stevie_memory,
    verbose=False,
)

  2. How do I make one agent wait for the response of another? I see that responses in terms of SAY and REACT are supported. How can we add a WAIT for one agent while the other returns a response?

youtube_chat cannot run?

video_url = "https://www.youtube.com/watch?v=L_Guz73e6fw"
db = create_db_from_youtube_video_url(video_url)

I receive an error like this; could you help me?

Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised APIError: HTTP code 413 from API (<html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>openresty</center>
</body>
</html>
).
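HTTP 413 means the request body sent to the embeddings endpoint was too large, which typically happens when a long transcript goes into a single embedding request. A common workaround (an assumption about this setup, not a confirmed fix) is to split the transcript into smaller chunks before embedding; the sketch below uses a plain stdlib splitter as a stand-in for LangChain's RecursiveCharacterTextSplitter with a reduced chunk_size:

```python
# Sketch: split a long transcript into overlapping chunks so each
# embedding request stays well under the API's request-size limit.

def split_text(text, chunk_size=1000, overlap=100):
    """Naive character-based splitter (a stand-in for LangChain's
    RecursiveCharacterTextSplitter). Adjacent chunks share `overlap`
    characters of context."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks


transcript = "word " * 2000  # ~10,000 characters, stands in for a long video transcript
chunks = split_text(transcript)
```

In the repo's create_db_from_youtube_video_url, the analogous change would be lowering the chunk_size passed to the text splitter so no single request exceeds the server's limit.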

Deploy different experiments as APIs locally/on cloud using langchain-serve

Repo - langchain-serve.

  • Exposes APIs from function definitions locally as well as on the cloud.
  • Requires very few code changes; the ease of development remains the same as local.
  • Supports both REST & WebSocket endpoints.
  • Serverless/autoscaling endpoints with automatic TLS certs on the cloud.
  • Real-time streaming and human-in-the-loop support, which is crucial for chatbots.

Disclaimer: I'm the primary author of langchain-serve.
