daveebbelaar / langchain-experiments
Building Apps with LLMs
Home Page: https://datalumina.com
License: MIT License
Hi Dave. I am using your `openai_function_calling.py` example to understand the function calling functionality. I see that it is a little outdated, so I tried to update the code. Apart from changing `ChatCompletion.create` to `chat.completions.create`, the main change is the use of the `tools` parameter instead of `functions` (and `tool_choice` instead of `function_call`). It seems to run fine except for the part where the function result is integrated into the second API call (the `second_completion` variable in the code, towards the very end), where the main change is using `tool` instead of `function` as the `role`. Could you please guide me on what exactly I am doing wrong? The updated code is as follows:
import json
from openai import OpenAI
from datetime import datetime, timedelta
client = OpenAI()
# --------------------------------------------------------------
# Ask ChatGPT a Question
# --------------------------------------------------------------
completion = client.chat.completions.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {
            "role": "user",
            "content": "When's the next flight from Amsterdam to New York?",
        },
    ],
)
output = completion.choices[0].message.content
print("\n")
print(output)
print("\n")
# --------------------------------------------------------------
# Use OpenAI’s Function Calling Feature
# --------------------------------------------------------------
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_flight_info",
            "description": "Get flight information between two locations",
            "parameters": {
                "type": "object",
                "properties": {
                    "loc_origin": {
                        "type": "string",
                        "description": "The departure airport, e.g. DUS",
                    },
                    "loc_destination": {
                        "type": "string",
                        "description": "The destination airport, e.g. HAM",
                    },
                },
                "required": ["loc_origin", "loc_destination"],
            },
        },
    }
]
user_prompt = "When's the next flight from Amsterdam to New York?"
completion = client.chat.completions.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": user_prompt}],
    # Add function calling
    tools=tools,
    tool_choice="auto",  # let the model decide whether to call the function
)
# It automatically fills the arguments with correct info based on the prompt
# Note: the function does not exist yet
output = completion.choices[0].message.tool_calls[0]
print("\n")
print(output)
print("\n")
# --------------------------------------------------------------
# Add a Function
# --------------------------------------------------------------
def get_flight_info(loc_origin, loc_destination):
    """Get flight information between two locations."""
    # Example output returned from an API or database
    flight_info = {
        "loc_origin": loc_origin,
        "loc_destination": loc_destination,
        "datetime": str(datetime.now() + timedelta(hours=2)),
        "airline": "KLM",
        "flight": "KL643",
    }
    return json.dumps(flight_info)
# Use the LLM output to manually call the function
# The json.loads function converts the string to a Python object
origin = json.loads(output.function.arguments).get("loc_origin")
destination = json.loads(output.function.arguments).get("loc_destination")
params = json.loads(output.function.arguments)
type(params)
print("\n")
print(origin)
print(destination)
print(params)
print("\n")
# Call the function with arguments
chosen_function = eval(output.function.name)
flight = chosen_function(**params)
print("\n")
print(flight)
print("\n")
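One aside on the code above: `eval(output.function.name)` executes whatever name the model returns. A common alternative is a dispatch dictionary that only allows known functions. A minimal self-contained sketch; the stub function and the literal `name`/`arguments` values are illustrative stand-ins, not taken from a real API response:

```python
import json

def get_flight_info(loc_origin, loc_destination):
    """Stub standing in for the real function defined above."""
    return json.dumps({"loc_origin": loc_origin, "loc_destination": loc_destination})

# Map the names the model is allowed to call to real functions,
# instead of eval()'ing a model-supplied string.
available_functions = {"get_flight_info": get_flight_info}

# Hypothetical values mirroring output.function.name / output.function.arguments
name = "get_flight_info"
arguments = '{"loc_origin": "AMS", "loc_destination": "JFK"}'

chosen_function = available_functions[name]
flight = chosen_function(**json.loads(arguments))
print(flight)
```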
# --------------------------------------------------------------
# Add function result to the prompt for a final answer
# --------------------------------------------------------------
# The key is to add the function output back to the messages with role: function
second_completion = client.chat.completions.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {"role": "user", "content": user_prompt},
        {"role": "tool", "tool_call_id": output.id, "content": flight},  # this line seems to be the problem
    ],
    tools=tools,
)
response = second_completion.choices[0].message.content
print("\n")
print(response)
print("\n")
The error I run into is as follows:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/Mac/openai-env/lib/python3.12/site-packages/openai/_utils/_utils.py", line 272, in wrapper
    return func(*args, **kwargs)
  File "/Users/Mac/openai-env/lib/python3.12/site-packages/openai/resources/chat/completions.py", line 645, in create
    return self._post(
  File "/Users/Mac/openai-env/lib/python3.12/site-packages/openai/_base_client.py", line 1088, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/Users/Mac/openai-env/lib/python3.12/site-packages/openai/_base_client.py", line 853, in request
    return self._request(
  File "/Users/Mac/openai-env/lib/python3.12/site-packages/openai/_base_client.py", line 930, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.", 'type': 'invalid_request_error', 'param': 'messages.[1].role', 'code': None}}
Thank you! Any help is really appreciated.
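For reference, the 400 error points at the message sequence: a `tool`-role message is only valid directly after an assistant message whose `tool_calls` contains the matching call id. A minimal self-contained sketch of a message list that satisfies this; the `SimpleNamespace` values below are hypothetical stand-ins for the `output`, `user_prompt`, and `flight` objects produced earlier in the script:

```python
from types import SimpleNamespace

# Hypothetical stand-ins for objects created earlier in the script:
# `tool_call` mimics completion.choices[0].message.tool_calls[0].
tool_call = SimpleNamespace(
    id="call_abc123",
    function=SimpleNamespace(
        name="get_flight_info",
        arguments='{"loc_origin": "AMS", "loc_destination": "JFK"}',
    ),
)
user_prompt = "When's the next flight from Amsterdam to New York?"
flight = '{"airline": "KLM", "flight": "KL643"}'

# Echo the tool call back as an assistant message *before* the tool
# message, so the tool reply has a preceding tool_calls entry to answer.
messages = [
    {"role": "user", "content": user_prompt},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": tool_call.id,
                "type": "function",
                "function": {
                    "name": tool_call.function.name,
                    "arguments": tool_call.function.arguments,
                },
            }
        ],
    },
    {"role": "tool", "tool_call_id": tool_call.id, "content": flight},
]
```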
I tried following your YouTube tutorial "How to Build an AI Document Chatbot in 10 Minutes", but when I put any question into the chatbot I get this message:
TypeError: Cannot read properties of undefined (reading 'startsWith')
Hi there, first off, thank you for this great project! I have something similar to this person:
https://learn.microsoft.com/en-us/answers/questions/1459385/i-m-getting-an-error-when-deploying-a-flask-app-to
For some reason it cannot find the `slack_sdk` module.
Hello,
I get this error message when I ask what the doc is about:
"Error: Request failed with status code 401"
What did I do wrong? Please help.
Hi,
I am working on the AGI agents example and am facing the issue below.
`stevie = GenerativeAgent(
    name="Stevie",
    traits="talkative, helpful",  # You can add more persistent traits here
    status="Virtual Agent",  # When connected to a virtual world, we can have the characters update their status
    llm=LLM,
    daily_summaries=[
        (
            "Stevie is the autonomous AI agent in Tommie's smart home. "
            "It is equipped with advanced AI capabilities. It can control the TV, calendar, and the microwave in the house."
        )
    ],
    memory=stevie_memory,
    verbose=False,
)`
Hi,
the link for the langchain quickstart guide changed.
from:
https://python.langchain.com/en/latest/getting_started/getting_started.html#
video_url = "https://www.youtube.com/watch?v=L_Guz73e6fw"
db = create_db_from_youtube_video_url(video_url)
I receive an error like this, could you help me?
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised APIError: HTTP code 413 from API (<html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>openresty</center>
</body>
</html>
).
Repo - langchain-serve.
Disclaimer: I'm the primary author of langchain-serve.