
fastapi_poe's Introduction

fastapi_poe

An implementation of the Poe protocol using FastAPI.

Write your own bot

This package can also be used as a base to write your own bot. You can inherit from PoeBot to make a bot:

import fastapi_poe as fp

class EchoBot(fp.PoeBot):
    async def get_response(self, request: fp.QueryRequest):
        last_message = request.query[-1].content
        yield fp.PartialResponse(text=last_message)

if __name__ == "__main__":
    fp.run(EchoBot(), allow_without_key=True)

Now, run your bot using python <filename.py>.

  • In a different terminal, run ngrok to make it publicly accessible.
  • Use the publicly accessible URL to integrate your bot with Poe.
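Once the bot is reachable, Poe POSTs protocol requests to it. The payload below is a hand-built sketch of a query request body, modeled on the example request shown further down this page; all IDs are placeholders, and the authoritative schema lives in the Poe protocol documentation.

```python
import json

# Sketch of the JSON body Poe POSTs to a bot for a query request.
# All identifiers here are placeholder values, not real Poe IDs.
payload = {
    "version": "1.1",
    "type": "query",
    "query": [
        {
            "role": "user",
            "content": "Hello world",
            "content_type": "text/markdown",
        }
    ],
    "user_id": "u-placeholder",
    "conversation_id": "c-placeholder",
    "message_id": "m-placeholder",
}

body = json.dumps(payload)  # what actually goes over the wire
```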

Enable authentication

Poe servers send requests containing an Authorization HTTP header in the format "Bearer <access_key>"; the access key is configured on the bot settings page.

To validate that the request is from the Poe servers, you can either set the environment variable POE_ACCESS_KEY or pass the parameter access_key to the run function like:

if __name__ == "__main__":
    fp.run(EchoBot(), access_key=<key>)
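To make the expected header format concrete, here is a stand-alone sketch of the kind of check a server performs; this is illustrative only, not fastapi_poe's actual validation code.

```python
def parse_bearer_key(auth_header):
    """Extract the access key from an 'Authorization: Bearer <key>' header.

    Returns None when the header is missing or does not match the
    expected "Bearer <access_key>" format described above.
    """
    prefix = "Bearer "
    if not auth_header or not auth_header.startswith(prefix):
        return None
    key = auth_header[len(prefix):].strip()
    return key or None
```

A server would compare the parsed key against its configured access key and reject the request on mismatch.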

Samples

Check out our starter code repository for some examples you can use to get started with bot development.

fastapi_poe's People

Contributors

anmolsingh95, brendangraham14, canwushuang, chris00zeng, jellezijlstra, johntheli, krisyang1125, lilida, tkeji, yushengauggie, zintegy


fastapi_poe's Issues

Internal server error (error_id: f518718b-78a2-4498-bba8-366ae3247773) with gpt models

I am doing an experiment on a few LLMs including GPT-4o, GPT-4, GPT-3.5, and Claude-3. I am getting this error repeatedly when I call the GPT models (all three of them).

fastapi_poe.client.BotError: {"allow_retry": true, "text": "Internal server error (error_id: f518718b-78a2-4498-bba8-366ae3247773)"}.

This doesn't happen for the other models (for the exact same input), including Claude-3-Opus and Gemini-1.5-Pro. What could be the possible issue?

GPT-3.5-Turbo dependency not working when using the API

I have created a bot with a GPT-3.5-Turbo dependency using the code provided here. It works well on the web (poe.com), but I wanted to call my bot through this library and tried the following code (from here). It keeps giving the fastapi_poe.client.BotError: {"allow_retry": true, "text": "Internal server error"} error.

import asyncio
import fastapi_poe as fp

async def get_responses(api_key, bot_name, messages):
    async for partial in fp.get_bot_response(messages=messages, bot_name=bot_name, api_key=api_key):
        print(partial)

api_key = "..."

message = fp.ProtocolMessage(role="user", content="Hello world")

bot_name = "My_Bot_Name_Here"

asyncio.run(get_responses(api_key, bot_name, [message]))

The error on my VS Code is:

Error in Poe bot: Bot request to My_Bot_Name_Here failed on try 0 
 BotError('{"allow_retry": true, "text": "Internal server error"}')
Error in Poe bot: Bot request to My_Bot_Name_Here failed on try 1 
 BotError('{"allow_retry": true, "text": "Internal server error"}')

The error on the Modal console is:

Error in Poe bot: Bot request to GPT-3.5-Turbo failed on try 0 
 BotError('{"allow_retry": true, "text": "Internal server error"}')
Error in Poe bot: Bot request to GPT-3.5-Turbo failed on try 1 
 BotError('{"allow_retry": true, "text": "Internal server error"}')

It seems the GPT-3.5-Turbo dependency works just fine on the web interface but not through the API. Why is that?

"[SSL: WRONG_VERSION_NUMBER] wrong version number"

Following the get_bot_response example from the developer documentation returns "[SSL: WRONG_VERSION_NUMBER] wrong version number" for all attempts.

Environment Details:

Python 3.10.12
fastapi_poe 0.0.36
httpx 0.27.0
httpx-sse 0.4.0

Stacktrace:

Error in Poe bot: Bot request to GPT-3.5-Turbo failed on try 0 
 SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1007)')
Error in Poe bot: Bot request to GPT-3.5-Turbo failed on try 1 
 SSLError(1, '[SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1007)')
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/fastapi_poe/client.py", line 459, in stream_request_base
    async for message in ctx.perform_query_request(
  File "/usr/local/lib/python3.10/dist-packages/fastapi_poe/client.py", line 146, in perform_query_request
    async with httpx_sse.aconnect_sse(
  File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/httpx_sse/_api.py", line 69, in aconnect_sse
    async with client.stream(method, url, headers=headers, **kwargs) as response:
  File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1617, in stream
    response = await self.send(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1661, in send
    response = await self._send_handling_auth(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1689, in _send_handling_auth
    response = await self._send_handling_redirects(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1726, in _send_handling_redirects
    response = await self._send_single_request(request)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1763, in _send_single_request
    response = await transport.handle_async_request(request)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 373, in handle_async_request
    resp = await self._pool.handle_async_request(req)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/connection_pool.py", line 216, in handle_async_request
    raise exc from None
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/connection_pool.py", line 196, in handle_async_request
    response = await connection.handle_async_request(
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/connection.py", line 99, in handle_async_request
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/connection.py", line 76, in handle_async_request
    stream = await self._connect(request)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/connection.py", line 154, in _connect
    stream = await stream.start_tls(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_backends/anyio.py", line 80, in start_tls
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_backends/anyio.py", line 71, in start_tls
    ssl_stream = await anyio.streams.tls.TLSStream.wrap(
  File "/usr/local/lib/python3.10/dist-packages/anyio/streams/tls.py", line 132, in wrap
    await wrapper._call_sslobject_method(ssl_object.do_handshake)
  File "/usr/local/lib/python3.10/dist-packages/anyio/streams/tls.py", line 140, in _call_sslobject_method
    result = func(*args)
  File "/usr/lib/python3.10/ssl.py", line 975, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1007)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/access_api_directly.py", line 15, in <module>
    asyncio.run(get_responses(api_key, [message]))
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/home/access_api_directly.py", line 6, in get_responses
    async for partial in fp.get_bot_response(messages=messages, bot_name="GPT-3.5-Turbo", api_key=api_key):
  File "/usr/local/lib/python3.10/dist-packages/fastapi_poe/client.py", line 330, in stream_request
    async for message in stream_request_base(
  File "/usr/local/lib/python3.10/dist-packages/fastapi_poe/client.py", line 484, in stream_request_base
    raise BotError(f"Error communicating with bot {bot_name}") from e
fastapi_poe.client.BotError: Error communicating with bot GPT-3.5-Turbo

BotError('{"allow_retry": true, "text": "Internal server error"}') with Image Generation Bots

I created this issue because it is not the same as the original #48, although the comments in #48 mention it.

It might not be an issue with fastapi_poe but rather with the Poe backend bot service. However, there is no other place to report it, so please help.

I tried stream_request as well as calling fp.get_final_response directly. Both work with LLM bots, but they always return an internal server error with image generation bots (SDXL, Playground-v2, or custom bots built on these models).

When using stream_request and yielding the messages in real time, you can see the first one or two 'Generating image (1s elapsed)...' updates rendered before it goes to the error.

Example: Python 3.11, fastapi_poe 0.0.34

final_message = await fp.get_final_response(request, bot_name=IMAGE_MODEL, api_key=request.access_key)

Error as:

Error in Poe bot: Bot request to ComicBookStyle-PGV2 failed on try 0 
 BotError('{"allow_retry": true, "text": "Internal server error"}')
Error responding to query
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/fastapi_poe/base.py", line 381, in handle_query
    async for event in self.get_response_with_context(request, context):
  File "/usr/local/lib/python3.11/site-packages/fastapi_poe/base.py", line 111, in get_response_with_context
    async for event in self.get_response(request):
  File "/root/memes_creator.py", line 39, in get_response
    final_message = await fp.get_final_response(request, bot_name=IMAGE_MODEL, api_key=request.access_key)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/fastapi_poe/client.py", line 544, in get_final_response
    async for message in stream_request(
  File "/usr/local/lib/python3.11/site-packages/fastapi_poe/client.py", line 327, in stream_request
    async for message in stream_request_base(
  File "/usr/local/lib/python3.11/site-packages/fastapi_poe/client.py", line 456, in stream_request_base
    async for message in ctx.perform_query_request(
  File "/usr/local/lib/python3.11/contextlib.py", line 222, in __aexit__
    await self.gen.athrow(typ, value, traceback)
  File "/usr/local/lib/python3.11/contextlib.py", line 222, in __aexit__
    await self.gen.athrow(typ, value, traceback)
  File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1624, in stream
    yield response
  File "/usr/local/lib/python3.11/site-packages/httpx_sse/_api.py", line 70, in aconnect_sse
    yield EventSource(response)
  File "/usr/local/lib/python3.11/site-packages/fastapi_poe/client.py", line 225, in perform_query_request
    raise BotError(event.data)
fastapi_poe.client.BotError: {"allow_retry": true, "text": "Internal server error"}

Access denied and Error communicating with bot chatGPT

I deployed fastapi_poe on a server with public access.

It worked well when using the bot directly in the Poe portal, but when I tried to invoke the API, it returned the error below:

Error in Poe API Bot: Bot request to chatGPT failed on try 0 {"allow_retry": true, "text": "Access denied"}
Error in Poe API Bot: Bot request to chatGPT failed on try 1 {"allow_retry": true, "text": "Access denied"}
2023-07-19 06:34:19,553 - ERROR - Error responding to query
Traceback (most recent call last):
  File "/home/ubuntu/.local/lib/python3.8/site-packages/fastapi_poe/client.py", line 321, in stream_request
    async for message in ctx.perform_query_request(request):
  File "/home/ubuntu/.local/lib/python3.8/site-packages/fastapi_poe/client.py", line 225, in perform_query_request
    raise BotError(event.data)
fastapi_poe.client.BotError: {"allow_retry": true, "text": "Access denied"}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/ubuntu/.local/lib/python3.8/site-packages/fastapi_poe/base.py", line 169, in handle_query
    async for event in self.get_response(query):
  File "main_copy.py", line 89, in get_response
    async for msg in stream_request(query, "chatGPT", query.api_key):
  File "/home/ubuntu/.local/lib/python3.8/site-packages/fastapi_poe/client.py", line 330, in stream_request
    raise BotError(f"Error communicating with bot {bot_name}") from e
fastapi_poe.client.BotError: Error communicating with bot chatGPT

The data body I post:

message_id = 'm-000000xy92lr5lnu75wqtc90dg2JDT32' 
message_id1 ='m-000000xy92lr5lyif9g6rbylxbbk3543'
conversation_id = 'c-000000xy92lr5lnu75wqtc90dg2wcf62'
user_id = "u-000000xy92lqv94p4iwlomy2k1242pv0"

data = {
    "version": "1.1",
    "type": "query",
    "query": [
        {
            "role": "user",
            "content": "What is the capital of Nepal?",
            "content_type": "text/markdown",
            "timestamp": int(time.time() * 1_000_000),
            "feedback": [],
            "message_id":message_id1
        }
    ],
    "user_id": user_id, 
    "conversation_id": conversation_id, 
    "message_id": message_id           
}

The query that the server receives:

>>> query:version='1.1' type='query' query=[ProtocolMessage(role='user', content='What is the capital of Nepal?', content_type='text/markdown', timestamp=1689748457587810, message_id='m-000000xy92lr5lyif9g6rbylxbbk3543', feedback=[])] user_id='u-000000xy92lqv94p4iwlomy2k1242pv0' conversation_id='c-000000xy92lr5lnu75wqtc90dg2wcf62' message_id='m-000000xy92lr5lnu75wqtc90dg2JDT32' api_key='<my api key>'

May I know how to address this issue? Thanks.

How to use the API to call a multimodal model with a local image?

Hi, I'm using the Poe API to call a multimodal model such as GPT-4V or Claude-3-Opus. I am following an example shown in the screenshot below, but I can't find code showing how to load a local image into the request. How can I implement this? I noticed that the new documentation mentions "attachment.parsed_content"; should I use that? What is the format of parsed_content? Should I encode the image as base64 or read it as binary?
Looking forward to your reply.
[Screenshot: Snipaste_2024-04-12_18-18-12]

Using the code, I get an error

I followed the tutorial here: https://developer.poe.com/api-bots/quick-start
I also enabled the POE_API_KEY and send it as the Bearer token in the request.

EchoBot and HuggingFaceBot work perfectly when making an API call using Postman.
However, when switching to ChatGPTAllCapsBot or the example here, https://developer.poe.com/api-bots/api-to-access-bots-on-poe, I get the error below:

Sending request to chatGPT...
Request finished with status 200. (execution time: 6.9 ms, total latency: 1805.2 ms)
Error in Poe API Bot: Bot request to chatGPT failed on try 0 {"allow_retry": true, "text": "Internal server error"}
Error in Poe API Bot: Bot request to chatGPT failed on try 1 {"allow_retry": true, "text": "Internal server error"}
Error occurred: Error communicating with bot chatGPT
Error responding to query
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/fastapi_poe/client.py", line 321, in stream_request
    async for message in ctx.perform_query_request(request):
  File "/usr/local/lib/python3.11/contextlib.py", line 222, in __aexit__
    await self.gen.athrow(typ, value, traceback)
  File "/usr/local/lib/python3.11/contextlib.py", line 222, in __aexit__
    await self.gen.athrow(typ, value, traceback)
  File "/usr/local/lib/python3.11/site-packages/httpx/_client.py", line 1580, in stream
    yield response
  File "/usr/local/lib/python3.11/site-packages/httpx_sse/_api.py", line 70, in aconnect_sse
    yield EventSource(response)
  File "/usr/local/lib/python3.11/site-packages/fastapi_poe/client.py", line 225, in perform_query_request
    raise BotError(event.data)
fastapi_poe.client.BotError: {"allow_retry": true, "text": "Internal server error"}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/fastapi_poe/base.py", line 169, in handle_query
    async for event in self.get_response(query):
  File "/root/chatgpt_allcapsbot.py", line 33, in get_response
    raise e
  File "/root/chatgpt_allcapsbot.py", line 12, in get_response
    async for msg in stream_request(query, "chatGPT", query.api_key):
  File "/usr/local/lib/python3.11/site-packages/fastapi_poe/client.py", line 330, in stream_request
    raise BotError(f"Error communicating with bot {bot_name}") from e
fastapi_poe.client.BotError: Error communicating with bot chatGPT

If I use the bot I created by sending a prompt from the Poe website, it works.

AttributeError: 'HttpRequestBot' object has no attribute '_pending_file_attachment_tasks'

Description:

I modified the example code from https://github.com/poe-platform/server-bot-quick-start/blob/main/langchain_openai.py and tried to use OpenAI.

my bot code:

class HttpRequestBot(fp.PoeBot):
    def __init__(self, OPENAI_API_KEY: str):
        self.chat_model = ChatOpenAI(openai_api_key=OPENAI_API_KEY)
    async def get_response(
        self, request: fp.QueryRequest
    ):
        messages = []
        for message in request.query:
            if message.role == "bot":
                messages.append(AIMessage(content=message.content))
            elif message.role == "system":
                messages.append(SystemMessage(content=message.content))
            elif message.role == "user":
                the_prompt = message.content
                messages.append(HumanMessage(content=the_prompt))

        response = self.chat_model.invoke(messages).content
        if isinstance(response, str):
            yield fp.PartialResponse(text=response)
        else:
            yield fp.PartialResponse(text="There was an issue processing your query.")

setup code

REQUIREMENTS = ["fastapi-poe==0.0.36", "langchain==0.1.13", "openai==1.14.2","langchain-openai==0.1.0","langchain-core==0.1.33"]
image = Image.debian_slim().pip_install(*REQUIREMENTS)
stub = Stub("http-request")


@stub.function(image=image)
@asgi_app()
def fastapi_app():
    OPENAI_API_KEY = "MASKED"
    bot = HttpRequestBot(OPENAI_API_KEY=OPENAI_API_KEY)
    POE_ACCESS_KEY = "MASKED"
    # app = make_app(bot, access_key=POE_ACCESS_KEY)
    app = fp.make_app(bot, access_key=POE_ACCESS_KEY)
    return app

error message:
Error processing pending attachment requests
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/fastapi_poe/base.py", line 407, in handle_query
    await self._process_pending_attachment_requests(request.message_id)
  File "/usr/local/lib/python3.11/site-packages/fastapi_poe/base.py", line 299, in _process_pending_attachment_requests
    *self._pending_file_attachment_tasks.pop(message_id, [])
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'HttpRequestBot' object has no attribute '_pending_file_attachment_tasks'
POST / -> 200 OK (duration: 9.98 s, execution: 5.89 s)

My questions:
Why does AttributeError: 'HttpRequestBot' object has no attribute '_pending_file_attachment_tasks' occur? Is anything missing from the example code?

thanks
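A likely cause, based on the traceback rather than confirmed against the fastapi_poe source: PoeBot sets up internal state such as _pending_file_attachment_tasks during its own initialization, and an overriding __init__ that never chains to the base class skips that setup. The pattern, shown here with a stand-in base class rather than fastapi_poe itself:

```python
# Stand-in base class to illustrate the pattern; NOT fastapi_poe's real code.
class Base:
    def __init__(self):
        # internal state the framework relies on later
        self._pending_file_attachment_tasks = {}

class BrokenBot(Base):
    def __init__(self, api_key: str):
        # Base.__init__ never runs, so the internal dict is never created
        self.api_key = api_key

class FixedBot(Base):
    def __init__(self, api_key: str):
        super().__init__()  # chain up so the base class state exists
        self.api_key = api_key
```

Adding `super().__init__()` to the HttpRequestBot constructor above follows the same pattern.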

How to Include a URL and PDF as Attachments in a Protocol Message with fastapi_poe?

I'm working with the fastapi_poe library to build a bot that includes attachments in the prompt. I need to attach both a URL and a PDF file. However, I'm encountering validation errors. How can I properly structure the Attachment fields to ensure both types are recognized and processed?

Here is a snippet of my current implementation:

import asyncio
import fastapi_poe as fp

async def get_responses(api_key, messages):
    async for partial in fp.get_bot_response(messages=messages, bot_name="gpt-3", api_key=api_key):
        print(partial.text)

api_key = "YOUR_API_KEY"

message = fp.ProtocolMessage(
    role="user",
    content="Please create a paragraph based on the provided document.",
    attachments=[
        fp.Attachment(
            url="https://example.com/document",
            title="Document",
            description="Detailed instructions",
            content_type="text/html",
        ),
        fp.Attachment(
            url="https://example.com/sample.pdf",
            title="Sample PDF",
            description="A sample PDF file.",
            content_type="application/pdf",
            name="sample.pdf",
        ),
    ],
)

asyncio.run(get_responses(api_key, [message]))

What modifications are needed to correctly include these attachments? Any help would be greatly appreciated!

Error in Poe bot: Bot request to Web-Search failed : "Internal server error"

Isn't Web-Search accessible?

The code is:

async def get_responses(api_key):
    message = ProtocolMessage(role="user", content="What's today's date?")
    async for partial in get_bot_response(messages=[message], bot_name="Web-Search", api_key=api_key):
        print(partial.text)

and the result is:

Searching ...

current date

Error in Poe bot: Bot request to Web-Search failed on try 0
BotError('{"allow_retry": true, "text": "Internal server error"}')

Get Synchronous Response

Is there a way I can get a synchronous response? It feels hard to integrate with llama-index.

class PoeLLM(CustomLLM):
    context_window: int = 4096
    num_output: int = 1024
    model_name: str = "custom"
    dummy_response: str = "My response"
    chunk_size_limit: int = 1000

    def poeSimple(self, systemMessage: str = "", userMessage: str = ""):
        print(['length', len(userMessage)])
        print(userMessage)
        mes = [
            fp.ProtocolMessage(role='system', content=systemMessage),
            fp.ProtocolMessage(role='user', content=userMessage),
        ]
        a = fp.get_bot_response(mes, bot_name='Assistant', api_key='')

        message = []

        # Asynchronously iterate the generator inside poeSimple and collect the responses
        async def collect_response():
            async for value in a:
                message.append(value.text)
            print(message)


        # Run the async event loop
        executor = ThreadPoolExecutor()
        loop = asyncio.new_event_loop()
        result = loop.run_in_executor(executor=executor, func=collect_response)
        loop.run_until_complete(result)
        return message

    @property
    def metadata(self) -> LLMMetadata:
        """Get LLM metadata."""
        return LLMMetadata(
            context_window=self.context_window,
            num_output=self.num_output,
            model_name=self.model_name,
        )

    @llm_completion_callback()
    def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
        rs = self.poeSimple(userMessage=prompt)
        self.dummy_response = rs
        return CompletionResponse(text=rs)

    @llm_completion_callback()
    def stream_complete(
        self, prompt: str, **kwargs: Any
    ) -> CompletionResponseGen:
        response = ""
        for token in self.dummy_response:
            response += token
            yield CompletionResponse(text=response, delta=token)

I keep getting errors about coroutines not being awaited.
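One way to get a synchronous result is to drive the async generator to completion with asyncio.run. This is a sketch, and it only works when no event loop is already running in the calling thread:

```python
import asyncio

def collect_stream_sync(stream):
    """Run an async generator of partial responses to completion from
    synchronous code, joining the .text of each chunk into one string."""
    async def _collect():
        return "".join([partial.text async for partial in stream])
    return asyncio.run(_collect())

# Hypothetical usage with fastapi_poe:
# import fastapi_poe as fp
# text = collect_stream_sync(
#     fp.get_bot_response(messages=mes, bot_name="Assistant", api_key=api_key)
# )
```

Inside poeSimple above, this helper would replace the ThreadPoolExecutor/new_event_loop dance, which is where the "coroutine was never awaited" warnings come from (run_in_executor expects a plain callable, not an async function).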

You have exceeded rate limit to query bot using User API Key. Try again later.

I am working on a project to test the translation performance of a few LLMs on low-resource languages (specifically, Amharic). While iterating through my source sentences and getting translations from GPT-3.5 for each sentence, the API returned this error at the 8th sentence.

Traceback (most recent call last):
  File "D:\...\...\API calls\get_translations_poe.py", line 63, in <module>
    returned_message = asyncio.run(get_responses(api_key, model, message_prompt))
  File "D:\Anaconda\envs\fastpoe\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "D:\Anaconda\envs\fastpoe\lib\asyncio\base_events.py", line 647, in run_until_complete
    return future.result()
  File "D:\...\...\API calls\get_translations_poe.py", line 41, in get_responses
    async for partial in get_bot_response(
  File "D:\Anaconda\envs\fastpoe\lib\site-packages\fastapi_poe\client.py", line 350, in stream_request
    async for message in stream_request_base(
  File "D:\Anaconda\envs\fastpoe\lib\site-packages\fastapi_poe\client.py", line 483, in stream_request_base
    async for message in ctx.perform_query_request(
  File "D:\Anaconda\envs\fastpoe\lib\site-packages\fastapi_poe\client.py", line 229, in perform_query_request
    raise BotError(event.data)
fastapi_poe.client.BotError: {"allow_retry": true, "text": "You have exceeded rate limit to query bot using User API Key. Try again later."}

I have more than 900k remaining monthly points, so that shouldn't be the issue. I need to translate around 1000 sentences using each model, that is, 1000 iterations. If there is a better way to do this, your recommendations are welcome too. Thanks.
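For long batch runs like this, a common mitigation is to space out requests and back off when a rate-limit error appears. A generic sketch follows; fetch, retryable, and the delay values are all illustrative assumptions (in practice, fetch would wrap a call to fp.get_bot_response and retryable would be fastapi_poe.client.BotError):

```python
import asyncio

async def with_backoff(fetch, retryable=Exception, max_tries=5, base_delay=1.0):
    """Call the async callable `fetch`, retrying with exponential backoff
    whenever it raises a `retryable` exception mentioning a rate limit.
    Non-rate-limit errors, and the final failed try, are re-raised."""
    for attempt in range(max_tries):
        try:
            return await fetch()
        except retryable as exc:
            last_try = attempt == max_tries - 1
            if "rate limit" not in str(exc).lower() or last_try:
                raise
            # 1s, 2s, 4s, ... between tries (illustrative values)
            await asyncio.sleep(base_delay * 2 ** attempt)
```

Adding a small fixed sleep between sentences as well would keep the request rate below the per-key limit in the first place.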

Different bot responses don't appear in next bot queries

I've been following the examples and wanted to see what happens if we use two bots at once, like this:

...
        last_message = request.query[-1].content
        
        if last_message.startswith("search"):
            async for msg in fp.stream_request(
                request,
                "Web-Search",
                request.access_key,
                # tools=tools,
                # tool_executables=tools_executables,
            ):
                yield msg
        else:
            async for msg in fp.stream_request(
                request,
                "GPT-4",
                request.access_key,
                tools=tools,
                tool_executables=tools_executables,
            ):
                yield msg
...

In this example, if the user "searches" something, the result is shown in the chat, but if the user then says "summarize this", GPT-4 doesn't see the search results even though it receives the same request object.

API call error

  • environment:
(poe) pangzheng@iZt4n86o3ror4rdm8mmpiyZ:~/fastapi_poe$ pip list

Package           Version
----------------- ------------
annotated-types   0.6.0
anyio             3.7.1
certifi           2023.7.22
click             8.1.7
exceptiongroup    1.1.3
fastapi           0.104.1
fastapi-poe       0.0.23
h11               0.14.0
httpcore          1.0.2
httpx             0.25.1
httpx-sse         0.3.1
idna              3.4
pip               20.0.2
pkg-resources     0.0.0
pydantic          2.5.0
pydantic-core     2.14.1
setuptools        44.0.0
sniffio           1.3.0
sse-starlette     1.6.5
starlette         0.27.0
typing-extensions 4.8.0
uvicorn           0.24.0.post1
wheel             0.34.2

  • code:

import asyncio
from fastapi_poe.types import ProtocolMessage
from fastapi_poe.client import get_bot_response

# Replace <api_key> with your actual API key, ensuring it is a string.
api_key = "xxxx"

# Create an asynchronous function to encapsulate the async for loop
async def get_responses(api_key):
    message = ProtocolMessage(role="user", content="Hello world")
    async for partial in get_bot_response(messages=[message], bot_name="GPT-3.5-Turbo", api_key=api_key):
        print(partial)

# Run the event loop
# For Python 3.7 and newer
asyncio.run(get_responses(api_key))

# For Python 3.6 and older, you would typically do the following:
# loop = asyncio.get_event_loop()
# loop.run_until_complete(get_responses(api_key))
# loop.close()
  • call responses:
$ python poe-demo.py

text='' raw_response={'type': 'text', 'text': '{"text": ""}'} full_prompt="QueryRequest(version='1.0', type='query', query=[ProtocolMessage(role='user', content='Hello world', content_type='text/markdown', timestamp=0, message_id='', feedback=[], attachments=[])], user_id='', conversation_id='', message_id='', metadata='', api_key='<missing>', access_key='<missing>', temperature=0.7, skip_system_prompt=False, logit_bias={}, stop_sequences=[])" request_id=None is_suggested_reply=False is_replace_response=False
text='Hello' raw_response={'type': 'text', 'text': '{"text": "Hello"}'} full_prompt="QueryRequest(version='1.0', type='query', query=[ProtocolMessage(role='user', content='Hello world', content_type='text/markdown', timestamp=0, message_id='', feedback=[], attachments=[])], user_id='', conversation_id='', message_id='', metadata='', api_key='<missing>', access_key='<missing>', temperature=0.7, skip_system_prompt=False, logit_bias={}, stop_sequences=[])" request_id=None is_suggested_reply=False is_replace_response=False
text='!' raw_response={'type': 'text', 'text': '{"text": "!"}'} full_prompt="QueryRequest(version='1.0', type='query', query=[ProtocolMessage(role='user', content='Hello world', content_type='text/markdown', timestamp=0, message_id='', feedback=[], attachments=[])], user_id='', conversation_id='', message_id='', metadata='', api_key='<missing>', access_key='<missing>', temperature=0.7, skip_system_prompt=False, logit_bias={}, stop_sequences=[])" request_id=None is_suggested_reply=False is_replace_response=False
text=' How' raw_response={'type': 'text', 'text': '{"text": " How"}'} full_prompt="QueryRequest(version='1.0', type='query', query=[ProtocolMessage(role='user', content='Hello world', content_type='text/markdown', timestamp=0, message_id='', feedback=[], attachments=[])], user_id='', conversation_id='', message_id='', metadata='', api_key='<missing>', access_key='<missing>', temperature=0.7, skip_system_prompt=False, logit_bias={}, stop_sequences=[])" request_id=None is_suggested_reply=False is_replace_response=False
text=' can' raw_response={'type': 'text', 'text': '{"text": " can"}'} full_prompt="QueryRequest(version='1.0', type='query', query=[ProtocolMessage(role='user', content='Hello world', content_type='text/markdown', timestamp=0, message_id='', feedback=[], attachments=[])], user_id='', conversation_id='', message_id='', metadata='', api_key='<missing>', access_key='<missing>', temperature=0.7, skip_system_prompt=False, logit_bias={}, stop_sequences=[])" request_id=None is_suggested_reply=False is_replace_response=False
text=' I' raw_response={'type': 'text', 'text': '{"text": " I"}'} full_prompt="QueryRequest(version='1.0', type='query', query=[ProtocolMessage(role='user', content='Hello world', content_type='text/markdown', timestamp=0, message_id='', feedback=[], attachments=[])], user_id='', conversation_id='', message_id='', metadata='', api_key='<missing>', access_key='<missing>', temperature=0.7, skip_system_prompt=False, logit_bias={}, stop_sequences=[])" request_id=None is_suggested_reply=False is_replace_response=False
text=' assist' raw_response={'type': 'text', 'text': '{"text": " assist"}'} full_prompt="QueryRequest(version='1.0', type='query', query=[ProtocolMessage(role='user', content='Hello world', content_type='text/markdown', timestamp=0, message_id='', feedback=[], attachments=[])], user_id='', conversation_id='', message_id='', metadata='', api_key='<missing>', access_key='<missing>', temperature=0.7, skip_system_prompt=False, logit_bias={}, stop_sequences=[])" request_id=None is_suggested_reply=False is_replace_response=False
text=' you' raw_response={'type': 'text', 'text': '{"text": " you"}'} full_prompt="QueryRequest(version='1.0', type='query', query=[ProtocolMessage(role='user', content='Hello world', content_type='text/markdown', timestamp=0, message_id='', feedback=[], attachments=[])], user_id='', conversation_id='', message_id='', metadata='', api_key='<missing>', access_key='<missing>', temperature=0.7, skip_system_prompt=False, logit_bias={}, stop_sequences=[])" request_id=None is_suggested_reply=False is_replace_response=False
text=' today' raw_response={'type': 'text', 'text': '{"text": " today"}'} full_prompt="QueryRequest(version='1.0', type='query', query=[ProtocolMessage(role='user', content='Hello world', content_type='text/markdown', timestamp=0, message_id='', feedback=[], attachments=[])], user_id='', conversation_id='', message_id='', metadata='', api_key='<missing>', access_key='<missing>', temperature=0.7, skip_system_prompt=False, logit_bias={}, stop_sequences=[])" request_id=None is_suggested_reply=False is_replace_response=False
text='?' raw_response={'type': 'text', 'text': '{"text": "?"}'} full_prompt="QueryRequest(version='1.0', type='query', query=[ProtocolMessage(role='user', content='Hello world', content_type='text/markdown', timestamp=0, message_id='', feedback=[], attachments=[])], user_id='', conversation_id='', message_id='', metadata='', api_key='<missing>', access_key='<missing>', temperature=0.7, skip_system_prompt=False, logit_bias={}, stop_sequences=[])" request_id=None is_suggested_reply=False is_replace_response=False
text='' raw_response={'type': 'text', 'text': '{"text": ""}'} full_prompt="QueryRequest(version='1.0', type='query', query=[ProtocolMessage(role='user', content='Hello world', content_type='text/markdown', timestamp=0, message_id='', feedback=[], attachments=[])], user_id='', conversation_id='', message_id='', metadata='', api_key='<missing>', access_key='<missing>', temperature=0.7, skip_system_prompt=False, logit_bias={}, stop_sequences=[])" request_id=None is_suggested_reply=False is_replace_response=False

"Internal Server Error" using `fp.get_bot_response`

Running the `get_bot_response` example from the developer documentation returns "Internal Server Error" on every attempt.

A newly minted API key was used.

Environment Details:

  • Python 3.11.0
  • fastapi_poe 0.0.28

Stacktrace:

Error in Poe bot: Bot request to GPT-3.5-Turbo failed on try 0
 BotError('{"allow_retry": true, "text": "Internal server error"}')
Error in Poe bot: Bot request to GPT-3.5-Turbo failed on try 1
 BotError('{"allow_retry": true, "text": "Internal server error"}')
Traceback (most recent call last):
  File "C:\path\to\example.py", line 29, in <module>
    res = asyncio.run(
          ^^^^^^^^^^^^
  File "C:\Python311\Lib\asyncio\runners.py", line 190, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\asyncio\runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\asyncio\base_events.py", line 650, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "C:\path\to\example.py", line 18, in get_responses
    async for partial in fp.get_bot_response(
  File "C:\Python311\Lib\site-packages\fastapi_poe\client.py", line 326, in stream_request
    async for message in stream_request_base(
  File "C:\Python311\Lib\site-packages\fastapi_poe\client.py", line 455, in stream_request_base
    async for message in ctx.perform_query_request(
  File "C:\Python311\Lib\contextlib.py", line 222, in __aexit__
    await self.gen.athrow(typ, value, traceback)
  File "C:\Python311\Lib\contextlib.py", line 222, in __aexit__
    await self.gen.athrow(typ, value, traceback)
  File "C:\Python311\Lib\site-packages\httpx\_client.py", line 1609, in stream
    yield response
  File "C:\Python311\Lib\site-packages\httpx_sse\_api.py", line 70, in aconnect_sse
    yield EventSource(response)
  File "C:\Python311\Lib\site-packages\fastapi_poe\client.py", line 224, in perform_query_request
    raise BotError(event.data)
fastapi_poe.client.BotError: {"allow_retry": true, "text": "Internal server error"}

Rigidity and Disorganization in Flow of Function Calls

Issue Summary:
The current implementation of function calling within Poe's API appears to be disorganized and overly restrictive. The expectation from a developer's perspective is for the model to have the discretion to determine the sequence and timing of function calls in response to the task it is performing. This dynamic approach is well-supported by OpenAI's API, setting a precedent for how such interactions should typically function.

Upon inspecting the API behavior more closely, I've observed the following issues:

Preemptive Function Calls:
The API module appears to force the language model to call all provided functions at the onset of processing each message. This rigid approach negates the model's ability to decide when each function should be called based on the context of the conversation.

Restriction of Later Calls:
Moreover, once the initial call is made, the model is restricted from invoking any function calls at a later stage in the message processing. This limitation is counterintuitive to the expected behavior of a conversational model that may require additional information as the dialogue progresses.

Demonstration of the Fault:
Below is a simple example that illustrates the issue:

import asyncio
import json

import fastapi_poe as fp
from asteval import Interpreter

aeval = Interpreter()

magic = ["142 * 4 + 294", "viudz117trlwubo", "ct89zhrrv6x0xmc"]

def get_magic_data_index(index):
    return json.dumps({"magic": magic[index]})

def evaluate_expression(expression):
    return aeval(expression)

tools_executables = [get_magic_data_index, evaluate_expression]

tools_dict_list = [
    {
        "type": "function",
        "function": {
            "name": "get_magic_data_index",
            "description": "Get the Magic Data numbered by an index",
            "parameters": {
                "type": "object",
                "properties": {
                    "index": {
                        "type": "number",
                        "description": "Index of the Magic Data to fetch",
                    }
                },
                "required": ["index"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "evaluate_expression",
            "description": "Evaluate numeric expression",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The expression",
                    }
                },
                "required": ["expression"],
            },
        },
    },
]
tools = [fp.ToolDefinition(**tools_dict) for tools_dict in tools_dict_list]

prompt = """
1. Please tell me the Magic Data #0.
2. It is going to be a mathematical expression. Tell me how much it is.
3. If it is greater than 1000, tell me the Magic Data #1
4. Otherwise, tell me the Magic Data #2
"""

async def main(api_key: str):
    message = fp.ProtocolMessage(role="user", content=prompt)
    async for partial in fp.get_bot_response(
        messages=[message], bot_name="GPT-3.5-Turbo", api_key=api_key,
        tools=tools, tool_executables=tools_executables,
    ):
        print(partial)

asyncio.run(main("<api_key>"))  # replace with your Poe API key

When the underlying OpenAI API is used directly, the model operates in a sequential and context-aware manner, calling functions as needed and presenting accurate data to the user:

  1. Acknowledges the task.
  2. Calls get_magic_data_index when needed and presents the retrieved expression to the user.
  3. Evaluates the expression with evaluate_expression.
  4. Depending on the result, it calls get_magic_data_index to retrieve and present the corresponding magic data.
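Traced by hand with the helpers from the example, that correct flow resolves as follows (plain `eval` stands in for asteval here so the sketch is self-contained):

```python
import json

magic = ["142 * 4 + 294", "viudz117trlwubo", "ct89zhrrv6x0xmc"]

def get_magic_data_index(index):
    return json.dumps({"magic": magic[index]})

# Step 2: fetch Magic Data #0 and surface the expression.
expr = json.loads(get_magic_data_index(0))["magic"]
# Step 3: evaluate it; 142 * 4 + 294 = 862.
value = eval(expr, {"__builtins__": {}})
# Step 4: 862 is not greater than 1000, so Magic Data #2 is the answer.
answer = json.loads(get_magic_data_index(1 if value > 1000 else 2))["magic"]
print(value, answer)  # 862 ct89zhrrv6x0xmc
```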

Conversely, the Poe API induces a sequence of calls regardless of necessity, leading to the model fabricating data to fulfill the premature function call and ultimately resulting in an API crash when an additional call is attempted.

  1. The API client forces the model to call both get_magic_data_index and evaluate_expression regardless of context.
  2. The model makes up and evaluates a random expression without being able to retrieve the actual data.
  3. Once the function calls return, the correct Magic Data #0 is stated, followed by the value of the hallucinated expression.
  4. Subsequent calls to get_magic_data_index result in an API crash, halting the process.
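By contrast, a client that leaves invocation to the model only ever executes the calls it actually emits, in the order it emits them. A minimal dispatch sketch (the tool-call dicts are illustrative, not Poe's wire format; `eval` again stands in for asteval):

```python
import json

magic = ["142 * 4 + 294", "viudz117trlwubo", "ct89zhrrv6x0xmc"]

def get_magic_data_index(index):
    return json.dumps({"magic": magic[index]})

def evaluate_expression(expression):
    # Restricted to plain arithmetic, no builtins exposed.
    return eval(expression, {"__builtins__": {}})

# Map tool names to executables; run only what the model asked for.
executables = {f.__name__: f for f in (get_magic_data_index, evaluate_expression)}

def dispatch(tool_calls):
    return [executables[call["name"]](**call["arguments"]) for call in tool_calls]

# First turn: the model should only need Magic Data #0, nothing else.
print(dispatch([{"name": "get_magic_data_index", "arguments": {"index": 0}}]))
# → ['{"magic": "142 * 4 + 294"}']
```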

Suggested Improvements:
Given these findings, I would recommend the following enhancements to align Poe's API functionality with the expected dynamic behavior:

  • Revise Function Invocation Logic: Reconfigure the API to allow the model to invoke functions dynamically, as the conversation context requires. This could be achieved by adjusting the server-side implementation to call OpenAI's API with the tool_choice parameter set to "auto" rather than deliberately forcing a list of all available functions upon the model.

  • Enhance Error Handling: Implement more robust error handling to prevent crashes when the model attempts to call functions based on the conversational flow.
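The first suggestion is concrete: the server-side request to OpenAI's chat completions endpoint could carry `tool_choice="auto"`. A sketch of such a payload (field values illustrative; `build_chat_payload` is a hypothetical helper, not part of fastapi_poe):

```python
def build_chat_payload(messages, tools):
    # With tool_choice="auto" the model decides whether and when to call
    # each tool, instead of being forced through every definition up front.
    return {
        "model": "gpt-3.5-turbo",
        "messages": messages,
        "tools": tools,
        "tool_choice": "auto",
    }

payload = build_chat_payload(
    [{"role": "user", "content": "Tell me the Magic Data #0."}], []
)
print(payload["tool_choice"])  # auto
```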
