
snowby666 / poe-api-wrapper


👾 A Python API wrapper for Poe.com. With this, you will have free access to ChatGPT, Claude, Llama, Gemini, Google-PaLM and more! 🚀

Home Page: https://pypi.org/project/poe-api-wrapper/

License: GNU General Public License v3.0

Languages: Python 100.00%
Topics: api chatgpt claude poe python poe-api quora gpt-4 chatbot reverse-engineering llama palm2 code-llama openai stable-diffusion dall-e gemini claude-3

poe-api-wrapper's Introduction

Poe API Wrapper

A simple, lightweight and efficient API wrapper for Poe.com



✨ Highlights

Supports both Sync and Async
Authentication
  • Log in with your Quora or Poe token
  • Auto Proxy requests
  • Specify Proxy context
Message Automation
  • Create new chat thread
  • Send messages
  • Stream bot responses
  • Retry the last message
  • Support file attachments
  • Retrieve suggested replies
  • Stop message generation
  • Delete chat threads
  • Clear conversation context
  • Purge messages of a single bot
  • Purge all messages of a user
  • Fetch previous messages
  • Share and import messages
  • Get citations
Chat Management
  • Get Chat Ids & Chat Codes of bot(s)
Bot Management
  • Get bot info
  • Get available creation models
  • Create custom bot
  • Edit custom bot
  • Delete a custom bot
Knowledge Base Customization (New)
  • Get available knowledge bases
  • Upload knowledge bases for custom bots
  • Edit knowledge bases for custom bots
Discovery
  • Get available bots
  • Get a user's bots
  • Get available categories
  • Explore 3rd party bots and users
Bots Group Chat (Beta)
  • Create a group chat
  • Delete a group chat
  • Get created groups
  • Get group data
  • Save group chat history
  • Load group chat history

🔧 Installation

  • First, install this library with the following command:
pip install -U poe-api-wrapper

Or you can install the proxy-support version of this library (Python 3.9+):

pip install -U poe-api-wrapper[proxy]

You can also use the Async version:

pip install -U poe-api-wrapper[async]

Quick setup for Async Client:

from poe_api_wrapper import AsyncPoeApi
import asyncio
tokens = {
    'b': ..., 
    'lat': ...
}

async def main():
    client = await AsyncPoeApi(cookie=tokens).create()
    message = "Explain quantum computing in simple terms"
    async for chunk in client.send_message(bot="gpt3_5", message=message):
        print(chunk["response"], end='', flush=True)
        
asyncio.run(main())
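Since send_message on the Async client is an async generator, you can also fan several prompts out concurrently with asyncio.gather. A minimal sketch, assuming the client accepts more than one in-flight request (the collect helper below is just illustrative, not part of the library):

from poe_api_wrapper import AsyncPoeApi
import asyncio

tokens = {
    'b': ..., 
    'lat': ...
}

async def collect(client, bot, message):
    # Drain the stream and return the full response text
    parts = []
    async for chunk in client.send_message(bot=bot, message=message):
        parts.append(chunk["response"])
    return "".join(parts)

async def main():
    client = await AsyncPoeApi(cookie=tokens).create()
    question = "Summarize the CAP theorem in one sentence"
    # Ask two bots the same question at the same time
    answers = await asyncio.gather(
        collect(client, "gpt3_5", question),
        collect(client, "a2", question),
    )
    for answer in answers:
        print(answer, "\n")

asyncio.run(main())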
  • You can run an example of this library:
from poe_api_wrapper import PoeExample
tokens = {
    'b': ..., 
    'lat': ...
}
PoeExample(cookie=tokens).chat_with_bot()
  • This library also supports a command-line interface:
poe --b B_TOKEN --lat LAT_TOKEN   

Tip

Type poe -h for more info

🦄 Documentation

Available Default Bots

| Display Name | Model | Token Limit | Words | Access Type |
| --- | --- | --- | --- | --- |
| Assistant | capybara | 4K | 3K | No Limit |
| Claude-3-Opus | claude_2_1_cedar | 4K | 3K | Subscriber |
| Claude-3-Sonnet | claude_2_1_bamboo | 4K | 3K | No Limit |
| Claude-3-Opus-200k | claude_3_opus_200k | 200K | 150K | Subscriber |
| Claude-3-Sonnet-200k | claude_3_sonnet_200k | 200K | 150K | Subscriber |
| Claude-instant-100k | a2_100k | 100K | 75K | No Limit |
| Claude-2 | claude_2_short | 4K | 3K | Subscriber |
| Claude-2-100k | a2_2 | 100K | 75K | Subscriber |
| Claude-instant | a2 | 9K | 7K | No Limit |
| ChatGPT | chinchilla | 4K | 3K | No Limit |
| GPT-3.5-Turbo | gpt3_5 | 2K | 1.5K | No Limit |
| GPT-3.5-Turbo-Instruct | chinchilla_instruct | 2K | 1.5K | No Limit |
| ChatGPT-16k | agouti | 16K | 12K | Subscriber |
| GPT-4 | beaver | 4K | 3K | Subscriber |
| GPT-4-128k | vizcacha | 128K | 96K | Subscriber |
| Google-PaLM | acouchy | 8K | 6K | No Limit |
| Llama-2-7b | llama_2_7b_chat | 2K | 1.5K | No Limit |
| Llama-2-13b | llama_2_13b_chat | 2K | 1.5K | No Limit |
| Llama-2-70b | llama_2_70b_chat | 2K | 1.5K | No Limit |
| Code-Llama-7b | code_llama_7b_instruct | 4K | 3K | No Limit |
| Code-Llama-13b | code_llama_13b_instruct | 4K | 3K | No Limit |
| Code-Llama-34b | code_llama_34b_instruct | 4K | 3K | No Limit |
| Solar-Mini | upstage_solar_0_70b_16bit | 2K | 1.5K | No Limit |

Important

The token limits and word counts listed above are approximate and may not be entirely accurate, as poe.com's prompt pre-processing is private and not publicly disclosed.

How to get your Token

Poe API Wrapper accepts both quora.com and poe.com tokens. Pick one that works best for you.

Quora Tokens

Sign in at https://www.quora.com/

F12 for Devtools (Right-click + Inspect)

  • Chromium: Devtools > Application > Cookies > quora.com
  • Firefox: Devtools > Storage > Cookies
  • Safari: Devtools > Storage > Cookies

Copy the values of m-b and m-lat cookies

Poe Tokens

Sign in at https://poe.com/

F12 for Devtools (Right-click + Inspect)

  • Chromium: Devtools > Application > Cookies > poe.com
  • Firefox: Devtools > Storage > Cookies
  • Safari: Devtools > Storage > Cookies

Copy the values of p-b and p-lat cookies

Note

Make sure you have logged in to poe.com with the same email address that you registered on quora.com.

Basic Usage

  • Connecting to the API
# Using poe.com tokens
tokens = {
    'b': 'p-b token here',
    'lat': 'p-lat token here'
}
# Using quora.com tokens
tokens = {
    'b': 'm-b token here',
    'lat': 'm-lat token here'
}

from poe_api_wrapper import PoeApi
client = PoeApi(cookie=tokens)

# Using Client with auto_proxy (default is False)
client = PoeApi(cookie=tokens, auto_proxy=True)

# Passing proxies manually
proxy_context = [
    {"https":X1, "http":X1},
    {"https":X2, "http":X2},
    ...
]

client = PoeApi(cookie=tokens, proxy=proxy_context) 
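To avoid hard-coding cookies in your scripts, one common pattern (not specific to this library) is to keep them in environment variables and build the tokens dict at runtime; the variable names below are arbitrary:

import os
from poe_api_wrapper import PoeApi

tokens = {
    'b': os.environ['POE_B_TOKEN'],     # value of your p-b (or m-b) cookie
    'lat': os.environ['POE_LAT_TOKEN']  # value of your p-lat (or m-lat) cookie
}
client = PoeApi(cookie=tokens)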
  • Getting Chat Ids & Chat Codes
# Get chat data of all bots (this will fetch all available threads)
print(client.get_chat_history()['data'])
>> Output:
{'chinchilla': [{'chatId': 74397929, 'chatCode': '2ith0h11zfyvsta1u3z', 'id': 'Q2hhdDo3NDM5NzkyOQ==', 'title': 'Comparison'}], 'code_llama_7b_instruct': [{'chatId': 74397392, 'chatCode': '2ithbduzsysy3g178hb', 'id': 'Q2hhdDo3NDM5NzM5Mg==', 'title': 'Decent Programmers'}], 'a2': [{'chatId': 74396838, 'chatCode': '2ith9nikybn4ksn51l8', 'id': 'Q2hhdDo3NDM5NjgzOA==', 'title': 'Reverse Engineering'}, {'chatId': 74396452, 'chatCode': '2ith79n4x0p0p8w5yue', 'id': 'Q2hhdDo3NDM5NjQ1Mg==', 'title': 'Clean Code'}], 'leocooks': [{'chatId': 74396246, 'chatCode': '2ith82wj0tjrggj46no', 'id': 'Q2hhdDo3NDM5NjI0Ng==', 'title': 'Pizza perfection'}], 'capybara': [{'chatId': 74396020, 'chatCode': '2ith5o3p8c5ajkdwd3k', 'id': 'Q2hhdDo3NDM5NjAyMA==', 'title': 'Greeting'}]}

# Get chat data of a bot (this will fetch all available threads)
print(client.get_chat_history("a2")['data'])
>> Output:
{'a2': [{'chatId': 74396838, 'chatCode': '2ith9nikybn4ksn51l8', 'id': 'Q2hhdDo3NDM5NjgzOA==', 'title': 'Reverse Engineering'}, {'chatId': 74396452, 'chatCode': '2ith79n4x0p0p8w5yue', 'id': 'Q2hhdDo3NDM5NjQ1Mg==', 'title': 'Clean Code'}]}

# Get a defined number of most recent chat threads (using count param will ignore interval param)
# Fetching all bots
print(client.get_chat_history(count=20)['data'])
# Fetching 1 bot
print(client.get_chat_history(bot="a2", count=20)['data'])

# You can pass the number of chat threads fetched per interval to both calls (default is 50)
# Fetching 200 chat threads of all bots each interval
print(client.get_chat_history(interval=200)['data'])
# Fetching 200 chat threads of a bot each interval
print(client.get_chat_history(bot="a2", interval=200)['data'])

# Pagination Example:
# Fetch the first 20 chat threads
history = client.get_chat_history(count=20)
pages = [history['data']]
new_cursor = history['cursor']

# Keep fetching until there is no cursor left (or use a condition of your choice)
while new_cursor is not None:
    # Fetch the next 20 chat threads with new_cursor
    new_history = client.get_chat_history(count=20, cursor=new_cursor)
    new_cursor = new_history['cursor']
    # Append the next 20 chat threads
    pages.append(new_history['data'])

# Print the pages (20 chat threads each page)
for page_number, page in enumerate(pages, start=1):
    print(f'This is page {page_number}')
    for bot, threads in page.items():
        for thread in threads:
            print({bot: thread})
  • Sending messages & Streaming responses
bot = "a2"
message = "What is reverse engineering?"

# Create new chat thread
# Streamed example:
for chunk in client.send_message(bot, message):
    print(chunk["response"], end="", flush=True)
print("\n")

# Non-streamed example:
for chunk in client.send_message(bot, message):
    pass
print(chunk["text"])

# You can get chatCode and chatId of created thread to continue the conversation
chatCode = chunk["chatCode"]
chatId = chunk["chatId"]
# You can get the meaningful title as well
title = chunk["title"]
# You can also retrieve msgPrice
msgPrice = chunk["msgPrice"]

# Send message to an existing chat thread
# 1. Using chatCode
for chunk in client.send_message(bot, message, chatCode="2i58ciex72dom7im83r"):
    print(chunk["response"], end="", flush=True)
# 2. Using chatId
for chunk in client.send_message(bot, message, chatId=59726162):
    print(chunk["response"], end="", flush=True)
# 3. Specify msgPrice manually (the wrapper fetches it automatically, but you can pass it yourself to save resources)
for chunk in client.send_message(bot, message, chatId=59726162, msgPrice=msgPrice):
    print(chunk["response"], end="", flush=True)

Note

For custom bots, the display name is the same as the codename, so you can simply pass the bot's display name into client.send_message(bot, message).
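For example, to message one of the custom bots shown in the chat-history output above (leocooks), just pass its handle directly:

for chunk in client.send_message("leocooks", "Give me a simple pizza dough recipe"):
    print(chunk["response"], end="", flush=True)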

  • Retrying the last message
for chunk in client.retry_message(chatCode):
    print(chunk['response'], end='', flush=True)
  • Adding file attachments
# Web urls example:
file_urls = ["https://elinux.org/images/c/c5/IntroductionToReverseEngineering_Anderson.pdf", 
            "https://www.kcl.ac.uk/warstudies/assets/automation-and-artificial-intelligence.pdf"]
for chunk in client.send_message(bot, "Compare 2 files and describe them in 300 words", file_path=file_urls):
    print(chunk["response"], end="", flush=True)
    
# Local paths example:
local_paths = ["c:\\users\\snowby666\\hello_world.py"]
for chunk in client.send_message(bot, "What is this file about?", file_path=local_paths):
    print(chunk["response"], end="", flush=True)

Note

The file size limit differs from model to model.

  • Retrieving suggested replies
for chunk in client.send_message(bot, "Introduce 5 books about clean code", suggest_replies=True):
    print(chunk["response"], end="", flush=True)
print("\n")

for reply in chunk["suggestedReplies"]:
    print(reply)
  • Stopping message generation
# You can use an event to trigger this function
# Example:
# Note that the keyboard library may not be compatible with macOS and Linux
import keyboard
for chunk in client.send_message(bot, message):
    print(chunk["response"], end="", flush=True)
    # Press Q key to stop the generation
    if keyboard.is_pressed('q'):
        client.cancel_message(chunk)
        print("\nMessage is now cancelled")
        break 
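If the keyboard library is not an option on your platform, any other trigger works, since cancel_message only needs the latest chunk. A minimal sketch that cancels generation after a fixed time budget (the 5-second limit is arbitrary):

import time

deadline = time.time() + 5  # stop the generation after roughly 5 seconds
for chunk in client.send_message(bot, message):
    print(chunk["response"], end="", flush=True)
    if time.time() > deadline:
        client.cancel_message(chunk)
        print("\nMessage is now cancelled")
        break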
  • Deleting chat threads
# Delete 1 chat
# Using chatCode
client.delete_chat(bot, chatCode="2i58ciex72dom7im83r")
# Using chatId
client.delete_chat(bot, chatId=59726162)

# Delete n chats
# Using chatCode
client.delete_chat(bot, chatCode=["LIST_OF_CHAT_CODES"])
# Using chatId
client.delete_chat(bot, chatId=["LIST_OF_CHAT_IDS"])

# Delete all chats of a bot
client.delete_chat(bot, del_all=True)
  • Clearing conversation context
# 1. Using chatCode
client.chat_break(bot, chatCode="2i58ciex72dom7im83r")
# 2. Using chatId
client.chat_break(bot, chatId=59726162)
  • Purging messages of 1 bot
# Purge a defined number of messages (default is 50)
# 1. Using chatCode
client.purge_conversation(bot, chatCode="2i58ciex72dom7im83r", count=10)
# 2. Using chatId
client.purge_conversation(bot, chatId=59726162, count=10)

# Purge all messages of the thread
# 1. Using chatCode
client.purge_conversation(bot, chatCode="2i58ciex72dom7im83r", del_all=True)
# 2. Using chatId
client.purge_conversation(bot, chatId=59726162, del_all=True)
  • Purging all messages of user
client.purge_all_conversations()
  • Fetching previous messages
# Get a defined number of messages (default is 50)
# Using chatCode
previous_messages = client.get_previous_messages('code_llama_34b_instruct', chatCode='2itg2a7muygs42v1u0k', count=2)
# Using chatId
previous_messages = client.get_previous_messages('code_llama_34b_instruct', chatId=74411139, count=2)
for message in previous_messages:
    print(message)
>> Output:
{'author': 'human', 'text': 'nice to meet you', 'messageId': 2861709279}
{'author': 'code_llama_34b_instruct', 'text': " Nice to meet you too! How are you doing today? Is there anything on your mind that you'd like to talk about? I'm here to listen and help", 'messageId': 2861873125}

# Get all previous messages
# Using chatCode
previous_messages = client.get_previous_messages('code_llama_34b_instruct', chatCode='2itg2a7muygs42v1u0k', get_all=True)
# Using chatId
previous_messages = client.get_previous_messages('code_llama_34b_instruct', chatId=74411139, get_all=True)
for message in previous_messages:
    print(message)
>> Output:
{'author': 'human', 'text': 'hi there', 'messageId': 2861363514}
{'author': 'code_llama_34b_instruct', 'text': " Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?", 'messageId': 2861363530}
{'author': 'chat_break', 'text': "", 'messageId': 2872383991}
{'author': 'human', 'text': 'nice to meet you', 'messageId': 2861709279}
{'author': 'code_llama_34b_instruct', 'text': " Nice to meet you too! How are you doing today? Is there anything on your mind that you'd like to talk about? I'm here to listen and help", 'messageId': 2861873125}

Note

Messages are fetched from the newest to the oldest, but they are returned in chronological order (oldest first).
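For instance, to grab just the most recent bot reply out of the fetched history (assuming the call returns a list of dicts like the output above):

# Walk the history backwards and pick the first entry that is not human or a chat break
last_bot_reply = next(
    (m['text'] for m in reversed(previous_messages)
     if m['author'] not in ('human', 'chat_break')),
    None
)
print(last_bot_reply)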

  • Getting available knowledge bases
# Get a defined number of sources (default is 10)
print(client.get_available_knowledge(botName="BOT_NAME", count=2))
>> Output:
{'What is Quora?': [86698], 'Founders of Quora': [86705]}
# Get all available sources
print(client.get_available_knowledge(botName="BOT_NAME", get_all=True))
  • Uploading knowledge bases
# Web urls example:
file_urls = ["https://elinux.org/images/c/c5/IntroductionToReverseEngineering_Anderson.pdf", 
            "https://www.kcl.ac.uk/warstudies/assets/automation-and-artificial-intelligence.pdf"]
source_ids = client.upload_knowledge(file_path=file_urls)
print(source_ids)
>> Output:
{'er-1-intro_to_re.pdf': [86344], 'automation-and-artificial-intelligence.pdf': [86345]}

# Local paths example:
local_paths = ["c:\\users\\snowby666\\hello_world.py"]
source_ids = client.upload_knowledge(file_path=local_paths)
print(source_ids)
>> Output:
{'hello_world.py': [86523]}

# Plain texts example:
knowledges = [
    {
        "title": "What is Quora?",
        "content": "Quora is a popular online platform that enables users to ask questions on various topics and receive answers from a diverse community. It covers a wide range of subjects, from academic and professional queries to personal experiences and opinions, fostering knowledge-sharing and meaningful discussions among its users worldwide."
    },
    {
        "title": "Founders of Quora",
        "content": "Quora was founded by two individuals, Adam D'Angelo and Charlie Cheever. Adam D'Angelo, who previously served as the Chief Technology Officer (CTO) at Facebook, and Charlie Cheever, a former Facebook employee as well, launched Quora in June 2009. They aimed to create a platform that would enable users to ask questions and receive high-quality answers from knowledgeable individuals. Since its inception, Quora has grown into a widely used question-and-answer platform with a large user base and a diverse range of topics covered."
    },
]
source_ids = client.upload_knowledge(text_knowledge=knowledges)
print(source_ids)
>> Output:
{'What is Quora?': [86368], 'Founders of Quora': [86369]}

# Hybrid example:
source_ids = client.upload_knowledge(file_path=file_urls, text_knowledge=knowledges)
print(source_ids)
>> Output:
{'What is Quora?': [86381], 'Founders of Quora': [86383], 'er-1-intro_to_re.pdf': [86395], 'automation-and-artificial-intelligence.pdf': [86396]}
  • Editing knowledge bases (plain-text sources only)
client.edit_knowledge(knowledgeSourceId=86381, title='What is Quora?', content='Quora is a question-and-answer platform where users can ask questions, provide answers, and engage in discussions on various topics.')
  • Getting bot info
bot = 'gpt-4'
print(client.get_botInfo(handle=bot))
>> Output:
{'handle': 'GPT-4', 'model': 'beaver', 'supportsFileUpload': True, 'messageTimeoutSecs': 15, 'displayMessagePointPrice': 350, 'numRemainingMessages': 20, 'viewerIsCreator': False, 'id': 'Qm90OjMwMDc='}
  • Getting available creation models
print(client.get_available_creation_models())
>> Output:
['chinchilla', 'mixtral8x7bchat', 'playgroundv25', 'stablediffusionxl', 'dalle3', 'a2', 'claude_2_short', 'gemini_pro', 'a2_2', 'a2_100k', 'beaver', 'llama_2_70b_chat', 'mythomaxl213b', 'claude_2_1_bamboo', 'claude_2_1_cedar', 'claude_3_haiku', 'claude_3_haiku_200k']
  • Creating a new Bot
client.create_bot("BOT_NAME", "PROMPT_HERE", base_model="a2")

# Using knowledge bases (you can use source_ids from uploaded knowledge bases for your custom bot)
client.create_bot("BOT_NAME", "PROMPT_HERE", base_model="a2", knowledgeSourceIds=source_ids, shouldCiteSources=True)
  • Editing a Bot
client.edit_bot("(NEW)BOT_NAME", "PROMPT_HERE", base_model='chinchilla')

# Adding knowledge bases 
client.edit_bot("(NEW)BOT_NAME", "PROMPT_HERE", base_model='chinchilla', knowledgeSourceIdsToAdd=source_ids, shouldCiteSources=True)

# Removing knowledge bases
client.edit_bot("(NEW)BOT_NAME", "PROMPT_HERE", base_model='chinchilla', knowledgeSourceIdsToRemove=source_ids, shouldCiteSources=True)

Tip

You can also use both knowledgeSourceIdsToAdd and knowledgeSourceIdsToRemove at the same time.
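A combined call might look like this, where new_source_ids and old_source_ids are placeholder dicts of the kind returned by upload_knowledge / get_available_knowledge:

client.edit_bot("(NEW)BOT_NAME", "PROMPT_HERE", base_model='chinchilla',
                knowledgeSourceIdsToAdd=new_source_ids,
                knowledgeSourceIdsToRemove=old_source_ids,
                shouldCiteSources=True)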

  • Deleting a Bot
client.delete_bot("BOT_NAME")
  • Getting available bots (your bots section)
# Get a defined number of bots (default is 25)
print(client.get_available_bots(count=10))
# Get all available bots
print(client.get_available_bots(get_all=True))
  • Getting a user's bots
handle = 'poe'
print(client.get_user_bots(user=handle))
  • Getting available categories
print(client.get_available_categories())
>> Output:
['Official', 'Popular', 'New', 'ImageGen', 'AI', 'Professional', 'Funny', 'History', 'Cooking', 'Advice', 'Mind', 'Programming', 'Travel', 'Writing', 'Games', 'Learning', 'Roleplay', 'Utilities', 'Sports', 'Music']
  • Exploring 3rd party bots and users
# Explore section example:
# Get a defined number of bots (default is 50)
print(client.explore(count=10))
# Get all available bots
print(client.explore(explore_all=True))

# Search for bots by query example:
# Get a defined number of bots (default is 50)
print(client.explore(search="Midjourney", count=30))
# Get all available bots
print(client.explore(search="Midjourney", explore_all=True))

# Search for bots by category example (default is defaultCategory):
# Get a defined number of bots (default is 50)
print(client.explore(categoryName="Popular", count=30))
# Get all available bots
print(client.explore(categoryName="AI", explore_all=True))

# Search for people example:
# Get a defined number of people (default is 50)
print(client.explore(search="Poe", entity_type='user', count=30))
# Get all available people
print(client.explore(search="Poe", entity_type='user', explore_all=True))
  • Sharing & Importing messages
# Share a defined number of messages (from the latest to the oldest)
# Using chatCode
shareCode = client.share_chat("a2", chatCode="2roap5g8nd7s28ul836", count=10)
# Using chatId
shareCode = client.share_chat("a2", chatId=204052028, count=10)

# Share all messages
# Using chatCode
shareCode = client.share_chat("a2", chatCode="2roap5g8nd7s28ul836")
# Using chatId
shareCode = client.share_chat("a2", chatId=204052028)

# Set up the 2nd Client and import messages from the shareCode
client2 = PoeApi(cookie=second_account_tokens)  # tokens dict of the 2nd account
print(client2.import_chat(bot, shareCode))
>> Output:
{'chatId': 72929127, 'chatCode': '2iw0xcem7a18wy1avd3'}
  • Getting citations
print(client.get_citations(messageId=141597902621))

Bots Group Chat (beta)

  • Creating a group chat
bots = [
    {'bot': 'yayayayaeclaude', 'name': 'Yae'}, 
    {'bot': 'gepardL', 'name': 'gepard'}, 
    {'bot': 'SayukiTokihara', 'name': 'Sayuki'}
]

client.create_group(group_name='Hangout', bots=bots) 

Note

The bot argument is the model/display name; the name argument is the alias you use to mention that bot in the group chat.

  • Sending messages and Streaming responses in group chat
# User engagement example:
while True: 
    message = str(input('\n\033[38;5;121mYou : \033[0m'))
    prev_bot = ""
    for chunk in client.send_message_to_group(group_name='Hangout', message=message):
        if chunk['bot'] != prev_bot:
            print(f"\n\033[38;5;121m{chunk['bot']} : \033[0m", end='', flush=True)
            prev_bot = chunk['bot']
        print(chunk['response'], end='', flush=True)
    print('\n')

# Auto-play example:
while True:
    prev_bot = ""
    for chunk in client.send_message_to_group(group_name='Hangout', autoplay=True):
        if chunk['bot'] != prev_bot:
            print(f"\n\033[38;5;121m{chunk['bot']} : \033[0m", end='', flush=True)
            prev_bot = chunk['bot']
        print(chunk['response'], end='', flush=True)
    print('\n')

# Preset history example:
preset_path = "c:\\users\\snowby666\\preset.json"
prev_bot = ""
for chunk in client.send_message_to_group(group_name='Hangout', autoplay=True, preset_history=preset_path):
    if chunk['bot'] != prev_bot:
        print(f"\n\033[38;5;121m{chunk['bot']} : \033[0m", end='', flush=True)
        prev_bot = chunk['bot']
    print(chunk['response'], end='', flush=True)
print('\n')
while True:
    for chunk in client.send_message_to_group(group_name='Hangout', autoplay=True):
        if chunk['bot'] != prev_bot:
            print(f"\n\033[38;5;121m{chunk['bot']} : \033[0m", end='', flush=True)
            prev_bot = chunk['bot']
        print(chunk['response'], end='', flush=True)
    print('\n')

Note

You can also change your name in the group chat by passing a new one to the function above: client.send_message_to_group('Hangout', message=message, user='Danny'). If you want to auto-save the conversation log, simply set autosave to True: client.send_message_to_group('Hangout', message=message, autosave=True)

  • Deleting a group chat
client.delete_group(group_name='Hangout')
  • Getting created groups
print(client.get_available_groups())
  • Getting group data
print(client.get_group(group_name='Hangout'))
  • Saving group chat history
# Save as json in the same directory
client.save_group_history(group_name='Hangout')
# Save with a local path (json only)
local_path = "c:\\users\\snowby666\\log.json"
client.save_group_history(group_name='Hangout', file_path=local_path)
  • Loading group chat history
print(client.load_group_history(file_path=local_path))

Misc

  • How to find chatCode manually?

Open the chat thread in your browser and look at the address bar: the chatCode is the last segment of the URL. For example, in https://poe.com/chat/23o1gxjhb9cfnlacdcd the chatCode is 23o1gxjhb9cfnlacdcd.
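If you prefer to do this in code, you can simply take the last path segment of the chat URL (assuming the https://poe.com/chat/<chatCode> format shown above):

url = "https://poe.com/chat/23o1gxjhb9cfnlacdcd"
chatCode = url.rstrip("/").split("/")[-1]
print(chatCode)  # 23o1gxjhb9cfnlacdcd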

  • What are the file types that poe-api-wrapper supports?

Currently, this API only supports the following file types for attachments:

Text files

.pdf .docx .txt .md .py .js .ts .html .css .csv .c .cs .cpp .lua .rs .rb .go .java

Media files

.png .jpg .jpeg .gif .mp4 .mov .mp3 .wav

🙌 Contributing

We would love to develop poe-api-wrapper together with our community! 💕

Run debug

First, clone this repo:

git clone https://github.com/snowby666/poe-api-wrapper.git
cd poe-api-wrapper

Then run the test cases:

python -m pip install -e .[tests]
tox

Ways to contribute

  • Try poe-api-wrapper and give feedback
  • Add new integrations with open PR
  • Help with open issues or create your own
  • Share your thoughts and suggestions with us
  • Request a feature by submitting a proposal
  • Report a bug
  • Improve documentation: fix incomplete or missing docs, bad wording, examples or explanations.

Contributors


Repobeats analytics image

🤝 Copyright

This program is licensed under the GNU GPL v3. Most code has been written by me, snowby666.

Copyright Notice

snowby666/poe-api-wrapper: A simple API wrapper for poe.com using Httpx
Copyright (C) 2023 snowby666

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.

poe-api-wrapper's People

Contributors

luke-in-the-sky, simatwa, snowby666, sunnysktsang, thelime1, xenocoderce, xindoo


poe-api-wrapper's Issues

Able to set timeout for waiting response

When I uploaded a PDF file with more than 50 pages, it took Poe a while to respond, but the code returned "RuntimeError: Response timed out." Is it possible to set a timeout for the function?


Error while getting response from PoeApi: 'NoneType' object is not subscriptable

Really odd issue:

Works for me:

previous_messages = client.get_previous_messages(bot, chatId=95544182, count=1)

But this does NOT work for me:

chat="95544182"
previous_messages = client.get_previous_messages(bot, chatId=chat, count=1)

Error while getting response from PoeApi: 'NoneType' object is not subscriptable.

Any ideas?

Ballyregan dependencies

Hello, I was wondering if there is a version of this library without Ballyregan. I know it's for proxies, but in my case I don't need a proxy, and it takes up a massive amount of space without being used. Would it be possible to make a version with it removed? Thank you for your time.

[Feature Request] Over 2k token message context support?

I've noticed on poe.com that if I send a very large text (over 2,000 tokens, I think), the bot does not respond on the website.

My question and feature request: does this project support more than 2,000 tokens of context on GPT-4? (I think I already tried and it failed with this repo.) If not, would it be possible to support it?

Using third-party proxy Clash, ballyregan called by PoeApi cannot read proxy information from the system

When I run the sample code below, I get the following error:

from poe_api_wrapper import PoeApi

token = "************"

bot = "a2"
message = "What is reverse engineering?"

client = PoeApi(token, proxy=True)

for chunk in client.send_message(bot, message):
    print(chunk["response"], end="", flush=True)

client = PoeApi(token, proxy=True)

File "C:\Users\cao\AppData\Roaming\Python\Python39\site-packages\poe_api_wrapper\api.py", line 110, in init
proxies = fetch_proxy()
File "C:\Users\cao\AppData\Roaming\Python\Python39\site-packages\poe_api_wrapper\proxies.py", line 19, in fetch_proxy
proxies = fetcher.get(
File "C:\Users\cao\AppData\Roaming\Python\Python39\site-packages\ballyregan\fetcher.py", line 134, in get
proxies = self._gather(
File "C:\Users\cao\AppData\Roaming\Python\Python39\site-packages\ballyregan\fetcher.py", line 104, in _gather
raise NoProxiesFound
ballyregan.core.exceptions.NoProxiesFound: Could not find any proxies

Poe: handle "Response timed out" exception

At the moment this exception is thrown:

capybara : Traceback (most recent call last):
  File "D:\AI\poe-api-wrapper\poe_api_wrapper\api.py", line 481, in send_message
    message = self.message_queues[human_message_id].get(timeout=timeout)
  File "C:\Users\a\AppData\Local\Programs\Python\Python310\lib\queue.py", line 179, in get
    raise Empty
_queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\AI\poe-api-wrapper\example.py", line 3, in <module>
    Poe.chat_with_bot(token)
  File "D:\AI\poe-api-wrapper\poe_api_wrapper\api.py", line 935, in chat_with_bot
    for chunk in client.send_message(bot, message, chatId, suggest_replies=True, file_path=file_urls):
  File "D:\AI\poe-api-wrapper\poe_api_wrapper\api.py", line 485, in send_message
    raise RuntimeError("Response timed out.")
RuntimeError: Response timed out.

I suppose Poe should handle this, as it is a pretty common case.

question about sent_message()

Hi, I got the token through your steps, and when I run the code, print(client.get_chat_history()) returns fine, but when I run the part with for chunk in client.send_message(bot, message): nothing is returned. Then I debugged the code and found a problem with self.connect_ws() in api.py: at line 223, self.ws_connected was False, so the program gets stuck in an infinite loop. But someone else using their token can run the code correctly, so I don't know what is wrong on my side.

Time out waiting for response

I ran poe-api in a Kaggle notebook and got the correct token from quora.com using the same Gmail as on poe.com, but I got this retrying error multiple times. How can I fix this? I set a 40-second timeout for the send_message function.

[Errno 24] Too many open files

Not sure what triggered that but is that a possible poe-api-wrapper issue? Or would it come from somewhere else?

ERROR - Error while getting response from PoeApi: [Errno 24] Too many open files

ValueError: Bot a2_2 not found. Make sure the bot exists before creating new chat.

from poe_api_wrapper import PoeApi
client = PoeApi("token")

bot = "a2_2"
message = "What is reverse engineering?"

# Create new chat thread
# Streamed example:
for chunk in client.send_message(bot, message):
    print(chunk["response"], end="", flush=True)
print("\n")

ValueError: Bot a2_2 not found. Make sure the bot exists before creating new chat.

Seems to happen with any bot I tried with.

'NoneType' object has no attribute 'lower'

  1. Poe.chat_with_bot(token)
  2. Choose any, e.g. [1] Assistant (capybara)
  3. Type !history command
Traceback (most recent call last):
  File "D:\AI\poe-api-wrapper\example.py", line 3, in <module>
    Poe.chat_with_bot(token)
  File "D:\AI\poe-api-wrapper\poe_api_wrapper\api.py", line 898, in chat_with_bot
    client.get_chat_history()
  File "D:\AI\poe-api-wrapper\poe_api_wrapper\api.py", line 323, in get_chat_history
    bot = bot.lower().replace(' ', '')
AttributeError: 'NoneType' object has no attribute 'lower'

Timeout and failed to get the reply

Hi, today (20/09) I found that send_message with attached documents keeps failing and timing out.

Here is the running code example:

I checked on poe.com and I can see that the website showed the response, but the code failed to get it.

The problem only appears with messages that have an attached document; normal prompts are fine. I tried with different bots, even GPT-4, and the issue still happens.

Environment: Google Colab
Python: 3.10
Latest version

An unknown error occurred.

I get an unknown Error:

RuntimeError: An unknown error occurred. Raw response data: <!DOCTYPE html><html><head><title>Error 502 (Bad Gateway)</title> ... <p><b>502.</b> <u>Bad Gateway.</u></p><p>Quora is temporarily unavailable.</p><p>Please wait a few minutes and try again.</p> ... </body></html>

Here is my main code (poe-api-wrapper 1.2.8):

from poe_api_wrapper import PoeApi
import json
from tqdm import tqdm
import os
import time

token = ""
client = PoeApi(token)
bot = "a2_100k"
results = "output/gen_ins_c.json"

def queryClaud(message):
    # Non-streamed example:
    for chunk in client.send_message(bot, message):
        pass
    return chunk["text"]

I also got: bot not found

client.send_message() BUG

bot = "a2_2"
The client.send_message() method, when passed the chatCode parameter, does not work. Using the chatId parameter instead works fine.

BUG
client.send_message(bot, message, chatCode=history[f'{bot}'][0]['chatCode'])
OK
client.send_message(bot, message, chatId=history[f'{bot}'][0]['chatId'])

httpx.ConnectError: [Errno 0] Error

Code:
from poe_api_wrapper import PoeApi
client = PoeApi("06oT***0o-Lg==")

Terminal:
Traceback (most recent call last):
File "C:\anaconda\lib\site-packages\httpcore_exceptions.py", line 10, in map_exceptions
yield
File "C:\anaconda\lib\site-packages\httpcore_backends\sync.py", line 62, in start_tls
raise exc
File "C:\anaconda\lib\site-packages\httpcore_backends\sync.py", line 57, in start_tls
sock = ssl_context.wrap_socket(
File "C:\anaconda\lib\ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "C:\anaconda\lib\ssl.py", line 1040, in _create
self.do_handshake()
File "C:\anaconda\lib\ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
OSError: [Errno 0] Error

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\anaconda\lib\site-packages\httpx_transports\default.py", line 60, in map_httpcore_exceptions
yield
File "C:\anaconda\lib\site-packages\httpx_transports\default.py", line 218, in handle_request
resp = self._pool.handle_request(req)
File "C:\anaconda\lib\site-packages\httpcore_sync\connection_pool.py", line 262, in handle_request
raise exc
File "C:\anaconda\lib\site-packages\httpcore_sync\connection_pool.py", line 245, in handle_request
response = connection.handle_request(request)
File "C:\anaconda\lib\site-packages\httpcore_sync\http_proxy.py", line 271, in handle_request
connect_response = self._connection.handle_request(
File "C:\anaconda\lib\site-packages\httpcore_sync\connection.py", line 92, in handle_request
raise exc
File "C:\anaconda\lib\site-packages\httpcore_sync\connection.py", line 69, in handle_request
stream = self._connect(request)
File "C:\anaconda\lib\site-packages\httpcore_sync\connection.py", line 149, in _connect
stream = stream.start_tls(**kwargs)
File "C:\anaconda\lib\site-packages\httpcore_backends\sync.py", line 62, in start_tls
raise exc
File "C:\anaconda\lib\contextlib.py", line 131, in exit
self.gen.throw(type, value, traceback)
File "C:\anaconda\lib\site-packages\httpcore_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 0] Error

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "C:\anaconda\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\anaconda\lib\runpy.py", line 87, in run_code
exec(code, run_globals)
File "c:\Users\22339.vscode\extensions\ms-python.python-2023.14.0\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher/../..\debugpy_main
.py", line 39, in
cli.main()
File "c:\Users\22339.vscode\extensions\ms-python.python-2023.14.0\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher/../..\debugpy/..\debugpy\server\cli.py", line 430, in main
run()
File "c:\Users\22339.vscode\extensions\ms-python.python-2023.14.0\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher/../..\debugpy/..\debugpy\server\cli.py", line 284, in run_file
runpy.run_path(target, run_name="main")
File "c:\Users\22339.vscode\extensions\ms-python.python-2023.14.0\pythonFiles\lib\python\debugpy_vendored\pydevd_pydevd_bundle\pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "c:\Users\22339.vscode\extensions\ms-python.python-2023.14.0\pythonFiles\lib\python\debugpy_vendored\pydevd_pydevd_bundle\pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "c:\Users\22339.vscode\extensions\ms-python.python-2023.14.0\pythonFiles\lib\python\debugpy_vendored\pydevd_pydevd_bundle\pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "C:\desk***\PoeAPI2.py", line 3, in
client = PoeApi("06****AB0o-Lg==")
File "C:\anaconda\lib\site-packages\poe_api_wrapper\api.py", line 53, in init
'Quora-Formkey': self.get_formkey(),
File "C:\anaconda\lib\site-packages\poe_api_wrapper\api.py", line 60, in get_formkey
response = self.client.get(self.BASE_URL, headers=self.HEADERS, follow_redirects=True)
File "C:\anaconda\lib\site-packages\httpx_client.py", line 1041, in get
return self.request(
File "C:\anaconda\lib\site-packages\httpx_client.py", line 814, in request
return self.send(request, auth=auth, follow_redirects=follow_redirects)
File "C:\anaconda\lib\site-packages\httpx_client.py", line 901, in send
response = self._send_handling_auth(
File "C:\anaconda\lib\site-packages\httpx_client.py", line 929, in _send_handling_auth
response = self._send_handling_redirects(
File "C:\anaconda\lib\site-packages\httpx_client.py", line 966, in _send_handling_redirects
response = self._send_single_request(request)
File "C:\anaconda\lib\site-packages\httpx_client.py", line 1002, in _send_single_request
response = transport.handle_request(request)
File "C:\anaconda\lib\site-packages\httpx_transports\default.py", line 218, in handle_request
resp = self._pool.handle_request(req)
File "C:\anaconda\lib\contextlib.py", line 131, in exit
self.gen.throw(type, value, traceback)
File "C:\anaconda\lib\site-packages\httpx_transports\default.py", line 77, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 0] Error

Poe: better !history function

It would be convenient to have such functions:

  • Choose a chat to continue
  • Be able to do this not only in an already started chat, but elsewhere as well (also during bot selection and thread selection)

zsh: segmentation fault ....

This happened after upgrading to 1.1.1.

I reverted to 1.1.0 and the issue is gone. Any ideas what caused it?

Python 3.10.0

Poe: Ability to change bot during thread selection.

After I choose a bot, I need to make a thread choice as well.
But what if I change my mind and want to choose a different bot?
Normally I use the !reset command for that, but commands are not available during thread selection.

Perhaps it would be good to have an additional option during thread selection, such as "Return to bot selection" or "Choose another bot".

RuntimeError: Timed out waiting for websocket to connect.

Env:
Pycharm + anaconda

Network:
a proxy which works fine for the Edge browser

Issue:
When I try to run the demo below, it returns the following error:
"RuntimeError: Timed out waiting for websocket to connect."

demo code:
from poe_api_wrapper import PoeExample
token = "my_cookies_copied_from_EDGE"
PoeExample(token).chat_with_bot()

Add streaming

Great work!

And it would be nice to support streaming of Poe responses.

Wondering about token in quora

I signed in to Quora with two different Gmail accounts but got the same m-b token. What is the problem, and which Poe account will receive the requests? Thanks

rate limit exception when connecting

I used this code to register 3 accounts when initializing:

def _add_token(token):
    #return "ok"
    if token not in client_dict.keys():
        try:
            print(datetime.now())
            c = get_client(token)
            client_dict[token] = c
            time.sleep(3)
            return "ok"
        except Exception as exception:
            print("Failed to connect to poe due to " + str(exception))
            return "failed: " + str(exception)
    else:
        return "exist"

but got the following output; one succeeded but the others failed:

2023-09-16T11:39:47.319 app[9080e331a20687] hkg [info] 2023-09-16 11:39:37.461653

2023-09-16T11:39:47.319 app[9080e331a20687] hkg [info] Failed to connect to poe due to Rate limit exceeded for sending requests to poe.com. Please try again later.

2023-09-16T11:39:47.319 app[9080e331a20687] hkg [info] 2023-09-16 11:39:39.185947

2023-09-16T11:39:47.319 app[9080e331a20687] hkg [info] Failed to connect to poe due to Rate limit exceeded for sending requests to poe.com. Please try again later.

2023-09-16T11:39:47.319 app[9080e331a20687] hkg [info] 2023-09-16 11:39:41.125917

2023-09-16T11:39:47.319 app[9080e331a20687] hkg [info] * Serving Flask app 'api'

2023-09-16T11:39:47.319 app[9080e331a20687] hkg [info] * Debug mode: off

I tried lots of times, but got the same result.

Bot not found

Custom bots are giving a "not found" error.

Traceback (most recent call last):
  File "/home/cm/.local/lib/python3.11/site-packages/poe_api_wrapper/api.py", line 773, in delete_chat
    chatdata = self.get_chat_history(bot=bot)['data'][bot]
               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^

I am able to talk with the bot in web but I am not able to connect to them in API.

Retrying request 1/3 times

I need to switch off this retrying ASAP. When queries are slow (no matter what timeout I set), it keeps retrying and wasting requests for me. The retry still times out (these are queries with longer documents).

Retrying request 1/3 times...

It is much better for me to query by chat ID as the content will be available eventually.

How can I switch off this behavior? I just went through 1000 paid queries in a few minutes :(

Check if the token is expired

Is there any method for checking if the token is expired? Then it could return something like true or false, not the exception message. Thanks!
