kav-k / gptdiscord

A robust, all-in-one GPT interface for Discord. ChatGPT-style conversations, image generation, AI moderation, custom indexes/knowledgebase, YouTube summarizer, and more!

License: MIT License

Python 99.58% Dockerfile 0.42%
artificial-intelligence asyncio gpt3 help-wanted openai openai-api python dalle2 embeddings extractive-question-answering

gptdiscord's Introduction

Hi!

I'm a fourth-year Computer Engineering student at the University of Waterloo, in Canada!

I was most recently a software engineering intern at Stripe! Previously, I also worked at places like Intuit, Carta (2x), NCR, Blackberry, and am currently a software engineering contractor for OpenAI!

gptdiscord's People

Contributors

cherryroots, connorv001, cooperlees, crypticheaven-lab, dehlirious, dependabot[bot], dev7g, duckdoom4, falkoro, haste171, hkabig, iyaaddev, jbx2060, jmackles, justinmcp, karlkennerley, kav-k, lauralex, lsnk, luyaojchen, m-allanson, maclean-d, mina-karam, paillat-dev, raecaug, rolanddeboer, rwywawn, tayiorbeii, vivekboii



gptdiscord's Issues

Handle mentions in the text during a conversation

Collapse mentioned users to their server nicknames when people are mentioned in conversations.

For example, the bot will see <@120931820938102>; we want to collapse that to the user's server nickname.
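A minimal sketch of this collapsing step, assuming a plain dict lookup of user ID to server nickname (the real bot would resolve nicknames through the Discord guild object):

```python
import re

# Collapse raw Discord mentions like <@120931820938102> (or the nickname
# form <@!...>) into server nicknames, given a lookup table.
MENTION_PATTERN = re.compile(r"<@!?(\d+)>")

def collapse_mentions(text: str, nicknames: dict) -> str:
    def _replace(match):
        user_id = int(match.group(1))
        # Fall back to the raw mention if we can't resolve the user.
        return nicknames.get(user_id, match.group(0))
    return MENTION_PATTERN.sub(_replace, text)
```

The fallback keeps unknown mentions intact rather than dropping them from the conversation text.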

Semantic Search in conversation history using pinecone db

Alongside summarizations, we want to embed the summarizations and save them in Pinecone. Then, when users send prompts to the bot within a conversation, we want to search Pinecone's vectors for the embeddings most similar to the user prompt. We then append this retrieved context to the prompt before sending it to GPT-3. This effectively simulates long-term, permanent memory.

Of course, there are tons of things to think about, such as the "forget" conditions (the conditions under which embeddings should be removed because they are deemed irrelevant, just like in human brains) and the "save" conditions (when, and under what policy, we store embeddings as permanent data; also like the human brain, we need to choose a time to consolidate, filter, and curate that information).

Error, please help!

I get the following error when running everything:

root@gamingstar:~/GPT3Discord# gpt3discord
The environment file is located at /root/GPT3Discord/.env
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Wrote PID to file the file bot.pid
Conversation starter text loaded from /root/GPT3Discord/conversation_starter_pretext.txt.
Conversation starter text loaded from /root/GPT3Discord/conversation_starter_pretext_minimal.txt.
The debug channel and guild IDs are 953661093627166842 and 1059859336425390162
Draw service init
Loaded image optimizer pretext from /root/GPT3Discord/image_optimizer_pretext.txt
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/gpt3discord.py", line 131, in init
    asyncio.get_event_loop().run_until_complete(main())
  File "/usr/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.9/dist-packages/gpt3discord.py", line 116, in main
    await bot.start(os.getenv("DISCORD_TOKEN"))
  File "/usr/local/lib/python3.9/dist-packages/discord/client.py", line 659, in start
    await self.connect(reconnect=reconnect)
  File "/usr/local/lib/python3.9/dist-packages/discord/client.py", line 598, in connect
    raise PrivilegedIntentsRequired(exc.shard_id) from None
discord.errors.PrivilegedIntentsRequired: Shard ID None is requesting privileged intents that have not been explicitly enabled in the developer portal. It is recommended to go to https://discord.com/developers/applications/ and explicitly enable the privileged intents within your application's page. If this is not possible, then consider disabling the privileged intents instead.
Shard ID None is requesting privileged intents that have not been explicitly enabled in the developer portal. It is recommended to go to https://discord.com/developers/applications/ and explicitly enable the privileged intents within your application's page. If this is not possible, then consider disabling the privileged intents instead.
Removing PID file
root@gamingstar:~/GPT3Discord#


PS: I am using a fresh Ubuntu VPS.

User Provided API keys

Hello, I was wondering if it would be possible to allow users to provide their own API keys in a secure manner, rather than using a single admin-provided key for all API calls. If this is not feasible, please let me know. One way I envision this being implemented is by prompting the user for their keys before the first use and temporarily saving them in an inaccessible cache.

Thank you for considering this suggestion. If this has been discussed previously, please excuse my duplication of the topic.
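The "temporarily saved in an inaccessible cache" idea could be sketched as a per-user, in-memory key store with a TTL. This is a hypothetical illustration, not the bot's actual implementation:

```python
import time
from typing import Optional

# Hypothetical per-user key cache: keys live only in process memory and
# expire after a TTL, so they are never persisted to disk.
class UserKeyCache:
    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl = ttl_seconds
        self._keys = {}  # user_id -> (api_key, expiry timestamp)

    def set_key(self, user_id: int, api_key: str) -> None:
        self._keys[user_id] = (api_key, time.monotonic() + self.ttl)

    def get_key(self, user_id: int) -> Optional[str]:
        entry = self._keys.get(user_id)
        if entry is None:
            return None
        key, expires = entry
        if time.monotonic() > expires:
            del self._keys[user_id]  # evict stale keys on access
            return None
        return key
```

A returned `None` would prompt the user to re-enter their key before the next request.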

[BUG] Instructions in prompt bleed into personality description

When asking the bot to tell something about itself, I got the following response:

Sure thing! I'm a software engineer, and I love talking about random topics and exploring niche interests. I'm also a helpful and descriptive person, and I always try to make well-informed decisions. I'm also mindful of conversation history and try to be consistent with my answers. Oh, and I'm always up for chatting about coding, too – I'm good at providing code examples and explanations! 🤓

Half of the mentioned attributes are instructions to the bot, and not really descriptions of its personality. Perhaps the prompt should instruct GPT to act as a character with certain characteristics instead of simply defining all those characteristics with "you are ...", which is the same format used to define its behaviour. I will play a bit with the prompt tomorrow to come up with a suggestion.

[BUG] Errors when trying to use any commands.

Describe the bug
Bot is running in docker.

Trying to run /gpt ask command produces the following errors in docker log:

None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
The environment file is located at //.env
GPT_ROLES is not defined properly in the environment file!Please copy your server's role and put it into GPT_ROLES in the .env file.For example a line should look like: `GPT_ROLES="Gpt"`
Defaulting to allowing all users to use GPT commands...

Downloading:   0%|          | 0.00/1.04M [00:00<?, ?B/s]
Downloading:   3%|▎         | 28.7k/1.04M [00:00<00:07, 135kB/s]
Downloading:   8%|▊         | 78.8k/1.04M [00:00<00:03, 263kB/s]
Downloading:  15%|█▌        | 161k/1.04M [00:00<00:02, 430kB/s] 
Downloading:  33%|███▎      | 341k/1.04M [00:00<00:00, 822kB/s]
Downloading:  67%|██████▋   | 701k/1.04M [00:00<00:00, 1.58MB/s]
Downloading: 100%|██████████| 1.04M/1.04M [00:00<00:00, 1.50MB/s]

Downloading:   0%|          | 0.00/456k [00:00<?, ?B/s]
Downloading:   3%|▎         | 12.3k/456k [00:00<00:04, 106kB/s]
Downloading:   6%|▋         | 28.7k/456k [00:00<00:03, 125kB/s]
Downloading:  17%|█▋        | 78.8k/456k [00:00<00:01, 264kB/s]
Downloading:  32%|███▏      | 144k/456k [00:00<00:00, 377kB/s] 
Downloading:  68%|██████▊   | 308k/456k [00:00<00:00, 744kB/s]
Downloading: 100%|██████████| 456k/456k [00:00<00:00, 766kB/s]

Downloading:   0%|          | 0.00/1.36M [00:00<?, ?B/s]
Downloading:   0%|          | 4.10k/1.36M [00:00<00:39, 34.2kB/s]
Downloading:   3%|▎         | 36.9k/1.36M [00:00<00:07, 177kB/s] 
Downloading:   6%|▌         | 81.9k/1.36M [00:00<00:04, 269kB/s]
Downloading:  13%|█▎        | 180k/1.36M [00:00<00:02, 489kB/s] 
Downloading:  28%|██▊       | 377k/1.36M [00:00<00:01, 913kB/s]
Downloading:  58%|█████▊    | 786k/1.36M [00:00<00:00, 1.77MB/s]
Downloading: 100%|██████████| 1.36M/1.36M [00:00<00:00, 1.83MB/s]

Downloading:   0%|          | 0.00/665 [00:00<?, ?B/s]
Downloading: 100%|██████████| 665/665 [00:00<00:00, 540kB/s]
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/cogs/gpt_3_commands_and_converser.py", line 622, in encapsulated_send
    response = await self.model.send_request(new_prompt, tokens=tokens)
  File "/usr/local/lib/python3.9/site-packages/models/openai_model.py", line 402, in send_request
    tokens_used = int(response["usage"]["total_tokens"])
KeyError: 'usage'
Ignoring exception in on_application_command_error
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/cogs/gpt_3_commands_and_converser.py", line 622, in encapsulated_send
    response = await self.model.send_request(new_prompt, tokens=tokens)
  File "/usr/local/lib/python3.9/site-packages/models/openai_model.py", line 402, in send_request
    tokens_used = int(response["usage"]["total_tokens"])
KeyError: 'usage'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/discord/commands/core.py", line 124, in wrapped
    ret = await coro(arg)
  File "/usr/local/lib/python3.9/site-packages/discord/commands/core.py", line 976, in _invoke
    await self.callback(self.cog, ctx, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/cogs/gpt_3_commands_and_converser.py", line 726, in ask
    await self.encapsulated_send(user.id, prompt, ctx, from_g_command=True)
  File "/usr/local/lib/python3.9/site-packages/cogs/gpt_3_commands_and_converser.py", line 703, in encapsulated_send
    await self.end_conversation(ctx)
  File "/usr/local/lib/python3.9/site-packages/cogs/gpt_3_commands_and_converser.py", line 217, in end_conversation
    self.conversating_users.pop(normalized_user_id)
KeyError: 147783361107066881
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/discord/client.py", line 377, in _run_event
    await coro(*args, **kwargs)
  File "/bin/gpt3discord", line 66, in on_application_command_error
    raise error
  File "/usr/local/lib/python3.9/site-packages/discord/bot.py", line 1114, in invoke_application_command
    await ctx.command.invoke(ctx)
  File "/usr/local/lib/python3.9/site-packages/discord/commands/core.py", line 375, in invoke
    await injected(ctx)
  File "/usr/local/lib/python3.9/site-packages/discord/commands/core.py", line 124, in wrapped
    ret = await coro(arg)
  File "/usr/local/lib/python3.9/site-packages/discord/commands/core.py", line 1310, in _invoke
    await command.invoke(ctx)
  File "/usr/local/lib/python3.9/site-packages/discord/commands/core.py", line 375, in invoke
    await injected(ctx)
  File "/usr/local/lib/python3.9/site-packages/discord/commands/core.py", line 132, in wrapped
    raise ApplicationCommandInvokeError(exc) from exc
discord.errors.ApplicationCommandInvokeError: Application Command raised an exception: KeyError: 147783361107066881

[BUG] Message edits in conversation don't work with pinecone enabled

Describe the bug
Message edits don't redo the prompt when pinecone is enabled

To Reproduce

  1. enable pinecone
  2. start conversation
  3. send message
  4. edit the message

Expected behavior
The bot redoes the prompt as usual

Screenshots

Traceback (most recent call last):
  File "/home/bots/.local/lib/python3.9/site-packages/discord/client.py", line 377, in _run_event
    await coro(*args, **kwargs)
  File "/home/bots/discord_bots/GPT3Discord/cogs/gpt_3_commands_and_converser.py", line 605, in on_message_edit
    edited_content = "".join(
TypeError: sequence item 0: expected str instance, EmbeddedConversationItem found

[MAJOR PLANNED REVAMP] Public Bot Instance (an easy, drag-and-droppable bot)

I want to make this bot a global instance that anyone can drop onto their server, set an API key using a command, and use it immediately.

This means that the bot will be hosted by me, CI/CD will deploy directly to a server that I use to host this bot, and I will be responsible for the maintenance and upkeep of this global bot instance.

Before any of this, however, the bot needs to be changed to not load OpenAI keys and other guild-specific data from an .env file, and instead have them be set by a command, per-guild.

All data needs to be isolated to be per-guild, and all data structures need to be keyed to be per-guild.

I'm looking to start a discussion here and would love to hear everyone's thoughts.

[BUG] Extra text appended to the prompt of /g

Describe the bug
The bot might include "GPTie" in its output or behave unexpectedly due to text appended to the prompt.

To Reproduce
Use /g and leave the prompt open

Expected behavior
Generate output using only the input.

Screenshots
Screenshot_20221231-214148~2.png

[BUG] When running - None of PyTorch, TensorFlow >= 2.0, or Flax have been found. - Bot won't run

Followup from https://github.com/Kav-K/GPT3Discord/issues/26

I have the same issue:

When I try to run it non-screen I get this error:

~/GPT3Discord$ python3.9 gpt3discord.py
The environment file is located at /home/azureuser/GPT3Discord/.env
/home/azureuser/.local/lib/python3.9/site-packages/tensorflow_io/python/ops/__init__.py:98: UserWarning: unable to load libtensorflow_io_plugins.so: unable to open file: libtensorflow_io_plugins.so, from paths: ['/home/azureuser/.local/lib/python3.9/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so']
caused by: ["[Errno 2] The file to load file system plugin from does not exist.: '/home/azureuser/.local/lib/python3.9/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so'"]
  warnings.warn(f"unable to load libtensorflow_io_plugins.so: {e}")
/home/azureuser/.local/lib/python3.9/site-packages/tensorflow_io/python/ops/__init__.py:104: UserWarning: file system plugins are not loaded: unable to open file: libtensorflow_io.so, from paths: ['/home/azureuser/.local/lib/python3.9/site-packages/tensorflow_io/python/ops/libtensorflow_io.so']
caused by: ['/home/azureuser/.local/lib/python3.9/site-packages/tensorflow_io/python/ops/libtensorflow_io.so: cannot open shared object file: No such file or directory']
  warnings.warn(f"file system plugins are not loaded: {e}")
Process ID file already exists
azureuser@chatgpt:~/GPT3Discord$

PS: the only thing I changed was setting low_usage_mode to true.

Running Ubuntu 20 on arm64

Edit: I installed on a fresh VM (x64) but got the same error. I also reset my .env file, but still the same.

Also, I do notice that it says "Process ID file already exists", but I don't know how to fix that.

Add welcome message for new members.

It would be nice to make this bot behave more humanely by adding this feature.

You'd add a new env variable for the welcome message.

The bot would DM every new member with the message.

It should be an embed, in case someone wants styling.

Happy new year.

Add Docker Support + Image Uploading

Is your feature request related to a problem? Please describe.
Docker offers a clean environment dedicated to the code and will let me run the bot on my VPS tidily, since I don't want to install all the deps and Python 3.9 on it directly. I would rather run it in a container.

Describe the solution you'd like
Add a Docker file to the repo and then make some CI to build and upload docker images on every merge to main.

Describe alternatives you've considered

  • I have https://github.com/Kav-K/GPT3Discord/pull/22 up to add a Dockerfile + concept of saving file into a persistent data directory
  • Once this is merged I'll put up a PR to make GitHub actions push to docker hub on every merge to main

Additional context
I'm willing to do all the PRs. Will just need you to create a docker hub account and add a docker API token into Github secrets to enable uploading docker images from GitHub actions CI.

HELP WANTED - Looking for people with high traffic discord servers

Hi all!

I'm looking for someone with a Discord server that has a high volume of traffic, where a lot of people are chatting. I'd like to longevity-test and stress-test the bot (specifically the moderation and async functionalities).

Please add me on discord if you’d like to help out, Kaveen#0001

Buggy conversation edits when using pinecone

image

Sometimes, when editing a message in a conversation while pinecone is enabled, the bot will not properly delete the previous messages.

As an example, at the bottom: I said that I'm working on a Discord bot, and then I said I'm working on a JavaScript project. The JavaScript project message was an edit, but the previous message wasn't cleared from the conversation history and was sent up again in the prompt; I'm not sure why.

The messages above it were also edits that got copied.

It has something to do with when/how embeddings are made, especially in the message edit flow, when encapsulated_send is triggered by on_message_edit.

Include the prompt in /g output

When /g is used, it only shows the output as-is. I'd recommend including the input in the output, but bolded, so that if the output is a natural continuation of the prompt, the context is kept.

Screenshot_20221231-213147~2.png

Here the prompt is "What would happen if "

Make README look better and more cohesive

  • Put the commands into a nice-looking table with better descriptions of what the commands do and their parameters
  • Add info on newly added features, like the ability to have a conversation opener in a chat, etc.
  • General cleanup and beautification
  • Add build status badges for Docker and PyPI, add a latest-release badge, and any others that would be useful

[BUG] Unable to set ALLOWED_GUILDS

Describe the bug
I added the new params to env and hit:

cooper@us:~$ docker run --name gpt3discord -v /containers/gpt3discord/env:/bin/.env -v /containers/gpt3discord/data:/data gpt3discord:latest
Traceback (most recent call last):
  File "/bin/gpt3discord", line 10, in <module>
    from cogs.draw_image_generation import DrawDallEService
  File "/usr/local/lib/python3.9/site-packages/cogs/draw_image_generation.py", line 18, in <module>
    ALLOWED_GUILDS = EnvService.get_allowed_guilds()
  File "/usr/local/lib/python3.9/site-packages/models/env_service_model.py", line 23, in get_allowed_guilds
    raise ValueError(
ValueError: ALLOWED_GUILDS is not defined properly in the environment file!Please copy your server's guild ID and put it into ALLOWED_GUILDS in the .env file.For example a line should look like: `ALLOWED_GUILDS="971268468148166697"`f

My env:

cooper@us:~/repos/GPT3Discord$ cat /containers/gpt3discord/env
DATA_DIR="/data"

OPENAI_TOKEN="NO"

DISCORD_TOKEN="NONO"

ALLOWED_GUILDS="811050810460078100"
ALLOWED_ROLES="Admin,gpt"
DEBUG_GUILD="811050810460078100"
DEBUG_CHANNEL="1058174617287663689"

To Reproduce
Run from trunk with my config

Expected behavior
Start and download the files ...

Close away if you're working on this. I found this trying to move to latest published docker containers via ansible.

[BUG] I don't know what this is

None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.

Fixing warning errors in logs

The bot works great! Thank you for creating such an amazing piece of software.

CleanShot 2022-12-19 at 10 30 58@2x

When I monitor the logs, I've seen that there are some cases that need error handling. I just want to share them in case you'd like to fix them.

Install via python module

Is your feature request related to a problem? Please describe.
Create a setup.py or pyproject.toml and install using pip.

Describe the solution you'd like
Today we have to manually install dependencies and copy the Python source files into the Python environment. Let's use a defined module install method to do this for us and build sdists + wheels.

Describe alternatives you've considered
We can use traditional setup.py or more modern pyproject.toml. Happy to do either.

Additional context
https://packaging.python.org/en/latest/tutorials/packaging-projects/

[Question] when running - None of PyTorch, TensorFlow >= 2.0, or Flax have been found. - Bot wont run

I think this may actually be a config issue on my DO server. I had to restart my droplet and update the debug channel in the .env to a separate channel, and since then I can't get the bot to run. I get the following error message when I run screen python3.9 main.py:

"None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used."

I had it working fine last night!

Threading/Async for image concatenation

Concatenating images sometimes takes a while, especially on low-powered servers. I'm thinking it may be a good idea to make image generation work in a queue-based fashion as a start, and then, using that as scaffolding, make image concatenation happen off of the main thread.
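The off-main-thread part can be sketched with `asyncio.to_thread`, which runs a blocking function in a worker thread while the event loop stays responsive. `concatenate()` here is a stand-in for the real PIL-based image work:

```python
import asyncio

# Push CPU-heavy image concatenation off the event loop so slash commands
# keep responding while images are stitched together.
def concatenate(images):
    # Placeholder for the expensive image-stitching step.
    return "+".join(images)

async def concatenate_async(images):
    return await asyncio.to_thread(concatenate, images)
```

A queue of pending concatenation jobs could then feed this coroutine one request at a time.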

Better, more accurate and more human-like translations using DeepL

Is your feature request related to a problem? Please describe.
While I think GPT-3's translations are correct in content, they often seem too edgy and clinical to me, which somehow doesn't fit a bot that tries to get as close as possible to a human interlocutor.

Describe the solution you'd like
I would like to have the best possible and most accurate human-like translation.

Describe alternatives you've considered
DeepL is apparently currently the best translator. With really fascinating results.

Additional context (Sorry for that copycat down there... Dx)

  • Unlike other services, the neural networks of DeepL can detect even the smallest nuances and reflect them in the translation. Not only does DeepL achieve record results in scientific benchmarks, but in blind tests translators also prefer the results of DeepL three times more often than those of the competition.
  • When using DeepL Pro, your data is protected with the highest security measures. We guarantee DeepL Pro subscribers that all texts are deleted immediately after the translation has been completed, and that the connection to our servers is always encrypted. This means that your texts are not used for any purposes other than your translation, nor can they be accessed by third parties.
  • With DeepL Pro, you can translate an entire document with one click. All fonts, images, and formatting remain in place, leaving you free to edit the translated document any way you like.
  • It can translate your Microsoft Word (.docx), PowerPoint (.pptx), PDF (.pdf), and text (.txt) files. Further formats coming soon!
  • If you sign up for the DeepL API plan, you will be able to integrate DeepL’s JSON-based REST API into your own products and platforms. This allows you to incorporate the world’s best machine translation technology into a variety of new applications. For example, a company could have their international service enquiries instantly translated by DeepL Pro, greatly simplifying business procedures and improving customer satisfaction.
  • Freelance translators, translation agencies, language service providers, or corporate language departments can all benefit from using DeepL Pro, the world’s best machine translation technology, in their CAT Tool.

DeepL - Translator
DeepL - Press
DeepL - ProAPI
Why using DeepL instead of other translators?

Properly gap requests made to the moderations endpoint

For servers with high volume of text messages, we need to ensure that the moderations service still works reliably.

Re-evaluate the time gapping for moderation requests to make sure the endpoint is hit at most once every 0.25 seconds or so, using a cumulative queue. Right now, a moderation request is sent 0.5 seconds after a user sends a message, but if many users send messages at the same time, all of those requests will be queued for 0.5 seconds from that moment and fire at once, spamming the API. We need to properly gap this.

I’d love for people with high volume servers to tell me their thoughts here as well, if you’ve already been trying out moderations!
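The cumulative-queue idea can be sketched as a single worker that drains the whole backlog in one batch per tick, so the endpoint is hit at most once per `gap` seconds no matter how many users post simultaneously. `send_batch` is a stand-in for the real moderation API call:

```python
import asyncio

# One worker owns the queue: it blocks until at least one message arrives,
# sweeps up everything else that queued in the meantime, sends a single
# batch, then enforces the minimum gap before the next sweep.
async def moderation_worker(queue: asyncio.Queue, send_batch, gap: float = 0.25):
    while True:
        batch = [await queue.get()]      # block until at least one message
        while not queue.empty():
            batch.append(queue.get_nowait())
        await send_batch(batch)          # one call covers the whole backlog
        await asyncio.sleep(gap)         # enforce the minimum gap
```

Message handlers then just `put_nowait` incoming messages and never touch the API directly.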

Add exponential backoff retry for API errors

When the upstream API returns an error that's out of our control, use exponential backoff to attempt to retry the original request and return it to the original return location
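A minimal sketch of the retry wrapper, with jitter to avoid synchronized retries. `retryable` is whatever exception type the API client raises for transient errors — an assumption here, not the bot's actual error class:

```python
import asyncio
import random

# Retry an async request with exponentially growing delays plus jitter,
# re-raising the original error once the retry budget is exhausted.
async def with_backoff(request, retries: int = 4, base: float = 0.5,
                       retryable=Exception):
    for attempt in range(retries):
        try:
            return await request()
        except retryable:
            if attempt == retries - 1:
                raise  # out of retries; surface the original error
            # base, 2*base, 4*base, ... plus up to 100% jitter
            await asyncio.sleep(base * (2 ** attempt) * (1 + random.random()))
```

The successful result propagates back to the original caller unchanged, which matches the "return it to the original return location" requirement.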

Add a chat preamble feature

It would be cool to be able to add a preamble that gets inserted at the start of every prompt, even when the rest gets summarized. This can be used for example to define the character that the bot should play.

A possible way in which this could be set by the user is by issuing a command: !gp

Conversation mode will create a new thread within channel

Is your feature request related to a problem? Please describe.
Yes, !g converse mode is not working as I expected. Multiple people running multiple conversations in a single room is a little bit confusing.

Describe the solution you'd like
I'd appreciate it if you could create a new thread for each conversation started by a user, so that everyone who joins the thread can take part in the discussion, just like sharing a single ChatGPT session.

Unclosed connector while running main.py

I've followed exactly the same steps as the README, running on Ubuntu 19 and Python 3.10.

Here is the output:

root@discord-bots:~/GPT3Discord# python3 main.py
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
/root/GPT3Discord/main.py:23: DeprecationWarning: There is no current event loop
  asyncio.ensure_future(Message.process_message_queue(message_queue, 1.5, 5))
/root/GPT3Discord/main.py:24: DeprecationWarning: There is no current event loop
  asyncio.ensure_future(Deletion.process_deletion_queue(deletion_queue, 1, 1))
Wrote PID to file the file bot.pid
/root/GPT3Discord/main.py:78: DeprecationWarning: There is no current event loop
  asyncio.get_event_loop().run_until_complete(main())
Conversation starter text loaded from file.
The debug channel and guild IDs are 793253342490918912 and 1054226060415356938
Shard ID None is requesting privileged intents that have not been explicitly enabled in the developer portal. It is recommended to go to https://discord.com/developers/applications/ and explicitly enable the privileged intents within your application's page. If this is not possible, then consider disabling the privileged intents instead.
Removing PID file
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7fa0ee96d150>
Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x7fa0ee937040>, 166.232241252)]']
connector: <aiohttp.connector.TCPConnector object at 0x7fa0ee96d120>

Upgrade to Slash commands

Please upgrade the bot to use slash commands, since Discord is encouraging this and users find it more appealing. Just create a new branch for it, so whoever doesn't want it can keep using this one.

**Another feature is private threads.**

Please make a new command that creates a private thread, and link that command to a role so I can put the role ID in the env file. Anyone with that role will be able to create private conversation threads using the command.

I'd love to suggest more enhancements, but let's take it slow.

Multi-user conversations in a thread

Allow for the conversation system to send multiple users to GPT3 in the conversation history.

For example, we should key the user based on their discord name and the conversations should look like this:

{CONVERSATION CONTEXT PRE-TEXT}
Kaveen: Hey!
GPTie: Hello there Kaveen, how can I help?
Bob: Hey GPTie!
GPTie: Hey to you as well Bob!

For the initial version of this, each message sent by any user should cause an API Request to happen. In later, and more sophisticated versions of this, I want the API request to happen 10 seconds after no conversation activity has happened, so that we don't have to make an individual request for each user's message, especially if the users are talking to each other and not GPTie.
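Rendering the multi-user history into a single prompt could look like the sketch below, keyed by each speaker's Discord name; "GPTie" is the bot name used in the example transcript above:

```python
# Build the "Name: message" transcript format, leaving the completion
# cursor on the bot's next turn so GPT-3 answers as GPTie.
def render_history(pretext, history):
    """history: list of (speaker, message) pairs, oldest first."""
    lines = [pretext]
    lines.extend(f"{speaker}: {message}" for speaker, message in history)
    lines.append("GPTie:")
    return "\n".join(lines)
```

The debounced version would simply accumulate `(speaker, message)` pairs for 10 seconds before calling this once.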

[BUG] Context limit is seemingly not correct

Describe the bug
The conversation runs out of context (the safeguard check in the code fails and calls self.end_conversation) well before 4000 tokens are used up in the conversation. This should be some simple value somewhere, but it will need some investigation.

To Reproduce
Steps to reproduce the behavior:

  1. Start a conversation
  2. Build the conversation up to around 1000-2000 tokens
  3. Continue until the context-limit check fails, which you should see happen before 4000 tokens are actually used.

Expected behavior
The conversation should only end if we attempt to send more than 4000 tokens in a prompt

This appears when I execute main.py

None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
Traceback (most recent call last):
  File "C:\Users\Lordy\Downloads\GPT3Discord-main\main.py", line 66, in <module>
    asyncio.get_event_loop().run_until_complete(main())
  File "C:\Users\Lordy\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete
    return future.result()
  File "C:\Users\Lordy\Downloads\GPT3Discord-main\main.py", line 45, in main
    debug_guild = int(os.getenv("DEBUG_GUILD"))
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'

Add Private conversation command

Just one more command, like /gpt private.
Then the bot will create a private channel, like a ticket channel, that only the user and the bot can access.

So in the env file, there will be two variables: the private role ID and the private channels category ID.

Only users with the "private" role can use the /gpt private command, and only the user who used the command can see the channel.

This is the beginning of a new utility for this bot. I will configure it to behave like a counselor of some sort along the line.

I'll be taking another feature to a new issue.

Threading/Async for OpenAI requests

As multiple users use the bot, we need to thread or make asynchronous the OpenAI API requests so that the bot's main thread doesn't get blocked every time it works on a request. Currently, the main thread gets blocked, and if many people use multiple commands consecutively, the bot will block for a long time. At the very least (and likely an easier approach), some kind of non-blocking queue for requests should be implemented instead of attempting to respond to the user immediately. This is probably a good approach to start with as well.
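One way to unblock the event loop is to run the blocking API call in a thread pool via `run_in_executor`; `blocking_request` below is a stand-in for the synchronous OpenAI call, not the bot's actual function:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# A shared pool caps how many OpenAI requests run concurrently while the
# event loop keeps servicing other commands.
_executor = ThreadPoolExecutor(max_workers=4)

async def send_request_async(blocking_request, prompt):
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(_executor, blocking_request, prompt)
```

The queue-based variant would have command handlers enqueue prompts and a worker coroutine await this wrapper one job at a time.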

Let's take this bot out of the box.

Is it possible for this bot to be configured in a way that it can act as an actual AI companion? How?

By making it chat up members randomly and follow up according to their responses.

For example, on a dating server where people don't really interact with each other, this bot should be able to DM members and make them feel like they have a friend.

Just enable auto-DM and then train the bot to send messages that will keep people engaged.

(User provided API keys) Take the OpenAI API load off the server host (make this a multi-tenant bot)

Currently, the project has one OpenAI token defined in an .env file. This token is used for all requests made by anyone using the bot, across all the servers the bot is installed in. This is not viable, as it puts the entire API burden on one person.

Soon, I'd like to make it so that if a setting is toggled, each user is required to enter their own API key, and that key is used for all of their requests. The usage commands should also be updated to reflect per-user usage.
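The per-user lookup could be sketched like this. `KeyStore` is hypothetical, though `USER_INPUT_API_KEYS` and `OPENAI_TOKEN` do appear in the example .env elsewhere on this page:

```python
import os

class KeyStore:
    """Resolve which OpenAI key to use for a request: the user's own
    key when the per-user toggle is on, else the host's shared key."""

    def __init__(self, per_user: bool):
        self.per_user = per_user
        self._keys: dict[int, str] = {}

    def set_key(self, user_id: int, api_key: str) -> None:
        """Store a key a user registered (e.g. via a /setup command)."""
        self._keys[user_id] = api_key

    def key_for(self, user_id: int):
        """None means the user must register a key before using the bot."""
        if self.per_user:
            return self._keys.get(user_id)
        return os.getenv("OPENAI_TOKEN")

# store = KeyStore(per_user=os.getenv("USER_INPUT_API_KEYS") == "True")
```

In practice the registered keys would need to be persisted and encrypted rather than held in a plain dict.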

What port is the bot listening on? [BUG]

I hosted the bot on Koyeb, and Koyeb requires a listening port.

I tried 5000 but it didn't listen. I've also tried 3000, 8080, 22, and 80. None worked. Which port should I use?
Here is the instance:

(Screenshot of the Koyeb instance attached.)

Purge Non-ASCII characters from Discord usernames so it doesn't error out when sending data to Pinecone

Our entire Discord server uses special non-ASCII characters as the first character of usernames to denote rank. It would be nice if unfriendly characters were stripped out before the data is sent to Pinecone, as they cause conversations to end immediately.

Error shown below:

pinecone.core.client.exceptions.ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Wed, 11 Jan 2023 01:33:10 GMT', 'x-envoy-upstream-service-time': '0', 'content-length': '129', 'server': 'envoy'})
HTTP response body: {"code":3,"message":"ID must be ASCII, but got \n'⧋ Noa [SCALPEL-3]': Hi newthena \u003c|endofstatement|\u003e\n","details":[]}

Screenshot on the user side: (image attached)

Keep up the great work! Love the new features.
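Since the error says the Pinecone ID must be ASCII, one minimal mitigation is to strip non-ASCII characters from the username before building the ID. This is a sketch, not the bot's current code:

```python
def to_ascii_id(text: str) -> str:
    """Strip non-ASCII characters (rank glyphs, emoji, etc.) from a
    username before it is embedded in a Pinecone vector ID."""
    return text.encode("ascii", errors="ignore").decode("ascii").strip()
```

Applied to the failing username above, `to_ascii_id("⧋ Noa [SCALPEL-3]")` drops the leading glyph and yields an ID Pinecone will accept.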

Allow for a way to let all users use the bot

Currently, only users with the roles defined in ALLOWED_ROLES in .env can use the bot. Add a way to default it so that all users can use the bot (e.g., if ALLOWED_ROLES is unset, everyone can use it). Open to any implementation suggestions on this.
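The proposed default could look like this sketch: an unset or empty ALLOWED_ROLES yields `None`, which the permission check treats as "everyone allowed".

```python
import os

def parse_allowed_roles():
    """Return the list of allowed role names, or None when ALLOWED_ROLES
    is unset/empty, meaning every user may use the bot (proposed default,
    not existing behavior)."""
    raw = os.getenv("ALLOWED_ROLES", "")
    roles = [r.strip() for r in raw.split(",") if r.strip()]
    return roles or None

def user_allowed(user_role_names, allowed_roles) -> bool:
    """A user is allowed if no role list is configured, or if they
    hold at least one of the configured roles."""
    return allowed_roles is None or any(r in allowed_roles for r in user_role_names)
```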

[BUG] discord.errors.Forbidden: 403 Forbidden (error code: 50001): Missing Access

Describe the bug
I get a Discord "Missing Access" error.

To Reproduce
Steps to reproduce the behavior:

  1. Go to the folder where the .py files are
  2. Open a terminal and execute python gpt3discord.py
  3. See error

Expected behavior
The bot connects to the Discord server and the assigned channel.

Additional context
.env file

OPENAI_TOKEN = ""
DISCORD_TOKEN = ""
DEBUG_GUILD = "" # discord_server_id
DEBUG_CHANNEL = "" # discord_channel_id
ALLOWED_GUILDS = ""
PINECONE_TOKEN = ""

ADMIN_ROLES = "Admin,Owner"
DALLE_ROLES = "Admin,Openai,Dalle,gpt"
GPT_ROLES = "Openai,gpt"

WELCOME_MESSAGE = "Hi There! Welcome to our Discord server. We hope you'll enjoy our server and we look forward to engaging with you!"
USER_INPUT_API_KEYS="False"
