simatwa / python-tgpt

AI Chat in Terminal + Package + REST-API

Home Page: https://python-tgpt.onrender.com

License: MIT License

Python 99.07% Makefile 0.68% Shell 0.25%
ai chatgpt gemini gpt python-tgpt terminal-gpt tgpt fastapi

python-tgpt's Introduction


python-tgpt

>>> import pytgpt.phind as phind
>>> bot = phind.PHIND()
>>> bot.chat('hello there')
'Hello! How can I assist you today?'
from pytgpt.imager import Imager
img = Imager()
generated_images = img.generate(prompt="Cyberpunk", amount=3, stream=True)
img.save(generated_images)

This project enables seamless interaction with over 45 free LLM providers without requiring an API key, and it can generate images as well.

The name python-tgpt draws inspiration from its parent project tgpt, which is written in Go. Through this Python adaptation, users can effortlessly engage with a number of freely available LLMs, fostering a smoother AI interaction experience.

Features

  • 🐍 Python package
  • 🌐 FastAPI for web integration
  • ⌨️ Command-line interface
  • 🧠 Multiple LLM providers - 45+
  • 🌊 Stream and non-stream response
  • 🚀 Ready to use (No API key required)
  • 🎯 Customizable script generation and execution
  • 🔌 Offline support for Large Language Models
  • 🎨 Image generation capabilities
  • 🎤 Text-to-audio conversion capabilities
  • ⛓️ Chained requests via proxy
  • 🗨️ Enhanced conversational chat experience
  • 💾 Capability to save prompts and responses (Conversation)
  • 🔄 Ability to load previous conversations
  • 🚀 Pass awesome-chatgpt prompts easily
  • 🤖 Telegram bot - interface
  • 🔄 Asynchronous support for all major operations.

Providers

These are simply the hosts of the LLMs, which include:

  1. Leo - Brave
  2. Koboldai
  3. OpenGPTs
  4. OpenAI (API key required)
  5. WebChatGPT - OpenAI (Session ID required)
  6. Gemini - Google (Session ID required)
  7. Phind
  8. Llama2
  9. Blackboxai
  10. gpt4all (Offline)
  11. Poe - Poe|Quora (Session ID required)
  12. Groq (API Key required)
  13. Perplexity
  14. YepChat

41+ providers proudly offered by gpt4free.

  • To list working providers run:
    $ pytgpt gpt4free test -y

Prerequisites

Installation and Usage

Installation

Download binaries for your system from here.

Alternatively, you can install from PyPI instead of using binaries. (Recommended)

  1. Developers:

    pip install --upgrade python-tgpt
  2. Commandline:

    pip install --upgrade "python-tgpt[cli]"
  3. Full installation:

    pip install --upgrade "python-tgpt[all]"

pip install -U "python-tgpt[api]" will install REST API dependencies.

Termux extras

  1. Developers:

    pip install --upgrade "python-tgpt[termux]"
  2. Commandline:

    pip install --upgrade "python-tgpt[termux-cli]"
  3. Full installation:

    pip install --upgrade "python-tgpt[termux-all]"

pip install -U "python-tgpt[termux-api]" will install REST API dependencies.

Usage

This package offers a convenient command-line interface.

Note

phind is the default provider.

  • For a quick response:

    python -m pytgpt generate "<Your prompt>"
  • For interactive mode:

    python -m pytgpt interactive "<Kickoff prompt (though not mandatory)>"

Make use of the flag --provider followed by the provider name of your choice, e.g. --provider koboldai.

To list all providers offered by gpt4free, use the following command: pytgpt gpt4free list providers

You can also simply use pytgpt instead of python -m pytgpt.

Starting from version 0.2.7, running $ pytgpt without any other command or option will automatically enter the interactive mode. Otherwise, you'll need to explicitly declare the desired action, for example, by running $ pytgpt generate.

Developer Docs

  1. Generate a quick response
from pytgpt.leo import LEO
bot = LEO()
resp = bot.chat('<Your prompt>')
print(resp)
# Output : How may I help you.
  2. Get back the whole response
from pytgpt.leo import LEO
bot = LEO()
resp = bot.ask('<Your Prompt>')
print(resp)
# Output
"""
{'completion': "I'm so excited to share with you the incredible experiences...", 'stop_reason': None, 'truncated': False, 'stop': None, 'model': 'llama-2-13b-chat', 'log_id': 'cmpl-3NmRt5A5Djqo2jXtXLBwJ2', 'exception': None}
"""
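As the sample output shows, ask() returns the whole payload as a dictionary for this provider, so the generated text can be pulled out of the completion key. A minimal sketch using the sample payload above (other providers may use different keys):

```python
# Sample payload as returned by LEO.ask() (copied from the output above).
response = {
    'completion': "I'm so excited to share with you the incredible experiences...",
    'stop_reason': None,
    'truncated': False,
    'stop': None,
    'model': 'llama-2-13b-chat',
    'log_id': 'cmpl-3NmRt5A5Djqo2jXtXLBwJ2',
    'exception': None,
}

# Extract just the generated text from the full payload.
text = response['completion']
print(text)  # I'm so excited to share with you the incredible experiences...
```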

Stream Response

Just add the parameter stream with the value True.

  1. Text Generated only
from pytgpt.leo import LEO
bot = LEO()
resp = bot.chat('<Your prompt>', stream=True)
for value in resp:
    print(value)
# output
"""
How may
How may I help 
How may I help you
How may I help you today?
"""
  2. Whole Response
from pytgpt.leo import LEO
bot = LEO()
resp = bot.ask('<Your Prompt>', stream=True)
for value in resp:
    print(value)
# Output
"""
{'completion': "I'm so", 'stop_reason': None, 'truncated': False, 'stop': None, 'model': 'llama-2-13b-chat', 'log_id': 'cmpl-3NmRt5A5Djqo2jXtXLBwxx', 'exception': None}

{'completion': "I'm so excited to share with.", 'stop_reason': None, 'truncated': False, 'stop': None, 'model': 'llama-2-13b-chat', 'log_id': 'cmpl-3NmRt5A5Djqo2jXtXLBwxx', 'exception': None}

{'completion': "I'm so excited to share with you the incredible ", 'stop_reason': None, 'truncated': False, 'stop': None, 'model': 'llama-2-13b-chat', 'log_id': 'cmpl-3NmRt5A5Djqo2jXtXLBwxx', 'exception': None}

{'completion': "I'm so excited to share with you the incredible experiences...", 'stop_reason': None, 'truncated': False, 'stop': None, 'model': 'llama-2-13b-chat', 'log_id': 'cmpl-3NmRt5A5Djqo2jXtXLBwxx', 'exception': None}
"""
Auto - *(selects any working provider)*
import pytgpt.auto as auto
bot = auto.AUTO()
print(bot.chat("<Your-prompt>"))
Openai
import pytgpt.openai as openai
bot = openai.OPENAI("<OPENAI-API-KEY>")
print(bot.chat("<Your-prompt>"))
Koboldai
import pytgpt.koboldai as koboldai
bot = koboldai.KOBOLDAI()
print(bot.chat("<Your-prompt>"))
Opengpt
import pytgpt.opengpt as opengpt
bot = opengpt.OPENGPT()
print(bot.chat("<Your-prompt>"))
phind
import pytgpt.phind as phind
bot = phind.PHIND()
print(bot.chat("<Your-prompt>"))
Gpt4free providers
import pytgpt.gpt4free as gpt4free
bot = gpt4free.GPT4FREE(provider="Koala")
print(bot.chat("<Your-prompt>"))

Asynchronous

Version 0.7.0 introduces an asynchronous implementation for almost all providers, except a few such as perplexity and gemini, which rely on other libraries that lack such an implementation.

To make it easier, you just have to prefix Async to the common synchronous class name. For instance OPENGPT will be accessed as AsyncOPENGPT:

Streaming the whole AI response

import asyncio
from pytgpt.phind import AsyncPHIND

async def main():
    async_ask = await AsyncPHIND(False).ask(
        "Critique that python is cool.",
        stream=True
    )
    async for streaming_response in async_ask:
        print(
            streaming_response
        )

asyncio.run(
    main()
)

Streaming just the text

import asyncio
from pytgpt.phind import AsyncPHIND

async def main():
    async_ask = await AsyncPHIND(False).chat(
        "Critique that python is cool.",
        stream=True
    )
    async for streaming_text in async_ask:
        print(
            streaming_text
        )

asyncio.run(
    main()
)

To obtain more tailored responses, consider using the optimizer parameter. Its value can be set to either code or system_command.

from pytgpt.leo import LEO
bot = LEO()
resp = bot.ask('<Your Prompt>', optimizer='code')
print(resp)

Important

Commencing from v0.1.0, the default mode of interaction is conversational. This mode enhances the interactive experience, offering better control over the chat history. By associating previous prompts and responses, it tailors conversations for a more engaging experience.

You can still disable the mode:

bot = koboldai.KOBOLDAI(is_conversation=False)

Utilize the --disable-conversation flag in the console to achieve the same functionality.
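Conceptually, conversational mode just carries the previous turns along with each new prompt. The sketch below is illustrative only (a hypothetical class, not pytgpt's actual internals) and shows what enabling or disabling is_conversation changes:

```python
class ConversationSketch:
    """Illustrative only -- not pytgpt internals: prepend prior turns to each prompt."""

    def __init__(self, is_conversation=True):
        self.is_conversation = is_conversation
        self.history = []  # alternating "User:" / "AI:" lines

    def build_prompt(self, prompt):
        turn = f"User: {prompt}"
        if not self.is_conversation:
            return turn  # conversation disabled: send the bare prompt
        full_prompt = "\n".join(self.history + [turn])
        self.history.append(turn)
        return full_prompt

    def record_response(self, text):
        if self.is_conversation:
            self.history.append(f"AI: {text}")


chat = ConversationSketch()
first = chat.build_prompt("hello")  # no history yet
chat.record_response("Hi! How can I help?")
second = chat.build_prompt("what did I just say?")
print(second)  # the new prompt now carries the earlier exchange
```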

Caution

Bard handles context automatically, so the is_conversation parameter is unnecessary when initializing that class. Also be informed that the majority of providers offered by gpt4free require Google Chrome in order to function.

Image Generation

This has been made possible by pollinations.ai.

$ pytgpt imager "<prompt>"
# e.g. pytgpt imager "Coding bot"
Developers
from pytgpt.imager import Imager

img = Imager()

generated_img = img.generate('Coding bot') # [bytes]

img.save(generated_img)
Download Multiple Images
from pytgpt.imager import Imager

img = Imager()

img_generator = img.generate('Coding bot', amount=3, stream=True)

img.save(img_generator)

# RAM friendly

Using Prodia provider

from pytgpt.imager import Prodia

img = Prodia()

img_generator = img.generate('Coding bot', amount=3, stream=True)

img.save(img_generator)

Advanced Usage of Placeholders

The generate functionality has been enhanced starting from v0.3.0 to enable comprehensive utilization of the --with-copied option and support for accepting piped inputs. This improvement introduces placeholders, offering dynamic values for more versatile interactions.

Placeholder Represents
{{stream}} The piped input
{{copied}} The last copied text

This feature is particularly beneficial for intricate operations. For example:

$ git diff | pytgpt generate "Here is a diff file: {{stream}} Make a concise commit message from it, aligning with my commit message history: {{copied}}" --new

In this illustration, {{stream}} denotes the result of the $ git diff operation, while {{copied}} signifies the content copied from the output of the $ git log command.
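Under the hood this amounts to plain string substitution. The sketch below uses a hypothetical helper, fill_placeholders, to illustrate the idea; it is not part of the pytgpt API:

```python
def fill_placeholders(prompt, stream_text, copied_text):
    """Substitute the two documented placeholders with their dynamic values."""
    return (prompt
            .replace("{{stream}}", stream_text)
            .replace("{{copied}}", copied_text))


template = ("Here is a diff file: {{stream}} "
            "Make a concise commit message aligning with: {{copied}}")
filled = fill_placeholders(
    template,
    "diff --git a/app.py b/app.py",   # stand-in for piped stdin
    "fix: handle empty input",        # stand-in for the clipboard
)
print(filled)
```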

Awesome Prompts

These prompts are designed to guide the AI's behavior or responses in a particular direction, encouraging it to exhibit certain characteristics or behaviors. The term "awesome-prompt" is not a formal term in AI or machine learning literature, but it encapsulates the idea of crafting prompts that are effective in achieving desired outcomes. Let's say you want it to behave like a Linux Terminal, PHP Interpreter, or just to JAIL BREAK.

Instances :

$ pytgpt interactive --awesome-prompt "Linux Terminal"
# Act like a Linux Terminal

$ pytgpt interactive -ap DAN
# Jailbreak

Note

Awesome prompts are an alternative to --intro. Run $ pytgpt awesome whole to list available prompts (200+). Run $ pytgpt awesome --help for more info.

Introducing RawDog

RawDog is a standout feature that exploits the versatile capabilities of Python to command and control your system as per your needs. You can literally do anything with it, since it generates and executes Python code, driven by your prompts! To have a bite of RawDog, simply append the flag --rawdog (short form -rd) in generate/interactive mode. This introduces a never-seen-before feature in the tgpt ecosystem. Thanks to AbanteAI/rawdog for the idea.

This can be useful in some ways. For instance :

$ pytgpt generate -n -q "Visualize the disk usage using pie chart" --rawdog

This will pop up a window showing system disk usage as shown below.

Passing Environment Variables

Pytgpt v0.4.6 introduces a convenient way of taking variables from the environment. To achieve that, set the environment variables in your operating system or script with the prefix PYTGPT_ followed by the option name in uppercase, replacing dashes with underscores.

For example, for the option --provider, you would set an environment variable PYTGPT_PROVIDER to provide a default value for that option. Same case applies to boolean flags such as --rawdog whose environment variable will be PYTGPT_RAWDOG with value being either true/false. Finally, --awesome-prompt will take the environment variable PYTGPT_AWESOME_PROMPT.
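The naming rule and the precedence (explicit value beats environment, which beats the default) can be sketched with two hypothetical helpers; option_to_env_var and resolve_option are illustrative names, not pytgpt functions:

```python
import os


def option_to_env_var(option):
    """Map a CLI option like '--awesome-prompt' to its PYTGPT_ environment variable."""
    return "PYTGPT_" + option.lstrip("-").replace("-", "_").upper()


def resolve_option(option, explicit=None, default=None):
    """Explicit values win; otherwise fall back to the environment, then the default."""
    if explicit is not None:
        return explicit
    return os.environ.get(option_to_env_var(option), default)


os.environ["PYTGPT_PROVIDER"] = "koboldai"
print(option_to_env_var("--awesome-prompt"))          # PYTGPT_AWESOME_PROMPT
print(resolve_option("--provider", default="phind"))  # koboldai (from the environment)
print(resolve_option("--provider", explicit="leo"))   # leo (explicit value overrides)
```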

Note

This is NOT limited to any command

The environment variables can be overridden by explicitly declaring new value.

Tip

Save the variables in a .env file in your current directory or export them in your ~/.zshrc file. To load previous conversations from a .txt file, use the -fp or --filepath flag. If no flag is passed, the default one will be used. To load context from a file without altering its content, use the --retain-file flag.
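For illustration, a .env file is just KEY=VALUE lines; the minimal loader below is a sketch (real projects typically rely on the python-dotenv package rather than hand-rolling this):

```python
import os
import tempfile
from pathlib import Path


def load_env_file(path):
    """Tiny .env reader: KEY=VALUE per line, '#' comments, no quoting rules."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            # setdefault: variables already set in the environment win
            os.environ.setdefault(key.strip(), value.strip())


with tempfile.TemporaryDirectory() as tmp:
    env_file = Path(tmp) / ".env"
    env_file.write_text("# defaults for pytgpt\nPYTGPT_RAWDOG=true\nPYTGPT_AWESOME_PROMPT=DAN\n")
    load_env_file(env_file)

print(os.environ["PYTGPT_RAWDOG"])  # true
```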

Dynamic Provider & Further Interfaces

Version 0.4.6 also introduces a dynamic provider called g4fauto, which represents the fastest working g4f-based provider.

Tip

To launch the web interface for g4f-based providers, execute the following command in your terminal:

$ pytgpt gpt4free gui

This command initializes the Web-user interface for interacting with g4f-based providers.

To start the REST-API:

$ pytgpt api run

This command starts the RESTful API server, enabling you to interact with the service programmatically.

For accessing the documentation and redoc, navigate to the following paths in your web browser:

  • Documentation: /docs
  • ReDoc: /redoc

Speech Synthesis

To enable speech synthesis of responses, ensure you have either the VLC player installed on your system or, if you are a Termux user, the Termux:API package.

To activate speech synthesis, use the --talk-to-me flag or its shorthand -ttm when running your commands. For example:

$ pytgpt generate "Generate an ogre story" --talk-to-me

or

$ pytgpt interactive -ttm

This flag instructs the system to vocalize the AI responses and then play them, enhancing the user experience by providing auditory feedback.

Version 0.6.4 introduces another dynamic provider, auto, which picks whichever provider is working overall. This relieves you of the workload of manually checking for a working provider each time you fire up pytgpt. However, auto as a provider does not work so well with streaming responses, so you may need to sacrifice performance for the sake of reliability.

If you're not satisfied with the existing interfaces, pytgpt-bot could be the solution you're seeking. This bot is designed to enhance your experience by offering a wide range of functionalities. Whether you're interested in engaging in AI-driven conversations, creating images and audio from text, or exploring other innovative features, pytgpt-bot is equipped to meet your needs.

The bot is maintained as a separate project, so you just have to execute a command to get it installed:

$ pip install pytgpt-bot

Usage : pytgpt bot run <bot-api-token>

Or you can simply interact with the one running now as @pytgpt-bot

For more usage info run $ pytgpt --help

Usage: pytgpt [OPTIONS] COMMAND [ARGS]...

Options:
  -v, --version  Show the version and exit.
  -h, --help     Show this message and exit.

Commands:
  api          FastAPI control endpoint
  awesome      Perform CRUD operations on awesome-prompts
  bot          Telegram bot interface control
  generate     Generate a quick response with AI
  gpt4free     Discover gpt4free models, providers etc
  imager       Generate images with pollinations.ai
  interactive  Chat with AI interactively (Default)
  utils        Utility endpoint for pytgpt
  webchatgpt   Reverse Engineered ChatGPT Web-Version

API Health Status

No. API Status
1. On-render cron-job

Acknowledgements

  1. tgpt
  2. gpt4free

python-tgpt's People

Contributors

simatwa, johnd0e, ayaz345, cahebebe, almas-ali


python-tgpt's Issues

Autonaming Chats and some minor issues.

I think python-tgpt is going to be big. Great work so far. Here are a few suggestions for improvements:

  1. -s mode should have conversation flag disabled by default. While the optimiser for the -s mode was good enough for ChatGPT, LLama does a funny thing by responding with 'Sure! Here's the thing you need ... blah blah blah' before the actual answer. I found the following addition to the optimiser useful for the -s mode : “Respond in only one line. You are allowed to use semi-colon to combine commands to respond in a single line. Do not include any line starting with 'Sure' in your response.”
  2. I was wondering if we could have automatic naming of conversations rather than specifying the chat file manually every time we have a chat. We would need the following:
  • A command such as tgpt new chat_name which creates a file named chat_name in the cache and will set this file to the current active chat.
  • If the command tgpt new is called without the argument chat_name, a temporary chat file will be created in the cache. After the first prompt (it could be the second or the third prompt as well, tell me what you think would be a good choice) a request will be sent to the model asking it to name the chat. Something like this : “Suggest a filename for the chat so far. Make sure there are no spaces in the filename.” The model's response will have to be post-processed (remove the 'Sure! Here is the thing ... blah blah blah') to get the chat_name. This chat_name will now become the current active chat.
  • We might need to check if the file with the same name exists in the cache and if so modify the name to avoid overwriting existing chats.
  • One could use a command such as tgpt select chat_name to select a chat from the existing chats to continue the same. One could use it in conjunction with a utility such as fzf to get a menu to select chats from. (One can literally use ls .cache/tgpt/ | fzf and select the file from the menu). We could in theory sort the items in the cache by file modification times to have the latest chats at the top. Since fzf supports search it would make searching chats very easy if the cache gets too large.
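The overwrite check suggested above could be sketched like this (unique_chat_path is a hypothetical helper, not existing pytgpt or tgpt code):

```python
import tempfile
from pathlib import Path


def unique_chat_path(cache_dir, name):
    """Return cache_dir/name.txt, appending a counter if that file already exists."""
    candidate = cache_dir / f"{name}.txt"
    counter = 1
    while candidate.exists():
        candidate = cache_dir / f"{name}_{counter}.txt"
        counter += 1
    return candidate


with tempfile.TemporaryDirectory() as tmp:
    cache = Path(tmp)
    first = unique_chat_path(cache, "rust_questions")
    first.touch()  # simulate an existing chat with this name
    second = unique_chat_path(cache, "rust_questions")

print(first.name)   # rust_questions.txt
print(second.name)  # rust_questions_1.txt
```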

Phind

Provider Phind is not working (for me at least) for the last couple of days. Andrew at tgpt-golang has I think implemented Phind directly without g4f. Perhaps we could do the same. Phind is very good for realtime updates and news.

ERROR : 'choices'

I am working on a project which requires to run pytgpt inside a docker container, but I am getting this strange error:- ERROR : 'choices'

On doing exec inside the container, I am able to run pytgpt (indicates that is successfully installed), but on asking some question, I am getting the same error:- ERROR : 'choices'


It's running on my local environment, but won't work inside a docker container.

I am using Phind as LLM provider, example is given below:-

import pytgpt.phind as phind
llm = phind.PHIND()

try:
    data = llm.ask("Hello, how are you?") #getting the error here
    print(data)
    print(llm.get_message(data))

except Exception as e:
    print(e)

For reference, this is my Dockerfile

ARG PYTHON_VERSION=3.11
FROM python:${PYTHON_VERSION}-slim as base

# Prevents Python from writing pyc files.
ENV PYTHONDONTWRITEBYTECODE=1

# Keeps Python from buffering stdout and stderr to avoid situations where
# the application crashes without emitting any logs due to buffering.
ENV PYTHONUNBUFFERED=1

# Install system-level dependencies
RUN apt-get update -y \
    && apt-get upgrade -y \
    && apt-get install -y --no-install-recommends gcc build-essential libpython3-dev libleptonica-dev tesseract-ocr libtesseract-dev python3-pil tesseract-ocr-eng tesseract-ocr-script-latn \
    && rm -rf /var/lib/apt/lists/*

# Create a non-privileged user that the app will run under.
ARG UID=10001
RUN adduser \
    --disabled-password \
    --gecos "" \
    --home "/home/bot" \
    --shell "/sbin/nologin" \
    --uid "${UID}" \
    bot

# Create the home directory and set permissions
RUN chown bot:bot /home/bot

# Download dependencies as a separate step to take advantage of Docker's caching.
# Leverage a cache mount to /root/.cache/pip to speed up subsequent builds.
# Leverage a bind mount to requirements.txt to avoid having to copy it
# into this layer.
RUN --mount=type=cache,target=/root/.cache/pip \
    --mount=type=bind,source=requirements.txt,target=requirements.txt \
    pip install --upgrade -r requirements.txt

# Switch to the non-privileged user and set the working directory
USER bot
WORKDIR /home/bot

# Copy the source code into the container.
COPY . .

# Run the application.
CMD ["python", "src/main.py"]

My requirements.txt

python-tgpt[all]
requests

Errors with rawdog

On both systems I use python3.X

Result on Linux:

:~$ pytgpt generate -n -q "Visualize the disk usage using pie chart" --rawdog
Error Occurred: while running 'python --version'
/bin/sh: 1: python: not found

Result on Windows:

>pytgpt generate -n -q "Visualize the disk usage using pie chart" --rawdog
10:46:48 - ERROR : Exception - 'choices'

More models

I recently tried chat.lmsys.org. They have a lot of models. They are collecting large chat datasets for research purposes. You can try to integrate it if you can. You can even get in touch with the team as they need more users.

Internet search is not working

╭─[temp@pyTGPT](phind)~[🕒18:39:02-💻00:00:00-⚡0.0s]
╰─>list all opensource proxies from internet
╭────────────────────────────────────────────────────────────────────────────────────── AI Response ───────────────────────────────────────────────────────────────────────────────────────╮
│ SEARCH {"query": "list of open source proxies"}                                                                                                                                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─[temp@pyTGPT](phind)~[🕒18:39:17-💻00:00:14-⚡2.8s]
╰─>search on internet about opensource proxies
╭────────────────────────────────────────────────────────────────────────────────────── AI Response ───────────────────────────────────────────────────────────────────────────────────────╮
│ SEARCH {"query": "list of open source proxies"}                                                                                                                                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─[temp@pyTGPT](phind)~[🕒18:39:53-💻00:00:50-⚡3.8s]
╰─>

internet search is not working using latest version

PS C:\Users\temp> pytgpt -v
pytgpt, version 0.5.3
PS C:\Users\temp>

how to control how many results are being used while responding?

Tokens repeating at the end with openai provider.

Tokens are repeating at the end for all queries with openai as the provider using api key.
Here is the error that is occurring.

Switched provider to openai.
>
Write an essay in 3000 words on the benefits of integration and education amongst tribals.
><
Introduction:
Tribal communities have long been marginalized and excluded from mainstream society, facing
discrimination and inequality in various aspects of life. However, integration and education can play
a vital role in bridging the gap between tribal and non-tribal communities, promoting social
cohesion, and empowering tribal individuals to reach their full potential. In this essay, we will
explore the benefits of integration and education amongst tribals, highlighting the positive impact
it can have on their socio-economic status, cultural preservation, and overall well-being.

Socio-Economic Benefits:
1. Improved Employment Opportunities: Education and integration can open up new avenues for
employment opportunities for tribal individuals. With access to quality education, they can acquire
skills and knowledge that are in demand in the job market, enabling them to secure better-paying jobs
and improve their socio-economic status.
2. Economic Empowerment: Tribal communities have traditionally been dependent on agriculture, forest
produce, and other natural resources for their livelihood. However, with education and integration,
they can explore new economic opportunities, such as entrepreneurship, vocational training, and
self-employment, which can help them achieve economic stability and independence.
3. Access to Healthcare and Social Services: Integration can provide tribal communities, and, and,
and, and, and, and, and, and, and, and, and, and, and, and, and, and, and, and, and, and, and, and,
and, and, and, and, and, and, and, and, and, and, and, and, and,

This needs to be fixed. This is happening every time for every prompt.

Bard-API

(The Bard provider does not work for me, that's why I am investigating this.)

GoogleBard==1.4.0

Here I wonder why version 1.4, while the latest is 2.1.
What is also important: the current GoogleBard lib is unmaintained (Public archive).

There is another package - bardapi.
I have checked, and it is working for me, but only when I am using 3 cookies at once, like in this example.
Just __Secure-1PSID is not enough.

Pyperclip error

I am getting the following error after the last update.
- ERROR : Exception - Pyperclip could not find a copy/paste mechanism for your system. For more information, please visit https://pyperclip.readthedocs.io/en/latest/index.html#not-implemented-error

It was working fine in the version 0.2.2 I had before.

Folder /usr/lib/pytgpt/gpt4all/ with all its files is missing from 0.6.3 .deb packages!

Traceback (most recent call last):
File "PyInstaller/loader/pyimod03_ctypes.py", line 53, in __init__
File "ctypes/__init__.py", line 376, in __init__
OSError: /usr/lib/pytgpt/gpt4all/llmodel_DO_NOT_MODIFY/build/libllmodel.so: cannot open shared object file: No such file or directory

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "main.py", line 3, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 419, in exec_module
File "pytgpt/__init__.py", line 2, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 419, in exec_module
File "g4f/__init__.py", line 6, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 419, in exec_module
File "g4f/models.py", line 5, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 419, in exec_module
File "g4f/Provider/__init__.py", line 42, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 419, in exec_module
File "g4f/Provider/Local.py", line 5, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 419, in exec_module
File "g4f/locals/provider.py", line 5, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 419, in exec_module
File "gpt4all/__init__.py", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 419, in exec_module
File "gpt4all/gpt4all.py", line 18, in <module>
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "PyInstaller/loader/pyimod02_importers.py", line 419, in exec_module
File "gpt4all/_pyllmodel.py", line 38, in <module>
File "gpt4all/_pyllmodel.py", line 28, in load_llmodel_library
File "PyInstaller/loader/pyimod03_ctypes.py", line 55, in __init__
pyimod03_ctypes.install.<locals>.PyInstallerImportError: Failed to load dynlib/dll '/usr/lib/pytgpt/gpt4all/llmodel_DO_NOT_MODIFY/build/libllmodel.so'. Most likely this dynlib/dll was not found when the application was frozen.
[46543] Failed to execute script 'main' due to unhandled exception!

Leo always answers the same message

The Leo provider always returns this result

:~$ pytgpt generate --provider leo "1+1?"

╭──────────────────── AI Response ─────────────────────╮
│ Hi there, my name is Leo! I can help with all sorts  │
│ of tasks. I can create real-time summaries of        │
│ webpages or videos, answer questions about content,  │
│ or generate new content. I can even translate pages, │
│ analyze them, rewrite them, and more. By using me on │
│ Brave browser, you can get help and answers quickly  │
│ and easily, without having to leave the page you're  │
│ on. Try using Brave today!                           │
╰──────────────────────────────────────────────────────╯

Is vlc really required?

After updating to the latest version I see the following error message:

FileNotFoundError: Could not find module 'D:\AI\python-tgpt\libvlc.dll' (or one of its dependencies). Try using the full path with constructor syntax.

Is it possible to make this dependency optional?

Exit code when no network available or similar errors

I was trying tgpt today and encountered the following error as my wifi was off.

10:20:19 - ERROR : HTTPSConnectionPool(host='ai-chat.bsg.brave.com', port=443): Max retries exceeded with url: /v1/complete (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x8f61dd59rcd0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))

When such an error occurs (when host can’t be reached) tgpt should not return an exit code of 0. This affected a script it was being used in.
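What the report asks for can be sketched as follows: catch the connection error, log it, and hand a non-zero status to sys.exit() so calling scripts can branch on $?. This is illustrative, not pytgpt's current code, and the ConnectionError here is simulated rather than raised by a real request:

```python
import sys


def run_with_exit_code(simulate_failure=True):
    """Return 0 on success and 1 on connection failure, suitable for sys.exit()."""
    try:
        if simulate_failure:
            # Stand-in for the real HTTP request failing when the host is unreachable.
            raise ConnectionError("Failed to establish a new connection")
    except ConnectionError as exc:
        print(f"ERROR : {exc}", file=sys.stderr)
        return 1
    return 0


exit_code = run_with_exit_code()
print(exit_code)  # 1
# A CLI entry point would then finish with: sys.exit(exit_code)
```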

More customisation for chat file intros

Can there be a flag --remove-intro-from-chat which will not save the intro (or system prompt) in the .txt chat history file? That makes the chats more readable from the users perspective. We could write something like:
pytgpt generate --intro="$intro" --remove-intro-from-chat "Prompt"
which would USE the intro together with the chat history but just won't write the intro to the chat history file. To continue the chat one would then have to use the --intro flag each time.
This has a lot of use cases if one wants dynamic intro's, we could just pass a variable $intro at different times when calling in an LLM on the same chat.

[Llama2] newline handling problem

Newlines are incorrectly handled by this library with the llama2 provider.

If stream is off: no newlines will ever appear.
If stream is on: the AI response will repeat itself a bunch of times in random places, causing issues with character limits.

This could be fixed.

Additional providers

Llama 2

Tgpt-golang has added Llama 2 as a new model. We should add it here as well, since Llama2 via g4f seems to be broken. Also, fakeopen has shut down.

Available voice strings

Hi

Is there a way to see the available voice strings used by the API?

The default is en-US-Wavenet-C.
es-ES-Standard-A works, but es-ES-Standard-C, for example, does not.

Another option for disabling conversation

I have lately realised that there should be a flag --dont-overwrite that reads the context from an existing chat file without saving the output to it (i.e. without overwriting the chat file). Right now, the flag --disable-conversation disables reading the input chat file entirely.
This has interesting use cases in scripts where one would like to keep the chat as is while executing new commands using the context of the chat so far. For example:
pytgpt generate --dont-overwrite --filepath abc.txt "Summarise the fields in the chat so far and write them out as JSON." > abc.json

Phind content error

Asking Phind realtime questions gives the following error:
19:30:31 - ERROR : 'content'
For example, prompting pytgpt with What was the stock price for Nvidia yesterday? gives the above error.

This seems to be working fine with tgpt-golang. For the same prompt above tgpt-golang gives the answer:
The closing price for Nvidia on January 31, 2024, was $615.27 [0][1].

I think it's an issue with links: whenever Phind cites links to the information (like [0][1] above), it leads to an error. If, on the other hand, the question is a simple one like Who are you?, where the answer does not cite any links, it answers fine.
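A hedged sketch of one possible fix: tolerate streamed chunks that carry no `content` key (the chunk shape below is an assumption for illustration, not Phind's documented format):

```python
def extract_text(chunks) -> str:
    """Join the text of streamed response chunks, skipping metadata-only
    chunks (e.g. link citations) that carry no "content" key."""
    return "".join(
        chunk.get("content", "") for chunk in chunks if isinstance(chunk, dict)
    )
```

Using `dict.get` instead of `chunk['content']` turns the hard `KeyError: 'content'` into a silently skipped chunk; the citation markers could then be handled separately.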

Is it possible to execute the command `pytgpt gpt4free test -y` without using Google Drive or Selenium?

To facilitate a quick and straightforward verification process, we could leverage libraries such as requests, httpx, httpio, etc. This approach would bypass the excessive logging and the potential lack of feedback in the terminal when errors are encountered.

[BROKEN]:  OpenaiChat, Error: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\temp\\appdata\\roaming\\undetected_chromedriver\\undetected\\chromedriver-win32\\chromedriver.exe' -> 'C:\\Users\\temp\\appdata\\roaming\\undetected_chromedriver\\undetected_chromedriver.exe'
[BROKEN]:  AItianhuSpace, Error: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\temp\\appdata\\roaming\\undetected_chromedriver\\undetected\\chromedriver-win32\\chromedriver.exe' -> 'C:\\Users\\temp\\appdata\\roaming\\undetected_chromedriver\\undetected_chromedriver.exe'
[BROKEN]:  Gemini, Error: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\temp\\appdata\\roaming\\undetected_chromedriver\\undetected\\chromedriver-win32\\chromedriver.exe' -> 'C:\\Users\\temp\\appdata\\roaming\\undetected_chromedriver\\undetected_chromedriver.exe'
[BROKEN]:  Pi, Error: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\temp\\appdata\\roaming\\undetected_chromedriver\\undetected\\chromedriver-win32\\chromedriver.exe' -> 'C:\\Users\\temp\\appdata\\roaming\\undetected_chromedriver\\undetected_chromedriver.exe'
[BROKEN]:  Poe, Error: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\temp\\appdata\\roaming\\undetected_chromedriver\\undetected\\chromedriver-win32\\chromedriver.exe' -> 'C:\\Users\\temp\\appdata\\roaming\\undetected_chromedriver\\undetected_chromedriver.exe'
[BROKEN]:  GptChatly, Error: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\temp\\appdata\\roaming\\undetected_chromedriver\\undetected\\chromedriver-win32\\chromedriver.exe' -> 'C:\\Users\\temp\\appdata\\roaming\\undetected_chromedriver\\undetected_chromedriver.exe'
[BROKEN]:  Theb, Error: [WinError 183] Cannot create a file when that file already exists: 'C:\\Users\\temp\\appdata\\roaming\\undetected_chromedriver\\undetected\\chromedriver-win32\\chromedriver.exe' -> 'C:\\Users\\temp\\appdata\\roaming\\undetected_chromedriver\\undetected_chromedriver.exe'
[BROKEN]:  ChatgptAi, Error: 404, message='Not Found', url=URL('https://chatgate.ai/wp-json/mwai-ui/v1/chats/submit')
[BROKEN]:  AiChatOnline, Error: 522, message='', url=URL('https://aichatonline.org/chatgpt/wp-json/mwai-ui/v1/chats/submit')

(Linux) I can't install the deb package and the pip package doesn't work

I removed the pip package and the deb version, then tried to install the deb package, but it gives me an error:

:~$ sudo dpkg -i pytgpt-linux-amd64.deb
(Reading database ... 148587 files and directories currently installed.)
Preparing to unpack pytgpt-linux-amd64.deb ...
The file list /usr/lib/pytgpt/entries.txt does not exist. Nothing to do.
Unpacking pytgpt (0.4.0) over (0.4.0) ...

Submit any bug at <https://github.com/Simatwa/python-tgpt/issues/new>
Setting up pytgpt (0.4.0) ...
To get the most out of this script, install it from pypi <pip install python-tgpt>
Submit any bug at https://github.com/Simatwa/python-tgpt/issues/new
Processing triggers for mime-support (3.64ubuntu1) ...

pip install python-tgpt

:~$ pip install python-tgpt
Defaulting to user installation because normal site-packages is not writeable
Collecting python-tgpt
  Using cached python_tgpt-0.4.0-py3-none-any.whl.metadata (19 kB)
Requirement already satisfied: requests==2.28.2 in ./.local/lib/python3.11/site-packages (from python-tgpt) (2.28.2)
Requirement already satisfied: appdirs==1.4.4 in ./.local/lib/python3.11/site-packages (from python-tgpt) (1.4.4)
Requirement already satisfied: pyyaml==6.0.1 in ./.local/lib/python3.11/site-packages (from python-tgpt) (6.0.1)
Requirement already satisfied: charset-normalizer<4,>=2 in ./.local/lib/python3.11/site-packages (from requests==2.28.2->python-tgpt) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in /usr/lib/python3/dist-packages (from requests==2.28.2->python-tgpt) (2.8)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in ./.local/lib/python3.11/site-packages (from requests==2.28.2->python-tgpt) (1.26.18)
Requirement already satisfied: certifi>=2017.4.17 in ./.local/lib/python3.11/site-packages (from requests==2.28.2->python-tgpt) (2023.11.17)
Using cached python_tgpt-0.4.0-py3-none-any.whl (55 kB)
Installing collected packages: python-tgpt
Successfully installed python-tgpt-0.4.0

pytgpt -v

:~$ pytgpt -v
Traceback (most recent call last):
  File "/home/test/.local/bin/pytgpt", line 5, in <module>
    from pytgpt.console import main
  File "/home/test/.local/lib/python3.11/site-packages/pytgpt/__init__.py", line 2, in <module>
    import g4f
  File "/home/test/.local/lib/python3.11/site-packages/g4f/__init__.py", line 6, in <module>
    from .models   import Model, ModelUtils
  File "/home/test/.local/lib/python3.11/site-packages/g4f/models.py", line 3, in <module>
    from .Provider   import RetryProvider, ProviderType
  File "/home/test/.local/lib/python3.11/site-packages/g4f/Provider/__init__.py", line 7, in <module>
    from .deprecated      import *
  File "/home/test/.local/lib/python3.11/site-packages/g4f/Provider/deprecated/__init__.py", line 18, in <module>
    from .Aibn          import Aibn
  File "/home/test/.local/lib/python3.11/site-packages/g4f/Provider/deprecated/Aibn.py", line 7, in <module>
    from ...requests import StreamSession
  File "/home/test/.local/lib/python3.11/site-packages/g4f/requests.py", line 6, in <module>
    from curl_cffi.requests import Session
  File "/home/test/.local/lib/python3.11/site-packages/curl_cffi/requests/__init__.py", line 29, in <module>
    from .session import AsyncSession, BrowserType, Session
  File "/home/test/.local/lib/python3.11/site-packages/curl_cffi/requests/session.py", line 30, in <module>
    import eventlet.tpool
  File "/home/test/.local/lib/python3.11/site-packages/eventlet/__init__.py", line 6, in <module>
    from eventlet import convenience
  File "/home/test/.local/lib/python3.11/site-packages/eventlet/convenience.py", line 7, in <module>
    from eventlet.green import socket
  File "/home/test/.local/lib/python3.11/site-packages/eventlet/green/socket.py", line 21, in <module>
    from eventlet.support import greendns
  File "/home/test/.local/lib/python3.11/site-packages/eventlet/support/greendns.py", line 78, in <module>
    setattr(dns, pkg, import_patched('dns.' + pkg))
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/.local/lib/python3.11/site-packages/eventlet/support/greendns.py", line 60, in import_patched
    return patcher.import_patched(module_name, **modules)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/.local/lib/python3.11/site-packages/eventlet/patcher.py", line 132, in import_patched
    return inject(
           ^^^^^^^
  File "/home/test/.local/lib/python3.11/site-packages/eventlet/patcher.py", line 109, in inject
    module = __import__(module_name, {}, {}, module_name.split('.')[:-1])
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/.local/lib/python3.11/site-packages/dns/asyncquery.py", line 38, in <module>
    from dns.query import (
  File "/home/test/.local/lib/python3.11/site-packages/dns/query.py", line 64, in <module>
    import httpcore
  File "/home/test/.local/lib/python3.11/site-packages/httpcore/__init__.py", line 1, in <module>
    from ._api import request, stream
  File "/home/test/.local/lib/python3.11/site-packages/httpcore/_api.py", line 5, in <module>
    from ._sync.connection_pool import ConnectionPool
  File "/home/test/.local/lib/python3.11/site-packages/httpcore/_sync/__init__.py", line 1, in <module>
    from .connection import HTTPConnection
  File "/home/test/.local/lib/python3.11/site-packages/httpcore/_sync/connection.py", line 12, in <module>
    from .._synchronization import Lock
  File "/home/test/.local/lib/python3.11/site-packages/httpcore/_synchronization.py", line 11, in <module>
    import trio
  File "/home/test/.local/lib/python3.11/site-packages/trio/__init__.py", line 20, in <module>
    from ._core import TASK_STATUS_IGNORED as TASK_STATUS_IGNORED  # isort: split
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/test/.local/lib/python3.11/site-packages/trio/_core/__init__.py", line 21, in <module>
    from ._local import RunVar, RunVarToken
  File "/home/test/.local/lib/python3.11/site-packages/trio/_core/_local.py", line 9, in <module>
    from . import _run
  File "/home/test/.local/lib/python3.11/site-packages/trio/_core/_run.py", line 2762, in <module>
    from ._io_epoll import (
  File "/home/test/.local/lib/python3.11/site-packages/trio/_core/_io_epoll.py", line 202, in <module>
    class EpollIOManager:
  File "/home/test/.local/lib/python3.11/site-packages/trio/_core/_io_epoll.py", line 203, in EpollIOManager
    _epoll: select.epoll = attr.ib(factory=select.epoll)
                                           ^^^^^^^^^^^^
AttributeError: module 'eventlet.green.select' has no attribute 'epoll'

"Patching driver executable" about undetected_chromedriver

When using some of the providers, the system logs the following line, then nothing else happens, and the process exits.

16:54:44 - INFO : patching driver executable /home/eyeberic/.local/share/undetected_chromedriver/undetected_chromedriver

Version in use: pytgpt, version 0.3.9

Is this expected behaviour?
I have also tried uninstalling python-tgpt and g4f via pip and then reinstalling them, but the behaviour is the same.

About background of code blocks

Is there a way to have syntax highlighting without having black background (or another theme background for that matter)? Most terminal users usually have terminal transparency enabled just for the looks of it. Perhaps there could be a flag that keeps the syntax highlighting but disables the background.
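Assuming the rendering is done with the rich library, one hedged way to achieve this is rich's `background_color="default"` option, which keeps the theme's token colours but lets the terminal's own (possibly transparent) background show through. The helper below is illustrative, not pytgpt's actual renderer:

```python
import io

from rich.console import Console
from rich.syntax import Syntax


def render_code(code: str, lang: str = "python") -> str:
    """Syntax-highlight code without painting a theme background."""
    syntax = Syntax(code, lang, theme="monokai", background_color="default")
    buf = io.StringIO()
    Console(file=buf, force_terminal=False).print(syntax)
    return buf.getvalue()
```

A `--no-code-background` style flag could simply toggle this parameter.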

[BUG]: In macOS, interactive mode prompt garbles if terminal less than 130 characters wide.

Describe the bug
On macOS, in terminals such as uxterm, iTerm2, Terminal, and many more, the prompt of the command "python3 -m pytgpt interactive" garbles if the terminal is less than 130 characters wide.

To Reproduce
Steps to reproduce the behavior:

  1. On macOS, Run "pip3 install python-tgpt" to install python-tgpt.
  2. Open another terminal window, 130 character or wider.
  3. Run "python3 -m pytgpt interactive"
  4. You would see proper prompt, it would look like:
  5. ╭─User@pyTGPT~[🕒00:00:00-💻00:00:00-⚡0.0s]
    ╰─>
  6. Open another terminal window, (much) less than 130 characters.
  7. Run "python3 -m pytgpt interactive"
  8. You would see garbled prompt, looking like:
  9. "00:00:00-💻00:00:00-⚡0.0s]"
  10. ╰─>
  11. Note that the part "00:00:00-💻00:00:00-⚡0.0s]" is exactly 31 characters long when echoed,
    and does not garble until the terminal is less than 65 characters wide.

Expected behavior
If a terminal on macOS is less than 130 characters wide and "python3 -m pytgpt interactive" is run in it under "bash", the prompt of that command should wrap properly instead of folding on itself, as it does on other systems!


Desktop (please complete the following information):

  • OS [e.g. Windows, Mac OS, Linux]: macOS 10.12.6
  • Python-tgpt version [v0.5.3]: 0.6.3
  • Binary config [x86, x64, x32]: x64 mach-o
  • Python version [If running from source]: 3.11.6 from MacPorts


Workaround
It is caused by colour codes mishandled by a faulty-compiled Python build and, maybe, something inside "pytgpt".
Run "pytgpt" with the "-nc" flag, and the issue goes away.

Bard Session ID

How do I add the session_id or the .json file to the command?

:~$ pytgpt generate --provider bard "1+1?"
15:49:17 - ERROR : Bard's session-id or path to bard.google.com.cookies.json file is required
Quitting

with_copied in generate mode.

I was adding features to my script and thought it would be a nice addition to have a with_copied flag for generate mode as well. It could go something like this:
pytgpt generate --with-copied
This should pass the content of the clipboard as the prompt.
One could also have something like:
pytgpt generate --with-copied "Prompt to be appended at the end of the text from clipboard"
For example, if my clipboard contained a Python program, we could type something like:
pytgpt generate --with-copied "Explain this code"
and it would send the whole thing (clipboard followed by the prompt "Explain this code") as the prompt.
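A minimal sketch of how such a flag could compose the prompt (pyperclip is an assumed clipboard backend, and both function names are hypothetical):

```python
def build_prompt(copied: str, extra: str = "") -> str:
    """Clipboard text first, with the optional positional prompt appended."""
    return f"{copied}\n\n{extra}" if extra else copied


def prompt_from_clipboard(extra: str = "") -> str:
    """Read the clipboard and build the final prompt for --with-copied."""
    import pyperclip  # assumed dependency; needs a system clipboard backend

    return build_prompt(pyperclip.paste(), extra)
```

Keeping the composition separate from the clipboard read makes the behaviour easy to test without a real clipboard.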

Support streaming to pipe

Describe the bug
When used in a terminal, text streaming works as it should, but when the output is piped, it shows up all at once.

To Reproduce

pytgpt generate --quiet --disable-conversation "Write a short story"|less

Expected behavior
When I pipe other utilities to less, I see output appear line by line.

Environment:

  • os version: Windows 10
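This is most likely a buffering issue: stdout is line-buffered on a terminal but block-buffered when piped, so a sketch of a fix is to flush after every streamed chunk (illustrative, not pytgpt's actual writer):

```python
import sys


def stream_out(chunks, stream=None) -> None:
    """Write chunks as they arrive, flushing so a pipe sees them immediately."""
    stream = stream or sys.stdout
    for chunk in chunks:
        stream.write(chunk)
        stream.flush()  # without this, a pipe buffers several KB before delivering
```

With an explicit flush per chunk, `pytgpt ... | less` would receive text incrementally, just like a terminal does.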

Do not force-wrap text when output to file (or piping)

Describe the bug
When text is output to a file, it is always wrapped at position 79.
Lines shorter than 79 characters are padded with spaces until they reach this length.

To Reproduce

pytgpt generate --quiet --disable-conversation "Write a short story">output.md

Expected behavior
I expect that the output should not be reformatted/padded in any way.
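A hedged sketch of one common fix: apply width-based formatting only when stdout is an interactive terminal (the helper name and parameters are illustrative):

```python
import sys
import textwrap


def emit(text: str, width: int = 79, interactive=None) -> str:
    """Wrap text for a terminal; pass it through untouched for a file or pipe."""
    if interactive is None:
        interactive = sys.stdout.isatty()
    if interactive:
        return "\n".join(textwrap.fill(line, width) for line in text.splitlines())
    return text  # no wrapping and no padding when the output is redirected
```

`sys.stdout.isatty()` is false for both `> output.md` and `| less`, so both cases would get the raw text.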

Tearing on terminal

I have been experiencing tearing on the terminal when using python-tgpt, and it has become worse with the latest update. The output on the terminal completely vanishes once generation is complete. Maybe the rate at which the generation renders needs to be lowered. You could also provide an option to save the entire response (including markdown themes) to a buffer and then echo the output all at once (that is what the -w flag does in the original tgpt).
I am using the Xfce terminal. I checked the same on Guake and saw the same issue, so it is not terminal-specific.

Can we have a global setting or model-specific settings along with the model choice?

Hi @Simatwa,

Just some ideas, if possible to implement.

Can we have options to set/show the system message, temperature, default model, etc., to avoid long command-line arguments?

I have found ollama's user interface very easy in terms of setting parameters and system messages.

Reference: https://github.com/ollama/ollama/blob/main/cmd/interactive.go#L110-L172

If we have it in pytgpt it would be great.

Multi-line input (using 0.5.3)

When attempting to input a multi-line request, it gets sliced into lines and each line turns into its own request! Could a command-line option be added to enable multi-line input?
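One possible shape for such an option, sketched below, is to collect lines until a blank line (or EOF) and send them as a single request (hypothetical helper, not current pytgpt behaviour):

```python
def read_multiline(lines) -> str:
    """Accumulate input lines into one prompt, stopping at a blank line."""
    collected = []
    for line in lines:
        if line.strip() == "":
            break
        collected.append(line.rstrip("\n"))
    return "\n".join(collected)
```

In interactive mode this could sit behind a `--multiline` flag, with the existing one-line-per-request behaviour as the default.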

pytgpt gpt4free update breaking functioning in termux

After running "pytgpt gpt4free update" in Termux on Android, it is no longer possible to use pytgpt.
The error message (related to the webview package):

~ $ pytgpt
Traceback (most recent call last):
  File "/data/data/com.termux/files/usr/bin/pytgpt", line 5, in <module>
    from pytgpt.console import main
  File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/pytgpt/__init__.py", line 2, in <module>
    import g4f
  File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/g4f/__init__.py", line 6, in <module>
    from .models import Model
  File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/g4f/models.py", line 5, in <module>
    from .Provider import RetryProvider, ProviderType
  File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/g4f/Provider/__init__.py", line 8, in <module>
    from .deprecated import *
  File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/g4f/Provider/deprecated/__init__.py", line 1, in <module>
    from .AiService import AiService
  File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/g4f/Provider/deprecated/AiService.py", line 6, in <module>
    from ..base_provider import AbstractProvider
  File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/g4f/Provider/base_provider.py", line 3, in <module>
    from .helper import get_cookies, format_prompt
  File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/g4f/Provider/helper.py", line 3, in <module>
    from ..requests.aiohttp import get_connector
  File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/g4f/requests/__init__.py", line 12, in <module>
    import webview
  File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/webview/__init__.py", line 34, in <module>
    from webview.window import Window
  File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/webview/window.py", line 72, in <module>
    class Window:
  File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/webview/window.py", line 259, in Window
    def load_html(self, content: str, base_uri: str = base_uri()) -> None:
                                                      ^^^^^^^^^^
  File "/data/data/com.termux/files/usr/lib/python3.11/site-packages/webview/util.py", line 96, in base_uri
    if not os.path.exists(base_path):
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<frozen genericpath>", line 19, in exists
TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType

Standard input and standard output

The output of the command should not be printed with the "AI response" title and frame by default, because one might want to pipe the output text in a script. You could, however, add a flag to print the output that way.

The command tgpt2, like tgpt, should ideally also take input from standard input and print results to standard output.
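A minimal sketch of the stdin side of this, assuming a helper that prefers the positional argument and otherwise reads a piped-in prompt (names hypothetical):

```python
import sys


def resolve_prompt(arg=None) -> str:
    """Use the positional argument if given; otherwise read a piped prompt."""
    if arg:
        return arg
    if not sys.stdin.isatty():
        return sys.stdin.read().strip()
    raise SystemExit("No prompt given on stdin or the command line")
```

This enables usage like `cat notes.txt | tgpt2` alongside `tgpt2 "Hello"`.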

Also, it would be wise if the default behaviour were similar to tgpt, so we could write something like:
tgpt2 "Hello"
without having to type the generate command and -P flags every time. It's really inconvenient typing it all again and again.

The other default options of tgpt, like the --quiet flag and the -w and -c options, should also be mimicked.

I really like the idea of controlling parameters like temperature and Brave keys. I think tgpt2 has the potential to be a really nice project.
