marcolardera / chatgpt-cli
Simple yet effective command line client for chatting with ChatGPT using the official API
License: MIT License
$ python chatgpt.py --model gpt-4
ChatGPT CLI
Model in use: gpt-4
───
[0] >>> How are you?
Unknown error, status code 404
{'error': {'message': 'The model: `gpt-4` does not exist', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
Total tokens used: 0
Estimated expense: $0.0
This tool is wonderful, btw. Thanks for building it!
I had a great series of prompts going and then I sent one more, and the program failed after returning the following fault:
Maximum context length exceeded
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/home/<username>/Documents/chatgpt-cli/chatgpt.py", line 79, in display_expense
PRICING_RATE[model]["prompt"],
KeyError: 'gpt-3.5-turbo-0301'
Unfortunately, it lost the context that had accumulated. In general, is there a way for the API to remember sessions used in chatgpt-cli, or is the API essentially an atomic session (a "chat"?) that is gone after the program ends?
Again, thanks for what you do!
ASH
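The KeyError above could be avoided with a defensive lookup in the expense handler — a minimal sketch, assuming PRICING_RATE is a plain dict keyed by model name (the rates below are placeholders, not the project's real table):

```python
# Placeholder pricing table; the real PRICING_RATE lives in chatgpt.py
PRICING_RATE = {"gpt-3.5-turbo": {"prompt": 0.002, "completion": 0.002}}

def estimate_expense(model, prompt_tokens, completion_tokens):
    # Unknown models (e.g. dated snapshots like gpt-3.5-turbo-0301) return
    # no estimate instead of raising KeyError inside the atexit handler
    rate = PRICING_RATE.get(model)
    if rate is None:
        return None
    # Rates are per 1000 tokens
    return (prompt_tokens * rate["prompt"]
            + completion_tokens * rate["completion"]) / 1000
```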
Hi,
I like your minimalistic approach a lot! But the lack of a few configurable parameters made me switch to https://github.com/j178/chatgpt. If you could add the following parameters to the yaml file:
"prompts": {
    "default": "You are a helpful assistant",
    "pirate": "You are pirate Blackbeard. Arr matey!"
},
"conversation": {
    "prompt": "default",
    "stream": true,
    "max_tokens": 1024,
    "temperature": 0
}
I would be happy to switch back! Basically, this is adding the following functionalities:
--context <FILE PATH>
stream
allows the tokens to be sent as they become available, rather than all at once at the end of the reply. This makes quite a difference with long responses and slower models such as GPT-4.
max_tokens
is self-explanatory 🙂 and it also makes quite a difference when using GPT-4.
temperature
set to 0 allows deterministic responses (fundamental for reproducibility). From 0 to 2, it allows increasingly more creative but also less focused responses.
These are very simple modifications: you just need to read them from the yaml file and add them as extra parameters when posting the request. Thanks!
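A minimal sketch of how those parameters could be forwarded (the function and config names are hypothetical; the parameter names are assumed to match the OpenAI chat completions API):

```python
def build_payload(model, messages, conversation_config):
    # Mandatory fields of a chat completions request
    body = {"model": model, "messages": messages}
    # Merge the optional knobs read from the yaml config, when present
    for key in ("stream", "max_tokens", "temperature"):
        if key in conversation_config:
            body[key] = conversation_config[key]
    return body

payload = build_payload(
    "gpt-3.5-turbo",
    [{"role": "user", "content": "hello"}],
    {"stream": True, "max_tokens": 1024, "temperature": 0},
)
```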
After following the instructions I get this error:
❯ python chatgpt.py
Traceback (most recent call last):
File "/src/chatgpt-cli/chatgpt.py", line 4, in <module>
import click
ModuleNotFoundError: No module named 'click'
I don't use python but I'm guessing I have to find/install a missing dependency. I'm not sure if there are more of these once I fix this one.
Action: I copy-paste Python code produced by chatgpt-cli
Expected behavior: it runs
Actual behavior: it's broken because of extra space indentation (perhaps coming from the highlighting).
How to fix this?
Thanks!
From time to time, I need GPT models to explain to me some mathematical works, while the cli tool displays formula as plain text:
[ R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} ]
Is there a way to render these formulas directly inside terminals? (I know there are tools that can already do this; the question is how to integrate one into this tool, I guess.)
On systems running macOS, this runs fine after the shebang line is changed. On macOS the correct shebang would be #!/usr/bin/env python3. After this change chatgpt-cli works perfectly.
Hi! We should make all the file patterns in .gitignore start with a slash, in order to make it predictable where they match. Example:
$ mkdir yolo
$ mkdir yolo/cover
$ touch yolo/cover/ohno
$ git status yolo/
On branch implement-yolo
nothing to commit, working tree clean
… which may cause accidental data loss in the ohno file. (Because the cover/ ignore rule matches in all subdirectories.)
The .history line should have a comment describing that it saves the prompt history, without replies.
I was successfully running an earlier version of chatgpt-cli, but revisited here and followed the instructions to install the latest version. But the installation doesn't seem to have worked successfully on my system (MacOS 13.4.1).
ttop@tsudio sources % pip install -U pip
Requirement already satisfied: pip in /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages (23.3.1)
ttop@tsudio sources % pip install git+https://github.com/marcolardera/chatgpt-cli
Collecting git+https://github.com/marcolardera/chatgpt-cli
Cloning https://github.com/marcolardera/chatgpt-cli to /private/var/folders/83/bcy3yyl95f7cn5kdlq8g5yb00000gn/T/pip-req-build-wq_3z1gc
Running command git clone --filter=blob:none --quiet https://github.com/marcolardera/chatgpt-cli /private/var/folders/83/bcy3yyl95f7cn5kdlq8g5yb00000gn/T/pip-req-build-wq_3z1gc
Resolved https://github.com/marcolardera/chatgpt-cli to commit b39eceb70346b10516821f37735adfe3fa2e96ff
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
ttop@tsudio sources % chatgpt-cli
zsh: command not found: chatgpt-cli
ttop@tsudio sources % which chatgpt-cli
chatgpt-cli not found
ttop@tsudio sources %
I can't locate any executable on my system named chatgpt-cli.
The only hit here is the local dir where I cloned the source:
ttop@tsudio sources % find ~ / -name chatgpt-cli 2>/dev/null
/Users/ttop/sources/chatgpt-cli
^C # aborted after a very long time passed
ttop@tsudio sources % find ~ /usr /opt -name chatgpt-cli 2>/dev/null
/Users/ttop/sources/chatgpt-cli
ttop@tsudio sources %
Apologies if I have done something incorrectly.
If possible, it would be nice to use OpenAI with an Assistant ID / Thread ID instead? If I'm not mistaken, this way it is possible to bypass the 4k token limit? It would also be great to have access to the different Assistants one might build up: a math teacher, "Jarvis", etc.
$ ./chatgpt.sh -c hi.txt -c cu.txt
ChatGPT CLI
Model in use: gpt-3.5-turbo
Context file: hi.txt
Context file: cu.txt
───────────────────────────────────────────────────────────────────────────
[0] >>> I requested my ChatGPT client shows you some text files for this
conversation. Did you receive any? Do they have names?
No, I'm sorry, but as an AI language model, I don't have the capability
to receive or view files. I can only generate text based on the input I
receive.
… then how do I refer to them?
Edit: For a moment I felt clever.
[0] >>> Did you receive some system message along with this question?
No, I did not receive any system message along with your question.
Is there anything else I can help you with?
To simplify using this client in scripts, I'd like a raw text mode that avoids all terminal control codes (color, cursor positioning, not sure what else).
Raw text mode would thus include disabling markdown rendering.
I found the markdown option, but even with that disabled, the model name and separator line are still in color.
Also I'd like a CLI option for raw text mode.
Also raw text mode shall be the default if standard output is not a TTY. (Even better if we can check termcap for color code support, but that's probably a feature that should be implemented upstream in Rich.)
Also raw text mode shall be the default when using a dumb terminal (environment variable TERM is set to dumb).
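Those three rules could be combined into one helper — a sketch with assumed names, taking the output stream and environment as arguments so the logic is testable:

```python
import os
import sys

def use_raw_mode(cli_flag=False, stream=None, env=None):
    # Raw mode when explicitly requested, when stdout is not a TTY
    # (e.g. piped into another program), or when TERM is set to dumb
    stream = stream or sys.stdout
    env = env if env is not None else os.environ
    if cli_flag:
        return True
    if not stream.isatty():
        return True
    return env.get("TERM", "") == "dumb"
```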
Is there a way to use the repo so that I can cat a bunch of text to it, rather than having it read one line at a time?
E.g. "Summarize this file" as context, I want to do cat file.txt | chatgpt-cli
-- but presently it treats each input line as a prompt
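Detecting piped input and consuming it as a single prompt could look roughly like this (the function name is hypothetical):

```python
import io
import sys

def read_prompt(stream=None):
    # If input is piped (not a TTY), consume the whole stream as a single
    # prompt; return None to signal that the interactive REPL should run
    stream = stream or sys.stdin
    if not stream.isatty():
        return stream.read()
    return None

# A piped file-like object is not a TTY, so it is read in one go
piped = read_prompt(io.StringIO("Summarize this file\nline two\n"))
```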
Traceback (most recent call last):
File "/Users/tiansu.yu/.local/pipx/venvs/chatgpt-cli/lib/python3.11/site-packages/chatgpt.py", line 135, in display_expense
PRICING_RATE[model]["prompt"],
~~~~~~~~~~~~^^^^^^^
KeyError: 'gpt-4-1106-preview'
something like:
chatgpt-cli -t "how do i install chatgpt-cli on ubuntu"
instead of getting into the REPL
it would really help with scripting
This is what I get when changing the model in the config yaml file:
godoevix ~ > chatgpt
ChatGPT CLI
Model in use: gpt-4
[0] >>> So ?
Unknown error, status code 404
{
    'error': {
        'message': 'The model: `gpt-4` does not exist',
        'type': 'invalid_request_error',
        'param': None,
        'code': 'model_not_found'
    }
}
Total tokens used: 0
Estimated expense: $0.0
It would be useful to have a way to read the contents of a file directly into the prompt without resorting to copy-paste.
e.g. /read notes.md
First the service was overloaded, so I retried, and it worked for one question, then 502.
During handling of the above exception, another exception occurred: … requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
The only relevant trace step was
File "./chatgpt.py", line 227, in start_prompt console.print(r.json())
Is this HTTP? Then 502 would mean Bad Gateway, which I often observed with nginx as load balancer when the downstream server was too busy to reply at all.
At the moment the config file template is created during the first execution of the script.
It would be a better UX if it were automatically created when the user pip installs the package.
Thanks for the great repo.
Right now, we may lose interesting ChatGPT answers if we don't copy-paste them.
To solve this problem, it would be great to save the history of both prompts and answers (plus model version and query date, for traceability).
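A minimal sketch of such a history log, one JSON line per exchange (the file name and record fields are assumptions):

```python
import datetime
import json
import os
import tempfile

def log_exchange(path, model, prompt, answer):
    # Append one JSON line per exchange, with model and UTC timestamp,
    # so prompts and answers survive after the program exits
    record = {
        "date": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "answer": answer,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Usage: log one exchange and read the last record back
history = os.path.join(tempfile.gettempdir(), "chatgpt_cli_history_demo.jsonl")
log_exchange(history, "gpt-4", "hello", "Hi there!")
with open(history, encoding="utf-8") as f:
    last = json.loads(f.read().splitlines()[-1])
```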
The program won't find its config file (and I assume also the .history file) if it is being executed from anywhere other than the directory where the .py file itself is.
It would be much more convenient if those files would be written to ~/.config/chatgpt-cli or any other known directory so that the program can be executed from anywhere, for example through an alias.
Alternatively or on top of that, you could add a config parameter to supply that path. This would enable use of multiple ChatGPT accounts.
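An XDG-style lookup for the config directory could be sketched like this (the chatgpt-cli subdirectory name is an assumption):

```python
import os

def config_dir():
    # Respect XDG_CONFIG_HOME when set, otherwise fall back to ~/.config,
    # so the program finds its files no matter where it is launched from
    base = os.environ.get("XDG_CONFIG_HOME") or os.path.expanduser("~/.config")
    return os.path.join(base, "chatgpt-cli")

config_file = os.path.join(config_dir(), "config.yaml")
```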
When asking ChatGPT to generate code, it has an extra space/indent at the start of any of the lines that aren't indented.
Seems like it's to do with the ``` code blocks - it's a small issue but makes copy/pasting more difficult. Any tips/workarounds appreciated.
Eg. (* shows where a space is)
*function generateList() {
*}
Looking at the session logs, there's no space:
"content": "Here's the function:\n\njavascript\nfunction generateList() {\n}\n"
Thanks for making this open source btw!
The bug causes the whole program to crash, but it is actually easy to fix.
Most Linux machines use utf-8 as the default encoding, which handles most languages' characters, emojis and symbols.
On Windows machines, a big portion use cp1252 as the default encoding, which is very limited (open() docs: "if encoding is not specified the encoding used is platform dependent").
Reproducing the bug
(As you see, even if the user didn't send any invalid characters, GPT sending one invalid character is enough to cause the bug.)
Fix
Simply pass open(..., encoding="utf-8") on all systems, or detect the system encoding first using:
import locale
sys_char_encoding = locale.getpreferredencoding()
# rest of the code
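The proposed fix, sketched end to end — forcing UTF-8 on write and read regardless of the platform default:

```python
import os
import tempfile

# Always pass an explicit encoding to open() instead of relying on
# locale.getpreferredencoding(), which is cp1252 on many Windows systems
# and cannot represent characters GPT often emits
path = os.path.join(tempfile.gettempdir(), "chatgpt_cli_utf8_demo.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("naïve café ✓")  # '✓' does not exist in cp1252

with open(path, encoding="utf-8") as f:
    text = f.read()
```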
If welcome, we could look at adding these features:
The --help option should show a link to this repo.
In the conversation, /h or /help should show available commands.
This thread proposes a /save command. This is different from /copy, because copy only copies the response. Often, I want to copy the whole conversation.
I'd like to propose a /save filename and /edit command. /edit would take the full conversation, put it into a temporary file, and open it in the system default editor.
If a PR on this would likely be accepted, I could look at implementing this.
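A sketch of what /edit might do (the names are hypothetical; the editor parameter exists only so the call can be exercised with a no-op command instead of a real editor):

```python
import os
import subprocess
import tempfile

def edit_conversation(text, editor=None):
    # Dump the full conversation into a temporary file and open it in the
    # user's default editor ($EDITOR, falling back to vi)
    with tempfile.NamedTemporaryFile(
        "w", suffix=".md", delete=False, encoding="utf-8"
    ) as tmp:
        tmp.write(text)
        path = tmp.name
    subprocess.call([editor or os.environ.get("EDITOR", "vi"), path])
    return path
```

For testing, editor can point at a no-op command such as true.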
I saw the project and thought it was going to hit ChatGPT, but it looks like an OpenAI API chat client. Are there plans to add ChatGPT access from the CLI instead of having to go through a web interface for it? So instead of being a chatgpt-cli it's an openaiapi-tui, right?
It is impossible to use newlines in prompts.
Suggested fix: send the prompt with Modifier+Enter, as done in messaging apps, so that newlines are supported.
Thanks!
I checked README.md and config.yaml but couldn't find anything about the colors.
Command : chatgpt-cli --model gpt-4-0613
ChatGPT CLI
Model in use: gpt-4-0613
[0] >>> Which GPT model are you
I'm based on OpenAI's GPT-3 model.
[53] >>>
Directly in ChatGPT with Model 4:
Which GPT model/version are you?
ChatGPT
I am based on the GPT-4 architecture.
My training data includes information up to September 2021. If you have any questions or need assistance, feel free to ask!