
Simple yet effective command line client for chatting with ChatGPT using the official API

License: MIT License

Python 100.00%
api chatgpt chatgpt-api cli python openai openai-api

chatgpt-cli's Introduction

ChatGPT CLI

Screenshot

Overview

Simple script for chatting with ChatGPT from the command line, using the official API (released March 1st, 2023). After you provide a valid API key, it lets you use ChatGPT at full speed, at a fraction of the cost of a full ChatGPT Plus subscription (at least for the average user).

How to get an API Key

Go to platform.openai.com and log in with your OpenAI account (register if you don't have one). Click on your name initial in the top-right corner, then select "View API keys". Finally, click on "Create new secret key". That's it.

You may also need to add a payment method under Billing --> Payment methods. New accounts should have some free credits, but adding a payment method may still be mandatory. For pricing, check this page.

Update (16/02/2024): Some user accounts now require charging the credit balance in advance in order to use the OpenAI APIs. Check Settings --> Billing for that.

Installation and essential configuration

You need Python (at least version 3.10), Pip and Git installed on your system.

First, update Pip (an older version can cause trouble on some systems):

pip install -U pip

Then the installation takes a single command:

pip install git+https://github.com/marcolardera/chatgpt-cli

After that, you need to configure your API Key. There are three alternative ways to provide this parameter:

  • Edit the api-key parameter in the config.yaml file (see paragraph below)
  • Set the environment variable OPENAI_API_KEY (Check your operating system's documentation on how to do this)
  • Use the command line option --key or -k

If more than one API Key is provided, ChatGPT CLI follows this priority order: Command line option > Environment variable > Configuration file
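For example, the environment-variable route can be set up like this (the variable name comes from the list above; the key value is just a placeholder):

```shell
# Make the API key available to chatgpt-cli for the current shell session.
# "sk-xxxx" is a placeholder, not a real key.
export OPENAI_API_KEY="sk-xxxx"
```

A key passed with --key or -k on the command line would take precedence over this variable.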

Configuration file

The configuration file config.yaml lives in the user's default config directory, as defined by the XDG Base Directory Specification.

On a Linux/macOS system this is the directory pointed to by the $XDG_CONFIG_HOME variable (check it with echo $XDG_CONFIG_HOME). If the variable is not set, the default is the ~/.config folder.
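The fallback logic described above can be sketched in the shell (the chatgpt-cli subdirectory name matches the session-history path mentioned later in this README):

```shell
# Resolve the config directory: $XDG_CONFIG_HOME if set, otherwise ~/.config.
config_dir="${XDG_CONFIG_HOME:-$HOME/.config}/chatgpt-cli"
echo "$config_dir"
```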

On the first execution of the script, a template of the config file is automatically created. If a config file already exists but is missing any fields, default values are used for the missing fields.
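As a sketch, a minimal config.yaml using the parameters discussed in this README might look like the following (the exact field names beyond api-key, supplier, model, and markdown may differ; check the auto-generated template for the authoritative list):

```yaml
api-key: "sk-xxxx"     # placeholder, not a real key
supplier: "openai"     # or "azure"
model: "gpt-3.5-turbo"
markdown: true
```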

Suppliers

You can set the supplier as openai (the default) or azure in the config.yaml. Remember to set the parameters corresponding to the specific supplier.

Models

ChatGPT CLI, by default, uses the original gpt-3.5-turbo model. To use other ChatGPT models, edit the model parameter in the config.yaml file or use the --model command line option. Here is a list of all the available options:

Name                  Input ($/1K tokens)   Output ($/1K tokens)
gpt-3.5-turbo         0.0005                0.0015
gpt-3.5-turbo-0125    0.0005                0.0015
gpt-3.5-turbo-1106    0.0005                0.0015
gpt-3.5-turbo-0613    0.0005                0.0015
gpt-3.5-turbo-16k     0.0005                0.0015
gpt-4                 0.03                  0.06
gpt-4-0613            0.03                  0.06
gpt-4-32k             0.06                  0.12
gpt-4-32k-0613        0.06                  0.12
gpt-4-1106-preview    0.01                  0.03
gpt-4-0125-preview    0.01                  0.03
gpt-4-turbo           0.01                  0.03
gpt-4o                0.005                 0.015
Pricing is calculated as $/1000 tokens.
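To make the arithmetic concrete, here is a small sketch of how a session's expense follows from the table (the estimate_expense helper is hypothetical, not part of the tool, and the rates are a snapshot of two rows above):

```python
# Rough session cost estimator based on the pricing table ($ per 1000 tokens).
# PRICING holds (input_rate, output_rate) pairs copied from the table above.
PRICING = {
    "gpt-3.5-turbo": (0.0005, 0.0015),
    "gpt-4o": (0.005, 0.015),
}

def estimate_expense(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated dollar cost for one session."""
    input_rate, output_rate = PRICING[model]
    return prompt_tokens / 1000 * input_rate + completion_tokens / 1000 * output_rate

# 2000 prompt tokens and 1000 completion tokens with gpt-4o:
print(f"${estimate_expense('gpt-4o', 2000, 1000):.4f}")  # $0.0250
```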

Check this page for the technical details of each model.

Also note that, if you use Azure as a supplier, this pricing may not be accurate.

Basic usage

Launch the script by typing in your terminal:

chatgpt-cli

Then just chat! The number next to the prompt is the number of tokens used in the conversation up to that point.

Use the /q command to quit; it also shows the total number of tokens used and an estimate of the expense for that session, based on the specific model in use.

Use the /copy (or /c) command to copy code blocks from the generated output. Specifically, /copy or /c followed by an integer copies the nth code block to the clipboard. Code blocks are labeled in the console output so that it is clear which index corresponds to which block. Running the /copy command without any arguments copies the entire contents of the previous response.

To display all the available commands, check the help with chatgpt-cli --help

Multiline input

Add the --multiline (or -ml) flag to enable multi-line input mode. In this mode, use Alt+Enter or Esc+Enter to submit messages.

Context

Use the --context <FILE PATH> command line option (or -c for short) to provide the model with an initial context (technically, a system message for ChatGPT). For example:

chatgpt-cli --context notes.txt

Both absolute and relative paths are accepted. Note that this option can be specified multiple times to give multiple files for context. Example:

chatgpt-cli --context notes-from-thursday.txt --context notes-from-friday.txt

Typical use cases for this feature are:

  • Giving the model some code and ask to explain/refactor
  • Giving the model some text and ask to rephrase with a different style (more formal, more friendly, etc)
  • Asking for a translation of some text

Markdown rendering

ChatGPT CLI automatically renders Markdown responses from the model, including code blocks, with appropriate formatting and syntax highlighting. Update (31/05/2023): Now tables are also rendered correctly, thanks to the new 13.4.0 release of Rich.

Change the markdown parameter from true to false in the config.yaml in order to disable this feature and display responses in plain text.

Restoring previous sessions

ChatGPT CLI saves all the past conversations (including context and token usage) in the session-history folder inside the $XDG_CONFIG_HOME discussed in a previous paragraph. In order to restore a session the --restore <YYYYMMDD-hhmmss> (or -r) option is available. For example:

chatgpt-cli --restore 20230728-162302 restores the session from the $XDG_CONFIG_HOME/chatgpt-cli/session-history/chatgpt-session-20230728-162302.json file. Then the chat goes on from that point.

It is also possible to use the special value last:

chatgpt-cli --restore last

In this case it restores the last chat session, without specifying the timestamp.

Note that, if --restore is set, it overrides any --context option.

Piping

ChatGPT CLI can be used in a UNIX pipeline thanks to the --non-interactive (or -n) mode. Here is an example:

cat example_file.txt | chatgpt-cli -n

In this case the content of example_file.txt is sent directly to ChatGPT and the response is written to standard output. This makes the tool usable inside shell scripts.

JSON Mode

Note: This feature is only available for the gpt-3.5-turbo-0125 and gpt-4-turbo-preview models for now.

JSON Mode is enabled with the --json (or -j) flag. This forces ChatGPT to respond with JSON to every request. You must ask for JSON explicitly (if the first message does not include the word "json", an "Invalid request" response is returned) and, in general, describe the schema and the content type of the desired result. Avoid overly vague requests, or you may get a very long, random response (with higher expenses).

Also check the OpenAI Documentation.

External tools

Copy selection as context (Linux)

On Linux running the X Window System, you can conveniently start a chat with any text you have highlighted in any application as the provided context. This gist shows how to do it on XFCE using xclip.

DALL-E CLI

Also check my other little project, DALL-E CLI. The two tools can even be combined through piping (asking ChatGPT to create the perfect prompt to feed into DALL-E). A small example:

echo "Write the perfect prompt for an image generation model in order to represent a man wearing a banana costume" | chatgpt-cli -n | dall-e-cli -p

It works, even if it's not entirely clear how useful it is.

Contributing to this project

Please read CONTRIBUTING.md

chatgpt-cli's People

Contributors

adarkdev, christian-oudard, danisztls, dmitrii-kravchuk, dwymark, harukab, itachi1621, jdlugole, marcolardera, monperrus, moxious, orsnaro, pchuri, redpublic, ryanchentw, sikreuz, thinkthinking, yorhodes


chatgpt-cli's Issues

chatgpt+ access

I saw the project and thought it was going to hit chatgpt, but it looks like an openai api chat client. Are there plans to add chatgpt access from the cli instead of having to go through a web interface for it? So instead of it being a chatgpt-cli it's an openaiapi-tui right?

GPT Models are not applied correctly

Command : chatgpt-cli --model gpt-4-0613
ChatGPT CLI
Model in use: gpt-4-0613
[0] >>> Which GPT model are you
I'm based on OpenAI's GPT-3 model.
[53] >>>

Directly in ChatGPT with Model 4:

Which GPT model/version are you?
ChatGPT
I am based on the GPT-4 architecture.
My training data includes information up to September 2021. If you have any questions or need assistance, feel free to ask!

Cope more gracefully with "Unknown error, status code 502"

First the service was overloaded, so I retried, and it worked for one question, then 502.

During handling of the above exception, another exception occurred: … requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

The only relevant trace step was

File "./chatgpt.py", line 227, in start_prompt
    console.print(r.json())

Is this HTTP? Then 502 would mean Bad Gateway, which I often observed with nginx as load balancer when the downstream server was too busy to reply at all.

Rate limit or maximum monthly limit exceeded

First use of the CLI and I'm getting this error, but my API key has no limit set:

ChatGPT CLI
Supplier: openai
Model in use: gpt-3.5-turbo-16k
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
[0] >>> Hello
Rate limit or maximum monthly limit exceeded
[0] >>>

Total tokens used: 0
Estimated expense: $0.000000

don't rely on WORKDIR to run

The program won't find its config file (and I assume also the .history file) if it is executed from anywhere other than the directory containing the .py file itself.

It would be much more convenient if those files would be written to ~/.config/chatgpt-cli or any other known directory so that the program can be executed from anywhere, for example through an alias.

Alternatively or on top of that, you could add a config parameter to supply that path. This would enable use of multiple ChatGPT accounts.

Feature request: adding proper display of latex formula

From time to time, I need GPT models to explain some mathematical work to me, but the CLI tool displays formulas as plain text:

[ R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} ]

Is there a way to render these formulas directly inside terminals? (I know there are already tools that can do this; the question is how to integrate one into this tool, I guess.)

default character encoding in some machines with Win OS causes crash!

PR#58

The bug is actually easy to fix but crashes the whole program.

Details: the file I/O `open()` function has an `encoding` argument that, by default, is set to the machine's default character encoding.
  • Most Linux machines use utf-8 as the default encoding, which is fine for most languages' characters, emojis, and symbols.

  • In Windows machines, a big portion use cp1252 as the default encoding, which is very limited:
    get win default encoding via python script


open() docs: "if encoding is not specified the encoding used is platform dependent" see

  • Reproducing the bug
    bug screenshot using Cmder shell & win terminal preview
    (as you can see, even if the user didn't send any invalid characters, GPT sending one invalid character is enough to cause the bug)

  • Fix
    simply pass encoding="utf-8" to every open() call for all systems, or detect the system encoding first using:

import locale
sys_char_encoding = locale.getpreferredencoding()
# rest of the code

Consider adding more parameters to the config file

Hi,

I like your minimalistic approach a lot! But the lack of a few configurable parameters made me switch to https://github.com/j178/chatgpt. If you could add the following parameters to the yaml file:

  "prompts": {
    "default": "You are a helpful assistant",
    "pirate": "You are pirate Blackbeard. Arr matey!"
  },
  "conversation": {
    "prompt": "default",
    "stream": true,
    "max_tokens": 1024,
    "temperature": 0
  }

I would be happy to switch back! Basically, this is adding the following functionalities:

  1. the possibility to write down one or more contexts in the yaml file. This is a bit more convenient than having to carry around a separate file for each context, and pass them via --context <FILE PATH>
  2. stream allows the tokens to be sent as they become available, instead of all at once at the end of the reply. This makes quite a difference with long responses and slower models such as GPT-4
  3. max_tokens is self-explanatory 🙂 and it also makes quite the difference when using GPT-4.
  4. temperature set to 0 allows deterministic responses (fundamental for reproducibility). Ranging from 0 to 2, higher values allow increasingly creative but also less focused responses.

These are very simple modifications: you just need to read them from the yaml file and add them as extra parameters when posting the request. Thanks!

command not found after following installation instructions

I was successfully running an earlier version of chatgpt-cli, but revisited here and followed the instructions to install the latest version. But the installation doesn't seem to have worked successfully on my system (MacOS 13.4.1).

ttop@tsudio sources % pip install -U pip 
Requirement already satisfied: pip in /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages (23.3.1)
ttop@tsudio sources % pip install git+https://github.com/marcolardera/chatgpt-cli
Collecting git+https://github.com/marcolardera/chatgpt-cli
  Cloning https://github.com/marcolardera/chatgpt-cli to /private/var/folders/83/bcy3yyl95f7cn5kdlq8g5yb00000gn/T/pip-req-build-wq_3z1gc
  Running command git clone --filter=blob:none --quiet https://github.com/marcolardera/chatgpt-cli /private/var/folders/83/bcy3yyl95f7cn5kdlq8g5yb00000gn/T/pip-req-build-wq_3z1gc
  Resolved https://github.com/marcolardera/chatgpt-cli to commit b39eceb70346b10516821f37735adfe3fa2e96ff
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
ttop@tsudio sources % chatgpt-cli
zsh: command not found: chatgpt-cli
ttop@tsudio sources % which chatgpt-cli
chatgpt-cli not found
ttop@tsudio sources % 

I can't locate any executable on my system named chatgpt-cli.

The only hit here is the local dir where I cloned the source:

ttop@tsudio sources % find ~ / -name chatgpt-cli 2>/dev/null
/Users/ttop/sources/chatgpt-cli
^C # aborted after a very long time passed
ttop@tsudio sources % find ~ /usr /opt -name chatgpt-cli 2>/dev/null
/Users/ttop/sources/chatgpt-cli
ttop@tsudio sources % 

Apologies if I have done something incorrectly.

GPT-4 model not working

This is what I get when I change the model in the config yaml file:

godoevix ~ > chatgpt
ChatGPT CLI
Model in use: gpt-4

[0] >>> So ?
Unknown error, status code 404
{
'error': {
'message': 'The model: gpt-4 does not exist',
'type': 'invalid_request_error',
'param': None,
'code': 'model_not_found'
}
}

Total tokens used: 0
Estimated expense: $0.0

Git ignore file seems overly broad

Hi! We should make all the file patterns in .gitignore start with a slash, in order to make it predictable where they match. Example:

$ mkdir yolo
$ mkdir yolo/cover
$ touch yolo/cover/ohno
$ git status yolo/
On branch implement-yolo
nothing to commit, working tree clean

… which may cause accidental data loss in the ohno file. (Because the cover/ ignore rule matches in all subdirectories.)

The .history line should have a comment describing that it saves the prompt history, without replies.

Code - extra spacing, hard to copy

When asking ChatGPT to generate code, there is an extra space/indent at the start of any lines that aren't indented.
It seems to be related to the ``` code blocks. It's a small issue, but it makes copy/pasting more difficult. Any tips/workarounds appreciated.

Eg. (* shows where a space is)

*function generateList() {
*}

Looking at the session logs, there's no space:
"content": "Here's the function:\n\njavascript\nfunction generateList() {\n}\n"

Thanks for making this open source btw!

Lost history / context

This tool is wonderful, btw. Thanks for building it!

I had a great series of prompts going and then I sent one more, and the program failed after returning the following fault:

Maximum context length exceeded
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/home/<username>/Documents/chatgpt-cli/chatgpt.py", line 79, in display_expense
    PRICING_RATE[model]["prompt"],
KeyError: 'gpt-3.5-turbo-0301'

Unfortunately, it lost the context that had accumulated. In general, is there a way for the API to remember sessions used in chatgpt-cli, or is the API essentially an atomic session (a "chat"?) that is gone after the program ends?

Again, thanks for what you do!
ASH

multi-line input

Is there a way to use the repo so that I can cat a bunch of text to it, rather than having it read one line at a time?

E.g. "Summarize this file" as context, I want to do cat file.txt | chatgpt-cli -- but presently it treats each input line as a prompt

add support for newline in prompts

it is impossible to use newline in prompts.

suggested fix: sending the prompt is done by Modifier+Enter as done in messaging apps, such that newline is supported.

Thanks!

Request: Be able to use OpenAI Assistant ID / Thread ID...

If possible, it would be nice to use OpenAI with an Assistant ID / Thread ID instead. If I'm not mistaken, this way it is possible to bypass the 4k token limit? It would also be great to have access to the different Assistants one might build up: math teacher, "Jarvis", etc.

Missing reference value for gpt-4-turbo

Traceback (most recent call last):
  File "/Users/tiansu.yu/.local/pipx/venvs/chatgpt-cli/lib/python3.11/site-packages/chatgpt.py", line 135, in display_expense
    PRICING_RATE[model]["prompt"],
    ~~~~~~~~~~~~^^^^^^^
KeyError: 'gpt-4-1106-preview'

Add CLI option for raw text mode (no colors at all).

To simplify using this client in scripts, I'd like a raw text mode that avoids all terminal control codes (color, cursor positioning, and whatever else).
  • Raw text mode would include disabling markdown rendering. I found the markdown option, but even with that disabled, the model name and separator line are still in color.
  • There should be a CLI option for raw text mode.
  • Raw text mode should be the default if standard output is not a TTY. (Even better if we can check termcap for color code support, but that's probably a feature that should be implemented upstream in Rich.)
  • Raw text mode should be the default on a dumb terminal (environment variable TERM set to dumb).

save complete history

thanks for the great repo.

right now, we may lose interesting chatgpt answers if we don't copy-paste.

to solve this problem, it would be great to save the history of both prompts and answers (and model version and query date for traceability).

Documentation / --help

If welcome, we could look at adding these features:

The --help option should show a link to this repo.

In the conversation, /h or /help should show available commands.

Python code broken because of space prefix

Action: I copy-paste Python code produced by chatgpt-cli

Expected behavior: it runs

Actual behavior: it's broken because of a space indentation (coming from the highlighting perhaps).

How to fix this?

Thanks!

Response injection issue

I asked how to write colored text using rich in Python and the response contained color codes which were apparently interpreted by the UI layer of this tool so I didn't see them but the color changed.


No module named 'click'

After following the instructions I get this error:

❯ python chatgpt.py
Traceback (most recent call last):
  File "/src/chatgpt-cli/chatgpt.py", line 4, in <module>
    import click
ModuleNotFoundError: No module named 'click'

I don't use python but I'm guessing I have to find/install a missing dependency. I'm not sure if there are more of these once I fix this one.

Model in use gpt-4 or gpt-3?


Hi I have noticed this strange behaviour as I was using the CLI. Chat started to give me old code recommendations, so I asked what model I was interacting with and it said gpt-3 even though I specified to use gpt-4 and this was also confirmed by the CLI interface when I launched the script.

I'm not sure if this is an issue or if I am missing something..

Thanks!

Readme should explain how to refer to the context files.

$ ./chatgpt.sh -c hi.txt -c cu.txt
ChatGPT CLI
Model in use: gpt-3.5-turbo
Context file: hi.txt
Context file: cu.txt
───────────────────────────────────────────────────────────────────────────
[0] >>> I requested my ChatGPT client shows you some text files for this
conversation. Did you receive any? Do they have names?

No, I'm sorry, but as an AI language model, I don't have the capability
to receive or view files. I can only generate text based on the input I
receive.

… then how do I refer to them?

Edit: For a moment I felt clever.

[0] >>> Did you receive some system mesage along with this question?

No, I did not receive any system message along with your question.
Is there anything else I can help you with?

Save full conversation

This thread proposes a /save command. This is different from /copy, because copy only copies the response. Often, I want to copy the whole conversation.

I'd like to propose a /save filename and /edit command. /edit would take the full conversation, put it into a temporary file, and open it in the system default editor.

If a PR on this would likely be accepted, I could look at implementing this.

Some models are missing (e.g. gpt-4)

$ python chatgpt.py --model gpt-4
ChatGPT CLI
Model in use: gpt-4
───
[0] >>> How are you?
Unknown error, status code 404
{'error': {'message': 'The model: `gpt-4` does not exist', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

Total tokens used: 0
Estimated expense: $0.0
