
kardolus / chatgpt-cli


ChatGPT CLI is an advanced command-line interface for ChatGPT models via OpenAI and Azure, offering streaming, query mode, and history tracking for seamless, context-aware conversations. Ideal for both users and developers, it provides advanced configuration and easy setup options to ensure a tailored conversational experience with the GPT model.

License: MIT License

Go 96.86% Shell 3.14%
azure chatgpt cli go golang gpt language-model openai

chatgpt-cli's People

Contributors

benbenbang · dependabot[bot] · johnd0e · kardolus · manveerbhullar · morganbat · nopeless


chatgpt-cli's Issues

How to use with Azure?

Great work on the CLI!
How do I configure it to use our azure openai deployment? Thanks!
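Based on the config keys that appear in other issues on this page (url, completions_path, auth_header, auth_token_prefix), an Azure setup would plausibly look like the fragment below. This is an unverified sketch, not documented configuration: the resource name, deployment name, and api-version are placeholders for your own Azure OpenAI deployment.

```yaml
# Hypothetical Azure sketch — resource, deployment, and api-version are placeholders
name: azure
api_key: <your-azure-key>
url: https://<your-resource>.openai.azure.com
completions_path: /openai/deployments/<your-deployment>/chat/completions?api-version=<api-version>
auth_header: api-key       # Azure expects an api-key header rather than Authorization: Bearer
auth_token_prefix: ''
```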

http error: 429

Every time I try a query, I get a 429.
Nothing more is said.

Error: failed to make request

~ $ chatgpt -l
Error: failed to make request: Get "https://api.openai.com/v1/models": dial tcp: lookup api.openai.com on [::1]:53: read udp [::1]:46086->[::1]:53: read: connection refused
Usage:
  chatgpt [flags]

Flags:
      --clear-history        Clear all prior conversation context for the current thread
  -c, --config               Display the configuration
  -h, --help                 help for chatgpt
  -i, --interactive          Use interactive mode
  -l, --list-models          List available models
  -q, --query                Use query mode instead of stream mode
      --set-max-tokens int   Set a new default max token size by specifying the max tokens
      --set-model string     Set a new default GPT model by specifying the model name
  -v, --version              Display the version information

failed to make request: Get "https://api.openai.com/v1/models": dial tcp: lookup api.openai.com on [::1]:53: read udp [::1]:46086->[::1]:53: read: connection refused
~ $ chatgpt -c
name: openai
api_key: <redacted>
model: gpt-3.5-turbo
max_tokens: 4096
thread: default
omit_history: false
url: https://api.openai.com
completions_path: /v1/chat/completions
models_path: /v1/models
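The `dial tcp: lookup api.openai.com on [::1]:53` part of this error means the failure happened before any request reached OpenAI: the local DNS resolver refused the lookup. A minimal sketch (not part of chatgpt-cli) to confirm that outside the CLI:

```go
package main

import (
	"fmt"
	"net"
)

// checkDNS reports whether the local resolver can look up host at all.
// If this fails for api.openai.com, the CLI cannot work either.
func checkDNS(host string) error {
	_, err := net.LookupHost(host)
	return err
}

func main() {
	if err := checkDNS("api.openai.com"); err != nil {
		fmt.Println("local DNS is broken:", err) // same root cause as the CLI error
		return
	}
	fmt.Println("DNS is fine; the problem is elsewhere")
}
```

If the lookup fails here too, the fix is on the machine (resolver configuration, VPN, or /etc/resolv.conf), not in the CLI.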

[BUG] `max_tokens` seems to have no effect on the output

running on latest main commit (local build)

setting max_tokens to something low like 20 will still output long text

name: openai
api_key: <redacted>
model: gpt-4-turbo-preview
max_tokens: 20
role: You are a developer
temperature: 1
top_p: 1
frequency_penalty: 0
presence_penalty: 0
thread: default
omit_history: false
url: https://api.openai.com
completions_path: /v1/chat/completions
models_path: /v1/models
auth_header: Authorization
auth_token_prefix: 'Bearer '

The -m option was useful

I've tried the new version and I believe it's important to reinstate the -m option so that the model can be switched during runtime.

To balance the cost of the APIs, it is often useful to operate with GPT-3 and then pass the output to GPT-4, for example:

chatgpt -m gpt-3 "Do something" | chatgpt -m gpt-4 "Do something else" | ...

Using global variables or file configurations forces one to break the pipeline and save partial results in temporary files, which is not aligned with the rest of the utilities. This doesn't only apply to this scenario, but also when a user wants to change the model on the fly, or create a command alias.

Before this version, I had some very useful aliases:

alias gpt4="chatgpt -m gpt-4"
alias gpt3="chatgpt -m gpt-3-turbo"

which can no longer be created now.

In summary, I believe that:

  • every option should have a default value
  • every option present in the yaml file should have a corresponding argument
  • the hierarchy should be: default, config, argument
  • local variables should be used only for API_KEY

I hope these considerations may be of help!
Thank you!
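The hierarchy the issue proposes (built-in default, then config file, then command-line argument, later sources winning) is simple to express. A minimal sketch with hypothetical names, not current chatgpt-cli code:

```go
package main

import "fmt"

// resolve applies the proposed precedence: a flag beats the config file,
// which beats the built-in default. An empty string means "unset".
func resolve(def, cfg, flag string) string {
	if flag != "" {
		return flag
	}
	if cfg != "" {
		return cfg
	}
	return def
}

func main() {
	// config file sets gpt-4, no flag given -> config wins over the default
	fmt.Println(resolve("gpt-3.5-turbo", "gpt-4", ""))
	// -m on the command line wins over everything, so aliases and pipelines work
	fmt.Println(resolve("gpt-3.5-turbo", "gpt-4", "gpt-4-turbo-preview"))
}
```

With per-invocation flags layered on top like this, the `gpt3 | gpt4` pipeline and alias use cases need no temporary files.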

Please add readline capability for interactive mode

Readline behavior can be achieved with rlwrap, but it would be much better to have it built in.

$ rlwrap chatgpt -i
[2024-04-01 00:00:00] Q1: hello world┃ # press Ctrl-A to move the cursor to the beginning
[2024-04-01 00:00:00] Q1: ┃hello world 

Thanks!

Regarding autocompletion

Hi there,

Since we are using cobra, any plan to use the completion to help with the flags?
I can do a PR if it's ok for you :)

Thanks

Escape chars

Is there a preferred way to provide escape characters especially in cases where we want to provide multi-line code snippets with the prompt?

Cheers and thanks!

Update Readme on Chatgpt vs OpenAI API models?

This is a great project!

But in my quest for a way to use the same model that chat.openai.com uses from the CLI as well as from the browser, I guess I need to keep looking.

Of course, it's really OpenAI's decision not to offer the ChatGPT model via an API yet.

I think it would benefit this project to explain that in the Readme: you can access many different OpenAI models via their APIs, but the ChatGPT model itself is not yet available.

It would also be worth mentioning the pricing difference.

http error: 429

  • Installed the app (windows)
  • Set the API key accordingly
  • Ran the command
  • Got 429 error (too many requests) despite only making a single query.
  • Multiple attempts yield same result.

Plans to add `--set-thread` and `--list-threads` flags ?

Hi @kardolus ,

Thanks for your work putting this CLI together. It's so nice to use ChatGPT in such a simple way. (And great to use in situations where browsers are disabled, like when using restricted airplane "message only wifi" packages !)

Having used it for a couple of days now, I'm finding the most UX friction to be in changing the threads. The simplest way I have been doing this from the command line is with sed on the config file.

Do you have any plans to add a --set-thread flag, and beyond that, a --list-threads flag?

I think these would make a really positive impact on the UX.

All the best,
Angus

Read `config.yaml` from XDG standard paths

As more CLI apps are adopting the path conventions laid out by the XDG Base Directory Specification, it would be nice if config.yaml and other generated files like thread history are placed in these standard paths—or at least look for them in these paths before using ~/.chatgpt-cli/:

  • config.yaml — use $XDG_CONFIG_HOME/chatgpt-cli/config.yaml (if XDG_CONFIG_HOME is NOT explicitly defined, use ~/.config/chatgpt-cli/config.yaml)
  • history/default.json etc. — use $XDG_DATA_HOME/chatgpt-cli/history/default.json (if XDG_DATA_HOME is NOT explicitly defined, use ~/.local/share/chatgpt-cli/history/default.json)

Feature: multi-instance mode

Love this tool ❤️

chatgpt -n
Error: Another chatgpt instance is running, chatgpt works not well with multiple instances, please close the other one first.
If you are sure there is no other chatgpt instance running, please delete the lock file: /tmp/chatgpt.lock
You can also try `chatgpt -d` to run in detach mode, this check will be skipped, but conversation will not be saved.

How (much effort) would it be to support multiple instances?
Would be amazing :)

Ready to throw a sunday at it (at some point this year 😅)

Start a new conversation

It could be useful to exclude history from the context and/or have a way to start a new conversation. Something like: chatgpt --new-chat.

I just learned the hard way how painful keeping a large context in a loop can be.

Thx

403 not supported

➜  $ chatgpt test
http status 403: Country, region, or territory not supported

HTTP error 429

I just downloaded the program with
curl -L -o chatgpt https://github.com/kardolus/chatgpt-cli/releases/download/v1.3.2/chatgpt-linux-amd64
set up my API key, and run it with a first query. I get the error message:
http error: 429

System: Linux Mint 21.2 Cinnamon
I am using the free tier of chatgpt (3.5), if that is relevant.

OPENAI KEY

So I just installed this (MS Copilot told me to) so I can have ChatGPT, and it says: missing environment variable: OPENAI_API_KEY. How do I solve this?

Can't change model

Hello. I can't change from GPT-3 to GPT-4. I have checked using the OpenAI Python tool that I have access to gpt-4.

local:.chatgpt-cli yasin$ chatgpt -l
Available models:

  • gpt-3.5-turbo-0613
  • gpt-4-0314
  • gpt-4-0613
  • gpt-4 (current)
  • gpt-3.5-turbo-instruct-0914
  • gpt-3.5-turbo-instruct
  • gpt-3.5-turbo-0301
  • gpt-3.5-turbo-16k
  • gpt-3.5-turbo
  • gpt-3.5-turbo-16k-0613

local:.chatgpt which model are you
As an AI developed by OpenAI, I don't have a specific model number. I'm based on the GPT-3 (Generative Pretrained Transformer 3) architecture. My primary function is to assist and facilitate efficient communication.

Not sure how I should go about debugging this.
