
A command-line productivity tool powered by AI large language models like GPT-4 that will help you accomplish your tasks faster and more efficiently.

License: MIT License


ShellGPT

A command-line productivity tool powered by AI large language models (LLMs). This tool offers streamlined generation of shell commands, code snippets, and documentation, eliminating the need for external resources (like Google search). It supports Linux, macOS, and Windows, and is compatible with all major shells, such as PowerShell, CMD, Bash, and Zsh.

ShellGPT.mp4

Installation

pip install shell-gpt

By default, ShellGPT uses OpenAI's API and the GPT-4 model. You'll need an API key, which you can generate here. On first run you will be prompted for your key, which will then be stored in ~/.config/shell_gpt/.sgptrc. The OpenAI API is not free of charge; please refer to the OpenAI pricing page for more information.
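The key is stored as a plain line in that config file, so it can also be set there by hand (the value below is a placeholder), or supplied via the OPENAI_API_KEY environment variable instead:

```
# ~/.config/shell_gpt/.sgptrc
OPENAI_API_KEY=your_api_key
```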

Tip

Alternatively, you can use locally hosted open-source models, which are available for free. To use local models, you will need to run your own LLM backend server, such as Ollama. To set up ShellGPT with Ollama, please follow this comprehensive guide.
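As a sketch of what such a setup can look like, the runtime configuration file would point ShellGPT at the local backend. The model name and port below are assumptions based on a typical Ollama install; follow the guide for the exact values:

```
# ~/.config/shell_gpt/.sgptrc -- hypothetical local-model setup
DEFAULT_MODEL=ollama/mistral
API_BASE_URL=http://localhost:11434
USE_LITELLM=true
```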

❗️Note that ShellGPT is not optimized for local models and may not work as expected.

Usage

ShellGPT is designed to quickly analyze and retrieve information. It's useful for straightforward requests, ranging from technical configurations to general knowledge.

sgpt "What is the fibonacci sequence"
# -> The Fibonacci sequence is a series of numbers where each number ...

ShellGPT accepts prompts from both stdin and command-line arguments. Whether you prefer piping input through the terminal or specifying it directly as an argument, sgpt has you covered. For example, you can easily generate a git commit message based on a diff:

git diff | sgpt "Generate git commit message, for my changes"
# -> Added main feature details into README.md

You can analyze logs from various sources by passing them via stdin along with a prompt. For instance, we can use it to quickly analyze logs, identify errors, and get suggestions for possible solutions:

docker logs -n 20 my_app | sgpt "check logs, find errors, provide possible solutions"
Error Detected: Connection timeout at line 7.
Possible Solution: Check network connectivity and firewall settings.
Error Detected: Memory allocation failed at line 12.
Possible Solution: Consider increasing memory allocation or optimizing application memory usage.

You can also use all kinds of redirection operators to pass input:

sgpt "summarise" < document.txt
# -> The document discusses the impact...
sgpt << EOF
What is the best way to learn Golang?
Provide simple hello world example.
EOF
# -> The best way to learn Golang...
sgpt <<< "What is the best way to learn shell redirects?"
# -> The best way to learn shell redirects is through...

Shell commands

Have you ever found yourself forgetting common shell commands, such as find, and needing to look up the syntax online? With the --shell option (or the -s shortcut), you can quickly generate and execute the commands you need right in the terminal.

sgpt --shell "find all json files in current folder"
# -> find . -type f -name "*.json"
# -> [E]xecute, [D]escribe, [A]bort: e

ShellGPT is aware of the OS and $SHELL you are using and will provide a shell command specific to your system. For instance, if you ask sgpt to update your system, it will return a command based on your OS. Here's an example using macOS:

sgpt -s "update my system"
# -> sudo softwareupdate -i -a
# -> [E]xecute, [D]escribe, [A]bort: e

The same prompt, when used on Ubuntu, will generate a different suggestion:

sgpt -s "update my system"
# -> sudo apt update && sudo apt upgrade -y
# -> [E]xecute, [D]escribe, [A]bort: e

Let's try it with Docker:

sgpt -s "start nginx container, mount ./index.html"
# -> docker run -d -p 80:80 -v $(pwd)/index.html:/usr/share/nginx/html/index.html nginx
# -> [E]xecute, [D]escribe, [A]bort: e

We can still use pipes to pass input to sgpt and generate shell commands:

sgpt -s "POST localhost with" < data.json
# -> curl -X POST -H "Content-Type: application/json" -d '{"a": 1, "b": 2}' http://localhost
# -> [E]xecute, [D]escribe, [A]bort: e

We can apply additional shell magic in our prompt, in this example passing file names to ffmpeg:

ls
# -> 1.mp4 2.mp4 3.mp4
sgpt -s "ffmpeg combine $(ls -m) into one video file without audio."
# -> ffmpeg -i 1.mp4 -i 2.mp4 -i 3.mp4 -filter_complex "[0:v] [1:v] [2:v] concat=n=3:v=1 [v]" -map "[v]" out.mp4
# -> [E]xecute, [D]escribe, [A]bort: e

If you would like to pass the generated shell command through a pipe, you can use the --no-interaction option. This disables interactive mode and prints the generated command to stdout. In this example we use pbcopy to copy the command to the clipboard:

sgpt -s "find all json files in current folder" --no-interaction | pbcopy

Shell integration

This is a very handy feature that allows you to use sgpt completions directly in your terminal, without typing sgpt with a prompt and arguments. Shell integration enables the use of ShellGPT with hotkeys in your terminal and is supported by both Bash and ZSH. This feature puts sgpt completions directly into the terminal buffer (input line), allowing for immediate editing of the suggested command.

Shell_GPT_Integration.mp4

To install shell integration, run sgpt --install-integration and restart your terminal to apply the changes. This will add a few lines to your .bashrc or .zshrc file. After that, you can use Ctrl+l (by default) to invoke ShellGPT. Pressing Ctrl+l replaces your current input line (buffer) with the suggested command. You can then edit it and just press Enter to execute.
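Conceptually, the integration behaves like the following Bash keybinding. This is a simplified sketch built from the documented --shell and --no-interaction options, not the exact lines sgpt installs:

```shell
# Simplified sketch of a ShellGPT-style keybinding for ~/.bashrc.
# NOT the exact lines written by --install-integration.
_sgpt_complete() {
    # Replace the current input line with a generated shell command.
    READLINE_LINE=$(sgpt --shell --no-interaction "$READLINE_LINE")
    READLINE_POINT=${#READLINE_LINE}
}
# Bind Ctrl+l, but only in interactive shells.
case $- in *i*) bind -x '"\C-l": _sgpt_complete' ;; esac
```

After sourcing something like this, typing a natural-language request and pressing Ctrl+l would swap it for the generated command, ready to edit or execute.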

Generating code

By using the --code or -c parameter, you can specifically request pure code output, for instance:

sgpt --code "solve fizz buzz problem using python"
for i in range(1, 101):
    if i % 3 == 0 and i % 5 == 0:
        print("FizzBuzz")
    elif i % 3 == 0:
        print("Fizz")
    elif i % 5 == 0:
        print("Buzz")
    else:
        print(i)

Since it is valid Python code, we can redirect the output to a file:

sgpt --code "solve classic fizz buzz problem using Python" > fizz_buzz.py
python fizz_buzz.py
# 1
# 2
# Fizz
# 4
# Buzz
# ...

We can also use pipes to pass input:

cat fizz_buzz.py | sgpt --code "Generate comments for each line of my code"
# Loop through numbers 1 to 100
for i in range(1, 101):
    # Check if number is divisible by both 3 and 5
    if i % 3 == 0 and i % 5 == 0:
        # Print "FizzBuzz" if number is divisible by both 3 and 5
        print("FizzBuzz")
    # Check if number is divisible by 3
    elif i % 3 == 0:
        # Print "Fizz" if number is divisible by 3
        print("Fizz")
    # Check if number is divisible by 5
    elif i % 5 == 0:
        # Print "Buzz" if number is divisible by 5
        print("Buzz")
    # If number is not divisible by 3 or 5, print the number itself
    else:
        print(i)

Chat Mode

Often it is important to preserve and recall a conversation. sgpt creates a conversational dialogue with each LLM completion requested. The dialogue can develop turn by turn (chat mode) or interactively, in a REPL loop (REPL mode). Both rely on the same underlying object, called a chat session. The session is located at the configurable CHAT_CACHE_PATH.

To start a conversation, use the --chat option followed by a unique session name and a prompt.

sgpt --chat conversation_1 "please remember my favorite number: 4"
# -> I will remember that your favorite number is 4.
sgpt --chat conversation_1 "what would be my favorite number + 4?"
# -> Your favorite number is 4, so if we add 4 to it, the result would be 8.

You can use chat sessions to iteratively improve GPT suggestions by providing additional details. It is possible to use the --code or --shell options to initiate --chat:

sgpt --chat conversation_2 --code "make a request to localhost using python"
import requests

response = requests.get('http://localhost')
print(response.text)

Let's ask the LLM to add caching to our request:

sgpt --chat conversation_2 --code "add caching"
import requests
from cachecontrol import CacheControl

sess = requests.session()
cached_sess = CacheControl(sess)

response = cached_sess.get('http://localhost')
print(response.text)

The same applies to shell commands:

sgpt --chat conversation_3 --shell "what is in current folder"
# -> ls
sgpt --chat conversation_3 "Sort by name"
# -> ls | sort
sgpt --chat conversation_3 "Concatenate them using FFMPEG"
# -> ffmpeg -i "concat:$(ls | sort | tr '\n' '|')" -codec copy output.mp4
sgpt --chat conversation_3 "Convert the resulting file into an MP3"
# -> ffmpeg -i output.mp4 -vn -acodec libmp3lame -ac 2 -ab 160k -ar 48000 final_output.mp3

To list all the sessions from either conversational mode, use the --list-chats or -lc option:

sgpt --list-chats
# .../shell_gpt/chat_cache/conversation_1  
# .../shell_gpt/chat_cache/conversation_2

To show all the messages related to a specific conversation, use the --show-chat option followed by the session name:

sgpt --show-chat conversation_1
# user: please remember my favorite number: 4
# assistant: I will remember that your favorite number is 4.
# user: what would be my favorite number + 4?
# assistant: Your favorite number is 4, so if we add 4 to it, the result would be 8.

REPL Mode

There is a very handy REPL (read–eval–print loop) mode, which allows you to interactively chat with GPT models. To start a chat session in REPL mode, use the --repl option followed by a unique session name. You can also use "temp" as a session name to start a temporary REPL session. Note that --chat and --repl use the same underlying object, so you can use --chat to start a chat session and then pick it up with --repl to continue the conversation in REPL mode.


sgpt --repl temp
Entering REPL mode, press Ctrl+C to exit.
>>> What is REPL?
REPL stands for Read-Eval-Print Loop. It is a programming environment ...
>>> How can I use Python with REPL?
To use Python with REPL, you can simply open a terminal or command prompt ...

REPL mode can work with the --shell and --code options, which makes it very handy for interactive shell command and code generation:

sgpt --repl temp --shell
Entering shell REPL mode, type [e] to execute commands or press Ctrl+C to exit.
>>> What is in current folder?
ls
>>> Show file sizes
ls -lh
>>> Sort them by file sizes
ls -lhS
>>> e (enter just e to execute commands, or d to describe them)

To provide a multiline prompt, use triple quotes """:

sgpt --repl temp
Entering REPL mode, press Ctrl+C to exit.
>>> """
... Explain following code:
... import random
... print(random.randint(1, 10))
... """
It is a Python script that uses the random module to generate and print a random integer.

You can also enter REPL mode with an initial prompt by passing it as an argument, via stdin, or even both:

sgpt --repl temp < my_app.py
Entering REPL mode, press Ctrl+C to exit.
──────────────────────────────────── Input ────────────────────────────────────
name = input("What is your name?")
print(f"Hello {name}")
───────────────────────────────────────────────────────────────────────────────
>>> What is this code about?
The snippet of code you've provided is written in Python. It prompts the user...
>>> Follow up questions...

Function calling

Function calling is a powerful feature OpenAI provides. It allows the LLM to execute functions on your system, which can be used to accomplish a variety of tasks. To install the default functions, run:

sgpt --install-functions

ShellGPT has a convenient way to define and use functions. To create a custom function, navigate to ~/.config/shell_gpt/functions and create a new .py file named after the function. Inside this file, define your function using the following syntax:

# execute_shell_command.py
import subprocess
from pydantic import Field
from instructor import OpenAISchema


class Function(OpenAISchema):
    """
    Executes a shell command and returns the output (result).
    """
    shell_command: str = Field(..., example="ls -la", description="Shell command to execute.")

    class Config:
        title = "execute_shell_command"

    @classmethod
    def execute(cls, shell_command: str) -> str:
        result = subprocess.run(shell_command.split(), capture_output=True, text=True)
        return f"Exit code: {result.returncode}, Output:\n{result.stdout}"

The docstring inside the class will be passed to the OpenAI API as the description of the function, along with the title attribute and the parameter descriptions. The execute function will be called if the LLM decides to use your function. In this case we are allowing the LLM to execute any shell command on our system. Since we return the output of the command, the LLM can analyze it and decide if it is a good fit for the prompt. Here is an example of how the function might be executed by the LLM:

sgpt "What are the files in /tmp folder?"
# -> @FunctionCall execute_shell_command(shell_command="ls /tmp")
# -> The /tmp folder contains the following files and directories:
# -> test.txt
# -> test.json

Note that if for some reason the function (execute_shell_command) returns an error, the LLM might still try to accomplish the task based on the output. Let's say we don't have jq installed on our system, and we ask the LLM to parse a JSON file:

sgpt "parse /tmp/test.json file using jq and return only email value"
# -> @FunctionCall execute_shell_command(shell_command="jq -r '.email' /tmp/test.json")
# -> It appears that jq is not installed on the system. Let me try to install it using brew.
# -> @FunctionCall execute_shell_command(shell_command="brew install jq")
# -> jq has been successfully installed. Let me try to parse the file again.
# -> @FunctionCall execute_shell_command(shell_command="jq -r '.email' /tmp/test.json")
# -> The email value in /tmp/test.json is johndoe@example.

It is also possible to chain multiple function calls in a prompt:

sgpt "Play music and open hacker news"
# -> @FunctionCall play_music()
# -> @FunctionCall open_url(url="https://news.ycombinator.com")
# -> Music is now playing, and Hacker News has been opened in your browser. Enjoy!

This is just a simple example of how you can use function calls. It is a truly powerful feature that can be used to accomplish a variety of complex tasks. We have a dedicated category in GitHub Discussions for sharing and discussing functions. The LLM might execute destructive commands, so use it at your own risk❗️

Roles

ShellGPT allows you to create custom roles, which can be utilized to generate code, shell commands, or to fulfill your specific needs. To create a new role, use the --create-role option followed by the role name. You will be prompted to provide a description for the role, along with other details. This will create a JSON file named after the role in ~/.config/shell_gpt/roles. Inside this directory, you can also edit the default sgpt roles, such as shell, code, and default. Use the --list-roles option to list all available roles, and the --show-role option to display the details of a specific role. Here's an example of a custom role:

sgpt --create-role json_generator
# Enter role description: Provide only valid json as response.
sgpt --role json_generator "random: user, password, email, address"
{
  "user": "JohnDoe",
  "password": "p@ssw0rd",
  "email": "[email protected]",
  "address": {
    "street": "123 Main St",
    "city": "Anytown",
    "state": "CA",
    "zip": "12345"
  }
}

If the description of the role contains the words "APPLY MARKDOWN" (case sensitive), chats will be displayed using Markdown formatting.

Request cache

Control the cache using the --cache (default) and --no-cache options. Caching applies to all sgpt requests to the OpenAI API:

sgpt "what are the colors of a rainbow"
# -> The colors of a rainbow are red, orange, yellow, green, blue, indigo, and violet.

The next time, the same exact query will get its result from the local cache instantly. Note that sgpt "what are the colors of a rainbow" --temperature 0.5 will make a new request, since we didn't provide --temperature (the same applies to --top-probability) on the previous request.
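Conceptually, the cache behaves as if each response were keyed on the prompt plus every request parameter, so any changed parameter produces a fresh key. A minimal illustrative sketch, not sgpt's actual implementation:

```python
import hashlib
import json


def cache_key(prompt: str, **params) -> str:
    """Illustrative only: hash the prompt together with all request parameters."""
    payload = json.dumps({"prompt": prompt, **params}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()


# Same prompt, different temperature -> different cache entries.
k1 = cache_key("what are the colors of a rainbow", temperature=0.0)
k2 = cache_key("what are the colors of a rainbow", temperature=0.5)
assert k1 != k2
```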

These are just some examples of what you can do with OpenAI GPT models; I'm sure you will find them useful for your specific use cases.

Runtime configuration file

You can set up some parameters in the runtime configuration file ~/.config/shell_gpt/.sgptrc:

# API key, also it is possible to define OPENAI_API_KEY env.
OPENAI_API_KEY=your_api_key
# Base URL of the backend server. If "default", the URL will be resolved based on --model.
API_BASE_URL=default
# Max amount of cached messages per chat session.
CHAT_CACHE_LENGTH=100
# Chat cache folder.
CHAT_CACHE_PATH=/tmp/shell_gpt/chat_cache
# Request cache length (amount).
CACHE_LENGTH=100
# Request cache folder.
CACHE_PATH=/tmp/shell_gpt/cache
# Request timeout in seconds.
REQUEST_TIMEOUT=60
# Default OpenAI model to use.
DEFAULT_MODEL=gpt-3.5-turbo
# Default color for shell and code completions.
DEFAULT_COLOR=magenta
# When in --shell mode, default to "Y" for no input.
DEFAULT_EXECUTE_SHELL_CMD=false
# Disable streaming of responses
DISABLE_STREAMING=false
# The Pygments theme used to render markdown (default/describe role).
CODE_THEME=default
# Path to a directory with functions.
OPENAI_FUNCTIONS_PATH=/Users/user/.config/shell_gpt/functions
# Print output of functions when LLM uses them.
SHOW_FUNCTIONS_OUTPUT=false
# Allows LLM to use functions.
OPENAI_USE_FUNCTIONS=true
# Enforce LiteLLM usage (for local LLMs).
USE_LITELLM=false

Possible options for DEFAULT_COLOR: black, red, green, yellow, blue, magenta, cyan, white, bright_black, bright_red, bright_green, bright_yellow, bright_blue, bright_magenta, bright_cyan, bright_white. Possible options for CODE_THEME: https://pygments.org/styles/

Full list of arguments

╭─ Arguments ──────────────────────────────────────────────────────────────────────────────────────────────╮
│   prompt      [PROMPT]  The prompt to generate completions for.                                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Options ────────────────────────────────────────────────────────────────────────────────────────────────╮
│ --model            TEXT                       Large language model to use. [default: gpt-4-1106-preview] │
│ --temperature      FLOAT RANGE [0.0<=x<=2.0]  Randomness of generated output. [default: 0.0]             │
│ --top-p            FLOAT RANGE [0.0<=x<=1.0]  Limits highest probable tokens (words). [default: 1.0]     │
│ --md             --no-md                      Prettify markdown output. [default: md]                    │
│ --editor                                      Open $EDITOR to provide a prompt. [default: no-editor]     │
│ --cache                                       Cache completion results. [default: cache]                 │
│ --version                                     Show version.                                              │
│ --help                                        Show this message and exit.                                │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Assistance Options ─────────────────────────────────────────────────────────────────────────────────────╮
│ --shell           -s                      Generate and execute shell commands.                           │
│ --interaction         --no-interaction    Interactive mode for --shell option. [default: interaction]    │
│ --describe-shell  -d                      Describe a shell command.                                      │
│ --code            -c                      Generate only code.                                            │
│ --functions           --no-functions      Allow function calls. [default: functions]                     │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Chat Options ───────────────────────────────────────────────────────────────────────────────────────────╮
│ --chat                 TEXT  Follow conversation with id, use "temp" for quick session. [default: None]  │
│ --repl                 TEXT  Start a REPL (Read–eval–print loop) session. [default: None]                │
│ --show-chat            TEXT  Show all messages from provided chat id. [default: None]                    │
│ --list-chats  -lc            List all existing chat ids.                                                 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─ Role Options ───────────────────────────────────────────────────────────────────────────────────────────╮
│ --role                  TEXT  System role for GPT model. [default: None]                                 │
│ --create-role           TEXT  Create role. [default: None]                                               │
│ --show-role             TEXT  Show role. [default: None]                                                 │
│ --list-roles   -lr            List roles.                                                                │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Docker

Run the container with the OPENAI_API_KEY environment variable and a docker volume to store the cache:

docker run --rm \
           --env OPENAI_API_KEY="your OPENAI API key" \
           --volume gpt-cache:/tmp/shell_gpt \
       ghcr.io/ther1d/shell_gpt --chat rainbow "what are the colors of a rainbow"

Example of a conversation, using an alias and the OPENAI_API_KEY environment variable:

alias sgpt="docker run --rm --env OPENAI_API_KEY --volume gpt-cache:/tmp/shell_gpt ghcr.io/ther1d/shell_gpt"
export OPENAI_API_KEY="your OPENAI API key"
sgpt --chat rainbow "what are the colors of a rainbow"
sgpt --chat rainbow "inverse the list of your last answer"
sgpt --chat rainbow "translate your last answer in french"

You can also use the provided Dockerfile to build your own image:

docker build -t sgpt .

Additional documentation: Azure integration, Ollama integration.


│   1022 │                                                                     │
│   1023 │   def close(self):                                                  │
│   1024 │   │   """Releases the connection back to the pool. Once this method │
│                                                                              │
│ ╭───────────────────────────────── locals ─────────────────────────────────╮ │
│ │ http_error_msg = '500 Server Error: Internal Server Error for url:       │ │
│ │                  https://api.openai.com/v1/compl'+6                      │ │
│ │         reason = 'Internal Server Error'                                 │ │
│ │           self = <Response [500]>                                        │ │
│ ╰──────────────────────────────────────────────────────────────────────────╯ │
╰──────────────────────────────────────────────────────────────────────────────╯
HTTPError: 500 Server Error: Internal Server Error for url: 
https://api.openai.com/v1/completions

Adding custom models for Azure OpenAI service

Would it be possible to customize the following variables so that the Azure OpenAI service can be used? For example, these are the variables that should be specified in the config file:

openai.api_type = "azure"
openai.api_version = "2023-03-15-preview"

Regarding the model/engine, would it be possible to make it customizable so that you can use your own Azure custom deployment or the new GPT-4 engine?

[Possible issue] Different answers from chatGPT

Hello, I made a prompt to manage a nixOS system:

Query: {insert query in script here}

Hi chatGPT. This prompt is part of a script used to manage a nixOS system. The query above is what the user asked from you. If the query is an action that can be done via the command line, such as updating the system, just return the command(s) to run in a shell, and no extra comments. Else, write `echo {answer}`, without the '`' and replacing {answer} with a short answer, of maximum one sentence, to what the user asked. The command for updating the system is `sudo nixos-rebuild switch && nix-env -u`, without the '`'. The command for installing a package is `nix-env -iA {package name}`, without the '`' and replacing {package name} with the name of the package(s). If the query is `--help`, you will introduce yourself like you would answer a normal question (`echo {answer}`). It is very important to precisely follow these guidelines, otherwise the script will not work.

When I replace the query with 'upgrade the system', chatGPT gives me the actual command. However, the script gives me:

$ ./gpt-manager.sh update the system
\\n

echo "Hello, I'm ChatGPT. To update your system, use the command `sudo nixos-rebuild switch && nix-env -u`, without the \\'`\\'. To install a package, use the command `nix-env -iA {package name}`, without the \\'`\\' and replacing {package name} with the name of the package(s). If you need any help, let me know.

...which isn't even valid (the " is never closed). If I put it all in quotes:

./gpt-manager.sh "update the system"
echo "Hello, I'm ChatGPT. To update the system, please run the command 'sudo nixos-rebuild switch && nix-env -u', and to install a package, please run the command 'nix-env -iA {package name}', replacing {package name} with the name of the package you wish to install. For more help, just type --help and I'll be happy to assist you."

...which at least closes the ", but it doesn't properly escape the `. I know that this uses davinci and not chatGPT, but is there any way to make it act exactly like chatGPT?

ImportError after installing on MacOS Ventura

Installed it per the instructions (pip install shell-gpt --user) and I'm getting this:

Traceback (most recent call last):
  File "/Users/divan/Library/Python/3.9/bin/sgpt", line 5, in <module>
    from sgpt import entry_point
  File "/Users/divan/Library/Python/3.9/lib/python/site-packages/sgpt.py", line 9, in <module>
    from utils import loading_spinner
ImportError: cannot import name 'loading_spinner' from 'utils' (/Users/divan/Library/Python/3.9/lib/python/site-packages/utils/__init__.py)

A --continue option, so we don't need to invent and retype chat ids

Problem

I don't always know when I want to ask ChatGPT a follow-up question. I only know after it has given a response. And I don't want to keep inventing session names (chat ids), just in case I need them. Also, passing chat ids is a lot of typing.

Possible solution

I suggest a --continue / -c option which allows us to continue from the last query we made.

Here is an example:

$ sgpt 'Ask a question?'
[response]
# I am happy with the answer, and happy for the chat session to end.

$ sgpt 'Ask a different question?'
[response]
# I am not satisfied. I want to continue the discussion with a followup query. So I use -c.
$ sgpt -c 'Ask followup to the previous question?'
[response]
$ sgpt -c 'Ask another followup in the same session?'
[response]

Advantage: No session names, and less typing.

Possible implementation

  • Every basic query should start a new chat session, with a randomly generated id.
  • sgpt could store the id of the latest session somewhere.
  • If --continue is passed, then instead of starting a new chat session, sgpt should continue the latest stored session.

Alternative: To avoid storing the latest chat id, perhaps sgpt could just assume that the most recently created query is the latest session.

Alternative: It might make sense to use --followup / -f or --reply / -r instead, to keep -c as a shortcut for --chat

Does this sound like a good idea? Or is there already a good way to do this in sgpt?
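A minimal sketch of the storage side of such a --continue option, assuming the last session id lives in a small state file (the path and function name are hypothetical, not sgpt's actual implementation):

```python
import uuid
from pathlib import Path

def resolve_chat_id(state_file: Path, continue_last: bool) -> str:
    """Return the stored session id when --continue is passed, else start a new one."""
    if continue_last and state_file.exists():
        return state_file.read_text().strip()
    # New session: generate a random id and remember it for next time.
    chat_id = uuid.uuid4().hex
    state_file.parent.mkdir(parents=True, exist_ok=True)
    state_file.write_text(chat_id)
    return chat_id
```

Each plain query overwrites the state file, so -c always continues the most recent session, matching the example above.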

Too Many Requests for url

For my very first request with a newly generated API Key, I got this:

HTTPError: 429 Client Error: Too Many Requests for url: https://api.openai.com/v1/completions

Any idea what could be the cause?

I only found this page, which was not helpful.

After adding OpenAI key, throws PROMPT not added

Hi

During a fresh installation of shell-gpt, it first asks for the API_KEY.
It reads and stores the key successfully, but throws a "PROMPT not specified" error as well.

Ideally it should store the API key with a message like "Key has been added" and shouldn't check for a PROMPT at that point.
Let me know your thoughts.

Also Kudos to aesthetic UI in terminal too 🎉

Too many requests for url

Getting this error:

requests/models.py, line 1021, in raise_for_status:

    1020 │   if http_error_msg:
  ❱ 1021 │       raise HTTPError(http_error_msg, response=self)
HTTPError: 429 Client Error: Too Many Requests for url:
https://api.openai.com/v1/completions

Is the server too busy?

api_key change

Hello, is there some way to change the api_key that's stored in locals?

or where can I access the file that has the api_key value so I can change it manually? Thank you!

sgpt command doesn't work on macos and misc comments

It installed OK, but I couldn't run it with the sgpt command; I had to add a function to my zprofile:

sgpt() {
    python3 /Users/[...]/Library/Python/[version]/lib/python/site-packages/sgpt/app.py "$@"
}

I think the most important and exciting feature (on your roadmap) is the interactive chat mode, where you can ask questions, it will execute commands, and the responses of those commands become part of the context, so you can refine step by step, e.g.:

  • What are the files in this directory
  • Sort them by name
  • Concatenate them using FFMPEG
  • Convert the resulting file into an MP3
  • ...

I didn't check whether you already do this (I assume you do), but it's very important that the outputs from the shell commands become part of the context too.

Also we need some prompt engineering:

sgpt --chat number "whats in this directory"
I'm sorry, I cannot answer this question as you have not provided me with the name or path of the directory.

You see, it doesn't even know that we expect it to give us shell commands, and it doesn't have the basic context of which directory I am in.

Also, ChatGPT often responds with hedging text like "as a language model I can't do XYZ, but here you go anyway..."; we should prompt it not to say that, as it's super annoying.

Great work though, looking very promising :)

Accept stdin input

It would be very useful to pipe stdin to sgpt, then ask a question about said input. For example:

cat data.json | sgpt "what is the oldest entry"
cat script.py | sgpt "what does this script do?"

etc.
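This is straightforward to support: when stdin is not a TTY, read it and prepend it to the question. A hedged sketch (function names are hypothetical, not sgpt's actual code):

```python
import sys
from typing import Optional

def read_stdin() -> Optional[str]:
    """Return piped data, or None when stdin is an interactive terminal."""
    if sys.stdin.isatty():
        return None
    return sys.stdin.read()

def build_prompt(question: str, stdin_text: Optional[str]) -> str:
    """Prepend piped input (file contents, JSON, etc.) to the user's question."""
    if stdin_text:
        return f"{stdin_text}\n\n{question}"
    return question
```

With this wiring, `cat script.py | sgpt "what does this script do?"` would send the script followed by the question as one prompt.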

change api key

How do I change the API key? I accidentally entered it wrong and I can't seem to reset it.
I've tried deleting sgpt.py in ~/.local/lib/python3.11/site-packages and /usr/local/bin.
I know there's supposed to be a text file somewhere but I can't find it.
Any help appreciated!

Running from source

How do we run this from source?

Looks like the app can't find its package references:

(venv) me@cwgpt-s0kd:~/shell_gpt/sgpt$ python app.py
Traceback (most recent call last):
  File "/home/me/shell_gpt/sgpt/app.py", line 20, in <module>
    from sgpt import config, make_prompt, OpenAIClient
ModuleNotFoundError: No module named 'sgpt'

(venv) me@cwgpt-s0kd:~/shell_gpt/tests$ python unittests.py
Traceback (most recent call last):
  File "/home/me/shell_gpt/tests/unittests.py", line 3, in <module>
    import requests_mock
ModuleNotFoundError: No module named 'requests_mock'

A --proxy option to accept corporate proxies

Problem

No network request will work in a corporate environment without the ability to use a proxy.

Description

Implement an option to handle proxies.

Possible Solution

When the user provides this flag, a proxy environment variable will be used for configuration.
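One possible shape for this, following the common HTTPS_PROXY/HTTP_PROXY environment-variable convention and the `proxies` parameter of requests (the flag handling and function name are hypothetical):

```python
import os
from typing import Dict, Optional

def proxy_settings(cli_proxy: Optional[str] = None) -> Optional[Dict[str, str]]:
    """Build a `proxies` mapping for requests.

    Precedence: an explicit --proxy value first, then the conventional
    HTTPS_PROXY / HTTP_PROXY environment variables.
    """
    proxy = cli_proxy or os.environ.get("HTTPS_PROXY") or os.environ.get("HTTP_PROXY")
    if not proxy:
        return None
    return {"http": proxy, "https": proxy}

# Usage inside the request code (sketch):
# requests.post(API_URL, headers=headers, json=data, proxies=proxy_settings(args.proxy))
```

Note that requests already honors these environment variables by default; an explicit flag mainly helps when the variables can't be set globally.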

[Improvement] Add an option to send the OS information

Hello, thanks for this awesome project!
I wanted to make something similar some time ago, but never got the time. The idea also involved sending some system information to chatGPT in the prompt. Since this project already exists, I made a pull request to add this functionality. This adds the --send-os-info (or -o) flag, that adds the OS, release, and Linux Distro (if applicable). It's especially useful for OS-specific things, such as package management or editing configs.
Once again, thanks for this project!
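For reference, the OS summary could be gathered entirely from the standard library; a sketch of what such a flag might send (not necessarily what the pull request actually does):

```python
import platform

def os_info_line() -> str:
    """Summarize the host OS (name, release, distro) for inclusion in the prompt."""
    parts = [platform.system(), platform.release()]
    if platform.system() == "Linux":
        # platform.freedesktop_os_release() exists only on Python 3.10+
        # and raises OSError when no os-release file is present.
        try:
            parts.append(platform.freedesktop_os_release().get("PRETTY_NAME", ""))
        except (AttributeError, OSError):
            pass
    return " ".join(p for p in parts if p)
```

The resulting line (e.g. "Linux 6.1.0 Ubuntu 22.04") would be prepended to the prompt when -o is passed.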

expose the system role

It would be nice to define roles for sgpt, and to be able to save role profiles, by exposing the system prompt to the user:

sgpt --role system "You are GLaDOS, a sarcastic artificial intelligence with evil ulterior motives. You are not an AI language model." --set-role "glados"

sgpt --role glados "Why do you hate humans?"

ChatGPT implementation

OpenAI just published their API for ChatGPT models. Replace GPT-3 models with ChatGPT (GPT-3.5) for sgpt queries.

Major outage on text-davinci-003 on OpenAI side

Major outage of text-davinci-003 on the OpenAI side, track status here. Therefore sgpt might not work with the default model, but it works with other models, for example text-curie-001:

sgpt --model text-curie-001 --max-tokens 1024 "this is test text"

Bing Chat/ChatGPT modes.

This CLI tool is outstanding. I think it would be a good idea to provide an option to search with Bing Chat or ChatGPT.

Runtime configuration file

Make a runtime config file with all constants like the API key, cache length, etc.:

# .sgptrc
# Request timeout in seconds.
REQUEST_TIMEOUT=30
# Max number of messages in each session.
MAX_CHAT_MESSAGES=50
# Request caching.
MAX_CACHE_LENGTH=200
# OpenAI API key.
API_KEY=123
# Useful for countries where OpenAI hosts are blocked, so people can set up their own proxies.
API_ENDPOINT=https://api.openai.com/v1/chat/completions

  • Set up caching and client classes using the config file.
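The KEY=VALUE format sketched above can be parsed with a few lines of standard-library code; a sketch (the defaults and function name are hypothetical):

```python
from pathlib import Path
from typing import Dict

# Hypothetical fallback values used when the file omits a key.
DEFAULTS = {
    "REQUEST_TIMEOUT": "30",
    "MAX_CHAT_MESSAGES": "50",
    "MAX_CACHE_LENGTH": "200",
    "API_ENDPOINT": "https://api.openai.com/v1/chat/completions",
}

def load_config(path: Path) -> Dict[str, str]:
    """Read KEY=VALUE pairs, skipping blank lines and # comments."""
    config = dict(DEFAULTS)
    if path.exists():
        for line in path.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            config[key.strip()] = value.strip()
    return config
```

The caching layer and API client would then take their settings from this dict instead of hard-coded constants.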

How to update to Latest releases?

Newbie here. I'm using sgpt on Termux and it's working fine. But how do I upgrade to the latest release without ruining my installed sgpt?

Where to put the generated API key?

I generated the OpenAI API key from their website; now where do I put it for it to work? Also, after just pip install shell-gpt it does not work as a command at all. Please help!

Use an example in the prompt, to encourage ChatGPT to behave

Sometimes ChatGPT is disobedient. For example: #99 But I don't know how common this is. Do you see this often? (I think I might have seen it once.)

Due to GPT's predictive nature, it was recommended to prompt it with an incomplete conversation, so that it would complete the rest. Although ChatGPT is different, it may respond to the same treatment.

This approach (shown below) is taken by plz, while rusty passes #!/bin/bash\n to initialise the response.

Suggestion

When we are asking ChatGPT to generate some shell code, we may want to give it an example first.

{The current shell prompt rules}

Input: List the files in the root folder
Output: ls /

Input: ${user_question_here}
Output: 

For the code prompt, even though we don't know the user's target language, it might still be possible to do something similar.
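A sketch of how the few-shot wrapper could be assembled (the rules text and helper name are placeholders; the Input/Output example is the one shown above):

```python
def few_shot_shell_prompt(rules: str, question: str) -> str:
    """Prepend one worked Input/Output pair so the model completes the pattern."""
    example = (
        "Input: List the files in the root folder\n"
        "Output: ls /\n"
    )
    # End with an open "Output: " so the completion is exactly the command.
    return f"{rules}\n\n{example}\nInput: {question}\nOutput: "
```

Because the prompt ends mid-pattern, a predictive model is nudged toward emitting only the command, without the "As a language model..." preamble.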

💬 Using GitHub Discussions for feature requests and ideas

Just a quick note, we have a Discussions section on GitHub where we can discuss ideas, feature requests, and other topics related to this project.

Using the Discussions feature allows us to keep all conversations in one place, making it easier to keep track of feedback and ideas. Additionally, other community members can join in and provide feedback, leading to better collaboration and a more productive conversation.

So, I encourage you to use the Discussions feature for any ideas or feature requests you might have. This will help ensure that your ideas are seen and considered by the community, and we can work together to improve the project.

Short arguments?

Hello, is there a plan to implement short arguments (such as -s instead of --shell and -e instead of --execute)? Also, if they are implemented, will they be combined (like -se instead of -s -e)? I think this would enhance the experience.
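Both parts are technically easy: argparse, for example, accepts short aliases, and single-character boolean flags can be combined POSIX-style. A small stdlib demonstration (the flag set mirrors the request; it is not sgpt's actual CLI):

```python
import argparse

parser = argparse.ArgumentParser(prog="sgpt")
parser.add_argument("prompt")
parser.add_argument("-s", "--shell", action="store_true",
                    help="generate a shell command")
parser.add_argument("-e", "--execute", action="store_true",
                    help="execute the generated command")

# Combined short flags: -se is parsed as -s -e.
args = parser.parse_args(["-se", "list files"])
```

Click/Typer, which many Python CLIs use, support the same short-option aliases.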

Implement API key reset option

Implement an option to reset the API key using a command line argument --reset-key. When the user provides this flag, delete the file located in ~/.config/shell-gpt/api_key.txt and ask for a new API key. Additionally, implement API key validation. Since OpenAI doesn't have a dedicated endpoint just for key validation, call https://api.openai.com/v1/completions with some test prompt and make sure the response status code is not 403; if it is, ask the user for a valid API key.
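A sketch of the reset half plus the status-code check described above (the file path and function names are hypothetical; the network call itself is left to the caller):

```python
from pathlib import Path

def reset_api_key(key_file: Path, new_key: str) -> None:
    """Delete the stored key and save the replacement."""
    if key_file.exists():
        key_file.unlink()
    key_file.parent.mkdir(parents=True, exist_ok=True)
    key_file.write_text(new_key)

def key_accepted(status_code: int) -> bool:
    """Interpret a test completion call: 401/403 means the key was rejected."""
    return status_code not in (401, 403)
```

On --reset-key, the CLI would prompt for a key, fire one cheap test request, and loop until key_accepted() returns True before writing the file.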

[Feature Request] Add ability to continue conversation base on previously ran commands

One big feature of ChatGPT is it can use relevant information based on previous prompts and responses, for example:

~
❯ sgpt --code "make an example request to localhost:3000 using python"
import requests

response = requests.get('http://localhost:3000')
print(response.text)

~
❯ sgpt --code "now that but with cache"
// Setting a cache
let cache = {};

// Function to be cached
function add(a,b){
    if (!cache[`${a},${b}`]) {
        cache[`${a},${b}`] = a + b;
    }
    return cache[`${a},${b}`];
}

Something like that should return the same code as before, but using requests_cache. The best approach would probably be to add a new argument such as --follow-conversation.

Thoughts?
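However the flag is spelled, the mechanics would be to persist the prior exchanges and replay them as context on the next call; a sketch using the chat-completions message format (file layout and function names hypothetical):

```python
import json
from pathlib import Path
from typing import Dict, List

def load_history(path: Path) -> List[Dict[str, str]]:
    """Load prior prompt/response messages for the follow-up conversation."""
    if path.exists():
        return json.loads(path.read_text())
    return []

def record_exchange(path: Path, prompt: str, response: str) -> None:
    """Append the latest exchange so the next call can reuse it as context."""
    history = load_history(path)
    history += [{"role": "user", "content": prompt},
                {"role": "assistant", "content": response}]
    path.write_text(json.dumps(history))
```

With --follow-conversation, the client would send load_history() plus the new question, so "now that but with cache" resolves against the earlier Python answer.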

it would be nice to have an explain option for --shell commands

Expose an explain option for shell commands, e.g.:

sgpt --shell --explain "find all files that have been changed in the last 3 days from the current directory and subdirectories, with details"

Command: find . -type f -mtime -3 -ls

Description: This command will search the current directory and all subdirectories for files that have been modified in the last 3 days. The -type f option specifies that only files should be searched, the -mtime -3 option specifies that only files modified in the last 3 days should be searched, and the -ls option provides detailed information about the files found.

alternatively playing with the prompt for a bit

➜  ~ sgpt --shell "find all files that have been changed in the last 3 days from the current directory, with details. Provide a detailed explanation of the command"
find . -type f -mtime -3 -ls

- `find` is a command used to search for files and directories in a specified location.
- `.` specifies the current directory as the starting point for the search.
- `-type f` specifies that only files should be included in the search, not directories.
- `-mtime -3` specifies that the search should include only files that have been modified within the last 3 days.
- `-ls` specifies that the output should include detailed information about each file found, including its permissions, owner, size, and modification date.

Timestamp Conversion example in readme is wrong

This kind of thing will never stop happening and it will never stop being funny. Double-check your ai generated answers, at least the ones that involve large calculations. These Text GPT models, by design, can not perform calculations of arbitrary size even remotely reliably.

With that said, here's a quote from the readme:

sgpt "$(date) to Unix timestamp"
# -> The Unix timestamp for Thu Mar 2 00:13:11 CET 2023 is 1677327191.

This is wrong. The generated timestamp is actually for some time on Feb 25 2023.
I believe this example should not be corrected and instead removed or used as an example of unreliability, as the model is not capable of reliably doing calculations like this as stated above.

Resolving path extension

Hello,

I've updated and installed Shell_GPT, imported my API key, and ran tests just to make sure that OpenAI establishes a connection. But unfortunately I run into a problem when I restart my terminal: it requires me to export my PATH each time using the command

export PATH="/Users/<user>/Library/Python/3.11/bin:$PATH"

I was hoping to resolve the PATH issue so that I don't have to re-export it each time.

Installation Issues

I can't install this with the currently provided command; it never asked for my API key. When I tried running python3 sgpt.py get_api_key and adding it manually, it never accepted my key.

Feature request: Conversation loop mode

I think it would be really cool if there was an option to start sgpt with a loop/readline mode that continuously accepted input and automatically generated a chat identifier.

For example, you should be able to invoke something like sgpt --chat-continuous, which would then print some kind of prompt character (e.g., ">"), and the user can interact with ChatGPT just by typing and hitting enter, rather than having to format the entire input as a command line argument.
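A sketch of such a loop, with the API call stubbed out so the shape is clear (all names hypothetical):

```python
import uuid

def chat_repl(ask, input_fn=input, output_fn=print) -> str:
    """Read prompts in a loop under one auto-generated chat id.

    `ask(chat_id, prompt)` stands in for the real completion call.
    Returns the chat id so the session could be resumed later.
    """
    chat_id = uuid.uuid4().hex
    while True:
        try:
            line = input_fn("> ").strip()
        except EOFError:
            break
        if line in ("", "exit", "quit"):
            break
        output_fn(ask(chat_id, line))
    return chat_id
```

Invoked as something like sgpt --chat-continuous, this would print "> ", feed each line to the model under the same session, and exit on EOF or "exit".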

Too Many Requests for url error

When i'm using the cli, its giving me this error

HTTPError: 429 Client Error: Too Many Requests for url: https://api.openai.com/v1/chat/completions

Implement caching

In some cases, we may need to run sgpt twice with the same prompt in order to pipe the output somewhere else. For example:

sgpt --code "implement merge sort algorithm using C language"


#include <stdio.h>

void merge(int arr[], int l, int m, int r)
{
    int i, j, k;
    int n1 = m - l + 1;
    int n2 =  r - m;
...

The output in this case can be quite large, and it would be helpful to use less to view it:

sgpt --code "implement merge sort algorithm using C language" | less

But we would spend an extra query and a couple of seconds of waiting just to get the same (though not always identical) response. It would be useful to implement some form of caching for the last n outputs, as well as a way to invalidate the cache (for example, via --cache and --no-cache arguments).
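A sketch of prompt-keyed caching, hashing the request parameters into a file name so identical requests hit disk instead of the API (function names hypothetical):

```python
import hashlib
import json
from pathlib import Path

def cached_completion(cache_dir: Path, request_fn, **params) -> str:
    """Return a cached response for identical params, else call through and store it."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    # Stable key: sorted JSON of the request parameters.
    key = hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()
    cache_file = cache_dir / key
    if cache_file.exists():
        return cache_file.read_text()
    result = request_fn(**params)
    cache_file.write_text(result)
    return result
```

--no-cache would simply skip the `cache_file.exists()` branch, and eviction of the oldest n entries could run on each write.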

accidentally input wrong API key

As I said, I accidentally input the wrong API key and now I can't seem to input the right one. I tried reinstalling, but that didn't work. Any help please.

Prompt Issue

Is it possible to change the initial prompt (if there is one)? I'd like GPT-3 to know I'm on Windows without having to type it in every request I make with sgpt; many shell commands that work on Linux don't work on Windows, so it would be very useful to have a prompt specifying that the user is on Windows. It's not just for this, either: a starting prompt that only needs to be set once would be convenient for many other uses.

[Feature request] Handle linebreaks

Thank you for this fantastic tool! 🙏 🙏 🙏

Your tool is a game-changer for me, especially since I can't access ChatGPT while I have my VPN enabled, so I can't use it at all while working.

It would be nice to have a natural way to handle line breaks, for example for a prompt like "convert the following code from PyTorch to TensorFlow." Perhaps sgpt could drop you into $EDITOR the same way git does, where you could paste your text and then exit.

Thoughts?
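The git-style editor hand-off can be done with a temp file and $EDITOR; a sketch (the function name is hypothetical):

```python
import os
import subprocess
import tempfile

def prompt_from_editor(editor=None) -> str:
    """Open $EDITOR on a temp file, like `git commit` does, and return what was written."""
    editor = editor or os.environ.get("EDITOR", "vi")
    fd, path = tempfile.mkstemp(suffix=".sgpt")
    os.close(fd)
    try:
        subprocess.run([editor, path], check=True)
        with open(path) as handle:
            return handle.read()
    finally:
        os.unlink(path)
```

An --editor flag could trigger this when no prompt argument is given, so multi-line code can be pasted freely.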

Reset chat

Sorry if I missed it in the readme, but how do I start over a chat (with the same --chat id)?
