
mods's Introduction

Mods!

Mods product art and type treatment
Latest Release Build Status

AI for the command line, built for pipelines.

a GIF of mods running

LLM-based AI is really good at interpreting the output of commands and returning the results in CLI-friendly text formats like Markdown. Mods is a simple tool that makes it super easy to use AI on the command line and in your pipelines. Mods works with OpenAI, Groq, Azure OpenAI, and LocalAI.

To get started, install Mods and check out some of the examples below. Since Mods has built-in Markdown formatting, you may also want to grab Glow to give the output some pizzazz.

What Can It Do?

Mods works by reading standard in and prefacing it with a prompt supplied in the mods arguments. It sends the input text to an LLM and prints out the result, optionally asking the LLM to format the response as Markdown. This gives you a way to "question" the output of a command. Mods also works with standard in or an argument-supplied prompt on its own.
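
For example, you can pipe any command's output into Mods along with a question about it (an illustrative invocation; substitute whatever command and prompt you like):

ls -l ~ | mods "explain what these files are probably for" | glow

The command's output becomes the standard in context, the quoted argument becomes the prompt, and Glow renders the Markdown response.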

Be sure to check out the examples and a list of all the features.

Installation

Mods works with OpenAI compatible endpoints. By default, Mods is configured to support OpenAI's official API and a LocalAI installation running on port 8080. You can configure additional endpoints in your settings file by running mods --settings.

OpenAI

Mods uses GPT-4 by default and will fall back to GPT-3.5 Turbo if it's not available. Set the OPENAI_API_KEY environment variable to a valid OpenAI key, which you can get from here.
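
A minimal shell setup might look like this (the key below is a placeholder; use your own):

export OPENAI_API_KEY="sk-..."
mods "say hello"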

Mods can also use the Azure OpenAI service. Set the AZURE_OPENAI_KEY environment variable and configure your Azure endpoint with mods --settings.
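
A sketch of what that could look like, assuming your settings file has an azure API entry and the model name matches one of your Azure deployments (the key is a placeholder):

export AZURE_OPENAI_KEY="..."
mods --api azure --model gpt-4 "say hello"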

LocalAI

LocalAI allows you to run a multitude of models locally. Mods works with the GPT4ALL-J model as set up in this tutorial. You can define more LocalAI models and endpoints with mods --settings.

Groq

Groq provides some models powered by their LPU inference engine. Mods will work with both their models (mixtral-8x7b-32768 and llama2-70b-4096). Set the GROQ_API_KEY environment variable to a valid key, which you can get from here.
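
For example, assuming your settings file defines a groq API entry (the key below is a placeholder):

export GROQ_API_KEY="gsk_..."
mods --api groq --model mixtral-8x7b-32768 "say hello"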

Install Mods

# macOS or Linux
brew install charmbracelet/tap/mods

# Windows (with Winget)
winget install mods

# Windows (with Scoop)
scoop bucket add charm https://github.com/charmbracelet/scoop-bucket.git
scoop install mods

# Arch Linux (btw)
yay -S mods

# Debian/Ubuntu
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://repo.charm.sh/apt/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/charm.gpg
echo "deb [signed-by=/etc/apt/keyrings/charm.gpg] https://repo.charm.sh/apt/ * *" | sudo tee /etc/apt/sources.list.d/charm.list
sudo apt update && sudo apt install mods

# Fedora/RHEL
echo '[charm]
name=Charm
baseurl=https://repo.charm.sh/yum/
enabled=1
gpgcheck=1
gpgkey=https://repo.charm.sh/yum/gpg.key' | sudo tee /etc/yum.repos.d/charm.repo
sudo yum install mods

Or, download it:

  • Packages are available in Debian and RPM formats
  • Binaries are available for Linux, macOS, and Windows

Or, just install it with go:

go install github.com/charmbracelet/mods@latest

Saving conversations

Conversations save automatically. They are identified by their latest prompt. Similar to Git, conversations have a SHA-1 identifier and a title. Conversations can be updated, maintaining their SHA-1 identifier but changing their title.
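
An illustrative session (the SHA-1 prefix below is a placeholder):

mods "write a haiku about pipelines"        # saved automatically
mods --list                                 # list saved conversations
mods --continue-last "now make it rhyme"    # continue the most recent conversation
mods --continue a1b2c3d "translate it"      # continue a conversation by SHA-1 or title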

Check the features document for more details.

a GIF listing and showing saved conversations.

Settings

--settings

Mods lets you tune your query with a variety of settings. You can configure Mods with mods --settings or pass the settings as environment variables and flags.

Dirs

--dirs

Prints the local directories used by Mods to store its data. Useful if you want to back your conversations up, for example.

Model

-m, --model, MODS_MODEL

Mods uses gpt-4 with OpenAI by default, but you can specify any model, as long as your account has access to it or you have it installed locally with LocalAI.

You can add new models to the settings with mods --settings. You can also specify a model and an API endpoint with -m and -a to use models not in the settings file.
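
For example (the LocalAI model name here is only an assumption; use whatever model your endpoint actually serves):

mods --model gpt-3.5-turbo "summarize this" < notes.md
mods --api localai --model ggml-gpt4all-j "summarize this" < notes.md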

Ask Model

-M --ask-model

Ask which model to use with an interactive prompt.

Title

-t, --title

Set a custom save title for the conversation.

Continue last

-C, --continue-last

Continues the previous conversation.

Continue

-c, --continue

Continue from the last response or a given title or SHA1.

List

-l, --list

Lists all saved conversations.

Show last

-S, --show-last

Show the previous conversation.

Show

-s, --show

Show the saved conversation with the given title or SHA-1.

Delete

--delete

Deletes the saved conversation with the given title or SHA1.

--delete-older-than=duration

Delete conversations older than the given duration (e.g. 10d, 3w, 1mo, 1y).

If the terminal is interactive, it'll first list the conversations to be deleted and then will ask for confirmation.

If the terminal is not interactive, or if --quiet is provided, it'll delete the conversations without any confirmation.
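
For example, based on the behavior described above:

mods --delete-older-than 2w            # lists matches and asks for confirmation in a TTY
mods --delete-older-than 1y --quiet    # deletes without confirmation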

Format

-f, --format, MODS_FORMAT

Ask the LLM to format the response in a given format. You can edit the text passed to the LLM by running mods --settings and changing the format-text value. You'll likely want to use this in combination with --format-as.

Format As

--format-as, MODS_FORMAT_AS

When --format is on, instructs the LLM about which format you want the output to be. This can be customized with mods --settings.
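
For example (a sketch; whether the requested format is honored ultimately depends on the model):

mods --format --format-as json "list three Unix signals with one-line descriptions"
ls ~ | mods --format "summarize these files"    # without --format-as, the default format text is used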

Role

--role, MODS_ROLE

You can have customized roles in your settings file, which will be fed to the LLM as system messages in order to change its behavior. The --role flag allows you to change which of these custom roles to use.
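
A sketch of what this could look like, assuming you define a role named shell in your settings file (the role name and its messages are illustrative):

roles:
  shell:
    - you are a shell expert
    - you do not explain anything
    - you simply output one-liner shell commands

Then pick it on the command line:

mods --role shell "list files in the current directory, sorted by size"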

Raw

-r, --raw, MODS_RAW

Print the raw response without syntax highlighting, even when connected to a TTY.

Max Tokens

--max-tokens, MODS_MAX_TOKENS

Max tokens tells the LLM to respond in fewer than this number of tokens. LLMs are better at longer responses, so values larger than 256 tend to work best.

Temperature

--temp, MODS_TEMP

Sampling temperature is a number between 0.0 and 2.0 and determines how confident the model is in its choices. Higher values make the output more random and lower values make it more deterministic.

Stop

--stop, MODS_STOP

Up to 4 sequences where the API will stop generating further tokens.

Top P

--topp, MODS_TOPP

Top P is an alternative to sampling temperature. It's a number between 0.0 and 1.0, with smaller numbers narrowing the domain from which the model will create its response.

No Limit

--no-limit, MODS_NO_LIMIT

By default, Mods attempts to size the input to the maximum size allowed by the model. You can potentially squeeze a few more tokens into the input by setting this, but you also risk getting a max-token-exceeded error from the OpenAI API.

Include Prompt

-P, --prompt, MODS_INCLUDE_PROMPT

Include prompt will preface the response with the entire prompt, both standard in and the prompt supplied by the arguments.

Include Prompt Args

-p, --prompt-args, MODS_INCLUDE_PROMPT_ARGS

Include prompt args will include only the prompt supplied by the arguments. This can be useful if your standard in content is long and you just want a summary before the response.
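
For example (an illustrative pipeline):

git diff | mods -p "write a commit message for this diff"

Only the argument prompt is echoed above the response; the potentially long diff from standard in is not.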

Max Retries

--max-retries, MODS_MAX_RETRIES

The maximum number of retries for failed API calls. Retries use exponential backoff.

Fanciness

--fanciness, MODS_FANCINESS

Your desired level of fanciness.

Quiet

-q, --quiet, MODS_QUIET

Only output errors to standard error. Hides the spinner and success messages that would otherwise go to standard error.

Reset Settings

--reset-settings

Backup your old settings file and reset everything to the defaults.

No Cache

--no-cache, MODS_NO_CACHE

Disables conversation saving.

Wrap Words

--word-wrap, MODS_WORD_WRAP

Wrap formatted output at a specific width (default is 80).

HTTP Proxy

-x, --http-proxy, MODS_HTTP_PROXY

Use an HTTP proxy to connect to the API endpoints.

Using within Vim/neovim

You can use mods as an assistant inside Vim. Here are some examples:

  1. :'<,'>w !mods explain this
  2. :.!mods -f write a copyright footer for mycompany, 2024
  3. :'<,'>.!mods improve this code

You can also add user commands for common actions, for example:

command! -range -nargs=0 ModsExplain :'<,'>w !mods explain this, be very succinct
command! -range -nargs=* ModsRefactor :'<,'>!mods refactor this to improve its readability
command! -range -nargs=+ Mods :'<,'>w !mods <q-args>

This allows you to visually select some text and run :ModsExplain, :ModsRefactor, or :Mods your prompt.

Whatcha Think?

We’d love to hear your thoughts on this project. Feel free to drop us a note.

License

MIT


Part of Charm.

The Charm logo

Charm热爱开源 • Charm loves open source

mods's People

Contributors

acuteenvy, aymanbagabas, bashbunni, bradyjoslin, caarlos0, cloudbridgeuy, cvan, dependabot[bot], emivespa, goostleek, im-aiex, maaslalani, meowgorithm, muesli, rachfop, sozercan, streppel, sylv-io, yuguorui


mods's Issues

Is this the proper way to configure gpt-4-1106-preview in `mods --settings`?

Is this the proper way to configure gpt-4-1106-preview?

$ mods --settings

# Default model (gpt-3.5-turbo, gpt-4, ggml-gpt4all-j...).
#default-model: gpt-4
default-model: gpt-4-1106-preview
(...)
# Default character limit on input to model.
#max-input-chars: 12250
max-input-chars: 398000
(...)
apis:
  openai:
    base-url: https://api.openai.com/v1
    api-key-env: OPENAI_API_KEY
    models:
(...)
      gpt-4-32k:
        aliases: ["32k"]
        max-input-chars: 98000
        fallback: gpt-4
      gpt-4-1106-preview:
        aliases: ["128k"]
        max-input-chars: 392000
        fallback: gpt-4-1106-preview
(...)

Also, maybe add it as an option to the default config now? After all, it's one of the available models.

(Or maybe add a flag like mods --update-settings-with-current-models that would fetch info about new models, from the repo or via the API?)

Request: Show last conversation

Just like -C | --continue-last allows continuing a conversation without copying and pasting the ID, it would be great to have a -S | --show-last that is a shortcut to -s <last ID>.

This would allow evolving a conversation interactively, and then after it is done, save it somewhere with mods -S > the.log

confused by configuration file options operation - pls help

Hi! Firstly, LOVE 🥰 mods, but I am confused by the mods configuration file (mods -s).
I've set up the default model etc., but it isn't working as expected; it seems to insert some extra blank lines in the prompt.
LocalAI is working fine.

I want to issue the mods command as follows:
$ mods "how are you today?"

and WANT that command to behave like:
$ mods -a localai -m ./my_local_wizardlm.bin "how are you today?"

but I can't quite set up the config using mods -s correctly for that to happen. PLS HELP!

PS: issuing $ mods -a localai -m ./my_local_wizardlm.bin "how are you today?" works as expected.
I know I could use a shell alias to do this, but I'd prefer not to, since I plan to use mods for more work and need the configuration to be correct.

[feature request] option for reading OPENAI_API_KEY from file

It would be nice if I could use a file like ~/.openai_secret instead of manually adding the OPENAI_API_KEY env var every time.

Even though I can export OPENAI_API_KEY='xxx' in my ~/.zshrc, that isn't ideal because I would no longer be able to publicly share my dotfiles. Or, I can at least do this:

// in ~/.zshrc
export OPENAI_API_KEY=$(cat ~/.openai_secret) # and make the file individually

but to me, it seems verbose.

If you agree with my idea, I would also be happy to add the option myself!

P.S. Other options are a similar case, but the token in particular would be better kept separate.

Does not support <(cat file1 file2)

cat file1 file2 | mods "prompt" works just fine; however, to keep the pipe stream available for other context I use this syntax in bash: mods "prompt" <(cat file1 file2), which fails. Here is an example response I get, which indicates to me that mods is not processing the file descriptor.

$ mods "describe the purpose of these two Ruby classes" <(cat cli.rb config.rb)

  It seems there might be a bit of confusion in the question. In the context
  of Ruby programming,  /dev/fd/63  doesn't refer to Ruby classes. Rather, it
  looks like you are referring to a special file in a Unix-like operating
  system.

Allow copy code snippets to clipboard

I would appreciate it if there were a "--copy" option that could accept an optional integer argument specifying which code snippet (counting from the end of the conversation) to copy to the clipboard.

Conversation title includes format-text content

Hi all, I don't know if this is the intended behaviour. It appeared unintentional to me, so I wanted to flag it: the content from the format-text setting is appended to the title of a conversation when using the -f option.

Screenshot 2024-01-06 at 15 35 07

I'd suggest this doesn't provide any information and could be omitted.

Thanks for all the hard work!

You've hit your OpenAI API rate limit

mods version dev
module version: v0.2.0, checksum: h1:/jfv3SSTMTpS7EGICUmQ8ON6vUr9cldtdnACS9ySldA=
platform windows, powershell

ERROR You’ve hit your OpenAI API rate limit.

error, status code: 429, message: You exceeded your current quota, please check your plan and billing details.

I just pulled it on Windows and see that it is reporting I've hit my rate limit, which is not true.

I've put in a fresh token; Write-Output $env:OPENAI_API_KEY prints the token to the terminal.

How to fix max prompt size exceeded error?

I am feeding mods a document and asking a question, just like one of the examples in the readme.

i.e.,

mods -f "my question?" < mydocument.md | glow

Gives me the following error:

   ERROR  Maximum prompt size exceeded.

  error, status code: 400, message: This model's maximum context length is 4097 tokens. However, you requested 5624
  tokens (1527 in the messages, 4097 in the completion). Please reduce the length of the messages or completion.

How can I fix this? Is the only way to send a smaller document?

Thank you

Additional options in mods.yml

It would be convenient if we had the following possibilities:

  • specify the API key directly (instead of referencing an env var)
  • specify a file containing the API key
  • specify the default API to use
    Because different APIs can have the same models.
    (I know about aliases, but that can be inconvenient.)

Can this be modified to support Azure OpenAI endpoints?

To use the Azure Cognitive services OpenAI endpoints, additional details are required.

OPENAI_API_TYPE="azure" # might need to support azure_ad, azuread as well, via OAuth...
OPENAI_API_VERSION="2023-05-15" # OR 2021-11-01-preview, or...
OPENAI_API_BASE="<The URL of your resource>" # See below...
OPENAI_API_ENGINE="<The name of your deployment>"

For example, you might have an Azure OpenAI Resource named "My-Example-Cognitive-Resource", and in that resource you have deployed a gpt-35-turbo model named "MyGPT35". 1

From that, the URI constructed for the chat.completions object will be:

https://my-example-cognitive-resource.openai.azure.com/openai/deployments/MyGPT35/chat/completions?api-version=2023-05-15

If I add what I can from the above to the mods config via mods --settings like this:

apis:
  azure:
    base-url: https://my-example-cognitive-resource.openai.azure.com
    models:
      MyGPT35:
        aliases: ["35t"]
        max-input-chars: 12250
        fallback:

And then call:

cat README.md | mods --api azure --model MyGPT35 "write a new section to this README documenting a Azure support feature"

It tries to process, but ends up returning:

ERROR  Unknown OpenAI API error.
error, status code: 404, message: Resource not found

Without support for Azure handling, mods is probably constructing the URL as:

https://my-example-cognitive-resource.openai.azure.com/engines/MyGPT35/chat/completions

instead of the correct sample above.

Footnotes

  1. This is a fake resource or deployment, just used as an example!

Issues when redirecting stdout

mods -q do you like kittens > 1.md
  1. The output is expected to be redirected. And it is, but in parallel it is also printed to the terminal, and then cleared on exit.
  2. There are ANSI sequences in 1.md. See also #86

�[38;5;252m�[0m�[38;5;252m�[0m  �[38;5;252mI don't have personal feelings or emotions, so I don't have the capacity to�[38;5;252m �[0m�[0m
�[0m�[38;5;252m�[0m  �[38;5;252mlike or dislike anything, including kittens. However, I can provide�[38;5;252m �[0m�[38;5;252m �[0m�[38;5;252m �[0m�[38;5;252m �[0m�[38;5;252m �[0m�[38;5;252m �[0m�[38;5;252m �[0m�[38;5;252m �[0m�[38;5;252m �[0m�[0m
�[0m�[38;5;252m�[0m  �[38;5;252minformation and answer questions about kittens if you're interested�[0m�[38;5;252m!�[0m�[38;5;252m �[0m�[38;5;252m �[0m�[38;5;252m �[0m�[38;5;252m �[0m�[38;5;252m �[0m�[38;5;252m �[0m�[38;5;252m �[0m�[38;5;252m �[0m

Feature Request: Add prompt templates

Hi,

First of all, thank you for all the work on this.

Many instruction-following models are much more effective when a prompt template is used (which is normally prefixed to the actual prompt). Many models give vastly different responses based on the prompting.
An instruction-following template, for example (Alpaca):

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Prompt goes here

### Response:

It would be great to see a feature where prompt templates can be specified:

  • In the CLI by template name/alias
  • Globally (?) for all models (maybe a true/false setting as well as default prompt template setting)
  • Per model configuration

And allow the user to create the prompt templates, or even include some default ones.

go installation on windows

I installed mods with go successfully:
go install github.com/charmbracelet/mods@latest

After installation, my PC crashes just by running mods in Git Bash.
Do you have any recommendations for installing on Windows?

Unknown flag --continue-last

Hello,

I'm Mr. Cherry, a very friendly guy.

mods --continue-last "hello" outputs error:

Unknown flag --continue-last

Environment: raspberry pi 4, debian

panic: runtime error: invalid memory address or nil pointer dereference on OpenBSD

Moin!

I just built mods on OpenBSD -current and got the following panic when trying to run mods. Reproducible with v1.0.0, v1.1.0, and the latest commits from main. v0.2.0 runs fine.

If I can provide any further information, please let me know.

$ mods
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xf8db5c]

goroutine 1 [running]:
modernc.org/libc.(*TLS).setErrno(0x17533e0?, {0x245660?, 0x58f260?})
        /home/jhuhn/go/pkg/mod/modernc.org/[email protected]/etc.go:189 +0xdc
modernc.org/libc.Xmalloc(0x2decc0?, 0x598868?)
        /home/jhuhn/go/pkg/mod/modernc.org/[email protected]/mem.go:34 +0xc5
modernc.org/libc.init()
        /home/jhuhn/go/pkg/mod/modernc.org/[email protected]/libc_openbsd.go:49 +0x1db

Please stop releasing "tarbombs"

Your current releases create "tarbombs" that will overwrite any existing file that shares a name with a file in your release. This is a very well-established anti-pattern in the industry. Please create a folder (probably mods) instead and properly tar that folder to avoid this.

ANSI Escape Sequences Not Rendering Properly in "Window Console Host" when Retrieving Answers from OpenAI API

When using the latest version (mods do you like kittens) or v0.2 when piped to glow (mods do you like kittens | glow), ANSI escape sequences are not being rendered correctly in the Windows Console Host. Instead, raw codes like ←[38;5;252m←[0m←[38;5;252m←[0m ←[38;5;252m are displayed.

  1. This issue is specific to retrieving answers from the OpenAI API. Other cases such as usage messages and error messages display correctly.
  2. The issue is only observed in the Windows Console Host. Windows Terminal functions as expected with no display problems.

Although this issue is likely related to the Windows Console Host, there may be workarounds or solutions that could improve the rendering of ANSI escape sequences in this context. Any insights or suggestions to resolve this issue would be greatly appreciated.

The default model is GPT-4 , which is not available for most people

So most people's[0] first couple of runs will just result in

   ERROR  Unknown OpenAI API error.

  error, status code: 404, message: The model: `gpt-4` does not exist

even if they have their OpenAI API key set in the env vars.

Obviously we could just do mods -m gpt-3.5-turbo blah each time, but the number of keystrokes will kill me :(

  • [0] even people like me who requested access to gpt-4 months ago :(

Support Mistral API

Mistral (known for their 7B model and more recently their Mixture of Experts model) have recently started offering an API: https://docs.mistral.ai/api/

It would be great if it could be used with mods.

This is similar to the feature request for supporting Ollama (#162) and supporting llamafile (#168).

Maybe one solution could be to use a library that already offers an abstraction over all the different LLM APIs? There's the Go port of LangChain, for example, which already supports various LLMs as backends.

But it doesn't support the Mistral API yet.

mods should have option to reset to 'factory default' settings

Hi! Firstly, awesome package, THANK YOU!

I screwed up my mods installation while changing settings using mods -s

and the error is:

$ mods

   ERROR  There was an error in your config file.                                                              

  yaml: line 3: did not find expected key           

Not very helpful; at a minimum, I recommend the error should state where the YAML settings file is located.
Better still would be an option to roll back to the default settings.

If this were in Python I would have pushed a fix, but I don't know Go.
Thanks again!

Feature Request: add text area mode to enter a new input

Hello mods team :)

I've wanted for quite some time to use a CLI tool to interact with my preferred LLM models (it's the best way to incorporate AI into your work), so thank you so much for this project.

My issue is the following: I use mods quite frequently but occasionally face a couple of issues when inserting extensive inputs. Let's say, for instance, I'd like to include some code as part of an input. I'd typically wrap it within triple backticks, but I'd like a streamlined way to go back and forth, enabling me to modify the final input effortlessly. Therefore, would it be possible to incorporate a --text-area mode flag? This would open a text area bubble component, providing a more comfortable way for me to type my prompt and then pass the value as the prompt input.

In fact, I've actually forked this repository and have been working on this feature for my own use. However, it would be fantastic if it could be incorporated into the official codebase, potentially benefitting others who might find it helpful.

Example ⬇️
Screenshot 2024-01-01 at 10 52 53

freezes on raspberry pi os

It often freezes when executing a prompt on a Raspberry Pi 3 B+ running Debian 11.
The whole system freezes, so there is no error message.

installation failing on Mint 21.1

$ sudo apt update && sudo apt install mods
Ign:1 http://packages.linuxmint.com vera InRelease                                      
Hit:2 http://archive.ubuntu.com/ubuntu jammy InRelease                                  
Hit:3 http://packages.linuxmint.com vera Release                        
Hit:4 http://archive.ubuntu.com/ubuntu jammy-updates InRelease                          
Get:6 http://archive.ubuntu.com/ubuntu jammy-backports InRelease [108 kB]               
Get:7 http://apt.insync.io/mint vanessa InRelease [5,537 B]                             
Ign:8 http://apt.keepsolid.com/linuxmint vera InRelease                                 
Hit:9 http://packages.microsoft.com/repos/code stable InRelease                         
Hit:10 http://ppa.launchpad.net/agornostal/ulauncher/ubuntu jammy InRelease             
Get:11 https://repo.charm.sh/apt * InRelease                                            
Hit:12 https://repo.steampowered.com/steam stable InRelease                             
Hit:13 https://repo.protonvpn.com/debian stable InRelease                               
Hit:14 https://repo.anaconda.com/pkgs/misc/debrepo/conda stable InRelease               
Hit:15 http://security.ubuntu.com/ubuntu jammy-security InRelease                       
Hit:16 https://dl.google.com/linux/chrome/deb stable InRelease                          
Err:17 http://apt.keepsolid.com/linuxmint vera Release                                  
  404  Not Found [IP: 144.217.71.199 80]
Ign:18 https://download.docker.com/linux/ubuntu vera InRelease                          
Hit:19 https://linux.teamviewer.com/deb stable InRelease                                
Err:20 https://download.docker.com/linux/ubuntu vera Release                            
  404  Not Found [IP: 65.8.228.50 443]
Hit:21 http://ppa.launchpad.net/danielrichter2007/grub-customizer/ubuntu jammy InRelease
Hit:22 https://ppa.launchpadcontent.net/kubuntu-ppa/backports-extra/ubuntu jammy InRelease
Hit:23 http://ppa.launchpad.net/flatpak/stable/ubuntu jammy InRelease                   
Hit:24 http://ppa.launchpad.net/flexiondotorg/quickemu/ubuntu jammy InRelease           
Hit:25 https://ppa.launchpadcontent.net/kubuntu-ppa/backports/ubuntu jammy InRelease    
Hit:26 http://ppa.launchpad.net/team-xbmc/ppa/ubuntu jammy InRelease
Hit:27 http://ppa.launchpad.net/touchegg/stable/ubuntu focal InRelease
Hit:28 http://ppa.launchpad.net/touchegg/stable/ubuntu jammy InRelease
Hit:29 http://ppa.launchpad.net/yannick-mauray/quickgui/ubuntu jammy InRelease
Reading package lists... Done                        
W: http://ppa.launchpad.net/agornostal/ulauncher/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
E: The repository 'http://apt.keepsolid.com/linuxmint vera Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'https://download.docker.com/linux/ubuntu vera Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: http://ppa.launchpad.net/danielrichter2007/grub-customizer/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
W: http://ppa.launchpad.net/flatpak/stable/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
W: http://ppa.launchpad.net/flexiondotorg/quickemu/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
W: http://ppa.launchpad.net/team-xbmc/ppa/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
W: http://ppa.launchpad.net/touchegg/stable/ubuntu/dists/focal/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
W: http://ppa.launchpad.net/touchegg/stable/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
W: http://ppa.launchpad.net/yannick-mauray/quickgui/ubuntu/dists/jammy/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
$

Operating System: Linux Mint 21.1
KDE Plasma Version: 5.25.5
KDE Frameworks Version: 5.98.0
Qt Version: 5.15.3
Kernel Version: 5.19.0-43-generic (64-bit)
Graphics Platform: X11
Processors: 12 × AMD Ryzen 5 5600U with Radeon Graphics
Memory: 13.5 GiB of RAM
Graphics Processor: RENOIR

Support Local Llamafile binary/server?

This is the worst kind of issue. A feature request based on a newly released related tool. I'm sorry. 🫣

With the recent release of llamafile it'd be really cool to be able to run this completely locally without sending data up to OpenAI.

Edit: This seems similar in spirit to #162. 👍

Document how to use with Azure OpenAI

I'm not able to get it to work. I keep hitting 404s.

  azure:
    # Set to 'azure-ad' to use Active Directory
    # Azure OpenAI setup: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/create-resource
    base-url: https://<resourcename>.openai.azure.com
    api-key-env: AZURE_OPENAI_KEY
    models:
      gpt-4:
        aliases: ["az4"]
        max-input-chars: 24500
        fallback:
      gpt-35-turbo-16k:
        aliases: ["az35t"]
        max-input-chars: 12250
        fallback:
      gpt-35:
        aliases: ["az35"]
        max-input-chars: 12250
        fallback:

image

Despite this I see a 404

echo say hi | mods -a azure -m gpt-35-turbo-16k

   ERROR  Missing model 'gpt-35-turbo-16k' for API 'azure'.

  error, status code: 404, message: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.

Am I missing something?

Tokens in ENV vars is insecure (ChatGPT agrees)

Maybe next time use the tool to avoid writing insecure code.

$ mods -f 'is it safe to store passwords and tokens in environment variables?'

No, it is not safe to store passwords and tokens directly in environment variables. While environment variables provide a convenient way to store configuration data, they are not designed to securely store sensitive information such as passwords and tokens.

Storing passwords and tokens in environment variables can expose them to potential security risks. Environment variables are easily accessible to any process running on the same machine, making it easier for unauthorized users or malicious programs to access this sensitive data.

To enhance security, it is recommended to use secure storage mechanisms specifically designed for sensitive data, such as a dedicated password manager or a secure key vault. These tools provide additional layers of protection, including encryption and access control, to safeguard sensitive information from unauthorized access.

Please add the option to read this from a file (at least).

Prompt option '-p' does not output prompt arguments

Mods is a great CLI tool! Thank you for all the effort in creating it, sharing it with the community, and maintaining it.

I noticed that passing the -p option to the command doesn't repeat the prompt back in the output. From reading the usage info, this is what I understood the result should be.

Screenshot 2024-01-06 at 15 29 42

Let me know if you need any more information, or if I've made a user error.

Cheers.

--quiet still dumps ANSI terminal escapes

They may be invisible, but they are still a problem when using the filter command from within a Vim session, for example.

$ mods --quiet 'do you like kittens'
^[[?25l ^[[0D^[[2K^[[?25h^[[?1002l^[[?1003lAs an AI, I do not have personal preferences or          emotions, so I do not have the ability to like or dislike anything, including kittens.

Saved Prompt Management

I really like the work you have done on mods. A new feature set that I found to be very helpful is saved, parameterized prompt management. I wrote a Ruby script to see just how nice this kind of prompt management would be as a wrapper around mods, and I think it works out pretty nicely. It would be great if mods implemented the same kind of thing my Ruby wrapper does.

You can see my Ruby wrapper at https://github.com/MadBomber/scripts/blob/master/aip.rb

I define a keyword (a parameter) within a prompt as any UPPER case text within square brackets. This is a [KEYWORD OR PHRASE] in a prompt.

Prompts are saved in my $HOME directory within the ~/.prompts directory. Prompts are text files. Any line in the text file that begins with a "#" character is considered a comment and is not included in the raw prompt that is sent to mods.

Each prompt when evaluated has its keywords replaced with input from the user. This input is saved in a JSON file with the same name as the prompt. If the user runs the prompt again, that JSON file is used as default values for each of the keywords. When the user just hits return on a keyword query, the default shown in (default) parentheses is taken as the value.

An advantage of saving these parameterized prompts in a common directory is that you can make that a git repository and version control your prompts.

Suggestion: Introduce --delete-older-than flag to remove older conversations

btw, thanks for this great tool. I use it a lot and it has improved many of my workflows.

After some time I collected a lot of conversations and was looking for an easy way to delete older ones. Currently the only way to do this is to use a script to delete each conversation explicitly.

How about we introduce a --delete-older-than flag, analogous to the usage of nix-collect-garbage, where we can automatically delete any conversation older than the period we choose.

e.g.

# delete all conversations older than 8 hours
mods --delete-older-than 8h
# delete all conversations older than 2 weeks
mods --delete-older-than 2w
# delete all conversations older than 1 year
mods --delete-older-than 1y

In my opinion, we should not rely on time.ParseDuration to define the duration, because it does not define suffixes for weeks, months, or years. Instead, we should write our own parser and not expect it to be too precise in terms of month length or leap years.
IMHO we should not support minutes and should use m for months instead, but I'm open to any suggestions on the period format.

Like my last suggestion, I'm happy to work on this feature, but it probably won't be as easy as the last time. 😉

Any thoughts?

Allow running a shell command like shell_gpt

https://github.com/TheR1D/shell_gpt has a nice sgpt -s 'print me the size of all the files in the src/ dir', which then offers to execute, describe, or cancel the command returned by ChatGPT. It's very convenient for one-offs (and it caches the results so they can be rerun multiple times without hitting the API).

It's also nice with --editor to create the prompt in an editor.

Similarly, https://github.com/KillianLucas/open-interpreter allows running arbitrary code and could be a nice integration with the beautiful Charm UI.

Google Bard

Would be great to be able to also use Google Bard.

How to use the new GPT-4-Turbo-models

Hey everybody,

just for everybody looking, here is how to use the new gpt-4-1106-preview ("GPT-4-Turbo") model with mods:

Type mods --settings

Now edit the presented file as follows:

Add a new model entry for gpt-4-1106-preview:

apis:
  openai:
    base-url: https://api.openai.com/v1
    api-key-env: OPENAI_API_KEY
    models:
      gpt-4-1106-preview:
        aliases: ["4t"]
        max-input-chars: 350000
        fallback: gpt-4

Now at the very top of the file change the default-model:

default-model: gpt-4-1106-preview

Save and exit the editor.

You can verify your change by trying something like this:

mods "What is your knowledge cutoff?"

The answer should mention April 2023, because that is the knowledge cutoff for GPT-4 Turbo.

  My knowledge is updated until April 2023. Any events or developments that
  have occurred after this time may not be included in my responses. If you
  have questions about anything that has happened after my knowledge cutoff,
  it is advisable to consult the latest sources for the most recent
  information.

Feature request: Ollama as an API option

First of all, thanks for the truly excellent tool! mods has become an indispensable part of my life on the CLI.

This is a humble feature request:

Provide an option to use ollama as an API, as in the following example:

mods --api ollama --model llama2:latest "Why is the sky blue?"

Ollama provides a REST API, see their docs here.

An example API call:

curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt":"Why is the sky blue?"
}'

Feature request: ask user for prompt

Currently, Mods prints the help if the user didn't provide a prompt (either as arguments or from stdin). Instead of printing the help, prompt the user for input if stdin is a terminal; otherwise, print a "not a terminal" error along with the help.
