
paul-gauthier / aider


aider is AI pair programming in your terminal

Home Page: https://aider.chat/

License: Apache License 2.0

Python 93.52% HTML 0.93% SCSS 0.46% Dockerfile 0.21% Shell 0.42% Scheme 4.46%
Topics: chatgpt, cli, command-line, gpt-4, openai, gpt-3, gpt-35-turbo

aider's People

Contributors

a1ooha, ameramayreh, ctoth, eltociear, fahmad91, h0x91b, jackhallam, jevon, joshuavial, mobyvb, omri123, paul-gauthier, ryanfreckleton, taik, zackees, zestysoft



aider's Issues

Changes not applied to file when using gpt-3.5-turbo

Hi, when I use aider, the response is printed to the terminal but not applied to the file as described and shown in the screencasts. I don't have GPT-4 access yet, though, so I am running with the -3 flag.

Thanks for doing this, eager to use it :) .

Running RepoMap in isolation

Given the following test, shouldn't I see output similar to the sample tree found in ctags.md? Currently I'm only seeing filenames in the output.

    def test_my_repo_map(self):
        temp_dir = os.path.abspath("../aider")
        rm = RepoMap(root=temp_dir)
        chat_fnames = [temp_dir+'\\main.py', temp_dir+'\\io.py']
        other_fnames = []
        ranked_tags_map = rm.get_ranked_tags_map(chat_fnames, other_fnames)
        print("\n" + ranked_tags_map)
        self.assertIn("append_chat_history (self, text, linebreak=False, blockquote=False)", ranked_tags_map)

============================== 1 failed in 0.87s ==============================
FAILED [100%]
io.py
main.py

OS: Windows 11

ctags --version
Universal Ctags 6.0.0(v6.0.0), Copyright (C) 2015-2022 Universal Ctags Team
Universal Ctags is derived from Exuberant Ctags.
Exuberant Ctags 5.8, Copyright (C) 1996-2009 Darren Hiebert
Compiled: Dec 16 2022, 02:15:28
URL: https://ctags.io/
Output version: 0.0
Optional compiled features: +win32, +wildcards, +regex, +gnulib_regex, +internal-sort, +unix-path-separator, +iconv, +option-directory, +xpath, +json, +interactive, +yaml, +case-insensitive-filenames, +packcc, +optscript, +pcre2

Ctags not recognized

I installed universal ctags but it is not the latest version.

When I run aider, it doesn't seem to recognize it, although I can use ctags on the commandline:

aider
Model: gpt-4
Git repo: none
Repo-map: disabled
Use /help to see in-chat commands.

The Repo-map: disabled part indicates that ctags was not recognized, right?

Could the installed universal-ctags be too old?

which ctags
/usr/bin/ctags

ctags --version

Universal Ctags 0.0.0, Copyright (C) 2015 Universal Ctags Team
Universal Ctags is derived from Exuberant Ctags.
Exuberant Ctags 5.8, Copyright (C) 1996-2009 Darren Hiebert
  Compiled: Apr 23 2021, 18:29:07
  URL: https://ctags.io/
  Optional compiled features: +wildcards, +regex, +iconv, +option-directory, +xpath, +json, +interactive, +sandbox, +yaml, +packcc

UPDATE:

The solution given by @KnotEnvy worked; I created a git repo in the working directory first:

$ git init
Initialized empty Git repository in /workdir/.git/
$ aider app.py
Model: gpt-4
Creating empty file app.py
Files not tracked in .git:
 - app.py
Add them? y
Added app.py to the git repo
Commit b972fab Added new files to the git repo: app.py
Git repo: .git
Repo-map: universal-ctags using 1024 tokens

So my ctags version seems to be OK.

Openai-api-base / problems getting it to work

Hello Paul,

I noticed that the OpenAI API base option is not included in the --help output:

parser.add_argument(
    "--openai-api-base",
    metavar="OPENAI_API_BASE",
    default="https://api.openai.com/v1",
    help="Specify the OpenAI API base endpoint (default: https://api.openai.com/v1)",
)

Is the --help info not automatically generated?
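For reference, argparse does build the --help entry automatically from the help= string passed to add_argument; a minimal standalone sketch (not aider's actual parser) shows the behavior:

# Minimal standalone sketch: argparse generates the --help text from the
# help= string passed to add_argument.
import argparse

parser = argparse.ArgumentParser(prog="aider-demo")
parser.add_argument(
    "--openai-api-base",
    metavar="OPENAI_API_BASE",
    default="https://api.openai.com/v1",
    help="Specify the OpenAI API base endpoint (default: https://api.openai.com/v1)",
)
parser.parse_args(["--help"])  # prints usage, including --openai-api-base, then exits

So if the option is missing from your installed aider's --help output, the installed version may simply predate that argument.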

Greetings

getting aider + ctags installed on OSX

Hi, I'm trying to figure out how to use ctags with aider, but it's not clear from the instructions.

I cloned aider, installed it, and then cloned ctags and put it in the aider directory. I also installed that. Aider seems to be available because the "aider" command works, but it doesn't see universal ctags. Am I doing something wrong?

I navigate to my project directory, type "aider", and it loads up, and I get this:

(aider) ➜  pricer git:(main) ✗ aider .
Model: gpt-4
Git repo: .git
Repo-map: basic using 1024 tokens (universal-ctags not found)
Use /help to see in-chat commands.

Color for diff output

I understand I can set colors for various texts like so:

aider --user-input-color blue --tool-output-color blue --tool-error-color blue --assistant-output-color blue

I tried various combinations and it changes the chat colors. I also found some relevant code in io.py, for example InputOutput.__init__.

But I would very much like to be able to change the color of the diff output. I could apply this in my own fork or locally.

As you can see in the image, in my terminal the diff appears in a light color, maybe inverted:

[screenshot omitted]

Could you please point me to the code where the inversion or the light color happens? Or if this is done by some library, where does the output happen?

Thank you very much!

Visual Studio 2022 Integration.

When I tried aider with gpt-4 and the repo map on my C# project, I encountered many problems:

  1. I don't understand how the confirmation system works: I write "y" or "yes" and it keeps asking me to confirm the confirmation, and I don't understand why.
  2. Whenever it tries to edit anything, it can't edit existing files in my C# project from Anaconda PowerShell, even WITH admin permissions.
  3. It has created some files, but Visual Studio recognizes them as external files and doesn't add them to the project, so I need to do it manually.
  4. When it created classes, they were created without using directives. Why?
  5. Finally, the formatting of the files was wrong; I needed to add the using imports in every file and shift-tab every class created by aider.
    Why is this happening?

Integration of base url

Hello Paul,
Is it possible to add an OPENAI_API_BASE / openai.api_base endpoint option?

default:
openai.api_base = "https://api.openai.com/v1/chat/completions"

https://platform.openai.com/docs/models/default-usage-policies-by-endpoint

That way I would be able to try local models with it, for example via this method:
https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md

Model I want to try to use: https://huggingface.co/bigcode/starcoderplus
And also Orca once it is released.
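For context, FastChat exposes an OpenAI-compatible server, so with the 0.x openai client the switch is roughly just a different base URL; a hedged sketch (the port and model name below are assumptions, not aider options):

# Hedged sketch: point the 0.x openai client at a local OpenAI-compatible
# server such as FastChat instead of api.openai.com.
import openai

openai.api_key = "EMPTY"                      # local servers typically ignore the key
openai.api_base = "http://localhost:8000/v1"  # assumed FastChat default port

resp = openai.ChatCompletion.create(
    model="vicuna-7b-v1.3",                   # whatever model the local server is serving
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)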

Thanks in advance.
Greetings from Germany

Feature Request: Support Open Source LLM Models & Oobabooga Webui

Currently, this only supports OpenAI models via the OpenAI API. Oobabooga Webui has an API extension that allows requests to be sent to it, so that open-source models generate the content instead, completely locally. Would it be possible to add support for the webui API in the future?

[Bug] multiple lines of unnecessary repetitions

Hey there,

So I tried it again. Backoff works! Thanks.

But now I have the following:
Prompt:

> as the lead software architect please explain me, the product manager the codebase so I get a general sense and give me three suggestions where huge improvement potential lies

Answer:


As the lead software architect, I'll provide you with an overview of the codebase and suggest three areas with potential for significant improvement.
As the lead software architect, I'll provide you with an overview of the codebase and suggest three areas with potential for significant improvement.
As the lead software architect, I'll provide you with an overview of the codebase and suggest three areas with potential for significant improvement.

[... the same sentence repeats many more times, eventually alternating with a repeated "Codebase Overview:" heading ...]

Codebase Overview:

The codebase is organized into several directories, each serving a specific purpose:                                                                                             

And then it continues with the correct output.

Repository was:

AGiXT

Add files with wildcards

For small projects it would be nice to be able to add all files and/or use wildcards.
It's annoying having to do it one by one every time, unless I missed something.

Feature Request / implement Backoff

Hello there,

thanks for making this open source!
I have a request: since I'm still on the free credits of the OpenAI API, I run into rate limits.

fetch_starred_repos.py
Add these files to the chat? y
RateLimitError: You exceeded your current quota, please check your plan and billing details.
Retry in 1 seconds.
RateLimitError: You exceeded your current quota, please check your plan and billing details.
Retry in 1 seconds.
RateLimitError: You exceeded your current quota, please check your plan and billing details.
Retry in 1 seconds.
RateLimitError: You exceeded your current quota, please check your plan and billing details.
Retry in 1 seconds.
RateLimitError: You exceeded your current quota, please check your plan and billing details.
Retry in 1 seconds.
RateLimitError: You exceeded your current quota, please check your plan and billing details.
Retry in 1 seconds.
RateLimitError: You exceeded your current quota, please check your plan and billing details.
Retry in 1 seconds.
RateLimitError: You exceeded your current quota, please check your plan and billing details.

Is it possible for you to use this library to avoid that?
https://pypi.org/project/backoff/
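For illustration, a minimal sketch of what using that library around the OpenAI call might look like (not aider's actual code):

# Minimal sketch: retry the OpenAI call with exponential backoff on
# rate-limit errors, using the backoff library.
import backoff
import openai

@backoff.on_exception(backoff.expo, openai.error.RateLimitError, max_time=60)
def create_completion(**kwargs):
    return openai.ChatCompletion.create(**kwargs)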

Thanks in advance.
Greetings from Germany

aider command not found

Hi, I'm trying to get started with aider, but after installing aider-chat and exporting OPENAI_API_KEY I get "zsh: command not found: aider" when I run the command aider myapp.py. I'm on an M1 MacBook using zsh.
I did install it with pip install git+https://github.com/paul-gauthier/aider.git
I then installed ctags with brew install --HEAD universal-ctags/universal-ctags/universal-ctags.
Created a folder and cd'ed into it and tried to run the command.

Am I missing a setup step?
Many thanks.

Error running aider..

Hello, after setting up and attempting to run aider, I got this error:

admin@AFVFHGQBRQ6L7 aiderEngineer % aider justtest.py
zsh: command not found: aider
admin@AFVFHGQBRQ6L7 aiderEngineer % zsh --version

zsh 5.9 (arm-apple-darwin21.3.0)

via Codespace (testing)
👋 Welcome to Codespaces! You are on our default image. 
   - It includes runtimes and tools for Python, Node.js, Docker, and more. See the full list here: https://aka.ms/ghcs-default-image
   - Want to use a custom image instead? Learn more here: https://aka.ms/configure-codespace

🔍 To explore VS Code to its fullest, search using the Command Palette (Cmd/Ctrl + Shift + P or F1).

📝 Edit away, run your app as usual, and we'll automatically make it available for you to access.

@PublicTrades ➜ /workspaces/aiderEngineer (main) $ aider test.py
bash: aider: command not found
@PublicTrades ➜ /workspaces/aiderEngineer (main) $ 

How can I resolve this?

IndexError: string index out of range (Java)

Hi Paul, amazing tool you've created here!

Trying it out with a java project I encountered the following error:

string index out of range

Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/aider/coder.py", line 803, in apply_updates
    edited = method(content)
             ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aider/coder.py", line 616, in update_files_gpt4
    if utils.do_replace(full_path, original, updated, self.dry_run):
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aider/utils.py", line 216, in do_replace
    new_content = replace_most_similar_chunk(content, before_text, after_text)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aider/utils.py", line 99, in replace_most_similar_chunk
    res = replace_part_with_missing_leading_whitespace(whole, part, replace)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aider/utils.py", line 70, in replace_part_with_missing_leading_whitespace
    if all(pline[0].isspace() for pline in part_lines):
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/aider/utils.py", line 70, in <genexpr>
    if all(pline[0].isspace() for pline in part_lines):
           ~~~~~^^^
IndexError: string index out of range

Seems at least one of the part_lines was empty here, crashing the program.

updating utils.py#70 to:
if all((len(pline) > 0 and pline[0].isspace()) for pline in part_lines):
fixed the problem

Let me know if you like a PR.

Enable repo map with gpt 3.5 turbo-16k?

From what I've read, the repo map is disabled outside gpt-4 because 3.5 can't handle the token load. But now that gpt-3.5-turbo-16k has been released, has anyone tried it with 16k?

I'm going to dive into the aider codebase and try to enable it for 3.5-16k, but thought I would ask for insight first.

ctags failure on macOS

Using git repo: .git
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ctags: illegal option -- -
usage: ctags [-BFadtuwvx] [-f tagsfile] file ...

Connection reset by peer

Looks like connection to OpenAI gets closed at some point. Here's the trace

> can you please refactor the `eastern` variable into a global                                                                                           

Traceback (most recent call last):
  File "/home/pi/.local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 790, in urlopen
    response = self._make_request(
  File "/home/pi/.local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 536, in _make_request
    response = conn.getresponse()
  File "/home/pi/.local/lib/python3.9/site-packages/urllib3/connection.py", line 454, in getresponse
    httplib_response = super().getresponse()
  File "/usr/lib/python3.9/http/client.py", line 1347, in getresponse
    response.begin()
  File "/usr/lib/python3.9/http/client.py", line 307, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.9/http/client.py", line 268, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/usr/lib/python3.9/socket.py", line 704, in readinto
    return self._sock.recv_into(b)
  File "/usr/lib/python3.9/ssl.py", line 1241, in recv_into
    return self.read(nbytes, buffer)
  File "/usr/lib/python3.9/ssl.py", line 1099, in read
    return self._sslobj.read(len, buffer)
ConnectionResetError: [Errno 104] Connection reset by peer

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/pi/.local/lib/python3.9/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/home/pi/.local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 844, in urlopen
    retries = retries.increment(
  File "/home/pi/.local/lib/python3.9/site-packages/urllib3/util/retry.py", line 470, in increment
    raise reraise(type(error), error, _stacktrace)
  File "/home/pi/.local/lib/python3.9/site-packages/urllib3/util/util.py", line 38, in reraise
    raise value.with_traceback(tb)
  File "/home/pi/.local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 790, in urlopen
    response = self._make_request(
  File "/home/pi/.local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 536, in _make_request
    response = conn.getresponse()
  File "/home/pi/.local/lib/python3.9/site-packages/urllib3/connection.py", line 454, in getresponse
    httplib_response = super().getresponse()
  File "/usr/lib/python3.9/http/client.py", line 1347, in getresponse
    response.begin()
  File "/usr/lib/python3.9/http/client.py", line 307, in begin
    version, status, reason = self._read_status()
  File "/usr/lib/python3.9/http/client.py", line 268, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/usr/lib/python3.9/socket.py", line 704, in readinto
    return self._sock.recv_into(b)
  File "/usr/lib/python3.9/ssl.py", line 1241, in recv_into
    return self.read(nbytes, buffer)
  File "/usr/lib/python3.9/ssl.py", line 1099, in read
    return self._sslobj.read(len, buffer)
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/pi/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 520, in request_raw
    result = _thread_context.session.request(
  File "/home/pi/.local/lib/python3.9/site-packages/requests/sessions.py", line 587, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/pi/.local/lib/python3.9/site-packages/requests/sessions.py", line 701, in send
    r = adapter.send(request, **kwargs)
  File "/home/pi/.local/lib/python3.9/site-packages/requests/adapters.py", line 501, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/pi/.local/bin/aider", line 8, in <module>
    sys.exit(main())
  File "/home/pi/.local/lib/python3.9/site-packages/aider/main.py", line 240, in main
    coder.run()
  File "/home/pi/.local/lib/python3.9/site-packages/aider/coder.py", line 226, in run
    new_user_message = self.run_loop()
  File "/home/pi/.local/lib/python3.9/site-packages/aider/coder.py", line 286, in run_loop
    return self.send_new_user_message(inp)
  File "/home/pi/.local/lib/python3.9/site-packages/aider/coder.py", line 304, in send_new_user_message
    content, interrupted = self.send(messages)
  File "/home/pi/.local/lib/python3.9/site-packages/aider/coder.py", line 402, in send
    completion = openai.ChatCompletion.create(
  File "/home/pi/.local/lib/python3.9/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "/home/pi/.local/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "/home/pi/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 220, in request
    result = self.request_raw(
  File "/home/pi/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 533, in request_raw
    raise error.APIConnectionError(
openai.error.APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))

Change color of the assistant output

Maybe I'm missing something obvious, but how do you change the color of the assistant output? Currently it is a dark blue that is very difficult to see on my terminal:

[screenshot omitted]

I can change the user input and the tool output but I don't see an option to change the assistant output.

Respond to PRs

Can aider respond to PRs? Almost like a repo watcher, different developers can submit their code and aider tests it and if it passes a human can merge. If not it can add comments and code snippets to fix the errors.

When I drop all files and add one, I get a traceback and aider exits.

Traceback (most recent call last):
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\Scripts\aider.exe\__main__.py", line 7, in <module>
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\main.py", line 293, in main
    coder.run()
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\coders\base_coder.py", line 328, in run
    new_user_message = self.run_loop()
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\coders\base_coder.py", line 392, in run_loop
    return self.commands.run(inp)
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\commands.py", line 58, in run
    return self.do_run(matching_commands[0][1:], rest_inp)
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\commands.py", line 43, in do_run
    return cmd_method(args)
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\commands.py", line 254, in cmd_add
    self.coder.abs_fnames.add(abs_file_path)
AttributeError: 'list' object has no attribute 'add'

Feature Request: Easier CTAGS execution on specific file extension

Hey there,
So I want to become a product manager, and want to understand codebases better with the help of aider.

For this I would like to do things in a less manual, more automated way.
Here is what I thought about - and asked GPT-4 based on your README:

I want to run aider with the ctags on all .py files in a different repo, how do I do this?

And the answer was:

To run Aider with ctags on all .py files in a different repo, follow these steps:

Open your terminal and navigate to the root directory of the target repository.
cd /path/to/your/repo
Install the universal ctags if you haven’t already:
# For Ubuntu/Debian-based systems
sudo apt-get install universal-ctags

# For macOS (using Homebrew)
brew install --HEAD universal-ctags/universal-ctags/universal-ctags
Generate the tags file for your repo using ctags:
ctags -R *.py
This command will recursively search for all .py files and generate a tags file in the repository root.

Run Aider with the --ctags flag and specify all .py files you want to work on:
aider --ctags file1.py file2.py ...
Replace file1.py, file2.py, etc., with the paths to the source code files you want to work on.

Now, Aider will launch using ctags for better understanding and modification of your codebase while working on specified .py files. If you don’t want to specify each individual .py file, you can use wildcards or other tools like [find](https://man7.org/linux/man-pages/man1/find.1.html) to pass all desired .py files as arguments.

And then I asked further - and received this:

You can accomplish this using the find command along with a wildcard to specify all .py files. Here’s an example of how you can do this:

Open your terminal or command prompt.
Navigate to the directory containing the .py files you want to work with.
Use the following command to pass all .py files as arguments to a script called your_script.py:
python your_script.py $(find . -name "*.py")
In this example, find . -name "*.py" will search for all .py files in the current directory and its subdirectories. The $( ... ) syntax will pass the output of the find command (i.e., the list of found .py files) as arguments

This works out quite nicely, BUT it includes all files, including the 'hidden' files within .venv for example.
So to make it work without these folders, GPT-4's answer was the following:

python3 .venv/bin/aider --ctags $(find . -name "*.py" ! -path "./.venv/*") -c .venv/bin/.aider.conf.yml

How about putting this inside the README?


So after executing the command above:

RESULT: I get the following:

Added generate_repo_info.py to the chat
Added main.py to the chat
Using git repo: .git
Use /help to see in-chat commands.
generate_repo_info.py main.py
>

There are three .py files in the repo I am trying to analyse.
So this excludes "fetch_starred_repos.py" (don't know why).

Any idea why this happens?

Steps I took:

  1. git clone CrawlGitHub
  2. cd into directory
  3. python3 -m venv .venv
  4. python3 source .venv/bin/activate
  5. pip install git+https://github.com/paul-gauthier/aider.git
  6. create new file .aider.conf.yml with openai-api-key:{mykey}
  7. sudo apt-get install universal-ctags
  8. ctags -R *.py
  9. python3 .venv/bin/aider --ctags $(find . -name "*.py" ! -path "./.venv/*") -c .venv/bin/.aider.conf.yml
  10. see result: "fetch_starred_repos.py" is missing

File decode fails with special characters

Characters like 🎙️ and 🔈 cause the file loading to fail. I suspect this is because it expects a pure UTF-8 file.

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\USERNAME\miniconda3\envs\aider\Scripts\aider.exe\__main__.py", line 7, in <module>
  File "C:\Users\USERNAME\miniconda3\envs\aider\Lib\site-packages\aider\main.py", line 293, in main
    coder.run()
  File "C:\Users\USERNAME\miniconda3\envs\aider\Lib\site-packages\aider\coders\base_coder.py", line 328, in run
    new_user_message = self.run_loop()
                       ^^^^^^^^^^^^^^^
  File "C:\Users\USERNAME\miniconda3\envs\aider\Lib\site-packages\aider\coders\base_coder.py", line 369, in run_loop
    inp = self.io.get_input(
          ^^^^^^^^^^^^^^^^^^
  File "C:\Users\USERNAME\miniconda3\envs\aider\Lib\site-packages\aider\io.py", line 152, in get_input
    completer_instance = AutoCompleter(root, rel_fnames, addable_rel_fnames, commands)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\USERNAME\miniconda3\envs\aider\Lib\site-packages\aider\io.py", line 43, in __init__
    content = f.read()
              ^^^^^^^^
  File "C:\Users\USERNAME\miniconda3\envs\aider\Lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 4455: character maps to <undefined>
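The traceback shows the file being opened with Windows' default cp1252 codec; a minimal sketch of reading as UTF-8 with replacement of undecodable bytes (a hypothetical helper, not aider's actual code):

# Hypothetical helper: force UTF-8 and replace undecodable bytes instead of
# relying on the platform default codec (cp1252 on Windows).
def read_file(fname):
    with open(fname, "r", encoding="utf-8", errors="replace") as f:
        return f.read()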

try aider with anthropic claude-instant-100k

I would be very curious to see how aider performs with claude-instant, especially the 100k context variant. I think the latency and cost might be really different. Let me know if you're interested and don't have access.

Azure OpenAI model does not work properly

It seems that setting up the Azure OpenAI model does not work. I passed the arguments --openai-api-base and --openai-api-key to the command line when I ran aider. The output is as follows:

Model: gpt-4
Git repo: none
Repo-map: disabled
Use /help to see in-chat commands.

It appears that my model has been recognized, but when I issued prompts, nothing happened:

────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
> create me a snake game


────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
> create me a fabonacci function.


────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
> /exit

Using OpenAI models works well, however.
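For context, with the 0.x openai client an Azure endpoint usually needs more than just a base URL; a hedged sketch of the typical settings (the resource, deployment name, and API version below are placeholders):

# Sketch of the extra settings Azure OpenAI usually needs with the 0.x client;
# the resource and deployment names here are placeholders.
import openai

openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "..."

resp = openai.ChatCompletion.create(
    engine="my-gpt4-deployment",  # Azure uses the deployment name, not the model name
    messages=[{"role": "user", "content": "hello"}],
)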

[idea] - Use CSV formatted output for repo map

Hi Paul,

Love the work you have done to date!
Without digging through your actual code: you mention ctags needing/using JSON output, and JSON obviously repeats its header names with every object. Given that GPT-4 understands CSV-formatted data, is it worth trying to use CSV data in the repo map in an attempt to save tokens?

E.g.

echo 'name,line,filename,pattern,kind_long,type,scope'; ctags -x --fields=+Ks   --_xformat="%N,%n,%F,%P,%K,%t,%s"   filename-here.py

name,line,filename,pattern,kind_long,type,scope
double_uri_encode,142,harbour_api.py,/^    def double_uri_encode(self, input_string: str) -> str:$/,member,typename:str,harbourAPI
logger,11,harbour_api.py,/^logger = config.configure_logging()$/,variable,-,

Using CSV output in a crude example reduced character output by 42%.
Additional tokens could be saved by dropping %F and telling GPT the filename before the table...?
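A rough sketch of the idea, flattening universal-ctags JSON output into a compact CSV (an illustration only, not how aider's repo map is actually built):

# Rough sketch: run ctags with JSON output and flatten it into a small CSV
# so the field names are not repeated for every symbol.
import csv, io, json, subprocess

def repo_map_csv(filename):
    out = subprocess.run(
        ["ctags", "--output-format=json", "--fields=+ns", "-f", "-", filename],
        capture_output=True, text=True, check=True,
    ).stdout
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["name", "line", "kind", "scope"])
    for raw in out.splitlines():
        tag = json.loads(raw)
        if tag.get("_type") != "tag":  # skip ctags pseudo-tag entries
            continue
        writer.writerow([tag["name"], tag.get("line", ""), tag.get("kind", ""), tag.get("scope", "")])
    return buf.getvalue()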

Anyway just an idea. Feel free to close.

Thanks, Cam

FYI ctags --list-fields provides the letters for xformat.

UnicodeDecodeError when trying to read non-text files

Describe the bug
When running the aider tool on a directory containing non-text files such as images (png, jpg, etc.), the program crashes with a UnicodeDecodeError.

To Reproduce
Steps to reproduce the behavior:

  1. use /add on a directory that contains non-text files
  2. See error

Expected behavior
The aider tool should be able to skip non-text files or handle them appropriately without crashing.

Error message

Traceback (most recent call last):
  File "/home/palmerd/.local/bin/aider", line 8, in <module>
    sys.exit(main())
  File "/home/palmerd/.local/lib/python3.11/site-packages/aider/main.py", line 299, in main
    coder.run()
  File "/home/palmerd/.local/lib/python3.11/site-packages/aider/coders/base_coder.py", line 366, in run
    new_user_message = self.send_new_user_message(new_user_message)
  File "/home/palmerd/.local/lib/python3.11/site-packages/aider/coders/base_coder.py", line 439, in send_new_user_message
    self.choose_fence()
  File "/home/palmerd/.local/lib/python3.11/site-packages/aider/coders/base_coder.py", line 290, in choose_fence
    all_content += Path(fname).read_text() + "\n"
  File "/usr/lib64/python3.11/pathlib.py", line 1059, in read_text
    return f.read()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte

Similar to issue #28, but I believe this is different enough in behavior/use case to create a separate issue.

What's happening:

Aider seems to be unable to handle files that are not UTF-8 encoded, and when /add is called on a directory that contains such files, the program errors and exits. I believe that aider should handle (at least by default) files that are not specifically UTF-8 text files by silently rejecting them or just ignoring them.
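A minimal sketch of the requested behavior, skipping anything that fails to decode instead of crashing (hypothetical, not aider's code):

# Hypothetical sketch: skip files that are not valid UTF-8 text instead of
# crashing on UnicodeDecodeError.
from pathlib import Path

def read_text_or_skip(fname):
    try:
        return Path(fname).read_text(encoding="utf-8")
    except UnicodeDecodeError:
        print(f"Skipping non-text file: {fname}")
        return None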

OS: fedora 37
python: 3.11.3
aider: installed with pip install git+https://github.com/paul-gauthier/aider.git so I'm assuming it's latest (update: 0.7.2)

string index out of range in `replace_part_with_missing_leading_whitespace`

hi Paul,

Noticed another crash when applying another Java diff.

string index out of range

Traceback (most recent call last):
  File "/Users/wouter/Projects/try/aider/aider/coder.py", line 807, in apply_updates
    edited = method(content)
             ^^^^^^^^^^^^^^^
  File "/Users/wouter/Projects/try/aider/aider/coder.py", line 620, in update_files_gpt4
    if utils.do_replace(full_path, original, updated, self.dry_run):
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wouter/Projects/try/aider/aider/utils.py", line 216, in do_replace
    new_content = replace_most_similar_chunk(content, before_text, after_text)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wouter/Projects/try/aider/aider/utils.py", line 99, in replace_most_similar_chunk
    res = replace_part_with_missing_leading_whitespace(whole, part, replace)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/wouter/Projects/try/aider/aider/utils.py", line 76, in replace_part_with_missing_leading_whitespace
    if c == part_lines[0][0]:
            ~~~~~~~~~~~~~^^^

This was the diff it generated; it seems it has a problem with the empty line at the beginning.

 src/test/java/de/wwestenbrink/demo/scheduler/SchedulerNotificationTest.java
 <<<<<<< ORIGINAL

   @Test
   public void shouldNotHaveAccessToOthersNotification() {
     String notificationId = scheduleNotification(60, "[email protected]");

     List<NotificationDTO> response = listNotifications(TEST_SUBJECT);
     assertThat(response).isEmpty();

     getNotification(notificationId, TEST_SUBJECT, HttpStatus.NOT_FOUND);

     cancelNotification(notificationId, TEST_SUBJECT, HttpStatus.NOT_FOUND);
   }
 =======
   @Test
   public void shouldNotHaveAccessToOthersNotification() {
     String notificationId = scheduleNotification(60, "[email protected]");

     List<NotificationDTO> response = listNotifications(TEST_SUBJECT);
     assertThat(response).isEmpty();

     getNotification(notificationId, TEST_SUBJECT, HttpStatus.NOT_FOUND);

     cancelNotification(notificationId, TEST_SUBJECT, HttpStatus.NOT_FOUND);
   }

   @Test
   public void shouldNotFindNonExistingNotification() {
     String nonExistingNotificationId = "nonExistingId";
     getNotification(nonExistingNotificationId, TEST_SUBJECT, HttpStatus.NOT_FOUND);
   }
 >>>>>>> UPDATED

Not sure about the fix this time.
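For what it's worth, a guard analogous to the earlier fix at utils.py line 70 would presumably also need to cover an empty list and an empty first line before indexing; a hypothetical sketch, not a tested fix:

def leading_char_matches(c, part_lines):
    # Guard against an empty list or an empty first line before indexing,
    # mirroring the earlier fix at utils.py line 70.
    return bool(part_lines) and bool(part_lines[0]) and c == part_lines[0][0]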

I've noticed aider is struggling a bit with diffs in general; have you considered using the Unified Format (unidiff)?

Feature Request: support for regex/glob expressions in the `/add` function

Feature Description
Currently, the /add function accepts directory paths to add an entire directory, or specific file paths, to the program's context. A useful enhancement would be to support the use of regex or glob expressions with the /add function. This would give users more flexibility and control to specify exactly which files to include.

Proposed Solution
Allow the /add function to interpret glob or regex patterns. For example, a user could use /add src/*.py to include only Python files from the src directory or /add src/[A-C]* to add only files that start with A, B, or C.

Expected Behavior
When a user runs /add with a regex or glob pattern, the program should add to the context only the files that match the pattern. The program should handle non-matching files or non-existent paths gracefully, preferably by ignoring them and printing a warning message.

Alternatives Considered
An alternative could be providing an additional function, say /addPattern, that accepts regex/glob patterns. This approach would leave the existing /add functionality unchanged for users who don't need the enhanced functionality.

Additional Context
Adding this feature would boost the flexibility of the /add function, making the program more convenient to use for a variety of tasks. Users could, for instance, exclude certain types of files or select a subset of files based on their naming conventions.
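A small sketch of what glob expansion for /add could look like (hypothetical helper name, not aider's implementation):

# Hypothetical sketch: expand glob patterns for /add and warn about patterns
# that match nothing.
import glob

def expand_add_args(patterns):
    files = []
    for pattern in patterns:
        matches = glob.glob(pattern, recursive=True)
        if not matches:
            print(f"Warning: no files match {pattern}")
        files.extend(matches)
    return files

# e.g. expand_add_args(["src/*.py", "src/[A-C]*"])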

When I moved a file to another folder, aider crashed

My theory is that when I moved the file into another folder and deleted the folder, aider didn't update the repo map, and it crashes because the path no longer exists.

> /commit Now, I actually moved the Endpoints into the OpenAI namespace.

Commit 6278bce Now, I actually moved the Endpoints into the OpenAI namespace.
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Traceback (most recent call last):
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\Scripts\aider.exe\__main__.py", line 7, in <module>
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\main.py", line 293, in main
    coder.run()
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\coders\base_coder.py", line 328, in run
    new_user_message = self.run_loop()
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\coders\base_coder.py", line 369, in run_loop
    inp = self.io.get_input(
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\io.py", line 152, in get_input
    completer_instance = AutoCompleter(root, rel_fnames, addable_rel_fnames, commands)
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\io.py", line 42, in __init__
    with open(fname, "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Coder\\Documents\\Visual Studio Projects\\GigaBrain\\GigaBrain.API\\OpenAI\\Endpoints\\Endpoints.cs'

What happened:
I had a file named Endpoint.cs inside Endpoints; I wished to move it into the OpenAI root directory of my OpenAI API wrapper, and deleted the Endpoints folder. Aider got confused because the path doesn't exist anymore, and it crashes.

Theory on how to fix:
Maybe regenerate the universal-ctags repo map on every request to aider in the TUI, and reply after that.

Restart after fail

After Aider crashed I restarted Aider in the same terminal, but when I tried to carry on it failed to realise that I was working in C# and it tried to work in Python.

Is it possible to either scan the current directory to see what language the project uses, or have it pick up from where it left off?

[Info] How to install Aider on Windows

Since I had trouble installing Aider on Windows, I finally managed it with these steps:

  1. > pip install aider-chat --user
  2. > $env:OPENAI_API_KEY='sk-...'
  3. This created aider.exe in the c:\users\%username%\appdata\roaming\python\python311\scripts\ folder. But this folder is not added to PATH by default, so you need to add it to your PATH manually. Be aware the folder will be different for different versions of Python (see the snippet below for a quick way to find it).
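For reference, a quick way to find that scripts folder from Python itself (a small helper, not part of aider):

# Print the folder where pip --user installs scripts on Windows, i.e. the
# folder that must be on PATH for the aider command to resolve.
import sysconfig
print(sysconfig.get_path("scripts", scheme="nt_user"))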

Is this normal: "PackagesNotFoundError: The following packages are not available from current channels"

I used conda to do the installation, but got this message (below) in the terminal. I was wondering if this is normal before moving past this step:

Solving environment: failed with initial frozen solve. Retrying with flexible solve.

PackagesNotFoundError: The following packages are not available from current channels:

  - backoff==2.2.1
  - gitdb==4.0.10
  - multidict==6.0.4
  - urllib3==2.0.2
  - openai==0.27.6
  - networkx==3.1
  - tiktoken==0.4.0
  - charset-normalizer==3.1.0
  - mdurl==0.1.2
  - prompt-toolkit==3.0.38
  - aiosignal==1.3.1
  - requests==2.30.0
  - yarl==1.9.2
  - aiohttp==3.8.4
  - attrs==23.1.0
  - smmap==5.0.0
  - diskcache==5.6.1
  - wcwidth==0.2.6
  - gitpython==3.1.31

Current channels:

  - https://repo.anaconda.com/pkgs/main/osx-arm64
  - https://repo.anaconda.com/pkgs/main/noarch
  - https://repo.anaconda.com/pkgs/r/osx-arm64
  - https://repo.anaconda.com/pkgs/r/noarch

To search for alternate channels that may provide the conda package you're
looking for, navigate to

    https://anaconda.org

and use the search bar at the top of the page.

Please advise.

Failed to apply edit

Amazing project! I love where this is going, this is the future of development!

I'm trying the 'pong' example, and it seems to get confused a lot, but it makes slow progress. At some point, when I ask it to fix the scores not getting incremented, it systematically fails to apply the edits to the file.

main.py                                                                                                                                                                                                   
 <<<<<<< ORIGINAL                                                                                                                                                                                          
 # Main game loop                                                                                                                                                                                          
 running = True                                                                                                                                                                                            
 while running:                                                                                                                                                                                            
     # Set up the scores                                                                                                                                                                                   
     left_score = 0                                                                                                                                                                                        
     right_score = 0                                                                                                                                                                                       
 =======                                                                                                                                                                                                   
 # Set up the scores                                                                                                                                                                                       
 left_score = 0                                                                                                                                                                                            
 right_score = 0                                                                                                                                                                                           
                                                                                                                                                                                                           
 # Main game loop                                                                                                                                                                                          
 running = True                                                                                                                                                                                            
 while running:                                                                                                                                                                                            
 >>>>>>> UPDATED    

I tried clearing the chat and starting from scratch, but it always seems to get stuck applying that simple change. It has no issue doing other, more complex changes in the same file; it only fails for that particular change.

I will look into it and submit a PR if I find the issue.

Getting stuck

When running Aider I just get stuck here in the terminal. I have installed everything according to the instructions. Am I doing something wrong perhaps?

(venv) jw project % aider
Model: gpt-4
Git repo: .git
Repo-map: universal-ctags using 1024 tokens

Failed to apply edit to a Python script when it writes one markdown code block, maybe with two diffs inside of it?

What happened:

The script is slow because it's doing a lot of work. It's reading a large CSV file, transforming the data, creating a DataLoader, initializing a neural network and an optimizer, and then training the network over 100 epochs. Each of   
these steps can take a significant amount of time, especially if the dataset is large.

The script uses the tqdm library to display progress bars, but these only update once per epoch during training, and once per row during data transformation. If the dataset is large and the transformations are complex, it can take a   
long time to see any progress.

To make the script more responsive, we can add more frequent progress updates. For example, we can update the progress bar after each mini-batch during training, and after each row during data transformation.

Here are the changes I propose:


 train.py
 <<<<<<< ORIGINAL
 for i in tqdm(range(len(self.data['exp'])), desc="Transforming data", leave=True):
     self.data.at[i, 'exp'] = self.le.fit_transform([self.data.at[i, 'exp']])[0]
 self.data['exp'] = self.ohe.fit_transform(self.data['exp'].values.reshape(-1, 1)).toarray()
 =======
 for i in range(len(self.data['exp'])):
     self.data.at[i, 'exp'] = self.le.fit_transform([self.data.at[i, 'exp']])[0]
     if i % 1000 == 0:  # update progress every 1000 rows
         print(f"Transformed {i} rows")
 self.data['exp'] = self.ohe.fit_transform(self.data['exp'].values.reshape(-1, 1)).toarray()
 >>>>>>> UPDATED

 train.py
 <<<<<<< ORIGINAL
 for epoch in tqdm(range(100), desc="Training", leave=True):  # loop over the dataset multiple times
     running_loss = 0.0
     for i, data in enumerate(tqdm(train_loader, desc="Batch", leave=False), 0):
         # get the inputs; data is a list of [inputs, labels]
         inputs, labels = data
         # zero the parameter gradients
         optimizer.zero_grad()
         # forward + backward + optimize
         outputs = model(inputs)
         loss = nn.CrossEntropyLoss()(outputs, labels)
         loss.backward()
         optimizer.step()
         # print statistics
         running_loss += loss.item()
         if i % 2000 == 1999:    # print every 2000 mini-batches
             print('[%d, %5d] loss: %.3f' %
                 (epoch + 1, i + 1, running_loss / 2000))
             running_loss = 0.0
 =======
 for epoch in range(100):  # loop over the dataset multiple times
     running_loss = 0.0
     for i, data in enumerate(train_loader, 0):
         # get the inputs; data is a list of [inputs, labels]
         inputs, labels = data
         # zero the parameter gradients
         optimizer.zero_grad()
         # forward + backward + optimize
         outputs = model(inputs)
         loss = nn.CrossEntropyLoss()(outputs, labels)
         loss.backward()
         optimizer.step()
         # print statistics
         running_loss += loss.item()
         if i % 200 == 199:    # print every 200 mini-batches
             print('[%d, %5d] loss: %.3f' %
                 (epoch + 1, i + 1, running_loss / 200))
             running_loss = 0.0
     print(f"Finished epoch {epoch}")
 >>>>>>> UPDATED


These changes will make the script print progress updates more frequently, which should make it feel more responsive. However, they won't make the script run any faster. If you want to speed up the script, you'll need to optimize the  
data transformations or the training process.

Failed to apply edit to train.py
Applied edit to train.py

GPT claiming it can't read files that Aider or I add to chat

This is my first time using this, and while it appeared to do what it was supposed to, Aider kept prompting me to add the same file to the chat over and over. When it did so, GPT claimed that it could not see the code in order to make changes. Is this one of those hallucination looping issues that face every GPT app? I'll add the history markdown file for reference. Perhaps I didn't prompt it correctly?
.aider.chat.history.md

When I was prompted to give the Endpoints.cs file for GPT-4 to analyze, aider crashed.

Maybe when @paul-gauthier fixed #33, it created another bug that appears when the user is prompted to give the AI the context of a specific file. I am using the latest git+ pip install.
Here goes nothing:

GigaBrain.API/OpenAI/Endpoints/Endpoints.cs
Add these files to the chat? y
Traceback (most recent call last):
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\Scripts\aider.exe\__main__.py", line 7, in <module>
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\main.py", line 293, in main
    coder.run()
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\coders\base_coder.py", line 328, in run
    new_user_message = self.run_loop()
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\coders\base_coder.py", line 394, in run_loop
    self.check_for_file_mentions(inp)
  File "C:\Users\Coder\AppData\Local\Programs\Python\Python310\lib\site-packages\aider\coders\base_coder.py", line 538, in check_for_file_mentions
    self.abs_fnames.add(os.path.abspath(os.path.join(self.root, rel_fname)))
AttributeError: 'list' object has no attribute 'add'

No need for the diff itself in commit msgs

Very cool project! Inspiring.
Going through commits on this repo I find duplication as some commit msgs contain the conversation with GPT including the diff, while the commit object itself is the diff.
WDYT?

Feature Request: Support git worktrees

I think git worktrees offer a great workflow for a tool like aider. Worktrees reduce the friction that naturally stems from having an external tool commit automatically, resulting in a smoother experience.

In case you're not familiar: man git-worktree.

A sample workflow would look something like this:

mkdir mydir && cd mydir

# create a dummy repo
mkdir myrepo && cd myrepo && touch README.md && git init && git add -A && git commit -m "Initial commit"

# aider works here
aider -v

# create a worktree to fix some bug. CD into it.
mkdir ../worktrees
git worktree add ../worktrees/bugfix
cd ../worktrees/bugfix

# use aider to help fix the bug (This will result in the error shown below)
aider -v
Traceback (most recent call last):
  File "/home/nbe/.local/bin/aider", line 8, in <module>
    sys.exit(main())
  File "/home/nbe/projects/playground/python-exp/venv/lib/python3.10/site-packages/aider/main.py", line 216, in main
    coder = Coder(
  File "/home/nbe/projects/playground/python-exp/venv/lib/python3.10/site-packages/aider/coder.py", line 85, in __init__
    self.set_repo(fnames)
  File "/home/nbe/projects/playground/python-exp/venv/lib/python3.10/site-packages/aider/coder.py", line 149, in set_repo
    repo = git.Repo(repo_paths.pop(), odbt=git.GitDB)
  File "/home/nbe/projects/playground/python-exp/venv/lib/python3.10/site-packages/git/repo/base.py", line 265, in __init__
    raise InvalidGitRepositoryError(epath)
git.exc.InvalidGitRepositoryError: /home/nbe/mydir/myrepo/.git/worktrees/bugfix

Large uncommitted modifications make it impossible to use `aider` on a small file tracked in git

Reproduction steps:

  • Pick any git repo and run aider smallfile, where smallfile is already committed and there are no uncommitted changes - you should successfully end up at the prompt smallfile>
  • Shut down aider
  • Modify some file other than smallfile and add or remove a bunch of lines (enough changes that there will be too many tokens in the prompt)
  • Run aider smallfile again - this time, you should see an error that looks like the following:
<large diff>
<big error message>
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 9132 tokens. Please reduce the length of the messages.

Current workaround:

cp smallfile untrackedsmallfile
aider untrackedsmallfile

^and since it is not tracked, the diff is irrelevant. I think it would be nice if this were still possible when the file is tracked,
e.g. if the diff is too large, skip committing files that were not explicitly mentioned in the aider invocation.

Feel free to close this issue if you do not feel like this is a priority. The current workaround is fine for me at the moment. And it is simple enough to just make sure not to have uncommitted changes before running aider
