mlflexer / aiui.nvim
A unified set of modules to interact with different LLM providers.
License: MIT License
Depends on: #7
Depends on #2
The # and @ symbols should be used to refer to code, so the user can use them as shorthand when writing a prompt.
The current best idea for how to do this is to use the LSP to query for function/method names and then replace them with the actual code when prompting the LLM, as in the sketch below.
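A minimal sketch of the symbol query, assuming the rest of the completion/expansion plumbing exists:

```lua
-- Query the LSP for document symbols; their names could back the #/@
-- shorthand and their ranges locate the code to substitute into the prompt.
local params = { textDocument = vim.lsp.util.make_text_document_params() }
vim.lsp.buf_request(0, "textDocument/documentSymbol", params, function(err, result)
  if err or not result then
    return
  end
  for _, symbol in ipairs(result) do
    -- For DocumentSymbol results, `symbol.name` is the identifier and
    -- `symbol.range` spans the code it refers to.
    print(symbol.name)
  end
end)
```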
When changing the chat model or closing the chat buffer, the instance and text should be saved with the save_current_chat function.
Some code/files are old and unused; they should be removed.
Calls that decode JSON should use vim.json.decode(data, { luanil = { object = true, array = true } }), as this ensures that null values become nil and not vim.NIL.
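A small illustration of the difference:

```lua
-- Without luanil, JSON null decodes to the vim.NIL sentinel, which is
-- truthy in Lua conditionals:
local raw = vim.json.decode('{"content": null}')
print(raw.content == vim.NIL) -- true

-- With luanil, null becomes a real Lua nil:
local opts = { luanil = { object = true, array = true } }
local clean = vim.json.decode('{"content": null}', opts)
print(clean.content == nil) -- true
```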
There should be an easy way for the user to reference a file in the current working directory.
It might also be useful to reference a buffer, as that lets the user pull in input from something like a terminal buffer.
Move the file previewer to a local function to improve reusability across pickers.
Improve the functionality of the previewer to align with the default Telescope file previewer.
Add a dummy LLM module which mocks the real LLM interface.
This enables testing that is free in terms of both time and money; a sketch follows.
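A minimal sketch of such a module; the method names are illustrative, not the plugin's actual client interface:

```lua
-- DummyClient echoes the prompt back without spawning a process, so
-- tests run instantly and cost nothing.
local DummyClient = {}

function DummyClient:request(messages, on_chunk, on_done)
  local last = messages[#messages]
  vim.schedule(function()
    on_chunk("echo: " .. (last and last.content or ""))
    on_done()
  end)
end

return DummyClient
```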
Add ability to list all models
Add ability to keep track of loaded chats with their context, so a user can continue a chat.
Add ability to list all loaded chats.
Currently all chats are written, even empty ones, when the save_current_chat() function is called.
Should align more with Telescope's built-ins (see here) to make it possible for users without fd to use it.
If you modify the output buffer of the chat window and then try to close Neovim, you are asked whether you wish to save the modified output buffer. This is undesired behaviour: either the buffer should be automatically saved to a file with each write, so the user is never prompted, or it should silently close without confirmation, as in the sketch below.
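A minimal sketch of the silent-close variant, assuming `buf` is the output buffer handle:

```lua
-- Mark the output buffer as a scratch buffer so Neovim never prompts
-- to save it on exit.
vim.api.nvim_set_option_value("buftype", "nofile", { buf = buf })
vim.api.nvim_set_option_value("bufhidden", "wipe", { buf = buf })
vim.api.nvim_set_option_value("swapfile", false, { buf = buf })
```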
Improve the readability of the testing client's output. It might be useful to make it echo the input and/or the full context, but without spawning a process, just plain Lua.
Depends on #8
A user should be able to switch which model they want to start a new chat with by using fuzzy search.
Some users might want to use buffers instead of floating windows when chatting with LLMs.
There might be a way to abstract the chat logic from the UI by moving some of the buffer-interaction logic from the floating window chat into a separate file whose logic can be shared.
There should be a written markdown guide which helps users add their own clients, so that it is easy to extend with your own model.
It might be useful to add it to the wiki.
Depends on #2
Depends on #5
A user should be able to switch which chat they want to continue by using fuzzy search.
A user might want to change the layout, border, position, etc. They might also want to change what text is shown before each chat message; a hypothetical sketch follows.
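A hypothetical sketch of what such options could look like; the option names here are illustrative, not the plugin's actual API:

```lua
require("aiui").setup({
  window = {
    border = "rounded",
    width = 0.6, -- fraction of the editor width
  },
  prompt_prefix = "You: ",   -- text shown before each user message
  response_prefix = "LLM: ", -- text shown before each model message
})
```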
Depends on #2
Depends on #5
At a minimum, the user should be notified about failing requests.
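A minimal sketch, assuming `err` holds the error string returned by the client:

```lua
-- Surface the failure instead of failing silently.
vim.notify("aiui: request failed: " .. err, vim.log.levels.ERROR)
```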
Make decrypt_file_with_gpg part of ModelCollection, or move it somewhere else where it can be shared across multiple clients.
It currently lives in Chat.lua.
A user should be able to hit a keybind when on a line with an LSP error, and the relevant information about that error should then be passed to an LLM which tries to fix it, roughly as sketched below.
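A minimal sketch; `prompt_llm` is a hypothetical helper, not part of the current codebase:

```lua
vim.keymap.set("n", "<leader>af", function()
  local lnum = vim.api.nvim_win_get_cursor(0)[1] - 1 -- diagnostics are 0-indexed
  local diagnostics = vim.diagnostic.get(0, { lnum = lnum })
  if #diagnostics == 0 then
    return
  end
  local line = vim.api.nvim_buf_get_lines(0, lnum, lnum + 1, false)[1]
  -- Hand the error message and the offending line to the LLM.
  prompt_llm("Fix this error: " .. diagnostics[1].message .. "\nCode: " .. line)
end)
```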
This enables a user to highlight some text in a buffer and then use it in a prompt.
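A minimal linewise sketch of reading the selection via the '< and '> marks:

```lua
-- Returns the lines covered by the last visual selection.
local function get_visual_selection()
  local first = vim.fn.getpos("'<")[2]
  local last = vim.fn.getpos("'>")[2]
  local lines = vim.api.nvim_buf_get_lines(0, first - 1, last, false)
  return table.concat(lines, "\n")
end
```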
Add the ability to batch the response from the LLM so that output is written per line (or similar), reducing CPU load compared to streaming each token.
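A minimal sketch of line batching, assuming `buf` is the output buffer:

```lua
local pending = ""
local function on_token(token)
  pending = pending .. token
  local lines = vim.split(pending, "\n", { plain = true })
  pending = table.remove(lines) -- keep the trailing partial line buffered
  if #lines > 0 then
    vim.schedule(function()
      -- Append only completed lines, instead of redrawing per token.
      vim.api.nvim_buf_set_lines(buf, -1, -1, false, lines)
    end)
  end
end
```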
A user should be made aware that a job is running, so they do not think previous jobs have completed.
A user should be able to fuzzy find over the directory of saved chats and pick a chat to load into the chat window.
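A minimal Telescope sketch; `chats_dir` and `load_chat` are illustrative assumptions:

```lua
local pickers = require("telescope.pickers")
local finders = require("telescope.finders")
local conf = require("telescope.config").values
local actions = require("telescope.actions")
local action_state = require("telescope.actions.state")

local function pick_saved_chat(chats_dir)
  pickers.new({}, {
    prompt_title = "Saved chats",
    finder = finders.new_oneshot_job({ "fd", ".", chats_dir }),
    sorter = conf.generic_sorter({}),
    attach_mappings = function(prompt_bufnr)
      actions.select_default:replace(function()
        actions.close(prompt_bufnr)
        local entry = action_state.get_selected_entry()
        load_chat(entry[1]) -- hypothetical loader for the chat window
      end)
      return true
    end,
  }):find()
end
```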
Add the ability to diff text inline: not two buffers, one showing additions and the other deletions, but inline in a single buffer.
This lets a user easily view text changes, e.g. from an LLM prompted with "fix the bug in the following text" or similar.
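A minimal sketch of computing the hunks such inline highlights would be driven by, assuming `old_lines`/`new_lines` hold the before/after text:

```lua
-- vim.diff returns {start_a, count_a, start_b, count_b} hunks when
-- result_type is "indices"; these can drive extmark-based inline
-- highlights instead of a two-buffer diff split.
local old_text = table.concat(old_lines, "\n") .. "\n"
local new_text = table.concat(new_lines, "\n") .. "\n"
local hunks = vim.diff(old_text, new_text, { result_type = "indices" })
for _, hunk in ipairs(hunks) do
  local start_a, count_a, start_b, count_b = unpack(hunk)
  -- e.g. render deleted lines as virtual lines and highlight added ones.
end
```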
Add the ability to chat with an LLM in a window.
Add two floating windows, one for input and one for output, as sketched below.
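A minimal sketch of the two stacked floats; the sizes and titles are arbitrary:

```lua
local output_buf = vim.api.nvim_create_buf(false, true)
local input_buf = vim.api.nvim_create_buf(false, true)
local width = math.floor(vim.o.columns * 0.6)
local col = math.floor((vim.o.columns - width) / 2)
vim.api.nvim_open_win(output_buf, false, {
  relative = "editor", row = 2, col = col,
  width = width, height = 15, border = "rounded", title = "aiui",
})
vim.api.nvim_open_win(input_buf, true, {
  relative = "editor", row = 19, col = col, -- directly below the output float
  width = width, height = 3, border = "rounded", title = "Input",
})
```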
To improve performance, the message/context from the LLM could be built on the fly while processing stdout. At the moment this is done by extracting the right information from all the lines after the job has finished; doing it as stdout arrives would avoid extracting the relevant strings twice.
When you click on the output window it gains focus, and because its southern border does not have a title, the title is removed from the input window, see:
Add a footer to the output window options; this is only available in Neovim 0.10, see PR
When a chat uses more lines than are viewable in the buffer's window, the last lines are not shown.
The expected behavior is to always show the last lines, e.g. as sketched below.
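A minimal sketch, assuming `output_buf` and `output_win` are the chat window's handles:

```lua
-- After appending output, pin the cursor to the last line so the
-- newest text stays visible.
local last_line = vim.api.nvim_buf_line_count(output_buf)
vim.api.nvim_win_set_cursor(output_win, { last_line, 0 })
```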
In the change_instance function the window config is changed. I would like to use:

```lua
vim.api.nvim_win_set_config(self.output.window_handle, self.output.window_opts)
vim.api.nvim_win_set_config(self.input.window_handle, self.input.window_opts)
```

However, this lowers the placement of the output window, so I have resorted to a double call to toggle as a temporary fix.
Add ability to use the mistral.ai API.
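A minimal sketch of building a request for Mistral's chat completions endpoint; the model name and the surrounding transport (curl/jobstart) are assumptions:

```lua
local body = vim.json.encode({
  model = "mistral-small-latest",
  stream = true,
  messages = {
    { role = "user", content = prompt },
  },
})
-- POST `body` to https://api.mistral.ai/v1/chat/completions with an
-- "Authorization: Bearer <MISTRAL_API_KEY>" header.
```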