
cmp-ai's Introduction

cmp-ai

AI source for hrsh7th/nvim-cmp

This is a general-purpose AI source for nvim-cmp, easily adapted to any REST API that supports remote code completion.

For now, HuggingFace SantaCoder, OpenAI Chat, Codestral, Google Bard, Ollama, and Tabby are implemented.

Install

Dependencies

  • You will need plenary.nvim to use this plugin.
  • For using Codestral, OpenAI or HuggingFace, you will also need curl.
  • For using Google Bard, you will need dsdanielpark/Bard-API.

Using a plugin manager

Using Lazy:

return require("lazy").setup({
    {'tzachar/cmp-ai', dependencies = 'nvim-lua/plenary.nvim'},
    {'hrsh7th/nvim-cmp', dependencies = {'tzachar/cmp-ai'}},
})

And later, tell cmp to use this plugin:

require'cmp'.setup {
    sources = {
        { name = 'cmp_ai' },
    },
}

Setup

Please note the use of : (a method call) instead of a . when calling setup.

To use HuggingFace:

local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 1000,
  provider = 'HF',
  notify = true,
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
  ignored_file_types = {
    -- default is not to ignore
    -- uncomment to ignore in lua:
    -- lua = true
  },
})

You will also need to make sure you have the Hugging Face API key in your environment, as HF_API_KEY.
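
If completions fail silently, a quick sanity check from inside Neovim can confirm the key is visible to the editor. This is only a minimal sketch (the same idea applies to the other providers' keys):

-- Minimal sanity check; assumes HF_API_KEY was exported in the shell that
-- launched Neovim.
if vim.env.HF_API_KEY == nil or vim.env.HF_API_KEY == '' then
  vim.notify('HF_API_KEY is not set', vim.log.levels.WARN)
end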

To use OpenAI:

local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 1000,
  provider = 'OpenAI',
  provider_options = {
    model = 'gpt-4',
  },
  notify = true,
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
  ignored_file_types = {
    -- default is not to ignore
    -- uncomment to ignore in lua:
    -- lua = true
  },
})

You will also need to make sure you have the OpenAI API key in your environment, as OPENAI_API_KEY.

Available models for OpenAI are gpt-4 and gpt-3.5-turbo.

To use Codestral:

local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 1000,
  provider = 'Codestral',
  provider_options = {
    model = 'codestral-latest',
  },
  notify = true,
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
  ignored_file_types = {
    -- default is not to ignore
    -- uncomment to ignore in lua:
    -- lua = true
  },
})

You will also need to make sure you have the Codestral API key in your environment, as CODESTRAL_API_KEY.

To use Google Bard:

local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 1000,
  provider = 'Bard',
  notify = true,
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
  ignored_file_types = {
    -- default is not to ignore
    -- uncomment to ignore in lua:
    -- lua = true
  },
})

You will also need to follow the instructions on dsdanielpark/Bard-API to get the __Secure-1PSID key, and set the environment variable BARD_API_KEY accordingly (note that this plugin expects BARD_API_KEY without a leading underscore).

To use Ollama:

local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 100,
  provider = 'Ollama',
  provider_options = {
    model = 'codellama:7b-code',
  },
  notify = true,
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
  ignored_file_types = {
    -- default is not to ignore
    -- uncomment to ignore in lua:
    -- lua = true
  },
})

With Ollama you can also use the suffix parameter, typically when you want to use cmp-ai for code completion while keeping the default prompt handling.

If the model you're using has the following template:

{{- if .Suffix }}<|fim_prefix|>{{ .Prompt }}<|fim_suffix|>{{ .Suffix }}<|fim_middle|>
{{- else }}{{ .Prompt }}
{{- end }}

then you can use the suffix parameter instead of changing the prompt, since the model will use your prompt and suffix to construct the template. The prompt should be lines_before and the suffix lines_after. You can then even change the model without needing to adjust the prompt or suffix functions.

local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 100,
  provider = 'Ollama',
  provider_options = {
    model = 'codegemma:2b-code',
    prompt = function(lines_before, lines_after)
        return lines_before
    end,
    suffix = function(lines_after)
      return lines_after
    end,
  },
  notify = true,
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
})

To use Tabby:

local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  max_lines = 1000,
  provider = 'Tabby',
  notify = true,
  provider_options = {
    -- These are optional
    -- user = 'yourusername',
    -- temperature = 0.2,
    -- seed = 'randomstring',
  },
  notify_callback = function(msg)
    vim.notify(msg)
  end,
  run_on_every_keystroke = true,
  ignored_file_types = {
    -- default is not to ignore
    -- uncomment to ignore in lua:
    -- lua = true
  },
})

You will also need to make sure you have the Tabby API key in your environment, as TABBY_API_KEY.

notify

As some completion sources can be quite slow, setting this to true will trigger a notification via vim.notify when a completion starts and when it ends.

notify_callback

The default notify function uses vim.notify, but an override can be configured. For example:

notify_callback = function(msg)
  require('notify').notify(msg, vim.log.levels.INFO, {
    title = 'OpenAI',
    render = 'compact',
  })
end

max_lines

How many lines of buffer context to use.

run_on_every_keystroke

Generate new completion items on every keystroke.

ignored_file_types (table: <string:bool>)

Which file types to ignore. For example:

local ignored_file_types = {
  html = true,
}

cmp-ai will not offer completions when vim.bo.filetype is html.

Dedicated cmp keybindings

As completions can take time, and you might not want to trigger expensive APIs on every keystroke, you can configure cmp-ai to trigger only with a specific key press. For example, to bind cmp-ai to <C-x>, you can do the following:

cmp.setup({
  ...
  mapping = {
    ...
    ['<C-x>'] = cmp.mapping(
      cmp.mapping.complete({
        config = {
          sources = cmp.config.sources({
            { name = 'cmp_ai' },
          }),
        },
      }),
      { 'i' }
    ),
  },
})

Also, make sure you do not pass cmp-ai to the default list of cmp sources.
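
For example, a minimal sketch of a default sources list that leaves cmp_ai out (the other source names here are only illustrative):

cmp.setup({
  sources = cmp.config.sources({
    -- cmp_ai is intentionally absent here; it is reachable only through
    -- the <C-x> mapping shown above
    { name = 'nvim_lsp' }, -- illustrative
    { name = 'buffer' },   -- illustrative
  }),
})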

Pretty Printing Menu Items

You can use the following to pretty print the completion menu (requires lspkind and patched fonts (https://www.nerdfonts.com)):

require('cmp').setup({
  sources = {
    { name = 'cmp_ai' },
  },
  formatting = {
    format = require('lspkind').cmp_format({
      mode = "symbol_text",
      maxwidth = 50,
      ellipsis_char = '...',
      show_labelDetails = true,
      symbol_map = {
        HF = "",
        OpenAI = "",
        Codestral = "",
        Bard = "",
      }
    })
  },
})

Sorting

You can bump cmp-ai completions to the top of your completion menu like so:

local compare = require('cmp.config.compare')
cmp.setup({
  sorting = {
    priority_weight = 2,
    comparators = {
      require('cmp_ai.compare'),
      compare.offset,
      compare.exact,
      compare.score,
      compare.recently_used,
      compare.kind,
      compare.sort_text,
      compare.length,
      compare.order,
    },
  },
})

cmp-ai's People

Contributors

bmichotte, catgoose, chmanie, github-actions[bot], mdietrich16, milafrerichs, mrloop, nifoc, tzachar


cmp-ai's Issues

OpenAI model not working

Hi there, I'm getting the "Completion started" notification on each keystroke, but the suggestion does not show up at all.

All other cmp completion plugins are working fine, even tabnine.

My OPENAI_API_KEY key is set and working.

Please let me know what is missing:

        local cmp_ai = require("cmp_ai.config")
        local ai_compare = require("cmp_ai.compare")

...
        -- Cmp AI config
        cmp_ai:setup({
            max_lines = 1000,
            provider = "OpenAI",
            model = "gpt-4",
            notify = true,
            run_on_every_keystroke = true,
            ignored_file_types = {
                -- default is not to ignore
                -- uncomment to ignore in lua:
                -- lua = true
            },
        })

            sources = cmp.config.sources({
                { name = "cmp_ai" },
...

                    -- Set source name
                    vim_item.menu = ({
                        cmp_ai = "[CMP_AI ]",
....
                    -- AI custom symbol and type
                    if entry.source.name == ("cmp_tabnine" or "cmp_ai") then
                        local detail = (entry.completion_item.labelDetails or {}).detail

                        vim_item.kind = ""

                        if detail and detail:find(".*%%.*") then
                            vim_item.kind = vim_item.kind .. " " .. detail
                        end

                        if (entry.completion_item.data or {}).multiline then
                            vim_item.menu = "[AI - ML]"
                        end
                    end

                    local maxwidth = 80
                    vim_item.abbr = string.sub(vim_item.abbr, 1, maxwidth)

                    return vim_item
                end,
                ....
                
            sorting = {
                priority_weight = 2,
                comparators = {
                    ai_compare,

Bard Error: json.choices returns nil instead of table

I'm trying to use the Bard backend with cmp-ai but I'm getting the error in the screenshot below.

Screenshot from 2023-06-20 05-50-43

Just to make sure it's not a problem with my Bard API installation, I can successfully call the Bard API from a Python REPL as you can see in the screenshot below.

Screenshot from 2023-06-20 06-05-12

The culprit seems to be json.choices returning nil instead of a table on line 83 in the bard backend.

Let me know if you need me to provide further details.
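
A hedged sketch of the kind of guard that would avoid indexing a nil value there (this is not the actual bard.lua code, just an illustration of the failing pattern):

-- Hypothetical guard around the decoded response; the real structure of
-- the bard backend may differ.
if json == nil or json.choices == nil then
  return {} -- return no completion items instead of indexing a nil table
end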

Add delay when firing auto-complete

I wish the user could define a delay, e.g. 0.3 sec, where cmp-ai would wait for the user to be idle for 0.3 s before sending a request to the server.
I think right now it fires autocomplete on every keystroke, right?
I know about the <C-x> mapping, but a delay option would still be cool: it would keep my GPU fans from spinning when I'm typing fast and not waiting for autocomplete.
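
Until such an option exists, one possible workaround is to debounce a manual trigger yourself. This is only a sketch using standard Neovim and nvim-cmp APIs (it assumes cmp_ai is not in the default sources list and run_on_every_keystroke is false):

-- Request cmp_ai completions only after ~300 ms of idle time in insert mode.
local cmp = require('cmp')
local timer = vim.uv.new_timer() -- use vim.loop.new_timer() on older Neovim

vim.api.nvim_create_autocmd('TextChangedI', {
  callback = function()
    timer:stop()
    timer:start(300, 0, vim.schedule_wrap(function()
      cmp.complete({
        config = { sources = { { name = 'cmp_ai' } } },
      })
    end))
  end,
})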

Support for AWS codewhisperer

Awesome project!

I am searching for an AWS code whisperer plugin and found this project. I guess this would be the best place to insert a new AI tool.

HF logic issue

hf.lua line 50

if response.generated_text == nil then

should be

if response.generated_text ~= nil then

Working cmp-ai configuration for NVChad?

Not an issue as such. Does anyone have a working configuration sample for the cmp-ai plugin with NVChad? I haven't been able to get this working and would appreciate any "best practices".

I believe I need to:

  1. Install and setup plugins in /lua/plugins/init.lua:
    {'tzachar/cmp-ai', dependencies = 'nvim-lua/plenary.nvim'},

    This one-liner seems to work fine

  2. A step to configure cmp-ai to use a specific external AI source (e.g. Codestral) in /lua/configs/cmp-ai.lua?

  3. A step that involves an nvim-cmp sources override? I assume also a file in /lua/configs/nvim-cmp.lua

Much appreciated!

cannot change ollama model

I think something is wrong with the arguments passed to ollama:new.

[screenshot]

The o variable seems to hold provider_options (initialized by the user to override the built-in model):

[screenshot]

params is nil. Thus the built-in parameters cannot be overridden in the line:

self.params = vim.tbl_deep_extend('keep', params or {}, {

Config:

local cmp_ai = require "cmp_ai.config"

cmp_ai:setup {
  max_lines = 20,
  provider = "Ollama",
  provider_options = {
    model = 'deepseek-coder:1.3b-base-q5_0', -- way stronger version, but not working; needs custom prompts.
    -- prompt="<|fim▁begin|>"..lines_before.."<|fim▁hole|>"..lines_after.."<|fim▁end|>"
    options = {
      temperature = 0.2,
    }
  },      
.......

Btw, is there a way to override the model prompt from within 'provider_options'?
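
A hedged sketch, based only on the prompt option documented in the README's Ollama section: the commented-out template above could likely be expressed as a prompt function. The FIM tokens are copied from that comment and are not verified against the deepseek-coder template.

local cmp_ai = require('cmp_ai.config')

cmp_ai:setup({
  provider = 'Ollama',
  provider_options = {
    model = 'deepseek-coder:1.3b-base-q5_0',
    -- FIM tokens taken from the commented-out prompt above; verify them
    -- against the model's own template before relying on this.
    prompt = function(lines_before, lines_after)
      return '<|fim▁begin|>' .. lines_before .. '<|fim▁hole|>' .. lines_after .. '<|fim▁end|>'
    end,
  },
})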

Feature: Support more Models out of the box

Thanks for your package and your feedback on my PR.

Regarding my PR #25, I put some more thought into it and wanted to create an issue to keep the discussion more visible rather than buried in an existing PR.

Where am I coming from

I want to use a different model than the default you set. To do so I need to adjust the prompt function and of course change the model.

So instead of the default, I need to look up what the template for the model is and adjust the prompt.
Template:

{{- if .Suffix }}<|fim_prefix|>{{ .Prompt }}<|fim_suffix|>{{ .Suffix }}<|fim_middle|>
{{- else }}{{ .Prompt }}
{{- end }}

So I would write the options as follows:

provider_options = {
	model = 'codegemma:2b-code',
	prompt = function(lines_before, lines_after)
		return '<|fim_prefix|>' .. lines_before .. '<|fim_suffix|>' .. lines_after .. '<|fim_middle|>'
	end,
}

And if I want to use a different model I would need to do the same.

Proposal

Support more models by not tying the default prompt to a specific model: use a more general-purpose prompt and rely on the existing templates via a suffix param.

Instead of having a default prompt for a specific model, we just return lines_before:

- prompt = self.params.prompt and self.params.prompt(lines_before, lines_after) or '<PRE> ' .. lines_before .. ' <SUF>' .. lines_after .. ' <MID>',
+ prompt = self.params.prompt and self.params.prompt(lines_before, lines_after) or lines_before,
+ suffix = lines_after,

This results in the exact same behaviour the plugin has currently, since the default model's template looks like this:

{{- if .Suffix }}<PRE> {{ .Prompt }} <SUF>{{ .Suffix }} <MID>
{{- else }}{{ .Prompt }}
{{- end }}

But now every other model can be used without touching the prompt or anything else; someone just needs to adjust the model name.

Future Improvements

The plugin could introduce a new option to disable the suffix.
For example, the raw parameter could be introduced, which would be a nice addition as well since it overrides the template.

From the ollama docs:
raw: if true no formatting will be applied to the prompt. You may choose to use the raw parameter if you are specifying a full templated prompt in your request to the API

I'm happy to adjust my PR or create a new one.

feature request: Tabbyml support

Tabbyml is a self-hosted AI with an API similar to OpenAI's deprecated completion API (https://tabby.tabbyml.com/api/completion/), but with a few modifications.

Differences vs OpenAI, for starters, are:

  • custom URL
  • custom authentication token passed as authorization: bearer

Then there are also different parameters for the completions API, like git_url or declarations, that would be nice to also support for Tabby.

I believe an initial version should be easy to implement. I may sit down and do it, depending on time.

Thanks.

ollama completion does not work at all

Using manual completion (<C-x>), it just spams "Completion started" until I have to close nvim.

my config:

{
	'maxwell-bland/cmp-ai',
	config = function()
		local cmp_ai = require('cmp_ai.config')

		cmp_ai:setup({
			max_lines = 100,
			provider = 'Ollama',
			provider_options = {
				stream = true,
				model = 'mistral',
			},
			notify = true,
			notify_callback = function(msg)
				vim.notify(msg)
			end,
			run_on_every_keystroke = false,
		})
	end,
},
