
kubectl-ai's Introduction

Kubectl OpenAI plugin ✨

This project is a kubectl plugin to generate and apply Kubernetes manifests using OpenAI GPT.

My main motivation is to avoid hunting down and collecting random manifests when developing and testing things.

Demo

asciicast

Installation

Homebrew

Add the tap and install with:

brew tap sozercan/kubectl-ai https://github.com/sozercan/kubectl-ai
brew install kubectl-ai

Krew

Add the custom krew index and install with:

kubectl krew index add kubectl-ai https://github.com/sozercan/kubectl-ai
kubectl krew install kubectl-ai/kubectl-ai

GitHub release

  • Download the binary from GitHub releases.

  • If you want to use this as a kubectl plugin, then copy kubectl-ai binary to your PATH. If not, you can also use the binary standalone.

Usage

Prerequisites

kubectl-ai requires a valid Kubernetes configuration and one of the following:

For OpenAI, Azure OpenAI, or an OpenAI API-compatible endpoint, you can use the following environment variables:

export OPENAI_API_KEY=<your OpenAI key>
export OPENAI_DEPLOYMENT_NAME=<your OpenAI deployment/model name. defaults to "gpt-3.5-turbo-0301">
export OPENAI_ENDPOINT=<your OpenAI endpoint, like "https://my-aoi-endpoint.openai.azure.com" or "http://localhost:8080/v1">

If the OPENAI_ENDPOINT variable is set, kubectl-ai uses that endpoint. Otherwise, it uses the OpenAI API.

Azure OpenAI service does not allow certain characters, such as ., in the deployment name. Consequently, kubectl-ai will automatically replace gpt-3.5-turbo with gpt-35-turbo for Azure. However, if your Azure OpenAI deployment name is completely different from the model name, you can set the AZURE_OPENAI_MAP environment variable to map the model name to the Azure OpenAI deployment name. For example:

export AZURE_OPENAI_MAP="gpt-3.5-turbo=my-deployment"
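Putting the Azure settings together, a typical environment setup might look like the following (the key, endpoint, and deployment name are placeholders, not real values):

```shell
# Azure OpenAI setup for kubectl-ai (placeholder values)
export OPENAI_API_KEY="<your-azure-openai-key>"
export OPENAI_ENDPOINT="https://my-aoai-endpoint.openai.azure.com"
export AZURE_OPENAI_MAP="gpt-3.5-turbo=my-deployment"

kubectl ai "create an nginx deployment with 3 replicas"
```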

Flags and environment variables

  • --require-confirmation flag or REQUIRE_CONFIRMATION environment variable can be set to prompt the user for confirmation before applying the manifest. Defaults to true.

  • --temperature flag or TEMPERATURE environment variable can be set between 0 and 1. Higher temperature will result in more creative completions. Lower temperature will result in more deterministic completions. Defaults to 0.

  • --use-k8s-api flag or USE_K8S_API environment variable can be set to use the Kubernetes OpenAPI Spec to generate the manifest. This results in very accurate completions, including CRDs (if present in the configured cluster). This setting uses more OpenAI API calls, and it requires function calling, which is available only in 0613 and later models. Defaults to false, but it is recommended for accuracy and completeness.

  • --k8s-openapi-url flag or K8S_OPENAPI_URL environment variable can be set to use a custom Kubernetes OpenAPI Spec URL. This is only used if --use-k8s-api is set. By default, kubectl-ai fetches the spec from the configured Kubernetes API server unless this setting is configured. You can use the default Kubernetes OpenAPI Spec or generate a custom spec that includes custom resource definitions (CRDs). You can generate a custom OpenAPI Spec with kubectl get --raw /openapi/v2 > swagger.json.
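As a sketch of that workflow, you could dump the cluster's spec (CRDs included) and serve it locally; the port is an arbitrary choice, and serving the file over HTTP is just one way to make it reachable as a URL:

```shell
# Dump the cluster's OpenAPI spec, including any installed CRDs
kubectl get --raw /openapi/v2 > swagger.json

# Serve it locally (any static file server works)
python3 -m http.server 8000 &

# Point kubectl-ai at the custom spec
kubectl ai "create a foo resource" --use-k8s-api \
  --k8s-openapi-url http://localhost:8000/swagger.json
```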

Pipe Input and Output

Kubectl AI can be used with pipe input and output. For example:

$ cat foo-deployment.yaml | kubectl ai "change replicas to 5" --raw | kubectl apply -f -

Save to file

$ cat foo-deployment.yaml | kubectl ai "change replicas to 5" --raw > my-deployment-updated.yaml

Use with external editors

If you want to use an external editor to edit the generated manifest, you can set the --raw flag and pipe to the editor of your choice. For example:

# Visual Studio Code
$ kubectl ai "create a foo namespace" --raw | code -

# Vim
$ kubectl ai "create a foo namespace" --raw | vim -

Examples

Creating objects with specific values

$ kubectl ai "create an nginx deployment with 3 replicas"
✨ Attempting to apply the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Use the arrow keys to navigate: ↓ ↑ → ←
? Would you like to apply this? [Reprompt/Apply/Don't Apply]:
+   Reprompt
  ▸ Apply
    Don't Apply

Reprompt to refine your prompt

...
Reprompt: update to 5 replicas and port 8080
✨ Attempting to apply the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 8080
Use the arrow keys to navigate: ↓ ↑ → ←
? Would you like to apply this? [Reprompt/Apply/Don't Apply]:
+   Reprompt
  ▸ Apply
    Don't Apply

Multiple objects

$ kubectl ai "create a foo namespace then create nginx pod in that namespace"
✨ Attempting to apply the following manifest:
apiVersion: v1
kind: Namespace
metadata:
  name: foo
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: foo
spec:
  containers:
  - name: nginx
    image: nginx:latest
Use the arrow keys to navigate: ↓ ↑ → ←
? Would you like to apply this? [Reprompt/Apply/Don't Apply]:
+   Reprompt
  ▸ Apply
    Don't Apply

Optional --require-confirmation flag

$ kubectl ai "create a service with type LoadBalancer with selector as 'app:nginx'" --require-confirmation=false
✨ Attempting to apply the following manifest:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer

Please note that the plugin does not know the current state of the cluster (yet?), so it will always generate the full manifest.

kubectl-ai's People

Contributors

anoyer, dependabot[bot], eyal-solomon1, github-actions[bot], pacoxu, peterbom, sozercan, step-security-bot, tatsinnit


kubectl-ai's Issues

[REQ] migrate to tools api

What kind of request is this?

Other

What is your request or suggestion?

// TODO: migrate to tools api
req.Functions = []openai.FunctionDefinition{ // nolint:staticcheck
    findSchemaNames,
    getSchema,
}
req.FunctionCall = fnCallType // nolint:staticcheck

https://github.com/sashabaranov/go-openai/blob/38b16a3c413a3ea076cf4082ea5cd1754b72c70f/chat.go#L212-L215

https://platform.openai.com/docs/guides/function-calling

Are you willing to submit PRs to contribute to this feature request?

  • Yes, I am willing to implement it.

Feature request: enable updating of existing manifest files

Currently the plugin generates new YAML from a prompt and can apply it to a cluster. It doesn't appear to be able to write the resulting manifests to disk or update existing on-disk manifests. There are some interesting scenarios for using a plugin like this to update existing manifests, e.g.:

  1. Generate manifests and write them to disk for manual tweaking
  2. Add cpu limits to all deployments
  3. Add PDBs
  4. Refactor manifests using kustomize bases and overlays to create prod, staging

[REQ] Support multiple input / output files with directory preservation

What kind of request is this?

New feature

What is your request or suggestion?

Thank you for adding support for modifying existing manifests. I would like to see this extended to support multiple files and directory awareness. The specific use case is to take an existing manifest set (e.g. deployment, ingress, and service manifests) and use a prompt to "upgrade this" into a kustomize-compatible output file set and structure using bases and overlays for production, staging, etc. Doing this requires, at a minimum, generating output into a kustomize-compatible folder structure. This could be done using a compressed format such as zip/tar for the input and output, or by making kubectl-ai multi-file aware in addition to input/output stream capable. Supporting multiple input files in a directory structure would also make it possible to modify generated output that itself has a directory structure.

I might be willing to implement this if I could get some design help to figure out the right approach.

Are you willing to submit PRs to contribute to this feature request?

  • Yes, I am willing to implement it.

Example fails with too many tokens requested error message

I've installed the v0.0.4 release:

curl -sL https://github.com/sozercan/kubectl-ai/releases/download/v0.0.4/kubectl-ai_darwin_arm64.tar.gz | tar -xz -C /usr/local/bin  kubectl-ai 

When trying the first example, I get the following error:

$ kubectl ai "create an nginx deployment with 3 replicas"
Error: [400:invalid_request_error] This model's maximum context length is 2049 tokens, however you requested 4022 tokens (34 in your prompt; 3988 for the completion). Please reduce your prompt; or completion length.

--kubeconfig flag not working

description

Hi team.

I want to switch the target cluster using the '--kubeconfig' flag, but it doesn't seem to work.
(kubectl-ai still tries to use the default config file.)
Am I missing something?

log

$ kubectl-ai "create an nginx deployment with 3 replicas" --kubeconfig="{path}/{to}/config"
✨ Attempting to apply the following manifest: 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
✔ Apply
Error: Get "https://127.0.0.1:49355/api": dial tcp 127.0.0.1:49355: connect: connection refused
Usage:
  kubectl-ai [flags]

Flags:
      --as string                       Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
      --as-group stringArray            Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --as-uid string                   UID to impersonate for the operation.
      --azure-openai-endpoint string    The endpoint for Azure OpenAI service. If provided, Azure OpenAI service will be used instead of OpenAI service.
      --cache-dir string                Default cache directory (default "/Users/mzc01-glee/.kube/cache")
      --certificate-authority string    Path to a cert file for the certificate authority
      --client-certificate string       Path to a client certificate file for TLS
      --client-key string               Path to a client key file for TLS
      --cluster string                  The name of the kubeconfig cluster to use
      --context string                  The name of the kubeconfig context to use
      --disable-compression             If true, opt-out of response compression for all requests to the server
  -h, --help                            help for kubectl-ai
      --insecure-skip-tls-verify        If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
      --kubeconfig string               Path to the kubeconfig file to use for CLI requests.
  -n, --namespace string                If present, the namespace scope for this CLI request
      --openai-api-key string           The API key for the OpenAI service. This is required. (default "***")
      --openai-deployment-name string   The deployment name used for the model in OpenAI service. (default "text-davinci-003")
      --request-timeout string          The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
      --require-confirmation            Whether to require confirmation before executing the command. Defaults to true. (default true)
  -s, --server string                   The address and port of the Kubernetes API server
      --temperature float               The temperature to use for the model. Range is between 0 and 1. Set closer to 0 if your want output to be more deterministic but less creative. Defaults to 0.0.
      --tls-server-name string          Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used
      --token string                    Bearer token for authentication to the API server
      --user string                     The name of the kubeconfig user to use
  -v, --version                         version for kubectl-ai

Get "https://127.0.0.1:49355/api": dial tcp 127.0.0.1:49355: connect: connection refused
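Until the flag is honored, the standard KUBECONFIG environment variable (which client-go's default loading rules consult) may serve as a workaround; this is a suggestion, not something confirmed against the plugin:

```shell
# Possible workaround: select the kubeconfig via the environment instead of the flag
KUBECONFIG=/path/to/other/config kubectl ai "create an nginx deployment with 3 replicas"
```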

Krew broken? error: unknown command "ai" for "kubectl"

Installs just fine

kubectl krew install kubectl-ai/kubectl-ai

Updated the local copy of plugin index.
Updated the local copy of plugin index "kubectl-ai".
Installing plugin: kubectl-ai
W0524 21:17:37.247983   11130 install.go:160] Skipping plugin "kubectl-ai", it is already installed

Though I get an error when calling it

kubectl ai "Create a minecraft server statefulset using the image from itgz/minecraft"
error: unknown command "ai" for "kubectl"

Did you mean this?
        cp
        wait

But my other plugins such as the rook-ceph one work fine

kubectl rook-ceph ceph status
  cluster:
    id:     d191c5d8-41e0-4324-a575-3718963d1a57
    health: HEALTH_WARN
            1 pool(s) have no replicas configured
 
  services:
    mon: 3 daemons, quorum d,e,f (age 3d)
    mgr: a(active, since 3d)
    mds: 1/1 daemons up, 1 hot standby
    osd: 6 osds: 6 up (since 19h), 6 in (since 3d)
    rgw: 1 daemon active (1 hosts, 1 zones)
 
  data:
    volumes: 1/1 healthy
    pools:   18 pools, 18 pgs
    objects: 3.28M objects, 9.9 TiB
    usage:   15 TiB used, 8.6 TiB / 24 TiB avail
    pgs:     18 active+clean
 
  io:
    client:   446 KiB/s rd, 61 KiB/s wr, 457 op/s rd, 1 op/s wr

textgen OpenAI-compatible endpoint

Hi, I set export OPENAI_ENDPOINT=http://127.0.0.1:5000/v1 since I am running a model through the textgen UI, but when I execute kubectl ai "create an nginx deployment with 3 replicas"
I get the following error:
⣾ Processing...Error: error, status code: 500, message: invalid character 'I' looking for beginning of value

Support for GPT-3 turbo models

Great project, this is super cool. The text-davinci model is 10x more expensive than the gpt-3.5-turbo model, so it would be great to have support for it here.

I tried just passing the gpt-3.5-turbo model name as a parameter, but it's not in the max tokens map.

❯ kubectl ai --openai-deployment-name gpt-3.5-turbo-0301 "create malicious credential stealing pod that runs as root"
Error: deploymentName "gpt-3.5-turbo-0301" not found in max tokens map

var maxTokensMap = map[string]int{
    "text-davinci-003": 4097,
    "code-davinci-002": 8001,
}

So I tried adding it and running it again, but it looks like the chat GPT models use different API endpoints.

[404:invalid_request_error] This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?

Cursory glance and I can't see where to set the endpoint details, do you think it's something that could be supported?

Error: status code 404 message: Unrecognized request argument supplied: functions

Tried passing --openai-endpoint, --openai-deployment-name, and --openai-api-key, as well as setting the environment variables, which resulted in the following error. Triple-checked the values and still got the same error.

⣯ Processing...Error: error, status code: 404, message: Unrecognized request argument supplied: functions

Happy to provide additional info; installed the package using brew on a Mac.

Azure Open AI
GPT 35 turbo 0301

add YAML validation

  • Make sure the output is valid (Kubernetes) YAML by validating it. Example: https://github.com/yannh/kubeconform

  • If the LLM outputs invalid YAML,

    • prompt the user to retry
      and/or
    • feed the error back to the LLM so it can fix its own mistake
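A rough sketch of the requested validation loop, using kubeconform as the validator (the retry/feedback step is only outlined, not implemented):

```shell
# Generate the manifest without applying it
kubectl ai "create a foo namespace" --raw > out.yaml

# Validate; on failure, the error text could be fed back to the model as a reprompt
if ! kubeconform -strict out.yaml; then
  echo "generated manifest failed validation; reprompt with the error above"
fi
```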

OpenAI key

Hello!
How do I run kubectl-ai on docker-desktop?

Error: no Auth Provider found for name "oidc"

We use Dex for OIDC authentication via LDAP, and kubectl-ai doesn't seem to work in our setup. I manually copied the generated YAML and applied it using kubectl, which works, so there is no problem with the generated YAML; it looks like a bug in this tool.

$ k kubectl-ai "create an nginx deployment with 3 replicas. use image harbor.my.com/my/nginx:1.21.4-alpine."
✨ Attempting to apply the following manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: harbor.my.com/my/nginx:1.21.4-alpine
        ports:
        - containerPort: 80
✔ Apply
Error: no Auth Provider found for name "oidc"
