k8s's Introduction

K8s

K8s - Kubernetes API Client for Elixir

Features

  • A client API for humans 👩🏼🧑👩🏻👩🏽👩🏾🧑🏻🧑🏽🧑🧑🏾👨🏼👨🏾👨🏿
  • 🔮 Kubernetes resources, groups, and CRDs are autodiscovered at boot time. No swagger file to include or override.
  • Client supports standard HTTP calls, async batches, wait on status ⏲️, and watchers 👀
  • ⚙️ HTTP Request middleware
  • Multiple clusters ⚓ ⚓ ⚓
  • 🔐 Multiple authentication credentials
    • 🤖 serviceaccount
    • token
    • 📜 certificate
    • auth-provider
    • Pluggable auth providers!
  • 🆗 Tested against Kubernetes versions 1.10+ and master
  • 🛠️ CRD support
  • 📈 Integrated with :telemetry
  • ℹ️ Kubernetes resource and version helper functions
  • 🧰 Kube config file parsing
  • 🏎️ Macro free; fast compile & fast startup

Installation

The package can be installed by adding :k8s to your list of dependencies in mix.exs:

def deps do
  [
    {:k8s, "~> 2.0"}
  ]
end

Usage

Check out the Usage Guide for in-depth examples. If you like learning with Livebook, check out kino_k8s. It comes with nice smart cells to help you generate your first working code.

Most functions are also covered by doctests.

If you are interested in building Kubernetes Operators or Schedulers, check out Bonny.

tl;dr Examples

Configure a cluster connection

Cluster connections can be created using the K8s.Conn module.

K8s.Conn.from_file/1 will use the current context in your kubeconfig.

{:ok, conn} = K8s.Conn.from_file("path/to/kubeconfig.yaml")

K8s.Conn.from_file/2 accepts a keyword list to set the :user, :cluster, and/or :context
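
For example, to select a specific context (the context name here is illustrative):

{:ok, conn} = K8s.Conn.from_file("path/to/kubeconfig.yaml", context: "my-context")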

Connections can also be created in-cluster from a service account.

{:ok, conn} = K8s.Conn.from_service_account("/path/to/service-account/directory")

Check out the connection guide for additional details.

Creating a deployment

{:ok, conn} = K8s.Conn.from_file("path/to/kubeconfig.yaml")

opts = [namespace: "default", name: "nginx", image: "nginx:1.7.9"]
{:ok, resource} = K8s.Resource.from_file("priv/deployment.yaml", opts)

operation = K8s.Client.create(resource)
{:ok, deployment} = K8s.Client.run(conn, operation)

Listing deployments

In a namespace:

{:ok, conn} = K8s.Conn.from_file("path/to/kubeconfig.yaml")

operation = K8s.Client.list("apps/v1", "Deployment", namespace: "prod")
{:ok, deployments} = K8s.Client.run(conn, operation)

Across all namespaces:

{:ok, conn} = K8s.Conn.from_file("path/to/kubeconfig.yaml")

operation = K8s.Client.list("apps/v1", "Deployment", namespace: :all)
{:ok, deployments} = K8s.Client.run(conn, operation)

Getting a deployment

{:ok, conn} = K8s.Conn.from_file("path/to/kubeconfig.yaml")

operation = K8s.Client.get("apps/v1", :deployment, [namespace: "default", name: "nginx-deployment"])
{:ok, deployment} = K8s.Client.run(conn, operation)

k8s's People

Contributors

alexgleason, bforchhammer, bradleyd, brandonduff, corka149, coryodaniel, darrenclark, dependabot[bot], drowzy, e-nikolov, elliottneilclark, freedomben, gpopides, hanspagh, icehaunter, jlgeering, jtarasovic, kennethito, linkdd, mruoss, overbryd, praveenperera, rbino, thephw, wisq

k8s's Issues

Elixir Stream type response for list operations

For paginated API requests, it would be nice to return a Stream.

Pulled from another project I'm working on.

defmodule Kube do
  @limit 10

  defmodule StreamListRequest do
    @moduledoc "List operation as a Stream data type"
    @typedoc "K8s pagination request"
    @type t :: %{
            operation: K8s.Operation.t(),
            cluster: atom,
            continue: nil | binary | :halt,
            opts: keyword | map
          }
    defstruct [:operation, :cluster, :continue, :opts]
  end

  alias Kube.StreamListRequest

  @type state_t :: {list(map), StreamListRequest.t()}
  @type halt_t :: {:halt, state_t}

  @spec stream(K8s.Operation.t(), atom, keyword | nil) :: Enumerable.t()
  def stream(%K8s.Operation{} = op, cluster, opts \\ []) do
    request = %StreamListRequest{
      operation: op,
      cluster: cluster,
      opts: opts
    }

    Stream.resource(
      fn -> init_request(request) end,
      &next_item/1,
      &stop/1
    )
  end

  @spec init_request(StreamListRequest.t()) :: state_t
  def init_request(%StreamListRequest{} = request) do
    case list(request) do
      {:ok, initial_state} ->
        initial_state

      _error ->
        {[], nil}
    end
  end

  @spec next_item(state_t) :: state_t | halt_t
  # Handle case of no items, no state. When started, but no items found.
  def next_item({[], nil} = state), do: {:halt, state}

  # All items in list have been popped, get more
  def next_item({[], _request} = state), do: fetch_next_page(state)

  # Items are in list, pop one and keep on keeping on.
  def next_item(state), do: pop_item(state)

  # Fetches the next page when the item list is empty. Halts the stream once the
  # previous response set `continue: :halt` (see `do_continue/1`).
  @spec fetch_next_page(state_t) :: state_t | halt_t
  def fetch_next_page({_, %StreamListRequest{continue: :halt}} = state), do: {:halt, state}

  def fetch_next_page({[], next_request} = state) do
    case list(next_request) do
      # An empty page is possible; next_item/1 re-dispatches (fetch again or halt)
      {:ok, new_state} -> next_item(new_state)
      {:error, _msg} -> {:halt, state}
    end
  end

  @spec list(StreamListRequest.t()) :: {:ok, state_t} | {:error, atom() | binary()}
  def list(%StreamListRequest{} = request) do
    pagination_params = %{limit: @limit, continue: request.continue}
    params = Map.merge(request.opts[:params] || %{}, pagination_params)
    # TODO: allow opts into .run/N

    response = K8s.Client.run(request.operation, request.cluster, params: params)

    case response do
      # convert k8s response to stream state
      {:ok, response} ->
        items = Map.get(response, "items")
        next_request = Map.put(request, :continue, do_continue(response))
        {:ok, {items, next_request}}

      error ->
        error
    end
  end

  @spec do_continue(map) :: :halt | binary
  defp do_continue(%{"metadata" => %{"continue" => ""}}), do: :halt
  defp do_continue(%{"metadata" => %{"continue" => cont}}) when is_binary(cont), do: cont
  defp do_continue(_map), do: :halt

  @doc false
  # Return the next item to the stream caller `[head]` and return the tail as the new state of the Stream
  @spec pop_item(state_t) :: state_t
  def pop_item({[head | tail], next}) do
    new_state = {tail, next}
    {[head], new_state}
  end

  @doc false
  # Stop processing the stream.
  @spec stop(state_t) :: nil
  def stop(_state), do: nil
end
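
Usage would then look something like this (cluster name illustrative):

deployment_names =
  K8s.Client.list("apps/v1", "Deployment", namespace: :all)
  |> Kube.stream(:default)
  |> Stream.map(& &1["metadata"]["name"])
  |> Enum.take(25)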

Stream module dialyzer issues

Dialyzer is mad.

lib/k8s/client/runner/stream.ex:87:invalid_contract
The @spec for the function does not match the success typing of the function.

Function:
K8s.Client.Runner.Stream.fetch_next_page/1

Success typing:
@spec fetch_next_page({_, %K8s.Client.Runner.Stream.ListRequest{:continue => _, _ => _}}) ::
  {:halt | [any(), ...], {_, _}}
________________________________________________________________________________
lib/k8s/client/runner/stream.ex:99:invalid_contract
The @spec for the function does not match the success typing of the function.

Function:
K8s.Client.Runner.Stream.list/1

Success typing:
@spec list(%K8s.Client.Runner.Stream.ListRequest{
  :cluster => atom(),
  :continue => _,
  :operation => %K8s.Operation{
    :group_version => binary(),
    :kind => atom() | binary(),
    :method => atom(),
    :path_params => [{_, _}],
    :resource => map(),
    :verb => atom()
  },
  :opts => nil | Keyword.t() | map(),
  _ => _
}) ::
  {:error, atom() | binary()}
  | {:ok,
     {_,
      %K8s.Client.Runner.Stream.ListRequest{
        :cluster => atom(),
        :continue => :halt | binary(),
        :operation => map(),
        :opts => nil | [any()] | map(),
        _ => _
      }}}
________________________________________________________________________________
lib/k8s/client/runner/stream.ex:126:invalid_contract
The @spec for the function does not match the success typing of the function.

Function:
K8s.Client.Runner.Stream.pop_item/1

Success typing:
@spec pop_item({nonempty_maybe_improper_list(), _}) :: {[any(), ...], {_, _}}
________________________________________________________________________________

Comprehensive testing of path generation

Originally k8s used property testing to test against all versions of kubernetes that had a swagger file under ./test/support. I think we can forgo testing against the swagger files with property testing and simply test a few cases:

  • Deployment resource REST methods
  • a CRD resource REST methods
  • Namespaced, Cluster scoped, and all namespaces operations
  • Subresource requests

Original Issue Notes:

Update cluster_tests.exs.

Currently based on the File driver, but since the file driver stub is small now and the property test exercises the Swagger files, there are lots of failures.

Gets a little more complicated since it tests multiple versions of Kubernetes ...

Update to use a File driver, which will require per-cluster configuration of drivers...

  • Add a mix task to get a swagger definition and fold that into a File driver config?
  • instead of routing_tests, name each cluster w/ the k8s version and use the applicable file (routing_tests_1.14) ... this would allow the cluster property test to be run offline...

Note: URL building has been moved to K8s.Conn

0.5 Documentation Update

  • Split USAGE.md into multiple files, include in hex docs
  • Document writing Request middleware
  • Document registering middleware to a cluster
  • Document conn vs :cluster name for running operations
  • Document Discovery driver configuration and behaviour
  • #51
  • Revisit moduledocs
  • Replace @docs w/ doctests where possible
  • Advanced.md (below)
  • add Resource loaders w/o !
  • usage.md High level usage docs, common use cases
  • Local Development and Testing
  • README link to usage docs

Question: Running a command in a pod?

Hi, I want to be able to run a command in a pod, and this requires a websocket upgrade when using Core.V1.connect_get_namespaced_pod_exec. Are there any examples you can point me at?

Thanks

BYO-Resource structs

Support for deserializing k8s responses into custom structs instead of returning maps.

Add a protocol (K8s.Resource?) with serialize/deserialize functions to be implemented by the end-user resource module.

K8s.Client.run(.... into: MyDeploymentModule)

How to handle list responses? It would be burdensome to make someone define a list for each type. List responses are pretty normalized. We could maybe include a single K8s.ResourceList module that defines kind and items struct keys.
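
A minimal sketch of what the end-user side could look like, assuming the protocol shape above (all names and callbacks are proposals, not an existing API):

# Proposed protocol, per the issue; name and callbacks subject to change
defprotocol K8s.Resource do
  @doc "Builds a struct from a k8s response map"
  def deserialize(template, map)

  @doc "Turns a struct back into a k8s resource map"
  def serialize(resource)
end

defmodule MyDeploymentModule do
  defstruct [:name, :namespace, :replicas]
end

defimpl K8s.Resource, for: MyDeploymentModule do
  # `into: MyDeploymentModule` could dispatch here with an empty struct template
  def deserialize(_template, map) do
    %MyDeploymentModule{
      name: get_in(map, ["metadata", "name"]),
      namespace: get_in(map, ["metadata", "namespace"]),
      replicas: get_in(map, ["spec", "replicas"])
    }
  end

  def serialize(dep) do
    %{
      "apiVersion" => "apps/v1",
      "kind" => "Deployment",
      "metadata" => %{"name" => dep.name, "namespace" => dep.namespace},
      "spec" => %{"replicas" => dep.replicas}
    }
  end
end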

Start cluster router w/ start_supervised for tests.

Currently a cluster router is started in test_helper and used by doctests and unit tests. I'm not sure if this is the best approach, but it works since nothing writes to the route tables after they are created.

Alternatively, a named router could be started for each doctest and each unit test. With the unit tests we could use start_supervised. Not positive of the best approach for starting a process from inside a doctest.

Client support for sub-resources

Currently the finalize, binding, scale, and approval subresources aren't supported in K8s.Client. The path module will generate these paths.

Adding support is a matter of:

  • finding a good API for setting the subresource when building a K8s.Operation and being able to wrap that up in K8s.Client (see the sketch after this list)
  • validating a subresource is supported by a resource {:error, :unsupported_subresource}
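
A rough sketch of the first point (put_subresource is hypothetical, not an existing function):

operation =
  K8s.Client.get("apps/v1", :deployment, namespace: "default", name: "nginx")
  |> K8s.Operation.put_subresource(:scale)

# validation from the second point would surface when the operation is run
{:ok, scale} = K8s.Client.run(operation, :default)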

Suggestion: Decouple the registry from auth and the client?

The thing stopping me from using k8s right now is the registry. My clusters have dex for user auth, and my app gets auth tokens from there. My users do not share Kubernetes credentials when interacting with clusters; they each get their own session tokens via OAuth and use those.

If the relationship between configuration, the registry, and auth were decoupled a bit, such that I didn't have to use it, I would be able to benefit from K8s.Client. As it currently stands, it seems like I would have to register a cluster per set of credentials? (Please correct me if that's not right.)

This is ideally how I'd like to be able to interact with k8s:

%{
  api_host: "https://example.com:6443",
  # And more, but not auth credentials
}
|> Client.new(token: "my user's token")
|> Client.list("apps/v1", :deployment, namespace: "default")
|> Client.run()

I don't like this quite as much, but it might be a shorter leap from how k8s is now, and it would solve my problem I think:

prod = K8s.Conn.new(some_config_without_credentials)
K8s.Cluster.Registry.add(:prod, prod)

operation = Client.list("apps/v1", :deployment, namespace: "default")
{:ok, deployments} = Client.run(operation, :prod, token: "my user's token")

Watch times out after 5 seconds

I spawn a genserver to listen for pod create/delete events like this:

  @impl true
  def init(stack) do
    operation = K8s.Client.list("v1", :pod, namespace: "default")
    {:ok, ref} = K8s.Client.Runner.Watch.run(operation, :default, 0, stream_to: self())
    Logger.info "Starting watcher with ref #{inspect(ref)}"
    {:ok, stack}
  end

  @impl true
  def handle_info(val, state) do
    Logger.debug "Got info: #{inspect(val)}"
    {:noreply, state}
  end

It works great for the first 5 seconds, but then throws this:
14:40:36.664 [debug] Got info: %HTTPoison.Error{id: #Reference<0.593330850.2224553994.255733>, reason: {:closed, :timeout}}

I'm not sure if I'm using the watch functionality incorrectly, or if this is a bug. I have run it both internally and externally with the same issue.

I went down this path as well as creating a controller for the pods and pattern matching on the pods I want. I'm not sure which way is correct/better, but if you have any suggestions I am very open!

Thanks.

Service Discovery Timeout

This might fall under the umbrella of #43

I'm also unsure how this could actually be fixed, but figured I'd raise it anyway and we can close the ticket if it's a problem we have to live with.

The gist of it is: https://sentry.io/share/issue/1d3b7deedd2d41c4bc9eb0b533c0fcc7/

The context is that the provider I'm using for Kubernetes (Digital Ocean) will timeout on random requests, I guess due to network conditions. These timeouts are transient (and not just related to service discovery but every interaction with the API). One possible solution is to mitigate with an automatic retry (https://hexdocs.pm/httpoison_retry/HTTPoison.Retry.html).
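
A minimal sketch of such a retry wrapper, assuming transient failures surface as {:error, %HTTPoison.Error{reason: :timeout}} (the helper and its defaults are illustrative, not part of k8s):

defmodule Retry do
  # Retries a zero-arity request fun on transient timeouts, up to `attempts` tries
  def with_retry(fun, attempts \\ 3, delay_ms \\ 500)
  def with_retry(fun, 1, _delay_ms), do: fun.()

  def with_retry(fun, attempts, delay_ms) do
    case fun.() do
      {:error, %HTTPoison.Error{reason: :timeout}} ->
        Process.sleep(delay_ms)
        with_retry(fun, attempts - 1, delay_ms)

      other ->
        other
    end
  end
end

# Retry.with_retry(fn -> K8s.Client.run(operation, :default) end)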

JIT HTTP Caching

Cheaped out and skipped caching for the HTTP Provider. Important.

Rename K8s.Conf -> K8s.Conn

K8s.Conf handles configuration, connection, and authentication details for a cluster by parsing a k8s config file. The name made sense, but now that there is a K8s.Config module for working with the config of the k8s library, it's a little confusing.

Consider one of:

  • renaming K8s.Conf -> K8s.Conn to illustrate that it encapsulates the connection to a cluster
  • renaming / moving K8s.Config to another module

K8s.Client.Runner.When

Create a when runner that dispatches a function when a condition is met.

Spawns a task that runs until the condition returns true | error, then dispatches the function.

K8s.Conn atomizes cluster names

Currently cluster names are atomized in K8s.Conn. While this isn't a big deal for most use cases, in the event that many different cluster names are used, it could lead to bloating the atom table.

Cluster names were originally atoms because they behaved like atoms: they were used as a reference to look up cluster connection details. Now that k8s is connection-oriented, the cluster name is only used to look up middleware to apply. Middleware is stored in a map (cluster name => middleware stack) in an Agent, so changing to string-based names should not be an issue.

Broken links in Readme / Hexdocs

First off, awesome library. I've been digging into it the past couple of hours and I've really enjoyed it. I'll be tearing out the jury-rigged K8s API that I've made and replacing it with this.

A small note: the links to the Certificate and Token example files are broken in the readme + hex docs. I should be able to put together a PR to fix them in the next few days if you'd like.

Add'l configuration options

Add additional configuration options to allow for compile time vs runtime configuration

  • conf: :service_account
  • conf_opts: []
  • spec: "path/to/spec"

Support retry_on_conflict for update api

Since almost all correct update usage should check for 'object conflict' failures and retry the get & update, it would be nice to have an option or new function supporting this retry/update pattern.
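
A sketch of the get & update loop such an option would wrap (the 409 error shape is illustrative, not k8s's actual error struct):

def update_with_retry(cluster, get_op, update_fun, attempts \\ 3) do
  with {:ok, current} <- K8s.Client.run(get_op, cluster),
       update_op = K8s.Client.update(update_fun.(current)),
       {:ok, updated} <- K8s.Client.run(update_op, cluster) do
    {:ok, updated}
  else
    # hypothetical conflict shape: re-fetch and retry on HTTP 409
    {:error, %{status: 409}} when attempts > 1 ->
      update_with_retry(cluster, get_op, update_fun, attempts - 1)

    error ->
      error
  end
end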

Refactor auth provider, add `supports?/1`

  • Add a supports?/1 callback to auth providers - the callback should take a config and return true/false depending on whether it can support that config
  • remove nil return options from Auth.Certificate
  • refactor the provider selector to check supports?/1
  • raise or return {:error, :unsupported_config}

Patch operations fail with a 415 error

Patch operations to the K8s API require a slightly different header from other operations, that is: Content-Type: application/merge-patch+json

Unless that header is present, the request will fail with: 415: Unsupported Media Type

Further details can be found in: kubernetes/kubernetes#61103

Looks like this fix would be located in: https://github.com/coryodaniel/k8s/blob/master/lib/k8s/client/http_provider.ex#L90 unless I'm misunderstanding something about the HTTP client. I can pick this one up shortly.
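
The gist of the fix, sketched (function name and placement are illustrative):

# in the HTTP provider: pick the Content-Type based on the request method
defp content_type(:patch), do: "application/merge-patch+json"
defp content_type(_method), do: "application/json"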

Response middleware

A lot of the middleware functionality has been developed and stubbed already. Should be straightforward to integrate.

See Feature #46
See Issue #42

K8s.Selector

Add module to ease adding selectors to client operations.

"v1"
|> K8s.Client.get(:pods)
|> K8s.Selector.labels(...)
|> K8s.Selector.expressions(...)
|> K8s.Client.run(:default)

This would require a label_selector key on the operation struct. When running an operation, this key would be used to build the label selector, overwritten by any selector supplied directly to the run/N function.

Rediscover interval

Clusters are a changin'. Discovery should have a rediscover interval to detect changes to CRDs, etc.

Replace 'docker-for-desktop' context

  • Change dev config to target k8s-elixir-client as a kube context instead of docker-for-desktop
  • update contribution docs for how to set up dev/testing

Error handling in Cluster, Conf, and Discovery

Error handling is a little rough when configuration isn't valid - either when configuration values don't exist in the kubeconfig or when the credentials don't have permissions.

Consider breaking cluster registration out into its own module.

K8s.Client.WatchProxy

The abstraction is a bit leaky when streaming the httpoison responses from watch operations. It might be nice to add a "proxy" module that receives async responses and dispatches events as they are received.

opts = [dispatch_to: MyHandlerModule, watch_count: :infinity] # default to 1
K8s.Client.Runner.Watch.run(op, cluster, opts, http_opts)

MyHandlerModule would receive add, modify, and delete events from the watch. This shifts a good portion of functionality from Bonny to this package.
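
A sketch of such a handler (the callback names simply mirror the event types; the behaviour itself is hypothetical):

defmodule MyHandlerModule do
  def add(resource), do: IO.inspect(resource, label: "added")
  def modify(resource), do: IO.inspect(resource, label: "modified")
  def delete(resource), do: IO.inspect(resource, label: "deleted")
end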

Consider an option for continuous watching.

K8s.Servers.Controller module

defmodule MyController do
  use K8s.Servers.Controller, cluster: :default
end

Shorthand for use K8s.Servers.Watcher and use K8s.Servers.Reconciler

How to handle both behaviours requiring operation/0?

K8s.Servers.Reconciler module

defmodule MyReconciler do
  use K8s.Servers.Reconciler, cluster: :default
  
  @doc "List operation to reconcile"
  @impl true
  def operation() do
    K8s.Client.list("v1", :pods, namespaces: :all)
  end

  @doc "Resource to reconcile"
  @impl true
  def reconcile(resource), do: :ok
end

K8s.Servers.Watcher module

Behaviour / use macro for creating a watcher.

defmodule MyWatcher do
  use K8s.Servers.Watcher, cluster: :default
  
  @doc "Operation to watch"
  @impl true
  def operation() do
    K8s.Client.list("v1", :pods, namespaces: :all)
  end

  @doc "Resource was added"
  @impl true
  def add(resource), do: :ok

  @doc "Resource was modified"
  @impl true
  def modify(resource), do: :ok

  @doc "Resource was deleted"
  @impl true
  def delete(resource), do: :ok
end

`%Operation{}` should include query params

It seems to me that it is strange for the struct to encapsulate body data but not query params. Not sure if you have a reason for doing what you're currently doing, but I would think it would make more sense for %Operation{} to be a complete specification of the request that you're about to make.

My app (that I'm trying to port to use k8s) uses Dex to get tokens via oauth2. Consider the case where my token expires. If the operation struct encapsulates everything including query params, I can make the user re-authenticate, and then I can replay the operation as is. If the operation doesn't encapsulate the query params, I have to fold it into another data structure that does.
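
Sketched with a hypothetical :query_params key on %Operation{}:

operation = K8s.Client.list("apps/v1", :deployment, namespace: "default")
operation = %{operation | query_params: %{labelSelector: "app=nginx"}}

# after re-authenticating, the exact same operation can be replayed
{:ok, deployments} = K8s.Client.run(operation, :prod, token: fresh_token)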

Ship a K8s.Client mock provider

Refactor the test mock provider into lib/ and make it a bit more extendable so it is easy to test libraries using k8s w/o a kubernetes cluster.

Either a mock of base.ex or http_provider.ex

K8s.Servers.Scheduler module

Kubernetes scheduler server.

defmodule MyScheduler do
  use K8s.Servers.Scheduler, cluster: :default, name: "scheduler-name"
  
  @doc "Given a pod and list of nodes, returns node to schedule pod on"
  @impl true
  def schedule(pod, nodes) do
    nodes |> List.first
  end

  @doc """
  Returns a list of nodes that can be scheduled on. This will be passed as the second arg in `schedule/2`
  
  Default impl will be all nodes.

  """
  @impl true # overridable
  def nodes() do
    K8s.Client.list("v1", :nodes, params: "my-node-label-selector")
  end
end

JIT API Discovery

JIT /api and /apis discovery per comment

  • the first time a request is made for a given gvk, it is discovered and the naming options and scope are cached (see the sketch after this list)
  • http path is generated as it is now, with the cached details
  • subsequent requests build urls from data in the cache
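
A minimal sketch of that flow, assuming an ETS table keyed by group version and kind (all names illustrative):

defmodule GVKCache do
  @table :k8s_gvk_cache

  def init, do: :ets.new(@table, [:named_table, :set, :public, read_concurrency: true])

  # Returns cached naming/scope details, running discovery on the first request only
  def fetch(group_version, kind, discover_fun) do
    case :ets.lookup(@table, {group_version, kind}) do
      [{_key, details}] ->
        {:ok, details}

      [] ->
        with {:ok, details} <- discover_fun.(group_version, kind) do
          :ets.insert(@table, {{group_version, kind}, details})
          {:ok, details}
        end
    end
  end
end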

Initial thoughts
Evaluate if there is a non-discovery route that doesn't require codegen and allows for loose naming of resource kinds and scope detection (namespaced/cluster).

Discovery exists to support the various forms of resource names being passed to K8s.Client and to format URLs with respect to resource scope, but it makes development and testing a bit difficult as it requires stubbing out HTTP requests and the discovery phase (see the File driver).

This could also serve as a ticket to make testing not as painful :)
