
k8s's Introduction

A simple Go client for Kubernetes


A slimmed-down Go client generated using Kubernetes' protocol buffer support. This package behaves similarly to the official Kubernetes Go client, but imports only two external dependencies.

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/ericchiang/k8s"
    corev1 "github.com/ericchiang/k8s/apis/core/v1"
)

func main() {
    client, err := k8s.NewInClusterClient()
    if err != nil {
        log.Fatal(err)
    }

    var nodes corev1.NodeList
    if err := client.List(context.Background(), "", &nodes); err != nil {
        log.Fatal(err)
    }
    for _, node := range nodes.Items {
        fmt.Printf("name=%q schedulable=%t\n", *node.Metadata.Name, !*node.Spec.Unschedulable)
    }
}

Requirements

Usage

Create, update, delete

The type of the object passed to Create, Update, and Delete determines the resource being acted on.

configMap := &corev1.ConfigMap{
    Metadata: &metav1.ObjectMeta{
        Name:      k8s.String("my-configmap"),
        Namespace: k8s.String("my-namespace"),
    },
    Data: map[string]string{"hello": "world"},
}

if err := client.Create(ctx, configMap); err != nil {
    // handle error
}

configMap.Data["hello"] = "kubernetes"

if err := client.Update(ctx, configMap); err != nil {
    // handle error
}

if err := client.Delete(ctx, configMap); err != nil {
    // handle error
}

Get, list, watch

Getting a resource requires providing a namespace (for namespaced objects) and a name.

// Get the "cluster-info" configmap from the "kube-public" namespace
var configMap corev1.ConfigMap
err := client.Get(ctx, "kube-public", "cluster-info", &configMap)

When performing a list operation, the namespace to list or watch is also required.

// Pods from the "custom-namespace"
var pods corev1.PodList
err := client.List(ctx, "custom-namespace", &pods)

A special value AllNamespaces indicates that the list or watch should be performed on all cluster resources.

// Pods in all namespaces
var pods corev1.PodList
err := client.List(ctx, k8s.AllNamespaces, &pods)

Watches require an example type to determine the resource they're watching. Watch returns a type that can be used to receive a stream of events. These events include resources of the watched kind and the type of the event (added, modified, deleted).

// Watch configmaps in the "kube-system" namespace
var configMap corev1.ConfigMap
watcher, err := client.Watch(ctx, "kube-system", &configMap)
if err != nil {
    // handle error
}
defer watcher.Close()

for {
    cm := new(corev1.ConfigMap)
    eventType, err := watcher.Next(cm)
    if err != nil {
        // watcher encountered an error; exit or create a new watcher
    }
    fmt.Println(eventType, *cm.Metadata.Name)
}

Both in-cluster and out-of-cluster clients are initialized with a primary namespace. This is the recommended value to use when listing or watching.

client, err := k8s.NewInClusterClient()
if err != nil {
    // handle error
}

// List pods in the namespace the client is running in.
var pods corev1.PodList
err := client.List(ctx, client.Namespace, &pods)

Custom resources

Client operations support user defined resources, such as resources provided by CustomResourceDefinitions and aggregated API servers. To use a custom resource, define an equivalent Go struct then register it with the k8s package. By default the client will use JSON serialization when encoding and decoding custom resources.

import (
    "github.com/ericchiang/k8s"
    metav1 "github.com/ericchiang/k8s/apis/meta/v1"
)

type MyResource struct {
    Metadata *metav1.ObjectMeta `json:"metadata"`
    Foo      string             `json:"foo"`
    Bar      int                `json:"bar"`
}

// Required for MyResource to implement k8s.Resource
func (m *MyResource) GetMetadata() *metav1.ObjectMeta {
    return m.Metadata
}

type MyResourceList struct {
    Metadata *metav1.ListMeta `json:"metadata"`
    Items    []MyResource     `json:"items"`
}

// Required for MyResourceList to implement k8s.ResourceList
func (m *MyResourceList) GetMetadata() *metav1.ListMeta {
    return m.Metadata
}

func init() {
    // Register resources with the k8s package.
    k8s.Register("resource.example.com", "v1", "myresources", true, &MyResource{})
    k8s.RegisterList("resource.example.com", "v1", "myresources", true, &MyResourceList{})
}

Once registered, the library can use the custom resources like any other.

func do(ctx context.Context, client *k8s.Client, namespace string) error {
    r := &MyResource{
        Metadata: &metav1.ObjectMeta{
            Name:      k8s.String("my-custom-resource"),
            Namespace: &namespace,
        },
        Foo: "hello, world!",
        Bar: 42,
    }
    if err := client.Create(ctx, r); err != nil {
        return fmt.Errorf("create: %v", err)
    }
    r.Bar = -8
    if err := client.Update(ctx, r); err != nil {
        return fmt.Errorf("update: %v", err)
    }
    if err := client.Delete(ctx, r); err != nil {
        return fmt.Errorf("delete: %v", err)
    }
    return nil
}

If the custom type implements proto.Message, the client will prefer protobuf when encoding and decoding the type.

Label selectors

Label selectors can be provided to any list operation.

l := new(k8s.LabelSelector)
l.Eq("tier", "production")
l.In("app", "database", "frontend")

var pods corev1.PodList
err := client.List(ctx, "custom-namespace", &pods, l.Selector())

Subresources

Access subresources using the Subresource option.

err := client.Update(ctx, &pod, k8s.Subresource("status"))

Creating out-of-cluster clients

Out-of-cluster clients can be constructed by either creating an http.Client manually or parsing a Config object. The following is an example of creating a client from a kubeconfig:

import (
    "io/ioutil"

    "github.com/ericchiang/k8s"

    "github.com/ghodss/yaml"
)

// loadClient parses a kubeconfig from a file and returns a Kubernetes
// client. It does not support extensions or client auth providers.
func loadClient(kubeconfigPath string) (*k8s.Client, error) {
    data, err := ioutil.ReadFile(kubeconfigPath)
    if err != nil {
        return nil, fmt.Errorf("read kubeconfig: %v", err)
    }

    // Unmarshal YAML into a Kubernetes config object.
    var config k8s.Config
    if err := yaml.Unmarshal(data, &config); err != nil {
        return nil, fmt.Errorf("unmarshal kubeconfig: %v", err)
    }
    return k8s.NewClient(&config)
}

Errors

Errors returned by the Kubernetes API are formatted as unversioned.Status objects and surfaced by clients as *k8s.APIError values. Programs that need to inspect error codes or failure details can use a type assertion to access this information.

// createConfigMap creates a configmap in the client's default namespace
// but does not return an error if a configmap of the same name already
// exists.
func createConfigMap(client *k8s.Client, name string, values map[string]string) error {
    cm := &v1.ConfigMap{
        Metadata: &metav1.ObjectMeta{
            Name:      &name,
            Namespace: &client.Namespace,
        },
        Data: values,
    }

    err := client.Create(context.TODO(), cm)

    // If an HTTP error was returned by the API server, it will be of type
    // *k8s.APIError. This can be used to inspect the status code.
    if apiErr, ok := err.(*k8s.APIError); ok {
        // Resource already exists. Carry on.
        if apiErr.Code == http.StatusConflict {
            return nil
        }
    }
    if err != nil {
        return fmt.Errorf("create configmap: %v", err)
    }
    return nil
}

k8s's People

Contributors

chesleybrown, ericchiang, exekias, hongshaoyang, hoskeri, jhaynie, jorritsalverda, jsoriano, lukeshu, mikesplain, msharbaji, pingles, tombooth, ulexus

k8s's Issues

Rename ThirdPartyResources to CustomResources

ThirdPartyResource was the old name, as far as I remember, but it has been renamed to CustomResourceDefinition. Perhaps it would be useful to rename it in this client as well, or add the new name as an alias.

cannot decode json payload into protobuf object

I had a custom resource definition named ServiceMonitor, which was created by prometheus-operator (https://github.com/coreos/prometheus-operator). And I created struct definitions as follows:

type ServiceEndpoint struct {
	Port *string `json:"port"`
}

type ServiceNamespaceSelector struct {
	MatchNames []string `json:"matchNames"`
}

type ServiceSelector struct {
	MatchLabels map[string]string `json:"matchLabels"`
}

type ServiceMonitorSpec struct {
	Endpoints         []*ServiceEndpoint        `json:"endpoints"`
	NamespaceSelector *ServiceNamespaceSelector `json:"namespaceSelector"`
	Selector          *ServiceSelector          `json:"selector"`
}

type ServiceMonitor struct {
	Metadata *k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta `json:"metadata"`
	Spec     *ServiceMonitorSpec                              `json:"spec,omitempty"`
}

Then I called the client Create() function, it returned an error:
decode error status 404: cannot decode json payload into protobuf object
How can I solve this issue?

No default namespace for resource

When I attempt to create a Job without Namespace: k8s.String("default") specified in the Metadata I get a no resource namespace provided error. Is this the expected behavior?

Container compute resources cannot be unmarshalled

Thank you for this package.

In an attempt to unmarshal a Deployment, I find compute resources such as cpu and memory limits do not unmarshal.

Backing information

  • package version: v0.3.0
  • Runtime environment: centos7 container

Input

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: the-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: the-app
  template:
    metadata:
      labels:
        name: the-app
      name: the-app
    spec:
      containers:
      - name: the-app
        image: docker-registry.example.com/acme/the-app:develop
        ports:
        - containerPort: 5586
          name: server-port
        resources:
          limits:
            cpu: 500m
            memory: 1500Mi
          requests:
            cpu: 500m
            memory: 1500Mi

Desired action

Attempt to unmarshal this resource, where data is a byte slice backing the input

var t v1beta1.Deployment
if err := json.Unmarshal(data, &t); err != nil {
    log.Fatal(err)
}
...

Result

2017/05/06 13:31:38 json: cannot unmarshal string into Go struct field ResourceRequirements.limits of type resource.Quantity

Workaround

Remove the compute resources

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: the-app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: the-app
  template:
    metadata:
      labels:
        name: the-app
      name: the-app
    spec:
      containers:
      - name: the-app
        image: docker-registry.example.com/acme/the-app:develop
        ports:
        - containerPort: 5586
          name: server-port

This input can be successfully unmarshaled into a Deployment.

Clean up generation scripts

They run great on my machine but could probably be expanded. Consider using Makefiles for things like downloading the Kubernetes releases.

proto: wrong wireType = 6 for field ServiceAccountName while watching pod

Version I used:

"revision": "5803ed75e31fc1998b5f781ac08e22ff985c3f8f",
"revisionTime": "2017-06-29T16:56:01Z"

kubectl version

Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.9", GitCommit:"19fe91923d584c30bd6db5c5a21e9f0d5f742de8", GitTreeState:"clean", BuildDate:"2017-10-19T17:09:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.9", GitCommit:"19fe91923d584c30bd6db5c5a21e9f0d5f742de8", GitTreeState:"clean", BuildDate:"2017-10-19T16:55:06Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

InsecureSkipTLSVerify not implemented

In the config being passed to k8s.NewClient, I set the flag InsecureSkipTLSVerify to true for a particular cluster. But any request made to this cluster using the client returned by k8s.NewClient gives x509: certificate signed by unknown authority. I guess this is happening because the flag is not being respected in the library. I went through the source code and confirmed the same. Any help?

discovery support

Support the ability to discover what API groups are enabled on an API server, and what version of Kubernetes is being used.

Export the Options interface QueryParam method and create a FieldSelector copy of LabelSelector.

I am trying to use WatchConfigMaps to target a single named config map at the moment. I can't find any documentation on label selectors being able to target fields -- so my assumption is that this is not possible without a fieldSelector. I have to be missing something lol.

FieldSelector also provides access to the downstream API if I remember correctly.

Also there are a lot of handy metadata name constants here bringing some of these over into one of the packages would be very useful to figure out how to construct queries. Documentation on FieldSelector usage is quite scarce.

Some structs should be inlined.

LocalObjectReference in v1.SecretKeySelector should have ,inline in its json tag.
This is likely a wrong setting of protoc.

As a result, creating a v1.EnvVar in a container with an env variable drawn from a secret does not work.

Watcher on Service resource not catching annotation updates

We are using the Watch function on services with something like this:
s := new(corev1.Service)
eventType, err := serviceWatcher.Next(s)

But we have noticed it doesn't trigger an event for annotation changes to a service resource, when using it for pods/deployments an annotation change does trigger an update.

Are we doing something wrong or this a limitation?

1.6 support

The proto file locations are dramatically different from 1.3, 1.4 and 1.5. Might need to develop some tooling for extracting the zip files.

federation/apis/federation/v1beta1/generated.proto
pkg/kubelet/api/v1alpha1/runtime/api.proto
pkg/apis/batch/v1/generated.proto
pkg/apis/batch/v2alpha1/generated.proto
pkg/apis/authorization/v1/generated.proto
pkg/apis/authorization/v1beta1/generated.proto
pkg/apis/settings/v1alpha1/generated.proto
pkg/apis/rbac/v1alpha1/generated.proto
pkg/apis/rbac/v1beta1/generated.proto
pkg/apis/policy/v1beta1/generated.proto
pkg/apis/authentication/v1/generated.proto
pkg/apis/authentication/v1beta1/generated.proto
pkg/apis/apps/v1beta1/generated.proto
pkg/apis/extensions/v1beta1/generated.proto
pkg/apis/imagepolicy/v1alpha1/generated.proto
pkg/apis/autoscaling/v1/generated.proto
pkg/apis/autoscaling/v2alpha1/generated.proto
pkg/apis/certificates/v1beta1/generated.proto
pkg/apis/storage/v1/generated.proto
pkg/apis/storage/v1beta1/generated.proto
pkg/api/v1/generated.proto
staging/src/k8s.io/apiserver/pkg/apis/example/v1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/batch/v1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/batch/v2alpha1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/authorization/v1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/authorization/v1beta1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/settings/v1alpha1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/rbac/v1alpha1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/rbac/v1beta1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/policy/v1beta1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/authentication/v1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/authentication/v1beta1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/apps/v1beta1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/extensions/v1beta1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/autoscaling/v1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/autoscaling/v2alpha1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/certificates/v1beta1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/storage/v1/generated.proto
staging/src/k8s.io/client-go/pkg/apis/storage/v1beta1/generated.proto
staging/src/k8s.io/client-go/pkg/api/v1/generated.proto
staging/src/k8s.io/apimachinery/pkg/runtime/schema/generated.proto
staging/src/k8s.io/apimachinery/pkg/runtime/generated.proto
staging/src/k8s.io/apimachinery/pkg/util/intstr/generated.proto
staging/src/k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto
staging/src/k8s.io/apimachinery/pkg/api/resource/generated.proto
third_party/forked/etcd237/wal/walpb/record.proto
third_party/forked/etcd221/wal/walpb/record.proto
third_party/protobuf/google/protobuf/descriptor.proto
third_party/protobuf/google/protobuf/compiler/plugin.proto

Add leader election examples

Probably with the use of an outside package. Are there any reasonable lock-based leader election implementations?

Consider adding yaml tags on Config struct

Currently there are just json tags. This is not an issue with github.com/ghodss/yaml, since it converts yaml -> json before unmarshaling, but if you're not using that package you will find things don't unmarshal correctly.

Watchers should allow a single resource

I want to watch for some resources (pods) and retrieve their IPs or change (e.g. delete/recreate). The simplest way to hook into this lifecycle is to group the pods in a Service and watch that endpoint.
Unfortunately, this library does not expose a watch interface for a single resource by name.

client.CoreV1().WatchEndpoints is hard-coding the resource name as "" here:

url := c.client.urlFor("", "v1", namespace, "endpoints", "", options...)

If I am not missing the right way to do that and it is in fact possible, can this be extended to pass the resource name in watch requests?

The equivalent HTTP API call is:

curl -sSk -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/watch/namespaces/default/endpoints/rtpengine

where rtpengine is the endpoint name.
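For illustration, the per-resource watch path from that curl command can be assembled with the standard library alone; the host and names below are placeholders, not values the k8s package exposes:

```go
package main

import (
	"fmt"
	"net/url"
	"path"
)

// watchURL builds the per-resource watch path described above. A real
// client would derive the host and credentials from its configuration.
func watchURL(host, namespace, resource, name string) string {
	u := url.URL{
		Scheme: "https",
		Host:   host,
		Path:   path.Join("/api/v1/watch/namespaces", namespace, resource, name),
	}
	return u.String()
}

func main() {
	fmt.Println(watchURL("10.0.0.1:443", "default", "endpoints", "rtpengine"))
}
```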

context doesn't have a user

I'm trying to connect externally to a minikube cluster using

func loadClient(kubeconfigPath string) (*k8s.Client, error) {
	data, err := ioutil.ReadFile(kubeconfigPath)
	if err != nil {
		return nil, fmt.Errorf("read kubeconfig: %v", err)
	}

	// Unmarshal YAML into a Kubernetes config object.
	var config k8s.Config
	if err := yaml.Unmarshal(data, &config); err != nil {
		return nil, fmt.Errorf("unmarshal kubeconfig: %v", err)
	}
	return k8s.NewClient(&config)
}

I'm getting an error: 2017/10/14 19:21:17 context doesn't have a user

Output shows I do have a user configured though:

kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO   NAMESPACE
*         minikube   minikube   minikube

kubectl config current-context
minikube

Seems like I shouldn't be getting this error with that info, unless there's some part I'm missing.

Errors can sometimes be non-protobuf encoded

Reported by a user:

Client got an RBAC denial with an in-cluster client. The error returned was:

decode error status: payload is not a kubernetes protobuf object

Try to reproduce. Maybe an upstream issue?

Better documentation

With this most recent rewrite I am having a difficult time digging in to see how to do basic things, e.g.: how do I scale a deployment? How do I update an image for a deployment? I have been able to pull out a pod list, iterate to find the container statuses where images are listed, alter those structures, and attempt an update. No errors, but the images aren't changing. I haven't found any clue from the code how you could possibly scale a deployment either. I want to use this library, with its fewer dependencies, but it seems very unfriendly for non-experts of Kubernetes now. Can you help me with these specific actions, and enhance the documentation or point me in the right place?

I'm sorry if an Issue is the incorrect place for this kind of comment. I just don't know where else to go.

Add support for DeletionPropagation

When deleting a job the spawned pods persist after the job has been deleted. The kubectl command as well as the API does support removing the related pods with one delete command using DeletionPropagation.


Trying to achieve this behaviour with k8s.QueryParam() leads to an error:

[...]
        job := &batchv1.Job{}

        if err := client.Get(ctx, "default", name, job); err != nil {
                t.Fatalf("client.Get failed: %s", err)
        }

        if err := client.Delete(ctx, job, k8s.QueryParam("propagationPolicy", "Background")); err != nil {
                t.Fatalf("client.Delete failed: %s", err)
        }
[...]

client.Delete failed: kubernetes api: Failure 400 converting (url.Values).[]string.[]string to (v1.DeleteOptions).*v1.DeletionPropagation.v1.DeletionPropagation: couldn't copy '[]string' into 'v1.DeletionPropagation'; didn't understand types


Also passing *"github.com/ericchiang/k8s/apis/meta/v1".DeleteOptions as an k8s.Option does not work:

cannot use *"github.com/ericchiang/k8s/apis/meta/v1".DeleteOptions literal (type *"github.com/ericchiang/k8s/apis/meta/v1".DeleteOptions) as type k8s.Option in argument to client.Client.Delete: *"github.com/ericchiang/k8s/apis/meta/v1".DeleteOptions does not implement k8s.Option (missing k8s.updateURL method)


In order to delete the pods using the API it seems there must be a JSON payload in the DELETE request: kubernetes/kubernetes#20902 (comment)


So far it seems to me the only current way to clean up the pods as well is to collect them before deleting the job and delete them in separate requests, which would be far easier with DeletionPropagation.
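As a sketch of the workaround the linked comment describes, a DELETE request carrying a DeleteOptions JSON body can be built with the standard library alone; the URL and struct here are illustrative, not part of the k8s package:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// deleteOptions mirrors the subset of the Kubernetes DeleteOptions object
// needed to request background propagation.
type deleteOptions struct {
	Kind              string `json:"kind"`
	APIVersion        string `json:"apiVersion"`
	PropagationPolicy string `json:"propagationPolicy"`
}

// newDeleteRequest builds a DELETE request whose body carries the
// propagation policy, since the API server reads it from the payload.
func newDeleteRequest(url string) (*http.Request, error) {
	body, err := json.Marshal(deleteOptions{
		Kind:              "DeleteOptions",
		APIVersion:        "v1",
		PropagationPolicy: "Background",
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodDelete, url, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := newDeleteRequest("https://example.invalid/apis/batch/v1/namespaces/default/jobs/my-job")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.Header.Get("Content-Type"))
}
```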

List from cache behavior

List operations can set resourceVersion=0 to allow the API server to prefer its local cache instead of guaranteeing the freshest results. Provide a convenience for this and maybe even make it the default. Most apps don't care about having the absolute latest data.

Maybe something like this?

// DisableCaching, when provided to a list call, ensures that the API server doesn't refer
// to its internal cache of resources and queries for the freshest resources from its
// backing storage.
func DisableCaching() Option

Naming could probably be worked out. Maybe DisableCache, NoCaching, SkipCache, LatestResults, etc.

Pros:

  • Most apps don't care about having the absolute latest data.
  • Performance improvement.
  • Most apps should use this anyway.

Cons:

  • Extremely surprising if you write apps requiring the latest data.
  • Have to provide to every API call (maybe a client setting as well?)
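Whatever the option ends up being called, its effect on the request URL can be sketched with the standard library (the URL below is illustrative):

```go
package main

import (
	"fmt"
	"net/url"
)

// withResourceVersionZero appends resourceVersion=0 to a list URL, which
// lets the API server serve the response from its cache.
func withResourceVersionZero(rawURL string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Set("resourceVersion", "0")
	u.RawQuery = q.Encode()
	return u.String(), nil
}

func main() {
	u, err := withResourceVersionZero("https://10.0.0.1/api/v1/namespaces/default/pods")
	if err != nil {
		panic(err)
	}
	fmt.Println(u)
}
```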

Parsing kubeconfig's certificate-authority-data fails

certificate-authority-data (k8s.Cluster.CertificateAuthorityData) is normally a base64 string, but the current code fails when trying to unmarshal it into []uint8:

yaml: unmarshal errors:
  line 6: cannot unmarshal !!str `LS0tLS1...` into []uint8

Subresource updates

Maybe something like this?

func (c *CoreV1) UpdatePodSubresource(ctx context.Context, obj *apiv1.Pod, subresource string) (*apiv1.Pod, error)

How should a watcher be terminated?

It seems that watcher.Next() is blocking. I want to be able to run a watch in a goroutine and stop the watcher via a channel:

func watch(watcher *k8s.CoreV1PodWatcher,  stop <-chan struct{}) {

	for {
		select {

		default:
			if event, pod, err := watcher.Next(); err != nil {
				k.logger.Errorf("Error getting next watch: %s", err)
			} else {
				// Do stuff here
			}

		case <-stop:
			watcher.Close()
			return
		}
	}
}

//run watcher in goroutine
go watch(podWatcher, stop)

However, since watcher.Next() is blocking, it is possible that the case for receiving from stop may never be called. Is there a way to terminate a watcher?
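One common answer, sketched here with a stand-in channel instead of the real watcher type, is to move the blocking read into its own goroutine that feeds a channel, and select on that channel plus the stop channel; closing the watcher from the stopping side is what would unblock a pending Next call:

```go
package main

import "fmt"

// watch drains events until the stop channel is closed. The events channel
// stands in for a goroutine that calls the watcher's blocking Next() and
// forwards each result; closing the real watcher would end that goroutine.
func watch(events <-chan string, stop <-chan struct{}) {
	for {
		select {
		case ev, ok := <-events:
			if !ok {
				return
			}
			fmt.Println("event:", ev)
		case <-stop:
			// A real implementation would call watcher.Close() here to
			// unblock the goroutine stuck in Next().
			return
		}
	}
}

func main() {
	events := make(chan string)
	stop := make(chan struct{})
	done := make(chan struct{})

	go func() {
		watch(events, stop)
		close(done)
	}()

	events <- "ADDED" // delivered because watch is selecting on events
	close(stop)       // signals watch to return
	<-done
	fmt.Println("watcher terminated")
}
```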

User is "system:anonymous" when using out of cluster client with GKE

Using the following code to create an out of cluster client:

	data, err := ioutil.ReadFile(utils.HomeDir() + "/.kube/config")
	if err != nil {
		panic(err)
	}

	// Unmarshal YAML into a Kubernetes config object.
	var config k8s.Config
	if err := yaml.Unmarshal(data, &config); err != nil {
		panic(err)
	}
	client, err := k8s.NewClient(&config)

I am getting the following error when I try to list pods in a namespace:

kubernetes api: Failure 403 pods is forbidden: User "system:anonymous" cannot list pods in the namespace "ns": No policy matched.

(I'm also running Kubernetes on a GKE cluster)

I've confirmed that the config in use is the same as the one that kubectl uses. I don't have any problems running kubectl.

Is there something extra I need to do in order to authenticate?

As mentioned in #77, I have chosen this library in that hope that it will abstract away a lot of the lower level Kubernetes functions and allow me to easily execute some simple commands on my cluster. I've raised this as an issue as I'm unsure where else to go, but it could be an error on my part.

Untyped watches

For many caching frameworks, it'd be helpful to watch an untyped value rather than a concrete type like v1.Pod.

Consider adding a generalized watch:

type Watcher struct {}

func (w *Watcher) Close() error {}
func (w *Watcher) Next() (*versioned.Event, *runtime.Unknown, error)

func (c *Client) Watch(ctx context.Context, apiVersion, namespace string, options ...Option) (*Watcher, error)

This would allow packages to write generalized caching, then deserialize into concrete types later.

type cacher struct {
    // method to watch a specific resource
    watch func() (*k8s.Watcher, error)

    // in memory cache or something.
}

// List lists a set of unknown
func (c *cacher) List(ctx context.Context) ([]*runtime.Unknown, error) {}

type podsCacher struct {
    cacher *cacher
}

func (c *podsCacher) ListPods(ctx context.Context) ([]*v1.Pod, error) {
    // underlying cacher holds all the caching logic.
    list, err := c.cacher.List(ctx)
    if err != nil {
        return nil, err
    }

    pods := make([]*v1.Pod, len(list))
    for i, obj := range list {
        // might check obj.TypeMeta
        var p v1.Pod
        if err := proto.Unmarshal(obj.Raw, &p); err != nil {
            return nil, err
        }
        pods[i] = &p
    }
    return pods, nil
}

cc @brancz @fabxc

Document supported Kubernetes versions/ dependency management

Thank you Eric for this great and simplified K8s client.

Before digging deeper into it, what is your recommendation for managing dependencies on your k8s package when using it in larger Go projects? Also, are there any support guidelines around which Kubernetes versions are expected to work/not work?

Just want to make sure I get dep management right before falling off at a later stage...

cannot decode json payload into protobuf object

Got the above error when trying to Create a custom resource.
I traced the problem to v1.Status not implementing json.Unmarshaler.

This is what happens:

  • I called Create. Since my type does not implement the protobuf interface, it marshals to JSON.
  • I made a mistake in naming, so Kubernetes returns an error in the form of a Status object encoded in JSON (but this could be any normal error that Kubernetes can return).
  • The client detects the error and tries to unmarshal it into *v1.Status.
  • Since Status implements the protobuf interface but not json.Unmarshaler, the unmarshal function does not accept the JSON body.

The problem was gone when I excluded *v1.Status from the check in unmarshal.

	if isPBMsg {
		if _, ok := i.(*metav1.Status); ok {
			// only decode into JSON of a protobuf message if the type
			// explicitly implements json.Unmarshaler
			if _, ok := i.(json.Unmarshaler); !ok {
				return errors.New("cannot decode json payload into protobuf object: " + string(data) + ", " + reflect.TypeOf(i).String())
			}
		}
	}

Readme example

I'm unable to follow the example in the readme and see pods in namespaces without an error

The line that errors is 'pods, err := core.ListPods(ctx, k8s.AllNamespaces)' with 'undefined: core' and 'undefined: ctx'

This is using the latest version, with out of cluster auth, and the listing of nodes works just fine.

thirdpartyresource support

EDIT: this was added but needs more docs and examples. See initial docs here https://godoc.org/github.com/ericchiang/k8s#ThirdPartyResources

Maybe a reflect based implementation?

// Object is an instance of a Kubernetes resource.
type Object interface {
    GetMetadata() *v1.ObjectMeta
}

func RegisterThirdPartyResource(apiGroup, apiVersion, resourcePlural string, obj Object) {
    // Use reflect to register the underlying type of `obj` on some global map.
}

Then clients could do:

type OAuth2Client struct {
    *v1.TypeMeta
    Metadata *v1.ObjectMeta `json:"metadata"`

    Foo int `json:"foo"`
}

func (o *OAuth2Client) GetMetadata() *v1.ObjectMeta { return o.Metadata }

func init() {
    k8s.RegisterThirdPartyResource("foo.example.com", "v1", "oauth2clients", &OAuth2Client{})
}

func main() {
    client, err := k8s.InClusterClient()
    if err != nil {
        log.Fatal(err)
    }
    oauth2Client := &OAuth2Client{
        Metadata: &v1.ObjectMeta{
            Name: "bar",
        },
        Foo: 2,
    }

    // Response is unmarshaled into the oauth2Client argument.
    if err := client.ThirdPartyResources().Create(context.Background(), oauth2Client); err != nil {
        log.Fatal(err)
    }
}

versioned clients

This can either be done through git tags or sub-directories. Git tags would be easier, but means users can't use multiple versions of a client based off discovery information.

Maybe just generate types for every version of a Kubernetes API version ever? e.g. have both alpha and beta versions?

Explore OpenShift support

Notes:

  • OpenShift publishes all their proto definitions here[0].
  • Wire compatible?
  • Must not be imported by default for users of the k8s package.
  • Need to expose more methods on the k8s.Client so other packages can use it?
  • Figure out testing strategy (is there a minikube equivalent?)
  • What does support look like? I don't use OpenShift so it's hard for me to guarantee I'll keep it up to date.

Potential API might look like:

import (
    "github.com/ericchiang/k8s"
    "github.com/ericchiang/k8s/openshift"
)

func main() {
    // Load regular client.
    client, err := k8s.NewInClusterClient()
    if err != nil {
        // handle error
    }

    // OpenShift client is initialized using the k8s client.
    oc := openshift.NewClient(client)
    groups, err := oc.UserV1().ListGroups()
    // ...
}

[0] https://github.com/openshift/origin/tree/v1.5.0-alpha.2/api/protobuf-spec

1.7 support

Known issues:

  • Custom resource definitions are in a different repo.
  • Was ObjectMeta removed from v1?

open /var/run/secrets/kubernetes.io/serviceaccount/namespace: no such file or directory

go run test-client.go 
2018/01/19 23:45:12 open /var/run/secrets/kubernetes.io/serviceaccount/namespace: no such file or directory
exit status 1

I don't know why my system doesn't have this directory.
I used kubeadm to install my cluster, version 1.9.

func main() {
    client, err := k8s.NewInClusterClient()
    if err != nil {
        log.Fatal(err)
    }

    nodes, err := client.CoreV1().ListNodes(context.Background())
    if err != nil {
        log.Fatal(err)
    }
    for _, node := range nodes.Items {
        fmt.Printf("name=%q schedulable=%t\n", *node.Metadata.Name, !*node.Spec.Unschedulable)
    }
}

can anyone help me?

Watch Struct Cleanup

The signature is perfectly valid, but if you're generating these...

func (c *CoreV1) WatchNodes(ctx context.Context, options ...Option) (interface {
    Next() (*versioned.Event, *apiv1.Node, error)
    Close() error
}, error)

You might as well generate these:

func (c *CoreV1) WatchNodes(ctx context.Context, options ...Option) (NodeWatcher, error)
type NodeWatcher interface {
    Next() (*versioned.Event, *apiv1.Node, error)
    Close() error
}

(Pick generated interface name to taste)

List and watch resources in all namespaces

Probably something like:

func (c *CoreV1) ListPods(ctx context.Context, namespace string, options ...Option) (*apiv1.PodList, error)
func (c *CoreV1) ListAllPods(ctx context.Context, options ...Option) (*apiv1.PodList, error)
// ...
func (c *CoreV1) WatchPods(ctx context.Context, namespace string, options ...Option) (*CoreV1PodWatcher, error)
func (c *CoreV1) WatchAllPods(ctx context.Context, options ...Option) (*CoreV1PodWatcher, error)
