
kubernetes-retired / cluster-api-provider-docker

52 stars · 8 watchers · 43 forks · 656 KB

A Cluster API Provider implementation using docker containers as the infra provider. Cluster API locally for a change!

License: Apache License 2.0

Languages: Go 77.65% · Shell 18.74% · Makefile 2.11% · Dockerfile 1.30% · HCL 0.20%
Topics: k8s-sig-cluster-lifecycle

cluster-api-provider-docker's People

Contributors: arvinderpal, ashish-amarnath, chuckha, dennisme, detiber, erwinvaneyk, fabriziopandini, joonas, k8s-ci-robot, liztio, mytunguyen, ncdc, neolit123, nikhita, noamran, sethp-nr, thebsdbox, yuzhang17


cluster-api-provider-docker's Issues

Add verify script to run `docker build`

/kind feature

Describe the solution you'd like
I'd like the verify scripts to also build the docker image (not push, just build and make sure it's working)

Set up automated tests

Get the prow configuration in place so we can run tests on PRs. I don't think we need periodics yet, just PR jobs.

Fixup RBAC

RBAC is still missing some rules; we keep copying and pasting new ones in as we find them. #32, I need you 😭

/kind bug
/lifecycle active

Capdctl must be able to fetch (and kustomize) YAML

/kind feature

Describe the solution you'd like
In order for capdctl to spin up a management cluster, it must use kustomize to build the YAML provided by kubebuilder. capdctl should fetch all the necessary YAML for each requested provider.
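
A minimal sketch of what that could look like, assuming capdctl shells out to a kustomize binary and the provider checkout follows the usual kubebuilder config/default layout (both are assumptions):

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// buildProviderYAML is illustrative; providerDir is a hypothetical path to a
// fetched provider repository.
func buildProviderYAML(providerDir string) ([]byte, error) {
	// kubebuilder projects conventionally keep their kustomize entrypoint in config/default.
	out, err := exec.Command("kustomize", "build", filepath.Join(providerDir, "config", "default")).Output()
	if err != nil {
		return nil, fmt.Errorf("kustomize build failed: %v", err)
	}
	return out, nil
}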

Anything else you would like to add:
None

Environment:

  • Cluster-api-provider-docker version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

/lifecycle active
/assign

Update documentation for v1alpha2

/kind feature

Describe the solution you'd like
We have not touched documentation for some time. It needs to be updated!

Anything else you would like to add:

Environment:

  • Cluster-api-provider-docker version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

Cluster actuator should reconcile deleted ELB nodes

/kind bug

What steps did you take and what happened:

  • Set up a management cluster using capdctl setup
  • Apply the capi CRDs using capdctl crd...
  • Deploy the capd controller manager using capdctl capd ...
  • Create a sample workload cluster (cluster only, without creating any machines)
  • Find the ELB for the workload cluster using `docker ps --filter name=-external-load-balancer`
  • Kill the ELB container using `docker kill`
  • Wait for the next reconciliation loop of the cluster controller; the actuator fails to detect the deleted ELB and logs:
External Load Balancer already exists. Nothing to do for this cluster.

What did you expect to happen:
The deleted ELB should have been detected and a new ELB should have been set up.
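
A hedged sketch of the missing check, reusing the same docker ps filter as the repro steps above (elbExists and its wiring into the actuator are illustrative, not the repo's actual API):

import (
	"os/exec"
	"strings"
)

// elbExists reports whether the external load balancer container for the
// cluster is actually running, instead of assuming it still exists.
func elbExists(clusterName string) (bool, error) {
	filter := "name=" + clusterName + "-external-load-balancer"
	out, err := exec.Command("docker", "ps", "--filter", filter, "--format", "{{.Names}}").Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) != "", nil
}

The cluster actuator could call this on every reconcile and recreate the ELB when it returns false, instead of logging that there is nothing to do.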

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Cluster-api-provider-docker version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

Separate `capdctl setup` into `capdctl kind` and `capdctl platform`

/kind feature

Describe the solution you'd like
[A clear and concise description of what you want to happen.]

I'm trying to start using tilt.dev, and it'd be useful to be able to set up the kind cluster separately from setting up the control plane. Sometimes tearing down and bringing up all of kind is too heavyweight and gets in the way of iteration.

Anything else you would like to add:

We could keep capdctl setup as-is and have it simply combine capdctl kind and capdctl platform behind the scenes.

Environment:

  • Cluster-api-provider-docker version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

actuators are not actually logging

/kind bug

What steps did you take and what happened:
I ran it the usual way, but there are no logs on stdout when using kubectl logs.

It's probably a klog settings issue. Not entirely sure yet.
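
If it does turn out to be a klog issue, the usual pattern is to register and parse klog's flags in main so output reaches stderr (a sketch of that common fix, not a confirmed diagnosis):

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	// Wire up klog's flags and route output to stderr so `kubectl logs` sees it.
	klog.InitFlags(nil)
	flag.Set("logtostderr", "true")
	flag.Parse()
	klog.Info("capd-manager starting")
}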

/assign
/lifecycle active

Update capdctl to work with v1a2 infrastructure

/kind feature

Describe the solution you'd like
Capdctl needs to be able to load 3 different providers from various places onto a management cluster.

Anything else you would like to add:

/assign
/priority important-soon

Fix secret name for capi v0.1.4

CAPI expects workload clusters to store their kubeconfig in a secret named %s-kubeconfig, where %s is the name of the cluster.

We were so close: we used kubeconfig-%s instead.
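
The fix is a one-liner; a sketch:

// CAPI looks the kubeconfig up by this exact name, so the order matters.
secretName := fmt.Sprintf("%s-kubeconfig", clusterName) // was: fmt.Sprintf("kubeconfig-%s", clusterName)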

Document how to create/use local images for testing

Presently the README explains that a local image for the controller can be built with the following command:
docker build -t my-repository/capd-manager:latest .

In order to use this local image, it must be loaded into the kind cluster using the kind load command:
kind load docker-image my-custom-image:unique-tag

However, the kind documentation warns against using the latest tag:

The Kubernetes default pull policy is IfNotPresent unless the image tag is :latest, in which case the default policy is Always. IfNotPresent causes the Kubelet to skip pulling an image if it already exists. If you want images loaded into the node to work as expected, please:

don't use a :latest tag

CAPD documentation should include these same instructions and should not suggest using the latest tag.

/documentation

cluster api v0.1.4 doesn't work with current implementation

/kind bug

The main problem lies in the fact that the cluster api node ref controller, running in the management cluster (kind), doesn't share a network with the load balancer.

For this to work, the cluster API network must have access to the child cluster's network.

`go build ./cmd/...` fails

go build ./cmd/... fails with the below error

💰 go build ./cmd/...
cmd/capd-manager/main.go:23:2: cannot find package "github.com/chuckha/cluster-api-provider-docker/actuators" in any of:
	/usr/local/Cellar/go/1.12.6/libexec/src/github.com/chuckha/cluster-api-provider-docker/actuators (from $GOROOT)
	/Users/ashisha/go/src/github.com/chuckha/cluster-api-provider-docker/actuators (from $GOPATH)
cmd/capd-manager/main.go:24:2: cannot find package "k8s.io/client-go/kubernetes" in any of:
	/usr/local/Cellar/go/1.12.6/libexec/src/k8s.io/client-go/kubernetes (from $GOROOT)
	/Users/ashisha/go/src/k8s.io/client-go/kubernetes (from $GOPATH)
cmd/capd-manager/main.go:30:2: cannot find package "sigs.k8s.io/controller-runtime/pkg/client/config" in any of:
	/usr/local/Cellar/go/1.12.6/libexec/src/sigs.k8s.io/controller-runtime/pkg/client/config (from $GOROOT)
	/Users/ashisha/go/src/sigs.k8s.io/controller-runtime/pkg/client/config (from $GOPATH)
cmd/capd-manager/main.go:31:2: cannot find package "sigs.k8s.io/controller-runtime/pkg/manager" in any of:
	/usr/local/Cellar/go/1.12.6/libexec/src/sigs.k8s.io/controller-runtime/pkg/manager (from $GOROOT)
	/Users/ashisha/go/src/sigs.k8s.io/controller-runtime/pkg/manager (from $GOPATH)
cmd/capd-manager/main.go:32:2: cannot find package "sigs.k8s.io/controller-runtime/pkg/runtime/signals" in any of:
	/usr/local/Cellar/go/1.12.6/libexec/src/sigs.k8s.io/controller-runtime/pkg/runtime/signals (from $GOROOT)
	/Users/ashisha/go/src/sigs.k8s.io/controller-runtime/pkg/runtime/signals (from $GOPATH)
cmd/capdctl/main.go:26:2: cannot find package "github.com/chuckha/cluster-api-provider-docker/execer" in any of:
	/usr/local/Cellar/go/1.12.6/libexec/src/github.com/chuckha/cluster-api-provider-docker/execer (from $GOROOT)
	/Users/ashisha/go/src/github.com/chuckha/cluster-api-provider-docker/execer (from $GOPATH)
cmd/capdctl/main.go:27:2: cannot find package "k8s.io/apimachinery/pkg/apis/meta/v1" in any of:
	/usr/local/Cellar/go/1.12.6/libexec/src/k8s.io/apimachinery/pkg/apis/meta/v1 (from $GOROOT)
	/Users/ashisha/go/src/k8s.io/apimachinery/pkg/apis/meta/v1 (from $GOPATH)
cmd/kind-test/main.go:25:2: cannot find package "github.com/chuckha/cluster-api-provider-docker/kind/actions" in any of:
	/usr/local/Cellar/go/1.12.6/libexec/src/github.com/chuckha/cluster-api-provider-docker/kind/actions (from $GOROOT)
	/Users/ashisha/go/src/github.com/chuckha/cluster-api-provider-docker/kind/actions (from $GOPATH)
cmd/kind-test/main.go:26:2: cannot find package "sigs.k8s.io/kind/pkg/cluster/constants" in any of:
	/usr/local/Cellar/go/1.12.6/libexec/src/sigs.k8s.io/kind/pkg/cluster/constants (from $GOROOT)
	/Users/ashisha/go/src/sigs.k8s.io/kind/pkg/cluster/constants (from $GOPATH)

There are multiple references to github.com/chuckha that need to be removed.
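
Since the repo now lives under kubernetes-sigs with the sigs.k8s.io vanity import path (visible in the publish-manager.sh logs elsewhere in this tracker), the fix is a mechanical rewrite of the imports, e.g.:

// before
import "github.com/chuckha/cluster-api-provider-docker/actuators"

// after
import "sigs.k8s.io/cluster-api-provider-docker/actuators"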

Improve dev-builds

/kind feature

Describe the solution you'd like
Builds are currently optimized for releasing, not for development.

To make the dev workflow better, we should create a Makefile that builds docker images and builds binaries with the version embedded.

This issue is a result of changing the release pipeline.
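
One common pattern for the version-embedding part (a sketch, not the repo's current build): declare a version variable in the binary and inject it with -ldflags from the Makefile.

// In the main package of each binary (illustrative):
var version = "dev" // overridden at build time

// In the Makefile (illustrative):
//   go build -ldflags "-X main.version=$(git describe --tags --always --dirty)" ./cmd/capdctl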

Support v1alpha2

/kind feature

Describe the solution you'd like
This is a tracking issue for getting CAPD to support v1alpha2

  • #123 Update capdctl to support v1a2 infrastructure
  • #125 Import the new types and stop importing the old types
  • #67 Implement a release strategy for v1a1
  • #131 Update documentation
  • Add more here

Anything else you would like to add:
/cc @fabianofranz

Create a SECURITY_CONTACTS file.

As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.

The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3], you might need
to search that page for "Security Contacts".

Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)

Thanks so much, let me know if you have any questions.

(This issue was generated from a tool, apologies for any weirdness.)

[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md

Dependency Issues

There are some places where the chuckha repo is imported. This breaks building the binaries.

RBAC missing permissions

Server rejected event '&v1.Event{<SNIP>} is forbidden: User "system:serviceaccount:cluster-api-system:default" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
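
The error above means the controller's service account lacks permissions on events. A sketch of the missing rule, expressed with the rbac/v1 types (assuming events are written by the controller's own service account):

import rbacv1 "k8s.io/api/rbac/v1"

// The controller needs to create and patch events in the core ("") API group.
var eventsRule = rbacv1.PolicyRule{
	APIGroups: []string{""},
	Resources: []string{"events"},
	Verbs:     []string{"create", "patch"},
}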

/kind bug
/assign
/lifecycle active

make 6443 a constant

We hardcode 6443 all over the place; it should be a constant.
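
A minimal sketch:

// APIServerPort is the kube-apiserver's secure port, used when wiring up the
// load balancer and generating kubeconfigs.
const APIServerPort = 6443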

/kind enhancement

publish-manager.sh fails without the logging package in the build container

/kind bug

What steps did you take and what happened:

Fetching https://sigs.k8s.io/cluster-api-provider-docker/logger?go-get=1
Parsing meta tags from https://sigs.k8s.io/cluster-api-provider-docker/logger?go-get=1 (status code 200)
get "sigs.k8s.io/cluster-api-provider-docker/logger": found meta tag get.metaImport{Prefix:"sigs.k8s.io/cluster-api-provider-docker", VCS:"git", RepoRoot:"https://github.com/kubernetes-sigs/cluster-api-provider-docker"} at https://sigs.k8s.io/cluster-api-provider-docker/logger?go-get=1
get "sigs.k8s.io/cluster-api-provider-docker/logger": verifying non-authoritative meta tag
Fetching https://sigs.k8s.io/cluster-api-provider-docker?go-get=1
Parsing meta tags from https://sigs.k8s.io/cluster-api-provider-docker?go-get=1 (status code 200)
go: finding sigs.k8s.io/cluster-api-provider-docker/logger latest
Fetching https://sigs.k8s.io?go-get=1
Parsing meta tags from https://sigs.k8s.io?go-get=1 (status code 200)
build sigs.k8s.io/cluster-api-provider-docker/cmd/capd-manager: cannot load sigs.k8s.io/cluster-api-provider-docker/logger: cannot find module providing package sigs.k8s.io/cluster-api-provider-docker/logger
The command '/bin/sh -c go install -v ./cmd/capd-manager' returned a non-zero code: 1

What did you expect to happen:
Should be able to successfully build and push capd-manager images

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Cluster-api-provider-docker version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

`capdctl version` command.

/kind feature

Describe the solution you'd like
When debugging, it's important to know exactly what environment you're running in, and unfortunately there are a lot of version variables when it comes to capd. A single capdctl version command should return all the information about the environment (a rough sketch of the output shape follows the list), including:

  • capdctl version
  • docker runtime version
  • kind version
  • capd docker image
  • capi docker image
  • management cluster k8s control plane version
  • management cluster k8s kubelet version
  • constituent cluster k8s control plane version
  • constituent cluster k8s kubelet version
  • ... and probably more stuff I'm forgetting
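
A rough sketch of what the output could carry (field names are purely illustrative):

type versionInfo struct {
	CapdctlVersion         string `json:"capdctlVersion"`
	DockerVersion          string `json:"dockerVersion"`
	KindVersion            string `json:"kindVersion"`
	CapdImage              string `json:"capdImage"`
	CapiImage              string `json:"capiImage"`
	ManagementControlPlane string `json:"managementControlPlane"`
	ManagementKubelet      string `json:"managementKubelet"`
	WorkloadControlPlane   string `json:"workloadControlPlane"`
	WorkloadKubelet        string `json:"workloadKubelet"`
}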

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Cluster-api-provider-docker version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

Test out kind v0.4.0

Try the new kind version and see if we can use it as the default!

This will also involve going through the code to see if the API or behaviors we depend on changed at all.

Update to newest kubebuilder

/kind feature

Describe the solution you'd like
We should use kubebuilder v0.2 for the controller and the DockerMachine types coming in v1a2, as that's what the other Cluster API provider projects have settled on.

/priority important-soon
/assign
/lifecycle active

Import v1alpha2 instead of v1alpha1

/kind feature

Describe the solution you'd like
We need to import the new cluster-api types and not import the old cluster-api types.

/assign
/lifecycle active
/priority important-soon

Generated cluster does not have Kind/Version

/kind bug

What steps did you take and what happened:
capdctl needs to generate JSON with kind & version embedded in it.

/good-first-issue

This may be true of all generated JSON; as part of this task, please ensure all generated JSON has Kind & Version.

I expect both of the following to work:

capdctl control-plane -namespace default | kubectl apply -f -
capdctl cluster -namespace default | kubectl apply -f -
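
A sketch of the fix, assuming the v1alpha1 cluster.k8s.io types in use elsewhere in this tracker: populate TypeMeta explicitly before marshaling, since the Go client types leave it empty by default.

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"
)

// Illustrative: without TypeMeta, the marshaled JSON has no kind/apiVersion
// and kubectl apply rejects it.
var cluster = clusterv1.Cluster{
	TypeMeta: metav1.TypeMeta{
		Kind:       "Cluster",
		APIVersion: "cluster.k8s.io/v1alpha1",
	},
	ObjectMeta: metav1.ObjectMeta{Name: "my-cluster", Namespace: "default"},
}
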

Automate github action to push images to GCR

/kind feature

Describe the solution you'd like
I would like GitHub Actions to push images to GCR. It is not very straightforward; if you look at the history of the .github/main.workflow file you will see that many attempts were made.

Ideally this will tie into the image promoter process.

Anything else you would like to add:
I would strongly suggest having an alternative repo with access to GitHub Actions to experiment in, as iterating in this repo is extremely slow due to the required review process.

Figure out release strategy

/kind feature

Describe the solution you'd like
Right now we're using the latest tag and building from HEAD. This is very bad practice and we should figure out releases for this project.

I'm hoping that GitHub Packages is available now to solve a large part of this issue for us.

Anything else you would like to add:
Don't use latest image tag.

Environment:

  • Cluster-api-provider-docker version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

Make the different kinds of functions more obvious

There are two types of functions in this repo (in the actions/ package):

  1. the functions that act on the management cluster (the kind cluster)
  2. the functions that act on the target cluster

They all share a lot of code and I think there is room for improvement here. This is intentionally a vague ticket; it may involve a few iterations of API design work.

/kind cleanup

capdctl setup needs Cluster CRD

Running capdctl setup doesn't work out of the box. The Cluster CRD needs to be added if it does not already exist.

amyc-a01:cluster-api-provider-docker amy$ capdctl setup
Error: error loading config: decoding failure: no kind "Cluster" is registered for version "kind.sigs.k8s.io/v1alpha3" in scheme "sigs.k8s.io/kind/pkg/cluster/config/encoding/scheme.go:34"
panic: exit status 1

goroutine 1 [running]:
main.makeManagementCluster(0x17138a4, 0x4)
	/Users/amy/cluster-api-provider-docker/cmd/capdctl/main.go:186 +0x188
main.main()
	/Users/amy/cluster-api-provider-docker/cmd/capdctl/main.go:82 +0x6cb

Fix up fmt.Printlns everywhere

There are a lot of fmt.Printlns. Logging should be treated as a dependency: for now it's OK to add a logger-interface parameter to the functions that need it. We have been using logr.Logger in a few places, so it would be good to continue using that.
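
A sketch of the shape this takes with logr:

import "github.com/go-logr/logr"

// Before: fmt.Println("creating control plane for", clusterName)
// After: the caller hands in a logger and the function emits structured
// key/value pairs through it.
func createControlPlane(log logr.Logger, clusterName string) {
	log.Info("creating control plane", "cluster", clusterName)
}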

/kind enhancement

capdctl help messages are out of sync

/kind bug

What steps did you take and what happened:

💰 capdctl setup --help
Usage of setup:
  -cluster-name string
    	The name of the management cluster (default "kind")

however

💰 capdctl --help | grep setup -A1
  setup - Create a management cluster
    example: capdctl setup --name my-management-cluster-name

What did you expect to happen:
The help messages should have been in sync

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

  • Cluster-api-provider-docker version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

Final control plane node can never be deleted

/kind bug

What steps did you take and what happened:

  1. Create a kind management cluster
  2. Create a capd cluster
  3. Delete the capd cluster (using the changes in kubernetes-sigs/cluster-api#1180 so the machines get deleted first)
  4. The control plane node fails to delete, with an error like "failed with exit code one"

What did you expect to happen:

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

The controller will try to find another control-plane node to run kubectl on, but of course no such container exists. It will therefore try to execute a command against a container whose name is the empty string, which doesn't exist.
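
A hedged sketch of a guard for that failure mode (the helper names are illustrative, not the repo's actual API):

// getOtherControlPlane is assumed to return "" when no other control-plane
// container exists for the cluster.
other := getOtherControlPlane(clusterName, nodeName)
if other == "" {
	// Last control plane node: there is nothing left to run kubectl against,
	// so skip the exec step and just remove the container.
	return deleteContainer(nodeName)
}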

Environment:

  • Cluster-api-provider-docker version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

Find a place to put images

It would be swell to have a place to put container images that is Kubernetes-community oriented. Right now it's up to each person to build their own container image or rely on my VMware-provided Google project.

I know @amy has been working on the image promoter; is that ready to be used?

add boiler plate check written in go

The existing k8s boilerplate checks are written in Python, for some reason. A boilerplate check can be written in Go (~50 LOC). Basically, any new custom verify checks should be either bash or Go, and we should not depend on extra languages if possible.

This should be added to verify-all.sh once #13 merges. I should add this to kinder too, because we don't have it there yet.

/assign
/kind feature
/priority important-longterm

CAPD ignoring controlPlane version in machine spec

/kind bug

What steps did you take and what happened:

$ capdctl cluster -cluster-name zecora -namespace zecora | kubectl apply -f-
cluster.cluster.k8s.io/zecora created
$ capdctl control-plane -cluster-name zecora -namespace zecora -version v1.14.0 | kubectl apply -f-
machine.cluster.k8s.io/my-machine created
$ kubectl get machines -n zecora my-machine -o jsonpath='{.spec.versions.controlPlane}'
v1.14.0
$ kubectl get secret -n zecora kubeconfig-zecora -o json | jq -r '.data.kubeconfig' | base64 -d > /tmp/zecora.yaml
$ kubectl --kubeconfig /tmp/zecora.yaml version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-17T00:58:35Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

What did you expect to happen:

The server version reported by kubectl should have been v1.14.0.
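
A sketch of the expected plumbing, assuming the actuator maps the machine spec version onto a kindest/node image tag (that mapping is an assumption):

// Honor the version requested in the machine spec instead of a default.
version := machine.Spec.Versions.ControlPlane // "v1.14.0" in the repro above
image := fmt.Sprintf("kindest/node:%s", version)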

Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]

Environment:

$ git rev-parse HEAD
c3b249bb607ccc244a2ab8cb73b4ca241039fff1
$ docker version
Client:
 Version:           18.09.5
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        e8ff056
 Built:             Thu May  9 23:11:19 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.5
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       e8ff056
  Built:            Thu May  9 22:59:19 2019
  OS/Arch:          linux/amd64
  Experimental:     false
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

This is nice on linux and unfriendly on os x

/kind bug

What steps did you take and what happened:
Containers must talk to each other, and they can do so over their container IP addresses on the docker0 network. On OS X my laptop cannot use container IPs to reach containers, because the docker0 network is not bridged; the laptop can only talk to containers over localhost, while the containers cannot talk to each other over localhost. Therefore the secret containing the kube-apiserver's endpoint will only contain one value, and it will be wrong for either the remote ref controller or the host, both of which are required to work.

This is a non-issue on linux since the docker0 network is bridged to my local network.

Don't copy/paste the CRDs from Cluster API

It would be great to not copy and paste the CRDs from cluster API.

Ideally capdctl can produce the YAML from multiple versions of Cluster API. I propose a command that looks like this:

capdctl crds <cluster-api-version>

for example

capdctl crds 0.1.3

This will then print out the crds.

There are two implementations that come to mind: 1) capdctl reaches out to cluster-api's repo directly, downloads the YAML, and prints it to stdout; or 2) we generate all the different versions of YAML and embed them into capdctl, so that capdctl doesn't need to hit the network.

I'm sure there are more implementations, these are just two suggestions. I'd leave this up to the implementer to pick one.
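
For flavor, a hedged sketch of option 1 (the release-asset URL pattern is an assumption and would need checking against how cluster-api actually publishes its YAML):

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func printCRDs(version string) error {
	// Assumed asset name and URL layout.
	url := fmt.Sprintf("https://github.com/kubernetes-sigs/cluster-api/releases/download/v%s/cluster-api-components.yaml", version)
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = io.Copy(os.Stdout, resp.Body)
	return err
}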

This could very well lead us into the realm of testing cluster-api v1alpha1 to v1alpha2 upgrades.
