kubernetes-retired / cluster-api-provider-docker
A Cluster API Provider implementation using docker containers as the infra provider. Cluster API locally for a change!
License: Apache License 2.0
/kind feature
Describe the solution you'd like
I'd like the verify scripts to also build the docker image (not push, just build and make sure it's working)
Get the prow configuration in place so we can run tests on PRs. I don't think we need periodics yet, just PR jobs.
RBAC is still missing some rules. Copying and pasting in new things for now; #32 is needed here.
/kind bug
/lifecycle active
/kind feature
Describe the solution you'd like
In order to have capdctl be able to spin up a management cluster we must use kustomize to build the YAML provided by kubebuilder. capdctl should go fetch all the necessary yaml from each provider requested.
Anything else you would like to add:
None
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
/lifecycle active
/assign
cluster api expects the secret to be stored in "value" not "kubeconfig"
/lifecycle active
/kind bug
/assign
/kind feature
Describe the solution you'd like
We have not touched documentation for some time. It needs to be updated!
Anything else you would like to add:
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
/kind bug
What steps did you take and what happened:
capdctl setup
capdctl crd...
capdctl capd ...
External Load Balancer already exists. Nothing to do for this cluster.
What did you expect to happen:
Deleted ELB should have been detected and a new ELB should have been setup.
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
/kind feature
Describe the solution you'd like
[A clear and concise description of what you want to happen.]
I'm trying to start using tilt.dev, and it'd be useful for me to be able to set up the kind cluster separately from setting up the control plane. Sometimes tearing down and bringing up all of kind is too heavyweight and gets in the way of iteration.
Anything else you would like to add:
we could just keep capdctl setup as-is, and have it combine capdctl kind and capdctl platform behind the scenes
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
/kind bug
What steps did you take and what happened:
I ran this like always but there are no logs to stdout using kubectl logs
It's probably a klog setting thing. Not entirely sure yet.
/assign
/lifecycle active
/kind feature
Describe the solution you'd like
Capdctl needs to be able to load 3 different providers from various places onto a management cluster.
Anything else you would like to add:
/assign
/priority important-soon
/kind feature
We want to test out v1alpha2 with this provider, but we will have to support v1alpha1 as well.
Node refs matter now and we are doing it all wrong!
/kind bug
/lifecycle active
CAPI expects child clusters to store their kubeconfig in a secret called %s-kubeconfig, where %s is the name of the cluster.
We were so close: we used fmt.Sprintf("kubeconfig-%s", clusterName) instead.
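The fix is essentially a one-line naming change; a minimal sketch of the expected convention (the helper name is hypothetical, not the repo's actual API):

```go
package main

import "fmt"

// kubeconfigSecretName builds the secret name Cluster API expects:
// "<cluster-name>-kubeconfig", not "kubeconfig-<cluster-name>".
func kubeconfigSecretName(clusterName string) string {
	return fmt.Sprintf("%s-kubeconfig", clusterName)
}

func main() {
	// Per the related issue above, the kubeconfig bytes themselves
	// must live under the "value" key of the secret data.
	secretData := map[string][]byte{"value": []byte("<kubeconfig contents>")}
	fmt.Println(kubeconfigSecretName("my-cluster"), "keys:", len(secretData))
}
```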
Presently the README explains that the following command can be used to create a local image for the controller:
docker build -t my-repository/capd-manager:latest .
In order to use this local image, it must be loaded into the kind cluster using the kind load command:
kind load docker-image my-custom-image:unique-tag
However, the kind documentation warns against using the latest tag:
The Kubernetes default pull policy is IfNotPresent unless the image tag is :latest in which case the default policy is Always. IfNotPresent causes the Kubelet to skip pulling an image if it already exists. If you want those images loaded into node to work as expected, please:
don't use a :latest tag
The CAPD documentation should include these same instructions and not suggest using the latest tag.
/documentation
/kind feature
/lifecycle active
capdctl needs to be able to generate machinedeployments so we can test those out
/kind feature
Describe the solution you'd like
kubebuilder v0.2 likes the manager to be called manager
Anything else you would like to add:
Environment:
/assign
/lifecycle active
/kind bug
The main problem lies in the fact that the cluster api node ref controller, running in the management cluster (kind), doesn't share a network with the load balancer.
For this to work, the cluster API network must have access to the child cluster's network.
go build ./cmd/...
fails with the below error
$ go build ./cmd/...
cmd/capd-manager/main.go:23:2: cannot find package "github.com/chuckha/cluster-api-provider-docker/actuators" in any of:
/usr/local/Cellar/go/1.12.6/libexec/src/github.com/chuckha/cluster-api-provider-docker/actuators (from $GOROOT)
/Users/ashisha/go/src/github.com/chuckha/cluster-api-provider-docker/actuators (from $GOPATH)
cmd/capd-manager/main.go:24:2: cannot find package "k8s.io/client-go/kubernetes" in any of:
/usr/local/Cellar/go/1.12.6/libexec/src/k8s.io/client-go/kubernetes (from $GOROOT)
/Users/ashisha/go/src/k8s.io/client-go/kubernetes (from $GOPATH)
cmd/capd-manager/main.go:30:2: cannot find package "sigs.k8s.io/controller-runtime/pkg/client/config" in any of:
/usr/local/Cellar/go/1.12.6/libexec/src/sigs.k8s.io/controller-runtime/pkg/client/config (from $GOROOT)
/Users/ashisha/go/src/sigs.k8s.io/controller-runtime/pkg/client/config (from $GOPATH)
cmd/capd-manager/main.go:31:2: cannot find package "sigs.k8s.io/controller-runtime/pkg/manager" in any of:
/usr/local/Cellar/go/1.12.6/libexec/src/sigs.k8s.io/controller-runtime/pkg/manager (from $GOROOT)
/Users/ashisha/go/src/sigs.k8s.io/controller-runtime/pkg/manager (from $GOPATH)
cmd/capd-manager/main.go:32:2: cannot find package "sigs.k8s.io/controller-runtime/pkg/runtime/signals" in any of:
/usr/local/Cellar/go/1.12.6/libexec/src/sigs.k8s.io/controller-runtime/pkg/runtime/signals (from $GOROOT)
/Users/ashisha/go/src/sigs.k8s.io/controller-runtime/pkg/runtime/signals (from $GOPATH)
cmd/capdctl/main.go:26:2: cannot find package "github.com/chuckha/cluster-api-provider-docker/execer" in any of:
/usr/local/Cellar/go/1.12.6/libexec/src/github.com/chuckha/cluster-api-provider-docker/execer (from $GOROOT)
/Users/ashisha/go/src/github.com/chuckha/cluster-api-provider-docker/execer (from $GOPATH)
cmd/capdctl/main.go:27:2: cannot find package "k8s.io/apimachinery/pkg/apis/meta/v1" in any of:
/usr/local/Cellar/go/1.12.6/libexec/src/k8s.io/apimachinery/pkg/apis/meta/v1 (from $GOROOT)
/Users/ashisha/go/src/k8s.io/apimachinery/pkg/apis/meta/v1 (from $GOPATH)
cmd/kind-test/main.go:25:2: cannot find package "github.com/chuckha/cluster-api-provider-docker/kind/actions" in any of:
/usr/local/Cellar/go/1.12.6/libexec/src/github.com/chuckha/cluster-api-provider-docker/kind/actions (from $GOROOT)
/Users/ashisha/go/src/github.com/chuckha/cluster-api-provider-docker/kind/actions (from $GOPATH)
cmd/kind-test/main.go:26:2: cannot find package "sigs.k8s.io/kind/pkg/cluster/constants" in any of:
/usr/local/Cellar/go/1.12.6/libexec/src/sigs.k8s.io/kind/pkg/cluster/constants (from $GOROOT)
/Users/ashisha/go/src/sigs.k8s.io/kind/pkg/cluster/constants (from $GOPATH)
There are multiple references to github.com/chuckha which need to be removed.
/kind bug
Describe the solution you'd like
capi v0.1.6 is missing a crucial bug fix. We need to bump to v0.1.7 when it is released. This provider will not work until then.
/kind feature
Describe the solution you'd like
Builds are now optimized towards releasing and not dev.
In order to make the dev workflow experience better we should create a Makefile that builds docker images and builds binaries with the version embedded.
This issue is a result of changing the release pipeline.
Ex: the kind tool needs to have a version >= v0.3.0.
/kind feature
Describe the solution you'd like
This is a tracking issue for getting CAPD to support v1alpha2
Anything else you would like to add:
/cc @fabianofranz
As per the email sent to kubernetes-dev[1], please create a SECURITY_CONTACTS
file.
The template for the file can be found in the kubernetes-template repository[2].
A description for the file is in the steering-committee docs[3], you might need
to search that page for "Security Contacts".
Please feel free to ping me on the PR when you make it, otherwise I will see when
you close this issue. :)
Thanks so much, let me know if you have any questions.
(This issue was generated from a tool, apologies for any weirdness.)
[1] https://groups.google.com/forum/#!topic/kubernetes-dev/codeiIoQ6QE
[2] https://github.com/kubernetes/kubernetes-template-project/blob/master/SECURITY_CONTACTS
[3] https://github.com/kubernetes/community/blob/master/committee-steering/governance/sig-governance-template-short.md
There are some places where the chuckha repo is imported. This affects being able to successfully build the binaries.
Server rejected event '&v1.Event{<SNIP>} is forbidden: User "system:serviceaccount:cluster-api-system:default" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
/kind bug
/assign
/lifecycle active
We hardcode 6443 all over the place and it should be a constant
/kind enhancement
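A minimal sketch of what extracting that constant could look like (the constant and helper names are hypothetical, not the repo's actual API):

```go
package main

import "fmt"

// APIServerPort replaces the 6443 literals scattered through the code
// with a single named constant.
const APIServerPort = 6443

// apiServerEndpoint builds a host:port endpoint for the kube-apiserver.
func apiServerEndpoint(host string) string {
	return fmt.Sprintf("%s:%d", host, APIServerPort)
}

func main() {
	fmt.Println(apiServerEndpoint("172.17.0.3"))
}
```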
/kind bug
What steps did you take and what happened:
Fetching https://sigs.k8s.io/cluster-api-provider-docker/logger?go-get=1
Parsing meta tags from https://sigs.k8s.io/cluster-api-provider-docker/logger?go-get=1 (status code 200)
get "sigs.k8s.io/cluster-api-provider-docker/logger": found meta tag get.metaImport{Prefix:"sigs.k8s.io/cluster-api-provider-docker", VCS:"git", RepoRoot:"https://github.com/kubernetes-sigs/cluster-api-provider-docker"} at https://sigs.k8s.io/cluster-api-provider-docker/logger?go-get=1
get "sigs.k8s.io/cluster-api-provider-docker/logger": verifying non-authoritative meta tag
Fetching https://sigs.k8s.io/cluster-api-provider-docker?go-get=1
Parsing meta tags from https://sigs.k8s.io/cluster-api-provider-docker?go-get=1 (status code 200)
go: finding sigs.k8s.io/cluster-api-provider-docker/logger latest
Fetching https://sigs.k8s.io?go-get=1
Parsing meta tags from https://sigs.k8s.io?go-get=1 (status code 200)
build sigs.k8s.io/cluster-api-provider-docker/cmd/capd-manager: cannot load sigs.k8s.io/cluster-api-provider-docker/logger: cannot find module providing package sigs.k8s.io/cluster-api-provider-docker/logger
The command '/bin/sh -c go install -v ./cmd/capd-manager' returned a non-zero code: 1
What did you expect to happen:
Should be able to successfully build and push capd-manager images
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
/kind feature
Describe the solution you'd like
When debugging, it's important to know exactly what environment you're running in. And unfortunately, there are a lot of version variables when it comes to capd.
I think one single capdctl version command should return all the information about the environment, including:
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
Try the new kind version and see if we can use it as the default!
This will also involve going through the code to see if the API or behaviors we depend on changed at all.
/kind feature
Describe the solution you'd like
We should use kubebuilder v0.2 for the controller and DockerMachine types coming in v1a2 as that's what all the other CAPD projects have settled on.
/priority important-soon
/assign
/lifecycle active
/kind feature
Describe the solution you'd like
We need to import the new cluster-api types and not import the old cluster-api types.
/assign
/lifecycle active
/priority important-soon
/kind bug
What steps did you take and what happened:
capdctl needs to generate JSON with kind & version embedded in it.
/good-first-issue
This may be true of all generated JSON; as part of this task, please ensure all generated JSON has Kind & Version.
I expect
capdctl control-plane -namespace default | kubectl apply -f -
and
capdctl cluster -namespace default | kubectl apply -f -
to work
/kind feature
Describe the solution you'd like
I would like GitHub actions to push images to GCR. It is not very straightforward. If you look at some of the history on the .github/main.workflow file you will see many attempts were made.
Ideally this will tie into the image promoter process.
Anything else you would like to add:
I would strongly suggest having an alternative repo to experiment in that has access to GitHub Actions, as iterating in this repo is extremely slow due to the review process required.
/kind bug
What steps did you take and what happened:
kubernetes-sigs/cluster-api#1104 causes a panic in the cluster-api system. In order for this to proceed, please use gcr.io/kubernetes1-226021/cluster-api-controller-amd64:dev
until that issue is closed and a release that contains that patch is in.
/kind feature
Describe the solution you'd like
Right now we're using latest and building from HEAD. This is a very bad practice and we should figure out releases for this project.
I'm hoping that GitHub Packages is available now to solve a large part of this issue for us.
Anything else you would like to add:
Don't use latest image tag.
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
There are two types of functions in this repo (in the actions/ directory).
They all share a lot of code and I think there is room for improvement here. This is intentionally a vague ticket. It may involve a few iterations/API design work.
/kind cleanup
Running capdctl setup doesn't work out of the box. The Cluster CRD needs to be added if it does not already exist.
amyc-a01:cluster-api-provider-docker amy$ capdctl setup
Error: error loading config: decoding failure: no kind "Cluster" is registered for version "kind.sigs.k8s.io/v1alpha3" in scheme "sigs.k8s.io/kind/pkg/cluster/config/encoding/scheme.go:34"
panic: exit status 1
goroutine 1 [running]:
main.makeManagementCluster(0x17138a4, 0x4)
/Users/amy/cluster-api-provider-docker/cmd/capdctl/main.go:186 +0x188
main.main()
/Users/amy/cluster-api-provider-docker/cmd/capdctl/main.go:82 +0x6cb
There are a lot of fmt.Printlns. They should be treated as a logging dependency; for now it's OK to add a logger-interface parameter to the function. We have been using logr.Logger in a few places, so I think it would be good to continue using that.
/kind enhancement
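A minimal sketch of the logger-as-parameter pattern (the interface here is a tiny stand-in for logr.Logger; all names are hypothetical):

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// logger is a tiny stand-in for logr.Logger: functions receive it as a
// parameter instead of calling fmt.Println directly.
type logger interface {
	Info(msg string)
}

// writerLogger logs to any io.Writer.
type writerLogger struct{ w io.Writer }

func (l writerLogger) Info(msg string) { fmt.Fprintln(l.w, msg) }

// memLogger records messages in memory, which makes callers testable.
type memLogger struct{ msgs []string }

func (l *memLogger) Info(msg string) { l.msgs = append(l.msgs, msg) }

// createNode is a hypothetical function showing the pattern: the caller
// injects the logger rather than the function printing to stdout itself.
func createNode(name string, log logger) {
	log.Info("creating node " + name)
}

func main() {
	createNode("control-plane-0", writerLogger{w: os.Stdout})
}
```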
with the update to cluster-api, v0.1.4 will update the node refs for us. We should not be doing this manually.
/kind enhancement
/kind feature
Describe the solution you'd like
We verify golint and gofmt but we don't test goimports
/kind bug
What steps did you take and what happened:
$ capdctl setup --help
Usage of setup:
-cluster-name string
The name of the management cluster (default "kind")
however
$ capdctl --help | grep setup -A1
setup - Create a management cluster
example: capdctl setup --name my-management-cluster-name
What did you expect to happen:
The help messages should have been in sync
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
/kind bug
What steps did you take and what happened:
failed with exit code one
What did you expect to happen:
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
The cluster will try to find another control-plane node to run kubectl on, but of course no such container exists. It will therefore try to execute a command on a container with the name of the empty string, which doesn't exist.
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
It would be swell to have a place to put container images that is kubernetes community oriented. Right now it's up to each person to build their own container image or rely on my VMware provided google project.
I know @amy has been working on the image promoter, is that ready to be used?
the existing k8s boiler plate checks are written in python, for some reason.
a boiler plate check can be written in go (~50 LOC). basically any new, custom verify checks should be either bash or go and we should not depend on extra languages if possible.
should be added to verify-all.sh once #13 merges.
i should add this in kinder too, because we don't have it there yet.
/assign
/kind feature
/priority important-longterm
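A rough sketch of the core of such a Go check (the header string and helper name are assumptions, not the real verify script; a full version would walk the tree and exit non-zero on failures):

```go
package main

import (
	"fmt"
	"strings"
)

// hasBoilerplate reports whether a file's contents begin with the expected
// license header, optionally after a leading shebang line. A real verify
// script would run this over every .go and .sh file in the repo.
func hasBoilerplate(contents, header string) bool {
	// Skip a leading "#!" line so shell scripts can still pass.
	if strings.HasPrefix(contents, "#!") {
		if i := strings.Index(contents, "\n"); i >= 0 {
			contents = contents[i+1:]
		}
	}
	return strings.HasPrefix(contents, header)
}

func main() {
	header := "/*\nCopyright The Kubernetes Authors."
	sample := "/*\nCopyright The Kubernetes Authors.\n*/\n\npackage main\n"
	fmt.Println("sample has boilerplate:", hasBoilerplate(sample, header))
}
```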
/kind bug
What steps did you take and what happened:
$ capdctl cluster -cluster-name zecora -namespace zecora | kubectl apply -f-
cluster.cluster.k8s.io/zecora created
$ capdctl control-plane -cluster-name zecora -namespace zecora -version v1.14.0 | kubectl apply -f-
machine.cluster.k8s.io/my-machine created
$ kubectl get machines -n zecora my-machine -o jsonpath='{.spec.versions.controlPlane}'
v1.14.0
$ kubectl get secret -n zecora kubeconfig-zecora -o json | jq -r '.data.kubeconfig' | base64 -d > /tmp/zecora.yaml
$ kubectl --kubeconfig /tmp/zecora.yaml version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-17T00:58:35Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
What did you expect to happen:
The version reported by kubectl should have been v1.14.0.
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Environment:
$ git rev-parse HEAD
c3b249bb607ccc244a2ab8cb73b4ca241039fff1
$ docker version
Client:
Version: 18.09.5
API version: 1.39
Go version: go1.10.4
Git commit: e8ff056
Built: Thu May 9 23:11:19 2019
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.09.5
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: e8ff056
Built: Thu May 9 22:59:19 2019
OS/Arch: linux/amd64
Experimental: false
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
/kind bug
/assign
Until Kubeadm fixes kubernetes/kubeadm#1632 we need to work around it by modifying the kubeadm config map cluster statuses when we remove a node from the cluster.
I was quite lazy with handling errors and panics in capdctl. You'll find https://github.com/kubernetes-sigs/cluster-api-provider-docker/blob/master/cmd/capdctl/main.go#L183 this littered throughout the code. It's not great.
I would like the functions to return errors instead of panicking and print the error out to the user if there is one.
/kind enhancement
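A minimal sketch of the desired refactor, assuming a makeManagementCluster like the one linked above (the body is illustrative, not the real implementation):

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// makeManagementCluster sketches the desired shape: return an error
// instead of panicking, and let the caller decide how to report it.
func makeManagementCluster(clusterName string) error {
	if clusterName == "" {
		return errors.New("cluster name must not be empty")
	}
	// ... create the kind cluster here, returning any failure ...
	return nil
}

func main() {
	// One place at the top level prints the error for the user and exits
	// non-zero, instead of showing a goroutine dump from panic.
	if err := makeManagementCluster("kind"); err != nil {
		fmt.Fprintln(os.Stderr, "Error:", err)
		os.Exit(1)
	}
}
```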
/kind bug
What steps did you take and what happened:
Containers must talk to each other. Containers can talk to each other over their container IP address. Containers live in the docker0 network. My laptop cannot use the container IP to communicate with containers as the docker0 network is not bridged. Therefore my laptop can only talk to containers over localhost. The containers cannot talk to each other over localhost. Therefore the secret that contains the kube-apiserver's endpoint will only contain one value and will be wrong for either the remote ref controller or the host both of which are required to work.
This is a non-issue on linux since the docker0 network is bridged to my local network.
It would be great to not copy and paste the CRDs from cluster API.
Ideally capdctl can produce the YAML from multiple versions of Cluster API. I propose a command that looks like this:
capdctl crds <cluster-api-version>
for example
capdctl crds 0.1.3
This will then print out the crds.
There are two implementations that come to mind: 1) capdctl reaches out to cluster-api's repo directly, downloads the YAML, and prints it to stdout, and 2) we generate all the different versions of YAML and embed them into capdctl so that capdctl doesn't need to hit the network to work.
I'm sure there are more implementations; these are just two suggestions. I'd leave it up to the implementer to pick one.
This could very well lead us into the realm of testing cluster-api v1alpha1 to v1alpha2 upgrades.
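A minimal sketch of option 2, embedding the generated YAML per version (the manifest contents here are placeholders, and the function name mirrors the proposed subcommand):

```go
package main

import "fmt"

// crdsByVersion maps a cluster-api release to its CRD YAML. The bodies
// here are placeholders; a real implementation would generate the full
// manifests and embed them at build time.
var crdsByVersion = map[string]string{
	"0.1.3": "# cluster-api v0.1.3 CRD manifests\n",
	"0.1.4": "# cluster-api v0.1.4 CRD manifests\n",
}

// crds implements the embedded-manifest option: look up the requested
// version locally, with no network access required.
func crds(version string) (string, error) {
	yaml, ok := crdsByVersion[version]
	if !ok {
		return "", fmt.Errorf("no embedded CRDs for cluster-api version %q", version)
	}
	return yaml, nil
}

func main() {
	yaml, err := crds("0.1.3")
	if err != nil {
		panic(err)
	}
	fmt.Print(yaml)
}
```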