kubic-control's Issues

kubicd default configuration no debugging; kubicctl init fails?

Hi, I have a fresh install of MicroOS / Tumbleweed.

kubicctl by itself is pretty useless until I systemctl start kubicd. That got me past setting up the certificates:

linux-o02m:/home/erratic # kubicctl certificates initialize

That seemed to work. Then kubicctl init, but no dice:

linux-o02m:/home/erratic # kubicctl init
Initializing kubernetes master can take several minutes, please be patient.
Initialize Kubernetes control-plane
Error invoking kubeadm: exit status 1
linux-o02m:/home/erratic # 

syslog

-- 
-- The job identifier is 3926 and the job result is done.
Jul 18 15:15:14 linux-o02m kubicd[5880]: time="2019-07-18T15:15:14-04:00" level=info msg="[preflight] Running pre-flight checks\n[reset] No etcd config found. Assuming external etcd\n[reset] Please manually reset etcd to prevent further issues\n[reset] Stopping the kubelet service\n[reset] unmounting mounted directorie>
Jul 18 15:15:14 linux-o02m systemd[1]: Reloading.
Jul 18 15:15:14 linux-o02m systemd[1]: /usr/lib/systemd/system/auditd.service:11: PIDFile= references a path below legacy directory /var/run/, updating /var/run/auditd.pid → /run/auditd.pid; please update the unit file accordingly.
Jul 18 15:15:14 linux-o02m systemd[1]: /usr/lib/systemd/system/chronyd.service:13: PIDFile= references a path below legacy directory /var/run/, updating /var/run/chrony/chronyd.pid → /run/chrony/chronyd.pid; please update the unit file accordingly.
Jul 18 15:15:14 linux-o02m systemd[1]: Stopping Open Container Initiative Daemon...
-- Subject: A stop job for unit crio.service has begun execution
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- A stop job for unit crio.service has begun execution.
-- 
-- The job identifier is 3927.
Jul 18 15:15:14 linux-o02m systemd[2820]: var-lib-containers-storage-btrfs.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- The unit UNIT has successfully entered the 'dead' state.
Jul 18 15:15:14 linux-o02m systemd[4514]: var-lib-containers-storage-btrfs.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- The unit UNIT has successfully entered the 'dead' state.
Jul 18 15:15:14 linux-o02m systemd[1]: var-lib-containers-storage-btrfs.mount: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- The unit var-lib-containers-storage-btrfs.mount has successfully entered the 'dead' state.
Jul 18 15:15:14 linux-o02m systemd[1]: crio.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- The unit crio.service has successfully entered the 'dead' state.
Jul 18 15:15:14 linux-o02m systemd[1]: Stopped Open Container Initiative Daemon.
-- Subject: A stop job for unit crio.service has finished
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- A stop job for unit crio.service has finished.
-- 
-- The job identifier is 3927 and the job result is done.
Jul 18 15:15:14 linux-o02m kubicd[5880]: time="2019-07-18T15:15:14-04:00" level=info
Jul 18 15:15:14 linux-o02m systemd[1]: Reloading.
Jul 18 15:15:15 linux-o02m systemd[1]: /usr/lib/systemd/system/auditd.service:11: PIDFile= references a path below legacy directory /var/run/, updating /var/run/auditd.pid → /run/auditd.pid; please update the unit file accordingly.
Jul 18 15:15:15 linux-o02m systemd[1]: /usr/lib/systemd/system/chronyd.service:13: PIDFile= references a path below legacy directory /var/run/, updating /var/run/chrony/chronyd.pid → /run/chrony/chronyd.pid; please update the unit file accordingly.
Jul 18 15:15:15 linux-o02m kubicd[5880]: time="2019-07-18T15:15:15-04:00" level=info
Jul 18 15:15:15 linux-o02m kubicd[5880]: time="2019-07-18T15:15:15-04:00" level=info msg="Function: /api.Kubeadm/InitMaster, Caller: admin, Duration: 2.410909974s, Error: <nil>"
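
The journal lines above are cut off at the terminal width (the trailing ">"), so the actual kubeadm error is not visible here. One way to read the full, untruncated messages might be to bypass the pager for the kubicd unit:

journalctl -u kubicd --no-pager
# or, without the syslog prefix on each line:
journalctl -u kubicd --no-pager -o cat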

Unfortunately I don't know enough about Kubernetes, or kubic for that matter, to fix this or understand what's going on.

I'm not getting anywhere in terms of finding information on how to get this to work.

I don't really get what the problem is. Do you want money?

kubicctl init fails because it can't find registry.opensuse.org/kubic/kube-proxy:v1.18.0

When I try to install kubic, I get this error:

logs from journalctl

I replaced \n in the logs with actual new lines to make this more readable.

May 05 22:04:12 kubic-master systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 05 22:04:21 kubic-master kubicd[2182]: time="2020-05-05T22:04:21-04:00" level=error msg="Error invoking kubeadm: exit status 1
\nW0505 22:03:10.815596    3915 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
\nerror execution phase preflight: [preflight] Some fatal errors occurred:
  \n\t[ERROR ImagePull]: failed to pull image registry.opensuse.org/kubic/kube-apiserver:v1.18.0: output: time=\"2020-05-05T22:03:18-04:00\" level=fatal msg=\"pulling image failed: rpc error: code = Unknown desc = Error choosing image instance: no image found in manifest list for architecture amd64, OS linux\"
\n, error: exit status 1
  \n\t[ERROR ImagePull]: failed to pull image registry.opensuse.org/kubic/kube-controller-manager:v1.18.0: output: time=\"2020-05-05T22:03:26-04:00\" level=fatal msg=\"pulling image failed: rpc error: code = Unknown desc = Error choosing image instance: no image found in manifest list for architecture amd64, OS linux\"
\n, error: exit status 1
  \n\t[ERROR ImagePull]: failed to pull image registry.opensuse.org/kubic/kube-scheduler:v1.18.0: output: time=\"2020-05-05T22:03:33-04:00\" level=fatal msg=\"pulling image failed: rpc error: code = Unknown desc = Error choosing image instance: no image found in manifest list for architecture amd64, OS linux\"
\n, error: exit status 1
\n\t[ERROR ImagePull]: failed to pull image registry.opensuse.org/kubic/kube-proxy:v1.18.0: output: time=\"2020-05-05T22:03:41-04:00\" level=fatal msg=\"pulling image failed: rpc error: code = Unknown desc = Error choosing image instance: no image found in manifest list for architecture amd64, OS linux\"
\n, error: exit status 1
\n[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
\nTo see the stack trace of this error execute with --v=5 or higher
\n"
May 05 22:04:21 kubic-master kubicd[2182]: time="2020-05-05T22:04:21-04:00" level=info msg="Executing /usr/bin/kubeadm: [kubeadm reset --force]"
May 05 22:04:21 kubic-master systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
May 05 22:04:21 kubic-master kubicd[2182]: time="2020-05-05T22:04:21-04:00" level=info msg="[preflight] Running pre-flight checks
\n[reset] No etcd config found. Assuming external etcd
\n[reset] Please, manually reset etcd to prevent further issues
\n[reset] Stopping the kubelet service
\n[reset] Unmounting mounted directories in \"/var/lib/kubelet\"
\n[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
\n[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
\n[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

\n\nThe reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

\n\nThe reset process does not reset or clean up iptables rules or IPVS tables.
\nIf you wish to reset iptables, you must do so manually by using the \"iptables\" command.

\n\nIf your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
\nto reset your system's IPVS tables.

\n\nThe reset process does not clean your kubeconfig files and you must remove them manually.
\nPlease, check the contents of the $HOME/.kube/config file.
\n"
May 05 22:04:21 kubic-master kubicd[2182]: time="2020-05-05T22:04:21-04:00" level=info msg="Executing /usr/bin/systemctl: [systemctl disable --now crio]"
May 05 22:04:21 kubic-master systemd[1]: Reloading.

As you can see in the error, I get failed to pull image registry.opensuse.org/kubic/kube-proxy:v1.18.0, which I can confirm (on 2 different machines):

kubic-master:~ #  crictl --debug pull  registry.opensuse.org/kubic/kube-proxy:v1.18.0
DEBU[0000] PullImageRequest: &PullImageRequest{Image:&ImageSpec{Image:registry.opensuse.org/kubic/kube-proxy:v1.18.0,},Auth:nil,SandboxConfig:nil,}
DEBU[0001] PullImageResponse: nil
FATA[0001] pulling image failed: rpc error: code = Unknown desc = Error choosing image instance: no image found in manifest list for architecture amd64, OS linux

However, I don't think this is something I'm doing wrong because I can pull from your registry:

kubic-master:~ # crictl --debug pull  registry.opensuse.org/kubic/kube-proxy
DEBU[0000] PullImageRequest: &PullImageRequest{Image:&ImageSpec{Image:registry.opensuse.org/kubic/kube-proxy,},Auth:nil,SandboxConfig:nil,}
DEBU[0031] PullImageResponse: &PullImageResponse{ImageRef:registry.opensuse.org/kubic/kube-proxy@sha256:f9e0785b34594befe1c6bfa0f2fb05b90a6770b3bcfc7cf5a52b368f3099297e,}
Image is up to date for registry.opensuse.org/kubic/kube-proxy@sha256:f9e0785b34594befe1c6bfa0f2fb05b90a6770b3bcfc7cf5a52b368f3099297e

But I also don't think this is something you're doing wrong, because the image is available.

Note: I'm using a MITM cert from my company, so that could be causing errors, but I don't think it is because I can pull from the registry.
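
One way to check whether the v1.18.0 tag really has no amd64 entry in its manifest list might be to inspect the raw manifest with skopeo (assuming skopeo is available; the image reference is the one from the error above):

skopeo inspect --raw docker://registry.opensuse.org/kubic/kube-proxy:v1.18.0
# for a manifest list, check the "manifests" entries and their platform.architecture / platform.os fields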

Add a way to specify an FQDN to add to the certs

To access the cluster's management plane from another network you need to specify an FQDN to connect to. When connecting to the management plane's FQDN over HTTPS, your certs won't work unless the FQDN is in the certs.

To solve this issue, I ran these kubeadm steps. Unfortunately, those steps leave old networking configuration behind, as their output says:

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

So if there's a way to specify this FQDN in kubicctl it would be great, because I wouldn't have to do this extra step. I see kubicctl init's --adv-addr and kubicctl certificates. Is there a clean way to specify the FQDN that I'm not seeing?

Here's a full blog post about how to do this: https://blog.scottlowe.org/2019/07/30/adding-a-name-to-kubernetes-api-server-certificate/
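
For reference, the steps from that post amount to roughly the following sketch (the FQDN and the backup directory are placeholders, and the kube-apiserver static pod has to be restarted afterwards to pick up the new certificate):

# dump the ClusterConfiguration that kubeadm stores in the cluster
kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml
# edit kubeadm.yaml and add the FQDN under apiServer.certSANs, then regenerate the apiserver cert
mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /root/pki-backup/
kubeadm init phase certs apiserver --config kubeadm.yaml
# verify the new SANs
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'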

on new install, kubicctl fails

It complains that it was unable to invoke kubeadm. I am not sure what to do at this point. Last year, this was a very, very easy thing to accomplish. Something has broken somewhere.

Possible salt bug? (fresh install and default init)

master-1:~ # kubicctl init --haproxy k8s-lb.mydomain.com --multi-master k8s-lb.mydomain.com
Initializing kubernetes master can take several minutes, please be patient.
Setting up multi-master kubernetes node (reacheable as 'k8s-lb.mydomain.com') with weave
Configure haproxy on node k8s-lb.mydomain.com
Error invoking salt: exit status 1
(k8s-lb.mydomain.com:
    ERROR executing 'cmd.run': Executor '[direct_call]' is not available)

I assumed it was an issue with the LB I deployed, but when I reinstalled the entire cluster, with absolutely no change (keeping everything as close to default as possible), I am still getting this issue.

setup:
2 master nodes
3 worker nodes
1 LB node
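
The same executor error can usually be reproduced outside of kubicctl by calling cmd.run against the minion directly from the Salt master (the minion ID is assumed to match the hostname above):

salt 'k8s-lb.mydomain.com' test.ping
salt 'k8s-lb.mydomain.com' cmd.run 'uptime'

If the second command fails with the same "Executor '[direct_call]' is not available" message, the problem is on the Salt side rather than in kubicctl itself.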

Add a way to remove deployed apps

kubicctl allows you to deploy apps with kubicctl deploy ... but there's no kubicctl remove .... In my case this became an issue because I deployed metallb with only one available IP address.
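
As a rough interim workaround, whatever kubicctl deploy applied can usually be removed again with kubectl and the same manifests; the path below is only an assumption, analogous to the other YAML bundles under /usr/share/k8s-yaml:

kubectl delete -f /usr/share/k8s-yaml/metallb/    # path assumed; point it at whatever kubicctl actually applied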

[feature] The ability to set Kubernetes pod and service CIDRs via kubicctl

Our Kubernetes cluster resides on our corporate LAN and is thus bound by our existing deployed networks. The default pod and service network CIDRs overlap with those of existing networks and create numerous issues due to colliding IPs.

We need the ability to configure designated networks for Kubernetes services and pods at deploy time to avoid network collisions. At present kubeadm exposes --pod-network-cidr and --service-cidr on its init command for this purpose.
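
For comparison, the same settings can also be expressed in a kubeadm config file, which is what kubicd already generates for multi-master setups, so a kubicctl option could simply template them in; a minimal sketch (the CIDRs are examples only):

cat > clusterconfig-networking.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF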

Does not support cilium with the latest release of openSUSE

With the latest versions of cilium-k8s-yaml we now have the Helm chart installed on the system instead of a YAML file.

The code searches for the YAML in "/usr/share/k8s-yaml/cilium/cilium.yaml" to apply it. So the only way to deploy cilium is to create a new btrfs snapshot and put a cilium.yaml generated with helm template where kubicctl expects it to be.

Couldn't it be possible to use the Helm chart provided by cilium-k8s-yaml, or to embed a working YAML in the binary?
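
For reference, generating the expected cilium.yaml from the packaged chart might look roughly like this (the chart location is an assumption, and the copy into /usr/share needs a writable snapshot, e.g. a transactional-update shell):

helm template cilium /usr/share/k8s-yaml/cilium/ --namespace kube-system > cilium.yaml
cp cilium.yaml /usr/share/k8s-yaml/cilium/cilium.yaml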

all kubicctl commands throw rpc error

Hello, I just installed my kubic k8s cluster last week with 3 master nodes.
Now when I try to run any kubicctl command, the following error gets thrown:

rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: authentication handshake failed: x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0"

Everything is working fine as is with kubectl.
No errors to be found in journalctl | grep kubicd.
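
As the error message itself suggests, a temporary workaround might be to re-enable legacy Common Name matching in the Go TLS stack when running kubicctl, until the kubicd/kubicctl certificates are regenerated with proper SANs:

export GODEBUG=x509ignoreCN=0
kubicctl version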

salt issue on 0.12.3

k8s-master-1:~ # kubicctl version
kubicctl version 0.12.3
k8s-master-1:~ # kubicctl init --haproxy=k8s-lb-1.MYDOMAIN.com --multi-master=k8s-lb-1.MYDOMAIN.com
Initializing kubernetes master can take several minutes, please be patient.
Setting up multi-master kubernetes node (reacheable as 'k8s-lb-1.MYDOMAIN.com') with weave
Configure haproxy on node k8s-lb-1.MYDOMAIN.com
Error invoking salt: exit status 1
(k8s-lb-1.MYDOMAIN.com:
    ERROR executing 'cmd.run': Executor '[direct_call]' is not available)

I don't know if @aplanas' change has been implemented in this build. If it is, it seems like it is still causing issues. I am using the devel repo.

--adv-addr not working

Hello, I tried to play with kubicctl and kube-vip.
Unfortunately I'm hard stuck: I can't set my --adv-addr (full command: kubicctl init --multi-master 192.168.179.60 --adv-addr 192.168.179.61):

Jul 16 13:12:08 kubic-m1 kubicd[2070]: time="2020-07-16T13:12:08Z" level=info msg="Executing /usr/bin/rpm: [rpm -q --qf '%{VERSION}' kubernetes-kubeadm]"
Jul 16 13:12:08 kubic-m1 kubicd[2070]: time="2020-07-16T13:12:08Z" level=info msg="'1.18.4'"
Jul 16 13:12:08 kubic-m1 kubicd[2070]: time="2020-07-16T13:12:08Z" level=info msg="Calling kubeadm '[init --apiserver-advertise-address=192.168.179.61 --config=/var/lib/kubic-control/multi-master/kubeadm-config.yaml]'"
Jul 16 13:12:08 kubic-m1 kubicd[2070]: time="2020-07-16T13:12:08Z" level=info msg="Executing /usr/bin/kubeadm: [kubeadm init --apiserver-advertise-address=192.168.179.61 --config=/var/lib/kubic-control/multi-master/kubeadm-config.yaml]"
Jul 16 13:12:08 kubic-m1 kubicd[2070]: time="2020-07-16T13:12:08Z" level=error msg="Error invoking kubeadm: exit status 1\ncan not mix '--config' with arguments [apiserver-advertise-address]\nTo see the stack trace of this error execute with --v=5 or higher\n"
Jul 16 13:12:08 kubic-m1 kubicd[2070]: time="2020-07-16T13:12:08Z" level=info msg="Executing /usr/bin/kubeadm: [kubeadm reset --force]"
Jul 16 13:12:08 kubic-m1 kubicd[2070]: time="2020-07-16T13:12:08Z" level=info msg="[preflight] Running pre-flight checks\n[reset] No etcd config found. Assuming external etcd\n[reset] Please, manually reset etcd to prevent further issues\n[reset] Stopping the kubelet service\n[reset] Unmounting mounted directories in \"/var/lib/kubelet\"\n[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]\n[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]\n[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]\n\nThe reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d\n\nThe reset process does not reset or clean up iptables rules or IPVS tables.\nIf you wish to reset iptables, you must do so manually by using the \"iptables\" command.\n\nIf your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)\nto reset your system's IPVS tables.\n\nThe reset process does not clean your kubeconfig files and you must remove them manually.\nPlease, check the contents of the $HOME/.kube/config file.\n"
Jul 16 13:12:08 kubic-m1 kubicd[2070]: time="2020-07-16T13:12:08Z" level=info msg="Executing /usr/bin/systemctl: [systemctl disable --now crio]"

Maybe a problem with kubeadm 1.18.x.

P.S.: Multi-master with an IP address might actually work (GCloud also uses IP addresses for its LBs). I probably also need to wait for a new kubicctl release so I can add another SAN to the certificate, because last time I couldn't add a second master due to failing certificates; I'm pretty sure that multi-master currently only allows DNS names.
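
Since kubeadm refuses to mix --config with --apiserver-advertise-address, the address would have to go into the generated config file instead. A minimal sketch of what could be appended to the kubeadm-config.yaml from the log above (address taken from the failing command):

cat >> /var/lib/kubic-control/multi-master/kubeadm-config.yaml <<EOF
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.179.61
EOF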

[feature] Make ClusterConfiguration configurable

Currently there is no option to handle any special configuration for the cluster (for example, to enable OIDC).
By making the ClusterConfiguration configurable you can set up the Kubernetes cluster as you like.

Proposal:
1.) Handle the ClusterConfiguration with the https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm#ClusterConfiguration struct instead of as a string.
2.) Update the ClusterConfiguration object with user input from the kubicctl.conf key ClusterConfiguration.
3.) Merge in the values kubicd currently hard-codes for the multi-master setup:

_, err = f.WriteString("apiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterConfiguration\nkubernetesVersion: " + kubernetes_version + "\ncontrolPlaneEndpoint: \"" + in.MultiMaster + ":6443\"\n")

4.) Write the object to /var/lib/kubic-control/clusterconfiguration.yaml.
5.) Update the kubeadm arguments with the new location.
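
A minimal sketch of what a user-supplied ClusterConfiguration could look like once merged and written out (all values are examples; the OIDC flags are just the motivating case mentioned above):

cat > /var/lib/kubic-control/clusterconfiguration.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.4
controlPlaneEndpoint: "k8s-lb.example.com:6443"
apiServer:
  extraArgs:
    oidc-issuer-url: https://login.example.com
    oidc-client-id: kubernetes
EOF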

If you like, I can implement my proposal and create an MR.

Add a way to deploy ceph

Kubic seems to include ceph yaml configs in /usr/share/k8s-yaml/rook/ceph, but there's no way to run kubicctl deploy ceph.
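
Until a kubicctl deploy ceph exists, a rough manual workaround might be to apply the bundled manifests directly with kubectl (the exact file names in that directory are an assumption; rook generally wants the common/operator manifests applied before the cluster one):

ls /usr/share/k8s-yaml/rook/ceph
kubectl apply -f /usr/share/k8s-yaml/rook/ceph/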

kubic-control is not taking care about the certs :-(

Unfortunately kubicctl is not maintaining the kubelet certs after joining the cluster :-(
kubic-n1:~ # curl -v https://localhost:10250 -k 2>&1 | grep 'expire date'
*  expire date: Apr 26 12:19:52 2022 GMT

So kubeadm is not taking care of it, --rotate-certificates is not configured, and kubicctl is not taking care of it either.
It is already annoying that I have to fix the manifests from time to time to refresh kured, as it goes out of date quite quickly in the openSUSE repo...

Edit: I have to add that kubelet-client-current.pem exists and is valid, but it is not being used, even though it is configured under /etc/kubernetes/kubelet.conf.

Edit 2:

May 18 15:59:22 kubic-n1 kubelet[30776]: I0518 15:59:22.623508 30776 server.go:874] "Client rotation is on, will bootstrap in background"
May 18 15:59:22 kubic-n1 kubelet[30776]: I0518 15:59:22.625617 30776 bootstrap.go:84] "Current kubeconfig file contents are still valid, no bootstrap necessary"
May 18 15:59:22 kubic-n1 kubelet[30776]: I0518 15:59:22.625724 30776 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 18 15:59:22 kubic-n1 kubelet[30776]: I0518 15:59:22.626056 30776 server.go:918] "Starting client certificate rotation"
May 18 15:59:22 kubic-n1 kubelet[30776]: I0518 15:59:22.626090 30776 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
May 18 15:59:22 kubic-n1 kubelet[30776]: I0518 15:59:22.626235 30776 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2023-02-17 20:06:35 +0000 UTC, rota>
May 18 15:59:22 kubic-n1 kubelet[30776]: I0518 15:59:22.626306 30776 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Waiting 4150h50m43.164241273s for next certificate rotation
May 18 15:59:22 kubic-n1 kubelet[30776]: I0518 15:59:22.626939 30776 dynamic_cafile_content.go:118] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 18 15:59:22 kubic-n1 kubelet[30776]: I0518 15:59:22.627034 30776 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 18 15:59:22 kubic-n1 kubelet[30776]: W0518 15:59:22.627096 30776 manager.go:159] Cannot detect current cgroup on cgroup v2

So it does take care of the certs, but somehow they are not used correctly, which is strange...
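
One way to see which certificate is which might be to compare the kubelet's serving certificate on port 10250 (what the curl above shows) with the client certificate that the rotation messages refer to:

curl -vk https://localhost:10250 2>&1 | grep 'expire date'
openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -enddate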

Default `kubicctl init` not working

Summary

kubicctl init failed with the following error. I didn't change any of the default kubic settings except the password.

The only unusual things about my setup are:

  • I'm running kubic inside vmware vsphere 6.7.
  • I'm in a corporate network with firewalls etc., which may be causing errors related to this kubic issue about IP forwarding (a quick check is sketched below).
    • I doubt this is the issue, because I can use salt in the network to add nodes and salt '*' test.ping works.
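
A quick way to rule out the IP-forwarding angle might be to check the relevant sysctls on the node (the bridge one only exists once the br_netfilter module is loaded); both are expected to report 1:

sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-iptables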

Please let me know if you have any ideas!

Error

localhost:/etc/kubicd # kubicctl init
Initializing kubernetes master can take several minutes, please be patient.
Setting up single-master kubernetes node with weave
Initialize Kubernetes control-plane
Error invoking kubeadm: exit status 1

Debugging

As far as I can tell everything is installed according to the docs as you can see below.

localhost:/etc/kubicd # systemctl enable --now kubicd-init
localhost:/etc/kubicd # systemctl enable --now kubicd
localhost:/etc/kubicd # kubicctl init
Initializing kubernetes master can take several minutes, please be patient.
Setting up single-master kubernetes node with weave
Initialize Kubernetes control-plane
Error invoking kubeadm: exit status 1
localhost:~ # mkdir .config/kubicctl
localhost:~/ # cd .config/kubicctl
localhost:~/.config/kubicctl # kubicctl certificates initialize
Error invoking certstrap: exit status 1
CA with specified name "Kubic-Control-CA" already exists!

Error creating CA: exit status 1
localhost:~/.config/kubicctl # ls /root/.config/kubicctl/
localhost:~/.config/kubicctl # ls /etc/kubicd/pki
Kubic-Control-CA.crl  Kubic-Control-CA.key  KubicD.csr  admin.crt  admin.key
Kubic-Control-CA.crt  KubicD.crt            KubicD.key  admin.csr
localhost:~/.config/kubicctl # ls /usr/etc/kubicd/
kubicd.conf  rbac.conf
localhost:~/.config/kubicctl # kubicctl init
Initializing kubernetes master can take several minutes, please be patient.
Setting up single-master kubernetes node with weave
Initialize Kubernetes control-plane
Error invoking kubeadm: exit status 1

Full journalctl

Apr 14 20:35:37 localhost kubicd[1919]: time="2020-04-14T20:35:37Z" level=info msg="Received: Init Master"
Apr 14 20:35:37 localhost kubicd[1919]: time="2020-04-14T20:35:37Z" level=info msg="Executing /usr/bin/systemctl: [s>
Apr 14 20:35:37 localhost systemd[1]: Reloading.
Apr 14 20:35:37 localhost systemd[1]: /usr/lib/systemd/system/vmtoolsd.service:12: PIDFile= references a path below >
Apr 14 20:35:37 localhost systemd[1]: Starting CRI-O Auto Update Script...
Apr 14 20:35:37 localhost crio[3219]: major and minor version unchanged; no wipe needed
Apr 14 20:35:37 localhost systemd[1]: crio-wipe.service: Succeeded.
Apr 14 20:35:37 localhost systemd[1]: Started CRI-O Auto Update Script.
Apr 14 20:35:37 localhost systemd[1]: Starting Container Runtime Interface for OCI (CRI-O)...
Apr 14 20:35:37 localhost unknown: testing the buffer
Apr 14 20:35:37 localhost unknown: testing the buffer
Apr 14 20:35:37 localhost systemd[1]: Started Container Runtime Interface for OCI (CRI-O).
Apr 14 20:35:37 localhost kubicd[1919]: time="2020-04-14T20:35:37Z" level=info
Apr 14 20:35:37 localhost kubicd[1919]: time="2020-04-14T20:35:37Z" level=info msg="Executing /usr/bin/systemctl: [s>
Apr 14 20:35:37 localhost systemd[1]: Reloading.
Apr 14 20:35:37 localhost systemd[1]: /usr/lib/systemd/system/vmtoolsd.service:12: PIDFile= references a path below >
Apr 14 20:35:38 localhost systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Apr 14 20:35:38 localhost bash[3270]: TARGET      SOURCE FSTYPE OPTIONS
Apr 14 20:35:38 localhost bash[3270]: /sys/fs/bpf none   bpf    rw,nosuid,nodev,noexec,relatime,mode=700
Apr 14 20:35:38 localhost kubicd[1919]: time="2020-04-14T20:35:38Z" level=info
Apr 14 20:35:38 localhost kubicd[1919]: time="2020-04-14T20:35:38Z" level=info msg="Executing /usr/bin/rpm: [rpm -q >
Apr 14 20:35:38 localhost systemd[1]: Started kubelet: The Kubernetes Node Agent.
Apr 14 20:35:38 localhost kubicd[1919]: time="2020-04-14T20:35:38Z" level=info msg="'1.18.0'"
Apr 14 20:35:38 localhost kubicd[1919]: time="2020-04-14T20:35:38Z" level=info msg="Calling kubeadm '[init --kuberne>

[The remainder of the pasted journal is truncated at the terminal width and several lines ran together: kubeadm is invoked, the kubelet dumps all of its startup FLAG values and then fails with "failed to load Kubelet config file", kubeadm exits with status 1, and kubicd runs "kubeadm reset --force" and disables crio and the kubelet. The untruncated tail of the same journal follows.]
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331530    3273 flags.go:33] FLAG: --rotate-certificates="false"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331533    3273 flags.go:33] FLAG: --rotate-server-certificates="false"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331537    3273 flags.go:33] FLAG: --runonce="false"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331540    3273 flags.go:33] FLAG: --runtime-cgroups=""
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331544    3273 flags.go:33] FLAG: --runtime-request-timeout="15m0s"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331547    3273 flags.go:33] FLAG: --seccomp-profile-root="/var/lib/kubelet/seccomp"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331554    3273 flags.go:33] FLAG: --serialize-image-pulls="true"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331557    3273 flags.go:33] FLAG: --skip-headers="false"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331561    3273 flags.go:33] FLAG: --skip-log-headers="false"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331565    3273 flags.go:33] FLAG: --stderrthreshold="2"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331568    3273 flags.go:33] FLAG: --storage-driver-buffer-duration="1m0s"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331572    3273 flags.go:33] FLAG: --storage-driver-db="cadvisor"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331576    3273 flags.go:33] FLAG: --storage-driver-host="localhost:8086"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331580    3273 flags.go:33] FLAG: --storage-driver-password="root"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331585    3273 flags.go:33] FLAG: --storage-driver-secure="false"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331589    3273 flags.go:33] FLAG: --storage-driver-table="stats"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331592    3273 flags.go:33] FLAG: --storage-driver-user="root"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331596    3273 flags.go:33] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331600    3273 flags.go:33] FLAG: --sync-frequency="1m0s"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331603    3273 flags.go:33] FLAG: --system-cgroups=""
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331607    3273 flags.go:33] FLAG: --system-reserved=""
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331610    3273 flags.go:33] FLAG: --system-reserved-cgroup=""
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331614    3273 flags.go:33] FLAG: --tls-cert-file=""
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331618    3273 flags.go:33] FLAG: --tls-cipher-suites="[]"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331624    3273 flags.go:33] FLAG: --tls-min-version=""
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331627    3273 flags.go:33] FLAG: --tls-private-key-file=""
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331631    3273 flags.go:33] FLAG: --topology-manager-policy="none"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331634    3273 flags.go:33] FLAG: --v="2"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331638    3273 flags.go:33] FLAG: --version="false"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331648    3273 flags.go:33] FLAG: --vmodule=""
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331653    3273 flags.go:33] FLAG: --volume-plugin-dir="/var/lib/kubelet/volume-plugin"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331658    3273 flags.go:33] FLAG: --volume-stats-agg-period="1m0s"
Apr 14 20:35:38 localhost kubelet[3273]: I0414 20:35:38.331708    3273 feature_gate.go:243] feature gates: &{map[]}
Apr 14 20:35:38 localhost kubelet[3273]: F0414 20:35:38.331809    3273 server.go:199] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open >
Apr 14 20:35:38 localhost systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Apr 14 20:35:38 localhost systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 20:35:38 localhost kubicd[1919]: time="2020-04-14T20:35:38Z" level=error msg="Error invoking kubeadm: exit status 1\nW0414 20:35:38.225829    3274 configset.go:202] WARNING: kubeadm cannot validate component configs for API group>
Apr 14 20:35:38 localhost kubicd[1919]: time="2020-04-14T20:35:38Z" level=info msg="Executing /usr/bin/kubeadm: [kubeadm reset --force]"
Apr 14 20:35:38 localhost systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Apr 14 20:35:38 localhost kubicd[1919]: time="2020-04-14T20:35:38Z" level=info msg="[preflight] Running pre-flight checks\n[reset] No etcd config found. Assuming external etcd\n[reset] Please, manually reset etcd to prevent further issu>
Apr 14 20:35:38 localhost kubicd[1919]: time="2020-04-14T20:35:38Z" level=info msg="Executing /usr/bin/systemctl: [systemctl disable --now crio]"
Apr 14 20:35:38 localhost systemd[1]: Reloading.
Apr 14 20:35:38 localhost systemd[1]: /usr/lib/systemd/system/vmtoolsd.service:12: PIDFile= references a path below legacy directory /var/run/, updating /var/run/vmtoolsd.pid → /run/vmtoolsd.pid; please update the unit file accordingly.
Apr 14 20:35:38 localhost systemd[2860]: var-lib-containers-storage-btrfs.mount: Succeeded.
Apr 14 20:35:38 localhost systemd[1]: Stopping Container Runtime Interface for OCI (CRI-O)...
Apr 14 20:35:38 localhost systemd[1]: var-lib-containers-storage-btrfs.mount: Succeeded.
Apr 14 20:35:38 localhost systemd[1]: crio.service: Succeeded.
Apr 14 20:35:38 localhost systemd[1]: Stopped Container Runtime Interface for OCI (CRI-O).
Apr 14 20:35:38 localhost kubicd[1919]: time="2020-04-14T20:35:38Z" level=info
Apr 14 20:35:38 localhost kubicd[1919]: time="2020-04-14T20:35:38Z" level=info msg="Executing /usr/bin/systemctl: [systemctl disable --now kubelet]"
Apr 14 20:35:38 localhost systemd[1]: Reloading.
Apr 14 20:35:39 localhost systemd[1]: /usr/lib/systemd/system/vmtoolsd.service:12: PIDFile= references a path below legacy directory /var/run/, updating /var/run/vmtoolsd.pid → /run/vmtoolsd.pid; please update the unit file accordingly.
Apr 14 20:35:39 localhost kubicd[1919]: time="2020-04-14T20:35:39Z" level=info
Apr 14 20:35:39 localhost kubicd[1919]: time="2020-04-14T20:35:39Z" level=info msg="Function: /api.Kubeadm/InitMaster, Caller: admin, Duration: 2.144213309s, Error: <nil>"

Internal DNS doesn't work on a newly added node

After adding a third worker node with kubicctl node add <node name>, I can't reach services running inside the cluster from pods running on the new node. To work around this I deleted the new node, so unfortunately I can't provide logs. If you can't reproduce this issue, feel free to close it.
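
If it shows up again, a couple of checks that might narrow it down before deleting the node: verify that the CNI and kube-proxy pods are actually Running on the new node, and that CoreDNS itself is healthy:

kubectl -n kube-system get pods -o wide | grep <node name>
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide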

Cert error on kubicctl upgrade

bug

When I run kubicctl upgrade I get the following error:

master:~ # kubicctl upgrade
Upgrading kubernetes can take a very long time, please be patient.
Could not upgrade: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: authentication handshake failed: x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0"

If I export the following variable from the error message (and from here), the upgrade works:

master:~ #  export GODEBUG=x509ignoreCN=0
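
To confirm that the kubicd certificate really only carries a legacy Common Name and no SANs (path as in the default /etc/kubicd/pki layout shown elsewhere in these issues):

openssl x509 -in /etc/kubicd/pki/KubicD.crt -noout -text | grep -A1 'Subject Alternative Name'
# no output here means the certificate has no SAN extension, which is what newer Go TLS stacks reject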

logs

  • Here are some logs from around the time I hit this issue. After this I had to run transactional-update up on all my nodes to get kubicctl update to work as you can see in the logs.
Apr 15 18:56:46 master systemd[1]: Started Kubic Daemon to setup Kubernetes.
Apr 15 18:56:46 master kubicd[2145]: time="2021-04-15T18:56:46-04:00" level=info msg="Kubic Daemon: 0.10.2"
May 19 06:33:45 master kubicd[2145]: time="2021-05-19T06:33:45-04:00" level=info msg="Received: upgrade Kubernetes"
May 19 06:33:45 master kubicd[2145]: time="2021-05-19T06:33:45-04:00" level=info msg="Executing /usr/bin/rpm: [rpm -q --qf '%{VERSION}' kubernetes-kubeadm]"
May 19 06:33:45 master kubicd[2145]: time="2021-05-19T06:33:45-04:00" level=info msg="'1.20.2'"
May 19 06:33:45 master kubicd[2145]: time="2021-05-19T06:33:45-04:00" level=info msg="Executing /usr/bin/kubeadm: [kubeadm upgrade plan v1.20.2]"
May 19 06:33:47 master kubicd[2145]: time="2021-05-19T06:33:47-04:00" level=info msg="[upgrade/config] Making sure the configuration is correct:\n[upgrade/config] Reading configuration from the cluster...\n[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'\n[preflight] Running pre-flight checks.\n[upgrade] Running cluster health checks\n[upgrade] Fetching available versions to upgrade to\n[upgrade/versions] Cluster version: v1.20.1\n[upgrade/versions] kubeadm version: v1.20.2\n[upgrade/versions] Latest stable version: v1.20.2\n[upgrade/versions] Latest version in the v1.20 series: v1.20.2\n\nComponents that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':\nCOMPONENT   CURRENT       AVAILABLE\nkubelet     1 x v1.20.2   v1.20.2\n            2 x v1.20.6   v1.20.2\n\nUpgrade to the latest version in the v1.20 series:\n\nCOMPONENT                 CURRENT    AVAILABLE\nkube-apiserver            v1.20.1    v1.20.2\nkube-controller-manager   v1.20.1    v1.20.2\nkube-scheduler            v1.20.1    v1.20.2\nkube-proxy                v1.20.1    v1.20.2\nCoreDNS                   1.7.0      1.7.0\netcd                      3.4.13-0   3.4.13-0\n\nYou can now apply the upgrade by executing the following command:\n\n\tkubeadm upgrade apply v1.20.2\n\n_____________________________________________________________________\n\n\nThe table below shows the current state of component configs as understood by this version of kubeadm.\nConfigs that have a \"yes\" mark in the \"MANUAL UPGRADE REQUIRED\" column require manual config upgrade or\nresetting to kubeadm defaults before a successful upgrade can be performed. The version to manually\nupgrade to is denoted in the \"PREFERRED VERSION\" column.\n\nAPI GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED\nkubeproxy.config.k8s.io   v1alpha1          v1alpha1            no\nkubelet.config.k8s.io     v1beta1           v1beta1             no\n_____________________________________________________________________\n\n"
May 19 06:33:47 master kubicd[2145]: time="2021-05-19T06:33:47-04:00" level=info msg="Executing /usr/bin/kubectl: [kubectl --kubeconfig=/etc/kubernetes/admin.conf drain master --timeout 10m --delete-local-data --force --ignore-daemonsets]"
May 19 06:34:04 master kubicd[2145]: time="2021-05-19T06:34:04-04:00" level=info msg="node/master cordoned\nevicting pod kubernetes-dashboard/kubernetes-dashboard-74d688b6bc-hkvj4\nevicting pod kube-system/coredns-694d7767d4-85b5j\nevicting pod kube-system/coredns-694d7767d4-jxxlr\nevicting pod kubernetes-dashboard/dashboard-metrics-scraper-7b59f7d4df-l76kn\npod/coredns-694d7767d4-85b5j evicted\npod/dashboard-metrics-scraper-7b59f7d4df-l76kn evicted\npod/kubernetes-dashboard-74d688b6bc-hkvj4 evicted\npod/coredns-694d7767d4-jxxlr evicted\nnode/master evicted\n"
May 19 06:34:04 master kubicd[2145]: time="2021-05-19T06:34:04-04:00" level=info msg="Executing /usr/bin/kubeadm: [kubeadm upgrade apply v1.20.2 --yes]"
May 19 06:34:20 master kubicd[2145]: time="2021-05-19T06:34:20-04:00" level=error msg="Error invoking kubeadm: exit status 2\n[preflight] Some fatal errors occurred:\n\t[ERROR ImagePull]: failed to pull image registry.opensuse.org/kubic/kube-apiserver:v1.20.2: output: time=\"2021-05-19T06:34:09-04:00\" level=fatal msg=\"pulling image: rpc error: code = Unknown desc = Error choosing image instance: no image found in manifest list for architecture amd64, variant \\\"\\\", OS linux\"\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image registry.opensuse.org/kubic/kube-controller-manager:v1.20.2: output: time=\"2021-05-19T06:34:12-04:00\" level=fatal msg=\"pulling image: rpc error: code = Unknown desc = Error choosing image instance: no image found in manifest list for architecture amd64, variant \\\"\\\", OS linux\"\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image registry.opensuse.org/kubic/kube-scheduler:v1.20.2: output: time=\"2021-05-19T06:34:16-04:00\" level=fatal msg=\"pulling image: rpc error: code = Unknown desc = Error choosing image instance: no image found in manifest list for architecture amd64, variant \\\"\\\", OS linux\"\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image registry.opensuse.org/kubic/kube-proxy:v1.20.2: output: time=\"2021-05-19T06:34:20-04:00\" level=fatal msg=\"pulling image: rpc error: code = Unknown desc = Error choosing image instance: no image found in manifest list for architecture amd64, variant \\\"\\\", OS linux\"\n, error: exit status 1\n[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`\nTo see the stack trace of this error execute with --v=5 or higher\n"
May 19 06:34:20 master kubicd[2145]: time="2021-05-19T06:34:20-04:00" level=info msg="Executing /usr/bin/kubectl: [kubectl --kubeconfig=/etc/kubernetes/admin.conf uncordon master]"
May 19 06:34:24 master kubicd[2145]: time="2021-05-19T06:34:24-04:00" level=info msg="node/master uncordoned\n"
May 19 06:34:24 master kubicd[2145]: time="2021-05-19T06:34:24-04:00" level=info msg="Executing /usr/bin/salt: [salt -G kubicd:kubic-worker-node grains.get kubic-worker-node]"
May 19 06:34:25 master kubicd[2145]: time="2021-05-19T06:34:25-04:00" level=info msg="hpe.verizon.net:\nthor.verizon.net:\n"
May 19 06:34:25 master kubicd[2145]: time="2021-05-19T06:34:25-04:00" level=info msg="Function: /api.Kubeadm/UpgradeKubernetes, Caller: admin, Duration: 39.875934442s, Error: rpc error: code = Unavailable desc = transport is closing"

Other information

IIRC, when I installed I used the flag kubicctl init --adv-addr <my-hostname.com>, so that could be part of the problem.

kubicctl init fails with missing /var/lib/kubelet/config.yaml file

Hey there,

I have a fresh installation of openSUSE Kubic with the Kubic Admin Node system role selected, as described in the docs for the master node.

When I try to run the command
kubicctl init

I get the following result:

Initializing kubernetes master can take several minutes, please be patient.
Setting up single-master kubernetes node with weave
Initialize Kubernetes control-plane
Error invoking kubeadm: exit status 1

In the logs I can see the cause of this, which looks like:

Mar 04 19:14:50 localhost kubelet[3593]: F0304 19:14:50.702454    3593 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/>
Mar 04 19:14:50 localhost systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Mar 04 19:14:50 localhost systemd[1]: kubelet.service: Failed with result 'exit-code'.

Since I am new to kubic, I have no idea what the problem is or if there is a bug.

I have tried a transactional-update of my system, but this did not fix the issue.

I've used the config found here: https://github.com/thkukuk/kubic-control/tree/master/etc/kubicd, which is mentioned in this guide: https://en.opensuse.org/Kubic:KubicD_and_kubicctl

0.12.3 seems to be still broken

Hi,

I was just trying the latest release, kubicctl 0.12.3, and am seeing some strange results:

kubic-test-master:~ # salt-key -A
The key glob '*' does not match any unaccepted keys.
kubic-test-master:~ # salt-key -L
Accepted Keys:
kubic-test-worker.suse
Denied Keys:
Unaccepted Keys:
Rejected Keys:
kubic-test-master:~ # salt kubic-test-worker.suse test.ping
kubic-test-worker.suse:
    True
kubic-test-master:~ # kubicctl node add kubic-test-worker.suse
ERRO[0000] could not initialize: rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: x509: certificate relies on legacy Common Name field, use SANs instead" 
kubic-test-master:~ # rpm -qa | grep -i kubicctl
kubicctl-0.12.3-54.1.x86_64
kubic-test-master:~ # rpm -qa | grep -i kubicd
kubicd-0.12.3-54.1.x86_64

I had it built on OBS (mind you, at this point I'm an eager amateur and have incredibly limited experience with OBS, so it's absolutely possible that I messed something up): https://build.opensuse.org/package/show/home:adathor:branches:openSUSE:Factory/kubic-control

Add a debug flag

Add a debug flag that enables more verbose output, for example which line the program failed on. This would make it easier to see what kubicctl is doing under the hood.
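One possible shape for this, sketched under the assumption that the daemon logs through github.com/sirupsen/logrus (the timestamped "level=info msg=..." journal lines elsewhere in these issues look like its text formatter); the flag name and wiring here are hypothetical, not kubicctl's actual CLI:

// debug_flag.go: a minimal sketch of wiring a --debug flag to more verbose
// logrus output. Hypothetical only, not kubicd's actual implementation.
package main

import (
	"flag"

	log "github.com/sirupsen/logrus"
)

func main() {
	debug := flag.Bool("debug", false, "verbose logging with caller file:line")
	flag.Parse()

	if *debug {
		log.SetLevel(log.DebugLevel)
		// Report the file and line that emitted each entry, which is
		// exactly the "what line did it fail on" information asked for.
		log.SetReportCaller(true)
	}

	log.Debug("debug logging enabled")
	log.Info("starting up")
}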

Unable to add node, exits with `ERRO[0060] could not initialize: rpc error: code = DeadlineExceeded desc = context deadline exceeded`

Hi, I'm testing out Kubic to dive into Kubernetes using these instructions: https://en.opensuse.org/Kubic:KubicD_and_kubicctl

  • I installed a Kubic Admin Node and a Kubic Worker Node (linux-3ks2) with VirtualBox.
    Both VMs are connected with a Host-Only Adapter.
  • After installation I simply added the hostnames into the /etc/hosts file for DNS resolution.
  • kubicctl init on the master worked properly
  • adding the worker as a minion also worked as expected (salt commands are fully functional)
  • When I try to add the worker node with kubicctl node add linux-3ks2, the command fails after a minute with the following message:
ERRO[0060] could not initialize: rpc error: code = DeadlineExceeded desc = context deadline exceeded

How can I debug the process to track down the issue?
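For what it's worth, the DeadlineExceeded message is produced on the client side: kubicctl's gRPC call gives up once its context deadline (apparently about 60 seconds, judging by the ERRO[0060] prefix) expires, even though kubicd may still be working on the join. Checking journalctl -u kubicd on the admin node around that time should show how far the join actually got. A minimal sketch of how that error arises; the address, port and timeout below are illustrative assumptions, not kubicctl's actual settings.

// deadline.go: a minimal sketch of how a gRPC client ends up with
// "rpc error: code = DeadlineExceeded desc = context deadline exceeded":
// the client-side context expires before the server has answered, even if
// the server keeps working. Address, port and timeout are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Dialing is lazy; the connection is only attempted on the first RPC.
	conn, err := grpc.Dial("admin.example.com:7148",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Every RPC made with this context is abandoned after 60 seconds.
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	_, err = healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	fmt.Println(err) // code = DeadlineExceeded if the server never answers in time
}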

Invalid bootstrap token when adding new node

When adding a new node to the cluster:

# kubicctl node add devkubewkr02
Generate new token ...
Error invoking kubeadm: exit status 1

I found this in the kubicd logs:

Jul 23 10:58:40 devkubemst01 kubicd[1740]: time="2020-07-23T10:58:40Z" level=info msg="Received: add node  devkubewkr02"
Jul 23 10:58:40 devkubemst01 kubicd[1740]: time="2020-07-23T10:58:40Z" level=info msg="Token to join nodes too old, creating new one"
Jul 23 10:58:40 devkubemst01 kubicd[1740]: time="2020-07-23T10:58:40Z" level=info msg="Executing /usr/bin/kubeadm: [kubeadm token create --print-join-command 2>/dev/null]"
Jul 23 10:58:41 devkubemst01 kubicd[1740]: time="2020-07-23T10:58:41Z" level=error msg="Error invoking kubeadm: exit status 1\nthe bootstrap token \"2>/dev/null\" was not of the form \"\\\\A([a-z0-9]{6})\\\\.([a-z0-9]{16})\\\\z\"\nTo >
Jul 23 10:58:41 devkubemst01 kubicd[1740]: time="2020-07-23T10:58:41Z" level=info msg="Function: /api.Kubeadm/AddNode, Caller: admin, Duration: 782.466674ms, Error: <nil>"

I also tried manually creating a token with kubeadm token create but kubicctl still tries to create a new one and fails.

Is there any way for me to manually create a token and get kubicctl to see it (so it doesn't try to generate a new one), so I can work around this?

The admin node and the worker are both running MicroOS 20200720 with kubicctl 0.10.0-1.1.
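For reference, the regular expression quoted in the log above is the format kubeadm accepts for a bootstrap token, which is why the stray 2>/dev/null argument is rejected outright; and as the log shows, kubicd decides the stored token is too old and creates a new one itself, so a manually created token doesn't help. A small sketch of that format check (the sample token below is made up):

// token_form.go: a small sketch of the bootstrap-token format check quoted
// in the log ("\A([a-z0-9]{6})\.([a-z0-9]{16})\z"). The sample token is
// made up for illustration.
package main

import (
	"fmt"
	"regexp"
)

var bootstrapToken = regexp.MustCompile(`\A([a-z0-9]{6})\.([a-z0-9]{16})\z`)

func main() {
	for _, candidate := range []string{
		"abcdef.0123456789abcdef", // well-formed: 6 chars, a dot, 16 chars
		"2>/dev/null",             // the literal redirection string from the log
	} {
		fmt.Printf("%-26q valid=%v\n", candidate, bootstrapToken.MatchString(candidate))
	}
}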

Cilium Image

Hi Thorsten,

I'm not sure if you want bugs for this here or in bugzilla, but I found one:

Command I ran:

kubicctl init --pod-network cilium

Error:

Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  3m59s                  default-scheduler  Successfully assigned kube-system/cilium-2mrjh to worker1
  Normal   Pulling    2m21s (x4 over 3m52s)  kubelet, worker1   Pulling image "registry.opensuse.org/devel/kubic/containers/container/kubic/cilium-init:1.4"
  Warning  Failed     2m21s (x4 over 3m51s)  kubelet, worker1   Failed to pull image "registry.opensuse.org/devel/kubic/containers/container/kubic/cilium-init:1.4": rpc error: code = Unknown desc = Error reading manifest 1.4 in registry.opensuse.org/devel/kubic/containers/container/kubic/cilium-init: manifest unknown
  Warning  Failed     2m21s (x4 over 3m51s)  kubelet, worker1   Error: ErrImagePull
  Normal   BackOff    2m7s (x6 over 3m51s)   kubelet, worker1   Back-off pulling image "registry.opensuse.org/devel/kubic/containers/container/kubic/cilium-init:1.4"
  Warning  Failed     114s (x7 over 3m51s)   kubelet, worker1   Error: ImagePullBackOff

According to registry.opensuse.org, the image should now be registry.opensuse.org/opensuse/factory/totest/containers/kubic/cilium-init:1.5, not 1.4.

Command kubicctl node add not working

While trying to add a new node on the latest release of openSUSE Kubic (20200717), I get the following error:

master01:~ # kubicctl node add worker01
Generate new token ...
Error invoking kubeadm: exit status 1

When searching the logs I get the following:

● kubicd.service - Kubic Daemon to setup Kubernetes
     Loaded: loaded (/usr/lib/systemd/system/kubicd.service; enabled; vendor preset: disabled)
     Active: active (running) since Sun 2020-07-19 20:41:24 UTC; 27min ago
   Main PID: 2960 (kubicd)
      Tasks: 10 (limit: 4915)
     Memory: 307.6M
     CGroup: /system.slice/kubicd.service
             └─2960 /usr/sbin/kubicd

Jul 19 20:50:21 master01 kubicd[2960]: time="2020-07-19T20:50:21Z" level=info msg="Executing /usr/bin/kubeadm: [kubeadm token create --print-join-command 2>/dev/null]"
Jul 19 20:50:21 master01 kubicd[2960]: time="2020-07-19T20:50:21Z" level=error msg="Error invoking kubeadm: exit status 1\nthe bootstrap token \"2>/dev/null\" was not of the form \"\\\\A([a-z0-9]{6})\\\\.([a-z0-9]{16})\\\\z\"\nTo see the stack trace of this error execute with --v=5 or higher\n"

What is weird is that the command works on its own:

master01:~ # kubeadm token create --print-join-command 2>/dev/null
kubeadm join 10.0.6.1:6443 --token y8c96g.psy0envbc6xi2e8d     --discovery-token-ca-cert-hash sha256:5b6ac4e9358939831b68e03d57cfea38d834a5d5d57bb9698adf4def4e936d53 
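The difference is the shell: typed at a prompt, 2>/dev/null is interpreted as a redirection, but kubicd launches kubeadm directly (presumably via Go's os/exec, with no shell in between), so the string is passed to kubeadm as one more literal argument, and kubeadm then tries to parse it as a bootstrap token. A minimal sketch of the distinction, and of capturing stderr in-process instead; this illustrates the failure mode and is not kubicd's actual code.

// join_token.go: a minimal sketch of why "2>/dev/null" breaks when a command
// is started without a shell, and how stderr can be captured in-process
// instead. Illustrative only, not kubicd's actual implementation.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Broken: exec.Command does no shell parsing, so "2>/dev/null" is handed
	// to kubeadm as a positional argument and rejected as a malformed token.
	broken := exec.Command("kubeadm", "token", "create", "--print-join-command", "2>/dev/null")
	_ = broken

	// Working: drop the redirection and collect stderr separately.
	cmd := exec.Command("kubeadm", "token", "create", "--print-join-command")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr // anything kubeadm prints to stderr can be logged or discarded
	if err := cmd.Run(); err != nil {
		fmt.Printf("kubeadm failed: %v\n%s", err, stderr.String())
		return
	}
	fmt.Print(stdout.String()) // the "kubeadm join ..." line
}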

Cilium still not working

I am testing this on the latest snapshot (20190627).

Init worked well:

master:~ # kubicctl init --pod-network cilium
Initializing kubernetes master can take several minutes, please be patient.
Initialize Kubernetes control-plane
Deploy cilium
Deploy Kubernetes Reboot Daemon (kured)
Kubernetes master was succesfully setup.

The cilium-etcd-operator pod is still not coming up:

master:~ # kubectl get pods -n kube-system
NAME                                    READY   STATUS             RESTARTS   AGE
cilium-5fdrl                            0/1     PodInitializing    0          73s
cilium-etcd-operator-5f9468cf8c-6b5bn   0/1     ImagePullBackOff   0          73s
cilium-operator-cb87f5c57-d8qwz         0/1     Pending            0          73s
coredns-fb8b8dccf-c7f8n                 0/1     Pending            0          73s
coredns-fb8b8dccf-wl5rg                 0/1     Pending            0          73s
etcd-master                             1/1     Running            0          30s
kube-apiserver-master                   1/1     Running            0          40s
kube-controller-manager-master          1/1     Running            0          37s
kube-proxy-jgttd                        1/1     Running            0          73s
kube-scheduler-master                   1/1     Running            0          37s

It cannot pull the image registry.opensuse.org/kubic/cilium-etcd-operator:2.0:

Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  4m28s                 default-scheduler  Successfully assigned kube-system/cilium-etcd-operator-5f9468cf8c-6b5bn to master
  Normal   Pulling    103s (x4 over 4m25s)  kubelet, master    Pulling image "registry.opensuse.org/kubic/cilium-etcd-operator:2.0"
  Warning  Failed     103s (x4 over 4m11s)  kubelet, master    Failed to pull image "registry.opensuse.org/kubic/cilium-etcd-operator:2.0": rpc error: code = Unknown desc = Error reading manifest 2.0 in registry.opensuse.org/kubic/cilium-etcd-operator: name unknown
  Warning  Failed     103s (x4 over 4m11s)  kubelet, master    Error: ErrImagePull
  Normal   BackOff    89s (x6 over 4m10s)   kubelet, master    Back-off pulling image "registry.opensuse.org/kubic/cilium-etcd-operator:2.0"
  Warning  Failed     78s (x7 over 4m10s)   kubelet, master    Error: ImagePullBackOff
