Comments (31)
I was missing the loopback binary of CNI. Now everything is fine.
I'm running into the same issue, and I have IPv6 enabled. I'm using a fresh setup using kubeadm
on Ubuntu Server 18.04.
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-576cbf47c7-k2xxd 0/1 ContainerCreating 0 91m
kube-system coredns-576cbf47c7-pqk9k 0/1 ContainerCreating 0 91m
When I run describe
I get the following error message,
$ kubectl describe pod coredns-576cbf47c7-pqk9k
Error from server (NotFound): pods "coredns-576cbf47c7-pqk9k" not found
Which isn't very useful.
Any luck @doctorCC? Also, @drpaneas, what do you mean by loopback for CNI?
Re: "I was missing the loopback binary of CNI. Now everything is fine." For those asking how exactly the 'loopback' plugin was set up, these steps build and install the CNI plugins:
- go get -d github.com/containernetworking/plugins
- cd ~/go/src/github.com/containernetworking/plugins
- ./build.sh
- sudo cp bin/* /opt/cni/bin/
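A quick way to confirm the copy step worked is to check which plugin binaries actually landed in the CNI bin directory. This is a hedged, self-contained sketch: a throwaway directory stands in for /opt/cni/bin, and the plugin names checked are just the ones this thread cares about; on a real node you would point CNI_BIN at /opt/cni/bin.

```shell
# Self-contained demo: flag CNI plugins missing from a bin dir.
# A temp dir stands in for /opt/cni/bin; set CNI_BIN=/opt/cni/bin on a real node.
CNI_BIN="$(mktemp -d)"
touch "$CNI_BIN/loopback" "$CNI_BIN/bridge"   # pretend only these two got copied
chmod +x "$CNI_BIN"/*
missing=""
for p in loopback bridge host-local flannel; do
  [ -x "$CNI_BIN/$p" ] || missing="$missing $p"
done
echo "missing plugins:$missing"
```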
Could you elaborate? I am having a similar issue: I am trying to use the flannel plugin (which is Running), but two coredns pods seem to be stuck in ContainerCreating.
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcdf6894-sdvhr 0/1 ContainerCreating 0 6m
kube-system coredns-78fcdf6894-t94cp 0/1 ContainerCreating 0 6m
kube-system etcd-ubuntu 1/1 Running 0 5m
kube-system kube-apiserver-ubuntu 1/1 Running 0 5m
kube-system kube-controller-manager-ubuntu 1/1 Running 0 5m
kube-system kube-flannel-ds-s390x-7z2zx 1/1 Running 0 2m
kube-system kube-proxy-bk62x 1/1 Running 0 6m
kube-system kube-scheduler-ubuntu 1/1 Running 0 5m
Executing spitfire88's recommended steps did not work. Running this did:
kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
(kubernetes/kubernetes#48798)
as recommended by smorrisfv.
Run this command to activate Flannel; it works, and CoreDNS starts running:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
Thanks @johnbelamaric for your help. I found my issue, it was a bug in Flannel (flannel-io/flannel#1044) when running on K8s 1.12. What fixed it for me was the suggestion from this comment (kubernetes/kubernetes#48798).
Thank you @spitfire88! This works for me, although I have no idea why I need to download and compile something when I'm trying to use kubeadm.
Actually, I only had to run:
sudo systemctl restart containerd
and the CoreDNS containers got created the next second.
You should add --namespace=kube-system (or -n kube-system) when you run the describe command, since the coredns pods live in that namespace: kubectl describe pod coredns-576cbf47c7-pqk9k -n kube-system
fixed by kubernetes/kubernetes#70202 (comment)
@doctorCC Any errors/warnings in kubelet logs?
I tore down the system and rebuilt it, letting the empty cluster sit there overnight. Again there were two coredns pods which were never built. I tried to deploy a simple nginx, which also went into a ContainerCreating state. The logs from that were available and are below.
It appears to me that there is an issue with the Flannel network.
I am running:
s390 architecture
kubectl version 1.11.3
attempting to follow directions from https://blog.alexellis.io/kubernetes-in-10-minutes/
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default my-nginx-7b6fc8965-24skw 0/1 ContainerCreating 0 11m
kube-system coredns-78fcdf6894-ddjnv 0/1 ContainerCreating 0 22h
kube-system coredns-78fcdf6894-kvwvw 0/1 ContainerCreating 0 22h
kube-system etcd-ubuntu 1/1 Running 0 22h
kube-system kube-apiserver-ubuntu 1/1 Running 0 22h
kube-system kube-controller-manager-ubuntu 1/1 Running 0 22h
kube-system kube-flannel-ds-s390x-djgcz 1/1 Running 0 22m
kube-system kube-proxy-j7lg4 1/1 Running 0 22h
kube-system kube-scheduler-ubuntu 1/1 Running 0 22h
ubuntu@ubuntu:~$ kubectl describe pod coredns-78fcdf6894-ddjnv
Error from server (NotFound): pods "coredns-78fcdf6894-ddjnv" not found
ubuntu@ubuntu:~$ kubectl describe pod coredns-78fcdf6894-kvwvw
Error from server (NotFound): pods "coredns-78fcdf6894-kvwvw" not found
ubuntu@ubuntu:~$ kubectl describe pod my-nginx-7b6fc8965-24skw
Name: my-nginx-7b6fc8965-24skw
Namespace: default
Priority: 0
PriorityClassName:
Node: ubuntu/9.12.41.193
Start Time: Tue, 25 Sep 2018 16:30:17 +0000
Labels: pod-template-hash=362974521
run=my-nginx
Annotations:
Status: Pending
IP:
Controlled By: ReplicaSet/my-nginx-7b6fc8965
Containers:
my-nginx:
Container ID:
Image: nginx:latest
Image ID:
Port: 9000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bkv6n (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-bkv6n:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-bkv6n
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
Normal Scheduled 12m default-scheduler Successfully assigned default/my-nginx-7b6fc8965-24skw to ubuntu
Warning FailedCreatePodSandBox 12m kubelet, ubuntu Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "4e15faf72b2677c8ed5ffd06213b08e4597c8fe62c4a343926a08ee54dc93e2f" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to set up pod "my-nginx-7b6fc8965-24skw_default" network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory, failed to clean up sandbox container "4e15faf72b2677c8ed5ffd06213b08e4597c8fe62c4a343926a08ee54dc93e2f" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to teardown pod "my-nginx-7b6fc8965-24skw_default" network: failed to get IP addresses for "eth0": ]
Warning FailedCreatePodSandBox 12m kubelet, ubuntu Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "13bba308436c129bea5817f85a4df37631a44fcdb580e57405e6c3884c545a10" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to set up pod "my-nginx-7b6fc8965-24skw_default" network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory, failed to clean up sandbox container "13bba308436c129bea5817f85a4df37631a44fcdb580e57405e6c3884c545a10" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to teardown pod "my-nginx-7b6fc8965-24skw_default" network: failed to get IP addresses for "eth0": ]
Warning FailedCreatePodSandBox 12m kubelet, ubuntu Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "e36596ee6792d20609a851f1421a5dd808106be4a74bc64ff3329ae20a09583f" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to set up pod "my-nginx-7b6fc8965-24skw_default" network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory, failed to clean up sandbox container "e36596ee6792d20609a851f1421a5dd808106be4a74bc64ff3329ae20a09583f" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to teardown pod "my-nginx-7b6fc8965-24skw_default" network: failed to get IP addresses for "eth0": ]
Warning FailedCreatePodSandBox 12m kubelet, ubuntu Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "6a3a5c9badde3da9e514d0671688712bf93ba06fbee6450f4e3c8a020d1eab76" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to set up pod "my-nginx-7b6fc8965-24skw_default" network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory, failed to clean up sandbox container "6a3a5c9badde3da9e514d0671688712bf93ba06fbee6450f4e3c8a020d1eab76" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to teardown pod "my-nginx-7b6fc8965-24skw_default" network: failed to get IP addresses for "eth0": ]
Warning FailedCreatePodSandBox 12m kubelet, ubuntu Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "ea9c900d04d01cf41edf182c94612c7fb2a5db46d74d05bda035586e458642d4" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to set up pod "my-nginx-7b6fc8965-24skw_default" network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory, failed to clean up sandbox container "ea9c900d04d01cf41edf182c94612c7fb2a5db46d74d05bda035586e458642d4" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to teardown pod "my-nginx-7b6fc8965-24skw_default" network: failed to get IP addresses for "eth0": ]
Warning FailedCreatePodSandBox 11m kubelet, ubuntu Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "94000515ffeca2689a8c509921fad57cc239a18960fb53f758e75cc25d8ed217" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to set up pod "my-nginx-7b6fc8965-24skw_default" network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory, failed to clean up sandbox container "94000515ffeca2689a8c509921fad57cc239a18960fb53f758e75cc25d8ed217" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to teardown pod "my-nginx-7b6fc8965-24skw_default" network: failed to get IP addresses for "eth0": ]
Warning FailedCreatePodSandBox 11m kubelet, ubuntu Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "8e7322d37dfe96801fd3e731f5304431ff9f9848e04c5f01408c4fe00a0cea35" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to set up pod "my-nginx-7b6fc8965-24skw_default" network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory, failed to clean up sandbox container "8e7322d37dfe96801fd3e731f5304431ff9f9848e04c5f01408c4fe00a0cea35" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to teardown pod "my-nginx-7b6fc8965-24skw_default" network: failed to get IP addresses for "eth0": ]
Warning FailedCreatePodSandBox 11m kubelet, ubuntu Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "f83fdb6fc70c05e71f2515089f0f3c55142a05cecb9354be4e0be321658c35fc" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to set up pod "my-nginx-7b6fc8965-24skw_default" network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory, failed to clean up sandbox container "f83fdb6fc70c05e71f2515089f0f3c55142a05cecb9354be4e0be321658c35fc" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to teardown pod "my-nginx-7b6fc8965-24skw_default" network: failed to get IP addresses for "eth0": ]
Warning FailedCreatePodSandBox 11m kubelet, ubuntu Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "41d825f96630b1ba16dc1ed04e893d6157facd65024c63080ddb03e066592de4" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to set up pod "my-nginx-7b6fc8965-24skw_default" network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory, failed to clean up sandbox container "41d825f96630b1ba16dc1ed04e893d6157facd65024c63080ddb03e066592de4" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to teardown pod "my-nginx-7b6fc8965-24skw_default" network: failed to get IP addresses for "eth0": ]
Normal SandboxChanged 11m (x12 over 12m) kubelet, ubuntu Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 2m (x182 over 11m) kubelet, ubuntu (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "a3da6c0ffa892f34f9f3e3d9cd3a1f46590a551e895879c44448bb58fe43cb7a" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to set up pod "my-nginx-7b6fc8965-24skw_default" network: open /proc/sys/net/ipv6/conf/eth0/accept_dad: no such file or directory, failed to clean up sandbox container "a3da6c0ffa892f34f9f3e3d9cd3a1f46590a551e895879c44448bb58fe43cb7a" network for pod "my-nginx-7b6fc8965-24skw": NetworkPlugin cni failed to teardown pod "my-nginx-7b6fc8965-24skw_default" network: failed to get IP addresses for "eth0": ]
@doctorCC, Perhaps it's this ipv6 disabled in kernel issue...
Re: "I was missing the loopback binary of CNI. Now everything is fine."
How exactly did you set up the 'loopback'?
This does not fix my issue.
docker 18.09.0, kube 1.13.1
The suggestions above did not work well for me. On my CentOS 7.3 server, this is how I solved it:
Edit /etc/default/grub and set ipv6.disable=0
Run: grub2-mkconfig -o /boot/grub2/grub.cfg
Restart the server
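Before editing grub, it may help to confirm that IPv6 really is disabled: the accept_dad errors earlier in this thread come from /proc/sys/net/ipv6 being absent. A minimal sketch, parameterized on a fake root directory so it can be demoed without touching a real kernel setting (on a host you would call check_ipv6 with no argument):

```shell
# Prints "enabled" when the kernel exposes /proc/sys/net/ipv6 under the given
# root (defaults to the real filesystem root).
check_ipv6() {
  root="${1:-}"
  if [ -d "${root}/proc/sys/net/ipv6" ]; then echo enabled; else echo disabled; fi
}

# Demo against a fake root: first without the ipv6 dir, then with it.
fake="$(mktemp -d)"
before="$(check_ipv6 "$fake")"
mkdir -p "$fake/proc/sys/net/ipv6"
after="$(check_ipv6 "$fake")"
echo "before=$before after=$after"
```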
Quoting the loopback-binary fix from earlier in the thread:
- go get -d github.com/containernetworking/plugins
- cd ~/go/src/github.com/containernetworking/plugins
- ./build.sh
- sudo cp bin/* /opt/cni/bin/
worked for me
Hi! I applied the deployment and the go get steps; now I get:
network: failed to create bridge "cni0": could not add "cni0": operation not supported
Please help
Host: ClearLinux
Kubernetes:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"archive", BuildDate:"2019-07-04T23:54:15Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:09:21Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Bridges:
# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.02422aea25b1 no
Docker version:
# docker version
Client:
Version: 18.06.3
API version: 1.38
Go version: go1.12.5
Git commit: d7080c17a580919f5340a15a8e5e013133089680
Built: Mon Jun 3 17:41:01 2019
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.06.3
API version: 1.38 (minimum version 1.12)
Go version: go1.12.5
Git commit: d7080c17a580919f5340a15a8e5e013133089680
Built: Mon Jun 3 17:41:13 2019
OS/Arch: linux/amd64
Experimental: false
@msq6323013 on master node or worker?
Master node
I am also getting the same error (pods stuck in ContainerCreating):
bash-4.2$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-hvhts 0/1 ContainerCreating 0 27m
coredns-66bff467f8-qcxb4 0/1 ContainerCreating 0 27m
Name: coredns-66bff467f8-hvhts
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: k8s-lab-master/192.168.0.55
Start Time: Sat, 09 May 2020 13:29:52 +0530
Labels: k8s-app=kube-dns
pod-template-hash=66bff467f8
Annotations:
Status: Pending
IP:
IPs:
Controlled By: ReplicaSet/coredns-66bff467f8
Containers:
coredns:
Container ID:
Image: k8s.gcr.io/coredns:1.6.7
Image ID:
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-clxk5 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-clxk5:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-clxk5
Optional: false
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
Normal Scheduled 29m default-scheduler Successfully assigned kube-system/coredns-66bff467f8-hvhts to k8s-lab-master
Warning FailedCreatePodSandBox 28m kubelet, k8s-lab-master Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "101fcda6d89dbbed9d1752f2be5f16172b380e75189b77a926fc7a54ca726970" network for pod "coredns-66bff467f8-hvhts": networkPlugin cni failed to set up pod "coredns-66bff467f8-hvhts_kube-system" network: unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get http:///var/run/cilium/cilium.sock/v1/config: dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Is the agent running?
@spitfire88, I installed Go on an NVIDIA AI machine with ARM64, following the official Go installation guide. Afterwards I built and tested hello.go, and Go runs well. When I run the commands you mentioned,
go get -d github.com/containernetworking/plugins
cd ~/go/src/github.com/containernetworking/plugins
./build.sh
sudo cp bin/* /opt/cni/bin/
I found that build.sh has been replaced by build_linux.sh. Here is what I see:
$ go get -d github.com/containernetworking/plugins
package github.com/containernetworking/plugins: no Go files in /home/cm/go/src/github.com/containernetworking/plugins
Despite that message, the plugins directory does exist.
$ cd ~/go/src/github.com/containernetworking/plugins
~/go/src/github.com/containernetworking/plugins$ ./build_linux.sh
Building plugins
bandwidth
firewall
flannel
portmap
sbr
tuning
bridge
host-device
ipvlan
loopback
macvlan
ptp
vlan
dhcp
host-local
static
~/go/src/github.com/containernetworking/plugins$ sudo cp bin/* /opt/cni/bin/
That seems to complete the CNI plugins. However, one of the nodes is still marked NotReady (pods in ContainerCreating status).
Following the Go installation shared above, I solved the issue with the following Flannel cluster network setup.
1. Configure the Flannel
For ARM64 Architecture
$ curl https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml \
    | sed "s/amd64/arm/g" | sed "s/vxlan/host-gw/g" \
    > kube-flannel.yaml
Or
For AMD64 Architecture
$ curl https://rawgit.com/coreos/flannel/master/Documentation/kube-flannel.yml > kube-flannel.yaml
2. Apply Flannel and Check the Configuration
1). Apply Flannel
$ kubectl apply -f kube-flannel.yaml
2). Check the Configuration
Check kube-flannel-cfg
$ kubectl describe --namespace=kube-system configmaps/kube-flannel-cfg
Check the Device
For ARM64 Architecture
$ kubectl describe --namespace=kube-system daemonsets/kube-flannel-ds-arm64
or
For AMD64 Architecture
$ kubectl describe --namespace=kube-system daemonsets/kube-flannel-ds
3. Apply RBAC
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
4. Get Nodes
I can get all the nodes with the Status of Ready now after executing the commands on the Master node.
$ kubectl get nodes
Notes:
Before setting up the Flannel, you must complete the master node initiation of Kubernetes and worker nodes joining.
Cheers.
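The sed pipeline in step 1 just rewrites two strings in the manifest. A hedged sketch, demonstrated on a single stand-in line (the image tag and backend value here are made up for illustration) rather than the real YAML:

```shell
# Same substitutions as step 1, applied to a stand-in manifest line.
sample='image: quay.io/coreos/flannel:v0.11.0-amd64 backend=vxlan'
rewritten="$(echo "$sample" | sed "s/amd64/arm/g" | sed "s/vxlan/host-gw/g")"
echo "$rewritten"
```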
References:
- Kubernetes: Up & Running, Second Edition, Brendan Burns, Joe Beda, Kelsey Hightower
- Kubernetes Network Guideline, Du Jun
- https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
This issue (stuck at ContainerCreating) happened to me because Flannel was not properly installed after I created the k8s cluster. I used CentOS 7.8.
Please try this guide on how to install Flannel: https://phoenixnap.com/kb/how-to-install-kubernetes-on-centos
[root@master-node ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel configured
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds created
[Result]
subnet.env is created
[root@master-node flannel]# cat subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
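subnet.env is a plain KEY=VALUE file, so you can source it like a shell env file to sanity-check the values Flannel wrote. A hedged sketch that recreates the file shown above at a temp path (on a real node it lives at /run/flannel/subnet.env):

```shell
# Recreate the subnet.env shown above and source it as shell assignments.
env_file="$(mktemp)"
cat > "$env_file" <<'EOF'
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
. "$env_file"
echo "pod network $FLANNEL_NETWORK, node subnet $FLANNEL_SUBNET, mtu $FLANNEL_MTU"
```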
coredns and the dashboard became normal (Running state):
[root@master-node flannel]k get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-74ff55c5b-jkmx4 1/1 Running 0 6h21m 10.244.0.3 master-node
kube-system coredns-74ff55c5b-zsrkz 1/1 Running 0 6h21m 10.244.0.2 master-node
kube-system etcd-master-node 1/1 Running 1 6h21m 172.30.1.6 master-node
kube-system kube-apiserver-master-node 1/1 Running 1 6h21m 172.30.1.6 master-node
kube-system kube-controller-manager-master-node 1/1 Running 2 6h21m 172.30.1.6 master-node
kube-system kube-flannel-ds-5lk4m 1/1 Running 0 2m26s 172.30.1.6 master-node
kube-system kube-flannel-ds-jb8nh 1/1 Running 0 2m26s 172.30.1.28 worker-node1
kube-system kube-proxy-pspvs 1/1 Running 1 6h21m 172.30.1.6 master-node
kube-system kube-proxy-shst7 1/1 Running 1 6h18m 172.30.1.28 worker-node1
kube-system kube-scheduler-master-node 1/1 Running 1 6h21m 172.30.1.6 master-node
kubernetes-dashboard dashboard-metrics-scraper-7b59f7d4df-826jt 1/1 Running 0 15m 10.244.1.2 worker-node1
kubernetes-dashboard kubernetes-dashboard-74d688b6bc-zgnl4 1/1 Running 0 15m 10.244.1.3 worker-node1
You can see the flannel interfaces too:
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.0.1 netmask 255.255.255.0 broadcast 10.244.0.255
inet6 fe80::d4c4:a9ff:fe9f:54b2 prefixlen 64 scopeid 0x20
ether d6:c4:a9:9f:54:b2 txqueuelen 1000 (Ethernet)
RX packets 849 bytes 70675 (69.0 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 876 bytes 111771 (109.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
ether 02:42:a4:42:b2:39 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.30.1.6 netmask 255.255.255.0 broadcast 172.30.1.255
inet6 fe80::20c:29ff:fe87:de2c prefixlen 64 scopeid 0x20
ether 00:0c:29:87:de:2c txqueuelen 1000 (Ethernet)
RX packets 7726 bytes 1593036 (1.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8198 bytes 4826457 (4.6 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.0.0 netmask 255.255.255.255 broadcast 10.244.0.0
inet6 fe80::a8fd:57ff:fe17:4526 prefixlen 64 scopeid 0x20
ether aa:fd:57:17:45:26 txqueuelen 0 (Ethernet)
RX packets 69 bytes 8943 (8.7 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 80 bytes 11636 (11.3 KiB)
TX errors 0 dropped 36 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 1000 (Local Loopback)
RX packets 587564 bytes 108471519 (103.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 587564 bytes 108471519 (103.4 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethcbb22c7b: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet6 fe80::5cb1:88ff:fe2d:c538 prefixlen 64 scopeid 0x20
ether 5e:b1:88:2d:c5:38 txqueuelen 0 (Ethernet)
RX packets 422 bytes 41068 (40.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 469 bytes 59356 (57.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethff9c51c2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet6 fe80::1c3c:fdff:fec5:a649 prefixlen 64 scopeid 0x20
ether 1e:3c:fd:c5:a6:49 txqueuelen 0 (Ethernet)
RX packets 428 bytes 41583 (40.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 470 bytes 58632 (57.2 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:a2:70:5a txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
from deployment.
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0540b7e66e4e5c5aa34abefd27bf303cf901ad2bad771f2408c1d9f41952311a" network for pod "coredns-74ff55c5b-6pn9f": networkPlugin cni failed to set up pod "coredns-74ff55c5b-6pn9f_kube-system" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
For me it is looking for a calico directory. I am confused why it looks for Calico when I am installing flannel.
Any idea how to proceed further?
This worked for my issue, with the Calico network plugin:
Install Calico
Install the Tigera Calico operator and custom resource definitions.
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
Install Calico by creating the necessary custom resource. For more information on configuration options available in this manifest, see the installation reference.
kubectl create -f https://docs.projectcalico.org/manifests/custom-resources.yaml
Creating /run/flannel/subnet.env fixes CoreDNS not starting, but it's only temporary.
My solution for the master/control-plane :
kubeadm init --control-plane-endpoint=whatever --node-name whatever --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
- restart all
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker
So here is my solution:
First, CoreDNS runs on your Master/Control-Plane nodes.
Now run ifconfig and check these 2 interfaces: cni0 and flannel.1.
Suppose cni0=10.244.1.1 and flannel.1=10.244.0.0; then your DNS pods will not be created. It should be cni0=10.244.0.1 with flannel.1=10.244.0.0, which means cni0 must follow the flannel.1 /24 pattern.
Run the following 2 commands on your Master/Control-Plane machines to bring the interface down and remove it:
sudo ifconfig cni0 down;
sudo ip link delete cni0;
Now check via ifconfig: you will see 2 more vethxxxxxxxx interfaces appear. This should fix your problem.
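The "/24 pattern" check above amounts to comparing the first three octets of the two addresses. A hedged sketch using the example values from this comment (it only covers the /24 case described here, not arbitrary prefix lengths):

```shell
# cni0 should sit inside flannel.1's /24, i.e. share the first three octets.
cni0_ip="10.244.1.1"      # the mismatched pair from the comment above
flannel_ip="10.244.0.0"
if [ "${cni0_ip%.*}" = "${flannel_ip%.*}" ]; then
  verdict="ok"
else
  verdict="mismatch: delete cni0 and let flannel recreate it"
fi
echo "$verdict"
```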