kubeovn / kube-ovn
A Bridge between SDN and Cloud Native (Project under CNCF)
Home Page: https://kubeovn.github.io/docs/stable/en/
License: Apache License 2.0
While testing kube-ovn, I couldn't figure out whether the podNetwork and serviceNetwork should be related to the default CIDR, or if one should drop the following flags from the controller-manager:
--allocate-node-cidrs=true
--cluster-cidr=x.x.0.0/16
--node-cidr-mask-size=24
and let the kube-ovn controller do the job.
When adding QoS for a pod with the command:
kubectl annotate --overwrite pod qperf-7dffd64c97-jnq55 ovn.kubernetes.io/egress_rate=300
I get this error in the kube-ovn-cni pod:
I1113 07:28:38.467382 14022 controller.go:314] handle qos update for pod default/qperf-7dffd64c97-jnq55
E1113 07:28:38.467426 14022 controller.go:323] validate pod default/qperf-7dffd64c97-jnq55 failed, is not a valid ovn.kubernetes.io/egress_rate
E1113 07:28:38.467486 14022 controller.go:302] error syncing 'default/qperf-7dffd64c97-jnq55': is not a valid ovn.kubernetes.io/egress_rate, requeuing
I1113 07:28:38.467560 14022 event.go:258] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"qperf-7dffd64c97-jnq55", UID:"1b6f5d46-6c4b-48e1-90ab-62c6392cbf58", APIVersion:"v1", ResourceVersion:"5187423", FieldPath:""}): type: 'Warning' reason: 'ValidatePodNetworkFailed' is not a valid ovn.kubernetes.io/egress_rate
I1113 07:28:43.587691 14022 controller.go:314] handle qos update for pod default/qperf-7dffd64c97-jnq55
E1113 07:28:43.587745 14022 controller.go:323] validate pod default/qperf-7dffd64c97-jnq55 failed, is not a valid ovn.kubernetes.io/egress_rate
E1113 07:28:43.587839 14022 controller.go:302] error syncing 'default/qperf-7dffd64c97-jnq55': is not a valid ovn.kubernetes.io/egress_rate, requeuing
I1113 07:28:43.587884 14022 event.go:258] Event(v1.ObjectReference{Kind:"Pod", Namespace:"default", Name:"qperf-7dffd64c97-jnq55", UID:"1b6f5d46-6c4b-48e1-90ab-62c6392cbf58", APIVersion:"v1", ResourceVersion:"5187423", FieldPath:""}): type: 'Warning' reason: 'ValidatePodNetworkFailed' is not a valid ovn.kubernetes.io/egress_rate
It seems that writing files to the host requires privileged mode?
kubelet does not necessarily have this parameter enabled.
Also, other CNIs apparently don't have this requirement.
In addition, is the rpm also built entirely from the official OVS build?
For now, kube-ovn-pinger performs a few checks to verify everything is in order.
It runs some pings and verifies proper DNS resolution of kubernetes.default.svc.cluster.local.
However, the local Kubernetes domain is configuration-dependent and can be changed.
If the cluster is configured with something different from the default cluster.local, the DNS check fails.
IMHO, the Kubernetes local domain should be inferred from the apiserver or, more simply, kube-ovn-pinger should check kubernetes.default rather than the full FQDN.
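One way the pinger could avoid hardcoding cluster.local is to read the pod's own /etc/resolv.conf, where kubelet writes a search line of the form `search <ns>.svc.<domain> svc.<domain> <domain>`. A minimal sketch of that idea, assuming the standard kubelet search-line layout; `infer_cluster_domain` is a hypothetical helper, not part of kube-ovn:

```python
# Sketch (not kube-ovn code): infer the cluster DNS domain from a pod's
# resolv.conf instead of assuming cluster.local.
def infer_cluster_domain(resolv_conf_text):
    """Return the cluster domain found on a 'search' line, or None."""
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if not parts or parts[0] != "search":
            continue
        for domain in parts[1:]:
            # The "svc.<domain>" search entry reveals the cluster domain.
            if domain.startswith("svc."):
                return domain[len("svc."):]
    return None

sample = (
    "nameserver 10.96.0.10\n"
    "search default.svc.cluster.local svc.cluster.local cluster.local\n"
    "options ndots:5\n"
)
print(infer_cluster_domain(sample))  # cluster.local
```

Checking the short name kubernetes.default would achieve the same effect more simply, since the resolver's search domains complete it regardless of the configured cluster domain.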
Only specify the subnet in the StatefulSet; when creating a pod, assign it an IP and keep that IP unchanged when the pod is recreated.
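A minimal sketch of what the requested usage might look like, reusing the ovn.kubernetes.io/logical_switch annotation convention that appears elsewhere in this document (the subnet name my-subnet and the workload are placeholders):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      annotations:
        # Only the subnet is specified; the controller would pick and
        # pin a stable IP per replica across pod recreations.
        ovn.kubernetes.io/logical_switch: my-subnet
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
```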
When the kube-ovn-controller leader switches or restarts, it needs a long time to process the initial add-queue events. When the cluster is large, this time is unacceptable; we need to find ways to resolve this issue.
Some possible ways:
First of all, thanks for your code. I want to obtain the port traffic on logical switches/routers to implement per-port traffic statistics. Is there a way to achieve this?
The OVN DNS function can provide several advantages over CoreDNS.
We could use the OVN DNS function to take over cluster-internal DNS requests and leave CoreDNS to handle only external DNS requests.
distro version
cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.6 LTS"
kernel version
uname -a
Linux su-test-4 4.4.0-154-generic #181-Ubuntu SMP Tue Jun 25 05:29:03 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
dmesg error
[70682.621811] netlink: Unknown conntrack attr (type=6, max=5)
[70682.621815] netlink: Unknown conntrack attr (type=6, max=5)
[70682.621820] openvswitch: netlink: Flow actions may not be safe on all matching packets.
[70682.621823] openvswitch: netlink: Flow actions may not be safe on all matching packets.
[70682.621824] netlink: Unknown conntrack attr (type=6, max=5)
[70682.621833] netlink: Unknown conntrack attr (type=6, max=5)
[70690.645353] netlink: Unknown conntrack attr (type=6, max=5)
[70690.645356] openvswitch: netlink: Flow actions may not be safe on all matching packets.
[70690.645360] openvswitch: netlink: Flow actions may not be safe on all matching packets.
[70690.645361] netlink: Unknown conntrack attr (type=6, max=5)
[70690.645368] netlink: Unknown conntrack attr (type=6, max=5)
[70690.645372] netlink: Unknown conntrack attr (type=6, max=5)
[70690.645373] openvswitch: netlink: Flow actions may not be safe on all matching packets.
[70690.645376] netlink: Unknown conntrack attr (type=6, max=5)
[70706.609642] netlink: Unknown conntrack attr (type=6, max=5)
[70706.609648] openvswitch: netlink: Flow actions may not be safe on all matching packets.
[70706.609655] netlink: Unknown conntrack attr (type=6, max=5)
[70706.610231] netlink: Unknown conntrack attr (type=6, max=5)
[70706.610235] openvswitch: netlink: Flow actions may not be safe on all matching packets.
[70706.610244] netlink: Unknown conntrack attr (type=6, max=5)
[70706.612170] netlink: Unknown conntrack attr (type=6, max=5)
[70706.612173] openvswitch: netlink: Flow actions may not be safe on all matching packets.
[70706.612180] netlink: Unknown conntrack attr (type=6, max=5)
[70707.608497] netlink: Unknown conntrack attr (type=6, max=5)
[70713.620342] net_ratelimit: 17 callbacks suppressed
[70713.620346] netlink: Unknown conntrack attr (type=6, max=5)
[70713.620349] netlink: Unknown conntrack attr (type=6, max=5)
[70713.620349] openvswitch: netlink: Flow actions may not be safe on all matching packets.
[70713.620354] netlink: Unknown conntrack attr (type=6, max=5
Background:
kube-ovn v0.8.0; following the "Expose Pod IP directly to the outside" document, verifying that nodes outside the cluster can access pod IPs.
Symptoms:
1. gatewayType=distributed, natOutgoing=false:
An external node can only ping a pod after adding a route via that pod's node, and cannot ping pod IPs on other nodes.
2. gatewayType=centralized, natOutgoing=false:
With a route via any node, the external node cannot ping any pod IP.
Question:
Besides the configuration in the document, is there anything else to pay attention to?
NAME STATUS ROLES AGE VERSION LABELS
192.168.56.101 Ready master 1y v1.11.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kube-ovn/role=master,kubernetes.io/hostname=192.168.56.101,kubernetes.io/role=master
192.168.56.103 Ready master 58d v1.11.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=192.168.56.103,kubernetes.io/role=master
192.168.56.104 Ready master 1y v1.11.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=192.168.56.104,kubernetes.io/role=master
NAME PROTOCOL CIDR PRIVATE NAT DEFAULT GATEWAYTYPE USED AVAILABLE
join IPv4 192.20.0.0/16 false false false distributed 3 65532
ovn-default IPv4 172.20.0.0/16 false true true distributed 5 65530
subnet-gateway IPv4 172.30.0.0/16 false false false distributed 2 65533
Name: subnet-gateway
Namespace:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubeovn.io/v1","kind":"Subnet","metadata":{"annotations":{},"name":"subnet-gateway","namespace":""},"spec":{"cidrBlock":"172.30.0.0/16",...
API Version: kubeovn.io/v1
Kind: Subnet
Metadata:
Creation Timestamp: 2019-11-15T02:11:57Z
Generation: 3
Resource Version: 10881313
Self Link: /apis/kubeovn.io/v1/subnets/subnet-gateway
UID: 4cdbe95a-074d-11ea-b87c-0800273dab21
Spec:
Cidr Block: 172.30.0.0/16
Default: false
Exclude Ips:
172.30.0.1
Gateway: 172.30.0.1
Gateway Type: distributed
Namespaces:
default
Nat Outgoing: false
Private: false
Protocol: IPv4
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-64f497f8fd-bw75b 1/1 Running 0 6m 172.30.0.2 192.168.56.103 <none>
nginx1-846d65cc74-gwbfd 1/1 Running 0 6m 172.30.0.3 192.168.56.104 <none>
172.30.0.0 192.168.56.104 255.255.0.0 UG 0 0 0 enp0s8
[root@localhost ~]# curl 172.30.0.2
^C
[root@localhost ~]# curl 172.30.0.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
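For comparison with the distributed subnet shown above, a centralized-gateway Subnet might look like the following sketch. The field names mirror the `kubectl describe` output above; `gatewayNode` is my assumption of how centralized mode designates the egress node (external static routes would then need to point at that node):

```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: subnet-gateway
spec:
  cidrBlock: 172.30.0.0/16
  gateway: 172.30.0.1
  excludeIps:
  - 172.30.0.1
  gatewayType: centralized
  gatewayNode: 192.168.56.104   # assumed field: node that carries all egress
  natOutgoing: false
  namespaces:
  - default
```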
kube-ovn-controller crashes frequently; not long after a restart, it goes down again. The logs of the two pods are below; please help find where the problem is, thanks.
[root@host-10-19-17-139 ~]# kubectl get pod -n kube-ovn
NAME READY STATUS RESTARTS AGE
kube-ovn-cni-dvgzj 1/1 Running 6 3d18h
kube-ovn-cni-krsnc 1/1 Running 5 3d18h
kube-ovn-cni-w74td 1/1 Running 6 3d18h
kube-ovn-controller-86d7c8d6c4-4ddpx 0/1 CrashLoopBackOff 15 67m
kube-ovn-controller-86d7c8d6c4-5gv99 0/1 CrashLoopBackOff 14 67m
ovn-central-8ddc7dd8-ww7mf 1/1 Running 3 3d18h
ovs-ovn-dbrg2 1/1 Running 3 3d18h
ovs-ovn-jxjc5 1/1 Running 3 3d18h
ovs-ovn-s5rxz 1/1 Running 3 3d18h
[root@host-10-19-17-139 ~]# kubectl logs kube-ovn-controller-86d7c8d6c4-4ddpx -n kube-ovn
I0908 01:41:12.419714 1 config.go:145] no --kubeconfig, use in-cluster kubernetes config
I0908 01:41:12.422485 1 config.go:136] config is &{ 172.88.120.118 6641 0xc0002f43c0 0xc000071d80 ovn-default 172.30.0.0/16 172.30.0.1 172.30.0.1 ovn-cluster join 100.64.0.0/16 100.64.0.1 cluster-tcp-loadbalancer cluster-udp-loadbalancer kube-ovn-controller-86d7c8d6c4-4ddpx kube-ovn 3 10660}
I0908 01:41:12.426643 1 ovn-nbctl.go:21] ovn-nbctl command lr-list in 3ms
I0908 01:41:12.426673 1 init.go:76] exists routers [ovn-cluster]
I0908 01:41:12.430579 1 ovn-nbctl.go:21] ovn-nbctl command --data=bare --no-heading --columns=_uuid find load_balancer name=cluster-tcp-loadbalancer in 3ms
I0908 01:41:12.430607 1 init.go:100] tcp load balancer 8e7945e4-ad8b-4d71-a5ca-dde31c130307 exists
I0908 01:41:12.433371 1 ovn-nbctl.go:21] ovn-nbctl command --data=bare --no-heading --columns=_uuid find load_balancer name=cluster-udp-loadbalancer in 2ms
I0908 01:41:12.433390 1 init.go:115] udp load balancer 03030083-5495-462a-a9e6-134cfcae340b exists
I0908 01:41:12.450147 1 controller.go:240] Starting OVN controller
I0908 01:41:12.450187 1 controller.go:251] waiting for becoming a leader
I0908 01:41:12.451760 1 leaderelection.go:235] attempting to acquire leader lease kube-ovn/ovn-config...
I0908 01:41:12.455782 1 election.go:74] new leader elected: kube-ovn-controller-86d7c8d6c4-5gv99
I0908 01:41:13.450279 1 controller.go:251] waiting for becoming a leader
I0908 01:41:14.450454 1 controller.go:251] waiting for becoming a leader
I0908 01:41:15.450623 1 controller.go:251] waiting for becoming a leader
I0908 01:41:16.450821 1 controller.go:251] waiting for becoming a leader
I0908 01:41:17.451023 1 controller.go:251] waiting for becoming a leader
I0908 01:41:18.451183 1 controller.go:251] waiting for becoming a leader
I0908 01:41:19.451340 1 controller.go:251] waiting for becoming a leader
I0908 01:41:20.452014 1 controller.go:251] waiting for becoming a leader
I0908 01:41:21.452234 1 controller.go:251] waiting for becoming a leader
I0908 01:41:22.452378 1 controller.go:251] waiting for becoming a leader
I0908 01:41:23.452594 1 controller.go:251] waiting for becoming a leader
I0908 01:41:24.452846 1 controller.go:251] waiting for becoming a leader
I0908 01:41:25.452983 1 controller.go:251] waiting for becoming a leader
I0908 01:41:26.453175 1 controller.go:251] waiting for becoming a leader
I0908 01:41:27.453355 1 controller.go:251] waiting for becoming a leader
I0908 01:41:28.453500 1 controller.go:251] waiting for becoming a leader
I0908 01:41:29.453668 1 controller.go:251] waiting for becoming a leader
I0908 01:41:30.453859 1 controller.go:251] waiting for becoming a leader
I0908 01:41:31.454007 1 controller.go:251] waiting for becoming a leader
I0908 01:41:32.454131 1 controller.go:251] waiting for becoming a leader
I0908 01:41:33.454382 1 controller.go:251] waiting for becoming a leader
I0908 01:41:34.454531 1 controller.go:251] waiting for becoming a leader
I0908 01:41:35.397975 1 leaderelection.go:245] successfully acquired lease kube-ovn/ovn-config
I0908 01:41:35.398247 1 election.go:52] I am the new leader
I0908 01:41:35.398271 1 election.go:74] new leader elected: kube-ovn-controller-86d7c8d6c4-4ddpx
I0908 01:41:35.454701 1 controller.go:251] waiting for becoming a leader
I0908 01:41:35.455291 1 controller.go:261] Waiting for informer caches to sync
I0908 01:41:35.556036 1 controller.go:266] Starting workers
I0908 01:41:35.559845 1 ovn-nbctl.go:21] ovn-nbctl command ls-list in 3ms
I0908 01:41:35.563974 1 ovn-nbctl.go:21] ovn-nbctl command acl-del bsp-net -- acl-add bsp-net to-lport 3000 ip4.src==100.64.0.0/16 allow-related in 4ms
I0908 01:41:38.556300 1 pod.go:397] add pod kube-system/kubernetes-dashboard-66b96fb8d7-qw2mg
I0908 01:41:38.556339 1 controller.go:297] Started workers
I0908 01:41:38.556383 1 pod.go:397] add pod default/dnsutils-ds-9h6k2
I0908 01:41:38.556561 1 pod.go:397] add pod kube-ovn/kube-ovn-controller-86d7c8d6c4-5gv99
I0908 01:41:38.556591 1 pod.go:399] pod kube-ovn/kube-ovn-controller-86d7c8d6c4-5gv99 in host network mode no need for ovn process
I0908 01:41:38.556610 1 pod.go:397] add pod kube-ovn/ovs-ovn-s5rxz
I0908 01:41:38.556623 1 pod.go:399] pod kube-ovn/ovs-ovn-s5rxz in host network mode no need for ovn process
I0908 01:41:38.556636 1 pod.go:397] add pod bsp/bspnginx-594785f757-bptkp
I0908 01:41:38.556733 1 endpoint.go:95] update endpoint default/kubernetes
I0908 01:41:38.556859 1 endpoint.go:95] update endpoint default/my-nginx
I0908 01:41:38.556981 1 endpoint.go:95] update endpoint kube-ovn/nginx-ds
I0908 01:41:38.561332 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add ovn-default kubernetes-dashboard-66b96fb8d7-qw2mg.kube-system -- lsp-set-addresses kubernetes-dashboard-66b96fb8d7-qw2mg.kube-system 0a:00:00:1e:00:10 172.30.0.15 in 4ms
I0908 01:41:38.561930 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add ovn-default dnsutils-ds-9h6k2.default -- lsp-set-addresses dnsutils-ds-9h6k2.default 0a:00:00:1e:00:0d 172.30.0.12 in 5ms
I0908 01:41:38.567424 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.0.1:443 10.19.17.139:6443,10.19.17.140:6443,10.19.17.141:6443 in 10ms
I0908 01:41:38.567449 1 endpoint.go:95] update endpoint kube-ovn/ovn-sb
I0908 01:41:38.569022 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add bsp-net bspnginx-594785f757-bptkp.bsp -- lsp-set-addresses bspnginx-594785f757-bptkp.bsp 0a:00:00:42:00:0c 172.66.0.11 in 12ms
I0908 01:41:38.570164 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add join node-host-10-19-17-141 -- lsp-set-addresses node-host-10-19-17-141 0a:00:00:40:00:05 100.64.0.4 in 13ms
I0908 01:41:38.572023 1 ovn-nbctl.go:21] ovn-nbctl command lsp-set-port-security dnsutils-ds-9h6k2.default 0a:00:00:1e:00:0d 172.30.0.12/16 in 10ms
I0908 01:41:38.574294 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add join node-host-10-19-17-140 -- lsp-set-addresses node-host-10-19-17-140 0a:00:00:40:00:04 100.64.0.3 in 17ms
I0908 01:41:38.574416 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --if-exists lb-del cluster-tcp-loadbalancer 172.88.131.73:80 in 17ms
I0908 01:41:38.574432 1 endpoint.go:95] update endpoint kube-system/kube-dns
I0908 01:41:38.574523 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --if-exists lb-del cluster-tcp-loadbalancer 172.88.187.46:80 in 17ms
I0908 01:41:38.574545 1 endpoint.go:95] update endpoint kube-system/kube-scheduler
I0908 01:41:38.574559 1 endpoint.go:95] update endpoint default/dnsutils-ds
I0908 01:41:38.574667 1 ovn-nbctl.go:21] ovn-nbctl command lsp-set-port-security kubernetes-dashboard-66b96fb8d7-qw2mg.kube-system 0a:00:00:1e:00:10 172.30.0.15/16 in 13ms
I0908 01:41:38.578488 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.66.103:6642 10.19.17.139:6642 in 11ms
I0908 01:41:38.578516 1 endpoint.go:95] update endpoint kube-ovn/ovn-nb
I0908 01:41:38.579704 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add join node-host-10-19-17-139 -- lsp-set-addresses node-host-10-19-17-139 0a:00:00:40:00:03 100.64.0.2 in 23ms
I0908 01:41:38.584396 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.120.118:6641 10.19.17.139:6641 in 5ms
I0908 01:41:38.584415 1 endpoint.go:95] update endpoint kube-system/kube-controller-manager
I0908 01:41:38.584429 1 endpoint.go:95] update endpoint kube-system/kubernetes-dashboard
I0908 01:41:38.587971 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist --policy=dst-ip lr-route-add ovn-cluster 10.19.17.141 100.64.0.4 in 17ms
I0908 01:41:38.588460 1 ovn-nbctl.go:21] ovn-nbctl command lsp-set-port-security bspnginx-594785f757-bptkp.bsp 0a:00:00:42:00:0c 172.66.0.11/16 in 19ms
I0908 01:41:38.590457 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist --policy=dst-ip lr-route-add ovn-cluster 10.19.17.139 100.64.0.2 in 10ms
I0908 01:41:38.592332 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.13.14:80 172.30.0.10:80,172.30.0.11:80,172.30.0.12:80 in 17ms
I0908 01:41:38.592354 1 endpoint.go:95] update endpoint kube-system/metrics-server
I0908 01:41:38.595857 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.221.160:443 172.30.0.15:8443 in 11ms
I0908 01:41:38.596793 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist --policy=dst-ip lr-route-add ovn-cluster 10.19.17.140 100.64.0.3 in 22ms
I0908 01:41:38.597520 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-udp-loadbalancer 172.88.0.2:53 172.30.0.7:53 in 23ms
I0908 01:41:38.597523 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.7.44:443 172.30.0.13:443 in 5ms
I0908 01:41:38.600467 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.0.2:53 172.30.0.7:53 in 2ms
I0908 01:41:38.603449 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.0.2:9153 172.30.0.7:9153 in 2ms
I0908 01:41:39.398229 1 leaderelection.go:281] failed to renew lease kube-ovn/ovn-config: failed to tryAcquireOrRenew context deadline exceeded
I0908 01:41:39.398259 1 election.go:60] I am not leader anymore
F0908 01:41:39.398278 1 election.go:71] leaderelection lost
[root@host-10-19-17-139 ~]# kubectl logs kube-ovn-controller-86d7c8d6c4-5gv99 -n kube-ovn
I0908 01:45:56.516747 1 config.go:145] no --kubeconfig, use in-cluster kubernetes config
I0908 01:45:56.517855 1 config.go:136] config is &{ 172.88.120.118 6641 0xc00024de00 0xc000322520 ovn-default 172.30.0.0/16 172.30.0.1 172.30.0.1 ovn-cluster join 100.64.0.0/16 100.64.0.1 cluster-tcp-loadbalancer cluster-udp-loadbalancer kube-ovn-controller-86d7c8d6c4-5gv99 kube-ovn 3 10660}
I0908 01:45:56.523986 1 ovn-nbctl.go:21] ovn-nbctl command lr-list in 6ms
I0908 01:45:56.524041 1 init.go:76] exists routers [ovn-cluster]
I0908 01:45:56.527053 1 ovn-nbctl.go:21] ovn-nbctl command --data=bare --no-heading --columns=_uuid find load_balancer name=cluster-tcp-loadbalancer in 2ms
I0908 01:45:56.527109 1 init.go:100] tcp load balancer 8e7945e4-ad8b-4d71-a5ca-dde31c130307 exists
I0908 01:45:56.529878 1 ovn-nbctl.go:21] ovn-nbctl command --data=bare --no-heading --columns=_uuid find load_balancer name=cluster-udp-loadbalancer in 2ms
I0908 01:45:56.529896 1 init.go:115] udp load balancer 03030083-5495-462a-a9e6-134cfcae340b exists
I0908 01:45:56.547170 1 controller.go:240] Starting OVN controller
I0908 01:45:56.547270 1 controller.go:251] waiting for becoming a leader
I0908 01:45:56.547297 1 leaderelection.go:235] attempting to acquire leader lease kube-ovn/ovn-config...
I0908 01:45:56.550726 1 election.go:74] new leader elected: kube-ovn-controller-86d7c8d6c4-4ddpx
I0908 01:45:57.547432 1 controller.go:251] waiting for becoming a leader
I0908 01:45:58.547569 1 controller.go:251] waiting for becoming a leader
I0908 01:45:59.547798 1 controller.go:251] waiting for becoming a leader
I0908 01:46:00.547997 1 controller.go:251] waiting for becoming a leader
I0908 01:46:01.548263 1 controller.go:251] waiting for becoming a leader
I0908 01:46:02.548499 1 controller.go:251] waiting for becoming a leader
I0908 01:46:03.548678 1 controller.go:251] waiting for becoming a leader
I0908 01:46:04.548910 1 controller.go:251] waiting for becoming a leader
I0908 01:46:05.549117 1 controller.go:251] waiting for becoming a leader
I0908 01:46:06.549404 1 controller.go:251] waiting for becoming a leader
I0908 01:46:07.549652 1 controller.go:251] waiting for becoming a leader
I0908 01:46:07.874330 1 leaderelection.go:245] successfully acquired lease kube-ovn/ovn-config
I0908 01:46:07.874391 1 election.go:74] new leader elected: kube-ovn-controller-86d7c8d6c4-5gv99
I0908 01:46:07.874402 1 election.go:52] I am the new leader
I0908 01:46:08.549810 1 controller.go:251] waiting for becoming a leader
I0908 01:46:08.549856 1 controller.go:261] Waiting for informer caches to sync
I0908 01:46:08.650135 1 controller.go:266] Starting workers
I0908 01:46:08.653689 1 ovn-nbctl.go:21] ovn-nbctl command ls-list in 3ms
I0908 01:46:08.657295 1 ovn-nbctl.go:21] ovn-nbctl command acl-del bsp-net -- acl-add bsp-net to-lport 3000 ip4.src==100.64.0.0/16 allow-related in 3ms
I0908 01:46:08.664746 1 ovn-nbctl.go:21] ovn-nbctl command ls-list in 2ms
I0908 01:46:08.668067 1 ovn-nbctl.go:21] ovn-nbctl command acl-del join -- acl-add join to-lport 3000 ip4.src==100.64.0.0/16 allow-related in 3ms
I0908 01:46:08.675242 1 ovn-nbctl.go:21] ovn-nbctl command ls-list in 2ms
I0908 01:46:08.678642 1 ovn-nbctl.go:21] ovn-nbctl command acl-del ovn-default -- acl-add ovn-default to-lport 3000 ip4.src==100.64.0.0/16 allow-related in 3ms
I0908 01:46:11.650373 1 controller.go:297] Started workers
I0908 01:46:11.650436 1 endpoint.go:95] update endpoint default/dnsutils-ds
I0908 01:46:11.650441 1 pod.go:397] add pod kube-ovn/kube-ovn-cni-krsnc
I0908 01:46:11.650461 1 pod.go:399] pod kube-ovn/kube-ovn-cni-krsnc in host network mode no need for ovn process
I0908 01:46:11.650472 1 pod.go:397] add pod default/dnsutils-ds-rzvhs
I0908 01:46:11.650848 1 pod.go:397] add pod kube-ovn/kube-ovn-cni-dvgzj
I0908 01:46:11.650862 1 pod.go:399] pod kube-ovn/kube-ovn-cni-dvgzj in host network mode no need for ovn process
I0908 01:46:11.650870 1 pod.go:397] add pod kube-system/kubernetes-dashboard-66b96fb8d7-qw2mg
I0908 01:46:11.650970 1 endpoint.go:95] update endpoint default/kubernetes
I0908 01:46:11.651117 1 pod.go:397] add pod default/dnsutils-ds-6jxwj
I0908 01:46:11.651130 1 endpoint.go:95] update endpoint default/my-nginx
I0908 01:46:11.654262 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add join node-host-10-19-17-139 -- lsp-set-addresses node-host-10-19-17-139 0a:00:00:40:00:03 100.64.0.2 in 3ms
I0908 01:46:11.654264 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.13.14:80 172.30.0.10:80,172.30.0.11:80,172.30.0.12:80 in 3ms
I0908 01:46:11.654432 1 endpoint.go:95] update endpoint kube-ovn/ovn-nb
I0908 01:46:11.654669 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add join node-host-10-19-17-140 -- lsp-set-addresses node-host-10-19-17-140 0a:00:00:40:00:04 100.64.0.3 in 3ms
I0908 01:46:11.658185 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add join node-host-10-19-17-141 -- lsp-set-addresses node-host-10-19-17-141 0a:00:00:40:00:05 100.64.0.4 in 7ms
I0908 01:46:11.658983 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add ovn-default kubernetes-dashboard-66b96fb8d7-qw2mg.kube-system -- lsp-set-addresses kubernetes-dashboard-66b96fb8d7-qw2mg.kube-system 0a:00:00:1e:00:10 172.30.0.15 in 8ms
I0908 01:46:11.659290 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.0.1:443 10.19.17.139:6443,10.19.17.140:6443,10.19.17.141:6443 in 8ms
I0908 01:46:11.659410 1 endpoint.go:95] update endpoint kube-system/kube-dns
I0908 01:46:11.659379 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add ovn-default dnsutils-ds-6jxwj.default -- lsp-set-addresses dnsutils-ds-6jxwj.default 0a:00:00:1e:00:0c 172.30.0.11 in 8ms
I0908 01:46:11.663266 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --if-exists lb-del cluster-tcp-loadbalancer 172.88.131.73:80 in 12ms
I0908 01:46:11.663273 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add ovn-default dnsutils-ds-rzvhs.default -- lsp-set-addresses dnsutils-ds-rzvhs.default 0a:00:00:1e:00:0b 172.30.0.10 in 12ms
I0908 01:46:11.663287 1 endpoint.go:95] update endpoint kube-system/kube-scheduler
I0908 01:46:11.663315 1 endpoint.go:95] update endpoint kube-system/kubernetes-dashboard
I0908 01:46:11.664216 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.120.118:6641 10.19.17.139:6641 in 9ms
I0908 01:46:11.664309 1 endpoint.go:95] update endpoint kube-ovn/nginx-ds
I0908 01:46:11.665498 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist --policy=dst-ip lr-route-add ovn-cluster 10.19.17.141 100.64.0.4 in 7ms
I0908 01:46:11.665622 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist --policy=dst-ip lr-route-add ovn-cluster 10.19.17.140 100.64.0.3 in 10ms
I0908 01:46:11.665718 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist --policy=dst-ip lr-route-add ovn-cluster 10.19.17.139 100.64.0.2 in 11ms
I0908 01:46:11.669490 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.221.160:443 172.30.0.15:8443 in 6ms
I0908 01:46:11.669604 1 endpoint.go:95] update endpoint kube-ovn/ovn-sb
I0908 01:46:11.671947 1 ovn-nbctl.go:21] ovn-nbctl command lsp-set-port-security kubernetes-dashboard-66b96fb8d7-qw2mg.kube-system 0a:00:00:1e:00:10 172.30.0.15/16 in 12ms
I0908 01:46:11.673712 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-udp-loadbalancer 172.88.0.2:53 172.30.0.7:53 in 14ms
I0908 01:46:11.674302 1 ovn-nbctl.go:21] ovn-nbctl command lsp-set-port-security dnsutils-ds-6jxwj.default 0a:00:00:1e:00:0c 172.30.0.11/16 in 14ms
I0908 01:46:11.676889 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.66.103:6642 10.19.17.139:6642 in 7ms
I0908 01:46:11.676908 1 endpoint.go:95] update endpoint kube-system/kube-controller-manager
I0908 01:46:11.676921 1 endpoint.go:95] update endpoint kube-system/metrics-server
I0908 01:46:11.677583 1 ovn-nbctl.go:21] ovn-nbctl command lsp-set-port-security dnsutils-ds-rzvhs.default 0a:00:00:1e:00:0b 172.30.0.10/16 in 14ms
I0908 01:46:11.678272 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --if-exists lb-del cluster-tcp-loadbalancer 172.88.187.46:80 in 13ms
I0908 01:46:11.678876 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.0.2:53 172.30.0.7:53 in 5ms
I0908 01:46:11.680109 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.7.44:443 172.30.0.13:443 in 3ms
I0908 01:46:11.681939 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.0.2:9153 172.30.0.7:9153 in 3ms
I0908 01:46:11.695639 1 pod.go:397] add pod kube-ovn/ovs-ovn-jxjc5
I0908 01:46:11.695658 1 pod.go:399] pod kube-ovn/ovs-ovn-jxjc5 in host network mode no need for ovn process
I0908 01:46:11.695672 1 pod.go:397] add pod kube-ovn/ovs-ovn-s5rxz
I0908 01:46:11.695680 1 pod.go:399] pod kube-ovn/ovs-ovn-s5rxz in host network mode no need for ovn process
I0908 01:46:11.695688 1 pod.go:397] add pod kube-system/coredns-5cd7f99d8f-k6ns6
I0908 01:46:11.700987 1 pod.go:397] add pod kube-ovn/kube-ovn-controller-86d7c8d6c4-4ddpx
I0908 01:46:11.701020 1 pod.go:399] pod kube-ovn/kube-ovn-controller-86d7c8d6c4-4ddpx in host network mode no need for ovn process
I0908 01:46:11.701047 1 pod.go:397] add pod kube-ovn/ovn-central-8ddc7dd8-ww7mf
I0908 01:46:11.701055 1 pod.go:399] pod kube-ovn/ovn-central-8ddc7dd8-ww7mf in host network mode no need for ovn process
I0908 01:46:11.701062 1 pod.go:397] add pod kube-ovn/kube-ovn-cni-w74td
I0908 01:46:11.701070 1 pod.go:399] pod kube-ovn/kube-ovn-cni-w74td in host network mode no need for ovn process
I0908 01:46:11.701097 1 pod.go:397] add pod default/dnsutils-ds-9h6k2
I0908 01:46:11.701594 1 pod.go:397] add pod kube-ovn/ovs-ovn-dbrg2
I0908 01:46:11.701607 1 pod.go:399] pod kube-ovn/ovs-ovn-dbrg2 in host network mode no need for ovn process
I0908 01:46:11.701616 1 pod.go:397] add pod kube-system/metrics-server-796f44f5cb-vb58n
I0908 01:46:11.702280 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add ovn-default coredns-5cd7f99d8f-k6ns6.kube-system -- lsp-set-addresses coredns-5cd7f99d8f-k6ns6.kube-system 0a:00:00:1e:00:08 172.30.0.7 in 6ms
I0908 01:46:11.706039 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add ovn-default dnsutils-ds-9h6k2.default -- lsp-set-addresses dnsutils-ds-9h6k2.default 0a:00:00:1e:00:0d 172.30.0.12 in 4ms
I0908 01:46:11.707082 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add ovn-default metrics-server-796f44f5cb-vb58n.kube-system -- lsp-set-addresses metrics-server-796f44f5cb-vb58n.kube-system 0a:00:00:1e:00:0e 172.30.0.13 in 5ms
I0908 01:46:11.709073 1 ovn-nbctl.go:21] ovn-nbctl command lsp-set-port-security dnsutils-ds-9h6k2.default 0a:00:00:1e:00:0d 172.30.0.12/16 in 3ms
I0908 01:46:11.711825 1 ovn-nbctl.go:21] ovn-nbctl command lsp-set-port-security coredns-5cd7f99d8f-k6ns6.kube-system 0a:00:00:1e:00:08 172.30.0.7/16 in 9ms
I0908 01:46:11.713556 1 ovn-nbctl.go:21] ovn-nbctl command lsp-set-port-security metrics-server-796f44f5cb-vb58n.kube-system 0a:00:00:1e:00:0e 172.30.0.13/16 in 6ms
I0908 01:46:11.716487 1 pod.go:397] add pod bsp/bspnginx-594785f757-bptkp
I0908 01:46:11.720396 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lsp-add bsp-net bspnginx-594785f757-bptkp.bsp -- lsp-set-addresses bspnginx-594785f757-bptkp.bsp 0a:00:00:42:00:0c 172.66.0.11 in 3ms
I0908 01:46:11.720665 1 pod.go:397] add pod kube-ovn/kube-ovn-controller-86d7c8d6c4-5gv99
I0908 01:46:11.720682 1 pod.go:399] pod kube-ovn/kube-ovn-controller-86d7c8d6c4-5gv99 in host network mode no need for ovn process
I0908 01:46:11.723500 1 ovn-nbctl.go:21] ovn-nbctl command lsp-set-port-security bspnginx-594785f757-bptkp.bsp 0a:00:00:42:00:0c 172.66.0.11/16 in 3ms
I0908 01:46:29.490090 1 endpoint.go:95] update endpoint default/kubernetes
I0908 01:46:29.498270 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.0.1:443 10.19.17.139:6443 in 8ms
I0908 01:46:29.505655 1 endpoint.go:95] update endpoint default/kubernetes
I0908 01:46:29.519146 1 ovn-nbctl.go:21] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 172.88.0.1:443 10.19.17.139:6443,10.19.17.140:6443,10.19.17.141:6443 in 13ms
I0908 01:47:02.118376 1 leaderelection.go:281] failed to renew lease kube-ovn/ovn-config: failed to tryAcquireOrRenew context deadline exceeded
I0908 01:47:02.118398 1 election.go:60] I am not leader anymore
F0908 01:47:02.118413 1 election.go:71] leaderelection lost
[root@host-10-19-17-139 ~]#
First I create a subnet and bind it to a namespace; then I delete the subnet, but the namespace's annotations still contain the info of the deleted subnet.
When the controller restarts, all running pods are added to AddPodQueue, and it consumes a lot of time checking/creating logical switch ports and port security. If a pod already has a Pod IP, we could only check that its static route exists.
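The optimization above can be sketched as follows. This is illustrative Python, not the Go controller code; the annotation key is assumed from kube-ovn's ovn.kubernetes.io/* convention:

```python
# Sketch: on controller restart, skip full lsp/port-security setup for pods
# that already carry an allocated-IP annotation, and only run cheap checks
# (e.g. that the static route still exists) for them.
IP_ANNOTATION = "ovn.kubernetes.io/ip_address"  # assumed annotation key

def needs_full_setup(pod):
    """Return True if the pod must go through full add-pod processing."""
    annotations = pod.get("metadata", {}).get("annotations", {})
    return IP_ANNOTATION not in annotations

def enqueue_on_restart(pods):
    """Split running pods into a full-setup queue and a verify-only queue."""
    full, verify_only = [], []
    for pod in pods:
        (full if needs_full_setup(pod) else verify_only).append(pod)
    return full, verify_only

pods = [
    {"metadata": {"name": "a", "annotations": {IP_ANNOTATION: "172.30.0.2"}}},
    {"metadata": {"name": "b", "annotations": {}}},
]
full, verify_only = enqueue_on_restart(pods)
print([p["metadata"]["name"] for p in full])         # ['b']
print([p["metadata"]["name"] for p in verify_only])  # ['a']
```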
When ovn-db runs in HA mode, only the leader is in Ready status and can be selected by the Service. However, the not-ready status of the other replicas confuses users; we need to find a better way to resolve this.
Currently, when a pod uses a cluster IP to access itself, the packet is dropped. We need to trace where the packet is dropped and modify the flows so that the packet can reach the pod itself.
The pinger should check an external address to validate external connectivity.
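One way to sketch such a check, using a TCP dial as a stand-in for ICMP so the sketch needs no raw-socket privileges; the function name and the idea of a configurable target address are assumptions, not the pinger's actual implementation:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// checkExternal reports whether addr ("host:port") accepts a TCP connection
// within timeoutMs milliseconds. A real pinger would likely use ICMP; the
// address to probe (e.g. an external DNS server) is assumed to come from
// pinger configuration.
func checkExternal(addr string, timeoutMs int) bool {
	conn, err := net.DialTimeout("tcp", addr, time.Duration(timeoutMs)*time.Millisecond)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	// Probe a local listener so the demo also works offline.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	fmt.Println(checkExternal(ln.Addr().String(), 1000)) // true
}
```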
cat ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    ovn.kubernetes.io/allow: 100.65.0.0/16,100.66.0.0/16
    ovn.kubernetes.io/cidr: 100.66.0.0/16
    ovn.kubernetes.io/exclude_ips: 100.66.0.0..100.66.0.10
    ovn.kubernetes.io/gateway: 100.66.0.1
    ovn.kubernetes.io/logical_switch: ovn-subnet
    ovn.kubernetes.io/private: "true"
  name: ovn
cat deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    name: tomcat
  name: tomcat
  namespace: ovn
spec:
  replicas: 2
  selector:
    matchLabels:
      name: tomcat
  template:
    metadata:
      annotations:
        ovn.kubernetes.io/ip_pool: 100.66.0.20,100.66.0.21,100.66.0.22,100.66.0.23
      labels:
        name: tomcat
    spec:
      containers:
      - image: tomcat:8.5.30
        imagePullPolicy: IfNotPresent
        name: tomcat
        ports:
        - containerPort: 8080
          protocol: TCP
List of pods:
[root@centos1 pods]# kubectl -n ovn get po -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
tomcat-695fb64484-6twxr 1/1 Running 0 56m 100.67.67.3 centos2 <none>
tomcat-695fb64484-hk99r 1/1 Running 0 56m 100.64.13.195 centos3 <none>
The pods' IPs are not the ones I defined in the ip_pool annotation.
Part of the kube-ovn-controller log:
I0710 09:11:09.611936 1 ovn-nbctl.go:20] ovn-nbctl command --wait=sb --may-exist lb-add cluster-udp-loadbalancer 10.96.0.10:53 100.66.208.131:53,100.66.208.132:53 in 22ms
I0710 09:11:09.629484 1 ovn-nbctl.go:20] ovn-nbctl command --wait=sb --may-exist lb-add cluster-tcp-loadbalancer 10.96.0.10:53 100.66.208.131:53,100.66.208.132:53 in 17ms
I0710 09:23:35.959721 1 ovn-nbctl.go:20] ovn-nbctl command ls-list in 10ms
I0710 09:23:36.015283 1 ovn-nbctl.go:20] ovn-nbctl command --wait=sb --may-exist ls-add ovn-subnet -- set logical_switch ovn-subnet other_config:subnet=100.66.0.0/16 -- set logical_switch ovn-subnet other_config:gateway=100.66.0.1 -- set logical_switch ovn-subnet other_config:exclude_ips=100.66.0.0..100.66.0.10 -- acl-add ovn-subnet to-lport 3000 ip4.src==100.64.0.0/16 allow-related in 55ms
I0710 09:23:36.015463 1 ovn-nbctl.go:108] create route port for switch ovn-subnet
I0710 09:23:36.015530 1 ovn-nbctl.go:196] add ovn-subnet to ovn-cluster with ip: 100.66.0.1/16, mac :00:00:00:10:1B:D9
I0710 09:23:36.033889 1 ovn-nbctl.go:20] ovn-nbctl command --wait=sb --may-exist lsp-add ovn-subnet ovn-subnet-ovn-cluster -- set logical_switch_port ovn-subnet-ovn-cluster type=router -- set logical_switch_port ovn-subnet-ovn-cluster addresses="00:00:00:10:1B:D9" -- set logical_switch_port ovn-subnet-ovn-cluster options:router-port=ovn-cluster-ovn-subnet in 18ms
I0710 09:23:36.049717 1 ovn-nbctl.go:20] ovn-nbctl command --wait=sb --may-exist lrp-add ovn-cluster ovn-cluster-ovn-subnet 00:00:00:10:1B:D9 100.66.0.1/16 in 15ms
I0710 09:23:36.060005 1 ovn-nbctl.go:20] ovn-nbctl command --wait=sb --may-exist ls-lb-add ovn-subnet cluster-tcp-loadbalancer in 10ms
I0710 09:23:36.068866 1 ovn-nbctl.go:20] ovn-nbctl command --wait=sb --may-exist ls-lb-add ovn-subnet cluster-udp-loadbalancer in 8ms
I0710 09:23:36.074189 1 ovn-nbctl.go:20] ovn-nbctl command acl-del ovn-subnet -- acl-add ovn-subnet to-lport 1000 inport=="ovn-subnet-ovn-cluster" drop -- acl-add ovn-subnet to-lport 1001 ip4.src==100.65.0.0/16 allow-related -- acl-add ovn-subnet to-lport 1001 ip4.src==100.66.0.0/16 allow-related in 5ms
I0710 09:25:22.519439 1 pod.go:440] add ip pool address pod msxu-ovn/tomcat-695fb64484-6twxr
I0710 09:25:22.584084 1 ovn-nbctl.go:20] ovn-nbctl command --wait=sb --may-exist lsp-add ovn-subnet tomcat-695fb64484-6twxr.msxu-ovn -- lsp-set-addresses tomcat-695fb64484-6twxr.msxu-ovn dynamic 100.66.0.20 in 16ms
I0710 09:25:22.587945 1 ovn-nbctl.go:20] ovn-nbctl command get logical_switch_port tomcat-695fb64484-6twxr.msxu-ovn dynamic-addresses in 3ms
I0710 09:25:22.598657 1 ovn-nbctl.go:20] ovn-nbctl command get logical_switch ovn-subnet other_config:subnet other_config:gateway in 10ms
I0710 09:25:22.606904 1 pod.go:440] add ip pool address pod msxu-ovn/tomcat-695fb64484-hk99r
I0710 09:25:22.663104 1 ovn-nbctl.go:20] ovn-nbctl command --wait=sb --may-exist lsp-add ovn-subnet tomcat-695fb64484-hk99r.msxu-ovn -- lsp-set-addresses tomcat-695fb64484-hk99r.msxu-ovn dynamic 100.66.0.21 in 14ms
I0710 09:25:22.667146 1 ovn-nbctl.go:20] ovn-nbctl command get logical_switch_port tomcat-695fb64484-hk99r.msxu-ovn dynamic-addresses in 3ms
I0710 09:25:22.671810 1 ovn-nbctl.go:20] ovn-nbctl command get logical_switch ovn-subnet other_config:subnet other_config:gateway in 4ms
I0710 09:25:25.298113 1 pod.go:593] update pod msxu-ovn/tomcat-695fb64484-6twxr
I0710 09:25:25.319672 1 ovn-nbctl.go:20] ovn-nbctl command --wait=sb --may-exist --policy=src-ip lr-route-add ovn-cluster 100.67.67.3 100.64.0.3 in 21ms
I0710 09:25:25.711946 1 pod.go:593] update pod msxu-ovn/tomcat-695fb64484-hk99r
I0710 09:25:25.736622 1 ovn-nbctl.go:20] ovn-nbctl command --wait=sb --may-exist --policy=src-ip lr-route-add ovn-cluster 100.64.13.195 100.64.0.4 in 24ms
Any help is appreciated.
I deployed kube-ovn on bare metal. For a Service of type LoadBalancer, how do I obtain an external IP, and what configuration is needed?
Kube-OVN now supports using static routes to connect the container network with external networks.
However, this approach has some disadvantages.
We can learn from Calico's BGP solution, which uses a software route reflector to dynamically announce routes to the upstream router.
Create a Deployment with the following YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: kube-ovn
  name: static-deployment
  labels:
    app: static-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-deployment
  template:
    metadata:
      labels:
        app: static-deployment
      annotations:
        ovn.kubernetes.io/ip_pool: "172.50.1.5"
    spec:
      containers:
      - name: static-deployment
        image: centos
        command:
        - sleep
        - "3600"
and get the following pod:
# kubectl get po -n kube-ovn -o wide | grep static
static-deployment-56ff4bdd56-lmpm2 1/1 Running 0 19s 172.50.1.5 10.18.124.1 <none> <none>
Then check the LSP info:
# kubectl ko nbctl list logical_switch_port static-deployment-56ff4bdd56-lmpm2.kube-ovn
_uuid : 386fa1f7-a203-48f5-a709-e07cf4b078f8
addresses : ["dynamic 172.50.1.5"]
dhcpv4_options : []
dhcpv6_options : []
dynamic_addresses : "0a:00:00:32:01:06 172.50.1.8"
enabled : []
external_ids : {}
name : "static-deployment-56ff4bdd56-lmpm2.kube-ovn"
options : {}
parent_name : []
port_security : ["0a:00:00:32:01:06 172.50.1.5/24"]
tag : []
tag_request : []
type : ""
up : true
The addresses and dynamic_addresses fields hold different IP addresses.
First I deleted kube-ovn-controller, then I deleted my pod; the LSP corresponding to the deleted pod still exists in the ovn-northbound DB.
Although this sequence of operations is abnormal, the controller would handle it better with garbage collection.
I've created a test namespace 'test-ns':
$ kubectl get ns test-ns -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{"ovn.kubernetes.io/cidr":"10.17.0.0/24","ovn.kubernetes.io/exclude_ips":"10.17.0.1..10.17.0.10","ovn.kubernetes.io/gateway":"10.17.0.1","ovn.kubernetes.io/logical_switch":"test-ns-subnet"},"name":"test-ns"}}
    ovn.kubernetes.io/cidr: 10.17.0.0/24
    ovn.kubernetes.io/exclude_ips: 10.17.0.1..10.17.0.10
    ovn.kubernetes.io/gateway: 10.17.0.1
    ovn.kubernetes.io/logical_switch: test-ns-subnet
  creationTimestamp: "2019-05-22T06:14:31Z"
  name: test-ns
  resourceVersion: "27634"
  selfLink: /api/v1/namespaces/test-ns
  uid: dcc8c1e6-7c58-11e9-98e9-525400cecc16
It worked when I created a Deployment in 'test-ns' with pod IPs allocated automatically; then I moved on to test static IP allocation:
apiVersion: v1
kind: Pod
metadata:
  name: static-ip
  namespace: test-ns
  annotations:
    ovn.kubernetes.io/ip_address: 10.17.0.100
    ovn.kubernetes.io/mac_address: 00:00:00:53:6B:B6
spec:
  containers:
  - name: static-ip
    image: nginx:alpine
It failed with the following errors:
$ kubectl describe pod -n test-ns static-ip
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned test-ns/static-ip to 10.100.97.42
Warning FailedCreatePodSandBox 18m kubelet, 10.100.97.42 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f3e813745324dcc73800977d830a6d4ecf93f6d0a8dc89600c754d93be29a9c5" network for pod "static-ip": NetworkPlugin cni failed to set up pod "static-ip_test-ns" network: json: cannot unmarshal string into Go value of type request.PodResponse
and these errors in kube-ovn-controller:
I0526 13:53:10.924145 1 pod.go:335] add pod test-ns/static-ip
E0526 13:53:10.924287 1 pod.go:352] validate pod test-ns/static-ip failed, 10.17.0.100 is not a valid ovn.kubernetes.io/ip_address
E0526 13:53:10.924457 1 pod.go:154] error syncing 'test-ns/static-ip': 10.17.0.100 is not a valid ovn.kubernetes.io/ip_address, requeuing
I0526 13:53:10.924616 1 event.go:221] Event(v1.ObjectReference{Kind:"Pod", Namespace:"test-ns", Name:"static-ip", UID:"12eda5d4-7fbc-11e9-98e9-525400cecc16", APIVersion:"v1", ResourceVersion:"753112", FieldPath:""}): type: 'Warning' reason: 'ValidatePodNetworkFailed' 10.17.0.100 is not a valid ovn.kubernetes.io/ip_address
When a Service's targetPort is a string, the port number should be inferred from the related pod spec. Currently we infer it from the endpoint spec instead, which leads to the load balancer not being created.
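The proposed inference can be sketched with minimal stand-in types (the real controller would use k8s.io/api/core/v1 types instead):

```go
package main

import "fmt"

// ContainerPort and Container are minimal stand-ins for the corresponding
// k8s.io/api/core/v1 types.
type ContainerPort struct {
	Name          string
	ContainerPort int32
}

type Container struct {
	Ports []ContainerPort
}

// resolveTargetPort maps a named targetPort to its numeric containerPort by
// scanning the pod's containers, as the issue proposes, instead of reading
// the endpoint spec.
func resolveTargetPort(containers []Container, name string) (int32, bool) {
	for _, c := range containers {
		for _, p := range c.Ports {
			if p.Name == name {
				return p.ContainerPort, true
			}
		}
	}
	return 0, false
}

func main() {
	pod := []Container{{Ports: []ContainerPort{{Name: "http", ContainerPort: 8080}}}}
	port, ok := resolveTargetPort(pod, "http")
	fmt.Println(port, ok) // 8080 true
}
```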
Currently we use ovn-nbctl to communicate with the OVN DB, which cannot use a cache to reduce read access. go-ovn provides a Go SDK with an embedded cache for talking to the OVN DB; we could use it to reduce kube-ovn's operation latency.
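The caching benefit can be illustrated without modeling the go-ovn API itself; this read-through cache sketch only shows why an embedded cache cuts read traffic to the backend:

```go
package main

import (
	"fmt"
	"sync"
)

// cachedReader serves reads from a local map and only falls through to the
// slow backend on a miss; backend stands in for an ovn-nbctl / OVSDB lookup.
type cachedReader struct {
	mu      sync.RWMutex
	cache   map[string]string
	backend func(key string) string
	misses  int
}

func (r *cachedReader) Get(key string) string {
	r.mu.RLock()
	v, ok := r.cache[key]
	r.mu.RUnlock()
	if ok {
		return v
	}
	r.mu.Lock()
	defer r.mu.Unlock()
	if v, ok := r.cache[key]; ok { // re-check after taking the write lock
		return v
	}
	r.misses++
	v = r.backend(key)
	r.cache[key] = v
	return v
}

func main() {
	r := &cachedReader{cache: map[string]string{}, backend: func(k string) string { return "addr-of-" + k }}
	r.Get("lsp1")
	r.Get("lsp1") // served from cache, no second backend read
	fmt.Println(r.misses) // 1
}
```

go-ovn keeps such a cache in sync via OVSDB update notifications, which is what makes it faster than spawning ovn-nbctl per read.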
I installed kube-ovn following this quick-start tutorial:
https://github.com/alauda/kube-ovn/wiki/%E5%BF%AB%E9%80%9F%E5%AE%89%E8%A3%85
Every step went fine and produced the expected results; nodes can ping each other, pods can ping each other, and nodes and pods can reach each other.
However, inside containers I can only reach external networks by IP; domain names cannot be resolved. What could be the reason?
The picture below shows the kube-dns details; the Endpoints look normal.
The nodes (hosts) can reach external networks by domain name and resolve names normally.
With 10000 pods, ovnsb consumes 100% CPU; in cluster mode, raft reads and writes hang and further network operations no longer take effect.
The code explicitly notes DO NOT add ovn dns/lb to node switch.
Doesn't that make Services unreachable from the nodes?
ovn-central exposes Services for the northbound and southbound databases, and kube-ovn-controller needs to access them.
Since kube-ovn-controller runs with hostNetwork, it cannot reach the northbound/southbound databases without kube-proxy's iptables rules.
Why not add the loadBalancer to the node switch?
kube-ovn component status after "kubectl apply -f kube-ovn.yaml" in step 3:
# kubectl -n kube-ovn get po
NAME READY STATUS RESTARTS AGE
kube-ovn-cni-nj5rp 0/1 CrashLoopBackOff 8 18m
kube-ovn-cni-rxnmw 0/1 CrashLoopBackOff 8 18m
kube-ovn-cni-zrkkd 0/1 CrashLoopBackOff 8 18m
kube-ovn-controller-75fbd69f4-cpz26 0/1 CrashLoopBackOff 8 18m
kube-ovn-controller-75fbd69f4-w4xbt 0/1 CrashLoopBackOff 8 18m
ovn-central-74f6567dfd-5m6fd 1/1 Running 0 13h
ovs-ovn-jlx7j 1/1 Running 0 13h
ovs-ovn-llfhm 1/1 Running 0 13h
ovs-ovn-n5xlp 1/1 Running 0 13h
Logs of kube-ovn-cni:
# kubectl -n kube-ovn logs -f kube-ovn-cni-nj5rp
I0709 01:57:49.380862 28682 config.go:128] no --kubeconfig, use in-cluster kubernetes config
I0709 01:57:49.382146 28682 config.go:120] daemon config: &{eth0 1442 /run/openvswitch/kube-ovn-daemon.sock /run/openvswitch/db.sock 0xc00018f0e0 centos93 10.96.0.0/12 10665}
panic: runtime error: index out of range
goroutine 1 [running]:
github.com/alauda/kube-ovn/pkg/daemon.InitNodeGateway(0xc000110600, 0x0, 0x0)
/Users/oilbeater/workspace/src/github.com/alauda/kube-ovn/pkg/daemon/init.go:32 +0x68b
main.main()
/Users/oilbeater/workspace/src/github.com/alauda/kube-ovn/cmd/daemon/cniserver.go:27 +0x10d
Logs of kube-ovn-controller:
# kubectl -n kube-ovn logs -f kube-ovn-controller-75fbd69f4-cpz26
I0709 02:02:48.643986 1 config.go:143] no --kubeconfig, use in-cluster kubernetes config
I0709 02:02:48.645605 1 config.go:134] config is &{ 10.98.181.28 6641 0xc00011c900 ovn-default 10.16.0.0/16 10.16.0.1 10.16.0.1 ovn-cluster join 100.64.0.0/16 100.64.0.1 cluster-tcp-loadbalancer cluster-udp-loadbalancer kube-ovn-controller-75fbd69f4-cpz26 kube-ovn 3 10660}
I0709 02:02:48.651209 1 ovn-nbctl.go:20] ovn-nbctl command lr-list in 5ms
I0709 02:02:48.651257 1 init.go:81] exists routers [ovn-cluster]
I0709 02:02:48.655191 1 ovn-nbctl.go:20] ovn-nbctl command --data=bare --no-heading --columns=_uuid find load_balancer name=cluster-tcp-loadbalancer in 3ms
I0709 02:02:48.655257 1 init.go:105] tcp load balancer 079ec34a-8e45-43a4-9bc7-5585c4d18fc6 exists
I0709 02:02:48.658555 1 ovn-nbctl.go:20] ovn-nbctl command --data=bare --no-heading --columns=_uuid find load_balancer name=cluster-udp-loadbalancer in 3ms
I0709 02:02:48.658580 1 init.go:120] udp load balancer 824cdb69-1870-44ee-895f-b114b8ddf5b9 exists
I0709 02:02:48.661792 1 ovn-nbctl.go:20] ovn-nbctl command ls-list in 3ms
I0709 02:02:48.661846 1 init.go:60] exists switches [join]
E0709 02:02:48.690642 1 init.go:48] patch namespace kube-ovn failed namespaces "kube-ovn" is forbidden: User "system:serviceaccount:kube-ovn:ovn" cannot patch resource "namespaces" in API group "" in the namespace "kube-ovn"
E0709 02:02:48.690959 1 controller.go:53] init default switch failed namespaces "kube-ovn" is forbidden: User "system:serviceaccount:kube-ovn:ovn" cannot patch resource "namespaces" in API group "" in the namespace "kube-ovn"
Kubernetes version:
# kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.5", GitCommit:"51dd616cdd25d6ee22c83a858773b607328a18ec", GitTreeState:"clean", BuildDate:"2019-01-16T18:24:45Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.5", GitCommit:"51dd616cdd25d6ee22c83a858773b607328a18ec", GitTreeState:"clean", BuildDate:"2019-01-16T18:14:49Z", GoVersion:"go1.10.7", Compiler:"gc", Platform:"linux/amd64"}
Any help is appreciated.
I notice the pod IP can be declared in the ovn.kubernetes.io/ip_address or ovn.kubernetes.io/ip_pool annotation. Is there a graceful way to suggest an IP (or IPs) based on the CIDR so as to avoid conflicts? 😝
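As a sketch of such a suggestion helper (not kube-ovn's actual IPAM; the function names are hypothetical), one could scan the CIDR for the first address that is neither the network address nor already in use:

```go
package main

import (
	"fmt"
	"net"
)

// inc increments an IP address in place.
func inc(ip net.IP) {
	for i := len(ip) - 1; i >= 0; i-- {
		ip[i]++
		if ip[i] != 0 {
			break
		}
	}
}

// nextFreeIP returns the first address in cidr that is neither the network
// address nor listed in used (the gateway and any exclude_ips would be put
// into used by the caller). This is a sketch, not kube-ovn's allocator.
func nextFreeIP(cidr string, used map[string]bool) string {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return ""
	}
	network := ipnet.IP.String()
	ip := make(net.IP, len(ipnet.IP))
	copy(ip, ipnet.IP)
	for ; ipnet.Contains(ip); inc(ip) {
		s := ip.String()
		if s != network && !used[s] {
			return s
		}
	}
	return ""
}

func main() {
	fmt.Println(nextFreeIP("10.17.0.0/24", map[string]bool{"10.17.0.1": true})) // 10.17.0.2
}
```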
Before applying the ACL, I have two tomcat pods as servers and one client pod, as shown below:
#kubectl get po -o wide -l app=sqian-tomcat
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
sqian-tomcat-7d69fd645d-gzxn8 1/1 Running 0 32m 10.250.0.13 kubemaster03 <none> <none>
sqian-tomcat-7d69fd645d-xfbf4 1/1 Running 0 7m41s 10.250.0.2 kubemaster01 <none> <none>
# kubectl get po -o wide -l app=test
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-app-67b457cf9c-8wpd7 1/1 Running 0 5m41s 10.250.0.6 kubemaster01 <none> <none>
and I can access the service in the tomcat pods normally from the client pod and from the node, as shown below:
# curl 10.250.0.13:8080
<h1>Hello World</h1>
# curl 10.250.0.2:8080
<h1>Hello World</h1>
# kubectl exec -it test-app-67b457cf9c-8wpd7 curl 10.250.0.13:8080
<h1>Hello World</h1>
# kubectl exec -it test-app-67b457cf9c-8wpd7 curl 10.250.0.2:8080
<h1>Hello World</h1>
Now I want to apply a network policy that lets only my client pod access the service.
The NetworkPolicy YAML is:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sqian-tomcat
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: sqian-tomcat
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: test
    ports:
    - protocol: TCP
      port: 8080
When I create the above NetworkPolicy with kubectl create -f, I can see the port group, address_set, and ACLs in the OVN northbound database:
acl:
# ovn-nbctl list acl
_uuid : 21a9dccd-2767-4136-bdd3-23f0a4f65fd8
action : allow-related
direction : to-lport
external_ids : {}
log : false
match : "ip4.src == $sqian.tomcat.default.ingress.allow && tcp.dst == 8080 && ip4.dst == $sqian.tomcat.default_ip4"
meter : []
name : []
priority : 2001
severity : []
_uuid : 9888de52-f16d-453a-9afc-976b9ddff86e
action : drop
direction : to-lport
external_ids : {}
log : false
match : "ip4.dst == $sqian.tomcat.default_ip4"
meter : []
name : []
priority : 2000
severity : []
_uuid : 33bba6b2-f5b1-42e0-88a3-13c95c797d42
action : drop
direction : to-lport
external_ids : {}
log : false
match : "ip4.src == $sqian.tomcat.default.ingress.except && ip4.dst == $sqian.tomcat.default_ip4"
meter : []
name : []
priority : 2002
severity : []
pg:
# ovn-nbctl list port-group
_uuid : ea47e009-101b-4239-91f4-da292cbd92a4
acls : [21a9dccd-2767-4136-bdd3-23f0a4f65fd8, 33bba6b2-f5b1-42e0-88a3-13c95c797d42, 9888de52-f16d-453a-9afc-976b9ddff86e]
external_ids : {}
name : sqian.tomcat.default
ports : [61a6f2e1-7a8c-45eb-bad5-6f5ab3b2bf7f, ea8d82d7-707f-4f52-816c-e36c88bbf9ca]
address_set:
# ovn-nbctl list address_set
_uuid : 4ecc0d3c-790b-4e39-9f77-50930734ba5a
addresses : ["10.250.0.6"]
external_ids : {}
name : sqian.tomcat.default.ingress.allow
_uuid : 9309c6e7-5394-4451-a530-1c8782529378
addresses : []
external_ids : {}
name : sqian.tomcat.default.ingress.except
Then I access my tomcat service:
# curl 10.250.0.13:8080
curl: (7) Failed connect to 10.250.0.13:8080; Connection timed out
# curl 10.250.0.2:8080
curl: (7) Failed connect to 10.250.0.2:8080; Connection timed out
# kubectl exec -it test-app-67b457cf9c-8wpd7 curl 10.250.0.13:8080
curl: (7) Failed connect to 10.250.0.13:8080; Connection timed out
command terminated with exit code 7
# kubectl exec -it test-app-67b457cf9c-8wpd7 curl 10.250.0.2:8080
<h1>Hello World</h1>
As we can see, from the host we cannot access the tomcat service.
From the client pod, I can access 10.250.0.2:8080 but cannot access 10.250.0.13:8080.
Also, from the host I cannot ping the client pod:
# ping 10.250.0.6
PING 10.250.0.6 (10.250.0.6) 56(84) bytes of data.
The attached file (1.txt) is the output of ovn-sbctl lflow-list.
I just tested kube-ovn as the network plugin on a three-node k8s 1.14.2 cluster and hit an error:
root@k8s402:~# kubectl get node
NAME STATUS ROLES AGE VERSION
10.100.97.41 Ready,SchedulingDisabled master 4h4m v1.14.2
10.100.97.42 Ready node 4h4m v1.14.2
10.100.97.43 Ready node 4h4m v1.14.2
root@k8s402:~# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
root@k8s402:~# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-ovn kube-ovn-cni-2n5n5 1/1 Running 2 3h9m
kube-ovn kube-ovn-cni-57q94 1/1 Running 2 3h9m
kube-ovn kube-ovn-cni-99qzt 1/1 Running 1 3h9m
kube-ovn kube-ovn-controller-db57b48cb-cvrz4 1/1 Running 1 3h9m
kube-ovn kube-ovn-controller-db57b48cb-n9lf5 1/1 Running 1 3h9m
kube-ovn ovn-central-8658697f7f-2ngrx 1/1 Running 1 3h56m
kube-ovn ovs-ovn-2t48n 1/1 Running 1 3h47m
kube-ovn ovs-ovn-4fmh6 1/1 Running 1 3h47m
kube-ovn ovs-ovn-qvp2w 1/1 Running 1 3h47m
kube-system coredns-55f46dd959-d2r2l 0/1 ContainerCreating 0 20m
kube-system coredns-55f46dd959-ldnnw 0/1 ContainerCreating 0 20m
kube-system heapster-fdb7596d6-nvrld 0/1 ContainerCreating 0 20m
kube-system kubernetes-dashboard-68ddcc97fc-r2g66 0/1 ContainerCreating 0 20m
kube-system metrics-server-6c898b5b8b-kr6kd 0/1 ContainerCreating 0 20m
Describe output for a pod stuck in ContainerCreating:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 25m default-scheduler Successfully assigned kube-system/kubernetes-dashboard-68ddcc97fc-r2g66 to 10.100.97.42
Warning FailedCreatePodSandBox 25m kubelet, 10.100.97.42 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f305fdde7222127f5d5d7540fb1d517fc29c13766a2b173e1cad9ca3a1faa0fe" network for pod "kubernetes-dashboard-68ddcc97fc-r2g66": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
Warning FailedCreatePodSandBox 25m kubelet, 10.100.97.42 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dda174779a31b49ef1824af0c653b7166b3d289fb56bf2cceff495471a644cd3" network for pod "kubernetes-dashboard-68ddcc97fc-r2g66": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
Warning FailedCreatePodSandBox 24m kubelet, 10.100.97.42 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "52a4ae0fa25b54f3e502f1559883ae0369710f3c627ac7e921ef4c8937b4cd9d" network for pod "kubernetes-dashboard-68ddcc97fc-r2g66": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
Warning FailedCreatePodSandBox 24m kubelet, 10.100.97.42 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fbdc5017dedb438c9ce49e9aafc47cda1adc74c3693efa219fc397f0a2b5f540" network for pod "kubernetes-dashboard-68ddcc97fc-r2g66": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
Warning FailedCreatePodSandBox 24m kubelet, 10.100.97.42 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "dea1c332e603aa5521da948a85a17913be382e8721f133ebf5af3204e0f20616" network for pod "kubernetes-dashboard-68ddcc97fc-r2g66": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
Warning FailedCreatePodSandBox 23m kubelet, 10.100.97.42 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "cd98c6024d9f1ecdcc32f675d77f4b6da0340927302df95f030fcf91545770da" network for pod "kubernetes-dashboard-68ddcc97fc-r2g66": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
Warning FailedCreatePodSandBox 23m kubelet, 10.100.97.42 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e541f93469d66a8d2bf17385ef69cf0118b71f8bcda9140abcee5fcdef55dfe8" network for pod "kubernetes-dashboard-68ddcc97fc-r2g66": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
Warning FailedCreatePodSandBox 23m kubelet, 10.100.97.42 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "116fda52d69c71d118e9602177d1c44f9e9b8a353ac14c98c178622d8c28c1cd" network for pod "kubernetes-dashboard-68ddcc97fc-r2g66": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
Warning FailedCreatePodSandBox 22m kubelet, 10.100.97.42 Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f84d2fb8be033b48c6d35b316bf9f708dde68d2935e8129df5201a33bf2a230c" network for pod "kubernetes-dashboard-68ddcc97fc-r2g66": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
Normal SandboxChanged 21m (x12 over 25m) kubelet, 10.100.97.42 Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 23s (x63 over 22m) kubelet, 10.100.97.42 (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9624d2476b22166ad59d3fc3466102ee00e2deda2eaf431ba2bdd26b3c02c147" network for pod "kubernetes-dashboard-68ddcc97fc-r2g66": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
The node's kubelet logs show similar errors:
May 21 14:05:14 k8s402 kubelet[2088]: E0521 14:05:14.707791 2088 cni.go:331] Error adding kube-system_kubernetes-dashboard-68ddcc97fc-r2g66/0fbf235fa4d3d96044db023cf7435903bc765f2d5c718a75da4cbd14ea669412 to network kube-ovn/kube-ovn: json: cannot unmarshal string into Go value of type request.PodResponse
May 21 14:05:14 k8s402 kubelet[2088]: E0521 14:05:14.890156 2088 remote_runtime.go:109] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to set up sandbox container "0fbf235fa4d3d96044db023cf7435903bc765f2d5c718a75da4cbd14ea669412" network for pod "kubernetes-dashboard-68ddcc97fc-r2g66": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
May 21 14:05:14 k8s402 kubelet[2088]: E0521 14:05:14.891084 2088 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system(5d8638e0-7b8c-11e9-9145-525400cecc16)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "0fbf235fa4d3d96044db023cf7435903bc765f2d5c718a75da4cbd14ea669412" network for pod "kubernetes-dashboard-68ddcc97fc-r2g66": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
May 21 14:05:14 k8s402 kubelet[2088]: E0521 14:05:14.892947 2088 kuberuntime_manager.go:693] createPodSandbox for pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system(5d8638e0-7b8c-11e9-9145-525400cecc16)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "0fbf235fa4d3d96044db023cf7435903bc765f2d5c718a75da4cbd14ea669412" network for pod "kubernetes-dashboard-68ddcc97fc-r2g66": NetworkPlugin cni failed to set up pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
May 21 14:05:14 k8s402 kubelet[2088]: E0521 14:05:14.893725 2088 pod_workers.go:190] Error syncing pod 5d8638e0-7b8c-11e9-9145-525400cecc16 ("kubernetes-dashboard-68ddcc97fc-r2g66_kube-system(5d8638e0-7b8c-11e9-9145-525400cecc16)"), skipping: failed to "CreatePodSandbox" for "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system(5d8638e0-7b8c-11e9-9145-525400cecc16)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kubernetes-dashboard-68ddcc97fc-r2g66_kube-system(5d8638e0-7b8c-11e9-9145-525400cecc16)\" failed: rpc error: code = Unknown desc = failed to set up sandbox container \"0fbf235fa4d3d96044db023cf7435903bc765f2d5c718a75da4cbd14ea669412\" network for pod \"kubernetes-dashboard-68ddcc97fc-r2g66\": NetworkPlugin cni failed to set up pod \"kubernetes-dashboard-68ddcc97fc-r2g66_kube-system\" network: json: cannot unmarshal string into Go value of type request.PodResponse"
May 21 14:05:15 k8s402 kubelet[2088]: W0521 14:05:15.458626 2088 docker_sandbox.go:384] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "0fbf235fa4d3d96044db023cf7435903bc765f2d5c718a75da4cbd14ea669412"
May 21 14:05:15 k8s402 kubelet[2088]: I0521 14:05:15.462253 2088 kubelet.go:1930] SyncLoop (PLEG): "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system(5d8638e0-7b8c-11e9-9145-525400cecc16)", event: &pleg.PodLifecycleEvent{ID:"5d8638e0-7b8c-11e9-9145-525400cecc16", Type:"ContainerDied", Data:"0fbf235fa4d3d96044db023cf7435903bc765f2d5c718a75da4cbd14ea669412"}
May 21 14:05:15 k8s402 kubelet[2088]: W0521 14:05:15.462549 2088 pod_container_deletor.go:75] Container "0fbf235fa4d3d96044db023cf7435903bc765f2d5c718a75da4cbd14ea669412" not found in pod's containers
May 21 14:05:15 k8s402 kubelet[2088]: I0521 14:05:15.462680 2088 kuberuntime_manager.go:427] No ready sandbox for pod "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system(5d8638e0-7b8c-11e9-9145-525400cecc16)" can be found. Need to start a new one
May 21 14:05:15 k8s402 kubelet[2088]: W0521 14:05:15.465294 2088 cni.go:309] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "0fbf235fa4d3d96044db023cf7435903bc765f2d5c718a75da4cbd14ea669412"
May 21 14:05:16 k8s402 kubelet[2088]: I0521 14:05:16.483182 2088 kubelet.go:1930] SyncLoop (PLEG): "kubernetes-dashboard-68ddcc97fc-r2g66_kube-system(5d8638e0-7b8c-11e9-9145-525400cecc16)", event: &pleg.PodLifecycleEvent{ID:"5d8638e0-7b8c-11e9-9145-525400cecc16", Type:"ContainerStarted", Data:"c1f040e97e950c3579c1297b2843d2021043ba71841aa54be018561704b98bde"}
May 21 14:05:21 k8s402 kubelet[2088]: E0521 14:05:21.746665 2088 cni.go:331] Error adding kube-system_coredns-55f46dd959-d2r2l/ade331de55afaacd53210011e1969bd85bfe7be7be96e6efa7cce2dae3a38330 to network kube-ovn/kube-ovn: json: cannot unmarshal string into Go value of type request.PodResponse
May 21 14:05:21 k8s402 kubelet[2088]: E0521 14:05:21.971555 2088 remote_runtime.go:109] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to set up sandbox container "ade331de55afaacd53210011e1969bd85bfe7be7be96e6efa7cce2dae3a38330" network for pod "coredns-55f46dd959-d2r2l": NetworkPlugin cni failed to set up pod "coredns-55f46dd959-d2r2l_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
May 21 14:05:21 k8s402 kubelet[2088]: E0521 14:05:21.971700 2088 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-55f46dd959-d2r2l_kube-system(5631ae47-7b8c-11e9-9145-525400cecc16)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "ade331de55afaacd53210011e1969bd85bfe7be7be96e6efa7cce2dae3a38330" network for pod "coredns-55f46dd959-d2r2l": NetworkPlugin cni failed to set up pod "coredns-55f46dd959-d2r2l_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
May 21 14:05:21 k8s402 kubelet[2088]: E0521 14:05:21.971740 2088 kuberuntime_manager.go:693] createPodSandbox for pod "coredns-55f46dd959-d2r2l_kube-system(5631ae47-7b8c-11e9-9145-525400cecc16)" failed: rpc error: code = Unknown desc = failed to set up sandbox container "ade331de55afaacd53210011e1969bd85bfe7be7be96e6efa7cce2dae3a38330" network for pod "coredns-55f46dd959-d2r2l": NetworkPlugin cni failed to set up pod "coredns-55f46dd959-d2r2l_kube-system" network: json: cannot unmarshal string into Go value of type request.PodResponse
May 21 14:05:21 k8s402 kubelet[2088]: E0521 14:05:21.971873 2088 pod_workers.go:190] Error syncing pod 5631ae47-7b8c-11e9-9145-525400cecc16 ("coredns-55f46dd959-d2r2l_kube-system(5631ae47-7b8c-11e9-9145-525400cecc16)"), skipping: failed to "CreatePodSandbox" for "coredns-55f46dd959-d2r2l_kube-system(5631ae47-7b8c-11e9-9145-525400cecc16)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-55f46dd959-d2r2l_kube-system(5631ae47-7b8c-11e9-9145-525400cecc16)\" failed: rpc error: code = Unknown desc = failed to set up sandbox container \"ade331de55afaacd53210011e1969bd85bfe7be7be96e6efa7cce2dae3a38330\" network for pod \"coredns-55f46dd959-d2r2l\": NetworkPlugin cni failed to set up pod \"coredns-55f46dd959-d2r2l_kube-system\" network: json: cannot unmarshal string into Go value of type request.PodResponse"
May 21 14:05:22 k8s402 kubelet[2088]: W0521 14:05:22.573513 2088 docker_sandbox.go:384] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "coredns-55f46dd959-d2r2l_kube-system": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ade331de55afaacd53210011e1969bd85bfe7be7be96e6efa7cce2dae3a38330"
May 21 14:05:22 k8s402 kubelet[2088]: I0521 14:05:22.579087 2088 kubelet.go:1930] SyncLoop (PLEG): "coredns-55f46dd959-d2r2l_kube-system(5631ae47-7b8c-11e9-9145-525400cecc16)", event: &pleg.PodLifecycleEvent{ID:"5631ae47-7b8c-11e9-9145-525400cecc16", Type:"ContainerDied", Data:"ade331de55afaacd53210011e1969bd85bfe7be7be96e6efa7cce2dae3a38330"}
May 21 14:05:22 k8s402 kubelet[2088]: W0521 14:05:22.579312 2088 pod_container_deletor.go:75] Container "ade331de55afaacd53210011e1969bd85bfe7be7be96e6efa7cce2dae3a38330" not found in pod's containers
May 21 14:05:22 k8s402 kubelet[2088]: I0521 14:05:22.580823 2088 kuberuntime_manager.go:427] No ready sandbox for pod "coredns-55f46dd959-d2r2l_kube-system(5631ae47-7b8c-11e9-9145-525400cecc16)" can be found. Need to start a new one
May 21 14:05:22 k8s402 kubelet[2088]: W0521 14:05:22.583792 2088 cni.go:309] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ade331de55afaacd53210011e1969bd85bfe7be7be96e6efa7cce2dae3a38330"
May 21 14:05:23 k8s402 kubelet[2088]: I0521 14:05:23.598500 2088 kubelet.go:1930] SyncLoop (PLEG): "coredns-55f46dd959-d2r2l_kube-system(5631ae47-7b8c-11e9-9145-525400cecc16)", event: &pleg.PodLifecycleEvent{ID:"5631ae47-7b8c-11e9-9145-525400cecc16", Type:"ContainerStarted", Data:"b4d6e2c91763f28a1be902af75f5dff85b832bb72c944b02741ee375c0ffd30c"}
When eviction happens, large numbers of evicted pods may be generated; we need to skip these pods where possible so they do not block normal pod operations.
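The filtering described above can be sketched outside the controller as well. A minimal shell example, using a hypothetical sample of `kubectl get pods` output (in a real cluster you would pipe the live command instead of the heredoc):

```shell
# Hypothetical sample output; real use: kubectl get pods --all-namespaces
cat <<'EOF' > /tmp/pods.txt
NAME            READY   STATUS    RESTARTS   AGE
web-1           1/1     Running   0          5m
web-2           0/1     Evicted   0          2m
web-3           0/1     Evicted   0          1m
EOF

# Count evicted pods that the controller could skip instead of processing.
awk '$3 == "Evicted"' /tmp/pods.txt | wc -l
```

The same `STATUS == Evicted` check is what the controller would apply before enqueuing pod events.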
I see the kube-ovn project supports pod IPs through DHCP and static IP, but this pod IP is a virtual intranet IP within the pod CIDR configured when initializing the K8s cluster and its CNI plugin, and it can only be reached (e.g. pinged) by pods in the same namespace and by the host K8s node. Is there any support for assigning a static external IP to a pod and associating it with the pod IP, so that external machines can communicate directly (e.g. ping) with that specific pod? (Just like a VM's floating IP in OpenStack.)
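For the static in-subnet part of this, kube-ovn does support pinning a pod's IP via the `ovn.kubernetes.io/ip_address` annotation. A sketch of a pod spec using it (the IP below is a placeholder and must fall inside the pod's subnet; whether that IP is then reachable from outside the cluster depends on the subnet's gateway/NAT configuration, not on the annotation itself):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-ip-pod        # hypothetical pod name
  annotations:
    ovn.kubernetes.io/ip_address: 172.30.0.100   # placeholder, must be inside the subnet CIDR
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
```

The annotation must be present at pod creation time; annotating a running pod does not change its IP.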
Another assumption:
The kubeasz project has been tested and supports integrating kube-ovn; going forward I will also keep it in sync with the latest stable release of kube-ovn.
https://github.com/easzlab/kubeasz/blob/master/docs/setup/network-plugin/kube-ovn.md
After installing according to the documentation, NodePort-type services do not seem to work. According to the docs they should. Is special configuration required, or is this currently unsupported?
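A minimal manifest to reproduce the NodePort test described above (names and ports are placeholders; apply it, then curl `<nodeIP>:30080` from outside the cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-test        # hypothetical service name
spec:
  type: NodePort
  selector:
    app: nodeport-test       # must match the labels of a running pod
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080          # placeholder in the default 30000-32767 range
```

If the in-cluster ClusterIP works but the NodePort does not, the problem is usually on the node side (kube-proxy mode or host firewall rules) rather than in the CNI.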
When a node is deleted, the related IP CRD and static route in OVN are not deleted.
Sometimes an ovn-nbctl request may block for a long time, e.g. when ovn-nb is busy or the ovn-nbctl daemon becomes a zombie, until eventually no worker is available for further operations. We need to add a timeout to ovn-nbctl requests.
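A minimal sketch of the timeout idea. Here `sleep 5` stands in for a blocked `ovn-nbctl` call; in real use you would combine the coreutils `timeout` wrapper with ovn-nbctl's own `--timeout` option, e.g. `timeout 10 ovn-nbctl --timeout=10 show` (the 10-second value is an assumption, not a project default):

```shell
# Wrap a potentially-hanging command with a deadline.
# `sleep 5` simulates an ovn-nbctl request that never returns.
if timeout 1 sleep 5; then
  echo "completed"
else
  # timeout(1) exits with status 124 when the deadline fires
  echo "timed out with status $?"
fi
```

With a wrapper like this, a stuck request fails fast and frees the worker instead of holding it forever.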
Hello,
What is the timeline for DPDK support?
Is it something that you plan to implement in the near future?
Thanks,
Patryk
The default subnet is set to 172.30.0.0/16, but a test busybox deployment was assigned the IP 172.17.0.2 (Docker's default bridge subnet). Could someone point out where the problem is? Thanks!
[root@host-10-19-17-139 ~]# kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
host-10-19-17-139 Ready compute,master 25d v1.15.2 10.19.17.139 <none> CentOS Linux 7 (Core) 5.2.11-1.el7.elrepo.x86_64 docker://19.3.1
host-10-19-17-140 Ready compute 25d v1.15.2 10.19.17.140 <none> CentOS Linux 7 (Core) 5.2.11-1.el7.elrepo.x86_64 docker://19.3.1
host-10-19-17-141 Ready compute 25d v1.15.2 10.19.17.141 <none> CentOS Linux 7 (Core) 5.2.11-1.el7.elrepo.x86_64 docker://19.3.1
[root@host-10-19-17-139 ~]# kubectl get Subnet
NAME PROTOCOL CIDR PRIVATE NAT DEFAULT GATEWAYTYPE USED AVAILABLE
join IPv4 100.64.0.0/16 false false false distributed 3 65532
ovn-default IPv4 172.30.0.0/16 false true true distributed 0 65535
[root@host-10-19-17-139 ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default test-588865b-zd5vt 1/1 Running 1 31m 172.17.0.2 host-10-19-17-141 <none> <none>
kube-ovn kube-ovn-cni-dvgzj 1/1 Running 0 33m 10.19.17.141 host-10-19-17-141 <none> <none>
kube-ovn kube-ovn-cni-krsnc 1/1 Running 0 33m 10.19.17.140 host-10-19-17-140 <none> <none>
kube-ovn kube-ovn-cni-w74td 1/1 Running 0 33m 10.19.17.139 host-10-19-17-139 <none> <none>
kube-ovn kube-ovn-controller-86d7c8d6c4-p4ndm 1/1 Running 0 33m 10.19.17.141 host-10-19-17-141 <none> <none>
kube-ovn kube-ovn-controller-86d7c8d6c4-zjvbv 1/1 Running 0 33m 10.19.17.139 host-10-19-17-139 <none> <none>
kube-ovn ovn-central-8ddc7dd8-ww7mf 1/1 Running 0 39m 10.19.17.139 host-10-19-17-139 <none> <none>
kube-ovn ovs-ovn-dbrg2 1/1 Running 0 39m 10.19.17.139 host-10-19-17-139 <none> <none>
kube-ovn ovs-ovn-jxjc5 1/1 Running 0 39m 10.19.17.140 host-10-19-17-140 <none> <none>
kube-ovn ovs-ovn-s5rxz 1/1 Running 0 39m 10.19.17.141 host-10-19-17-141 <none> <none>
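A pod landing on the docker0 subnet 172.17.0.0/16 usually means kubelet fell back to the Docker bridge because no CNI configuration was found on that node. A diagnostic sketch, assuming the standard CNI config path (override `CNI_DIR` if your install uses a different one):

```shell
# Check whether the kube-ovn CNI config was written on this node.
CNI_DIR=${CNI_DIR:-/etc/cni/net.d}
if ls "$CNI_DIR"/*kube-ovn* >/dev/null 2>&1; then
  echo "kube-ovn CNI config present"
else
  echo "no kube-ovn CNI config in $CNI_DIR - kubelet may fall back to docker0"
fi
```

Run this on the node where the mis-addressed pod was scheduled (host-10-19-17-141 in the output above); also confirm kubelet is started with the CNI network plugin rather than the Docker bridge.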
Is there support for Windows nodes to access the k8s cluster through Open vSwitch?