k8snetworkplumbingwg / multus-cni

A CNI meta-plugin for multi-homed pods in Kubernetes

License: Apache License 2.0

Shell 2.38% Go 94.91% Dockerfile 0.13% Jinja 2.58%
vnf cni multiple-network containerized-vnf kubernetes-networking controlplane dataplane cni-plugin kubernetes

multus-cni's People

Contributors

adrianchiris, ahalimx86, ashish-billore, billy99, bn222, cyclinder, dcbw, dependabot[bot], dougbtv, e0ne, giovanism, liornoy, maiqueb, mccv1r0, mmduh-483, moshe010, nicklesimba, oshothebig, pliurh, prashantsunkari, przemeklal, rkamudhan, s1061123, shahar-klein, tdorsey, vadorovsky, vasrem, xieyanker, yulng, zshi-redhat


multus-cni's Issues

Macvlan and flannel issue.

Kubernetes version: 1.10
OS: Ubuntu 16.04.4 LTS

Hi, I am able to follow the readme and get flannel working just fine. However, I am trying to use macvlan for certain pods that have more of an NFV-type requirement, where they need to bridge to a physical interface. My end goal is to be able to select some pods that run with bridged interfaces and some that simply use flannel interfaces. The network object currently looks like the following:

apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
  name: macvlan-conf
plugin: macvlan 
args: '[
{
                  "type": "macvlan",
                  "master": "ens160",
                  "mode": "bridge",
                  "ipam": {
                          "type": "host-local",
                          "subnet": "10.20.30.0/24",
                          "rangeStart": "10.20.30.200",
                          "rangeEnd": "10.20.30.205"
                          "routes": [
              { "dst": "0.0.0.0/0" }
            ],
            "gateway": "10.20.30.250"
         }
    }
]'

Here is an example pod that only pulls an interface from flannel and not from macvlan. I am sure it is something small that I am doing wrong?

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-multus
  annotations:
    networks: '[  
        { "name": "flannel-conf" },
        { "name": "macvlan-conf" }
    ]'
  labels:
    type: ubuntu
    vendor: Canonical
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
  nodeSelector:
    multusselector: "yes"

Multus setup questions given this configuration

Reported by Alex on slack:

ref: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network

1. Master:
   kubeadm init  or   sudo kubeadm init --pod-network-cidr=10.245.0.0/1  
   kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

2. Node(s):
    kubeadm join 192.168.162.131:6443 --token 6uxoie.llxf6hhiymcmtdi6 --discovery-token-ca-cert-hash sha256:55cee956b12641b728c4529e601d164e4d622ce4fa78cb813ce57f6da3f05ff2  


3. Multus add-on install  (https://github.com/Intel-Corp/multus-cni)

Built the sources on the master and node

On the master:
===============================================================================
Create the base CRD network object  (kubectl create -f ./crdnetwork.yaml)
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
 # name must match the spec fields below, and be in the form: <plural>.<group>
 name: networks.kubernetes.com
spec:
 # group name to use for REST API: /apis/<group>/<version>
 group: kubernetes.com
 # version name to use for REST API: /apis/<group>/<version>
 version: v1
 # either Namespaced or Cluster
 scope: Namespaced
 names:
   # plural name to be used in the URL: /apis/<group>/<version>/<plural>
   plural: networks
   # singular name to be used as an alias on the CLI and for display
   singular: network
   # kind is normally the CamelCased singular type. Your resource manifests use this.
   kind: Network
   # shortNames allow shorter string to match your resource on the CLI
   shortNames:
   - net

Created the flannel-network object  (kubectl create -f flannel-network.yaml) 
apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
 name: flannel-conf
plugin: flannel
args: '[
       {
               "delegate": {
                       "isDefaultGateway": true
               }
       }
]'

Created the macvlan network object (kubectl create -f macvlan_network_resource.yaml)
apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
 name: macvlan-conf
plugin: macvlan
args: '[
      {
               "master": "eth0",
               "mode": "bridge",
               "ipam": {
                       "type": "host-local",
                       "subnet": "192.168.0.10/32",
                       "gateway": "192.186.0.1"
               }
       }
]'

On the Node(s)
===============================================================================
Copy the following multus-cni.conf to /etc/cni/net.d/

{
   "name": "node-cni-network",
   "type": "multus",
   "kubeconfig": "/etc/kubernetes/node-kubeconfig.yaml"
}

Create Pod with the following annotations 
===============================================================================

annotations:
   networks: '[
       { "name": "flannel-conf" },
       { "name": "macvlan-conf" }
   ]'

Is there any reference yaml file applying multus-cni for pod network?

I'm able to compile multus-cni, and I'd like to run it for a test as a pod network in Kubernetes, but I'm having trouble figuring out a specification for exactly how to implement it. I've noticed the multus-cni readme has an example config, but not an example yaml file for applying the pod network, e.g. so I can run something like:

kubectl apply -f multus.yaml

So I tried to create one using the Flannel manifest as an example; I've posted the one I created as a gist.

However, I'm not having a lot of luck.

Generally the steps I've taken are to:

  • Compile multus-cni on master and minion nodes and copy binaries into /opt/cni/bin
  • kubectl apply -f multus.yaml with this yaml
  • Join the minion
  • Create a pod (just an nginx for example)

As a note, I am testing on virtual machines as kubernetes hosts, so I don't have SRIOV capability. I'm continuing to iterate to try to figure it out, but would appreciate any input here.

Any pointers to help me get started there? Thank you.

Cannot make my TPR Network objects work.

Environment:
Kubernetes 1.7.3
Issue
I think I've configured my environment correctly... but cannot get TPR Network objects to work.
Here is the yaml file for deploying my pod.

apiVersion: v1
kind: Pod
metadata:
  name: multus-multi-net-poc
  annotations:
    networks: '[
        { "name": "multus-calico" },
        { "name": "macvlan-conf" }
    ]'
spec:  # specification of the pod's contents
  containers:
  - name: multus-multi-net-poc
    image: "busybox"
    command: ["top"]
    stdin: true
    tty: true

If I deploy it, it comes up with only one NIC, whereas I want multiple NICs.

root@cam1:~# kubectl create -f pod-multi-network.yaml
pod "multus-multi-net-poc" created
root@cam1:~# kubectl exec -it multus-multi-net-poc ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if137: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether f2:3f:c8:49:9d:fe brd ff:ff:ff:ff:ff:ff
    inet 10.1.188.11/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f03f:c8ff:fe49:9dfe/64 scope link
       valid_lft forever preferred_lft forever

Steps I did for configuration

  1. Installed golang 1.8 and built Multus and the CNI plugins (macvlan and host-local)
# dpkg -i golang-1.8-src_1.8.5-1ubuntu1~longsleep1+xenial_amd64.deb golang-1.8-go_1.8.5-1ubuntu1~longsleep1+xenial_amd64.deb
# export PATH=/usr/lib/go-1.8/bin/:${PATH}
# unzip multus-cni-master.zip
# cd multus-cni-master/
# ./build
# cp bin/multus /opt/cni/bin/
# unzip plugins-master.zip
# cd plugins-master/
# ./build
# cp bin/macvlan /opt/cni/bin/
# cp bin/host-local /opt/cni/bin/
  2. Created /etc/cni/net.d/10-a-multus-cni.conf
{
    "delegates": [
        {
            "etcd_ca_cert_file": "/etc/cni/net.d/calico-tls/etcd-ca",
            "etcd_cert_file": "/etc/cni/net.d/calico-tls/etcd-cert",
            "etcd_endpoints": "https://...:4001",
            "etcd_key_file": "/etc/cni/net.d/calico-tls/etcd-key",
            "hairpinMode": true,
            "ipam": {
                "type": "calico-ipam"
            },
            "kubernetes": {
                "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
            },
            "log_level": "info",
            "masterplugin": true,
            "name": "multus-calico",
            "policy": {
                "k8s_api_root": "https://10.0.0.1:443",
                "k8s_auth_token": "...",
                "type": "k8s"
            },
            "type": "calico"
        }
    ],
    "name": "minion1-multus-demo-network",
    "type": "multus"
}
  3. Restart all nodes

# systemctl restart kubelet

  4. Create CRD
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: networks.kubernetes.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: kubernetes.com
  # version name to use for REST API: /apis/<group>/<version>
  version: v1
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: networks
    # singular name to be used as an alias on the CLI and for display
    singular: network
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: Network
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - net
# kubectl create -f crdnetwork.yaml
# kubectl get CustomResourceDefinition
NAME                      KIND
networks.kubernetes.com   CustomResourceDefinition.v1beta1.apiextensions.k8s.io

  5. Create TPR

apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: network.kubernetes.com
description: "A specification of a Network obj in the kubernetes"
versions:
- name: v1
root@cam1:~# kubectl create -f ./tprnetwork.yaml
thirdpartyresource "network.kubernetes.com" created
root@cam1:~# kubectl get thirdpartyresource
NAME                     DESCRIPTION                                          VERSION(S)
network.kubernetes.com   A specification of a Network obj in the kubernetes   v1
  6. Create Network
apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
  name: macvlan-conf
plugin: macvlan
args: '[
        {
            "ipam": {
                "ranges": [
                    [
                        {
                            "subnet": "1.1.1.0/24"
                        }
                    ]
                ],
                "type": "host-local"
            },
            "master": "eth0",
            "name": "macvlan-conf",
            "type": "macvlan"
        }
]'
# kubectl create -f macvlan.yaml
network "macvlan-conf" created
# kubectl get networks
NAME           KIND
macvlan-conf   Network.v1.kubernetes.com
# kubectl get network macvlan-conf -o yaml
apiVersion: kubernetes.com/v1
args: '[ { "ipam": { "ranges": [ [ { "subnet": "1.1.1.0/24" } ] ], "type": "host-local"
  }, "master": "eth0", "name": "macvlan-conf", "type": "macvlan" } ]'
kind: Network
metadata:
  creationTimestamp: 2017-12-01T01:57:02Z
  name: macvlan-conf
  namespace: default
  resourceVersion: "377220"
  selfLink: /apis/kubernetes.com/v1/namespaces/default/networks/macvlan-conf
  uid: ec7394db-d63a-11e7-b5e6-fa163e90ea75
plugin: macvlan

By the way

This multus config worked (which is just adding the macvlan config directly to the file) so I do believe I have my infrastructure correct.

{
    "delegates": [
        {
            "ipam": {
                "ranges": [
                    [
                        {
                            "subnet": "1.1.1.0/24"
                        }
                    ]
                ],
                "type": "host-local"
            },
            "master": "eth0",
            "name": "macvlan-conf",
            "type": "macvlan"
        },
        {
            "etcd_ca_cert_file": "/etc/cni/net.d/calico-tls/etcd-ca",
            "etcd_cert_file": "/etc/cni/net.d/calico-tls/etcd-cert",
            "etcd_endpoints": "https://...:4001",
            "etcd_key_file": "/etc/cni/net.d/calico-tls/etcd-key",
            "hairpinMode": true,
            "ipam": {
                "type": "calico-ipam"
            },
            "kubernetes": {
                "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
            },
            "log_level": "info",
            "masterplugin": true,
            "name": "multus-calico",
            "policy": {
                "k8s_api_root": "https://10.0.0.1:443",
                "k8s_auth_token": "...",
                "type": "k8s"
            },
            "type": "calico"
        }
    ],
    "name": "minion1-multus-demo-network",
    "type": "multus"
}
root@cam1:~# kubectl exec -it multus-multi-net-poc-2 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop qlen 1
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if232: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 52:85:d7:e4:1c:f4 brd ff:ff:ff:ff:ff:ff
    inet 10.1.183.49/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5085:d7ff:fee4:1cf4/64 scope link
       valid_lft forever preferred_lft forever
5: net0@tunl0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 42:65:98:0f:2c:7a brd ff:ff:ff:ff:ff:ff
    inet 1.1.1.2/24 scope global net0
       valid_lft forever preferred_lft forever
    inet6 fe80::4065:98ff:fe0f:2c7a/64 scope link
       valid_lft forever preferred_lft forever
root@cam1:~#

However, I truly want to use the TPR function instead...

multus on 1.6.4 kubernetes failing

Hello,

I am trying to use multus on my all-in-one test bed; the further plan is to add SR-IOV support, but for now I have only flannel. I had to deal with some RBAC issues, which are now resolved. Still, the kube-dns pod is not starting, as it does not see networking running; I suspect kubelet does not see valid networking and does not untaint the worker. Here is my config.
I used one of the examples from a previous issue:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: multus
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-multus-cfg
  namespace: kube-system
  labels:
    tier: node
    app: multus
data:
  cni-conf.json: |
    {
      "name": "multus",
      "type": "multus",
      "delegates": [
        {
          "type": "macvlan",
          "master": "provider_b0",
          "mode": "bridge",
          "ipam": {
            "type": "host-local",
            "subnet": "192.168.10.0/24",
            "rangeStart": "192.168.10.154",
            "rangeEnd": "192.168.10.254",
            "routes": [
              { "dst": "0.0.0.0/0" }
            ],
            "gateway": "192.168.10.1"
          }
        },
        {
          "type": "flannel",
          "masterplugin": true,
          "delegate": {
            "isDefaultGateway": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.57.128.0/19",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-multus-ds
  namespace: kube-system
  labels:
    tier: node
    app: multus
spec:
  template:
    metadata:
      labels:
        tier: node
        app: multus
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      serviceAccountName: multus
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.7.1-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: quay.io/coreos/flannel:v0.7.1-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-multus.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-multus-cfg

Appreciate your help
Serguei

Multus: error in invoke Delegate add - "flannel": open /run/flannel/subnet.env

I am trying to configure 2 network cards in a pod for testing purposes. I am using the multus plugin and executed the following steps:

  1. Downloaded the Multus plugin and built it. Copied the file under /opt/cni/bin. I executed this step on both the master and the worker node.
  2. Following is the kubeadm version:
    kubeadm version: &version.Info Major:"1", Minor:"9", GitVersion:"v1.9.1+2.1.5.el7"
  3. I followed Install Doc. I used multus.yaml.
  4. When I create an nginx container, I get the following error on the worker node:
Apr  8 07:20:41 k8dind2 kubelet: E0408 07:20:41.043438   31024 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-whjjm_default" network: Multus: error in invoke Delegate add - "flannel": open /run/flannel/subnet.env: no such file or directory

Apr  8 07:20:41 k8dind2 kubelet: E0408 07:20:41.043487   31024 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "nginx-whjjm_default(28594443-3afd-11e8-824c-0000170195b8)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-whjjm_default" network: Multus: error in invoke Delegate add - "flannel": open /run/flannel/subnet.env: no such file or directory
Apr  8 07:20:41 k8dind2 kubelet: E0408 07:20:41.043506   31024 kuberuntime_manager.go:647] createPodSandbox for pod "nginx-whjjm_default(28594443-3afd-11e8-824c-0000170195b8)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-whjjm_default" network: Multus: error in invoke Delegate add - "flannel": open /run/flannel/subnet.env: no such file or directory

Apr  8 07:20:41 k8dind2 kubelet: E0408 07:20:41.043546   31024 pod_workers.go:186] Error syncing pod 28594443-3afd-11e8-824c-0000170195b8 ("nginx-whjjm_default(28594443-3afd-11e8-824c-0000170195b8)"), skipping: failed to "CreatePodSandbox" for "nginx-whjjm_default(28594443-3afd-11e8-824c-0000170195b8)" with CreatePodSandboxError: "CreatePodSandbox for pod \"nginx-whjjm_default(28594443-3afd-11e8-824c-0000170195b8)\" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod \"nginx-whjjm_default\" network: Multus: error in invoke Delegate add - \"flannel\": open /run/flannel/subnet.env: no such file or directory"

I have no idea how to fix it. Also, I have the following physical network cards on my master and worker node:

ens3 : 10.0.20.xx
ens4 : 192.168.16.xx

Do I need to change the IP range in multus.yaml?

Unique IP address across the cluster

We are invoking the IPVLAN plugin via multus-cni. Each POD is allocated a unique IP address from the subnet start and end range (as specified in the IPAM args). But the IP is unique only within a host, not across the cluster; /var/lib/cni/networks/<network_id>/last_reserved_ip.0 maintains the last allocated IP for the subnet/network, and it is maintained on each host.

It would be good if multus-cni provided uniqueness of the IP address at the cluster level.
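For illustration, the way this is usually handled today with plain host-local IPAM (not a multus feature; the addresses below are made up) is to carve the shared subnet into non-overlapping per-node ranges, so the independent per-host allocators cannot hand out the same address twice:

On node 1:

    "ipam": {
        "type": "host-local",
        "subnet": "192.168.50.0/24",
        "rangeStart": "192.168.50.10",
        "rangeEnd": "192.168.50.99"
    }

On node 2:

    "ipam": {
        "type": "host-local",
        "subnet": "192.168.50.0/24",
        "rangeStart": "192.168.50.100",
        "rangeEnd": "192.168.50.199"
    }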

CPU-Manager-For-Kubernetes fails to launch when using kubeconfig multus

I am trying to use CPU-Manager-For-Kubernetes (CMK) with Multus kubeconfig & TPR; however, the annotations are not being propagated to the pods created by CMK, and Multus errors out due to the lack of annotations. Below is the output of CMK cluster init & Multus. CMK works when running with Multus delegates, and if deployed manually with annotations in all pods.

CMK init-install-discover pods were not able to come up.
Following is the output while deploying CMK with flannel network object.

$ kubectl get pods
NAME                                          READY     STATUS     RESTARTS   AGE
kcm-cluster-init-pod                          1/1       Running    0          10m
kcm-init-install-discover-pod-192.168.14.38   0/2       Init:0/1   0          10m

$ kubectl describe pods kcm-init-install-discover-pod-192.168.14.38
Name:           kcm-init-install-discover-pod-192.168.14.38
Namespace:      default
Node:           192.168.14.38/192.168.14.38
Start Time:     Fri, 21 Jul 2017 10:04:57 -0400
Labels:         <none>
Status:         Pending
IP:
Controllers:    <none>
Init Containers:
  init:
    Container ID:
    Image:              96.118.23.36:8089/kcm:v0.6.1
    Image ID:
    Port:
    Command:
      /bin/bash
      -c
    Args:
      /kcm/kcm.py init --num-dp-cores=2 --num-cp-cores=1
    State:              Waiting
      Reason:           PodInitializing
    Ready:              False
    Restart Count:      0
    Volume Mounts:
      /etc/kcm from kcm-conf-dir (rw)
      /host/proc from host-proc (ro)
      /opt/bin from kcm-install-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dlbk8 (ro)
    Environment Variables:
      KCM_PROC_FS:      /host/proc
Containers:
  install:
    Container ID:
    Image:              96.118.23.36:8089/kcm:v0.6.1
    Image ID:
    Port:
    Command:
      /bin/bash
      -c
    Args:
      /kcm/kcm.py install
    State:              Waiting
      Reason:           PodInitializing
    Ready:              False
    Restart Count:      0
    Volume Mounts:
      /etc/kcm from kcm-conf-dir (rw)
      /host/proc from host-proc (ro)
      /opt/bin from kcm-install-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dlbk8 (ro)
    Environment Variables:
      KCM_PROC_FS:      /host/proc
      NODE_NAME:         (v1:spec.nodeName)
  discover:
    Container ID:
    Image:              96.118.23.36:8089/kcm:v0.6.1
    Image ID:
    Port:
    Command:
      /bin/bash
      -c
    Args:
      /kcm/kcm.py discover
    State:              Waiting
      Reason:           PodInitializing
    Ready:              False
    Restart Count:      0
    Volume Mounts:
      /etc/kcm from kcm-conf-dir (rw)
      /host/proc from host-proc (ro)
      /opt/bin from kcm-install-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dlbk8 (ro)
    Environment Variables:
      KCM_PROC_FS:      /host/proc
      NODE_NAME:         (v1:spec.nodeName)
Conditions:
  Type          Status
  Initialized   False
  Ready         False
  PodScheduled  True
Volumes:
  host-proc:
    Type:       HostPath (bare host directory volume)
    Path:       /proc
  kcm-conf-dir:
    Type:       HostPath (bare host directory volume)
    Path:       /etc/kcm
  kcm-install-dir:
    Type:       HostPath (bare host directory volume)
    Path:       /opt/bin
  default-token-dlbk8:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-dlbk8
QoS Class:      BestEffort
Tolerations:    <none>
Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath   Type            Reason          Message
  ---------     --------        -----   ----                    -------------   --------        ------          -------
  1h            1s              1832    {kubelet 192.168.14.38}                 Warning         FailedSync      Error syncing pod, skipping: failed to "SetupNetwork" for "kcm-init-install-discover-pod-192.168.14.38_default" with SetupNetworkError: "Failed to setup network for pod \"kcm-init-install-discover-pod-192.168.14.38_default(9451904f-6e1d-11e7-aa74-ecb1d7851f72)\" using network plugins \"cni\": Multus: Err in getting k8s network from pod: parsePodNetworkObject: pod annotation not having \"network\" as key, refer Multus README.md for the usage guide; Skipping pod"

$

The network pod annotation is specified in the kcm-cluster-init-pod pod spec. However, it does not seem to be inherited by kcm-init-install-discover-pod.

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: kcm-cluster-init-pod
  annotations:
    "scheduler.alpha.kubernetes.io/tolerations": '[{"key":"kcm", "value":"true"}]'
    "networks": '[{ "name": "flannel-conf" }]'
  name: kcm-cluster-init-pod
spec:

Error checking when using kubeconfig is wrong

The function getK8sNetwork returns an error. After commit 7066439
the error checking was changed to be done on the type of the returned error, instead of the string value. In cmdAdd a check is done for the type '*NoK8sNetworkError', but the function returns NoK8sNetworkError (no star), so the statement will never be true. What happens is that the else branch is taken, and if there is a default net in the delegates list, the pod is started without any annotated nets, just with the default network. This is wrong if you wanted the pod to have annotated networks and made an error in the config.

Also, cmdDel does not even check for the type of the error, but for the string value, which is not returned anymore since the update of getK8sNetwork.

Attaching a git diff with an example fix.

multus-error.patch.tar.gz
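For reference, a minimal, self-contained Go sketch (simplified names, not the actual multus source) of why the check described above never matches: a type assertion against the pointer type *NoK8sNetworkError cannot match an error that was returned by value.

package main

import "fmt"

// NoK8sNetworkError is a stand-in for the error type discussed above.
type NoK8sNetworkError struct{ message string }

func (e NoK8sNetworkError) Error() string { return e.message }

// getK8sNetwork returns the error by value, as described in the issue.
func getK8sNetwork() ([]string, error) {
    return nil, NoK8sNetworkError{"no networks found in pod annotation"}
}

func main() {
    _, err := getK8sNetwork()

    // The cmdAdd-style check: asserting on the pointer type never matches
    // a value return, so this branch is always skipped.
    if _, ok := err.(*NoK8sNetworkError); ok {
        fmt.Println("matched *NoK8sNetworkError")
    } else if _, ok := err.(NoK8sNetworkError); ok {
        // Asserting on the value type (or returning &NoK8sNetworkError{}
        // from getK8sNetwork) is what makes the check succeed.
        fmt.Println("matched NoK8sNetworkError")
    }
}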

Fixes 1

Signed-off-by: [email protected]

Deleting interface should be continued instead of returning after one failure

Instead of returning the error encountered when deleting one interface inside the loop, can we concatenate the errors, continue with the deletion of the rest of the interfaces, and return the error outside of the loop?
That way, if there was an error on the deletion of any interface, we still catch it and are able to return it at the end of the function.
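For what it's worth, a small Go sketch of the proposed pattern (hypothetical helper names, not the actual multus code): collect the per-interface errors, keep deleting, and return the concatenation at the end.

package main

import (
    "fmt"
    "strings"
)

// deleteInterface stands in for the per-delegate delete call.
func deleteInterface(name string) error {
    if name == "net1" {
        return fmt.Errorf("failed to delete %s", name)
    }
    return nil
}

// deleteAll keeps going after a failure and returns the combined errors,
// instead of returning on the first one.
func deleteAll(ifaces []string) error {
    var errs []string
    for _, name := range ifaces {
        if err := deleteInterface(name); err != nil {
            errs = append(errs, err.Error())
            continue // carry on with the remaining interfaces
        }
    }
    if len(errs) > 0 {
        return fmt.Errorf("errors while deleting interfaces: %s", strings.Join(errs, "; "))
    }
    return nil
}

func main() {
    fmt.Println(deleteAll([]string{"net0", "net1", "net2"}))
}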

Cleaning up the reserved IPAM, in case of plugin failure in Multus CNI.

If there is a malformed configuration in the 'delegates' table in the Multus configuration file and at least one Flannel plugin instance is working correctly, Multus will keep creating interfaces and reserving consecutive IP addresses from the subnet range specified in the Flannel configuration. This will eventually lead to the reservation of the entire range of the specified subnet and will block the creation of any new K8s pods with a Flannel interface.
Moreover, we haven't found any recovery action after a failure that would do automatic cleanup by itself. We had to remove the leftovers of failed pod creation attempts manually, by removing the failed files and directories from the /var/lib/cni/networks/ directory.
Of note, the 'restartPolicy' option in the pod configuration was set to 'Never'.
Steps to reproduce:

  1. Create Multus configuration file containing at least one properly configured Flannel entry and at least one malformed entry. Example:
cat /etc/cni/net.d/multus-cni.conf
{
    "name": "minion1-multus-demo-network",
    "type": "multus",
    "delegates": [
        {
                "type": "flannel",
                "masterplugin": true,
                "delegate": {
                        "isDefaultGateway": true
                }
        },
        {
                "type": "foobar"
        }
    ]
}
  2. Restart the kubelet service.
  3. Create a new pod.

multus sriov cni not working after system reboot

I am using the multus SR-IOV CNI as a network annotation (used for creating multiple interfaces) in the pod yaml file, but after a system reboot it is not working and the pod is still in the Completed state, while the weave multus CNI plugin, set up the same way, keeps working....

root@ubuntu:/home/mavenir/rahul/files# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
multus10 1/1 Running 8 8d 10.32.0.5 ubuntu
multus8-jq6m7 0/1 Completed 0 8d ubuntu

Is there a way to communicate across kubernetes nodes?

I use flannel to communicate across kubernetes nodes; pods can communicate via the default interface eth0.

If I add one more interface, the cni config file looks as follows:
---master---

      "type": "ptp",
      "ipam": {
        "type": "host-local",
        "subnet": "172.16.1.0/24",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": "172.16.1.1"
     }

---minion---

      "type": "ptp",
      "ipam": {
        "type": "host-local",
        "subnet": "172.16.2.0/24",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": "172.16.2.1"
     }

The pods on the same node are reachable, but across nodes they are unreachable because they are in different subnets (I use different subnets to avoid IP conflicts).

I cannot find a way to make them reachable.

Should I use the default route (created by the flannel docker bridge)?

Is there an example for that?

Issue with POD redeployment.

I am using this to have ipvlan-based networking for the containers. There is an issue where, if a Kubernetes POD is terminated and redeployed with the same network K8S object, networking doesn't seem to work correctly. We have defined a CRD network object where the plugin type is defined as ipvlan. Also, in the arguments, IPAM defines the subnet CIDR and the start and end IPs. I have tried the following sequences, but none of them restores it, apart from a reboot of the node.

  1. Delete the K8s Network object and recreate it before the POD is deployed again.
  2. Restart kubelet.

The problem seems to be with IPv6.

DNS Problem

Hi all,

I'm trying to access pods directly via the public IP, i.e. no NAT or anything.

I can make this happen by making my multus config like so:

{
  "name": "multus-demo",
  "type": "multus",
  "delegates": [
    {
      "type": "macvlan",
      "master": "eth1",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "137.74.152.128/26",
        "rangeStart": "137.74.152.129",
        "rangeEnd": "137.74.152.189",
        "routes": [
          { "dst": "0.0.0.0/0","gw":"137.74.152.190" }
        ],
        "gateway": "137.74.152.190"
     }
    },
    {
      "type": "flannel",
      "masterplugin": true,
      "delegate": {
        "isDefaultGateway": false <-- This seemed to be the key
      }
    }
  ]
}

This works fine and I can reach the pod directly from outside.

This gives me a routing table like so, in the pod:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         137.74.152.190  0.0.0.0         UG    0      0        0 net0
10.244.0.0      10.244.1.1      255.255.0.0     UG    0      0        0 eth0
10.244.1.0      *               255.255.255.0   U     0      0        0 eth0
137.74.152.128  *               255.255.255.192 U     0      0        0 net0

I cannot get any DNS, as resolv.conf shows:

nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local ovh.net
options ndots:5

And there is no route to 10.96.0.10 (the DNS server) except through 137.74.152.190, which is obviously not going to work.

I'm sure I could hack up a way to add, say, 8.8.8.8 to the DNS server list, but I'm sure this is the wrong way to go about it, as I believe I need 10.96.0.10.

I know next to nothing, by the way; I've only been using kubernetes and docker for the last week.

I do not want to use NAT for certain pods, as I would eventually like to use this sort of platform for VoIP.

Anyone have any pointers in the right direction please?

Multiple interfaces for specific Pod on kubernetes

Hi folks,
AFAIK, multus can give more than one network interface to a container/pod on kubernetes. But in my use case on a kubernetes cluster, I only need to give specific pods multiple interfaces. My question: is there a method to add multiple interfaces to specific pods only?
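For what it's worth, the per-pod networks annotation shown in the other issues here is exactly that mechanism: only pods carrying the annotation get the extra interfaces, while all other pods just receive the master plugin's default network. A minimal sketch (the network names are illustrative and must match existing Network objects in your cluster):

apiVersion: v1
kind: Pod
metadata:
  name: multi-if-pod
  annotations:
    networks: '[
        { "name": "flannel-conf" },
        { "name": "macvlan-conf" }
    ]'
spec:
  containers:
  - name: multi-if-pod
    image: busybox
    command: ["top"]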

Using Kubernetes version: v1.6.6 failing

Hello,

I am facing an issue with multus running on the 1.6.6 kubernetes version.
Jun 22 17:07:24 wn1-virtual-machine kubelet[22909]: E0622 17:07:24.880168 22909 remote_runtime.go:109] StopPodSandbox "c9e530e16dfc884a9d555e2d4bf96c06a3895959ba748b65c0d7cb407da85e4f" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "kube-dns-692378583-pvrv9_kube-system" network: Multus: Err in reading the delegates: failed to read container data in the path("/var/lib/cni/multus/c9e530e16dfc884a9d555e2d4bf96c06a3895959ba748b65c0d7cb407da85e4f"): open /var/lib/cni/multus/c9e530e16dfc884a9d555e2d4bf96c06a3895959ba748b65c0d7cb407da85e4f: no such file or directory
"Jun 22 17:07:24 wn1-virtual-machine kubelet[22909]: E0622 17:07:24.880543 22909 kuberuntime_gc.go:138] Failed to stop sandbox "c9e530e16dfc884a9d555e2d4bf96c06a3895959ba748b65c0d7cb407da85e4f" before removing: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "kube-dns-692378583-pvrv9_kube-system" network: Multus: Err in reading the delegates: failed to read container data in the path("/var/lib/cni/multus/c9e530e16dfc884a9d555e2d4bf96c06a3895959ba748b65c0d7cb407da85e4f"): open /var/lib/cni/multus/c9e530e16dfc884a9d555e2d4bf96c06a3895959ba748b65c0d7cb407da85e4f: no such file or directory"

Below are the steps that I followed:

  1. kubeadm init

  2. copied multus to /opt/cni/bin

  3. root@wn1-virtual-machine:/etc/cni/net.d# kubectl get pods --all-namespaces --show-all | grep dns
     kube-system kube-dns-692378583-pvrv9 0/3 ContainerCreating 0 47m

  4. files used as suggested in the issue for 1.6.4

  5. root@wn1-virtual-machine:/etc/cni/net.d# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
    valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:00:fb:39 brd ff:ff:ff:ff:ff:ff
    inet 192.168.63.192/24 brd 192.168.63.255 scope global dynamic ens33
    valid_lft 1129sec preferred_lft 1129sec
    inet6 fe80::ca84:383e:dd8c:2181/64 scope link
    valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:ae:37:4a:d4 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
    valid_lft forever preferred_lft forever
    4: flannel.1: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default
    link/ether 2a:64:3f:43:70:c4 brd ff:ff:ff:ff:ff:ff

  6. root@wn1-virtual-machine:/etc/cni/net.d# cat 10-multus.conf
     {
       "name": "multus-demo",
       "type": "multus",
       "delegates": [
         {
           "type": "macvlan",
           "master": "eth0",
           "mode": "bridge",
           "ipam": {
             "type": "host-local",
             "subnet": "192.168.122.0/24",
             "rangeStart": "192.168.122.200",
             "rangeEnd": "192.168.122.216",
             "routes": [
               { "dst": "0.0.0.0/0" }
             ],
             "gateway": "192.168.122.1"
           }
         },
         {
           "type": "flannel",
           "masterplugin": true,
           "delegate": {
             "isDefaultGateway": true
           }
         }
       ]
     }

  7. root@wn1-virtual-machine:~# cat flannel-rbac.yaml

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: multus
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: multus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: multus
subjects:
- kind: ServiceAccount
  name: multus
  namespace: kube-system
  8. root@wn1-virtual-machine:~# cat multus.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-multus-cfg
  namespace: kube-system
  labels:
    tier: node
    app: multus
data:
  cni-conf.json: |
    {
      "name": "multus-demo",
      "type": "multus",
      "delegates": [
        {
          "type": "macvlan",
          "master": "eth0",
          "mode": "bridge",
          "ipam": {
            "type": "host-local",
            "subnet": "192.168.122.0/24",
            "rangeStart": "192.168.122.200",
            "rangeEnd": "192.168.122.216",
            "routes": [
              { "dst": "0.0.0.0/0" }
            ],
            "gateway": "192.168.122.1"
          }
        },
        {
          "type": "flannel",
          "masterplugin": true,
          "delegate": {
            "isDefaultGateway": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-multus-ds
  namespace: kube-system
  labels:
    tier: node
    app: multus
spec:
  template:
    metadata:
      labels:
        tier: node
        app: multus
    spec:
      hostNetwork: true
      # nodeSelector:
      #  beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.7.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: quay.io/coreos/flannel:v0.7.0-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-multus.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-multus-cfg

Please help me out in resolving this issue.

Thank you.

ipvlan config question - master and gateway parameters

Hi @rkamudhan
In this response from earlier this year you showed a sample ipvlan configuration for Multus, and it seems that you omitted the "master" parameter (which denotes the name of the host interface to enslave, as per the IPVLAN CNI plugin). Can you please explain why?
I also see that you specified a gateway, but according to the ipvlan driver documents, gateways are not used in ipvlan configurations. Would you be able to explain the reasoning behind this? I am assuming this has to do with the IPAM configuration, but it's not really clear to me how it fits together.
Thanks.

crd permission fail

I was trying to debug a pod with annotations to select networks. It kept falling back to the default network. It turns out the multus user account didn't have the rights to read the network CRD info. Once fixed, it started working properly.

In this case though, when a user specifies a desired network, it should never fall back to the default network but should fail the pod, as the pod will malfunction if the desired networks are not attached.

Please update the behavior to fail when specified networks/config are unavailable.
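For context, the RBAC side of the fix looks roughly like the following (a sketch against the kubernetes.com/v1 Network CRD used throughout these issues; the role name is illustrative): grant the multus service account read access to the network objects.

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: multus-network-reader
rules:
- apiGroups:
  - kubernetes.com
  resources:
  - networks
  verbs:
  - get
  - list
  - watch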

Adding SRIOV plugin complains about ipam

Hello,
When I add SRIOV to the multus plugin, the kube-dns pod does not start and complains about the absence of an IPAM module. In reality, assigning an IP address is not always required; I am curious whether IPAM can be made optional. I would appreciate it if you could check the messages and config provided below.

  7m            7m              1       kubelet, sbezverk-srv-05.sbezverk.cisco.com                     Warning         FailedSync      Error syncing pod, skipping: failed to "CreatePodSandbox" for "kube-dns-1080222536-2ndzl_kube-system(17fcccf5-511c-11e7-acee-58ac785cb0a1)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-1080222536-2ndzl_kube-system(17fcccf5-511c-11e7-acee-58ac785cb0a1)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"kube-dns-1080222536-2ndzl_kube-system\" network: netplugin failed but error parsing its diagnostic message \"{\\n    \\\"ip4\\\": {\\n        \\\"ip\\\": \\\"10.57.128.2/24\\\",\\n        \\\"gateway\\\": \\\"10.57.128.1\\\",\\n        \\\"routes\\\": [\\n            {\\n                \\\"dst\\\": \\\"10.57.128.0/19\\\",\\n                \\\"gw\\\": \\\"10.57.128.1\\\"\\n            },\\n            {\\n                \\\"dst\\\": \\\"0.0.0.0/0\\\",\\n                \\\"gw\\\": \\\"10.57.128.1\\\"\\n            }\\n        ]\\n    },\\n    \\\"dns\\\": {}\\n}{\\n    \\\"code\\\": 100,\\n    \\\"msg\\\": \\\"Multus: error in invoke Delegate add - \\\\\\\"sriov\\\\\\\": failed to set up IPAM plugin type \\\\\\\"\\\\\\\" from the device \\\\\\\"enp129s0f0\\\\\\\": no plugin name provided\\\"\\n}\": invalid character '{' after top-level value"

  7m    7m      1       kubelet, sbezverk-srv-05.sbezverk.cisco.com             Warning FailedSync      Error syncing pod, skipping: failed to "KillPodSandbox" for "17fcccf5-511c-11e7-acee-58ac785cb0a1" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"kube-dns-1080222536-2ndzl_kube-system\" network: Multus: error in invoke Delegate del - \"sriov\": failed to open netns %!!(MISSING)!(MISSING)q(<nil>): failed to Statfs \"\": no such file or directory"

  7m    4s      32      kubelet, sbezverk-srv-05.sbezverk.cisco.com             Normal  SandboxChanged  Pod sandbox changed, it will be killed and re-created.
  7m    4s      31      kubelet, sbezverk-srv-05.sbezverk.cisco.com             Warning FailedSync      Error syncing pod, skipping: failed to "KillPodSandbox" for "17fcccf5-511c-11e7-acee-58ac785cb0a1" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"kube-dns-1080222536-2ndzl_kube-system\" network: Multus: Err in  reading the delegates: failed to read container data in the path(\"/var/lib/cni/multus/1d37508e11906f26c7f667cab880bb22c540bcb38909287c892c62f0d3200c63\"): open /var/lib/cni/multus/1d37508e11906f26c7f667cab880bb22c540bcb38909287c892c62f0d3200c63: no such file or directory"
cat bridge.yaml 
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-multus-cfg
  namespace: kube-system
  labels:
    tier: node
    app: multus
data:
  cni-conf.json: |
    {
      "name": "multus-cni",
      "type": "multus",
      "delegates": [
        {
          "type": "macvlan",
          "master": "provider_b0",
          "mode": "bridge",
          "ipam": {
            "type": "host-local",
            "subnet": "192.168.10.0/24",
            "rangeStart": "192.168.10.154",
            "rangeEnd": "192.168.10.254",
            "routes": [
              { "dst": "0.0.0.0/0" }
            ],
            "gateway": "192.168.10.1"
          }
        },
        {
          "name": "sriov-1-1",
          "type": "sriov",
          "if0": "enp129s0f0",
          "vf": "1"
          "ipam": {
            "type": "host-local",
            "subnet": "1.1.1.0/24"
          }
        },
        {
          "name": "sriov-1-2",
          "type": "sriov",
          "if0": "enp129s0f0",
          "vf": "2"
          "ipam": {
            "type": "host-local",
            "subnet": "1.1.2.0/24"
          }
        },
        {
          "name": "sriov-1-3",
          "type": "sriov",
          "if0": "enp129s0f0",
          "vf": "3"
          "ipam": {
            "type": "host-local",
            "subnet": "1.1.3.0/24"
          }
        },
        {
          "name": "sriov-1-4",
          "type": "sriov",
          "if0": "enp129s0f0",
          "vf": "4"
          "ipam": {
            "type": "host-local",
            "subnet": "1.1.4.0/24"
          }
        },
        {
          "name": "sriov-2-1",
          "type": "sriov",
          "if0": "enp129s0f1",
          "vf": "1"
          "ipam": {
            "type": "host-local",
            "subnet": "1.2.1.0/24"
          }
        },
        {
          "name": "sriov-2-2",
          "type": "sriov",
          "if0": "enp129s0f1",
          "vf": "2"
          "ipam": {
            "type": "host-local",
            "subnet": "1.2.2.0/24"
          }
        },
        {
          "name": "sriov-2-3",
          "type": "sriov",
          "if0": "enp129s0f1",
          "vf": "3"
          "ipam": {
            "type": "host-local",
            "subnet": "1.2.3.0/24"
          }
        },
        {
          "name": "sriov-2-4",
          "type": "sriov",
          "if0": "enp129s0f1",
          "vf": "4"
          "ipam": {
            "type": "host-local",
            "subnet": "1.2.4.0/24"
          }
        },
        {
          "type": "flannel",
          "masterplugin": true,
          "delegate": {
            "isDefaultGateway": true
          }
        }
      ]
    }
  net-conf.json: |+
    {
      "Network": "10.57.128.0/19",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-multus-ds
  namespace: kube-system
  labels:
    tier: node
    app: multus
spec:
  template:
    metadata:
      labels:
        tier: node
        app: multus
    spec:
      hostNetwork: true
      # nodeSelector:
      #  beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      serviceAccountName: flannel 
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.7.1-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: quay.io/coreos/flannel:v0.7.1-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-multus.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-multus-cfg

Multus v2.0 semantic version

Network Plumbing Working Group changes in Multus. Semantic versioning is the de facto standard for describing software versions in the Go community. The v2.0 version is not compatible with the older versions v1.0, v1.1, and v1.2.

Todo:

  • Add version in built binary (embedded); see the sketch after this list
  • Modify travisCI to put binary in release tab
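For the first item, a common Go approach (a sketch, not necessarily what multus ends up doing) is to declare a package-level version variable and stamp it at build time with -ldflags:

package main

import "fmt"

// version is meant to be overridden at build time, e.g.:
//   go build -ldflags "-X main.version=v2.0" .
var version = "master@git"

func versionString() string {
    return fmt.Sprintf("multus-cni version: %s", version)
}

func main() {
    fmt.Println(versionString())
}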

Multus not aligned to CNI spec v0.3.0

When using calico with spec v0.3.0 (plugin chaining included) under multus, I get the following error:

1235 pod_workers.go:186] Error syncing pod af00ed7b-0a97-11e8-bf2f-525400f0a9f1 ("simplesvc-54f8df5889-lhqwl_default(af00ed7b-0a97-11e8-bf2f-525400f0a9f1)"), skipping: failed to "CreatePodSandbox" for "simplesvc-54f8df5889-lhqwl_default(af00ed7b-0a97-11e8-bf2f-525400f0a9f1)" with CreatePodSandboxError: "CreatePodSandbox for pod "simplesvc-54f8df5889-lhqwl_default(af00ed7b-0a97-11e8-bf2f-525400f0a9f1)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "simplesvc-54f8df5889-lhqwl_default" network: Multus: Err in delegate conf: delegate must have the field 'type'"

I am using the following multus conf:
{
  "name": "multus-multi-network",
  "type": "multus",
  "policy": {
    "type": "k8s",
    "k8s_api_root": "https://10.96.0.1:443",
    "k8s_auth_token":
  },
  "kubeconfig": "/etc/cni/net.d/multus-kubeconfig",
  "delegates": [
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "etcd_endpoints": "https://10.10.6.11:2379,https://10.10.6.12:2379",
          "etcd_key_file": "/etc/cni/net.d/calico-tls/etcd-key",
          "etcd_cert_file": "/etc/cni/net.d/calico-tls/etcd-cert",
          "etcd_ca_cert_file": "/etc/cni/net.d/calico-tls/etcd-ca",
          "log_level": "info",
          "ipam": {
            "type": "calico-ipam"
          },
          "policy": {
            "type": "k8s",
            "k8s_api_root": "https://10.96.0.1:443",
            "k8s_auth_token":
          },
          "kubernetes": {
            "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {
            "portMappings": true
          }
        }
      ],
      "masterplugin": true
    }
  ]
}

PTP support when using CRD

Steps:

  1. Copy the multus configuration file to /etc/cni/net.d/:
{
    "type": "multus",
    "log_level": "debug",
    "kubeconfig": "/etc/kubernetes/admin.conf",
    "delegates": [{
        "name": "flannel",
        "type": "flannel",
        "masterplugin": true
    } ]
}
  2. Start a single-node kubernetes cluster:
    sudo -E kubeadm init --pod-network-cidr 10.244.0.0/16

  3. Start up flannel, using the .yamls from the upstream CoreOS repo:

sudo -E kubectl create -f "../k8s/kube-flannel-rbac.yml"
sudo -E kubectl create --namespace kube-system -f "../k8s/kube-flannel.yml"
  4. Untaint the master so we can schedule work on it, and create a custom resource definition for networks:
master=$(hostname)
sudo -E kubectl taint nodes "$master" node-role.kubernetes.io/master:NoSchedule-
kubectl create -f crd-network.yaml

Where crd-network yaml is:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: networks.kubernetes.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: kubernetes.com
  # version name to use for REST API: /apis/<group>/<version>
  version: v1
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: networks
    # singular name to be used as an alias on the CLI and for display
    singular: network
    # kind is normally the CamelCased singular type. Your resource manifests use this.
    kind: Network
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - net
  5. Create a few CRD network types on the system, based on the following yaml:
---
apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
  name: ptp-net
plugin: ptp
args: '[
    {
        "name": "ptp-net",
        "type": "ptp",
        "ipam": {
                  "type": "host-local",
                  "subnet": "10.248.246.144/28",
                  "routes": [
                   { "dst": "0.0.0.0/0" }
        }
    }
]'

---
apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
  name: br-net-1
plugin: bridge
args: '[
    {
        "name": "br-net-1",
        "type": "bridge",
        "bridge": "br-net-1",
        "ipam": {
            "type": "host-local",
            "subnet": "10.1.10.0/24"
        }
    }
]'

---
apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
  name: br-net-2
plugin: bridge
args: '[
    {
        "name": "br-net-2",
        "type": "bridge",
        "bridge": "br-net-2",
        "ipam": {
            "type": "host-local",
            "subnet": "11.1.1.0/24"
        }
    }
]'
  6. Start a pod making use of each of these three networks:
$ sudo -E kubectl create -f ptp-pod.yaml
$ cat ptp-pod.yaml 
---
apiVersion: v1
kind: Pod # TODO make these deployments later
metadata:
  name: ptp-test
  annotations:
    networks: '[
        { "name": "br-net-1" },
        { "name": "br-net-2" },
        { "name": "ptp-net" }
    ]'
spec:
  containers:
  - name: ptp-test
    image: "ubuntu:14.04"
    command: ["top"]
    stdin: true
    tty: true
  7. See the following networks:
$ sudo -E kubectl exec -it ptp-test  -- bash
root@ptp-test:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if135: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 0a:58:0a:f4:00:44 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.68/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5433:ff:fe5f:9145/64 scope link 
       valid_lft forever preferred_lft forever
  8. Bring up one more pod, this time not using PTP:
sudo -E kubectl create -f test-pod.yaml 
 cat test-pod.yaml 
---
apiVersion: v1
kind: Pod # TODO make these deployments later
metadata:
  name: no-ptp-test
  annotations:
    networks: '[
        { "name": "br-net-1" },
        { "name": "br-net-2" }
    ]'
spec:
  containers:
  - name: no-ptp-test
    image: "ubuntu:14.04"
    stdin: true
    tty: true
  9. Network comes up without issue:
$ sudo -E kubectl exec -it no-ptp-test2 -- bash -c "ip a"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if146: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 0a:58:0a:01:0a:03 brd ff:ff:ff:ff:ff:ff
    inet 10.1.10.3/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::8c3e:a1ff:fed1:8659/64 scope link 
       valid_lft forever preferred_lft forever
5: net0@if147: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 0a:58:0b:01:01:03 brd ff:ff:ff:ff:ff:ff
    inet 11.1.1.3/24 scope global net0
       valid_lft forever preferred_lft forever
    inet6 fe80::2098:afff:febf:fbb0/64 scope link 
       valid_lft forever preferred_lft forever

Multus reading stored conf file improperly after the node reboot.

After the next reboot we get the error below. There is no file in the /var/lib/cni directory. The kubelet logs, however, show something weird:

Nov 08 03:20:30 Minion2 kubelet[1333]: W1108 03:20:30.421959    1333 docker_sandbox.go:342] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "busybox-flannel-with-multus-minion2_default": CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "1dbac58fcb0b562c3e1ef109c045c8599838646415d9b80f9dbc3f79eefd757d"
Nov 08 03:20:30 Minion2 kubelet[1333]: W1108 03:20:30.423128    1333 docker_sandbox.go:342] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "busybox-flannel-with-multus-minion2_default": CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "fc2572c319de6a10cda6f097e41b4d080c674e7556b4ec797f7a2aaf9160dc4b"
...
Nov 08 03:20:31 Minion2 kubelet[1333]: E1108 03:20:31.044895    1333 cni.go:312] Error deleting network: Multus: Err in  reading the delegates: failed to read container data in the path("/var/lib/cni/multus/fc2572c319de6a10cda6f097e41b4d080c674e7556b4ec797f7a2aaf9160dc4b"): open /var/lib/cni/multus/fc2572c319de6a10cda6f097e41b4d080c674e7556b4ec797f7a2aaf9160dc4b: no such file or directory

Nov 08 03:20:31 Minion2 kubelet[1333]: E1108 03:20:31.045660    1333 remote_runtime.go:114] StopPodSandbox "fc2572c319de6a10cda6f097e41b4d080c674e7556b4ec797f7a2aaf9160dc4b" from runtime service failed: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "busybox-flannel-with-multus-minion2_default" network: Multus: Err in  reading the delegates: failed to read container data in the path("/var/lib/cni/multus/fc2572c319de6a10cda6f097e41b4d080c674e7556b4ec797f7a2aaf9160
Nov 08 03:20:31 Minion2 kubelet[1333]: E1108 03:20:31.045694    1333 kuberuntime_gc.go:154] Failed to stop sandbox "fc2572c319de6a10cda6f097e41b4d080c674e7556b4ec797f7a2aaf9160dc4b" before removing: rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod "busybox-flannel-with-multus-minion2_default" network: Multus: Err in  reading the delegates: failed to read container data in the path("/var/lib/cni/multus/fc2572c319de6a10cda6f097e41b4d080c674e7556b4ec797f7a2aaf9160dc4b

Support chained CNI Plugins

CNI v0.3.0 defined a way to call a list of plugins by listing them in the "plugins" field (see the CNI spec for details).

But this feature is not supported in Multus. If you try to use a CNI plugin (like "portmap") that is meant to be used only as a chained plugin, you will get the error message: must be called as chained plugin.

Can multus support chained plugins?

I think we can add a special type of Network resource, like "chained", and define the real plugin types and args in the "args" field. Multus would then understand that this network is a list of chained plugins and call each one sequentially with prevResult enabled.
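
For reference, a chained configuration in the upstream CNI conflist format looks roughly like this (a sketch of the spec's "plugins" list, not something Multus consumes today; the bridge settings are placeholders, and portmap receives the prevResult from the bridge plugin):

{
  "cniVersion": "0.3.0",
  "name": "chained-example",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}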

Example CNI Config: Master plugin POD/Service Network, Second Plugin Internet Connectivity

Is anyone able to provide an example CNI configuration where a master plugin, e.g. Calico, provides pod and service connectivity, and a second plugin provides only NAT'd internet connectivity? This use case would allow easier monitoring of internet egress for chargeback purposes.

{
  "name": "multus-networks",
  "type": "multus",
  "delegates": [
    {
      "cniVersion": "0.1.0",
      "name": "internet",
      "type": "bridge",
  ?????,
      }
    },
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.1.0",
      "type": "calico",
      "masterplugin": true,
      "log_level": "info",
      "datastore_type": "kubernetes",
      "nodename": "__KUBERNETES_NODE_NAME__",
      "mtu": 1500,
      "ipam": {
        "type": "host-local",
        "subnet": "usePodCidr"
      },
      "policy": {
          "type": "k8s",
          "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
      },
      "kubernetes": {
          "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
          "kubeconfig": "__KUBECONFIG_FILEPATH__"
      }
    }
  ]
}
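
One possible way to fill in the missing bridge delegate, assuming NAT is done with ipMasq and a host-local range (the bridge name and subnet are placeholders; whether the default route should go through this interface or the Calico one depends on how you want egress to flow):

    {
      "cniVersion": "0.1.0",
      "name": "internet",
      "type": "bridge",
      "bridge": "internet0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "172.30.0.0/24",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ]
      }
    }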

newbie question

I am just getting started with a POC around Multus. I have created a vanilla Kubernetes cluster with one master and one worker. Should I apply the flannel and multus YAMLs as specified here and separately (https://github.com/Intel-Corp/multus-cni/issues/3), or is there one YAML that I can apply to see both flannel- and multus-created interfaces?

how to create multiple k8s services with endpoints pointing to different network interfaces

I was able to create multiple interfaces (two flannel overlay networks) with Multus for a k8s pod, but the kube service endpoints point to the default interface. I would like a separate k8s service for each of the two network interfaces, i.e. a control service with endpoints on the flannel-1 IPs:port and a data service with endpoints on the flannel-2 IPs:ports. Any idea how to achieve this?
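
Kubernetes services only track the pod IP reported by the default network, so one workaround is a selector-less Service with manually managed Endpoints that point at the secondary-interface IPs. A minimal sketch, assuming the IPs are known and kept in sync by hand or by an external controller (names, IPs, and ports are placeholders):

---
apiVersion: v1
kind: Service
metadata:
  name: data-svc
spec:
  clusterIP: None   # headless: DNS returns the endpoint IPs directly
  ports:
  - port: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: data-svc    # must match the Service name
subsets:
- addresses:
  - ip: 11.1.1.3    # pod IP on the second flannel network
  ports:
  - port: 8080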

multus cannot find delegate cni network in pod/deployment without annotations

I'm configuring multus-cni in this scenario:
hosts: Ubuntu 16.04 LTS
kubernetes version: 1.7
network: macvlan, weave
When I start a pod with annotations, everything works fine.
Example:

apiVersion: v1
kind: Pod
metadata:
  name: multus-multi-net-poc
  annotations:
    networks: '[  
        { "name": "weave-conf" },
        { "name": "macvlan" }
    ]'
spec:  # specification of the pod's contents
  containers:
  - name: multus-multi-net-poc
    image: "busybox"
    command: ["top"]
    stdin: true
    tty: true

But when I start a standard pod (without annotations), it is not able to reach the "Running" state, and in the logs I can see:

Error deleting network: Multus: Err in  reading the delegates: failed to read container data in the path
ailed to read container data in the path(\"/var/lib/cni/multus/c5c70dfd300607437cebb12b87f21651040f27bac143a91f2197fc2b8ff21b7e\"): open /var/lib/cni/multus/c5c70dfd300607437cebb12b87f21651040f27bac143a91f2197fc2b8ff21b7e: no such file or directory"

my cni conf is:

{
    "name": "minion-cni-network",
    "type": "multus",
    "kubeconfig": "/etc/kubernetes/admin.conf",
    "delegates": [
      {
        "type": "weave-net",
        "hairpinMode": true
      }
    ]
}

Is there any incompatibility with Kubernetes 1.7, or is there a misconfiguration on my side?
Thanks

networks annotations not working in version 1.8.0-00

I am running kubeadm version 1.8.0-00, and the default network is multus-flannel (10-multus-flannel.conf):

{
  "name": "multus-demo",
  "type": "multus",
  "delegates": [
    {
      "type": "flannel",
      "masterplugin": true,
      "delegate": {
        "isDefaultGateway": true
      }
    }
  ]
}

I created a CustomResourceDefinition as below:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networks.kubernetes.com
spec:
  group: kubernetes.com
  version: v1
  scope: Namespaced
  names:
    plural: networks
    singular: network
    kind: Network
    shortNames:
    - net

And my other network is a local bridge:

apiVersion: "kubernetes.com/v1"
kind: Network
metadata:
  name: bridge-conf
plugin: bridge
args: '[
    {
      "type": "bridge",
      "name": "lanbr",
      "bridge": "lanbr",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.122.0/24",
        "rangeStart": "192.168.122.200",
        "rangeEnd": "192.168.122.216",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": "192.168.122.1"
     }
    }
]'

And when I tried to create a multi-network pod as below:

apiVersion: v1
kind: Pod
metadata:
  name: multus-multi-net-poc
  annotations:
    networks: '[
        { "name": "bridge-conf" }
    ]'
spec: 
  containers:
  - name: multus-multi-net-poc
    image: "busybox"
    command: ["top"]
    stdin: true
    tty: true

I only see the flannel network eth0 inside the pod; the bridge network is not being added. I also notice that no bridge (lanbr) is created on the host either. Can you please suggest what I am missing?

When I tried to create the multus network using this YAML, I could see both networks and interfaces added inside the pod, but annotations don't work for me.
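
One thing worth checking, going by the 10-multus-flannel.conf above: the annotation-driven lookup needs Multus to reach the API server, which means a kubeconfig entry in the Multus conf. A sketch of what that might look like (the kubeconfig path is an assumption; use whatever kubeconfig is valid on your nodes):

{
  "name": "multus-demo",
  "type": "multus",
  "kubeconfig": "/etc/kubernetes/admin.conf",
  "delegates": [
    {
      "type": "flannel",
      "masterplugin": true,
      "delegate": {
        "isDefaultGateway": true
      }
    }
  ]
}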

Re-using of IP address of terminated PODS

There is an issue raised for IPv6: https://github.com/Intel-Corp/multus-cni/issues/48. This issue is with IPv4. If a pod is terminated, its IP address does not get re-allocated to a new pod deployed with a different Docker image than the terminated pod. If the new pod is deployed with the same Docker image, the terminated pod's IP address is re-allocated.

Workaround for the problem: once the pod is terminated, delete the stale entry under /var/lib/cni/networks/.
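
A sketch of that workaround, assuming host-local IPAM, which reserves each address as a file under /var/lib/cni/networks/<network-name>/ (both the network name and the IP below are placeholders):

# list the reservations host-local is holding for this network
ls /var/lib/cni/networks/<network-name>/
# remove the stale reservation left behind by the terminated pod
sudo rm /var/lib/cni/networks/<network-name>/<stale-ip>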

cmdAdd results provide results for last delegate

As far as I can tell, the "result" returned by cmdAdd is the result from the last delegate that was added. I wonder if it would make sense to embed the other delegates' results in this reply as well. Callers who know to look for these fields would be able to observe the network details for each CNI plugin individually; callers who don't, assuming Go's JSON unmarshaling is used, would not be impacted.
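
A small Go sketch of why that would be backwards compatible: if the per-delegate results ride along as an additional JSON field, callers that unmarshal into the standard result shape simply never see it (the field and type names here are hypothetical, not Multus code):

package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical reply that embeds per-delegate results next to the usual fields.
type multusResult struct {
	CNIVersion      string            `json:"cniVersion"`
	IP4             map[string]string `json:"ip4"`
	DelegateResults []json.RawMessage `json:"delegateResults,omitempty"` // the extra field
}

// A caller that only knows the standard fields.
type plainResult struct {
	CNIVersion string            `json:"cniVersion"`
	IP4        map[string]string `json:"ip4"`
}

func main() {
	reply := []byte(`{"cniVersion":"0.2.0","ip4":{"ip":"10.244.0.68/24"},
		"delegateResults":[{"ip4":{"ip":"11.1.1.3/24"}}]}`)

	var plain plainResult
	if err := json.Unmarshal(reply, &plain); err != nil {
		panic(err) // unknown fields are silently ignored, so this succeeds
	}
	fmt.Println(plain.IP4["ip"]) // 10.244.0.68/24 — unaware callers are unaffected

	var full multusResult
	if err := json.Unmarshal(reply, &full); err != nil {
		panic(err)
	}
	fmt.Println(len(full.DelegateResults)) // 1 — aware callers can inspect each delegate
}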

Similar idea to what @mcastelino was suggesting at intel/userspace-cni-network-plugin#2

What do you think?

Multus CRD Usage?

CPU Manager for Kubernetes (CMK) is currently using TPRs but is planning to transition to CRDs in the near future. Are there plans for Multus to move from TPRs (deprecated) to CRDs in the future?
