
multi-networkpolicy-iptables's Introduction

multi-networkpolicy-iptables


multi-networkpolicy implementation with iptables

Current Status of the Repository

This repository is under active development and is not yet stable. Bug reports and feature requests are welcome.

Description

Kubernetes provides NetworkPolicy for network security. Currently, net-attach-def does not support NetworkPolicy because net-attach-def is a CRD, a user-defined resource outside of Kubernetes. multi-networkpolicy implements NetworkPolicy functionality for net-attach-def networks using iptables, and thereby provides network security for net-attach-def networks.

Quickstart

Install MultiNetworkPolicy CRD into Kubernetes.

$ git clone https://github.com/k8snetworkplumbingwg/multi-networkpolicy
$ cd multi-networkpolicy
$ kubectl create -f scheme.yml
customresourcedefinition.apiextensions.k8s.io/multi-networkpolicies.k8s.cni.cncf.io created

Deploy multi-networkpolicy-iptables into Kubernetes.

$ git clone https://github.com/k8snetworkplumbingwg/multi-networkpolicy-iptables
$ cd multi-networkpolicy-iptables
$ kubectl create -f deploy.yml
clusterrole.rbac.authorization.k8s.io/multi-networkpolicy created
clusterrolebinding.rbac.authorization.k8s.io/multi-networkpolicy created
serviceaccount/multi-networkpolicy created
daemonset.apps/multi-networkpolicy-ds-amd64 created

Requirements

This project leverages the iptables and ip6tables commands to do its work. Hence, the ip_tables and ip6_tables kernel modules need to be loaded on the container host:

# modprobe ip_tables ip6_tables
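
To check whether the modules are already loaded, something like the following can be used (a quick sanity check; output varies by host):

# lsmod | grep -e '^ip_tables' -e '^ip6_tables'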

Configurations

See Configurations.

Demo

(TBD)

MultiNetworkPolicy DaemonSet

The deployment creates a DaemonSet that runs multi-networkpolicy-iptables on each node. multi-networkpolicy-iptables watches MultiNetworkPolicy objects and creates iptables rules inside each pod's network namespace, not on the container host; those rules filter packets on the pod's secondary interfaces based on the MultiNetworkPolicy.
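
For illustration, a minimal MultiNetworkPolicy might look like the following (a sketch; the net-attach-def name macvlan-conf and the pod labels are assumptions). The k8s.v1.cni.cncf.io/policy-for annotation selects the net-attach-def whose attachments are filtered:

apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: allow-from-db
  namespace: default
  annotations:
    # net-attach-def this policy applies to (assumed name)
    k8s.v1.cni.cncf.io/policy-for: macvlan-conf
spec:
  podSelector:
    matchLabels:
      app: web          # pods whose secondary interfaces get iptables rules (assumed label)
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: db       # only traffic from these pods is allowed on the selected network (assumed label)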

TODO

  • Bugfixing
  • (TBD)

Contact Us

For any questions about Multus CNI, feel free to ask in #general in the NPWG Slack, or open up a GitHub issue. Request an invite to the NPWG Slack here.

multi-networkpolicy-iptables's People

Contributors

dependabot[bot], dougbtv, jcaamano, lgtm-com[bot], p-, s1061123, zeeke


multi-networkpolicy-iptables's Issues

Multiple ingress rules: only the last rule matches

Multiple ingress rules are set, but only the last rule matches.
For example, this policy only allows traffic from 172.17.0.0/16:

apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/policy-for: macvlan-conf-1 
spec:
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.16.0.0/16
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16

A RETURN rule may be needed after each rule, before the next rule is rendered, around:

ipt.renderIngressFrom(s, podInfo, idx, n, ingress.From, policyNetworks)
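
For discussion, a rough sketch of the ordering this would produce (chain names follow the pattern visible in the iptables dumps further down this page, but the exact generated layout is an assumption):

# hypothetical MULTI-0-INGRESS layout: RETURN as soon as one ingress rule has
# matched (mark 0x30000 set), before the next rule clears the mark again
-A MULTI-0-INGRESS -j MARK --set-xmark 0x0/0x30000
-A MULTI-0-INGRESS -j MULTI-0-INGRESS-0-PORTS
-A MULTI-0-INGRESS -j MULTI-0-INGRESS-0-FROM
-A MULTI-0-INGRESS -m mark --mark 0x30000/0x30000 -j RETURN
-A MULTI-0-INGRESS -j MARK --set-xmark 0x0/0x30000
-A MULTI-0-INGRESS -j MULTI-0-INGRESS-1-PORTS
-A MULTI-0-INGRESS -j MULTI-0-INGRESS-1-FROM
-A MULTI-0-INGRESS -m mark --mark 0x30000/0x30000 -j RETURN
-A MULTI-0-INGRESS -j DROP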

multi-networkpolicy containerd

Hello,

I tried to deploy multi-networkpolicy. At first the pods could not be scheduled because of an error in the deployment.

See Line 110 in deploy.yml 
  add: ["SYS_ADMIN", "SYS_NET_ADMIN"]

It should be NET_ADMIN. After correcting that, the pods were scheduled, but then they went into an error state:

kubectl -n kube-system logs multi-networkpolicy-ds-amd64-tccgv
I0916 01:25:36.063146 1 server.go:174] Neither kubeconfig file nor master URL was specified. Falling back to in-cluster config.
E0916 01:25:46.160911 1 pod.go:448] failed to get cri client: failed to connect: failed to connect to unix:///host/run/crio/crio.sock, make sure you are running as root and the runtime has been started: context deadline exceeded
F0916 01:25:46.161146 1 main.go:62] cannot create pod change tracker

I also tried running the DaemonSet as root.

Then I changed the container runtime to docker and also tried adding the containerd socket; I am not sure whether that is supported.

Below is the modified deploy.yml:

cat deploy.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: multi-networkpolicy
rules:
  - apiGroups: ["k8s.cni.cncf.io"]
    resources:
      - '*'
    verbs:
      - '*'
  - apiGroups:
      - ""
    resources:
      - pods
      - namespaces
    verbs:
      - list
      - watch
      - get
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  - apiGroups:
      - ""
      - events.k8s.io
    resources:
      - events
    verbs:
      - create
      - patch
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: multi-networkpolicy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: multi-networkpolicy
subjects:
- kind: ServiceAccount
  name: multi-networkpolicy
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: multi-networkpolicy
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: multi-networkpolicy-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: multi-networkpolicy
    name: multi-networkpolicy
spec:
  selector:
    matchLabels:
      name: multi-networkpolicy
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multi-networkpolicy
        name: multi-networkpolicy
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multi-networkpolicy
      containers:
      - name: multi-networkpolicy
        # crio support requires multus:latest for now. support 3.3 or later.
        image: ghcr.io/k8snetworkplumbingwg/multi-networkpolicy-iptables:snapshot-amd64
        imagePullPolicy: Always
        command: ["/usr/bin/multi-networkpolicy-iptables"]
        args:
        - "--host-prefix=/host"
        # uncomment this if runtime is docker
        - "--container-runtime=docker"
        # change this if runtime is different that crio default
        - "--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
        # uncomment this if you want to store iptables rules
        - "--pod-iptables=/var/lib/multi-networkpolicy/iptables"
        resources:
          requests:
            cpu: "100m"
            memory: "80Mi"
          limits:
            cpu: "100m"
            memory: "150Mi"
        securityContext:
          privileged: true
          runAsUser: 0
          capabilities:
            add: ["SYS_ADMIN", "NET_ADMIN"]
        volumeMounts:
        - name: host
          mountPath: /host
        - name: var-lib-multinetworkpolicy
          mountPath: /var/lib/multi-networkpolicy
      volumes:
        - name: host
          hostPath:
            path: /
        - name: var-lib-multinetworkpolicy
          hostPath:
            path: /var/lib/multi-networkpolicy

And here are the logs:

kubectl -n kube-system logs multi-networkpolicy-ds-amd64-6jx6t
I0916 02:01:43.685347       1 server.go:174] Neither kubeconfig file nor master URL was specified. Falling back to in-cluster config.
I0916 02:01:43.888556       1 options.go:73] hostname: sysarch-k8s-master-nf-1
I0916 02:01:43.888587       1 options.go:74] container-runtime: docker
I0916 02:01:43.889247       1 namespace.go:79] Starting ns config controller
I0916 02:01:43.889581       1 shared_informer.go:223] Waiting for caches to sync for ns config
I0916 02:01:43.889297       1 server.go:160] Starting network-policy-node
I0916 02:01:43.982086       1 networkpolicy.go:81] Starting policy config controller
I0916 02:01:43.982131       1 shared_informer.go:223] Waiting for caches to sync for policy config
I0916 02:01:43.990857       1 net-attach-def.go:84] Starting net-attach-def config controller
I0916 02:01:43.990892       1 shared_informer.go:223] Waiting for caches to sync for net-attach-def config
I0916 02:01:44.183237       1 shared_informer.go:230] Caches are synced for policy config
I0916 02:01:44.183753       1 server.go:355] OnPolicySynced
I0916 02:01:44.183788       1 shared_informer.go:230] Caches are synced for net-attach-def config
I0916 02:01:44.183800       1 server.go:388] OnNetDefSynced
I0916 02:01:44.282163       1 shared_informer.go:230] Caches are synced for ns config
I0916 02:01:44.282200       1 server.go:421] OnNamespaceSynced
I0916 02:01:44.282207       1 server.go:100] Starting pod config
I0916 02:01:44.283253       1 pod.go:123] Starting pod config controller
I0916 02:01:44.283277       1 shared_informer.go:223] Waiting for caches to sync for pod config
E0916 02:01:45.190935       1 pod.go:368] failed to get pod(nti-mesh/linkerd-sp-validator-85bc85dcb4-7ltr2) network namespace: failed to get container info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I0916 02:01:45.380851       1 shared_informer.go:230] Caches are synced for pod config
I0916 02:01:45.380917       1 server.go:324] OnPodSynced
E0916 02:01:45.384701       1 pod.go:368] failed to get pod(nti-mesh/linkerd-destination-658f496bcd-8fxww) network namespace: failed to get container info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
E0916 02:01:45.385515       1 server.go:453] cannot get nti-security/ncms-cainjector-66d4c5db74-g6pt2 podInfo: not found
E0916 02:01:45.385574       1 server.go:453] cannot get nti-security/ncms-controller-9d47b9bfc-lkkp5 podInfo: not found
E0916 02:01:45.385937       1 server.go:453] cannot get nti-monitoring/ntim-prometheus-68769b565d-w9dnb podInfo: not found
E0916 02:01:45.385986       1 server.go:453] cannot get ns1/nf0-backend-f9ccf678d-qc5k8 podInfo: not found
E0916 02:01:45.386255       1 server.go:453] cannot get ns0/nf0-tgen-7687f9456d-wt4fw podInfo: not found
E0916 02:01:45.386630       1 server.go:453] cannot get nti-security/ncms-cainjector-66d4c5db74-g6pt2 podInfo: not found
E0916 02:01:45.386678       1 server.go:453] cannot get nti-security/ncms-controller-9d47b9bfc-lkkp5 podInfo: not found
E0916 02:01:45.483181       1 server.go:453] cannot get nti-monitoring/ntim-prometheus-68769b565d-w9dnb podInfo: not found
E0916 02:01:45.483237       1 server.go:453] cannot get ns1/nf0-backend-f9ccf678d-qc5k8 podInfo: not found
E0916 02:01:45.483255       1 server.go:453] cannot get ns0/nf0-tgen-7687f9456d-wt4fw podInfo: not found
E0916 02:01:45.483382       1 server.go:453] cannot get nti-monitoring/ntim-prometheus-68769b565d-w9dnb podInfo: not found
E0916 02:01:45.483393       1 server.go:453] cannot get ns1/nf0-backend-f9ccf678d-qc5k8 podInfo: not found
E0916 02:01:45.483409       1 server.go:453] cannot get ns0/nf0-tgen-7687f9456d-wt4fw podInfo: not found
E0916 02:01:45.483471       1 server.go:453] cannot get nti-security/ncms-cainjector-66d4c5db74-g6pt2 podInfo: not found
E0916 02:01:45.483505       1 server.go:453] cannot get nti-security/ncms-controller-9d47b9bfc-lkkp5 podInfo: not found
E0916 02:01:45.487501       1 pod.go:368] failed to get pod(nti-monitoring/ntim-prometheus-68769b565d-w9dnb) network namespace: failed to get container info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
E0916 02:01:45.487733       1 server.go:453] cannot get nti-security/ncms-cainjector-66d4c5db74-g6pt2 podInfo: not found
E0916 02:01:45.487762       1 server.go:453] cannot get nti-security/ncms-controller-9d47b9bfc-lkkp5 podInfo: not found
E0916 02:01:45.487793       1 server.go:453] cannot get ns1/nf0-backend-f9ccf678d-qc5k8 podInfo: not found
E0916 02:01:45.487820       1 server.go:453] cannot get ns0/nf0-tgen-7687f9456d-wt4fw podInfo: not found
E0916 02:01:45.487902       1 server.go:453] cannot get nti-security/ncms-cainjector-66d4c5db74-g6pt2 podInfo: not found
E0916 02:01:45.487927       1 server.go:453] cannot get nti-security/ncms-controller-9d47b9bfc-lkkp5 podInfo: not found
E0916 02:01:45.487954       1 server.go:453] cannot get ns1/nf0-backend-f9ccf678d-qc5k8 podInfo: not found
E0916 02:01:45.487976       1 server.go:453] cannot get ns0/nf0-tgen-7687f9456d-wt4fw podInfo: not found
E0916 02:01:45.488269       1 pod.go:368] failed to get pod(nti-security/ncms-cainjector-66d4c5db74-g6pt2) network namespace: failed to get container info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
E0916 02:01:45.488530       1 pod.go:368] failed to get pod(ns1/nf0-backend-f9ccf678d-qc5k8) network namespace: failed to get container info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
E0916 02:01:45.488980       1 pod.go:368] failed to get pod(ns0/nf0-tgen-7687f9456d-wt4fw) network namespace: failed to get container info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
E0916 02:01:45.489225       1 server.go:453] cannot get nti-security/ncms-controller-9d47b9bfc-lkkp5 podInfo: not found
E0916 02:01:45.489374       1 server.go:453] cannot get nti-security/ncms-controller-9d47b9bfc-lkkp5 podInfo: not found
E0916 02:01:45.489622       1 pod.go:368] failed to get pod(nti-security/ncms-controller-9d47b9bfc-lkkp5) network namespace: failed to get container info: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Enhancement: support wildcards in k8s.v1.cni.cncf.io/policy-for

Hi,

In my use case, I have a lot of NetworkAttachmentDefinition objects (named net1, net2, and so on).
It is very tedious to list each object the policy applies to in the annotation.
Instead, I would like to be able to write something like:

apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: default-deny
  namespace: mysubnet
  annotations:
    k8s.v1.cni.cncf.io/policy-for: net*
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

and have the policy apply to every NetworkAttachmentDefinition matching the given pattern.

In case this is too complex to implement, would it be possible to match NetworkAttachmentDefinition objects using labels instead of their names? (Similar to how NetworkPolicy matches namespaces using spec.ingress[].from[].namespaceSelector.matchLabels.) A hypothetical example is sketched below.
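
For illustration only, a hypothetical label-based form (this does not exist today; the annotation key and selector syntax are invented for the sake of discussion):

apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: default-deny
  namespace: mysubnet
  annotations:
    # hypothetical annotation: select net-attach-defs by label instead of by name
    k8s.v1.cni.cncf.io/policy-for-selector: "subnet=mysubnet"
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress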

Ingress only rule should allow outgoing TCP connection

I discovered a behavior that differs from regular NetworkPolicies regarding ingress rules and outgoing traffic. Let me know if it is the intended behavior:

The scenario can be reproduced in a kind cluster (with Calico as the default CNI plugin) using this fork (branch multinetpolicy-bugs):

$ cd e2e
$ ./get_tools.sh
$ ./setup_cluster.sh
$ kubectl apply -f stacked.yml
$ ./oneway-test.sh

The scenario creates

  • namespace test-oneway-multi
    • 3 pods (pod-a, pod-b, pod-c)
    • MultiNetworkPolicy test-multinetwork-policy-oneway-1
      • podSelector: pod-a
      • ingress[0].from.podSelector: pod-b
  • namespace test-oneway-regular
    • same as test-oneway-multi, but with regular NetworkPolicies

oneway-test.sh reproduces the behavior:

$ kubectl apply -f oneway.yml
networkattachmentdefinition.k8s.cni.cncf.io/macvlan1-oneway created
namespace/test-oneway-multi created
pod/pod-a created
pod/pod-b created
pod/pod-c created
multinetworkpolicy.k8s.cni.cncf.io/test-multinetwork-policy-oneway-1 created
namespace/test-oneway-regular created
pod/pod-a created
pod/pod-b created
pod/pod-c created
networkpolicy.networking.k8s.io/test-network-policy-oneway-1 created

$ ./oneway-test.sh
pod/pod-a condition met
(1 = NO connection, 0 = can connect)
MULTI: pod-a <-- pod-b 0
MULTI: pod-a <-- pod-c 1
MULTI: pod-a --> pod-b 0
MULTI: pod-a --> pod-c 1
REGULAR: pod-a <-- pod-b 0
REGULAR: pod-a <-- pod-c 1
REGULAR: pod-a --> pod-b 0
REGULAR: pod-a --> pod-c 0

test-oneway-multi/pod-a iptables

Chain INPUT (policy ACCEPT 9 packets, 486 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   13   726 MULTI-INGRESS  all  --  net1   *       0.0.0.0/0            0.0.0.0/0           

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 10 packets, 546 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   10   546 MULTI-EGRESS  all  --  *      net1    0.0.0.0/0            0.0.0.0/0           

Chain MULTI-0-INGRESS (1 references)
 pkts bytes target     prot opt in     out     source               destination         
   13   726 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK and 0xfffcffff
   13   726 MULTI-0-INGRESS-0-PORTS  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
   13   726 MULTI-0-INGRESS-0-FROM  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    9   486 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            mark match 0x30000/0x30000
    4   240 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain MULTI-0-INGRESS-0-FROM (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    9   486 MARK       all  --  net1   *       2.2.6.10             0.0.0.0/0            MARK or 0x20000

Chain MULTI-0-INGRESS-0-PORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination         
   13   726 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* no ingress ports, skipped */ MARK or 0x10000

Chain MULTI-EGRESS (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain MULTI-INGRESS (1 references)
 pkts bytes target     prot opt in     out     source               destination         
   13   726 MULTI-0-INGRESS  all  --  net1   *       0.0.0.0/0            0.0.0.0/0            /* policy:test-multinetwork-policy-oneway-1 net-attach-def:default/macvlan1-oneway */

With regular NetworkPolicies, pod-a is able to create connections to pod-c, while it is not possible with MultiNetworkPolicies.

MULTI: pod-a --> pod-c 1
REGULAR: pod-a --> pod-c 0

Let me know if I can work on a fix for this.
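
For reference, one possible direction (an assumption, not a confirmed fix): regular NetworkPolicies admit reply traffic of pod-initiated connections via connection tracking, so the pod's ingress chain may need a conntrack shortcut along these lines:

# hypothetical rule inside the pod's network namespace: let reply packets of
# connections the pod itself initiated bypass the ingress policy chains
iptables -I MULTI-INGRESS -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT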

Update rules for existing pods when new pods are added

Hello,

My use case is this:

  • I have a namespace named myns with a MultiNetworkPolicy that allows pods inside that namespace (and only those) to communicate with one another (both as an ingress & egress policy) using the mynet network attachment definition. The policy looks like this:
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: mypolicy
  namespace: myns
  annotations:
    k8s.v1.cni.cncf.io/policy-for: mynet
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: myns
        - podSelector: {}
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: myns
        - podSelector: {}
  • I then create a first pod (podA) in that namespace. I see rules are created allowing traffic in/out using podA's IP address. So far, so good.
  • However, when I add another pod (podB) to the namespace, I see rules are created for podB, allowing traffic from/to podA & podB (also good), but the rules for podA are never updated, meaning that traffic going from podA to podB is dropped (due to the lack of a corresponding egress rule in podA) and traffic from podB to podA is also dropped (due to the lack of a corresponding ingress rule for podA).

For comparison, I see that some CNI plugins (e.g. Weave Net) use the IP sets framework to handle that:

  • The iptables rules apply to IP sets (e.g. -A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChPgUaGF_$}G;WdH~~TK)o src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j WEAVE-NPC-EGRESS-ACCEPT). The content of such a set is similar to the following extract from ipset list :
Name: weave-s_+ChPgUaGF_$}G;WdH~~TK)o
Type: hash:ip
Revision: 4
Header: family inet hashsize 1024 maxelem 65536 comment
Size in memory: 1238
References: 1
Number of entries: 6
Members:
10.11.12.20 comment "namespace: default, pod: prometheus-kube-state-metrics-6723ds345-63435d"
  • Pods are added/removed to/from the matching IP sets when they are created/updated/deleted.

Is there a similar mechanism in multi-networkpolicy-iptables, or is there a way to update the rules for existing pods when new pods are added to a namespace?
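
For concreteness, a minimal sketch of the ipset-based pattern described above (the set name, chain name, and pod IPs are made up for illustration):

# create a set of member-pod IPs and match it from a single iptables rule;
# set membership can then be updated without rewriting the rule itself
ipset create myns-pods hash:ip
ipset add myns-pods 10.11.12.20                                  # podA (example address)
ipset add myns-pods 10.11.12.21                                  # podB (example address)
iptables -A INGRESS-EXAMPLE -m set --match-set myns-pods src -j ACCEPT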

Best regards,
François

cannot express policy with only a `namespaceSelector`

When attempting to configure the following policy (which uses only a namespaceSelector in the policy's from clause), the ...-FROM rules (the ones marked with 0x20000) are not generated:

apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: vmi-zero-trust-secondary-network
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/policy-for: wb-vm-network
spec:
  podSelector:
    labelSelector:
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels: 
          role: allowed-namespace
    ports:
    - protocol: TCP
      port: 80

Looking at the code, it seems the rules generator always requires a podSelector for the rules to be generated.

When reading the Kubernetes network policy documentation, it can be seen that using the namespaceSelector without a podSelector is a valid option.

I can also say that when an empty pod selector is added to the from clause provided above, the expected rule is generated.
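
For illustration, that workaround would look roughly like this (a sketch based on the policy above; the empty podSelector in the same from entry is the only change):

  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          role: allowed-namespace
      podSelector: {}   # empty pod selector added alongside the namespaceSelector
    ports:
    - protocol: TCP
      port: 80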

unknown capability: "CAP_SYS_NET_ADMIN"

After installing multi-networkpolicy-iptables, the DaemonSet pods fail with the following error:

  Normal   Scheduled  9m55s                default-scheduler  Successfully assigned kube-system/multi-networkpolicy-ds-amd64-kzvzg to cluster03-master01-192.168.3.30-centos
  Normal   Pulling    13s (x4 over 9m55s)  kubelet            Pulling image "ghcr.io/k8snetworkplumbingwg/multi-networkpolicy-iptables:snapshot-amd64"
  Normal   Pulled     11s (x4 over 51s)    kubelet            Successfully pulled image "ghcr.io/k8snetworkplumbingwg/multi-networkpolicy-iptables:snapshot-amd64"
  Warning  Failed     11s (x4 over 51s)    kubelet            Error: Error response from daemon: invalid CapAdd: unknown capability: "CAP_SYS_NET_ADMIN"

I can't find CAP_SYS_NET_ADMIN in the capabilities man page: https://man7.org/linux/man-pages/man7/capabilities.7.html
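
For reference, and consistent with the containerd issue above, the DaemonSet's securityContext most likely needs the standard capability name NET_ADMIN rather than the non-existent SYS_NET_ADMIN (a sketch of the corrected fragment):

        securityContext:
          capabilities:
            add: ["SYS_ADMIN", "NET_ADMIN"]   # NET_ADMIN exists; SYS_NET_ADMIN does not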

Support for IPv6

The iptables implementation of multi-networkpolicy does not support pods with IPv6 addresses on multus interfaces, giving the following error in such a case:

I0610 13:43:14.709071       1 iptables.go:337] running iptables-save [-t mangle]
I0610 13:43:14.709873       1 iptables.go:337] running iptables-save [-t filter]
I0610 13:43:14.710601       1 iptables.go:337] running iptables-save [-t nat]
I0610 13:43:14.711482       1 iptables.go:442] running iptables: iptables [-w -N MULTI-INGRESS -t filter]
I0610 13:43:14.712217       1 iptables.go:442] running iptables: iptables [-w -N MULTI-EGRESS -t filter]
I0610 13:43:14.712951       1 iptables.go:442] running iptables: iptables [-w -C INPUT -t filter -i net1 -j MULTI-INGRESS]
I0610 13:43:14.796884       1 iptables.go:442] running iptables: iptables [-w -C OUTPUT -t filter -o net1 -j MULTI-EGRESS]
I0610 13:43:14.797721       1 iptables.go:442] running iptables: iptables [-w -C PREROUTING -t nat -i net1 -j RETURN]
I0610 13:43:14.798446       1 iptables.go:337] running iptables-save [-t filter]
I0610 13:43:14.799776       1 iptables.go:402] running iptables-restore [-w --noflush --counters]
E0610 13:43:14.800993       1 server.go:610] sync rules failed: exit status 2 (iptables-restore v1.4.21: host/network `3ffe:ffff:0:1ff::4e' not foundError occurred at line: 14Try `iptables-restore -h' or 'iptables-restore --help' for more information.)
I0610 13:43:14.801038       1 iptables.go:337] running iptables-save [-t mangle]
I0610 13:43:14.801838       1 iptables.go:337] running iptables-save [-t filter]
I0610 13:43:14.802697       1 iptables.go:337] running iptables-save [-t nat] 

A possible solution could be to leverage the Server.ipv6Tables field to issue the same commands via ip6tables when the involved pods have IPv6 addresses.

WDYT? Is this a viable approach?
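
For illustration, the IPv6 path would mirror the IPv4 commands from the log above through ip6tables (a sketch; only the binary changes, the chain and interface names stay the same):

ip6tables -w -N MULTI-INGRESS -t filter
ip6tables -w -N MULTI-EGRESS -t filter
ip6tables -w -C INPUT -t filter -i net1 -j MULTI-INGRESS
ip6tables -w -C OUTPUT -t filter -o net1 -j MULTI-EGRESS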

Stacked policies should be linked with logical `OR`

Hi,

I think I found a behavior that differs from regular NetworkPolicies when creating multiple MultiNetworkPolicies with the same pod selector.

The scenario can be reproduced in a kind cluster (with Calico as the default CNI plugin) using this fork (branch multinetpolicy-bugs):

$ cd e2e
$ ./get_tools.sh
$ ./setup_cluster.sh
$ kubectl apply -f stacked.yml
$ ./stacked-test.sh

The scenario creates

  • namespace test-stacked-multi
    • 4 pods (pod-a, pod-b, pod-c, pod-d)
    • MultiNetworkPolicy test-multinetwork-policy-stacked-1
      • podSelector: pod-a
      • ingress[0].from.podSelector: pod-b
    • MultiNetworkPolicy test-multinetwork-policy-stacked-2
      • podSelector: pod-a
      • ingress[0].from.podSelector: pod-c
  • namespace test-stacked-regular
    • same as test-stacked-multi, but with regular NetworkPolicies

stacked-test.sh verifies the behavior (1 = NO connection, 0 = can connect):

$ ./stacked-test.sh
pod/pod-a condition met
MULTI: pod-a <-- pod-b 1
MULTI: pod-a <-- pod-c 1
MULTI: pod-a <-- pod-d 1
REGULAR: pod-a <-- pod-b 0
REGULAR: pod-a <-- pod-c 0
REGULAR: pod-a <-- pod-d 1

test-stacked-multi/pod-a iptables

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   15   900 MULTI-INGRESS  all  --  net1   *       0.0.0.0/0            0.0.0.0/0           

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MULTI-EGRESS  all  --  *      net1    0.0.0.0/0            0.0.0.0/0           

Chain MULTI-0-INGRESS (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    6   360 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK and 0xfffcffff
    6   360 MULTI-0-INGRESS-0-PORTS  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    6   360 MULTI-0-INGRESS-0-FROM  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    2   120 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            mark match 0x30000/0x30000
    4   240 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain MULTI-0-INGRESS-0-FROM (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    2   120 MARK       all  --  net1   *       2.2.5.10             0.0.0.0/0            MARK or 0x20000

Chain MULTI-0-INGRESS-0-PORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    6   360 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* no ingress ports, skipped */ MARK or 0x10000

Chain MULTI-1-INGRESS (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    2   120 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK and 0xfffcffff
    2   120 MULTI-1-INGRESS-0-PORTS  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    2   120 MULTI-1-INGRESS-0-FROM  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0            mark match 0x30000/0x30000
    2   120 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain MULTI-1-INGRESS-0-FROM (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       all  --  net1   *       2.2.5.9              0.0.0.0/0            MARK or 0x20000

Chain MULTI-1-INGRESS-0-PORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    2   120 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* no ingress ports, skipped */ MARK or 0x10000

Chain MULTI-EGRESS (1 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain MULTI-INGRESS (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    6   360 MULTI-0-INGRESS  all  --  net1   *       0.0.0.0/0            0.0.0.0/0            /* policy:test-multinetwork-policy-stacked-1 net-attach-def:default/macvlan1-stacked */
    2   120 MULTI-1-INGRESS  all  --  net1   *       0.0.0.0/0            0.0.0.0/0            /* policy:test-multinetwork-policy-stacked-2 net-attach-def:default/macvlan1-stacked */

It seems that MultiNetworkPolicies are combined with an AND operator (from pod-b AND pod-c), while regular NetworkPolicies work with an OR operator, as stated in the documentation:

Network policies do not conflict; they are additive. If any policy or policies apply to a given pod for a given direction, the connections allowed in that direction from that pod is the union of what the applicable policies allow. Thus, order of evaluation does not affect the policy result.

Is this the intended behavior, or can it be flagged as a bug? In the latter case, I can work on a PR with some guidance.
