Comments (16)

rancher-max commented on June 21, 2024

Validated using commit f2e7c01acfdc5f51bfd007c44bfe6605e8864975 from the master branch via the linked automated test above, and added it to our regression suite.

Thank you so much for this report and very clear reproduction steps! The PR linked has the steps and details of the test results for those curious. This fix will be available for general consumption in the June patch releases.

brandond commented on June 21, 2024

This works fine for me. Are you perhaps just trying to make a request very quickly, before the network policies have synced? They are not instantaneous; there may be a few seconds following the creation of a new pod during which the policies are not yet in effect.

/ # while true; do wget -qO - http://10.42.0.4; sleep 1; done
Hostname: server
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.4
IP: fe80::48f4:edff:fe25:9ac5
RemoteAddr: 10.42.0.5:60158
GET / HTTP/1.1
Host: 10.42.0.4
User-Agent: Wget
Connection: close

Hostname: server
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.4
IP: fe80::48f4:edff:fe25:9ac5
RemoteAddr: 10.42.0.5:60172
GET / HTTP/1.1
Host: 10.42.0.4
User-Agent: Wget
Connection: close

Hostname: server
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.4
IP: fe80::48f4:edff:fe25:9ac5
RemoteAddr: 10.42.0.5:60174
GET / HTTP/1.1
Host: 10.42.0.4
User-Agent: Wget
Connection: close

Hostname: server
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.4
IP: fe80::48f4:edff:fe25:9ac5
RemoteAddr: 10.42.0.5:49488
GET / HTTP/1.1
Host: 10.42.0.4
User-Agent: Wget
Connection: close

Hostname: server
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.4
IP: fe80::48f4:edff:fe25:9ac5
RemoteAddr: 10.42.0.5:49490
GET / HTTP/1.1
Host: 10.42.0.4
User-Agent: Wget
Connection: close
# ---
# --- here is where I deleted the server pod - note that I get only a single error and then no further output until I hit control-c to break out of the loop
# ---
wget: can't connect to remote host (10.42.0.4): Connection refused
^C
# ---
# --- I then re-created the server pod and started querying it at its new address within a few seconds
# ---
/ # while true; do wget -qO - http://10.42.0.6; sleep 1; done
Hostname: server
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.6
IP: fe80::200e:d6ff:fe7c:26d1
RemoteAddr: 10.42.0.5:43828
GET / HTTP/1.1
Host: 10.42.0.6
User-Agent: Wget
Connection: close

Hostname: server
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.6
IP: fe80::200e:d6ff:fe7c:26d1
RemoteAddr: 10.42.0.5:43832
GET / HTTP/1.1
Host: 10.42.0.6
User-Agent: Wget
Connection: close

Hostname: server
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.6
IP: fe80::200e:d6ff:fe7c:26d1
RemoteAddr: 10.42.0.5:43834
GET / HTTP/1.1
Host: 10.42.0.6
User-Agent: Wget
Connection: close

Hostname: server
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.6
IP: fe80::200e:d6ff:fe7c:26d1
RemoteAddr: 10.42.0.5:43836
GET / HTTP/1.1
Host: 10.42.0.6
User-Agent: Wget
Connection: close

Hostname: server
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.6
IP: fe80::200e:d6ff:fe7c:26d1
RemoteAddr: 10.42.0.5:43848
GET / HTTP/1.1
Host: 10.42.0.6
User-Agent: Wget
Connection: close

Hostname: server
IP: 127.0.0.1
IP: ::1
IP: 10.42.0.6
IP: fe80::200e:d6ff:fe7c:26d1
RemoteAddr: 10.42.0.5:43860
GET / HTTP/1.1
Host: 10.42.0.6
User-Agent: Wget
Connection: close

jfmontanaro commented on June 21, 2024

That's odd. No, I've tried waiting for as long as several minutes and for me connectivity is never restored.

I've created a new repo here with the manifests I'm using to test, plus a script that puts it all together to make sure we're doing the exact same things. You should be able to replicate by just spinning up a new server, then:

git clone https://github.com/jfmontanaro/k3s-netpol-issue-demo.git
cd k3s-netpol-issue-demo
./run.sh

Note that I did add a service to front the server pod, in contrast to how I was doing it above, so that I didn't have to bother figuring out the IP addresses for direct communication with pods. In all my testing, however, it hasn't made a difference whether I was talking to a pod or a service fronting that pod.
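
For reference, the service is nothing special; a minimal sketch would look something like this (the name, label, and port here are illustrative, not copied from the repo):

apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  selector:
    app: server
  ports:
    - port: 80
      targetPort: 80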

I've been testing on VMs from Vultr, if that makes any difference. I can't think why it should, but if it still doesn't reproduce, I don't know what else could be making the difference.

brandond commented on June 21, 2024

I used the exact pod manifests you provided in your initial message. Is there anything else odd about your configuration? Are you sure it's network policy that is blocking your traffic?

jfmontanaro commented on June 21, 2024

I think it's likely that the network policy is at least involved, given that the issue goes away immediately if I remove the network policy.

I notice that the responses you're getting from your whoami service show only a link-local IPv6 address for the server; are you using a dual-stack cluster? I've only seen this issue on dual-stack clusters.
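
An easy way to check is to look at the addresses assigned to the pod; on a dual-stack cluster .status.podIPs lists one address per family (the addresses below are illustrative):

kubectl get pod server -o jsonpath='{.status.podIPs}'
# dual-stack:    [{"ip":"10.42.0.4"},{"ip":"2001:cafe:42::4"}]
# single-stack:  [{"ip":"10.42.0.4"}]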

brandond commented on June 21, 2024

Do you have any errors in the k3s logs? I wonder if something is going on with the ipv6 rule sync?

brandond commented on June 21, 2024

@manuelbuil do you have any ideas on what else to look at here?

microsec-ambrose commented on June 21, 2024

I have a similar issue. Sometimes, when I restart the pods, the network policy does not apply for minutes. Making changes to the network policy or deleting the pod again sometimes gets it working, too.

node-external-ip: "<ext4>,<ext6>"
flannel-iface: eno2

flannel-backend: host-gw
flannel-ipv6-masq: true
tls-san: <domain>
cluster-domain: <domain>
cluster-cidr: "<int4>/13,<int6>/56"
service-cidr: "<int4>/16,<int6>/112"
secrets-encryption: true
kube-controller-manager-arg:
  - "node-cidr-mask-size-ipv4=23"
  - "node-cidr-mask-size-ipv6=64"
etcd-expose-metrics: true
disable: local-storage

The problem becomes more frequent when using Egress as well.
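
For context, by "Egress" I mean policies that include an egress section; a minimal illustrative example (names and ports are hypothetical, not from my cluster):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - ports:
        - port: 53
          protocol: UDP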

Unfortunately, I don't have many details yet, but I'll try to do some investigation.

jfmontanaro commented on June 21, 2024

Do you have any errors in the k3s logs? I wonder if something is going on with the ipv6 rule sync?

Just tested this, going from a working state (pod running and receiving traffic) to a broken state (pod running but inaccessible) via delete / recreate.

After deleting the working pod, I get this output:

May 07 18:23:52 vultr k3s[4635]: I0507 18:23:52.274001    4635 topology_manager.go:215] "Topology Admit Handler" podUID="14693fc2-b378-4c6c-8764-aee596c32cd2" podNamespace="default" podName="server"
May 07 18:23:52 vultr k3s[4635]: E0507 18:23:52.284261    4635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2cd9f56a-f17c-44bf-b886-36a419d919e1" containerName="whoami"
May 07 18:23:52 vultr k3s[4635]: I0507 18:23:52.284496    4635 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd9f56a-f17c-44bf-b886-36a419d919e1" containerName="whoami"
May 07 18:23:52 vultr k3s[4635]: I0507 18:23:52.460776    4635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94kdb\" (UniqueName: \"kubernetes.io/projected/14693fc2-b378-4c6c-8764-aee596c32cd2-kube-api-access-94kdb\") pod \"server\" (UID: \"14693fc2-b378-4c6c-8764-aee596c32cd2\") " pod="default/server"

When I then re-create the pod, I get this:

May 07 18:23:52 vultr k3s[4635]: I0507 18:23:52.274001    4635 topology_manager.go:215] "Topology Admit Handler" podUID="14693fc2-b378-4c6c-8764-aee596c32cd2" podNamespace="default" podName="server"
May 07 18:23:52 vultr k3s[4635]: E0507 18:23:52.284261    4635 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2cd9f56a-f17c-44bf-b886-36a419d919e1" containerName="whoami"
May 07 18:23:52 vultr k3s[4635]: I0507 18:23:52.284496    4635 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd9f56a-f17c-44bf-b886-36a419d919e1" containerName="whoami"
May 07 18:23:52 vultr k3s[4635]: I0507 18:23:52.460776    4635 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94kdb\" (UniqueName: \"kubernetes.io/projected/14693fc2-b378-4c6c-8764-aee596c32cd2-kube-api-access-94kdb\") pod \"server\" (UID: \"14693fc2-b378-4c6c-8764-aee596c32cd2\") " pod="default/server"

Finally, when I try (and fail) to make an HTTP request to the running pod, I sometimes see output like this:

May 07 18:24:19 vultr k3s[4635]: E0507 18:24:19.361668    4635 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:58364->127.0.0.1:10010: write tcp 127.0.0.1:58364->127.0.0.1:10010: write: broken pipe
May 07 18:24:19 vultr k3s[4635]: E0507 18:24:19.364346    4635 upgradeaware.go:439] Error proxying data from backend to client: write tcp 127.0.0.1:10250->127.0.0.1:60376: write: connection reset by peer

However, that last one doesn't happen every time.

In addition, independently of any tests I've been running, I have seen this message come up:

May 07 18:27:18 vultr k3s[4635]: E0507 18:27:18.837279    4635 kubelet_node_status.go:711] "Failed to set some node status fields" err="failed to validate secondaryNodeIP: node IP: \"2001:19f0:5401:2db0:5400:4ff:fee6:63ac\" not found in the host's network interfaces" node="vultr"

But I think that's just because I used a publicly routable IPv6 range that doesn't actually route to the server I'm using.
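
For what it's worth, that mismatch is easy to confirm by comparing the addresses the kubelet registered against what's actually on the host (node name taken from the logs above):

# addresses the kubelet registered for the node
kubectl get node vultr -o jsonpath='{.status.addresses}'
# addresses actually present on the host
ip -6 addr show scope global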

manuelbuil commented on June 21, 2024

Thanks for reporting this; I was able to reproduce it. I suspect this is a bug in the kube-router network policy controller.

Following your steps in a dual-stack environment, the first time I create the server pod, ipset list shows both of the server's IPs, 10.42.1.5 and 2001:cafe:42:1::5:

Name: KUBE-DST-2FAIIK2E4RIPMTGF
Type: hash:ip
Revision: 5
Header: family inet hashsize 1024 maxelem 65536 timeout 0
Size in memory: 280
References: 2
Number of entries: 1
Members:
10.42.1.5 timeout 0

Name: inet6:KUBE-DST-I3PRO5XXEERITJZO
Type: hash:ip
Revision: 5
Header: family inet6 hashsize 1024 maxelem 65536 timeout 0
Size in memory: 304
References: 2
Number of entries: 1
Members:
2001:cafe:42:1::5 timeout 0

After removing the server pod and recreating it, I see only the ipv6 address, 2001:cafe:42:1::6:

Name: KUBE-DST-2FAIIK2E4RIPMTGF
Type: hash:ip
Revision: 5
Header: family inet hashsize 1024 maxelem 65536 timeout 0
Size in memory: 216
References: 2
Number of entries: 0
Members:

Name: inet6:KUBE-DST-I3PRO5XXEERITJZO
Type: hash:ip
Revision: 5
Header: family inet6 hashsize 1024 maxelem 65536 timeout 0
Size in memory: 304
References: 2
Number of entries: 1
Members:
2001:cafe:42:1::6 timeout 0

The ipv4 member list has no IPs. As a consequence, the verdict will always be false when the iptables rule checks that ipset, and ipv4 traffic to the pod is dropped.
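
You can see the dependency directly on the node: the network policy rules match against those ipsets, so once the inet set is empty nothing matches (set names here are the ones from my environment above):

# show which iptables rules reference the now-empty ipv4 set
iptables-save | grep KUBE-DST-2FAIIK2E4RIPMTGF
# confirm the set has no members
ipset list KUBE-DST-2FAIIK2E4RIPMTGF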

Workaround: change the ipFamilyPolicy field in your Service config to PreferDualStack. The ipv6 side is correct, so the wget command will work.
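
Concretely, that means setting something like this on the Service (ipFamilyPolicy is the standard Kubernetes dual-stack field; the name, selector, and port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: server
spec:
  ipFamilyPolicy: PreferDualStack
  selector:
    app: server
  ports:
    - port: 80

With both families assigned, clients can still reach the service over IPv6 even while the ipv4 ipset is stale.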

jfmontanaro commented on June 21, 2024

Hi @manuelbuil, thanks for the workaround! Can confirm it works in my testing environment.

manuelbuil commented on June 21, 2024

It's definitely a bug in upstream kube-router. The problem is that each ipFamily (ipv4 & ipv6) carries an ipset handler object. However, each handler tracks the ipsets of both ipFamilies while only updating its own family's ipsets correctly; the other family's ipsets are left with outdated data. Since the ipv6 handler is always the second one to be "refreshed", it overwrites the ipv4 ipsets with that outdated data. I'm going to collect more information and open a bug issue upstream in kube-router.
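
One way to watch the overwrite happen is to keep both sets on screen while deleting and recreating the pod; right after the second (ipv6) sync, the inet set's members disappear (set names are from my environment above):

watch -n1 'ipset list KUBE-DST-2FAIIK2E4RIPMTGF; ipset list inet6:KUBE-DST-I3PRO5XXEERITJZO'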

manuelbuil commented on June 21, 2024

cloudnativelabs/kube-router#1665

manuelbuil commented on June 21, 2024

The kube-router maintainer created a fix. I tested it, and it fixes the issue :)

jfmontanaro commented on June 21, 2024

Wow, that's great! Thanks for getting this addressed so quickly!

jfmontanaro commented on June 21, 2024

Great, thanks so much for the speedy response!
