
kube-v6

Instructions on how to instantiate a multi-node, IPv6-only Kubernetes cluster using the CNI bridge plugin and Host-local IPAM plugin for developing or exploring IPv6 on Kubernetes.

Also includes instructions on configuring a dual-stack ingress controller on an IPv6-only Kubernetes cluster.

Overview

So you'd like to take Kubernetes IPv6 for a test drive, or perhaps do some Kubernetes IPv6 development? The instructions below describe how to bring up a multi-node, IPv6-only Kubernetes cluster that uses the CNI bridge and host-local IPAM plugins, using kubeadm to stand up the cluster.

For instructional purposes, the steps below assume the topology shown in the following diagram, but various other topologies can be supported (e.g. using bare-metal nodes or different IPv6 addressing schemes) with slight variations in the steps:

(Topology diagram: a NAT64/DNS64 node, the Kube Master, Kube Node 1, and Kube Node 2, each with a cluster-facing eth1 on the fd00::/64 network and an external-facing eth2.)

Quick Start Options

If you would like to get a sense of what IPv6-only support looks like on Kubernetes, here are a couple of quick-start options:

  • Docker run an IPv6-only cluster in a "Kube-in-the-Box" container, using commands described here.
  • Use the automated Vagrant environment. You will need to install Vagrant, and then run ./vagrant-start from the vagrant directory.

FAQs

Why Use the CNI Bridge Plugin? Isn't it intended for single-node clusters?

The Container Networking Interface (CNI) Release v0.6.0 included support for IPv6 operation for the Bridge plugin and Host-local IPAM plugin. It was therefore considered a good reference plugin with which to test IPv6 changes that were being made to Kubernetes. Although the bridge plugin is intended for single-node operation (the bridge on each node is isolated), a multi-node cluster using the bridge plugin can be instantiated using a couple of manual steps:

  • Provide each node with a unique pod address space (e.g. each node gets a unique /64 subnet for pod addresses).
  • Add static routes on each node to other nodes' pod subnets using the target node's host address as a next hop.
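
For a concrete sense of these two steps, here is a minimal sketch using the node and pod subnets from the example topology in this guide (runtime-only routes; the persistent CentOS 7 route files are shown later):

# On kube-node-1 (pod subnet fd00:101::/64), reach kube-node-2's pods via kube-node-2's node address:
sudo ip -6 route add fd00:102::/64 via fd00::102 dev eth1

# On kube-node-2 (pod subnet fd00:102::/64), reach kube-node-1's pods via kube-node-1's node address:
sudo ip -6 route add fd00:101::/64 via fd00::101 dev eth1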

Why run in IPv6-only mode rather than running in dual-stack mode?

The first phase of implementation for IPv6 on Kubernetes (introduced in Kubernetes Version 1.9) will target support for IPv6-only clusters. The main reason for this is that Kubernetes currently only supports/recognizes a single IP address per pod (i.e. no multiple-IP support). So even though the CNI bridge plugin supports dual-stack (as well as support for multiple IPv6 addresses on each pod interface) operation on the pods, Kubernetes will currently only be aware of one IP address per pod.

What is the purpose of NAT64 and DNS64 in the IPv6 Kubernetes cluster topology diagram?

We need to be able to test IPv6-only Kubernetes cluster configurations. However, there are many external services (e.g. DockerHub) that only work with IPv4. In order to interoperate between our IPv6-only cluster and these external IPv4 services, we configure a node outside of the cluster to host a NAT64 server and a DNS64 server. The NAT64 server operates in dual-stack mode, and it serves as a stateful translator between the internal IPv6-only network and the external IPv4 Internet. The DNS64 server synthesizes AAAA records for any IPv4-only host/server outside of the cluster, using a special prefix of 64:ff9b::. Any packets that are forwarded to the NAT64 server with this special prefix are subjected to stateful address translation to an IPv4 address/port.
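
As a quick illustration of the mapping (using 198.51.100.7 purely as an example external IPv4 address), the IPv4 address is embedded in the low-order 32 bits of the synthesized AAAA record:

# 198.51.100.7 -> hex c6 33 64 07 -> synthesized AAAA address 64:ff9b::c633:6407
printf '64:ff9b::%02x%02x:%02x%02x\n' 198 51 100 7

Pods send traffic to that 64:ff9b:: address, the 64:ff9b::/96 static routes described below steer it to the NAT64 server, and the NAT64 server translates it into IPv4 traffic toward 198.51.100.7.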

Should I use global (GUA) or private (ULA) IPv6 addresses on the Kubernetes nodes?

You can use GUA, ULA, or a combination of both for addresses on your Kubernetes nodes. Using GUA addresses (that are routed to your cluster) gives you the flexibility of connecting directly to Kubernetes services from outside the cluster (e.g. by defining Kubernetes services using nodePorts or externalIPs). On the other hand, the ULA addresses that you choose can be convenient and predictable, and that can greatly simplify the addition of static routes between nodes and pods.
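
If you go with ULAs, keep in mind that RFC 4193 expects the 40-bit Global ID to be chosen pseudo-randomly rather than reusing a well-known value (the fd00:: prefixes used throughout this guide are for illustration only). A minimal sketch for generating a compliant /48 prefix:

# Generate a pseudo-random 40-bit Global ID and print the resulting ULA /48 prefix
GLOBAL_ID=$(od -An -N5 -tx1 /dev/urandom | tr -d ' ')
echo "fd${GLOBAL_ID:0:2}:${GLOBAL_ID:2:4}:${GLOBAL_ID:6:4}::/48"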

Preparing the Nodes

Set up node IP addresses

For the example topology shown above, the eth2 addresses would be configured via IPv6 SLAAC (this is optional, and requires a router external to the Kubernetes cluster to provide Router Advertisement messages), and the eth1 addresses would be statically configured with IPv6 Unique Local Addresses (ULAs) as follows:

       Node        IP Address
   -------------   ----------
   NAT64/DNS64     fd00::64
   Kube Master     fd00::100
   Kube Node 1     fd00::101
   Kube Node 2     fd00::102
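
A minimal sketch of the corresponding static address configuration on a CentOS 7 node, assuming eth1 is the cluster-facing interface (shown for the Kube Master; substitute each node's own address):

# /etc/sysconfig/network-scripts/ifcfg-eth1 (IPv6-related entries only)
IPV6INIT=yes
IPV6_AUTOCONF=no
IPV6ADDR=fd00::100/64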

Configure /etc/hosts on each node with the new addresses

NOTE: When configuring nodes with dual-stack addresses on an otherwise IPv6-only Kubernetes cluster, care should be taken to configure the /etc/hosts file on each master/worker node to include only IPv6 addresses for each node. An example /etc/hosts file is shown below. Failure to configure the /etc/hosts file in this way will result in Kubernetes system pods (API server, controller manager, etc.) being assigned IPv4 addresses, making their services unreachable from the IPv6-addressed pods in the cluster.

Example /etc/hosts file:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
fd00::64 kube-nat64-dns64
fd00::100 kube-master
fd00::101 kube-node-1
fd00::102 kube-node-2

Add static routes between nodes, pods, and Kubernetes services

In the list of static routes below, the subnets/addresses used are as follows:

   Subnet/Address          Description
   --------------    ---------------------------
   64:ff9b::/96      Prefix used inside the cluster for packets requiring NAT64 translation
   fd00::101         Kube Node 1
   fd00::102         Kube Node 2
   fd00:101::/64     Kube Node 1's pod subnet
   fd00:102::/64     Kube Node 2's pod subnet
   fd00:1234::/64    Cluster's Service subnet

Static Routes on NAT64/DNS64 Server

Example: CentOS 7, entries in /etc/sysconfig/network-scripts/route6-eth1:

fd00:101::/64 via fd00::101 metric 1024
fd00:102::/64 via fd00::102 metric 1024

Static Routes on Kube Master

Example: CentOS 7, entries in /etc/sysconfig/network-scripts/route6-eth1:

64:ff9b::/96 via fd00::64 metric 1024
fd00:101::/64 via fd00::101 metric 1024
fd00:102::/64 via fd00::102 metric 1024

Static Routes on Kube Node 1

Example: CentOS 7, entries in /etc/sysconfig/network-scripts/route6-eth1:

64:ff9b::/96 via fd00::64 metric 1024
fd00:102::/64 via fd00::102 metric 1024
fd00:1234::/64 via fd00::100 metric 1024

Static Routes on Kube Node 2

Example: CentOS 7, entries in /etc/sysconfig/network-scripts/route6-eth1:

64:ff9b::/96 via fd00::64 metric 1024
fd00:101::/64 via fd00::101 metric 1024
fd00:1234::/64 via fd00::100 metric 1024
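
These route6-eth1 files are read when the interface is brought up. To apply them immediately and confirm the routes are installed, something along these lines should work on a CentOS 7 host using network-scripts:

sudo ifdown eth1 && sudo ifup eth1             # re-read route6-eth1
ip -6 route show | grep -E '64:ff9b|fd00:'     # verify the static routes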

Configure and install NAT64 and DNS64 on the NAT64/DNS64 server

For installing on an Ubuntu host, refer to the NAT64-DNS64-UBUNTU-INSTALL.md file.

For installing on a CentOS 7 host, refer to the NAT64-DNS64-CENTOS7-INSTALL.md file.

If Using VirtualBox VMs as Kubernetes Nodes

If you are using VirtualBox VMs as Kubernetes nodes, please see the VirtualBox-specific considerations listed in VIRTUALBOX_CONSIDERATIONS.md.

Set sysctl settings for IPv6 forwarding and for passing bridged traffic through ip6tables

Set the following sysctl settings on the nodes:

net.ipv6.conf.all.forwarding=1
net.bridge.bridge-nf-call-ip6tables=1

For example, on CentOS 7 hosts:

sudo -i
cat << EOT >> /etc/sysctl.conf
net.ipv6.conf.all.forwarding=1
net.bridge.bridge-nf-call-ip6tables=1
EOT
sudo sysctl -p /etc/sysctl.conf
exit
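
Note that the net.bridge.* sysctl keys only exist once the br_netfilter kernel module is loaded. If 'sysctl -p' reports an unknown key, load and persist the module first, for example:

sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
sudo sysctl -p /etc/sysctl.conf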

Install standard Kubernetes packages

On the Kubernetes master and nodes, install docker, kubelet, kubeadm, kubectl, and kubernetes-cni. Reference: Installing kubeadm

Example: For CentOS 7 based hosts

sudo -i
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
exit

sudo -i
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
exit

The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do.
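
You can confirm this (and, later, confirm that the kubelet settles down once kubeadm has configured it) with:

systemctl status kubelet
journalctl -u kubelet -f      # follow the kubelet logs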

Configure the kube-dns Kubernetes service address in kubelet startup config

When the kubelet systemd service starts up, it needs to know which nameserver address it will configure in the /etc/resolv.conf file of every pod that it starts. Since kube-dns provides DNS service within the cluster, the nameserver address configured in pods needs to be the Kubernetes service address of kube-dns.

By default, when kubeadm is installed, the kubelet service is configured for the default IPv4 service address of 10.96.0.10 via a --cluster-dns setting in the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file. But for a Kubernetes cluster running in IPv6-only mode, the kube-dns service address will be the "::a" address in the Kubernetes service CIDR. For example, for the Kubernetes service CIDR shown in the example topology above, the kube-dns service address will be:

fd00:1234::a
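
The "::a" convention mirrors the IPv4 default of 10.96.0.10: kube-dns gets the tenth address of the service CIDR. A quick way to double-check the address for any service CIDR (a sketch that assumes python3 is available on the host):

python3 -c "import ipaddress; print(ipaddress.IPv6Network('fd00:1234::/110')[10])"
# fd00:1234::a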

TODO: Modify the step below to use 10-extra-args.conf dropin file rather than 10-kubeadm.conf.

To modify kubelet's kube-dns configuration, do the following on all nodes (and your master, if you plan on un-tainting it):

KUBE_DNS_SVC_IPV6=fd00:1234::a
sudo sed -i "s/--cluster-dns=.* /--cluster-dns=$KUBE_DNS_SVC_IPV6 /" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
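
Because this edits a systemd drop-in file, reload systemd and restart the kubelet afterwards for the new --cluster-dns value to take effect, and verify the change:

sudo systemctl daemon-reload
sudo systemctl restart kubelet
grep -- --cluster-dns /etc/systemd/system/kubelet.service.d/10-kubeadm.conf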

Create an IPv6-only CNI bridge network config file on all nodes

On kube-node-1, create a CNI network config file using pod subnet fd00:101::/64:

sudo -i
# Back up any existing CNI network config files to the home dir
mv /etc/cni/net.d/* $HOME
cat <<EOT > /etc/cni/net.d/10-bridge-v6.conf
{
  "cniVersion": "0.3.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cbr0",
  "isDefaultGateway": true,
  "ipMasq": true,
  "hairpinMode": true,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [
        {
          "subnet": "fd00:101::/64",
          "gateway": "fd00:101::1"
        }
      ]
    ]
  }
}
EOT
exit

On kube-node-2, create a CNI network config file using pod subnet fd00:102::/64:

sudo -i
# Back up any existing CNI network config files to the home dir
mv /etc/cni/net.d/* $HOME
cat <<EOT > /etc/cni/net.d/10-bridge-v6.conf
{
  "cniVersion": "0.3.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cbr0",
  "isDefaultGateway": true,
  "ipMasq": true,
  "hairpinMode": true,
  "ipam": {
    "type": "host-local",
    "ranges": [
      [
        {
          "subnet": "fd00:102::/64",
          "gateway": "fd00:102::1"
        }
      ]
    ]
  }
}
EOT
exit
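
On each node, it is worth sanity-checking that the bridge config is the only file left in the CNI config directory and that it parses as valid JSON (assuming python3 is available):

ls /etc/cni/net.d/
python3 -m json.tool /etc/cni/net.d/10-bridge-v6.conf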

Bringing up the Kubernetes Cluster with kubeadm

Create IPv6-enabled kubeadm config file on master node

On the master node, create a kubeadm config file as follows:

cat << EOT > kubeadm_v6.cfg
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: fd00::100
networking:
  serviceSubnet: fd00:1234::/110
nodeName: kube-master
EOT
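
The kubeadm.k8s.io/v1alpha1 API shown above matches the kubeadm v1.9 releases this guide was written against; newer kubeadm releases have removed that API version (see the issue reports at the end of this page). A rough, untested equivalent for the v1beta2 config API is sketched below; check 'kubeadm config print init-defaults' on your kubeadm version for the exact field names:

cat << EOT > kubeadm_v6.cfg
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "fd00::100"
nodeRegistration:
  name: kube-master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: fd00:1234::/110
EOT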

Run kubeadm init on master node

On the master node, run:

sudo -i
kubeadm init --config=kubeadm_v6.cfg

When this command completes (it may take a few minutes), you will see something like the following in the command output:

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run these commands as shown. You can run these as either root or as a regular user.

You will also see something like the following in the 'kubeadm init ...' command output:

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token <TOKEN-REDACTED> [fd00::100]:6443 --discovery-token-ca-cert-hash sha256:<HASH-REDACTED>

Take note of this command, as you will need it for the next step.
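
If you lose this output, or the bootstrap token expires (tokens are valid for 24 hours by default), a fresh join command can be generated on the master with:

kubeadm token create --print-join-command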

Run kubeadm join on all nodes

To join each of your nodes to the cluster, run the 'kubeadm join ...' command that you saw in the output of 'kubeadm init ...' (from the previous step) on each node. This command should complete almost immediately, and you should see the following at the end of the command output:

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

Run 'kubectl get nodes' on the master

Run 'kubectl get nodes' on the master. You should see something like the following:

[root@kube-master ~]# kubectl get nodes
NAME            STATUS    ROLES     AGE       VERSION
kube-master     Ready     master    11h       v1.9.0
kube-node-1     Ready     <none>    11h       v1.9.0
kube-node-2     Ready     <none>    11h       v1.9.0
[root@kube-master ~]# 

Note: If for some reason you don't see the nodes showing up in the nodes list, try restarting the kubelet service on the affected node, e.g.:

systemctl restart kubelet

and rerun 'kubectl get nodes' on the master.

Run 'kubectl get pods ...' on the master

Run the following to check that all Kubernetes system pods are alive:

[root@kube-master ~]# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE       IP                    NODE
kube-system   etcd-kube-master                      1/1       Running   0          36s       2001:<**REDACTED**>   kube-master
kube-system   kube-apiserver-kube-master            1/1       Running   0          36s       2001:<**REDACTED**>   kube-master
kube-system   kube-controller-manager-kube-master   1/1       Running   0          35s       2001:<**REDACTED**>   kube-master
kube-system   kube-dns-5b8ff6bff8-rlfp2             3/3       Running   0          5m        fd00:102::2           kube-node-2
kube-system   kube-proxy-prd5s                      1/1       Running   0          5m        2001:<**REDACTED**>   kube-master
kube-system   kube-proxy-tqfcf                      1/1       Running   0          38s       2001:<**REDACTED**>   kube-node-2
kube-system   kube-proxy-xsqcx                      1/1       Running   0          41s       2001:<**REDACTED**>   kube-node-1
kube-system   kube-scheduler-kube-master            1/1       Running   0          35s       2001:<**REDACTED**>   kube-master
[root@kube-master ~]# 

Additional checking of the cluster

Test Connectivity with the Kubernetes API Server

[root@kube-master nginx_v6]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   fd00:1234::1   <none>        443/TCP   10h
[root@kube-master nginx_v6]# curl -g [fd00:1234::1]:443 | od -c -a
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    14    0    14    0     0   1926      0 --:--:-- --:--:-- --:--:--  3500
0000000 025 003 001  \0 002 002  \n
        nak etx soh nul stx stx  nl
0000007
[root@kube-master nginx_v6]# 

Manually Test Services Using an IPv6-Enabled, nginx-Based Replicated Service

To manually test IPv6-based services, follow the instructions in nginx_v6/README.md

Running IPv6 e2e Test Suite to Confirm IPv6 Networking Conformance

To run the IPv6 End-to-End test suite to verify that your IPv6 Kubernetes cluster meets IPv6 Networking conformance, follow the kube-v6-test guide.

Resetting and Re-Running kubeadm init/join

If you ever need to restart and re-run kubeadm init/join, follow these steps.

Reset the nodes

On each node, run the following as root:

# Make backup of CNI config
mkdir -p $HOME/temp
cp /etc/cni/net.d/10-bridge-v6.conf $HOME/temp
# Run kubeadm reset
kubeadm reset
# Clean up any allocated IPs
rm -f /var/lib/cni/networks/mynet/*
# Restore CNI config
cp $HOME/temp/10-bridge-v6.conf /etc/cni/net.d

Reset the master

There's no CNI config to save/restore on the master, so run:

kubeadm reset

Re-run 'kubeadm init ...' and 'kubeadm join ...' as root

Installing a Dual-Stack Ingress Controller on an IPv6-Only Kubernetes Cluster

If an otherwise IPv6-only Kubernetes cluster is configured with dual-stack (IPv4 and IPv6) public addresses on worker nodes, then it is possible to install a dual-stack ingress controller on the cluster. In this way, dual-stack access from both IPv4 and IPv6 external clients can be provided for services that are hosted in an IPv6-only (internally speaking) Kubernetes cluster. The method for doing this is described in dual-stack-ingress/README.md.

kube-v6's Issues

Cannot ping ipv6 from remote container

I have a setup running Calico with IPv6 enabled on Kubernetes v1.9.5, running docker 1.13.1-cs9. I'm not able to ping a remote container using its IPv6 address, although I can ping the IPv6 address locally. Do you know what needs to be resolved to be able to ping the remote container?

What is the best way to verify ipv6 traffic is coming into the container?


# kubectl get po -n kube-system
NAME                                                                  READY     STATUS    RESTARTS   AGE
calico-etcd-jg2vr                                                     1/1       Running   0          5h
calico-kube-controllers-866bf5646c-pplzf                              1/1       Running   0          5h
calico-node-kz2sk                                                     2/2       Running   0          5h
etcd-devtricorder0A-master-01                                         1/1       Running   0          5h
kube-apiserver-devtricorder0A-master-01                               1/1       Running   0          5h
kube-controller-manager-devtricorder0A-master-01                      1/1       Running   0          5h
kube-dns-6f4fd4bdf-zdwkz                                              3/3       Running   0          5h
kube-proxy-q9xg5                                                      1/1       Running   0          5h
kube-scheduler-devtricorder0A-master-01                               1/1       Running   0          5h
kubernetes-dashboard-68d7c968ff-nj2m7                                 1/1       Running   0          5h

From Container B:
Pinging remote container

# ping6 fe80::e471:21ff:fef0:b88d
PING fe80::e471:21ff:fef0:b88d (fe80::e471:21ff:fef0:b88d): 56 data bytes
64 bytes from fe80::482b:91ff:fe64:de2b%eth0: Destination unreachable: Address unreachable
64 bytes from fe80::482b:91ff:fe64:de2b%eth0: Destination unreachable: Address unreachable
64 bytes from fe80::482b:91ff:fe64:de2b%eth0: Destination unreachable: Address unreachable
64 bytes from fe80::482b:91ff:fe64:de2b%eth0: Destination unreachable: Address unreachable
64 bytes from fe80::482b:91ff:fe64:de2b%eth0: Destination unreachable: Address unreachable
64 bytes from fe80::482b:91ff:fe64:de2b%eth0: Destination unreachable: Address unreachable

Pinging from locally

# ping6 fe80::482b:91ff:fe64:de2b
PING fe80::482b:91ff:fe64:de2b (fe80::482b:91ff:fe64:de2b): 56 data bytes
64 bytes from fe80::482b:91ff:fe64:de2b%eth0: icmp_seq=0 ttl=64 time=0.057 ms
64 bytes from fe80::482b:91ff:fe64:de2b%eth0: icmp_seq=1 ttl=64 time=0.060 ms
64 bytes from fe80::482b:91ff:fe64:de2b%eth0: icmp_seq=2 ttl=64 time=0.056 ms
64 bytes from fe80::482b:91ff:fe64:de2b%eth0: icmp_seq=3 ttl=64 time=0.059 ms
64 bytes from fe80::482b:91ff:fe64:de2b%eth0: icmp_seq=4 ttl=64 time=0.068 ms
64 bytes from fe80::482b:91ff:fe64:de2b%eth0: icmp_seq=5 ttl=64 time=0.053 ms

Container A:

# ip -6 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 state UP
    inet6 fe80::e471:21ff:fef0:b88d/64 scope link
       valid_lft forever preferred_lft forever

# ip -6 route show
fe80::/64 dev eth0  proto kernel  metric 256

Container B:

# ip -6 addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN qlen 1000
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 state UP
    inet6 fe80::482b:91ff:fe64:de2b/64 scope link
       valid_lft forever preferred_lft forever

# ip -6 route show
fe80::/64 dev eth0  proto kernel  metric 256

More information needs to be added for dual stack nginx README.md

Currently, the readme for nginx dual stack doesn't explicitly describe how dual stack works with the nginx ingress controller or what the complete flow is. The reader can see that all of the configuration is over IPv6, yet IPv4 also works. More details on the flow would be very useful.

ULA examples aren't compliant with ULA RFC 4193

Hi,

The IPv6 Unique Local Unicast Address (ULA) prefixes in this document aren't good ones to use as examples, as they don't seem to comply with the ULA RFC 4193 Global ID requirements:

"The allocation of Global IDs is pseudo-random [RANDOM]. They MUST
NOT be assigned sequentially or with well-known numbers. This is to
ensure that there is not any relationship between allocations and to
help clarify that these prefixes are not intended to be routed
globally. Specifically, these prefixes are not designed to
aggregate."

While they may be examples, there should be at least a comment that they are examples and that ULA prefixes should have globally unique Global ID values.

The examples imply that ULAs are effectively just IPv6 versions of IPv4's RFC 1918 addresses. That's not the case; the IPv6 equivalent of RFC 1918 was the "Site-Local" addresses, and they had problems, which is why the IETF IPv6 working group replaced them with ULAs.

As treating ULAs as exact equivalents of IPv4's RFC 1918 addresses looks to be a common issue, I recently did a presentation which describes the problems with Site-Locals and how ULAs solve those problems:

"Getting IPv6 Private Addressing Right"
https://www.slideshare.net/markzzzsmith/ausnog-2019-getting-ipv6-private-addressing-right

If possible, you may consider redacting the Global ID part of your ULA example prefixes, via black block characters, or perhaps replacing digits with "X", "Y" or "Z", to ensure people don't copy and paste your example ULAs, violating the global uniqueness requirement.

Regards,
Mark.

[Documentation] Multiple Subnets

Do all cluster nodes need to be within the same IPv6 subnet, or is it fine if each node has its own prefix, neither consecutive nor within the same global prefix (different data centers/locations/...)?

Edit: And do they need to share the same ULA prefix?

Installation failed while using vagrant

Background:
Installation on Windows 10, laptop, HP, core i7, 16 GB memory
VirtualBox version 6.0.6
Vagrant version 2.2.4
Date: 05/13/2019

Problem:
While installing via 'vagrant up', the installation fails while provisioning the NAT64 VM because of a CNI version mismatch. The relevant log output is attached here.
deployment_logs.zip

k86-nat64: Fetched 110 kB in 1s (87.8 kB/s)

k86-nat64: Reading package lists...

k86-nat64: + echo 'Installing Kubernetes Components...'
k86-nat64: Installing Kubernetes Components...
k86-nat64: + sudo -E apt-get install -qy --allow-downgrades kubelet=1.10.5-00 kubectl=1.10.5-00 kubeadm=1.10.5-00
k86-nat64: Reading package lists...
k86-nat64: Building dependency tree...
k86-nat64: Reading state information...
k86-nat64: Some packages could not be installed. This may mean that you have
k86-nat64: requested an impossible situation or if you are using the unstable
k86-nat64: distribution that some required packages have not yet been created
k86-nat64: or been moved out of Incoming.
k86-nat64: The following information may help to resolve the situation:
k86-nat64: The following packages have unmet dependencies:
k86-nat64:  kubelet : Depends: kubernetes-cni (= 0.6.0) but 0.7.5-00 is to be installed
k86-nat64: E
k86-nat64: :
k86-nat64: Unable to correct problems, you have held broken packages.

The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

provision_nat64.sh causes deployment process to fail

What happened?

While deploying the IPv6 cluster using ./vagrant-start, the Jool installation fails with this error:

    k86-nat64: + sudo chown -R vagrant:vagrant /home/vagrant/Jool
    k86-nat64: + cd /home/vagrant/Jool/usr
    k86-nat64: /tmp/vagrant-shell: line 33: cd: /home/vagrant/Jool/usr: No such file or directory
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

Needs updating for latest version

It seems that a number of aspects of this guide need updating and no longer work, e.g.:

https://github.com/leblancd/kube-v6#configure-the-kube-dns-kubernetes-service-address-in-kubelet-startup-config

My /etc/systemd/system/kubelet.service.d/10-kubeadm.conf doesn't have the referenced --cluster-dns setting; apparently it is deprecated. The docs say: "DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information."

However I am a bit stuck, because the config file (I assume /var/lib/kubelet/config.yaml on my system) hasn't been built yet.

https://github.com/leblancd/kube-v6#create-ipv6-enabled-kubeadm-config-file-on-master-node

My kubeadm doesn't like the sample config:

root@panda01:~# kubeadm init --config=kubeadm_v6.cfg
your configuration file uses an old API spec: "kubeadm.k8s.io/v1alpha1". Please use kubeadm v1.11 instead and run 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.

I tried:

kubeadm config migrate --old-config kubeadm_v6.cfg --new-config kubeadm_v6_new.cfg

but this gives me an empty file. I also tried updating the version reference, but it doesn't like that either:

root@panda01:~# kubeadm init --config=kubeadm_v6.cfg
W0602 18:26:28.788834    4192 strict.go:47] unknown configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1.11", Kind:"MasterConfiguration"} for scheme definitions in "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/scheme/scheme.go:31" and "k8s.io/kubernetes/cmd/kubeadm/app/componentconfigs/scheme.go:28"
[config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1.11, Kind=MasterConfiguration
no InitConfiguration or ClusterConfiguration kind was found in the YAML file

Would appreciate advice on the correct procedures.

Thanks

Jool Installation fails via vagrant-start script

Background:
Installation on Windows 10, laptop, HP, core i7, 16 GB memory
VirtualBox version 6.0.6
Vagrant version 2.2.4
Date: 05/13/2019

Problem:
The automated installation via the vagrant-start script fails when it is installing Jool: it fails to find the directory and exits. The complete logs are attached in the zip file.

k86-nat64: DKMS: install completed.

k86-nat64: + echo 'Compiling and installing Jool'\''s user binaries'

k86-nat64: Compiling and installing Jool's user binaries

k86-nat64: + sudo chown -R vagrant:vagrant /home/vagrant/Jool

k86-nat64: + cd /home/vagrant/Jool/usr

k86-nat64: /tmp/vagrant-shell: line 31: cd: /home/vagrant/Jool/usr: No such file or directory

The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.

jool_failure_logs.zip

Pods can't get IPv6

Hi,

Thanks a lot for that great explanation. I have now managed to start the cluster, but when I tried to create nginx pods via your exact steps, I got an error during the creation of the pod:

  Normal   Scheduled               23s               default-scheduler     Successfully assigned nginx-controller-qjdq8 to kube-node-2
  Normal   SuccessfulMountVolume   22s               kubelet, kube-node-2  MountVolume.SetUp succeeded for volume "default-token-6zs9q"
  Warning  FailedCreatePodSandBox  21s               kubelet, kube-node-2  Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-controller-qjdq8_default" network: failed to add IP addr {Version:6 Interface:0xc42000e2a0 Address:{IP:2a0a:e5c0:2:9:f252::5b9 Mask:ffffffffffffffffffff000000000000} Gateway:2a0a:e5c0:2:9:f252::1} to "eth0": permission denied
  Warning  FailedCreatePodSandBox  19s               kubelet, kube-node-2  Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-controller-qjdq8_default" network: failed to add IP addr {Version:6 Interface:0xc42000f1a0 Address:{IP:2a0a:e5c0:2:9:f252::5bb Mask:ffffffffffffffffffff000000000000} Gateway:2a0a:e5c0:2:9:f252::1} to "eth0": permission denied
  Warning  FailedCreatePodSandBox  16s               kubelet, kube-node-2  Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-controller-qjdq8_default" network: failed to add IP addr {Version:6 Interface:0xc42000efd0 Address:{IP:2a0a:e5c0:2:9:f252::5bd Mask:ffffffffffffffffffff000000000000} Gateway:2a0a:e5c0:2:9:f252::1} to "eth0": permission denied
  Warning  FailedCreatePodSandBox  14s               kubelet, kube-node-2  Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-controller-qjdq8_default" network: failed to add IP addr {Version:6 Interface:0xc4200ac240 Address:{IP:2a0a:e5c0:2:9:f252::5bf Mask:ffffffffffffffffffff000000000000} Gateway:2a0a:e5c0:2:9:f252::1} to "eth0": permission denied
  Warning  FailedCreatePodSandBox  11s               kubelet, kube-node-2  Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-controller-qjdq8_default" network: failed to add IP addr {Version:6 Interface:0xc42000edd0 Address:{IP:2a0a:e5c0:2:9:f252::5c1 Mask:ffffffffffffffffffff000000000000} Gateway:2a0a:e5c0:2:9:f252::1} to "eth0": permission denied
  Warning  FailedCreatePodSandBox  9s                kubelet, kube-node-2  Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-controller-qjdq8_default" network: failed to add IP addr {Version:6 Interface:0xc42000efd0 Address:{IP:2a0a:e5c0:2:9:f252::5c3 Mask:ffffffffffffffffffff000000000000} Gateway:2a0a:e5c0:2:9:f252::1} to "eth0": permission denied
  Warning  FailedCreatePodSandBox  7s                kubelet, kube-node-2  Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-controller-qjdq8_default" network: failed to add IP addr {Version:6 Interface:0xc42000efd0 Address:{IP:2a0a:e5c0:2:9:f252::5c6 Mask:ffffffffffffffffffff000000000000} Gateway:2a0a:e5c0:2:9:f252::1} to "eth0": permission denied
  Warning  FailedCreatePodSandBox  5s                kubelet, kube-node-2  Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-controller-qjdq8_default" network: failed to add IP addr {Version:6 Interface:0xc4200ac240 Address:{IP:2a0a:e5c0:2:9:f252::5c8 Mask:ffffffffffffffffffff000000000000} Gateway:2a0a:e5c0:2:9:f252::1} to "eth0": permission denied
  Warning  FailedCreatePodSandBox  3s                kubelet, kube-node-2  Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-controller-qjdq8_default" network: failed to add IP addr {Version:6 Interface:0xc42000f1a0 Address:{IP:2a0a:e5c0:2:9:f252::5c9 Mask:ffffffffffffffffffff000000000000} Gateway:2a0a:e5c0:2:9:f252::1} to "eth0": permission denied
  Normal   SandboxChanged          2s (x9 over 20s)  kubelet, kube-node-2  Pod sandbox changed, it will be killed and re-created.
  Warning  FailedCreatePodSandBox  0s                kubelet, kube-node-2  (combined from similar events): Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "nginx-controller-qjdq8_default" network: failed to add IP addr {Version:6 Interface:0xc420098630 Address:{IP:2a0a:e5c0:2:9:f252::5cc Mask:ffffffffffffffffffff000000000000} Gateway:2a0a:e5c0:2:9:f252::1} to "eth0": permission denied

The thing is, there is no eth0 on the machine at all.
Appreciate your support.
Thanks

Update kubeadm.conf

Now that kubeadm supports componentconfig, setting the proxy bind address through kubeadm.conf is unnecessary. The README should be updated accordingly.
