This is a k3s Kubernetes cluster playground wrapped in a Vagrant environment. This repo is forked and modified from https://github.com/rgl/k3s-vagrant
NB The vxlan flannel backend seems to be broken in Debian 11.
Configure your hosts file with:
192.168.1.10 s.example.test
192.168.1.50 traefik.example.test
192.168.1.50 kubernetes-dashboard.example.test
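The entries above can be staged in a variable and reviewed before touching /etc/hosts; a minimal sketch (the final sudo tee line is left commented out so nothing is modified by accident):

```shell
# The playground hostnames from this README; adjust the IPs if you change
# the Vagrantfile network settings.
HOSTS_ENTRIES='192.168.1.10 s.example.test
192.168.1.50 traefik.example.test
192.168.1.50 kubernetes-dashboard.example.test'
# Review first, then append (uncomment the second line to apply):
printf '%s\n' "$HOSTS_ENTRIES"
# printf '%s\n' "$HOSTS_ENTRIES" | sudo tee -a /etc/hosts
```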
The original project's advice is to install the base Debian 11 (Bullseye) vagrant box.
Optionally, start the rgl/gitlab-vagrant environment at ../gitlab-vagrant. If you do, this environment will install the gitlab-runner helm chart in the k8s cluster.
However, this particular project relies on the 'bento/ubuntu-22.04' box, so install KVM as described at https://linuxhint.com/install-kvm-ubuntu-22-04/ and also make sure you have a working VirtualBox environment.
Launch the environment:
time vagrant up --no-destroy-on-error --no-tty --provider=virtualbox
NB The server nodes (e.g. s1) are tainted to prevent them from executing non-control-plane workloads. That kind of workload is executed in the agent nodes (e.g. a1).
To access the cluster from the host, run the following at the root of the project folder:
export KUBECONFIG=$PWD/tmp/admin.conf
kubectl cluster-info
kubectl get nodes -o wide
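A hedged variant of the steps above that fails fast when the kubeconfig has not been generated yet (the path is the one this README uses; the error message is illustrative):

```shell
# Point kubectl at the cluster only if provisioning already produced the
# admin kubeconfig in tmp/admin.conf.
KUBECONFIG_PATH="$PWD/tmp/admin.conf"
if [ -f "$KUBECONFIG_PATH" ]; then
  export KUBECONFIG="$KUBECONFIG_PATH"
  kubectl cluster-info
  kubectl get nodes -o wide
else
  echo "tmp/admin.conf not found; has 'vagrant up' finished?" >&2
fi
```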
Access the Traefik Dashboard at:
https://traefik.example.test/dashboard/
Access the Rancher Server at:
https://s.example.test:6443
NB This is a proxy to the k8s API server (which is running on port 6444).
NB You must use the client certificate found inside the tmp/admin.conf, tmp/*.pem, or /etc/rancher/k3s/k3s.yaml (inside the s1 machine) files.
Access the Rancher Server using the client certificate with httpie:
http \
--verify tmp/default-ca-crt.pem \
--cert tmp/default-crt.pem \
--cert-key tmp/default-key.pem \
https://s.example.test:6443
Or with curl:
curl \
--cacert tmp/default-ca-crt.pem \
--cert tmp/default-crt.pem \
--key tmp/default-key.pem \
https://s.example.test:6443
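If you would rather extract a PEM pair straight from the kubeconfig, the base64 fields embedded in it can be decoded with standard tools; a sketch (the output file names tmp/client-crt.pem and tmp/client-key.pem are illustrative, not files the provisioning scripts create):

```shell
# Decode the client certificate/key embedded in tmp/admin.conf into PEM
# files usable with curl --cert/--key (or httpie --cert/--cert-key).
if [ -f tmp/admin.conf ]; then
  grep 'client-certificate-data:' tmp/admin.conf | awk '{print $2}' | base64 -d > tmp/client-crt.pem
  grep 'client-key-data:' tmp/admin.conf | awk '{print $2}' | base64 -d > tmp/client-key.pem
  echo "wrote tmp/client-crt.pem and tmp/client-key.pem"
else
  echo "tmp/admin.conf not found; run 'vagrant up' first" >&2
fi
```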
Access the Kubernetes Dashboard at:
https://kubernetes-dashboard.example.test
Then select Token and use the contents of tmp/admin-token.txt as the token.
You can also launch the kubernetes API server proxy in background:
export KUBECONFIG=$PWD/tmp/admin.conf
kubectl proxy &
And access the kubernetes dashboard at:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
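When you are done, the backgrounded proxy can be stopped with shell job control; a small sketch (assumes the proxy is the most recent background job in the current shell):

```shell
# Stop the backgrounded kubectl proxy; %% refers to the most recent
# background job. If there is none, report that instead of failing.
kill %% 2>/dev/null || echo "no background kubectl proxy to stop"
```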
The K9s console UI dashboard is also installed in the server node. You can access it by running:
vagrant ssh s1
sudo su -l
k9s
- k3s has a custom k8s authenticator module that does user authentication from /var/lib/rancher/k3s/server/cred/passwd.
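That file is a small CSV, one user per line. The parsing sketch below assumes a token, username, user id, groups field layout; that layout is an assumption, so verify it against your own /var/lib/rancher/k3s/server/cred/passwd inside the s1 machine:

```shell
# Parse one sample line in the assumed token,user,uid,groups layout.
# The sample values are made up for illustration.
sample='K10notarealtoken,admin,admin,system:masters'
IFS=, read -r token user uid groups <<EOF
$sample
EOF
echo "user=$user groups=$groups"
```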