
mmumshad-kubernetes-the-hard-way's Introduction

Certified Kubernetes Administrators Course

This repository holds the supporting material for the Certified Kubernetes Administrators Course. There are two major sections.

Kubernetes The Hard Way On VirtualBox

This tutorial walks you through setting up Kubernetes the hard way on a local machine using VirtualBox. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you, check out Google Kubernetes Engine or the Getting Started Guides.

Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.

This tutorial is a modified version of the original developed by Kelsey Hightower. While the original uses GCP as the platform to deploy Kubernetes, we use VirtualBox and Vagrant to deploy a cluster on a local machine. If you prefer the cloud version, refer to the original here.

Another difference is that we use Docker instead of containerd. There are a few other differences from the original, and they are documented here.

The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!

Target Audience

The target audience for this tutorial is someone planning to support a production Kubernetes cluster who wants to understand how everything fits together.

Cluster Details

Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.

Labs

mmumshad-kubernetes-the-hard-way's People

Contributors

aberoham, adriaandejonge, alan01252, amouat, andrewpsp, dannykansas, danquah, dpritchett, dy-dx, elsonrodriguez, font, gburiola, gopi-g-dev, joeint, jomagam, justinsb, kelseyhightower, koep, ksingh7, lfaoro, lisa, marcelom, markvincze, mblair, mercer, michaelmcclanahan, mmumshad, oppegard, senax, srikanth787


mmumshad-kubernetes-the-hard-way's Issues

Kubelet - mountpoint for cpu not found

I'm facing an issue starting the kubelet service on worker-1. The journal logs say mountpoint for cpu not found. Could you help me solve this?

Docker version: 20.10.22 (containerd: 1.6.14, runc: 1.1.4)

I have tried:

  1. Reinstalling Docker
  2. Creating the mount point given in this comment: GitHub Comment Link
  3. Changing the container runtime by following this guide: CRI Containerd

Output of sudo cgroupfs-mount on worker-1:

mount: /sys/fs/cgroup/cpuset: cgroup already mounted or mount point busy.
mount: /sys/fs/cgroup/cpu: cgroup already mounted or mount point busy.
mount: /sys/fs/cgroup/blkio: cgroup already mounted on /sys/fs/cgroup/cpuacct.
mount: /sys/fs/cgroup/memory: cgroup already mounted on /sys/fs/cgroup/cpuacct.
mount: /sys/fs/cgroup/hugetlb: cgroup already mounted on /sys/fs/cgroup/cpuacct.
mount: /sys/fs/cgroup/pids: cgroup already mounted on /sys/fs/cgroup/cpuacct.
mount: /sys/fs/cgroup/rdma: cgroup already mounted on /sys/fs/cgroup/cpuacct.
mount: /sys/fs/cgroup/misc: cgroup already mounted on /sys/fs/cgroup/cpuacct.

Journal log:
Jan 05 21:11:32 worker-1 kubelet[12510]: F0105 21:11:32.785200 12510 server.go:261] failed to run Kubelet: mountpoint for cpu not found
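
A likely culprit, though not confirmed in the issue, is a cgroup driver mismatch on a host that boots with cgroup v2: kubelet's default cgroupfs driver cannot find the v1 cpu hierarchy there. A minimal check-and-fix sketch, assuming Docker as the runtime and the kubelet config path used in this tutorial:

# Check which cgroup filesystem the host runs ("cgroup2fs" means cgroup v2)
stat -fc %T /sys/fs/cgroup/

# If on cgroup v2, align Docker and kubelet on the systemd cgroup driver
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Set "cgroupDriver: systemd" in /var/lib/kubelet/kubelet-config.yaml, then:
sudo systemctl restart docker
sudo systemctl restart kubelet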

Unable to set up cluster on Ubuntu 20

There was an error while executing VBoxManage, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["hostonlyif", "ipconfig", "vboxnet4", "--ip", "192.168.5.1", "--netmask", "255.255.255.0"]

Stderr: VBoxManage: error: Code E_ACCESSDENIED (0x80070005) - Access denied (extended info not available)
VBoxManage: error: Context: "EnableStaticIPConfig(Bstr(pszIp).raw(), Bstr(pszNetmask).raw())" at line 242 of file VBoxManageHostonly.cpp
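
One known cause of E_ACCESSDENIED here: starting with VirtualBox 6.1.28, host-only networks on Linux are restricted to the 192.168.56.0/21 range, so the 192.168.5.0/24 range used by this tutorial's Vagrantfile must be whitelisted. A sketch, assuming a Linux host:

# Allow the 192.168.5.0/24 range alongside the default
sudo mkdir -p /etc/vbox
echo "* 192.168.5.0/24 192.168.56.0/21" | sudo tee /etc/vbox/networks.conf
vagrant up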

Can I use ingress controllers to configure the load balancer provisioned in Section 08?

We have seen on cloud platforms that installing an ingress controller automatically provisions a load balancer, and the ingress rules then configure that load balancer.

Here, on bare-metal Kubernetes, we have already launched a load balancer: https://github.com/ddometita/mmumshad-kubernetes-the-hard-way/blob/358fd23331f139031e87a69271e3820873b27d99/docs/08-bootstrapping-kubernetes-controllers.md#the-kubernetes-frontend-load-balancer

Can we use ingress controllers like Kong, HAProxy, or Nginx to configure this existing load balancer via ingress rules?
If not, what would be the right approach?
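
Not directly: the Section 08 HAProxy fronts the kube-apiserver on port 6443, and ingress rules only configure an ingress controller running inside the cluster, not an external HAProxy. A common bare-metal pattern is to expose the ingress controller via a NodePort service and add a second HAProxy frontend for application traffic. A hypothetical sketch (the NodePort 30080 and the worker IPs are assumptions for illustration, not values from the tutorial):

# Hypothetical addition to haproxy.cfg on the loadbalancer VM:
# forward HTTP traffic to an ingress controller exposed on NodePort 30080
frontend http_ingress
    mode tcp
    bind 192.168.5.30:80
    default_backend ingress_nodes

backend ingress_nodes
    mode tcp
    balance roundrobin
    server worker-1 192.168.5.21:30080 check
    server worker-2 192.168.5.22:30080 check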

Does not work on macOS Monterey

When trying to run vagrant up:

There was an error while executing VBoxManage, a CLI used by Vagrant
for controlling VirtualBox. The command and stderr is shown below.

Command: ["hostonlyif", "create"]

Stderr: 0%...
Progress state: NS_ERROR_FAILURE
VBoxManage: error: Failed to create the host-only adapter
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterfaceWrap, interface IHostNetworkInterface
VBoxManage: error: Context: "RTEXITCODE handleCreate(HandlerArg *)" at line 95 of file VBoxManageHostonly.cpp
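
A missing /dev/vboxnetctl usually means the VirtualBox kernel extensions are not loaded, often because macOS blocked them. One common remedy, assuming a standard VirtualBox install, is to approve Oracle under System Preferences > Security & Privacy > General and then reload the extensions:

# Reload the VirtualBox kernel extensions (this script ships with VirtualBox on macOS)
sudo "/Library/Application Support/VirtualBox/LaunchDaemons/VirtualBoxStartup.sh" restart
vagrant up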

Worker node 2 is not working

I followed the steps for setting up worker node 2, but I can't see the CSR when I run kubectl get csr on the master.

I have gone through the steps three times now:
https://github.com/ddometita/mmumshad-kubernetes-the-hard-way/blob/master/docs/10-tls-bootstrapping-kubernetes-workers.md

What could I be missing?

vagrant@master-1:~$ kubectl get csr
No resources found.

vagrant@worker-2:~$ sudo service kubelet status
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2021-09-25 08:13:16 UTC; 34s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 23256 (kubelet)
    Tasks: 7 (limit: 546)
   CGroup: /system.slice/kubelet.service
           └─23256 /usr/local/bin/kubelet --bootstrap-kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig --config=/var/lib/kubelet/kubelet-config.yaml --image-

Sep 25 08:13:30 worker-2 kubelet[23256]: I0925 08:13:30.384428   23256 bootstrap.go:239] Failed to connect to apiserver: the server has asked for the client 
Sep 25 08:13:32 worker-2 kubelet[23256]: I0925 08:13:32.755796   23256 bootstrap.go:239] Failed to connect to apiserver: the server has asked for the client 
Sep 25 08:13:34 worker-2 kubelet[23256]: I0925 08:13:34.879264   23256 bootstrap.go:239] Failed to connect to apiserver: the server has asked for the client 
Sep 25 08:13:37 worker-2 kubelet[23256]: I0925 08:13:37.269063   23256 bootstrap.go:239] Failed to connect to apiserver: the server has asked for the client 
Sep 25 08:13:39 worker-2 kubelet[23256]: I0925 08:13:39.292983   23256 bootstrap.go:239] Failed to connect to apiserver: the server has asked for the client 
Sep 25 08:13:41 worker-2 kubelet[23256]: I0925 08:13:41.468494   23256 bootstrap.go:239] Failed to connect to apiserver: the server has asked for the client 
Sep 25 08:13:43 worker-2 kubelet[23256]: I0925 08:13:43.684493   23256 bootstrap.go:239] Failed to connect to apiserver: the server has asked for the client 
Sep 25 08:13:45 worker-2 kubelet[23256]: I0925 08:13:45.974190   23256 bootstrap.go:239] Failed to connect to apiserver: the server has asked for the client 
Sep 25 08:13:48 worker-2 kubelet[23256]: I0925 08:13:48.260567   23256 bootstrap.go:239] Failed to connect to apiserver: the server has asked for the client 
Sep 25 08:13:50 worker-2 kubelet[23256]: I0925 08:13:50.401201   23256 bootstrap.go:239] Failed to connect to apiserver: the server has asked for the client 
cat bootstrap-token-07401b.yaml
apiVersion: v1
kind: Secret
metadata:
  # Name MUST be of form "bootstrap-token-<token id>"
  name: bootstrap-token-07401b
  namespace: kube-system

# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
  # Human readable description. Optional.
  description: "The default bootstrap token generated by 'kubeadm init'."

  # Token ID and secret. Required.
  token-id: 07401b
  token-secret: f395accd246ae52d

  # Expiration. Optional.
  expiration: 2022-03-10T03:22:11Z

  # Allowed usages.
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"

  # Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
  auth-extra-groups: system:bootstrappers:worker
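
The repeating kubelet message suggests the API server rejected the bootstrap credentials, so no CSR is ever created. A few checks worth running, sketched against the paths used in this tutorial's lab 10:

# On master-1: confirm the token secret exists and its expiration is in the future
kubectl -n kube-system get secret bootstrap-token-07401b -o yaml

# On worker-2: the token in the bootstrap kubeconfig must be
# <token-id>.<token-secret>, i.e. 07401b.f395accd246ae52d for the secret above
sudo grep token /var/lib/kubelet/bootstrap-kubeconfig

# On master-1: confirm the bootstrappers group is allowed to create CSRs
kubectl get clusterrolebindings -o wide | grep bootstrap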

Unable to start HAProxy service

After creating the HAProxy configuration file and trying to restart HAProxy, I get this error:

# sudo journalctl -u haproxy.service --since today --no-pager
Jan 29 00:00:17 master-1 haproxy[16567]: [ALERT] 028/000017 (16567) : Starting frontend kubernetes: cannot bind socket [192.168.5.30:6443]
Jan 29 00:00:17 master-1 systemd[1]: haproxy.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 00:00:17 master-1 systemd[1]: haproxy.service: Failed with result 'exit-code'.
Jan 29 00:00:17 master-1 systemd[1]: Failed to start HAProxy Load Balancer.
Jan 29 00:00:17 master-1 systemd[1]: haproxy.service: Service hold-off time over, scheduling restart.
Jan 29 00:00:17 master-1 systemd[1]: haproxy.service: Scheduled restart job, restart counter is at 5.
Jan 29 00:00:17 master-1 systemd[1]: Stopped HAProxy Load Balancer.
Jan 29 00:00:17 master-1 systemd[1]: haproxy.service: Start request repeated too quickly.
Jan 29 00:00:17 master-1 systemd[1]: haproxy.service: Failed with result 'exit-code'.
Jan 29 00:00:17 master-1 systemd[1]: Failed to start HAProxy Load Balancer.
$ journalctl -xe
Jan 28 21:40:30 master-1 sshd[2007]: Received disconnect from 192.168.5.11 port 58580:11: disconnected by user
Jan 28 21:40:30 master-1 sshd[2007]: Disconnected from user vagrant 192.168.5.11 port 58580
Jan 28 21:44:27 master-1 sshd[2109]: Received disconnect from 192.168.5.11 port 58584:11: disconnected by user
Jan 28 21:44:27 master-1 sshd[2109]: Disconnected from user vagrant 192.168.5.11 port 58584

I tried to troubleshoot following this guide. However, I am still getting the same error.

I am installing HAProxy on both the master-1 and master-2 nodes.

Please help!
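
The key line is cannot bind socket [192.168.5.30:6443]: HAProxy is being asked to bind an IP that is not assigned to the machine it is starting on. In this tutorial, 192.168.5.30 belongs to the dedicated loadbalancer VM, so HAProxy should run there rather than on master-1 and master-2. A quick check, plus a sysctl workaround only for the case where you deliberately bind a virtual IP the host does not own (e.g. with keepalived):

# Is 192.168.5.30 actually assigned to this host?
ip -4 addr show | grep 192.168.5

# Only if intentionally binding a non-local VIP:
sudo sysctl -w net.ipv4.ip_nonlocal_bind=1
sudo systemctl restart haproxy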
