
Song Ann's Snippets 🔗 http://sachua.github.io



Inconsistent Networking Behaviour between Tanzu Kubernetes Clusters

When working with vSphere with Tanzu on NSX-T, cluster-to-cluster communication between Tanzu Kubernetes Clusters on the same vSphere Workload Management Cluster was observed to behave differently from every other combination of communication.

This means that we cannot use the Tanzu Supervisor Namespace Egress IP as the source IP when we create Gateway Firewall rules to allow cluster-to-cluster communication.

This is due to the default NAT rules that NSX-T creates when a new Supervisor Namespace is created:

NSX-T NAT Rules

A quick look at the 3 NAT rules that are set:

  1. No SNAT for traffic going from Node IPs (10.222.0.0/16) to Ingress IPs (10.223.112.0/20)

    • No SNAT is applied to any traffic going from Cluster A to Cluster B
  2. No SNAT for traffic going from Node IPs (10.222.0.0/16) to Node IPs (10.222.0.0/16)

    • No SNAT is applied to any traffic going from Node A to Node B
  3. SNAT for all other traffic, translated to the Tanzu Supervisor Namespace Egress IP (10.223.136.18)

    • SNAT is applied to any other traffic, such as from Cluster A to a Virtual Machine in another segment; this traffic egresses the Tanzu Supervisor Namespace segment through the Tanzu Supervisor Namespace Egress IP

What does this mean?

Rule 1:

Rule 1 illustration

For Cluster-to-Cluster traffic, NAT rule 1 matches, so no SNAT is applied and Cluster B sees the incoming traffic's source IP as the Node IP of Cluster A.

Rule 2:

Rule 2 illustration

For traffic within the Cluster, NAT rule 2 matches, so no SNAT is applied and Nodes within Cluster A see each other's IP addresses. This is working as intended.

Rule 3:

Rule 3 illustration

For Cluster-to-Virtual-Machine traffic, NAT rule 3 matches, so SNAT is applied and the Virtual Machine sees the incoming traffic's source IP as the Egress IP of the Tanzu Supervisor Namespace in which Cluster A resides. This is working as intended.

Rule 3 illustration

In fact, for network traffic that leaves the T0 Gateway towards an external Kubernetes cluster, the external Kubernetes cluster will also see the incoming traffic's source IP as the Egress IP of the Tanzu Supervisor Namespace in which Cluster A resides. This is also working as intended.
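
You can verify this behaviour directly: run a pod in Cluster B that echoes the client address, then curl it from Cluster A. A minimal sketch, assuming kubectl contexts named cluster-a and cluster-b, the public echoserver image, and <cluster-b-service-ip> standing in for the exposed address:

# In Cluster B: run a pod that reports the client address of each request
kubectl --context cluster-b run echo --image=registry.k8s.io/echoserver:1.4 --port=8080
kubectl --context cluster-b expose pod echo --type=LoadBalancer --port=8080

# In Cluster A: curl the Cluster B service and inspect the reported client_address.
# With NAT rule 1 in place, this shows a Cluster A Node IP (10.222.0.0/16),
# not the Tanzu Supervisor Namespace Egress IP (10.223.136.18)
kubectl --context cluster-a run -it --rm probe --image=curlimages/curl --restart=Never -- \
  curl -s http://<cluster-b-service-ip>:8080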

Hypothesis

Because of the implementation of vSphere Pods, where Tanzu Supervisor Namespaces are treated as Kubernetes namespaces and vSphere Pods are treated as Kubernetes pods, network communication between Kubernetes pods in different namespaces should show the internal IP addresses and not the Tanzu Supervisor Namespace Egress IP. In that implementation, SNAT Rule 1 is valid.

The problem is that SNAT Rule 1 is retained in NSX-T even after the Tanzu Kubernetes Cluster deployment model is selected, under which Tanzu Kubernetes Cluster nodes should not share a segment with the nodes of Tanzu Kubernetes Clusters in other Tanzu Supervisor Namespaces.

Workaround

Since we cannot use the Tanzu Supervisor Namespace Egress IP as the source, we need another way to obtain the IP addresses of the Cluster Nodes.

Exploring a little, we find that NSX-T creates some default Distributed Firewall rules whenever a new Supervisor Namespace is created, to allow intra-cluster communication by default.

Auto-generated security group in NSX-T Distributed Firewall for Supervisor Namespace

These rules are applied to 2 security groups. Clicking into them, we can see that one of the security groups contains all Cluster Nodes that exist within the Tanzu Supervisor Namespace.

Members in auto-generated security group

We can then select this security group as our source when creating Gateway Firewall rules, and the network communication will work as desired.

The downside is that this approach cannot express fine-grained rules for Tanzu Kubernetes Clusters that exist in the same Tanzu Supervisor Namespace, as the security group contains every node in the namespace.

This means that for Cluster A and Cluster B, both existing in the same Tanzu Supervisor Namespace, we cannot allow traffic from Cluster A while denying traffic from Cluster B.

Highly Available Intermediate Certificate Authority Using Kubernetes

In our own private infrastructure environment, we often need to use our own self-signed TLS certificates to serve our sites over HTTPS.

Step CA can help you generate TLS certificates for your sites using the ACME protocol, and automate the TLS certificate renewal process as well.

In this post we will walk through the process of deploying a PostgreSQL cluster on Kubernetes, then deploying a Step CA Intermediate Certificate Authority that uses the PostgreSQL cluster as its database.

Here is what the final architecture will look like:

Architecture

Preparation

To start the entire deployment, let's create the stepca Kubernetes namespace where everything will be deployed to:

kubectl create ns stepca

Sign a leaf TLS cert for the domain where you will be hosting your Step CA. We can inject the TLS secret as follows:

kubectl create secret tls stepca-tls -n stepca --key private.key --cert public.crt

Download the step binary here and place it in /usr/bin.

Then generate an intermediate certificate signing request:

step certificate create "Intermediate CA Name" intermediate.csr intermediate_ca.key --csr

Transfer the certificate signing request to your existing root CA and get it signed. You should have the root_ca.crt from your existing root CA, intermediate_ca.crt from signing the certificate signing request, and intermediate_ca.key that was created when you generated the intermediate certificate signing request from the previous step.
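
If your existing root CA is also managed with step, the signing step might look like the following sketch (it assumes the root CA key is available as root_ca.key):

step certificate sign --profile intermediate-ca intermediate.csr root_ca.crt root_ca.key > intermediate_ca.crt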

Deploying a PostgreSQL Cluster

There are many PostgreSQL Operators available that can help you manage the lifecycle of a highly available PostgreSQL cluster. In this example, we will be using CloudNativePG. I like CloudNativePG because the operator creates the database instances using Pods instead of StatefulSets, which avoids all the limitations that come with StatefulSets.

To install the CloudNativePG Operator, run the following command:

kubectl apply -f \
  https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.22/releases/cnpg-1.22.0.yaml

You can verify the Operator is installed with:

kubectl get deployment -n cnpg-system cnpg-controller-manager

Here is the YAML deployment file to create the Postgres Cluster. In this example, we will store our Postgres backups in our own MinIO instance hosted at minio.domain.org.

# Inject MinIO Credentials as Secret
apiVersion: v1
kind: Secret
metadata:
  name: minio
  namespace: stepca
type: Opaque
stringData:
  ACCESS_KEY_ID: minio # S3 username
  ACCESS_SECRET_KEY: minio123 # S3 password
---
# PostgreSQL Cluster Deployment YAML
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: stepca-postgres
  namespace: stepca
spec:
  imageName: ghcr.io/cloudnative-pg/postgresql:15.0
  instances: 3
  primaryUpdateStrategy: unsupervised # Rolling update process to be automated and managed by Kubernetes
  monitoring:
    enablePodMonitor: true # Expose prometheus metrics
  storage:
    storageClass: vsan-default-storage-policy # Define the storage class you use in your Kubernetes cluster
    size: 1Gi
  backup:
    # Configure to use S3 to store backup resources
    barmanObjectStore:
      destinationPath: s3://stepca/ # S3 bucket location
      endpointURL: https://minio.domain.org # S3 endpoint
      s3Credentials:
        accessKeyId:
          name: minio
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: minio
          key: ACCESS_SECRET_KEY
      wal:
        compression: gzip
        encryption: AES256
      retentionPolicy: "7d"    
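
Apply the manifest and wait for all 3 instances to come up. CloudNativePG manages the cluster through a Cluster custom resource, so you can watch its status directly (the manifest file name is assumed, and the output is roughly what you should see):

kubectl apply -f stepca-postgres.yaml
kubectl get cluster -n stepca stepca-postgres

# NAME              AGE   INSTANCES   READY   STATUS                     PRIMARY
# stepca-postgres   5m    3           3       Cluster in healthy state   stepca-postgres-1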

You should see the following Kubernetes resources once your PostgreSQL cluster is created:

Services:

  • stepca-postgres-r
    • applications to connect to any of the instances for read-only workloads
  • stepca-postgres-ro
    • applications to connect to any of the hot standby, non-primary replicas for read-only workloads
  • stepca-postgres-rw
    • applications to connect to the primary instance for read-write workloads

Secrets:

  • stepca-postgres-app
    • database credentials for the default user called app, corresponds to the user owning the database
  • stepca-postgres-ca
    • self-signed CA generated and used to support TLS within the postgres cluster
  • stepca-postgres-replication
    • streaming replication client certificate generated by the client CA
  • stepca-postgres-server
    • server TLS certificate signed by the server CA
  • stepca-postgres-superuser
    • superuser credentials to be used only for administrative purposes, corresponds to the postgres user
  • stepca-postgres-token
    • kubernetes service account created for the database operator

Monitoring:

  • When enablePodMonitor is set to true, CloudNativePG will automatically expose Prometheus metrics relating to CloudNativePG clusters, and create a PodMonitor resource for your Prometheus to scrape the endpoint (see the check below)
    • The prerequisite is that you must already have Prometheus installed
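
As a quick sanity check of the monitoring wiring, you can look for the PodMonitor the operator creates; this assumes the Prometheus Operator CRDs are installed and that the PodMonitor is named after the cluster:

kubectl get podmonitor -n stepca

# Should list a PodMonitor for stepca-postgres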

PostgreSQL Backups

For on-demand backups, apply the following YAML:

apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: backup-example
spec:
  cluster:
    name: stepca-postgres
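
After applying, the Backup object tracks its progress in its status, so you can poll it until the phase shows completed (the manifest file name is assumed, and the output is indicative):

kubectl apply -n stepca -f backup-example.yaml
kubectl get backup -n stepca backup-example

# NAME             AGE   CLUSTER           PHASE       ERROR
# backup-example   60s   stepca-postgres   completed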

To schedule backups, apply the following YAML:

apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: backup-daily-midnight
  namespace: stepca
spec:
  schedule: "0 0 16 * * *" # 0000 SGT in UTC time
  backupOwnerReference: self
  cluster:
    name: stepca-postgres

To restore a backup from the S3 object store, apply the following YAML:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: stepca-postgres
  namespace: stepca
spec:
  imageName: ghcr.io/cloudnative-pg/postgresql:15.0
  instances: 3
  primaryUpdateStrategy: unsupervised # Rolling update process to be automated and managed by Kubernetes
  monitoring:
    enablePodMonitor: true # Expose prometheus metrics
  storage:
    storageClass: vsan-default-storage-policy # Define the storage class you use in your Kubernetes cluster
    size: 1Gi
  backup:
    # Configure to use S3 to store backup resources
    barmanObjectStore:
      destinationPath: s3://stepca/ # S3 bucket location
      endpointURL: https://minio.domain.org # S3 endpoint
      s3Credentials:
        accessKeyId:
          name: minio
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: minio
          key: ACCESS_SECRET_KEY
      wal:
        compression: gzip
        encryption: AES256
      retentionPolicy: "7d"
  bootstrap:
    recovery:
      source: stepca-postgres
      recoveryTarget:
        # Use backupID with targetImmediate to restore to that backup and stop immediately,
        # or use targetTime for point-in-time recovery, where the database replays the WAL up to the specified timestamp after restoring from the nearest base backup
        backupID: 20240102T160000
        targetTime: "2024-01-02T09:00:25"
  externalClusters:
    - name: stepca-postgres # Name has to be the same as the previous cluster, since Barman searches for backups based on the name
      barmanObjectStore:
        destinationPath: s3://stepca/
        endpointURL: https://minio.domain.org # S3 endpoint
        s3Credentials:
          accessKeyId:
            name: minio
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: minio
            key: ACCESS_SECRET_KEY
        wal:
          maxParallel: 8 # Take advantage of the parallel WAL restore feature to dedicate up to 8 concurrent jobs to fetch required WAL files from the archive
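
If you have the cnpg kubectl plugin installed (optional), you can follow the recovery and overall cluster health with a single command:

kubectl cnpg status stepca-postgres -n stepca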

Deploying Step CA

Step CA includes a Helm chart to deploy on Kubernetes, but it comes with a disclaimer that only 1 replica instance is supported. Therefore we will not be using their Helm chart; instead, we take the following deployment path:

  1. Run a Step CA on Docker
  2. Retrieve the configuration template to use in our actual deployment
  3. Deploy our Step CA with our specified configuration injected as a Kubernetes secret

To run Step CA on Docker, we can use the following docker-compose.yml file:

version: '3.3'
services:
  ca:
    image: smallstep/step-ca:0.24.1
    networks:
      - default
    ports:
      - "9000:9000"
    environment:
      - DOCKER_STEPCA_INIT_NAME=${DOCKER_STEPCA_INIT_NAME} # Name of your CA - this will be the issuer of your CA certificates
      - DOCKER_STEPCA_INIT_DNS_NAMES=${DOCKER_STEPCA_INIT_DNS_NAMES} # Hostname(s) or IPs that the CA will accept requests on
      - DOCKER_STEPCA_INIT_PROVISIONER_NAME=${DOCKER_STEPCA_INIT_PROVISIONER_NAME} # Label for the initial admin (JWK) provisioner. Default: "admin"
      - DOCKER_STEPCA_INIT_PASSWORD=${DOCKER_STEPCA_INIT_PASSWORD} # Password for the encrypted CA keys and the default CA provisioner
    volumes:
      - ./data/home/step:/home/step
    restart: always
networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: "172.97.0.0/16"
volumes:
  step_home:
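
Bring the container up once so that it initialises a CA and writes its configuration into the bind mount; the files we want to adapt then appear under ./data/home/step (this assumes the environment variables referenced above are set, e.g. via an .env file):

docker compose up -d
ls ./data/home/step/config

# ca.json  defaults.json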

From here, we can retrieve the configuration files from /home/step/, then edit the template to use the PostgreSQL cluster that we deployed earlier (the app user's password is stored in the stepca-postgres-app secret). A reference for the configuration options can be found here.

A customised config/ca.json is as follows:

{
  "root": "/home/step/certs/root_ca.crt",
  "federatedRoots": null,
  "crt": "/home/step/certs/intermediate_ca.crt",
  "key": "/home/step/certs/intermediate_ca.key",
  "address": "9000",
  "insecureAddress": "true",
  "dnsNames": [
    "localhost",
    "ca.domain.org"
  ],
  "logger": {
    "format": "text"
  },
  "db":{
    "type": "postgresql",
    "dataSource": "postgresql://app:[email protected]:5432",
    "database": "app"
  },
  "authority": {
    "provisioners": [
      {
        "type": "JWK",
        "name": "admin",
        "key": {
          "use": "sig",
          "kty": "EC",
          "kid": "YYNxZ0rq0WsT2MlqLCWvgme3jszkmt99KjoGEJJwAKs",
          "crv": "P-256",
          "alg": "ES256",
          "x": "LsI8nHBflc-mrCbRqhl8d3hSl5sYuSM1AbXBmRfznyg",
          "y": "F99LoOvi7z-ZkumsgoHIhodP8q9brXe4bhF3szK-c_w"
        },
        "encryptedKey": "eyJhbGciOiJQQkVTMi1IUzI1NitBMTI4S1ciLCJjdHkiOiJqd2sranNvbiIsImVuYyI6IkEyNTZHQ00iLCJwMmMiOjEwMDAwMCwicDJzIjoiVERQS2dzcEItTUR4ZDJxTGo0VlpwdyJ9.2_j0cZgTm2eFkZ-hrtr1hBIvLxN0w3TZhbX0Jrrq7vBMaywhgFcGTA.mCasZCbZJ-JT7vjA.bW052WDKSf_ueEXq1dyxLq0n3qXWRO-LXr7OzBLdUKWKSBGQrzqS5KJWqdUCPoMIHTqpwYvm-iD6uFlcxKBYxnsAG_hoq_V3icvvwNQQSd_q7Thxr2_KtPIDJWNuX1t5qXp11hkgb-8d5HO93CmN7xNDG89pzSUepT6RYXOZ483mP5fre9qzkfnrjx3oPROCnf3SnIVUvqk7fwfXuniNsg3NrNqncHYUQNReiq3e9I1R60w0ZQTvIReY7-zfiq7iPgVqmu5I7XGgFK4iBv0L7UOEora65b4hRWeLxg5t7OCfUqrS9yxAk8FdjFb9sEfjopWViPRepB0dYPH8dVI.fb6-7XWqp0j6CR9Li0NI-Q",
        "claims": {
          "enableSSHCA": false,
          "disableRenewal": false,
          "allowRenewalAfterExpiry": false
        },
        "options": {
          "x509": {},
          "ssh": {}
        }
      },
      {
        "type": "ACME",
        "name": "acme",
        "forceCN": true,
        "claims": {
          "maxTLSCertDuration": "2160h0m0s",
          "defaultTLSCertDuration": "2160h0m0s",
          "policy": {
            "x509": {
              "allow": ["*.domain.org"]
            }
          }
        },
        "options": {
          "x509": {
            "templateFile": "templates/certs/x509/leaf.tpl"
          }
        }
      }
    ],
    "template": {},
    "backdate": "1m0s"
  },
  "tls": {
    "cipherSuites": [
      "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
    ],
    "minVersion": 1.2,
    "maxVersion": 1.3,
    "renegotiation": false
  },
  "commonName": "Step Online CA"
}

This ca.json references a custom leaf certificate template, leaf.tpl, to set the Subject Alternative Name (SAN) in the provisioned TLS certificates. Check here for how to configure your own Step CA templates.

The custom leaf.tpl is as follows:

{
  "subject": {{ toJson .Subject }},
{{- if .Insecure.User.dnsName }}
  "dnsNames": {{ toJson .Insecure.User.dnsName }},
{{- else }}
  "sans": {{ toJson .SANs }},
{{- end }}
{{- if typeIs "*rsa.PublicKey" .Insecure.CR.PublicKey }}
  "keyUsage": ["keyEncipherment", "digitalSignature"],
{{- else }}
  "keyUsage": ["digitalSignature"],
{{- end }}
  "extKeyUsage": ["serverAuth", "clientAuth"]
}

We can then create the entire deployment YAML file, including injecting the configuration as a Kubernetes Secret and referencing that Secret from the Deployment resource.

Our example deployment assumes an NGINX ingress controller is already deployed in the cluster.
The deployment.yaml is as follows:

apiVersion: v1
kind: Secret
metadata:
  name: stepca-config
  namespace: stepca
type: Opaque
stringData:
  intermediate_ca.crt: |
    <YOUR INTERMEDIATE CA CRT>
  root_ca.crt: |
    <YOUR ROOT CA CRT>
  ca.json: |
    {
      "root": "/home/step/certs/root_ca.crt",
      "federatedRoots": null,
      "crt": "/home/step/certs/intermediate_ca.crt",
      "key": "/home/step/certs/intermediate_ca.key",
      "address": "9000",
      "insecureAddress": "true",
      "dnsNames": [
        "localhost",
        "ca.domain.org"
      ],
      "logger": {
        "format": "text"
      },
      "db":{
        "type": "postgresql",
        "dataSource": "postgresql://app:[email protected]:5432",
        "database": "app"
      },
      "authority": {
        "provisioners": [
          {
            "type": "JWK",
            "name": "admin",
            "key": {
              "use": "sig",
              "kty": "EC",
              "kid": "YYNxZ0rq0WsT2MlqLCWvgme3jszkmt99KjoGEJJwAKs",
              "crv": "P-256",
              "alg": "ES256",
              "x": "LsI8nHBflc-mrCbRqhl8d3hSl5sYuSM1AbXBmRfznyg",
              "y": "F99LoOvi7z-ZkumsgoHIhodP8q9brXe4bhF3szK-c_w"
            },
            "encryptedKey": "eyJhbGciOiJQQkVTMi1IUzI1NitBMTI4S1ciLCJjdHkiOiJqd2sranNvbiIsImVuYyI6IkEyNTZHQ00iLCJwMmMiOjEwMDAwMCwicDJzIjoiVERQS2dzcEItTUR4ZDJxTGo0VlpwdyJ9.2_j0cZgTm2eFkZ-hrtr1hBIvLxN0w3TZhbX0Jrrq7vBMaywhgFcGTA.mCasZCbZJ-JT7vjA.bW052WDKSf_ueEXq1dyxLq0n3qXWRO-LXr7OzBLdUKWKSBGQrzqS5KJWqdUCPoMIHTqpwYvm-iD6uFlcxKBYxnsAG_hoq_V3icvvwNQQSd_q7Thxr2_KtPIDJWNuX1t5qXp11hkgb-8d5HO93CmN7xNDG89pzSUepT6RYXOZ483mP5fre9qzkfnrjx3oPROCnf3SnIVUvqk7fwfXuniNsg3NrNqncHYUQNReiq3e9I1R60w0ZQTvIReY7-zfiq7iPgVqmu5I7XGgFK4iBv0L7UOEora65b4hRWeLxg5t7OCfUqrS9yxAk8FdjFb9sEfjopWViPRepB0dYPH8dVI.fb6-7XWqp0j6CR9Li0NI-Q",
            "claims": {
              "enableSSHCA": false,
              "disableRenewal": false,
              "allowRenewalAfterExpiry": false
            },
            "options": {
              "x509": {},
              "ssh": {}
            }
          },
          {
            "type": "ACME",
            "name": "acme",
            "forceCN": true,
            "claims": {
              "maxTLSCertDuration": "2160h0m0s",
              "defaultTLSCertDuration": "2160h0m0s",
              "policy": {
                "x509": {
                  "allow": ["*.domain.org"]
                }
              }
            },
            "options": {
              "x509": {
                "templateFile": "templates/certs/x509/leaf.tpl"
              }
            }
          }
        ],
        "template": {},
        "backdate": "1m0s"
      },
      "tls": {
        "cipherSuites": [
          "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
          "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"
        ],
        "minVersion": 1.2,
        "maxVersion": 1.3,
        "renegotiation": false
      },
      "commonName": "Step Online CA"
    }
  defaults.json: |
    {
    "ca-url": "https://localhost:9000",
    "ca-config": "/home/step/config/ca.json",
    "fingerprint": "93cff06dc36251fb0c4985d0b5ed7265a368cd70697fba90355c93cc4aabff0d",
    "root": "/home/step/certs/root_ca.crt"
    }
  intermediate_ca.key: |
    <YOUR INTERMEDIATE CA KEY>
  password: |
    <YOUR STEP CA ADMIN PASSWORD>
  leaf.tpl: |
    {
    "subject": {{ toJson .Subject }},
    {{- if .Insecure.User.dnsName }}
    "dnsNames": {{ toJson .Insecure.User.dnsName }},
    {{- else }}
    "sans": {{ toJson .SANs }},
    {{- end }}
    {{- if typeIs "*rsa.PublicKey" .Insecure.CR.PublicKey }}
    "keyUsage": ["keyEncipherment", "digitalSignature"],
    {{- else }}
    "keyUsage": ["digitalSignature"],
    {{- end }}
    "extKeyUsage": ["serverAuth", "clientAuth"]
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stepca
  namespace: stepca
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stepca
  template:
    metadata:
      labels:
        app: stepca
    spec:
      containers:
        - name: stepca
          image: smallstep/step-ca:0.23.0
          ports:
            - containerPort: 9000
          env:
            - name: DOCKER_STEPCA_INIT_NAME
              value: stepca
            - name: DOCKER_STEPCA_INIT_DNS_NAMES
              value: localhost, ca.domain.org
            - name: DOCKER_STEPCA_INIT_PROVISIONER_NAME
              value: admin
            - name: DOCKER_STEPCA_INIT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: stepca-config
                  key: password
          volumeMounts:
            - mountPath: /home/step
              name: stepca-config
              readOnly: false
      volumes:
        - name: stepca-config
          secret:
            secretName: stepca-config
            defaultMode: 0755
            items:
              - key: intermediate_ca.crt
                path: certs/intermediate_ca.crt
              - key: root_ca.crt
                path: certs/root_ca.crt
              - key: ca.json
                path: config/ca.json
              - key: defaults.json
                path: config/defaults.json
              - key: intermediate_ca.key
                path: certs/intermediate_ca.key
              - key: password
                path: secrets/password
              - key: leaf.tpl
                path: templates/certs/x509/leaf.tpl
---
apiVersion: v1
kind: Service
metadata:
  name: stepca
  namespace: stepca
  labels:
    app: stepca
spec:
  type: ClusterIP
  ports:
  - port: 9000
  selector:
    app: stepca
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: stepca-ingress
  namespace: stepca
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/backend-protocol: 'HTTPS'
spec:
  tls:
    - hosts:
        - ca.domain.org
      secretName: stepca-tls
  rules:
    - host: ca.domain.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: stepca
                port:
                  number: 9000
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: stepca
  namespace: stepca
  labels:
    app: stepca
    resource: horizontalpodautoscaler
spec:
  maxReplicas: 6
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: stepca
  targetCPUUtilizationPercentage: 80

You can check the health of your Intermediate Certificate Authority:

curl https://ca.domain.org/health

# {"status":"ok"}

curl https://ca.domain.org/acme/acme/directory

# {"newNonce":"https://ca.domain.org/acme/acme/new-nonce","newAccount":"https://ca.domain.org/acme/acme/new-account","newOrder":"https://ca.domain.org/acme/acme/new-order","revokeCert":"https://ca.domain.org/acme/acme/revoke-cert","keyChange":"https://ca.domain.org/acme/acme/key-change"}

About Me

Hi, I'm Song Ann, a cloud engineer based in Singapore. As a tinkerer at heart and a lifelong learner, I love exploring and experimenting with new technologies!

This blog records various snippets of learnings and takeaways from my exploration in tech.

Lightweight Kubernetes Using Docker Compose

Local development on Kubernetes can be a hassle since the resources required to run a full Kubernetes cluster can be quite heavy.

In this post, we will explore running a single node Kubernetes cluster for your local development needs using Docker Compose.

Kubernetes Distribution

We will be focusing on 2 different Kubernetes distributions: K3s and RKE2.

K3s is Rancher's lightweight, fully compliant Kubernetes distribution, packaged into a single binary. It is most commonly used for edge computing or IoT use cases, and there are projects such as k3d, a lightweight wrapper to run K3s in Docker.

RKE2 is Rancher's next-generation Kubernetes distribution, combining RKE1's close alignment with upstream Kubernetes and K3s's deployment model for ease of operations. It also comes with FIPS 140-2 compliance.

K3s in Docker

k3d provides a simple way to create K3s clusters for local development, but we are looking for an even simpler solution that uses only Docker and Docker Compose.

You only need a docker-compose.yml file:

version: '3.7'
services:
  server:
    image: rancher/k3s:v1.24.0-rc1-k3s1-amd64
    networks:
    - default
    command: server
    tmpfs:
    - /run
    - /var/run
    ulimits:
      nproc: 65535
      nofile:
        soft: 65535
        hard: 65535
    privileged: true
    restart: always
    environment:
    # - K3S_TOKEN=${K3S_PASSWORD_1} # Only required if we are running more than 1 node
    - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
    - K3S_KUBECONFIG_MODE=666
    volumes:
    - k3s-server:/var/lib/rancher/k3s
    # This is just so that we get the kubeconfig file out
    - ./k3s_data/kubeconfig:/output
    - ./k3s_data/docker_images:/var/lib/rancher/k3s/agent/images
    expose:
    - "6443"  # Kubernetes API Server
    - "80"    # Ingress controller port 80
    - "443"   # Ingress controller port 443
    ports:
    - 6443:6443
volumes:
  k3s-server: {}
networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: "172.98.0.0/16" # Self-defined subnet on local machine

Create the k3s_data directory and run the docker compose up command:

mkdir -p k3s_data/kubeconfig
docker compose up -d

All that is left is to alias your kubectl to use your K3s kubeconfig, and you can start interacting with your K3s Kubernetes cluster!

alias k='kubectl --kubeconfig '"${PWD}"'/k3s_data/kubeconfig/kubeconfig.yaml'
k get pods -A

# NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
# kube-system   local-path-provisioner-7b7dc8d6f5-q5ldx   1/1     Running     0          4m23s
# kube-system   coredns-b96499967-kw4gt                   1/1     Running     0          4m23s
# kube-system   metrics-server-668d979685-cvxmv           1/1     Running     0          4m23s
# kube-system   helm-install-traefik-crd-wrjz6            0/1     Completed   0          4m24s
# kube-system   helm-install-traefik-smxtr                0/1     Completed   1          4m24s
# kube-system   svclb-traefik-pxlgs                       2/2     Running     0          3m51s
# kube-system   traefik-7cd4fcff68-lbg4m                  1/1     Running     0          3m51s

RKE2 in Docker

RKE2 is slightly more complicated as there isn't a readily available container image. We will have to build our own container image and publish it to our container registry of choice.

The Dockerfile can be found here. Alternatively, I have already built the container image and you can copy my image.
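
If you would rather build and publish the image yourself, the standard Docker workflow applies (the registry and image names are placeholders):

docker build -t <your-registry>/rke2:v1.27.1-rke2r1 .
docker push <your-registry>/rke2:v1.27.1-rke2r1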

Once again, the docker-compose.yml file:

version: '3.7'
services:
  server:
    image: sachua/rke2-test:v1.27.1-rke2r1
    networks:
    - default
    command: server
    tmpfs:
    - /run
    - /var/run
    ulimits:
      nproc: 65535
      nofile:
        soft: 65535
        hard: 65535
    privileged: true
    restart: always
    environment:
    # - RKE2_TOKEN=${RKE2_PASSWORD_1} # Only required if we are running more than 1 node
    - RKE2_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
    - RKE2_KUBECONFIG_MODE=666
    volumes:
    - rke2-server:/var/lib/rancher/rke2
    # This is just so that we get the kubeconfig file out
    - ./rke2_data/kubeconfig:/output
    - ./rke2_data/docker_images:/var/lib/rancher/rke2/agent/images
    expose:
    - "6443"  # Kubernetes API Server
    - "80"    # Ingress controller port 80
    - "443"   # Ingress controller port 443
    ports:
    - 6443:6443
volumes:
  rke2-server: {}
networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: "172.98.0.0/16"

Create the rke2_data directory and run the docker compose up command:

mkdir -p rke2_data/kubeconfig
docker compose up -d

Finally, alias your kubectl to use your RKE2 kubeconfig, and you can start interacting with your RKE2 Kubernetes cluster!
Note that RKE2 might take longer to come up, as it has more initial set-up tasks than K3s.

alias k='kubectl --kubeconfig '"${PWD}"'/rke2_data/kubeconfig/kubeconfig.yaml'
k get pods -A

# NAMESPACE     NAME                                                   READY   STATUS      RESTARTS        AGE
# kube-system   cloud-controller-manager-7e3475f79d6d                  1/1     Running     1 (2m34s ago)   2m35s
# kube-system   etcd-7e3475f79d6d                                      1/1     Running     0               116s
# kube-system   helm-install-rke2-canal-vznnf                          0/1     Completed   1               2m23s
# kube-system   helm-install-rke2-coredns-7mdnk                        0/1     Completed   0               2m23s
# kube-system   helm-install-rke2-ingress-nginx-xt8s8                  0/1     Completed   0               2m23s
# kube-system   helm-install-rke2-metrics-server-gzjmd                 0/1     Completed   0               2m23s
# kube-system   helm-install-rke2-snapshot-controller-crd-452h4        0/1     Completed   0               2m23s
# kube-system   helm-install-rke2-snapshot-controller-mlfz8            0/1     Completed   2               2m23s
# kube-system   helm-install-rke2-snapshot-validation-webhook-c6zrt    0/1     Completed   0               2m23s
# kube-system   kube-apiserver-7e3475f79d6d                            1/1     Running     0               2m35s
# kube-system   kube-controller-manager-7e3475f79d6d                   1/1     Running     0               2m37s
# kube-system   kube-proxy-7e3475f79d6d                                1/1     Running     0               2m34s
# kube-system   kube-scheduler-7e3475f79d6d                            1/1     Running     0               2m37s
# kube-system   rke2-canal-8ntsg                                       2/2     Running     0               2m2s
# kube-system   rke2-coredns-rke2-coredns-5896cccb79-hngm5             1/1     Running     0               2m3s
# kube-system   rke2-coredns-rke2-coredns-autoscaler-f6766cdc9-d7b2m   1/1     Running     0               2m3s
# kube-system   rke2-ingress-nginx-controller-vvwcp                    1/1     Running     0               48s
# kube-system   rke2-metrics-server-6d45f6cb4d-wl7t8                   1/1     Running     0               72s
# kube-system   rke2-snapshot-controller-7bf6d7bf5f-v9lfb              1/1     Running     0               58s
# kube-system   rke2-snapshot-validation-webhook-b65d46c9f-988vf       1/1     Running     0               70s
