kubernetes-the-hard-way-aws's Introduction

About

This is a fork of the awesome Kubernetes The Hard Way by Kelsey Hightower, geared towards using it on AWS. Thanks also to @slawekzachcial for his work that made this easier.

There are currently no tool upgrades as compared to the original.

Kubernetes The Hard Way

This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you, then check out Google Kubernetes Engine, Amazon Elastic Kubernetes Service (EKS), or the Getting Started Guides.

Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.

The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!

Target Audience

The target audience for this tutorial is someone planning to support a production Kubernetes cluster who wants to understand how everything fits together.

Cluster Details

Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.

Labs

This tutorial assumes you have access to Amazon Web Services (AWS). If you are looking for the GCP version of this guide, see https://github.com/kelseyhightower/kubernetes-the-hard-way.

kubernetes-the-hard-way-aws's People

Contributors

aleks-mariusz, bigal, chynkm, davidread, dmakeroam, lpmi-13, murty0, prabhatsharma, ryanbrainard, trjh

kubernetes-the-hard-way-aws's Issues

improve instance-querying commands

The commands that invoke aws ec2 describe-instances, such as these, will fail if you are going through the guide a second time, since worker nodes from previous runs are also returned, leading to failures that look like this:

An error occurred (InvalidParameterValue) when calling the DescribeInstanceAttribute operation: Value (i-07856279c563df20a
i-0b873ccd72361c70c) for parameter instanceId is invalid. Expected: 'i-...'.
None
10.0.1.20

An error occurred (InvalidInstanceID.Malformed) when calling the CreateRoute operation: Invalid id: "i-07856279c563df20a
i-0b873ccd72361c70c"

Additionally, some of the region-specific queries in the smoke-test can be condensed/optimized into single queries.

I will create a PR to update these commands with the filter improvements in mind.
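
A minimal sketch of what the filtered query could look like (adding an instance-state-name filter is an assumption on my part, not necessarily what the final PR will do):

for instance in worker-0 worker-1 worker-2; do
  # Restrict the query to running instances so worker nodes left over from
  # previous runs (terminated or stopping) are not returned.
  aws ec2 describe-instances \
    --filters "Name=tag:Name,Values=${instance}" \
              "Name=instance-state-name,Values=running" \
    --output text --query 'Reservations[].Instances[].[InstanceId,PrivateIpAddress]'
done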

Waiting for services and endpoints to be initialized from apiserver

I'm having a problem getting DNS to work. It looks like the apiserver times out, but I can't figure out why. Any help is appreciated.

kubectl get pods -l k8s-app=kube-dns -n kube-system
NAME                        READY     STATUS             RESTARTS   AGE
kube-dns-864b8bdc77-8gntk   2/3       CrashLoopBackOff   5          5m
kubectl logs kube-dns-864b8bdc77-8gntk kubedns -n kube-system
...
I0210 21:32:34.232623       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0210 21:32:34.732601       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
I0210 21:32:35.232515       1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver...
E0210 21:32:35.232914       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.32.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.32.0.1:443: i/o timeout
E0210 21:32:35.233133       1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.32.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.32.0.1:443: i/o timeout
...
kubectl describe pod kube-dns-864b8bdc77-8gntk -n kube-system
Name:               kube-dns-864b8bdc77-8gntk
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               ip-10-240-0-22/10.240.0.22
Start Time:         Sun, 10 Feb 2019 13:32:00 -0800
Labels:             k8s-app=kube-dns
                    pod-template-hash=4206468733
Annotations:        scheduler.alpha.kubernetes.io/critical-pod=
Status:             Running
IP:                 10.200.2.2
Controlled By:      ReplicaSet/kube-dns-864b8bdc77
Containers:
  kubedns:
    Container ID:  containerd://774ee159af06de9ada6c66965a73a9fc9560487fc6b1bdbc72acafdd75954fb3
    Image:         gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
    Image ID:      gcr.io/google_containers/k8s-dns-kube-dns-amd64@sha256:f5bddc71efe905f4e4b96f3ca346414be6d733610c1525b98fff808f93966680
    Ports:         10053/UDP, 10053/TCP, 10055/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      --domain=cluster.local.
      --dns-port=10053
      --config-dir=/kube-dns-config
      --v=2
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Sun, 10 Feb 2019 13:37:35 -0800
      Finished:     Sun, 10 Feb 2019 13:38:35 -0800
    Ready:          False
    Restart Count:  4
    Limits:
      memory:  170Mi
    Requests:
      cpu:      100m
      memory:   70Mi
    Liveness:   http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:  http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
    Environment:
      PROMETHEUS_PORT:  10055
    Mounts:
      /kube-dns-config from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-7lzvn (ro)
  dnsmasq:
    Container ID:  containerd://a4630ae32325ce609abd26489e5f014bb96da133235c1b1efe5e484bd3f6e6ed
    Image:         gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
    Image ID:      gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64@sha256:6cfb9f9c2756979013dbd3074e852c2d8ac99652570c5d17d152e0c0eb3321d6
    Ports:         53/UDP, 53/TCP
    Host Ports:    0/UDP, 0/TCP
    Args:
      -v=2
      -logtostderr
      -configDir=/etc/k8s/dns/dnsmasq-nanny
      -restartDnsmasq=true
      --
      -k
      --cache-size=1000
      --no-negcache
      --log-facility=-
      --server=/cluster.local/127.0.0.1#10053
      --server=/in-addr.arpa/127.0.0.1#10053
      --server=/ip6.arpa/127.0.0.1#10053
    State:          Running
      Started:      Sun, 10 Feb 2019 13:38:38 -0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Sun, 10 Feb 2019 13:36:27 -0800
      Finished:     Sun, 10 Feb 2019 13:38:37 -0800
    Ready:          True
    Restart Count:  3
    Requests:
      cpu:        150m
      memory:     20Mi
    Liveness:     http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /etc/k8s/dns/dnsmasq-nanny from kube-dns-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-7lzvn (ro)
  sidecar:
    Container ID:  containerd://76121d4643f072450fa6daea8e582f39641ca41ef74045c59e04167e7ef22254
    Image:         gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
    Image ID:      gcr.io/google_containers/k8s-dns-sidecar-amd64@sha256:f80f5f9328107dc516d67f7b70054354b9367d31d4946a3bffd3383d83d7efe8
    Port:          10054/TCP
    Host Port:     0/TCP
    Args:
      --v=2
      --logtostderr
      --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
      --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
    State:          Running
      Started:      Sun, 10 Feb 2019 13:32:08 -0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        10m
      memory:     20Mi
    Liveness:     http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-dns-token-7lzvn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-dns-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-dns
    Optional:  true
  kube-dns-token-7lzvn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kube-dns-token-7lzvn
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     CriticalAddonsOnly
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age              From                     Message
  ----     ------     ----             ----                     -------
  Normal   Scheduled  6m               default-scheduler        Successfully assigned kube-system/kube-dns-864b8bdc77-8gntk to ip-10-240-0-22
  Normal   Pulling    6m               kubelet, ip-10-240-0-22  pulling image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7"
  Normal   Pulled     6m               kubelet, ip-10-240-0-22  Successfully pulled image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7"
  Normal   Pulling    6m               kubelet, ip-10-240-0-22  pulling image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7"
  Normal   Pulled     6m               kubelet, ip-10-240-0-22  Successfully pulled image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7"
  Normal   Created    6m               kubelet, ip-10-240-0-22  Created container
  Normal   Started    6m               kubelet, ip-10-240-0-22  Started container
  Normal   Pulling    6m               kubelet, ip-10-240-0-22  pulling image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7"
  Normal   Pulled     6m               kubelet, ip-10-240-0-22  Successfully pulled image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7"
  Normal   Created    6m               kubelet, ip-10-240-0-22  Created container
  Normal   Started    6m               kubelet, ip-10-240-0-22  Started container
  Normal   Started    5m (x2 over 6m)  kubelet, ip-10-240-0-22  Started container
  Normal   Created    5m (x2 over 6m)  kubelet, ip-10-240-0-22  Created container
  Normal   Pulled     5m               kubelet, ip-10-240-0-22  Container image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7" already present on machine
  Warning  Unhealthy  5m (x2 over 5m)  kubelet, ip-10-240-0-22  Liveness probe failed: HTTP probe failed with statuscode: 503
  Warning  Unhealthy  5m (x8 over 6m)  kubelet, ip-10-240-0-22  Readiness probe failed: Get http://10.200.2.2:8081/readiness: dial tcp 10.200.2.2:8081: connect: connection refused
  Warning  BackOff    1m (x6 over 3m)  kubelet, ip-10-240-0-22  Back-off restarting failed container
kubectl get svc kubernetes -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-02-10T21:14:46Z
  labels:
    component: apiserver
    provider: kubernetes
  name: kubernetes
  namespace: default
  resourceVersion: "14"
  selfLink: /api/v1/namespaces/default/services/kubernetes
  uid: e4c3b4a9-2d78-11e9-aeb9-0662e484f590
spec:
  clusterIP: 10.32.0.1
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
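
Not from the original thread, but one way to narrow this down: the i/o timeout to 10.32.0.1:443 usually means the worker cannot reach the kubernetes service ClusterIP, so it is worth confirming on that worker that kube-proxy is running and has programmed rules for the service IP (a diagnostic sketch, assuming kube-proxy runs in iptables mode as configured in this guide):

# On the worker node hosting the kube-dns pod (ip-10-240-0-22 above):
sudo systemctl status kube-proxy --no-pager
# Look for NAT rules translating the kubernetes service ClusterIP to the API
# server address; no output here suggests kube-proxy has not set them up.
sudo iptables-save | grep 10.32.0.1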

An error occurred (MissingParameter) when calling the CreateRoute operation

I'm running part 11 (https://github.com/prabhatsharma/kubernetes-the-hard-way-aws/blob/master/docs/11-pod-network-routes.md) and running into this message. I'm probably missing something really dumb.

root@ip-172-31-38-108:/home/ubuntu# for instance in worker-0 worker-1 worker-2; do
   instance_id_ip="$(aws ec2 describe-instances \
     --filters "Name=tag:Name,Values=${instance}" \
     --output text --query 'Reservations[].Instances[].[InstanceId,PrivateIpAddress]')"
   instance_id="$(echo "${instance_id_ip}" | cut -f1)"
   instance_ip="$(echo "${instance_id_ip}" | cut -f2)"
   pod_cidr="$(aws ec2 describe-instance-attribute \
     --instance-id "${instance_id}" \
     --attribute userData \
     --output text --query 'UserData.Value' \
     | base64 --decode | tr "|" "\n" | grep "^pod-cidr" | cut -d'=' -f2)"
   echo "${instance_ip} ${pod_cidr}"

   aws ec2 create-route \
     --route-table-id "${ROUTE_TABLE_ID}" \
     --destination-cidr-block "${pod_cidr}" \
     --instance-id "${instance_id}"
 done
10.240.0.20 10.200.0.0/24

An error occurred (MissingParameter) when calling the CreateRoute operation: The request must contain the parameter routeTableId
10.240.0.21 10.200.1.0/24

An error occurred (MissingParameter) when calling the CreateRoute operation: The request must contain the parameter routeTableId
10.240.0.22 10.200.2.0/24

An error occurred (MissingParameter) when calling the CreateRoute operation: The request must contain the parameter routeTableId
root@ip-172-31-38-108:/home/ubuntu#
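
The error indicates that ROUTE_TABLE_ID is empty in the current shell; it is set back in the compute-resources lab and is lost if you open a new terminal. A possible way to repopulate it (assuming the route table was tagged Name=kubernetes when it was created, as in that lab):

ROUTE_TABLE_ID="$(aws ec2 describe-route-tables \
  --filters "Name=tag:Name,Values=kubernetes" \
  --output text --query 'RouteTables[0].RouteTableId')"
echo "${ROUTE_TABLE_ID}"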

PLEG is not healthy: pleg was last seen active 42m53.338575586s ago

I created the cluster using the commands from the master branch. Two nodes started reporting this error in the kubelet logs; the kubelet service status is active.

This happens when I deploy an untrusted pod on the cluster.

Log showing the untrusted pod and the PLEG error:

Jan 31 15:43:35 ip-10-240-0-20 kubelet[3072]: I0131 15:43:35.158465    3072 provider.go:116] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 15:43:35 ip-10-240-0-20 kubelet[3072]: I0131 15:43:35.414429    3072 kubelet.go:1953] SyncLoop (PLEG): "untrusted3_default(70a6b852-4440-11ea-9a3e-0624b25d9900)", event: &pleg.PodLifecycleEvent{ID:"70a6b852-4440-11ea-9a3e-0624b25d9900", Type:"ContainerStarted", Data:"8dcbaa8a513e8cd85b2c26e14b09f8204383b77cb4b603308fcd98af5dc0a76d"}
Jan 31 15:46:40 ip-10-240-0-20 kubelet[3072]: I0131 15:46:40.357340    3072 kubelet_node_status.go:446] Recording NodeNotReady event message for node ip-10-240-0-20
Jan 31 15:46:40 ip-10-240-0-20 kubelet[3072]: I0131 15:46:40.357386    3072 setters.go:518] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-01-31 15:46:40.357320576 +0000 UTC m=+6391.653089090 LastTransitionTime:2020-01-31 15:46:40.357320576 +0000 UTC m=+6391.653089090 Reason:KubeletNotReady Message:PLEG is not healthy: pleg was last seen active 3m4.945529133s ago; threshold is 3m0s}
k get nodes
NAME             STATUS     ROLES    AGE   VERSION
ip-10-240-0-20   Ready      <none>   95m   v1.13.4
ip-10-240-0-21   NotReady   <none>   95m   v1.13.4
ip-10-240-0-22   NotReady   <none>   95m   v1.13.4

I restarted the worker services and kubelet started failing.

sudo systemctl restart containerd kubelet kube-proxy

Kubelet service status (after restart):

root@ip-10-240-0-21:/home/ubuntu# service kubelet status
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
   Active: activating (auto-restart) (Result: exit-code) since Fri 2020-01-31 15:38:51 UTC; 236ms ag
     Docs: https://github.com/kubernetes/kubernetes
  Process: 8935 ExecStart=/usr/local/bin/kubelet --config=/var/lib/kubelet/kubelet-config.yaml --con
 Main PID: 8935 (code=exited, status=255)

Jan 31 15:38:51 ip-10-240-0-21 systemd[1]: kubelet.service: Unit entered failed state.
Jan 31 15:38:51 ip-10-240-0-21 systemd[1]: kubelet.service: Failed with result 'exit-code'.

containerd status: active

root@ip-10-240-0-21:/home/ubuntu# service containerd status
● containerd.service - containerd container runtime
   Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-01-31 15:37:59 UTC; 1min 24s ago

kube-proxy status: active

root@ip-10-240-0-21:/home/ubuntu# service kube-proxy status
● kube-proxy.service - Kubernetes Kube Proxy
   Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-01-31 15:37:59 UTC; 1min 46s ago
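
Not from the original report, but since kubelet exits with status 255 while containerd and kube-proxy stay active, a reasonable next step is to read kubelet's own log output around the failure (a diagnostic sketch):

# Show the most recent kubelet log lines on the affected worker to capture the
# actual error message behind the exit status 255.
sudo journalctl -u kubelet --no-pager -n 100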

Initial ssh commands require interactively saying 'yes'

The first time you connect to a host whose host key ssh has not seen before, it asks you to confirm it manually. There is an ssh option that automatically accepts the key on first connection, making these commands non-interactive. Of course, disabling host-key checking like this should not be done lightly, but since these are transient/temporary hosts, the risk is small.

I will create a PR to enable this, allowing you to decide whether it is an acceptable trade-off for making the commands easier to run without as much interaction.
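
A minimal sketch of the option in question (assuming OpenSSH 7.6 or newer for accept-new; older clients would need the weaker StrictHostKeyChecking=no):

# Automatically add unknown host keys on first connection instead of prompting,
# while still rejecting keys that later change. The key file and user shown
# here are placeholders, not necessarily what the PR will use.
ssh -o StrictHostKeyChecking=accept-new -i kubernetes.id_rsa ubuntu@${EXTERNAL_IP}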

ssh not available

I'm having trouble using ssh to reach an instance with ami-07be40433001d2433. I'm nearly certain this is an issue with the AMI itself, as I launched it in different VPCs and subnets, with different keys, etc. To reproduce, simply launch the AMI with default settings and attempt to ssh with the assigned key; the result will be: Permission denied (publickey).

The AMI is derived from https://github.com/prabhatsharma/kubernetes-the-hard-way-aws/blob/master/docs/03-compute-resources.md#instance-image

IMAGE_ID=$(aws ec2 describe-images --owners 099720109477 \
  --filters \
  'Name=root-device-type,Values=ebs' \
  'Name=architecture,Values=x86_64' \
  'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*' \
  | jq -r '.Images|sort_by(.Name)[-1]|.ImageId')
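
Not part of the original report, but a common cause of Permission denied (publickey) on Ubuntu AMIs is connecting with the wrong login user: Canonical's images expect ubuntu, not ec2-user or root. A quick check (the key path and IP are placeholders):

# Verbose output shows which keys are offered and why they are rejected.
ssh -v -i kubernetes.id_rsa ubuntu@${PUBLIC_IP}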

Broken Pipe issue on IMAGE_ID step

Hi there,
In 03-compute-resources.md, a few co-workers and I tried running:

IMAGE_ID=$(aws ec2 describe-images --owners 099720109477 \
  --filters \
  'Name=root-device-type,Values=ebs' \
  'Name=architecture,Values=x86_64' \
  'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*' \
  | jq -r '.Images|sort_by(.Name)[-1]|.ImageId')

We're running into an error:

parse error: Invalid numeric literal at line 1, column 7

[Errno 32] Broken pipe
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>
BrokenPipeError: [Errno 32] Broken pipe

I would appreciate any ideas or help.
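
A possible explanation (an assumption, not confirmed in the thread): the jq parse error means jq is not receiving JSON, which happens when the AWS CLI's default output format is set to text or table in ~/.aws/config. Forcing JSON output for this one call should sidestep that:

IMAGE_ID=$(aws ec2 describe-images --owners 099720109477 --output json \
  --filters \
  'Name=root-device-type,Values=ebs' \
  'Name=architecture,Values=x86_64' \
  'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*' \
  | jq -r '.Images|sort_by(.Name)[-1]|.ImageId')

Alternatively, the jq pipeline could be dropped in favor of the CLI's own JMESPath query, something like --query 'sort_by(Images, &Name)[-1].ImageId' --output text.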

Cleanup fails if instance not yet terminated

The steps here fail:

An error occurred (DependencyViolation) when calling the DetachInternetGateway operation: Network vpc-0be1b4c40fcd63a81 has some mapped public address(es). Please unmap those public address(es) before detaching the gateway.

...if the instances from the previous section have not yet finished terminating.

I will create a PR to insert a step that waits until the instances are terminated before returning the prompt (allowing the rest of the commands to succeed).
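
A minimal sketch of such a wait step (assuming ${INSTANCE_IDS} is a hypothetical variable holding the IDs passed to terminate-instances in the previous section):

# Block until EC2 reports every instance as terminated, so the gateway detach
# and the rest of the cleanup no longer race against instance shutdown.
aws ec2 wait instance-terminated --instance-ids ${INSTANCE_IDS}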

Failed to create pod sandbox: invalid CIDR address

Hi community,

I have run into the following problem at the last step. Does anyone know how to work around it?

Many Thanks
Weibo

Administrator:~/environment $ kubectl get pods -l app=nginx
NAME                     READY   STATUS              RESTARTS   AGE
nginx-6799fc88d8-9rqnj   0/1     ContainerCreating   0          7m56s

Administrator:~/environment $ kubectl describe pod nginx-6799fc88d8-9rqnj
Name:           nginx-6799fc88d8-9rqnj
Namespace:      default
Priority:       0
Node:           ip-10-0-1-22/10.0.1.22
Start Time:     Fri, 24 Feb 2023 23:06:49 +0000
Labels:         app=nginx
                pod-template-hash=6799fc88d8
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/nginx-6799fc88d8
Containers:
  nginx:
    Container ID:   
    Image:          nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4spk9 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-4spk9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason                  Age                     From               Message
  ----     ------                  ----                    ----               -------
  Normal   Scheduled               8m3s                    default-scheduler  Successfully assigned default/nginx-6799fc88d8-9rqnj to ip-10-0-1-22
  Warning  FailedCreatePodSandBox  8m3s                    kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8eb840c025368927a33b29fce47731fbf3113f3ef7d5e614fa4d896cd70bef13": invalid CIDR address:
  Warning  FailedCreatePodSandBox  7m52s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "6854f4c1599233e6714ee46f4c5778ef34536ec2867cdad0c04d69c061a5ab12": invalid CIDR address:
  Warning  FailedCreatePodSandBox  7m40s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "090f853a5cd89433b37a9415f4b7d67e95e5b31846aabbe56bac5273240f82d5": invalid CIDR address:
  Warning  FailedCreatePodSandBox  7m28s                   kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9f7eb5c84fcafbeee9ab5f663a8fb829ef8e5d5983d3953f484f7f3819832d09": invalid CIDR address:
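
The empty value after "invalid CIDR address:" suggests the pod-cidr value was not substituted when the CNI bridge configuration was written on that worker. A possible check on the affected node (the path is the one used in the worker bootstrapping lab):

# Inspect the bridge plugin config; if the "subnet" field is blank, re-run the
# step that writes this file with the pod-cidr value from the instance user
# data for this worker.
cat /etc/cni/net.d/10-bridge.conf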

two CIDR blocks

What is the significance of the two CIDR blocks:
"10.240.0.0/24",
"10.200.0.0/16"

The first one, I understand, is the VPC CIDR, but why is the second one used in the security groups? The same value is also used in Kelsey's repository, but I could not find much explanation of it.
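
Some context that may help (not from the original post): in this guide 10.240.0.0/24 is the VPC/node network, while 10.200.0.0/16 is the cluster-wide pod network, from which each worker is assigned a /24 slice (10.200.0.0/24, 10.200.1.0/24, ...). Traffic arriving from pods carries source addresses in that range, which is why the security group must allow it. The same value shows up in the control-plane configuration, for example:

# kube-controller-manager flag (controller nodes)
--cluster-cidr=10.200.0.0/16
# kube-proxy-config.yaml (worker nodes)
clusterCIDR: "10.200.0.0/16"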

Not able to ssh to worker node after configuring containerd

I followed all the steps and everything looked great until the last step, bootstrapping the Kubernetes worker nodes. I configured worker-0 and enabled containerd, then logged out to the main server to deploy the same on the other two workers, but I am no longer able to ssh into worker-0 after enabling containerd. I also tried to log in to the EC2 instance from the console, but that did not work either. I have set this up 3-4 times and get stuck in the same place every time; let me know if this is a known issue.

Outdated kubernetes version

As per the CNCF exam tips document, the current Kubernetes version is 1.12. However, this repository currently targets Kubernetes v1.11.

etcd cluster configuration failed

Hi all,
I was trying to bootstrap my etcd cluster following this guide: https://github.com/prabhatsharma/kubernetes-the-hard-way-aws/blob/master/docs/07-bootstrapping-etcd.md
However, I bumped into the following error when I tried to get the list of etcd members.

{"level":"warn","ts":"2020-12-07T04:24:30.778Z","caller":"clientv3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"endpoint://client-8a517529-6562-4d91-a83d-3f2c39531dc4/127.0.0.1:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused\""}

Does anyone have insight on this?
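
Not from the original thread, but "connection refused" on 127.0.0.1:2379 usually means etcd itself is not running or failed to start, so checking the service on each controller is a reasonable first step (a diagnostic sketch):

# Confirm the unit is active and read the most recent etcd log lines for the
# actual startup error (bad certificate paths and wrong peer URLs are common).
sudo systemctl status etcd --no-pager
sudo journalctl -u etcd --no-pager -n 50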
