Comments (11)

alexellis commented on May 28, 2024

Hi thanks for getting in touch.

What do you get from kubectl get events --sort-by=.metadata.creationTimestamp --all-namespaces? If it's over 10 lines, please use a Gist for the output.

You could also try kubectl get pod --all-namespaces to see what is not starting and then use kubectl describe for more info.
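
A minimal sketch of that diagnostic sequence, assuming a kubeconfig that points at the cluster (the pod name placeholder is illustrative):

kubectl get events --sort-by=.metadata.creationTimestamp --all-namespaces
kubectl get pod --all-namespaces
kubectl describe pod <pod-name> --namespace kube-system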

moficodes commented on May 28, 2024

kubectl get pod --all-namespaces outputs the following:

NAMESPACE     NAME                                   READY     STATUS             RESTARTS   AGE
kube-system   coredns-86c58d9df4-9hx5c               1/1       Running            0          23h
kube-system   coredns-86c58d9df4-nfgk5               1/1       Running            0          23h
kube-system   etcd-k8s-master-1                      1/1       Running            0          23h
kube-system   kube-apiserver-k8s-master-1            1/1       Running            0          23h
kube-system   kube-controller-manager-k8s-master-1   1/1       Running            0          23h
kube-system   kube-proxy-4k2mc                       1/1       Running            0          23h
kube-system   kube-proxy-9xbrw                       1/1       Running            0          23h
kube-system   kube-proxy-l6kz6                       1/1       Running            0          23h
kube-system   kube-proxy-rfntk                       1/1       Running            0          23h
kube-system   kube-scheduler-k8s-master-1            1/1       Running            0          23h
kube-system   kubernetes-dashboard-57df4db6b-ftc6v   0/1       CrashLoopBackOff   6          10m
kube-system   weave-net-5rmmn                        2/2       Running            1          23h
kube-system   weave-net-7ncw5                        2/2       Running            0          23h
kube-system   weave-net-ccvwc                        2/2       Running            0          23h
kube-system   weave-net-fknrp                        2/2       Running            0          23h

This is the output from kubectl describe on the failing pod:

Name:               kubernetes-dashboard-57df4db6b-ftc6v
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               k8s-node-3/192.168.0.103
Start Time:         Tue, 01 Jan 2019 12:42:10 -0500
Labels:             k8s-app=kubernetes-dashboard
                    pod-template-hash=57df4db6b
Annotations:        <none>
Status:             Running
IP:                 10.47.0.1
Controlled By:      ReplicaSet/kubernetes-dashboard-57df4db6b
Containers:
  kubernetes-dashboard:
    Container ID:  docker://75e26f8dfb8731f09296c646afc380cc2640003dfafdac4f3f5489288dc14ed6
    Image:         k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    Image ID:      docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 01 Jan 2019 12:48:56 -0500
      Finished:     Tue, 01 Jan 2019 12:48:56 -0500
    Ready:          False
    Restart Count:  6
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-jd5sb (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  kubernetes-dashboard-token-jd5sb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-jd5sb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age               From                 Message
  ----     ------     ----              ----                 -------
  Normal   Scheduled  8m                default-scheduler    Successfully assigned kube-system/kubernetes-dashboard-57df4db6b-ftc6v to k8s-node-3
  Normal   Pulling    8m                kubelet, k8s-node-3  pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Normal   Pulled     7m                kubelet, k8s-node-3  Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
  Normal   Started    6m (x4 over 7m)   kubelet, k8s-node-3  Started container
  Normal   Pulled     5m (x4 over 7m)   kubelet, k8s-node-3  Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
  Normal   Created    5m (x5 over 7m)   kubelet, k8s-node-3  Created container
  Warning  BackOff    3m (x25 over 7m)  kubelet, k8s-node-3  Back-off restarting failed container
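
With a CrashLoopBackOff and exit code 1 like the above, the container's own logs are usually the next place to look. A hedged sketch, not a step from the thread, using the pod name from the output:

kubectl logs kubernetes-dashboard-57df4db6b-ftc6v --namespace kube-system --previous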

moficodes commented on May 28, 2024

The output for kubectl get events --sort-by=.metadata.creationTimestamp --all-namespaces:

NAMESPACE     LAST SEEN   FIRST SEEN   COUNT     NAME                                                    KIND         SUBOBJECT                               TYPE      REASON              SOURCE                  MESSAGE
kube-system   11m         11m          1         kubernetes-dashboard-57df4db6b-ftc6v.1575ca58f45ef129   Pod                                                  Normal    Scheduled           default-scheduler       Successfully assigned kube-system/kubernetes-dashboard-57df4db6b-ftc6v to k8s-node-3
kube-system   11m         11m          1         kubernetes-dashboard-57df4db6b.1575ca58ebae7eb2         ReplicaSet                                           Normal    SuccessfulCreate    replicaset-controller   Created pod: kubernetes-dashboard-57df4db6b-ftc6v
kube-system   11m         11m          1         kubernetes-dashboard.1575ca58e4964d57                   Deployment                                           Normal    ScalingReplicaSet   deployment-controller   Scaled up replica set kubernetes-dashboard-57df4db6b to 1
kube-system   11m         11m          1         kubernetes-dashboard-57df4db6b-ftc6v.1575ca599c67c84c   Pod          spec.containers{kubernetes-dashboard}   Normal    Pulling             kubelet, k8s-node-3     pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
kube-system   10m         10m          1         kubernetes-dashboard-57df4db6b-ftc6v.1575ca676bd83518   Pod          spec.containers{kubernetes-dashboard}   Normal    Pulled              kubelet, k8s-node-3     Successfully pulled image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1"
kube-system   9m          10m          5         kubernetes-dashboard-57df4db6b-ftc6v.1575ca6894db3dd3   Pod          spec.containers{kubernetes-dashboard}   Normal    Created             kubelet, k8s-node-3     Created container
kube-system   9m          10m          4         kubernetes-dashboard-57df4db6b-ftc6v.1575ca68c23b08ab   Pod          spec.containers{kubernetes-dashboard}   Normal    Started             kubelet, k8s-node-3     Started container
kube-system   9m          10m          4         kubernetes-dashboard-57df4db6b-ftc6v.1575ca69065f4ec7   Pod          spec.containers{kubernetes-dashboard}   Normal    Pulled              kubelet, k8s-node-3     Container image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1" already present on machine
kube-system   1m          10m          48        kubernetes-dashboard-57df4db6b-ftc6v.1575ca698da0d203   Pod          spec.containers{kubernetes-dashboard}   Warning   BackOff             kubelet, k8s-node-3     Back-off restarting failed container

alexellis commented on May 28, 2024

Hi,

It looks like there is a problem with the version of the dashboard that the Kubernetes community is maintaining. Their "head" version seems to work fine, so I've updated the guide.

Since I have this working now, I'll close the issue - please try out the new instructions and let me know how you get on.

Alex

alexellis commented on May 28, 2024

[Screenshot: 2019-01-01 at 18:03:46]

moficodes commented on May 28, 2024

I may be doing this wrong.

I got the dashboard running. I then tried to access it via the URL http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

That returned a 404. Then I changed the service name to http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard-head:/proxy/ and got:

Error: 'tls: oversized record received with length 20527'
Trying to reach: 'https://10.47.0.1:9090/'

Then I changed https to http, so the final URL became http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard-head:/proxy/, and this got me to the dashboard. But there is nothing in the dashboard. Are there any additional steps required to get my nodes, pods and services to show up in the dashboard?
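
For reference, a sketch of the proxy flow that worked here (port 8001 is kubectl proxy's default; the http: scheme and service name come from the messages above):

kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard-head:/proxy/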

I appreciate the tutorial and your help. I'm probably missing something silly.

moficodes commented on May 28, 2024

[Screenshot: 2019-01-01 at 1:30:36 PM]

moficodes commented on May 28, 2024

@alexellis Take a look when you get some downtime.

moficodes commented on May 28, 2024

kubectl create clusterrolebinding kubernetes-dashboard-head --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard-head
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-head created

Adding this made it work for me.
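
One way to confirm the binding took effect is to impersonate the service account; a sketch, with the account name taken from the command above:

kubectl auth can-i list pods --as=system:serviceaccount:kube-system:kubernetes-dashboard-head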

alexellis commented on May 28, 2024

That step was documented - it's the large section of YAML. It should work for you, as it worked for me on a brand-new cluster. It's the step just before the kubectl apply -f statement.

I used kubectl port-forward instead of proxy.
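
A minimal port-forward sketch, assuming the head deployment's service exposes the plain-HTTP port 9090 seen in the earlier error message:

kubectl port-forward service/kubernetes-dashboard-head 9090:9090 --namespace kube-system
# then browse to http://localhost:9090/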

moficodes commented on May 28, 2024

That's what I missed. Proxy works with the clusterrole 😃
Thanks a bunch, Alex. 🎉
