
kube-explorer's Introduction

kube-explorer

kube-explorer is a portable explorer for Kubernetes without any dependencies.

It packages the Rancher steve framework together with its dashboard, recompiled and compressed, to provide an almost completely stateless Kubernetes resource manager.

Usage ✅

Please download the binary from the release page.

To run an HTTP only server:

./kube-explorer --kubeconfig=xxxx --http-listen-port=9898 --https-listen-port=0

Then open a browser and visit http://x.x.x.x:9898 .

Build ✅

To debug on an AMD64 Linux host:

make dev

# $basedir=/opt/ui/dist/
# prepare the file tree like this
# $basedir/dashboard/
# $basedir/index.html

# good to go!
./kube-explorer --debug  --ui-path /opt/ui/dist/ --http-listen-port=9898 --https-listen-port=0

To build all cross-platform binaries:

CROSS=1 make

kube-explorer's People

Contributors

bagechashu, jaciechao, niusmallnan, orangedeng


kube-explorer's Issues

With an nginx proxy, the console keeps printing 'Error during subscribe websocket:'

ERRO[2300] Error during subscribe websocket: request origin not allowed by Upgrader.CheckOrigin
ERRO[2304] Error during subscribe websocket: request origin not allowed by Upgrader.CheckOrigin
ERRO[2310] Error during subscribe websocket: request origin not allowed by Upgrader.CheckOrigin
ERRO[2316] Error during subscribe websocket: request origin not allowed by Upgrader.CheckOrigin
ERRO[2325] Error during subscribe websocket: request origin not allowed by Upgrader.CheckOrigin
ERRO[2333] Error during subscribe websocket: request origin not allowed by Upgrader.CheckOrigin
ERRO[2343] Error during subscribe websocket: request origin not allowed by Upgrader.CheckOrigin
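
This message typically means the WebSocket upgrade was rejected because the Origin header no longer matches the Host header once the request has passed through the proxy. A minimal nginx location block that preserves Host and forwards the upgrade headers usually avoids it (a sketch; the upstream address and path are assumptions):

location / {
    proxy_pass http://127.0.0.1:9898;
    # keep the original Host so the backend's Origin check still matches
    proxy_set_header Host $host;
    # forward the WebSocket upgrade handshake
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}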

Does this support k8s v1.22?

I deployed it in k3s (v1.22.3+k3s1), and the pod log shows:

E1126 02:26:18.487162       7 reflector.go:139] pkg/mod/github.com/rancher/[email protected]/tools/cache/reflector.go:168: Failed to watch *v1beta1.CustomResourceDefinition: failed to list *v1beta1.CustomResourceDefinition: the server could not find the requested resource
E1126 02:26:19.774914       7 reflector.go:139] pkg/mod/github.com/rancher/[email protected]/tools/cache/reflector.go:168: Failed to watch *v1beta1.CustomResourceDefinition: failed to list *v1beta1.CustomResourceDefinition: the server could not find the requested resource
E1126 02:26:21.485751       7 reflector.go:139] pkg/mod/github.com/rancher/[email protected]/tools/cache/reflector.go:168: Failed to watch *v1beta1.CustomResourceDefinition: failed to list *v1beta1.CustomResourceDefinition: the server could not find the requested resource
E1126 02:26:25.192911       7 reflector.go:139] pkg/mod/github.com/rancher/[email protected]/tools/cache/reflector.go:168: Failed to watch *v1beta1.CustomResourceDefinition: failed to list *v1beta1.CustomResourceDefinition: the server could not find the requested resource
E1126 02:26:32.218456       7 reflector.go:139] pkg/mod/github.com/rancher/[email protected]/tools/cache/reflector.go:168: Failed to watch *v1beta1.CustomResourceDefinition: failed to list *v1beta1.CustomResourceDefinition: the server could not find the requested resource
E1126 02:26:48.882640       7 reflector.go:139] pkg/mod/github.com/rancher/[email protected]/tools/cache/reflector.go:168: Failed to watch *v1beta1.CustomResourceDefinition: failed to list *v1beta1.CustomResourceDefinition: the server could not find the requested resource
E1126 02:27:27.682021       7 reflector.go:139] pkg/mod/github.com/rancher/[email protected]/tools/cache/reflector.go:168: Failed to watch *v1beta1.CustomResourceDefinition: failed to list *v1beta1.CustomResourceDefinition: the server could not find the requested resource
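
For context, apiextensions.k8s.io/v1beta1 was removed in Kubernetes v1.22, so a client that still lists v1beta1 CustomResourceDefinitions sees exactly this error. You can confirm which versions the server still serves with plain kubectl:

# on v1.22+ this should print only apiextensions.k8s.io/v1
kubectl api-versions | grep apiextensions.k8s.io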

multi-kubeconfig

How can multiple kubeconfigs be supported? I'd like to check my Rancher information in one web portal.

Thanks.


kube-explorer --kubeconfig=xxxx --http-listen-port=9898 --https-listen-port=0
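
kube-explorer appears to serve a single cluster per process (it takes exactly one --kubeconfig), so until multi-kubeconfig support lands, one workaround is to run an instance per cluster on different ports; a sketch with hypothetical file names:

# one instance per cluster; the ports are arbitrary
./kube-explorer --kubeconfig=$HOME/.kube/cluster-a.yaml --http-listen-port=9898 --https-listen-port=0 &
./kube-explorer --kubeconfig=$HOME/.kube/cluster-b.yaml --http-listen-port=9899 --https-listen-port=0 &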

The program produces no output, and the port is not listened on

Version: v0.2.3
OS: CentOS 7.9
Command line: ./kube-explorer --kubeconfig=./k3s.yaml --http-listen-port=9898 --https-listen-port=0
Symptoms:

  • No output at all in the terminal; enabling --debug and trying debug levels 1-10 also produces no output
  • netstat -an shows port 9898 is not being listened on
  • Running with sudo shows the same behavior
  • The system firewall is off and SELinux is disabled

Exception in "view log", object.replace is not a function


# exec
/usr/local/bin/kube-explorer --kubeconfig=/etc/rancher/k3s/k3s.yaml --http-listen-port=9898 --https-listen-port=0 --debug
# version
root@fe:~# /usr/local/bin/kube-explorer --version
kube-explorer version v0.2.9 (a81fc99)
root@fe:~# k3s --version
k3s version v1.23.8+k3s1 (53f2d4e7)
go version go1.17.5

Does not work with v1.25.14+k3s1; the web UI keeps loading and nothing is returned

E0922 10:27:55.959084 15 cacher.go:440] cacher (*flowcontrol.PriorityLevelConfiguration): unexpected ListAndWatch error: failed to list *flowcontrol.PriorityLevelConfiguration: no kind "PriorityLevelConfiguration" is registered for version "flowcontrol.apiserver.k8s.io/v1beta3" in scheme "k8s.io/[email protected]/pkg/runtime/scheme.go:100"; reinitializing...
W0922 10:27:56.043618 15 reflector.go:424] storage/cacher.go:/flowschemas: failed to list *flowcontrol.FlowSchema: no kind "FlowSchema" is registered for version "flowcontrol.apiserver.k8s.io/v1beta3" in scheme "k8s.io/[email protected]/pkg/runtime/scheme.go:100"
E0922 10:27:56.043651 15 cacher.go:440] cacher (*flowcontrol.FlowSchema): unexpected ListAndWatch error: failed to list *flowcontrol.FlowSchema: no kind "FlowSchema" is registered for version "flowcontrol.apiserver.k8s.io/v1beta3" in scheme "k8s.io/[email protected]/pkg/runtime/scheme.go:100"; reinitializing...
W0922 10:27:56.043892 15 reflector.go:424] storage/cacher.go:/flowschemas: failed to list *flowcontrol.FlowSchema: no kind "FlowSchema" is registered for version "flowcontrol.apiserver.k8s.io/v1beta3" in scheme "k8s.io/[email protected]/pkg/runtime/scheme.go:100"
E0922 10:27:56.043912 15 cacher.go:440] cacher (*flowcontrol.FlowSchema): unexpected ListAndWatch error: failed to list *flowcontrol.FlowSchema: no kind "FlowSchema" is registered for version "flowcontrol.apiserver.k8s.io/v1beta3" in scheme "k8s.io/[email protected]/pkg/runtime/scheme.go:100"; reinitializing...
W0922 10:27:56.044465 15 reflector.go:424] storage/cacher.go:/prioritylevelconfigurations: failed to list *flowcontrol.PriorityLevelConfiguration: no kind "PriorityLevelConfiguration" is registered for version "flowcontrol.apiserver.k8s.io/v1beta3" in scheme "k8s.io/[email protected]/pkg/runtime/scheme.go:100"
E0922 10:27:56.044490 15 cacher.go:440] cacher (*flowcontrol.PriorityLevelConfiguration): unexpected ListAndWatch error: failed to list *flowcontrol.PriorityLevelConfiguration: no kind "PriorityLevelConfiguration" is registered for version "flowcontrol.apiserver.k8s.io/v1beta3" in scheme "k8s.io/[email protected]/pkg/runtime/scheme.go:100"; reinitializing...

The pod web shell reports it cannot connect after opening; kubectl can open it

Symptoms:

  1. log
    Find All: [cluster] pod
    socket.js:81 Socket connecting (id=3, url=ws://xxxx:9898/api/v1/namespaces/default/pods/centos-test-d9b5b57b7-29r4t/log...)
    socket.js:86 WebSocket connection to 'ws://xxxx:9898/api/v1/namespaces/default/pods/centos-test-d9b5b57b7-29r4t/log?previous=false&follow=true&timestamps=true&pretty=true&container=centos-x&sinceSeconds=1800&sockId=3' failed:
    value @ socket.js:86
    (anonymous) @ ContainerLogs.vue:315
    d @ runtime.js:45
    (anonymous) @ runtime.js:274
    O.forEach.t. @ runtime.js:97
    r @ asyncToGenerator.js:3
    l @ asyncToGenerator.js:25
    (anonymous) @ asyncToGenerator.js:32
    (anonymous) @ asyncToGenerator.js:21
    connect @ ContainerLogs.vue:222
    (anonymous) @ ContainerLogs.vue:214
    d @ runtime.js:45
    (anonymous) @ runtime.js:274
    O.forEach.t. @ runtime.js:97
    r @ asyncToGenerator.js:3
    l @ asyncToGenerator.js:25
    (anonymous) @ asyncToGenerator.js:32
    (anonymous) @ asyncToGenerator.js:21
    mounted @ ContainerLogs.vue:213
    Qt @ vue.runtime.esm.js:1854
    cn @ vue.runtime.esm.js:4219
    insert @ vue.runtime.esm.js:3139
    $ @ vue.runtime.esm.js:6346
    (anonymous) @ vue.runtime.esm.js:6565
    t._update @ vue.runtime.esm.js:3948
    r @ vue.runtime.esm.js:4066
    _n.get @ vue.runtime.esm.js:4479
    _n.run @ vue.runtime.esm.js:4554
    gn @ vue.runtime.esm.js:4310
    (anonymous) @ vue.runtime.esm.js:1980
    ie @ vue.runtime.esm.js:1906
    socket.js:309 Socket error (state=connecting, id=3)
    socket.js:252 Socket 3 closed
    ContainerLogs.vue:286 Connect Error r {isTrusted: false}

  2. shell
    socket.js:86 WebSocket connection to 'ws://xxxxx:9898/api/v1/namespaces/default/pods/centos-test-d9b5b57b7-29r4t/exec?container=centos-x&stdout=1&stdin=1&stderr=1&tty=1&command=%2Fbin%2Fsh&command=-c&command=TERM%3Dxterm-256color%3B%20export%20TERM%3B%20%5B%20-x%20%2Fbin%2Fbash%20%5D%20%26%26%20(%5B%20-x%20%2Fusr%2Fbin%2Fscript%20%5D%20%26%26%20%2Fusr%2Fbin%2Fscript%20-q%20-c%20%22%2Fbin%2Fbash%22%20%2Fdev%2Fnull%20%7C%7C%20exec%20%2Fbin%2Fbash)%20%7C%7C%20exec%20%2Fbin%2Fsh&sockId=3' failed:
    value @ socket.js:86
    (anonymous) @ ContainerShell.vue:232
    d @ runtime.js:45
    (anonymous) @ runtime.js:274
    O.forEach.t. @ runtime.js:97
    r @ asyncToGenerator.js:3
    l @ asyncToGenerator.js:25
    (anonymous) @ asyncToGenerator.js:32
    (anonymous) @ asyncToGenerator.js:21
    connect @ ContainerShell.vue:183
    (anonymous) @ ContainerShell.vue:98
    d @ runtime.js:45
    (anonymous) @ runtime.js:274
    O.forEach.t. @ runtime.js:97
    r @ asyncToGenerator.js:3
    l @ asyncToGenerator.js:25
    socket.js:309 Socket error (state=connecting, id=3)
    socket.js:252 Socket 3 closed
    ContainerShell.vue:206 Connect Error r {isTrusted: false}

Server log:
INFO[0002] Watching metadata for /v1, Kind=ResourceQuota
INFO[0002] Watching metadata for apps/v1, Kind=ReplicaSet
INFO[0002] Watching metadata for certificates.k8s.io/v1beta1, Kind=CertificateSigningRequest
INFO[0002] Watching metadata for /v1, Kind=PersistentVolumeClaim
INFO[0002] Watching metadata for /v1, Kind=Secret
INFO[0002] Watching metadata for /v1, Kind=PersistentVolume
INFO[0002] Watching metadata for rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding
ERRO[0011] Error during subscribe websocket: close sent
ERRO[0011] Error during subscribe websocket: close sent
ERRO[0366] Error during subscribe websocket: close sent
ERRO[0366] Error during subscribe websocket: close sent
ERRO[0483] Error during subscribe websocket: close sent
ERRO[0483] Error during subscribe websocket: close sent

On Linux:
Viewing logs and executing a shell from the command line works fine.

Failed to start since last commit

Created an Aliyun cluster using Autok3s:

ERRO[2022-10-25T11:49:33+08:00] fail to start kube-explorer for cluster k3s-cluster.cn-shanghai.alibaba: fork/exec /usr/local/bin/kube-explorer: exec format error

I started using autok3s and kube-explorer last Friday; it worked at first when I created a cluster the quick way for testing.
Then I switched to the detailed way of creating a cluster, and it has failed ever since. The cluster itself is good to go; only the explorer fails.

Looking for your help! 😥 @niusmallnan
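
An "exec format error" usually means the binary was built for a different CPU architecture or OS than the host running it. A quick check with standard tools (nothing kube-explorer specific):

file /usr/local/bin/kube-explorer   # shows the OS/architecture the binary was built for
uname -sm                           # shows the host OS and architecture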

Related objects are not cleaned up after the kubectl shell is closed

Version: kube-explorer v0.2.9 bundled with autok3s v0.5.2-rc2

Every time the kubectl shell is opened, objects such as ClusterRoleBinding, ClusterRole, and ConfigMap are created, but they are not cleaned up after the shell is closed.

Steps to reproduce

  1. Open kube-explorer
  2. Click kubectl shell in the top-right corner
  3. Close the shell window
  4. Repeat the steps above and observe that orphaned objects of these types keep accumulating


Expected behavior

The related objects are cleaned up after the shell is closed.
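
As a stopgap, the leftovers can be listed and removed by hand. The filter below is only a guess based on the dashboard-shell-xxxxx pod name mentioned in another issue, so verify what it matches before deleting anything:

kubectl get clusterrolebinding,clusterrole -o name | grep -i shell
kubectl get configmap --all-namespaces | grep -i shell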

Tolerations of the shell pod that kube-explorer creates automatically

Version: kubernetes v1.25.3, kube-explorer v0.2.12

Opening the kubectl shell creates a shell pod, but the pod's tolerations are configured as:

- effect: NoSchedule
  key: node-role.kubernetes.io/controlplane
  operator: Equal
  value: "true"

However, the taint on the master node of a cluster created with kubeadm is node-role.kubernetes.io/control-plane:NoSchedule, so the shell pod cannot be scheduled. Could this tool support single-node clusters created with kubeadm?
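
For reference, kubeadm (v1.24 and later) taints control-plane nodes with node-role.kubernetes.io/control-plane:NoSchedule, so a toleration along these lines would be needed for the shell pod to schedule there; this is only a sketch, not the project's current configuration:

- effect: NoSchedule
  key: node-role.kubernetes.io/control-plane
  operator: Exists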

Steps to reproduce
Deploy a single-node cluster with kubeadm
Open kube-explorer
Click kubectl shell in the top-right corner
kubectl -n kube-system describe pod dashboard-shell-xxxxx

support --insecure-skip-tls-verify like kubectl

The Kubernetes API server is usually deployed with a TLS certificate, and the host being accessed is expected to be verifiable against that certificate.

In some special scenarios we need to skip this verification so that testing can be completed quickly; kubectl supports this case via --insecure-skip-tls-verify.

kube-explorer should also support this feature.
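
Until such a flag exists, a workaround is to set the equivalent option in the kubeconfig that kube-explorer already reads. This is standard kubeconfig behavior rather than a kube-explorer feature, and the certificate-authority / certificate-authority-data fields must be removed from the cluster entry for it to take effect:

clusters:
- name: test-cluster
  cluster:
    server: https://x.x.x.x:6443
    insecure-skip-tls-verify: true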

Opening the home page shows an error

Version: v0.2.3
OS: CentOS 7.9
Command line: ./kube-explorer --kubeconfig=./k3s.yaml --http-listen-port=9898 --https-listen-port=0

Symptoms:

  • Opening the home page in Chrome (version 71.0) shows Error: Promise[t] is not a function below the snake logo (sorry, no screenshot is possible because this is a pure intranet environment)

kube explorer not working with k3d local cluster

kube-explorer does not start and shows no logs. Here is my kube-explorer command:

kube-explorer --kubeconfig="$HOME/.kube/config" --context=k3d-devship --http-listen-port=9080 --https-listen-port=0 --debug

When I run the command it shows nothing, even though I passed the --debug flag. If I go to http://localhost:9080, the browser shows a 'Site can not be reached' error.

I am using kube-explorer version v0.3.1 (f4970b8)

My Machine

macOS v12.6.4 (MacbookPro Mid 2015 model)
Darwin Kernel Version 21.6.0: Thu Mar  9 20:08:59 PST 2023; root:xnu-8020.240.18.700.8~1/RELEASE_X86_64 x86_64

Other information

# devship.localhost resolves to 127.0.0.1
$ kubectl cluster-info

Kubernetes control plane is running at https://devship.localhost:6445
CoreDNS is running at https://devship.localhost:6445/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://devship.localhost:6445/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

$ kubectl get nodes

NAME                   STATUS   ROLES                  AGE   VERSION
k3d-devship-server-0   Ready    control-plane,master   13m   v1.26.3+k3s1
k3d-devship-agent-1    Ready    <none>                 13m   v1.26.3+k3s1
k3d-devship-agent-0    Ready    <none>                 13m   v1.26.3+k3s1

kube-explorer crashed after being put into the background

Environment:
VMware CentOS 7 with k3s

Start command:
/home/kube-explorer --kubeconfig=/root/.kube/config --http-listen-port=9898 --https-listen-port=0 >>/home/kube-explorer.log 2>&1 &

Below is the log that was recorded:

W0514 15:58:21.697574 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0514 15:58:21.706767 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
time="2021-05-14T15:58:21+08:00" level=info msg="Starting rbac.authorization.k8s.io/v1, Kind=RoleBinding controller"
time="2021-05-14T15:58:21+08:00" level=info msg="Starting rbac.authorization.k8s.io/v1, Kind=Role controller"
time="2021-05-14T15:58:21+08:00" level=info msg="Starting rbac.authorization.k8s.io/v1, Kind=ClusterRole controller"
time="2021-05-14T15:58:21+08:00" level=info msg="Starting rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding controller"
time="2021-05-14T15:58:21+08:00" level=info msg="Starting apiregistration.k8s.io/v1, Kind=APIService controller"
time="2021-05-14T15:58:21+08:00" level=info msg="Starting apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition controller"
time="2021-05-14T15:58:21+08:00" level=info msg="Listening on :9898"
time="2021-05-14T15:58:22+08:00" level=info msg="Refreshing all schemas"
time="2021-05-14T15:58:22+08:00" level=info msg="Refreshing all schemas"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind Binding"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind ComponentStatus"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind ConfigMap"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind Endpoints"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind Event"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind LimitRange"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind Namespace"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind Node"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind PersistentVolumeClaim"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind PersistentVolume"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind Pod"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind PodTemplate"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind ReplicationController"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind ResourceQuota"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind Secret"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind ServiceAccount"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion /v1 Kind Service"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion apiregistration.k8s.io/v1 Kind APIService"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion apiregistration.k8s.io/v1beta1 Kind APIService"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion apps/v1 Kind ControllerRevision"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion apps/v1 Kind DaemonSet"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion apps/v1 Kind Deployment"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion apps/v1 Kind ReplicaSet"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion apps/v1 Kind StatefulSet"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion events.k8s.io/v1 Kind Event"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion events.k8s.io/v1beta1 Kind Event"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion authentication.k8s.io/v1 Kind TokenReview"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion authentication.k8s.io/v1beta1 Kind TokenReview"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion authorization.k8s.io/v1 Kind LocalSubjectAccessReview"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion authorization.k8s.io/v1 Kind SelfSubjectAccessReview"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion authorization.k8s.io/v1 Kind SelfSubjectRulesReview"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion authorization.k8s.io/v1 Kind SubjectAccessReview"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion authorization.k8s.io/v1beta1 Kind LocalSubjectAccessReview"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion authorization.k8s.io/v1beta1 Kind SelfSubjectAccessReview"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion authorization.k8s.io/v1beta1 Kind SelfSubjectRulesReview"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion authorization.k8s.io/v1beta1 Kind SubjectAccessReview"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion autoscaling/v1 Kind HorizontalPodAutoscaler"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion autoscaling/v2beta1 Kind HorizontalPodAutoscaler"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion autoscaling/v2beta2 Kind HorizontalPodAutoscaler"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion batch/v1 Kind Job"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion batch/v1beta1 Kind CronJob"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion certificates.k8s.io/v1 Kind CertificateSigningRequest"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion certificates.k8s.io/v1beta1 Kind CertificateSigningRequest"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion networking.k8s.io/v1 Kind IngressClass"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion networking.k8s.io/v1 Kind Ingress"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion networking.k8s.io/v1 Kind NetworkPolicy"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion networking.k8s.io/v1beta1 Kind IngressClass"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion networking.k8s.io/v1beta1 Kind Ingress"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion extensions/v1beta1 Kind Ingress"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion policy/v1beta1 Kind PodDisruptionBudget"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion policy/v1beta1 Kind PodSecurityPolicy"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion rbac.authorization.k8s.io/v1 Kind ClusterRoleBinding"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion rbac.authorization.k8s.io/v1 Kind ClusterRole"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion rbac.authorization.k8s.io/v1 Kind RoleBinding"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion rbac.authorization.k8s.io/v1 Kind Role"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion rbac.authorization.k8s.io/v1beta1 Kind ClusterRoleBinding"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion rbac.authorization.k8s.io/v1beta1 Kind ClusterRole"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion rbac.authorization.k8s.io/v1beta1 Kind RoleBinding"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion rbac.authorization.k8s.io/v1beta1 Kind Role"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion storage.k8s.io/v1 Kind CSIDriver"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion storage.k8s.io/v1 Kind CSINode"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion storage.k8s.io/v1 Kind StorageClass"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion storage.k8s.io/v1 Kind VolumeAttachment"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion storage.k8s.io/v1beta1 Kind CSIDriver"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion storage.k8s.io/v1beta1 Kind CSINode"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion storage.k8s.io/v1beta1 Kind StorageClass"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion storage.k8s.io/v1beta1 Kind VolumeAttachment"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion admissionregistration.k8s.io/v1 Kind MutatingWebhookConfiguration"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion admissionregistration.k8s.io/v1 Kind ValidatingWebhookConfiguration"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion admissionregistration.k8s.io/v1beta1 Kind MutatingWebhookConfiguration"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion admissionregistration.k8s.io/v1beta1 Kind ValidatingWebhookConfiguration"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion apiextensions.k8s.io/v1 Kind CustomResourceDefinition"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion apiextensions.k8s.io/v1beta1 Kind CustomResourceDefinition"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion scheduling.k8s.io/v1 Kind PriorityClass"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion scheduling.k8s.io/v1beta1 Kind PriorityClass"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion coordination.k8s.io/v1 Kind Lease"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion coordination.k8s.io/v1beta1 Kind Lease"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion node.k8s.io/v1 Kind RuntimeClass"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion node.k8s.io/v1beta1 Kind RuntimeClass"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion discovery.k8s.io/v1beta1 Kind EndpointSlice"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion flowcontrol.apiserver.k8s.io/v1beta1 Kind FlowSchema"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion flowcontrol.apiserver.k8s.io/v1beta1 Kind PriorityLevelConfiguration"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion helm.cattle.io/v1 Kind HelmChart"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion helm.cattle.io/v1 Kind HelmChartConfig"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion k3s.cattle.io/v1 Kind Addon"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion devices.edge.cattle.io/v1alpha1 Kind ModbusDevice"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion devices.edge.cattle.io/v1alpha1 Kind BluetoothDevice"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion devices.edge.cattle.io/v1alpha1 Kind MQTTDevice"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion devices.edge.cattle.io/v1alpha1 Kind OPCUADevice"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion edge.cattle.io/v1alpha1 Kind DeviceLink"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion octopusapi.cattle.io/v1alpha1 Kind Setting"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion octopusapi.cattle.io/v1alpha1 Kind DeviceTemplate"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion octopusapi.cattle.io/v1alpha1 Kind DeviceTemplateRevision"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion octopusapi.cattle.io/v1alpha1 Kind Catalog"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion metrics.k8s.io/v1beta1 Kind NodeMetrics"
time="2021-05-14T15:58:22+08:00" level=info msg="APIVersion metrics.k8s.io/v1beta1 Kind PodMetrics"
W0514 15:58:22.455325 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=Service"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for devices.edge.cattle.io/v1alpha1, Kind=MQTTDevice"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for networking.k8s.io/v1, Kind=IngressClass"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for k3s.cattle.io/v1, Kind=Addon"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for flowcontrol.apiserver.k8s.io/v1beta1, Kind=PriorityLevelConfiguration"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=Secret"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for batch/v1beta1, Kind=CronJob"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for edge.cattle.io/v1alpha1, Kind=DeviceLink"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for events.k8s.io/v1, Kind=Event"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for flowcontrol.apiserver.k8s.io/v1beta1, Kind=FlowSchema"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=PersistentVolumeClaim"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=ServiceAccount"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=PersistentVolume"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for apps/v1, Kind=DaemonSet"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for admissionregistration.k8s.io/v1, Kind=ValidatingWebhookConfiguration"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for devices.edge.cattle.io/v1alpha1, Kind=BluetoothDevice"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for helm.cattle.io/v1, Kind=HelmChartConfig"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for storage.k8s.io/v1, Kind=CSIDriver"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for discovery.k8s.io/v1beta1, Kind=EndpointSlice"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for networking.k8s.io/v1, Kind=Ingress"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for devices.edge.cattle.io/v1alpha1, Kind=OPCUADevice"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for admissionregistration.k8s.io/v1, Kind=MutatingWebhookConfiguration"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for apiregistration.k8s.io/v1, Kind=APIService"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for autoscaling/v2beta2, Kind=HorizontalPodAutoscaler"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=Node"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for rbac.authorization.k8s.io/v1, Kind=ClusterRole"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for apps/v1, Kind=Deployment"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for policy/v1beta1, Kind=PodDisruptionBudget"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for storage.k8s.io/v1, Kind=CSINode"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=Event"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=PodTemplate"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for storage.k8s.io/v1, Kind=StorageClass"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for octopusapi.cattle.io/v1alpha1, Kind=DeviceTemplate"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for apps/v1, Kind=StatefulSet"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=ResourceQuota"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for rbac.authorization.k8s.io/v1, Kind=Role"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for apiextensions.k8s.io/v1, Kind=CustomResourceDefinition"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for storage.k8s.io/v1, Kind=VolumeAttachment"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=Pod"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for node.k8s.io/v1, Kind=RuntimeClass"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for coordination.k8s.io/v1, Kind=Lease"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for devices.edge.cattle.io/v1alpha1, Kind=ModbusDevice"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for certificates.k8s.io/v1, Kind=CertificateSigningRequest"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for apps/v1, Kind=ReplicaSet"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for policy/v1beta1, Kind=PodSecurityPolicy"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for networking.k8s.io/v1, Kind=NetworkPolicy"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=ConfigMap"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=Namespace"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for helm.cattle.io/v1, Kind=HelmChart"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=ReplicationController"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for scheduling.k8s.io/v1, Kind=PriorityClass"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for octopusapi.cattle.io/v1alpha1, Kind=Setting"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for batch/v1, Kind=Job"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for rbac.authorization.k8s.io/v1, Kind=RoleBinding"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for octopusapi.cattle.io/v1alpha1, Kind=DeviceTemplateRevision"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for octopusapi.cattle.io/v1alpha1, Kind=Catalog"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=LimitRange"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for /v1, Kind=Endpoints"
time="2021-05-14T15:58:22+08:00" level=info msg="Watching metadata for apps/v1, Kind=ControllerRevision"
time="2021-05-14T16:04:17+08:00" level=error msg="Error during subscribe websocket: close sent"
time="2021-05-14T16:04:17+08:00" level=error msg="Error during subscribe websocket: close sent"
time="2021-05-14T16:04:23+08:00" level=error msg="Error during subscribe websocket: close sent"
time="2021-05-14T16:04:23+08:00" level=error msg="Error during subscribe websocket: close sent"
W0514 16:06:47.707930 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
time="2021-05-14T16:10:42+08:00" level=error msg="Error during subscribe write tcp 192.168.101.129:9898->192.168.101.1:52432: write: broken pipe"
W0514 16:16:21.710084 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
time="2021-05-14T16:20:13+08:00" level=error msg="Error during subscribe write tcp 192.168.101.129:9898->192.168.101.1:52460: write: broken pipe"
W0514 16:23:00.712928 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0514 16:32:50.716396 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0514 16:40:07.717496 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0514 16:43:15.078192 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 16:43:15.080531 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 16:43:15.109231 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 16:43:15.109346 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 16:50:02.718479 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0514 16:56:10.721367 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0514 16:57:30.477566 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 16:57:30.478075 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 16:57:30.519553 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 16:57:30.538458 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 17:01:57.722639 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0514 17:10:56.727165 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0514 17:13:20.899996 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 17:13:20.902668 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 17:13:20.930667 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 17:13:20.949182 70665 transport.go:260] Unable to cancel request for *client.addQuery
time="2021-05-14T17:13:34+08:00" level=error msg="Error during subscribe websocket: close sent"
time="2021-05-14T17:13:34+08:00" level=error msg="Error during subscribe websocket: close sent"
W0514 17:16:04.729339 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0514 17:21:07.731454 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
time="2021-05-14T17:24:51+08:00" level=error msg="Error during subscribe websocket: close sent"
time="2021-05-14T17:24:51+08:00" level=error msg="Error during subscribe websocket: close sent"
W0514 17:28:26.736444 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0514 17:35:29.737206 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0514 17:43:26.862136 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 17:43:26.864633 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 17:43:26.888163 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 17:43:26.912545 70665 transport.go:260] Unable to cancel request for *client.addQuery
W0514 17:43:42.739727 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
W0514 17:51:12.742223 70665 warnings.go:80] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
time="2021-05-14T17:54:46+08:00" level=error msg="Error during subscribe write tcp 192.168.101.129:9898->192.168.101.1:52724: write: broken pipe"

Can't use with GKE

We use the GKE service; when we run kube-explorer it responds with the following:

FATA[0000] no Auth Provider found for name "gcp"
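
This error means the kubeconfig still references the in-tree gcp auth provider, which is either not compiled into this binary or no longer available (client-go removed the provider upstream in v1.26). The usual fix is to regenerate the kubeconfig against the external auth plugin; the gcloud commands below are a sketch, so check the docs for your gcloud version:

gcloud components install gke-gcloud-auth-plugin
gcloud container clusters get-credentials CLUSTER_NAME --region REGION
# older gcloud releases may also need: export USE_GKE_GCLOUD_AUTH_PLUGIN=True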

CRD support?

Thanks for the great work; this tiny tool is cool. However, I am wondering whether it can support CRD resources. I downloaded and ran it to explore my cluster: all standard k8s resources can be seen from the UI, but no CRDs show up. Is this the way it is supposed to be?

The web shell does not work

When I use the web shell, the status bar displays "Connecting"; after a while the status changes to "Disconnected".

Server log:

ERRO[0007] Error during subscribe websocket: close sent
ERRO[0007] Error during subscribe websocket: close sent
ERRO[0013] Error during subscribe websocket: close sent
ERRO[0013] Error during subscribe websocket: close sent
ERRO[0158] Error during subscribe websocket: close sent
ERRO[0158] Error during subscribe websocket: close sent

How do I set a username and password?

After the service starts, anyone goes straight into the system, which feels insecure. How do I require a username and password to log in?
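
kube-explorer itself does not appear to ship any authentication, so the usual approach is to put it behind a reverse proxy that enforces login. A minimal nginx basic-auth sketch (file paths are assumptions; the WebSocket headers from the proxy example earlier on this page are still required):

location / {
    auth_basic "kube-explorer";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://127.0.0.1:9898;
    proxy_set_header Host $host;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}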

kube-explorer does not support being installed behind a shared ingress gateway

Please enhance the software to support a prepended URI. This way we can present kube-explorer as an application behind a shared ingress gateway. For example, allow all URIs to be prepended with a configurable token (such as /kube-explorer) so that other applications can be available at other endpoints on the same public host.

A possible partial solution is for the ingress rule to rewrite the URL, though this depends on the UI being able to work with that configuration.

Right now, it appears that the following top-level endpoints are necessary (and possibly others):

  • /dashboard
  • /v1
  • /k8s
  • /api/v1
  • /apis
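
Until a configurable URI prefix exists, one workaround on a shared gateway is to give kube-explorer its own hostname rather than a sub-path, which sidesteps collisions on the endpoints listed above. A sketch with hypothetical names (the Service fronting kube-explorer is assumed):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kube-explorer
spec:
  rules:
  - host: kube-explorer.example.com      # dedicated hostname on the shared ingress
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kube-explorer          # hypothetical Service name
            port:
              number: 9898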

macOS Ventura 13.0.1 cannot start kube-explorer

When I upgraded to the latest macOS (Ventura 13.0.1), I found that kube-explorer would not start.
I upgraded kube-explorer to the latest version and still have the same problem.

./kube-explorer --kubeconfig=/Users/xxx/.kube/config --http-listen-port=9898 --https-listen-port=0
[1]    20848 segmentation fault  ./kube-explorer --kubeconfig=/Users/xxxd/.kube/config

logs in ~/Library/Logs/DiagnosticReports/

-------------------------------------
Translated Report (Full Report Below)
-------------------------------------

Incident Identifier: 50D15C8C-FBA9-40CC-BB21-BE0F2D726E48
CrashReporter Key:   F8B6804D-137B-BC28-CFC0-E3EE7FE4EA53
Hardware Model:      Macmini8,1
Process:             kube-explorer [21387]
Path:                /Users/USER/*/kube-explorer
Identifier:          kube-explorer
Version:             ???
Code Type:           X86-64 (Native)
Role:                Unspecified
Parent Process:      Exited process [21385]
Coalition:           com.googlecode.iterm2 [1424]
Responsible Process: iTerm2 [3441]

Date/Time:           2022-11-27 12:19:37.9474 +0800
Launch Time:         2022-11-27 12:19:37.6098 +0800
OS Version:          macOS 13.0.1 (22A400)
Release Type:        User
Report Version:      104

Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x00000000000000c8
Exception Codes: 0x0000000000000001, 0x00000000000000c8
VM Region Info: 0xc8 is not in any region.  Bytes before following region: 140737488269112
      REGION TYPE                    START - END         [ VSIZE] PRT/MAX SHRMOD  REGION DETAIL
      UNUSED SPACE AT START
--->  
      shared memory            7ffffffeb000-7ffffffec000 [    4K] r-x/r-x SM=SHM  
Termination Reason: SIGNAL 11 Segmentation fault: 11
Terminating Process: exc handler [21387]

Highlighted by Thread:  0

Backtrace not available

No thread state (register information) available

Binary Images:
               0x0 - 0xffffffffffffffff ??? (*) <00000000-0000-0000-0000-000000000000> ???

Error Formulating Crash Report:
_dyld_process_info_create failed with 30
dyld_process_snapshot_get_shared_cache failed
Failed to create CSSymbolicatorRef - corpse still valid ¯\_(ツ)_/¯
thread_get_state(PAGEIN) returned 0x10000003: (ipc/send) invalid destination port
thread_get_state(EXCEPTION) returned 0x10000003: (ipc/send) invalid destination port
thread_get_state(FLAVOR) returned 0x10000003: (ipc/send) invalid destination port

EOF

-----------
Full Report
-----------

{"app_name":"kube-explorer","timestamp":"2022-11-27 12:19:44.00 +0800","app_version":"","slice_uuid":"00000000-0000-0000-0000-000000000000","build_version":"","platform":0,"share_with_app_devs":0,"is_first_party":1,"bug_type":"309","os_version":"macOS 13.0.1 (22A400)","roots_installed":0,"incident_id":"50D15C8C-FBA9-40CC-BB21-BE0F2D726E48","name":"kube-explorer"}
{
  "uptime" : 51000,
  "procRole" : "Unspecified",
  "version" : 2,
  "userID" : 0,
  "deployVersion" : 210,
  "modelCode" : "Macmini8,1",
  "coalitionID" : 1424,
  "osVersion" : {
    "train" : "macOS 13.0.1",
    "build" : "22A400",
    "releaseType" : "User"
  },
  "captureTime" : "2022-11-27 12:19:37.9474 +0800",
  "incident" : "50D15C8C-FBA9-40CC-BB21-BE0F2D726E48",
  "pid" : 21387,
  "cpuType" : "X86-64",
  "roots_installed" : 0,
  "bug_type" : "309",
  "procLaunch" : "2022-11-27 12:19:37.6098 +0800",
  "procStartAbsTime" : 51436034057640,
  "procExitAbsTime" : 51436371440052,
  "procName" : "kube-explorer",
  "procPath" : "\/Users\/USER\/*\/kube-explorer",
  "parentProc" : "Exited process",
  "parentPid" : 21385,
  "coalitionName" : "com.googlecode.iterm2",
  "crashReporterKey" : "F8B6804D-137B-BC28-CFC0-E3EE7FE4EA53",
  "responsiblePid" : 3441,
  "responsibleProc" : "iTerm2",
  "bridgeVersion" : {"build":"20P420","train":"7.0"},
  "sip" : "enabled",
  "vmRegionInfo" : "0xc8 is not in any region.  Bytes before following region: 140737488269112\n      REGION TYPE                    START - END         [ VSIZE] PRT\/MAX SHRMOD  REGION DETAIL\n      UNUSED SPACE AT START\n--->  \n      shared memory            7ffffffeb000-7ffffffec000 [    4K] r-x\/r-x SM=SHM  ",
  "exception" : {"codes":"0x0000000000000001, 0x00000000000000c8","rawCodes":[1,200],"type":"EXC_BAD_ACCESS","signal":"SIGSEGV","subtype":"KERN_INVALID_ADDRESS at 0x00000000000000c8"},
  "termination" : {"flags":0,"code":11,"namespace":"SIGNAL","indicator":"Segmentation fault: 11","byProc":"exc handler","byPid":21387},
  "vmregioninfo" : "0xc8 is not in any region.  Bytes before following region: 140737488269112\n      REGION TYPE                    START - END         [ VSIZE] PRT\/MAX SHRMOD  REGION DETAIL\n      UNUSED SPACE AT START\n--->  \n      shared memory            7ffffffeb000-7ffffffec000 [    4K] r-x\/r-x SM=SHM  ",
  "extMods" : {"caller":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"system":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"targeted":{"thread_create":0,"thread_set_state":0,"task_for_pid":0},"warnings":0},
  "usedImages" : [
  {
    "size" : 0,
    "source" : "A",
    "base" : 0,
    "uuid" : "00000000-0000-0000-0000-000000000000"
  }
],
  "legacyInfo" : {
  "threadHighlighted" : 0
},
  "trialInfo" : {
  "rollouts" : [

  ],
  "experiments" : [

  ]
},
  "reportNotes" : [
  "_dyld_process_info_create failed with 30",
  "dyld_process_snapshot_get_shared_cache failed",
  "Failed to create CSSymbolicatorRef - corpse still valid ¯\\_(ツ)_\/¯",
  "thread_get_state(PAGEIN) returned 0x10000003: (ipc\/send) invalid destination port",
  "thread_get_state(EXCEPTION) returned 0x10000003: (ipc\/send) invalid destination port",
  "thread_get_state(FLAVOR) returned 0x10000003: (ipc\/send) invalid destination port"
]
}


Cannot "Execute Shell" nor " "View Log" on workload

Hi,

I cannot connect to a container's shell or to its logs. The errors I see:

62552 upgradeaware.go:369] Error proxying data from backend to client: write tcp [::1]:9443->[::1]:50821: write: broken pipe

See this on:

  • k3s v1.22.15 and release v0.2.15
  • k3s v1.26.7+k3s1 and release v0.3.3

Need a flag to explicitly add a root certificate for access to a remote cluster behind a corporate proxy

Problem encountered

I have run kube-explorer with the following command:

kube-explorer --kubeconfig=C:\Users\john\Documents\k3s.yaml --http-listen-port=9898

However, the output is full of x509: certificate signed by unknown authority errors.

Suggestion

Since access to the remote cluster is over HTTPS, the certificate presented by the proxy server must be added as a trusted root certificate before clients such as curl can connect properly.

So please add a flag that lets users behind corporate proxies specify their proxy servers' certificates.
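
Until a dedicated flag exists, the proxy's CA can be supplied through the kubeconfig that kube-explorer already reads; these are standard kubeconfig fields, and the file path below is hypothetical:

clusters:
- name: remote
  cluster:
    server: https://remote-cluster.example.com:6443
    certificate-authority: C:\Users\john\corp-proxy-ca.pem
    # or embed it instead: certificate-authority-data: <base64-encoded CA bundle>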

Cannot start on macOS 12

System: macOS 12.2.1 (Intel)

❯ ./kube-explorer-darwin-amd64 -v
kube-explorer version v0.3.0 (f898c55)
❯ ./kube-explorer-darwin-amd64 --http-listen-port=9898 --https-listen-port=0

It hangs and produces no logs.

Incorrect certificate expiration time

Kube-explorer: v0.2.9 (UI v2.6.5-kube-explorer-ui-rc1)

Install K3s and use kube-explorer to connect to it. Take k3s-serving as an example; it is a secret of type kubernetes.io/tls.

Use openssl to view the expiration time as follows:

kubectl --insecure-skip-tls-verify  get secret -n kube-system k3s-serving -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -text | grep Not
            Not Before: Jul 22 05:40:32 2022 GMT
            Not After : Jul 22 05:40:32 2023 GMT

However, the expiry time displayed in the Explorer UI is different.

