kubesphere / ks-installer

Install KubeSphere on existing Kubernetes cluster

Home Page: https://kubesphere.io

License: Apache License 2.0

Dockerfile 0.11% Shell 11.73% Smarty 0.48% Python 10.48% Mustache 9.27% Makefile 0.09% Jinja 67.84%
existing-kubernetes-cluster kubernetes k8s dashboard kubernetes-dashboard installer hacktoberfest

ks-installer's Introduction

Install KubeSphere on Existing Kubernetes Cluster

English | 中文

In addition to supporting deployment on VMs and bare metal, KubeSphere also supports installation on existing Kubernetes clusters, whether cloud-hosted or on-premises.

Prerequisites

  • Kubernetes Version: 1.20.x, 1.21.x, 1.22.x, 1.23.x (experimental);
  • CPU > 1 core, memory > 2 GB;
  • An existing default Storage Class in your Kubernetes cluster.
  • The CSR signing feature is activated in kube-apiserver, i.e. it is started with the --cluster-signing-cert-file and --cluster-signing-key-file parameters; see the RKE installation issue.
  1. Make sure your Kubernetes version is compatible by running kubectl version on a cluster node. The output should look like the following:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.8", GitCommit:"fd5d41537aee486160ad9b5356a9d82363273721", GitTreeState:"clean", BuildDate:"2021-02-17T12:41:51Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.8", GitCommit:"fd5d41537aee486160ad9b5356a9d82363273721", GitTreeState:"clean", BuildDate:"2021-02-17T12:33:08Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

Note: Pay attention to the Server Version line. If GitVersion is greater than v1.19.0, you are good to go; otherwise you need to upgrade your Kubernetes cluster first.

  2. Check if the available resources meet the minimal prerequisites in your cluster.
$ free -g
              total        used        free      shared  buff/cache   available
Mem:              16          4          10           0           3           2
Swap:             0           0           0
  3. Check if there is a default Storage Class in your cluster. An existing default Storage Class is a prerequisite for the KubeSphere installation (if none is marked as default, see the note right after the output below).
$ kubectl get sc
NAME                      PROVISIONER               AGE
glusterfs (default)               kubernetes.io/glusterfs   3d4h
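
Note: If no Storage Class is marked as default, one possible way to mark an existing class as the default (local-path below is only a placeholder name, not something created by this guide) is to set the well-known annotation:

$ kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'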

If your Kubernetes cluster environment meets all requirements mentioned above, then you can start to install KubeSphere.

To Start Deploying KubeSphere

Minimal Installation

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml

Then inspect the logs of installation.

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -f
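
Besides following the installer log, you can also watch the KubeSphere Pods directly. A simple check (grepping the namespaces created by a default installation; adjust the filter as needed) is:

kubectl get pods --all-namespaces | grep kubesphere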

When all Pods of KubeSphere are running, the installation is successful. Check the port (30880 by default) of the console service with the following command. You can then access the console at http://IP:30880 with the default account admin/P@88w0rd.

kubectl get svc/ks-console -n kubesphere-system
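
Assuming the console Service keeps its default single port, the NodePort can also be read directly with jsonpath:

kubectl get svc ks-console -n kubesphere-system -o jsonpath='{.spec.ports[0].nodePort}'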

Enable Pluggable Components

Attention:

  • KubeSphere supports enabling the pluggable components before or after the installation; refer to cluster-configuration.yaml for more details.
  • Make sure there is enough CPU and memory available in your cluster.
  1. [Optional] Create the certificate secret for etcd in your Kubernetes cluster. This step is only needed when you want to enable etcd monitoring.

Note: Create the secret according to the actual etcd certificate paths of your cluster; if etcd has not been configured with certificates, an empty secret needs to be created.

  • If etcd has been configured with certificates, refer to the following step (the command below is an example that only applies to a cluster created by kubeadm):
$ kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs  \
--from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt  \
--from-file=etcd-client.crt=/etc/kubernetes/pki/etcd/healthcheck-client.crt  \
--from-file=etcd-client.key=/etc/kubernetes/pki/etcd/healthcheck-client.key
  • If etcd has not been configured with certificates, create an empty secret:
kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs
  2. If you already have a minimal KubeSphere setup, you can still enable the pluggable components by editing the ClusterConfiguration of ks-installer with the following command (a minimal example of such an edit is shown after this list).

Note: Please make sure there is enough CPU and memory available in your cluster.

kubectl edit cc ks-installer -n kubesphere-system

Note: When enabling KubeEdge, set advertiseAddress as below and expose the corresponding ports correctly before you run or restart ks-installer. Refer to the KubeEdge Guide for more details.

kubeedge:
    cloudCore:
      cloudHub:
        advertiseAddress:
        - xxxx.xxxx.xxxx.xxxx
  3. Inspect the logs of installation.
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -f
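
As an illustration of step 2 above, enabling a single component boils down to flipping its enabled flag in the ClusterConfiguration spec. The sketch below uses the DevOps component and a merge patch instead of an interactive edit; the spec.devops.enabled path follows the layout of cluster-configuration.yaml, so treat it as an example rather than the complete configuration:

kubectl -n kubesphere-system patch cc ks-installer --type merge -p '{"spec":{"devops":{"enabled":true}}}'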

Upgrade

Deploy the new version of ks-installer:

# Notice: ks-installer will automatically migrate the configuration. Do not modify the cluster configuration by yourself.

kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml --force

Note: If your KubeSphere version is v3.1.0 or earlier, please upgrade to v3.2.x first.

ks-installer's People

Contributors

chilianyi, daixijun, daniel-hutao, duanjiong, feynmanzhou, forest-l, frezes, hlwanghl, iawia002, johnniang, junotx, ks-ci-bot, linuxsuren, min-zh, pixiake, rayzhou2017, sagilio, shaowenchen, swiftslee, tester-rep, wanjunlei, wansir, wenchajun, yudong2015, yunkunrao, zackzhangkai, zheng1, zhou1203, zhu733756, zryfish


ks-installer's Issues

Installation fails when the host is configured with an unreachable DNS address

When the host's /etc/resolv.conf contains an unreachable DNS address, the installation fails. Because the CoreDNS configuration proxies queries to the host DNS, an unreachable host DNS causes CoreDNS to return abnormal responses to the Pods.

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           upstream
           fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf 
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
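
A possible workaround (my suggestion, not part of the original report) is to point the CoreDNS proxy at a reachable upstream resolver instead of the host's /etc/resolv.conf, then recreate the CoreDNS Pods:

# edit the Corefile: change "proxy . /etc/resolv.conf" to a reachable server, e.g. "proxy . 114.114.114.114"
kubectl -n kube-system edit configmap coredns
# recreate the CoreDNS Pods so they reload the Corefile (label as used in kubeadm deployments)
kubectl -n kube-system delete pod -l k8s-app=kube-dns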

ks-account deployment stuck in Init:0/2, cause unknown

kubectl get pod -n kubesphere-system
NAME READY STATUS RESTARTS AGE
ks-account-697996989f-dbn77 0/1 Init:0/2 0 3h13m
ks-apigateway-7bb9bccc6d-wmzx2 1/1 Running 0 3h29m
ks-console-f5bf76dd4-2qlj8 1/1 Running 0 3h28m
ks-console-f5bf76dd4-zc5gj 1/1 Running 0 3h28m
ks-controller-manager-69666fc668-v5pdk 1/1 Running 0 3h29m
ks-docs-77c4796dc9-6wsjj 1/1 Running 0 4h49m
openldap-84857748b4-672xl 1/1 Running 0 4h52m
redis-78ff75bddc-rh6rb 1/1 Running 0 4h52m

The pod ks-account-697996989f-dbn77 stays in the initializing state.
Check the events:
kubectl describe po ks-account-697996989f-dbn77 -n kubesphere-system
Name: ks-account-697996989f-dbn77
Namespace: kubesphere-system
Priority: 0
PriorityClassName:
Node: 10.221.8.63/10.221.8.63
Start Time: Sat, 17 Aug 2019 16:52:13 +0800
Labels: app=ks-account
pod-template-hash=697996989f
tier=backend
version=advanced-2.0.0
Annotations:
Status: Pending
IP: 195.168.53.232
Controlled By: ReplicaSet/ks-account-697996989f
Init Containers:
wait-redis:
Container ID: docker://b5b31e791c23906c90d57ebb5ba151f7298137898131d9bdf25563f5bd7c4db8
Image: busybox:1.28.4
Image ID: docker-pullable://10.237.79.203/test/library/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335
Port:
Host Port:
Command:
sh
-c
until nc -z redis.kubesphere-system.svc 6379; do echo "waiting for redis"; sleep 2; done;
State: Running
Started: Sat, 17 Aug 2019 16:52:14 +0800
Ready: False
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-pswpl (ro)
wait-ldap:
Container ID:
Image: busybox:1.28.4
Image ID:
Port:
Host Port:
Command:
sh
-c
until nc -z openldap.kubesphere-system.svc 389; do echo "waiting for ldap"; sleep 2; done;
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-pswpl (ro)
Containers:
ks-account:
Container ID:
Image: kubesphere/ks-account:advanced-2.0.2
Image ID:
Port: 9090/TCP
Host Port: 0/TCP
Command:
ks-iam
--v=4
--logtostderr=true
--devops-database-connection=root:password@tcp(openpitrix-db.openpitrix-system.svc:3306)/devops
--ldap-server=openldap.kubesphere-system.svc:389
--redis-server=redis.kubesphere-system.svc:6379
--ldap-manager-dn=cn=admin,dc=kubesphere,dc=io
--ldap-manager-password=$(LDAP_PASSWORD)
--ldap-user-search-base=ou=Users,dc=kubesphere,dc=io
--ldap-group-search-base=ou=Groups,dc=kubesphere,dc=io
--jwt-secret=$(JWT_SECRET)
--admin-password=P@88w0rd
--token-expire-time=0h
--jenkins-address=http://ks-jenkins.kubesphere-devops-system.svc/
--jenkins-password=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJlbWFpbCI6ImFkbWluQGt1YmVzcGhlcmUuaW8iLCJleHAiOjE4MTYyMzkwMjIsInVzZXJuYW1lIjoiYWRtaW4ifQ.86ofN704ZPc1o-yyXnF-up5nK1w3nHeRlGWcwNLCa-k
--master-url=10.221.8.155:8443
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 1
memory: 500Mi
Requests:
cpu: 30m
memory: 200Mi
Environment:
KUBECTL_IMAGE: kubesphere/kubectl:advanced-1.0.0
JWT_SECRET: <set to the key 'jwt-secret' in secret 'ks-account-secret'> Optional: false
LDAP_PASSWORD: <set to the key 'ldap-admin-password' in secret 'ks-account-secret'> Optional: false
Mounts:
/etc/ks-iam from user-init (rw)
/etc/kubernetes/pki from ca-dir (rw)
/etc/kubesphere/rules from policy-rules (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubesphere-token-pswpl (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
policy-rules:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: policy-rules
Optional: false
ca-dir:
Type: Secret (a volume populated by a Secret)
SecretName: kubesphere-ca
Optional: false
user-init:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: user-init
Optional: false
kubesphere-token-pswpl:
Type: Secret (a volume populated by a Secret)
SecretName: kubesphere-token-pswpl
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Normal Scheduled 3h16m default-scheduler Successfully assigned kubesphere-system/ks-account-697996989f-dbn77 to 10.221.8.63
Normal Pulled 3h16m kubelet, 10.221.8.63 Container image "busybox:1.28.4" already present on machine
Normal Created 3h16m kubelet, 10.221.8.63 Created container
Normal Started 3h16m kubelet, 10.221.8.63 Started container

Then check the logs:
kubectl logs ks-account-697996989f-dbn77 -n kubesphere-system
Error from server (BadRequest): container "ks-account" in pod "ks-account-697996989f-dbn77" is waiting to start: PodInitializing

There are also other pods in the same state:
kubectl get po -n openpitrix-system
NAME READY STATUS RESTARTS AGE
openpitrix-api-gateway-deployment-587cc46874-7xrdd 0/1 Init:0/2 0 3h33m
openpitrix-app-db-ctrl-job-9g7br 0/1 Init:0/1 0 3h33m
openpitrix-app-manager-deployment-595dcd76f-b6b2q 0/1 Init:0/2 0 4h58m
openpitrix-category-manager-deployment-7968d789d6-s9dd8 0/1 Init:0/2 0 4h58m
openpitrix-cluster-db-ctrl-job-sfk54 0/1 Init:0/1 0 3h33m
openpitrix-cluster-manager-deployment-6dcd96b68b-jqcb4 0/1 Init:0/2 0 4h58m
openpitrix-db-deployment-6864df4f99-fg9d9 1/1 Running 0 4h58m
openpitrix-db-init-job-dgvmp 0/1 Init:0/1 0 3h33m
openpitrix-etcd-deployment-58845c4648-fxvh5 1/1 Running 0 4h58m
openpitrix-iam-db-ctrl-job-kbtw7 0/1 Init:0/1 0 3h33m
openpitrix-iam-service-deployment-864df9fb6f-w75qd 0/1 Init:0/2 0 4h58m
openpitrix-job-db-ctrl-job-cf9z7 0/1 Init:0/1 0 3h33m
openpitrix-job-manager-deployment-588858bcb9-ggcbt 0/1 Init:0/2 0 4h58m
openpitrix-minio-deployment-84d5f9c94b-7x49m 1/1 Running 0 4h58m
openpitrix-repo-db-ctrl-job-bp94x 0/1 Init:0/1 0 3h33m
openpitrix-repo-indexer-deployment-5f4c895b54-5kk6k 0/1 Init:0/2 0 4h58m
openpitrix-repo-manager-deployment-84fd5b5fdf-w6vgp 0/1 Init:0/2 0 4h58m
openpitrix-runtime-db-ctrl-job-88gzw 0/1 Init:0/1 0 3h33m
openpitrix-runtime-manager-deployment-5fcbb6f447-b47xq 0/1 Init:0/2 0 4h58m
openpitrix-task-db-ctrl-job-nxz4z 0/1 Init:0/1 0 3h33m
openpitrix-task-manager-deployment-59578dc9d6-d6qg5 0/1 Init:0/2 0 4h58m
The cause cannot be determined at the moment!
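
Possible debugging steps (my own suggestions, not from this report): the wait-redis and wait-ldap init containers block until the redis and openldap Services answer, so check that those Services exist, have endpoints, and resolve from inside the cluster:

# are the Services present with endpoints?
kubectl -n kubesphere-system get svc redis openldap
kubectl -n kubesphere-system get endpoints redis openldap
# does cluster DNS resolve the name the init container waits on?
kubectl run -it --rm dns-test --image=busybox:1.28.4 --restart=Never -- nslookup redis.kubesphere-system.svc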

TASK:ks-controller-manager FAILED

resource details

kubernetes cluster version: 1.10
kubernetes cluster node operating system information:

         Icon name: computer-vm
           Chassis: vm
        Machine ID: a9621c1d226d401b979d2e25c6677a95
           Boot ID: 7c412f2a72a44e8ab39025f8c36bb53c
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-957.el7.x86_64
      Architecture: x86-64

Error details

TASK [ks-apigateway : ks-apigateway | Creating manifests] **********************
changed: [localhost] => (item={u'type': u'deploy', u'name': u'ks-apigateway', u'file': u'ks-apigateway.yaml'})

TASK [ks-apigateway : ks-apigateway | Installing ks-apigateway] ****************
changed: [localhost]

TASK [ks-apigateway : KubeSphere | Restarting ks-apigateway] *******************
changed: [localhost]

TASK [ks-controller-manager : ks-controller-manager | Getting ks-controller-manager installation files] ***
changed: [localhost]

TASK [ks-controller-manager : ks-controller-manager | Creating manifests] ******
changed: [localhost] => (item={u'type': u'deploy', u'name': u'ks-controller-manager', u'file': u'ks-controller-manager.yaml'})

TASK [ks-controller-manager : ks-controller-manager | Creating Controller Manager] ***
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/kubectl apply -f /etc/kubesphere/ks-controller-manager/ks-controller-manager.yaml", "delta": "0:00:02.228484", "end": "2019-09-06 06:20:17.941972", "failed_when_result": true, "msg": "non-zero return code", "rc": 1, "start": "2019-09-06 06:20:15.713488", "stderr": "Error from server (Forbidden): error when creating \"/etc/kubesphere/ks-controller-manager/ks-controller-manager.yaml\": clusterroles.rbac.authorization.k8s.io \"ks-controller-manager-role\" is forbidden: attempt to grant extra privileges: [PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"mutatingwebhookconfigurations\"], Verbs:[\"get\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"mutatingwebhookconfigurations\"], Verbs:[\"list\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"mutatingwebhookconfigurations\"], Verbs:[\"watch\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"mutatingwebhookconfigurations\"], Verbs:[\"create\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"mutatingwebhookconfigurations\"], Verbs:[\"update\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"mutatingwebhookconfigurations\"], Verbs:[\"patch\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"mutatingwebhookconfigurations\"], Verbs:[\"delete\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"validatingwebhookconfigurations\"], Verbs:[\"get\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"validatingwebhookconfigurations\"], Verbs:[\"list\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"validatingwebhookconfigurations\"], Verbs:[\"watch\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"validatingwebhookconfigurations\"], Verbs:[\"create\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"validatingwebhookconfigurations\"], Verbs:[\"update\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"validatingwebhookconfigurations\"], Verbs:[\"patch\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"validatingwebhookconfigurations\"], Verbs:[\"delete\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"virtualservices\"], Verbs:[\"get\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"virtualservices\"], Verbs:[\"list\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"virtualservices\"], Verbs:[\"watch\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"virtualservices\"], Verbs:[\"create\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"virtualservices\"], Verbs:[\"update\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"virtualservices\"], Verbs:[\"patch\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"virtualservices\"], Verbs:[\"delete\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"destinationrules\"], Verbs:[\"get\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"destinationrules\"], Verbs:[\"list\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"destinationrules\"], Verbs:[\"watch\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"destinationrules\"], Verbs:[\"create\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"destinationrules\"], Verbs:[\"update\"]} 
PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"destinationrules\"], Verbs:[\"patch\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"destinationrules\"], Verbs:[\"delete\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"servicepolicies\"], Verbs:[\"get\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"servicepolicies\"], Verbs:[\"list\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"servicepolicies\"], Verbs:[\"watch\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"servicepolicies\"], Verbs:[\"create\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"servicepolicies\"], Verbs:[\"update\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"servicepolicies\"], Verbs:[\"patch\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"servicepolicies\"], Verbs:[\"delete\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"strategies\"], Verbs:[\"get\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"strategies\"], Verbs:[\"list\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"strategies\"], Verbs:[\"watch\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"strategies\"], Verbs:[\"create\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"strategies\"], Verbs:[\"update\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"strategies\"], Verbs:[\"patch\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"strategies\"], Verbs:[\"delete\"]} PolicyRule{APIGroups:[\"app.k8s.io\"], Resources:[\"apps\"], Verbs:[\"get\"]} PolicyRule{APIGroups:[\"app.k8s.io\"], Resources:[\"apps\"], Verbs:[\"list\"]} PolicyRule{APIGroups:[\"app.k8s.io\"], Resources:[\"apps\"], Verbs:[\"watch\"]} PolicyRule{APIGroups:[\"app.k8s.io\"], Resources:[\"apps\"], Verbs:[\"create\"]} PolicyRule{APIGroups:[\"app.k8s.io\"], Resources:[\"apps\"], Verbs:[\"update\"]} PolicyRule{APIGroups:[\"app.k8s.io\"], Resources:[\"apps\"], Verbs:[\"patch\"]} PolicyRule{APIGroups:[\"app.k8s.io\"], Resources:[\"apps\"], Verbs:[\"delete\"]}] user=&{system:serviceaccount:kubesphere-system:ks-installer c7d6e976-d06a-11e9-811a-005056aa092d [system:serviceaccounts system:serviceaccounts:kubesphere-system system:authenticated] map[]} ownerrules=[PolicyRule{APIGroups:[\"\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"apps\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"extensions\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"batch\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"rbac.authorization.k8s.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"apiregistration.k8s.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"apiextensions.k8s.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"tenant.kubesphere.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"certificates.k8s.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"devops.kubesphere.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"monitoring.coreos.com\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"logging.kubesphere.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"jaegertracing.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"authorization.k8s.io\"], Resources:[\"selfsubjectaccessreviews\" 
\"selfsubjectrulesreviews\"], Verbs:[\"create\"]} PolicyRule{NonResourceURLs:[\"/api\" \"/api/*\" \"/apis\" \"/apis/*\" \"/healthz\" \"/openapi\" \"/openapi/*\" \"/swagger-2.0.0.pb-v1\" \"/swagger.json\" \"/swaggerapi\" \"/swaggerapi/*\" \"/version\" \"/version/\"], Verbs:[\"get\"]}] ruleResolutionErrors=[]", "stderr_lines": ["Error from server (Forbidden): error when creating \"/etc/kubesphere/ks-controller-manager/ks-controller-manager.yaml\": clusterroles.rbac.authorization.k8s.io \"ks-controller-manager-role\" is forbidden: attempt to grant extra privileges: [PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"mutatingwebhookconfigurations\"], Verbs:[\"get\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"mutatingwebhookconfigurations\"], Verbs:[\"list\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"mutatingwebhookconfigurations\"], Verbs:[\"watch\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"mutatingwebhookconfigurations\"], Verbs:[\"create\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"mutatingwebhookconfigurations\"], Verbs:[\"update\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"mutatingwebhookconfigurations\"], Verbs:[\"patch\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"mutatingwebhookconfigurations\"], Verbs:[\"delete\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"validatingwebhookconfigurations\"], Verbs:[\"get\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"validatingwebhookconfigurations\"], Verbs:[\"list\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"validatingwebhookconfigurations\"], Verbs:[\"watch\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"validatingwebhookconfigurations\"], Verbs:[\"create\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"validatingwebhookconfigurations\"], Verbs:[\"update\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"validatingwebhookconfigurations\"], Verbs:[\"patch\"]} PolicyRule{APIGroups:[\"admissionregistration.k8s.io\"], Resources:[\"validatingwebhookconfigurations\"], Verbs:[\"delete\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"virtualservices\"], Verbs:[\"get\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"virtualservices\"], Verbs:[\"list\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"virtualservices\"], Verbs:[\"watch\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"virtualservices\"], Verbs:[\"create\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"virtualservices\"], Verbs:[\"update\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"virtualservices\"], Verbs:[\"patch\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"virtualservices\"], Verbs:[\"delete\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"destinationrules\"], Verbs:[\"get\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"destinationrules\"], Verbs:[\"list\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"destinationrules\"], Verbs:[\"watch\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"destinationrules\"], Verbs:[\"create\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"destinationrules\"], Verbs:[\"update\"]} 
PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"destinationrules\"], Verbs:[\"patch\"]} PolicyRule{APIGroups:[\"networking.istio.io\"], Resources:[\"destinationrules\"], Verbs:[\"delete\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"servicepolicies\"], Verbs:[\"get\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"servicepolicies\"], Verbs:[\"list\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"servicepolicies\"], Verbs:[\"watch\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"servicepolicies\"], Verbs:[\"create\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"servicepolicies\"], Verbs:[\"update\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"servicepolicies\"], Verbs:[\"patch\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"servicepolicies\"], Verbs:[\"delete\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"strategies\"], Verbs:[\"get\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"strategies\"], Verbs:[\"list\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"strategies\"], Verbs:[\"watch\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"strategies\"], Verbs:[\"create\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"strategies\"], Verbs:[\"update\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"strategies\"], Verbs:[\"patch\"]} PolicyRule{APIGroups:[\"servicemesh.kubesphere.io\"], Resources:[\"strategies\"], Verbs:[\"delete\"]} PolicyRule{APIGroups:[\"app.k8s.io\"], Resources:[\"apps\"], Verbs:[\"get\"]} PolicyRule{APIGroups:[\"app.k8s.io\"], Resources:[\"apps\"], Verbs:[\"list\"]} PolicyRule{APIGroups:[\"app.k8s.io\"], Resources:[\"apps\"], Verbs:[\"watch\"]} PolicyRule{APIGroups:[\"app.k8s.io\"], Resources:[\"apps\"], Verbs:[\"create\"]} PolicyRule{APIGroups:[\"app.k8s.io\"], Resources:[\"apps\"], Verbs:[\"update\"]} PolicyRule{APIGroups:[\"app.k8s.io\"], Resources:[\"apps\"], Verbs:[\"patch\"]} PolicyRule{APIGroups:[\"app.k8s.io\"], Resources:[\"apps\"], Verbs:[\"delete\"]}] user=&{system:serviceaccount:kubesphere-system:ks-installer c7d6e976-d06a-11e9-811a-005056aa092d [system:serviceaccounts system:serviceaccounts:kubesphere-system system:authenticated] map[]} ownerrules=[PolicyRule{APIGroups:[\"\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"apps\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"extensions\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"batch\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"rbac.authorization.k8s.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"apiregistration.k8s.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"apiextensions.k8s.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"tenant.kubesphere.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"certificates.k8s.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"devops.kubesphere.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"monitoring.coreos.com\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"logging.kubesphere.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"jaegertracing.io\"], Resources:[\"*\"], Verbs:[\"*\"]} PolicyRule{APIGroups:[\"authorization.k8s.io\"], Resources:[\"selfsubjectaccessreviews\" 
\"selfsubjectrulesreviews\"], Verbs:[\"create\"]} PolicyRule{NonResourceURLs:[\"/api\" \"/api/*\" \"/apis\" \"/apis/*\" \"/healthz\" \"/openapi\" \"/openapi/*\" \"/swagger-2.0.0.pb-v1\" \"/swagger.json\" \"/swaggerapi\" \"/swaggerapi/*\" \"/version\" \"/version/\"], Verbs:[\"get\"]}] ruleResolutionErrors=[]"], "stdout": "deployment.extensions/ks-controller-manager configured\nserviceaccount/ks-controller-manager unchanged\nclusterrolebinding.rbac.authorization.k8s.io/ks-controller-manager-rolebinding unchanged", "stdout_lines": ["deployment.extensions/ks-controller-manager configured", "serviceaccount/ks-controller-manager unchanged", "clusterrolebinding.rbac.authorization.k8s.io/ks-controller-manager-rolebinding unchanged"]}

PLAY RECAP *********************************************************************
localhost                  : ok=47   changed=39   unreachable=0    failed=1    skipped=9    rescued=0    ignored=1

status of running pods in my k8s cluster.

kubesphere-alerting-system   alerting-db-init-job-l6bcz                                0/1     Completed          0          5m    10.244.1.75      dev-k8s-node2
kubesphere-controls-system   default-http-backend-58f5f47b88-vw6hh                     1/1     Running            0          22m   10.244.1.69      dev-k8s-node2
kubesphere-controls-system   kubectl-admin-866596d4f6-c7ht9                            1/1     Running            0          20m   10.244.2.159     dev-k8s-node1
kubesphere-system            ks-account-8495875cc-pnjpp                                1/1     Running            0          3m    10.244.0.59      dev-k8s-master
kubesphere-system            ks-apigateway-85dbbf6787-jm7l6                            1/1     Running            0          3m    10.244.0.60      dev-k8s-master
kubesphere-system            ks-controller-manager-cc9545f88-vrqsd                     1/1     Running            0          3m    10.244.0.61      dev-k8s-master
kubesphere-system            kubesphere-installer-z9r54                                0/1     CrashLoopBackOff   6          29m   10.244.1.57      dev-k8s-node2
kubesphere-system            openldap-6bff859b79-z5cjt                                 1/1     Running            0          1h    10.244.0.43      dev-k8s-master
kubesphere-system            redis-6cd87b4466-fhthm                                    1/1     Running            0          1h    10.244.0.42      dev-k8s-master
openpitrix-system            openpitrix-api-gateway-deployment-78c87c697b-ch5zj        1/1     Running            0          5m    10.244.1.74      dev-k8s-node2
openpitrix-system            openpitrix-app-manager-deployment-5c86df7997-k8g5t        1/1     Running            0          1h    10.244.1.29      dev-k8s-node2
openpitrix-system            openpitrix-category-manager-deployment-7f6fb9b588-l9nwr   1/1     Running            0          1h    10.244.1.28      dev-k8s-node2
openpitrix-system            openpitrix-cluster-manager-deployment-76f587cbb8-vrjmn    1/1     Running            0          1h    10.244.1.31      dev-k8s-node2
openpitrix-system            openpitrix-db-deployment-7d6c87dfc4-kjnzt                 1/1     Running            0          1h    10.244.0.45      dev-k8s-master
openpitrix-system            openpitrix-etcd-deployment-7df58f85c8-rjkmc               1/1     Running            0          1h    10.244.1.35      dev-k8s-node2
openpitrix-system            openpitrix-iam-service-deployment-5c7697d745-b28hp        0/1     CrashLoopBackOff   17         1h    10.244.1.33      dev-k8s-node2
openpitrix-system            openpitrix-job-manager-deployment-656b9745c7-sz85m        1/1     Running            0          1h    10.244.1.32      dev-k8s-node2
openpitrix-system            openpitrix-minio-deployment-79bc56d768-gls89              1/1     Running            0          1h    10.244.1.34      dev-k8s-node2
openpitrix-system            openpitrix-repo-indexer-deployment-6c5d76df9b-fc4jj       1/1     Running            0          1h    10.244.1.36      dev-k8s-node2
openpitrix-system            openpitrix-repo-manager-deployment-78d58699df-vnn8z       1/1     Running            0          1h    10.244.1.30      dev-k8s-node2
openpitrix-system            openpitrix-runtime-manager-deployment-7cff66ddd7-wgxk7    1/1     Running            0          1h    10.244.1.37      dev-k8s-node2
openpitrix-system            openpitrix-task-manager-deployment-699c974c88-dktlr       1/1     Running            0          1h    10.244.1.38      dev-k8s-node2

How can this be resolved?
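
One possible remedy (an assumption, not a confirmed fix from this thread): RBAC privilege-escalation prevention requires the account that creates a ClusterRole to already hold every permission it grants, so binding the installer service account to cluster-admin before re-running the failed task may clear the Forbidden error:

kubectl create clusterrolebinding ks-installer-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kubesphere-system:ks-installer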

Problems installing KubeSphere on Alibaba Cloud Ubuntu 16.04

Alibaba Cloud environment, Ubuntu 16.04.6, multi-node offline deployment issues:
1. Some required ports are not open in the Alibaba Cloud security group, so pushing images times out:

The push refers to repository [39.104.125.28:5000/istio/mixer]
Get http://39.104.125.28:5000/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Solution: open the required ports in the security group, for example port 5000 for the image registry.

2. The script only accounts for Ubuntu 16.04.4 and 16.04.5; there is no ISO package for 16.04.6:

E: Failed to fetch file:/kubeinstaller/apt_repo/16.04.6/iso/Packages  File not found - /kubeinstaller/apt_repo/16.04.6/iso/Packages (2: No such file or directory)
E: Some index files failed to download. They have been ignored, or old ones used instead
Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
 software-properties-common : Depends: python3-software-properties (= 0.96.20.8) but it is not going to be installed
Solution: rebuilt the 16.04.6 ISO package, which resolved this; the later dependency conflicts were also fixed in the new package.
Download link for the package: https://kubesphere-installer.pek3b.qingstor.com/offline/ubuntu-16.04.6-server-amd64.iso

Too heavyweight; hoping the architecture can be reworked

Persistent storage should be an optional install, Prometheus monitoring should be optional or able to connect to an existing deployment, and the installation should be simplified as much as possible to reach a lightweight level similar to Rancher.

KubeSphere configuration

Below is a sample KubeSphere component configuration. If a component is not configured, or its host/endpoint/apiserver is empty, it is treated as disabled.

mysql:
  host: 10.68.55.55:3306
  username: root
  password: admin
  maxIdleConnections: 10
  maxOpenConnections: 20
  maxConnectionLifeTime: 10s
devops:
  host: http://ks-devops.kubesphere-devops-system.svc
  username: jenkins
  password: kubesphere
  maxConnections: 10
sonarQube:
  host: http://sonarqube.kubesphere-devops-system.svc
  token: ABCDEFG
kubernetes:
  kubeconfig: /Users/zry/.kube/config
  master: https://127.0.0.1:6443
  qps: 1e+06
  burst: 1000000
servicemesh:
  istioPilotHost: http://istio-pilot.istio-system.svc:9090
  jaegerQueryHost: http://jaeger-query.istio-system.svc:80
  servicemeshPrometheusHost: http://prometheus-k8s.kubesphere-monitoring-system.svc
ldap:
  host: http://openldap.kubesphere-system.svc
  managerDN: cn=admin,dc=example,dc=org
  managerPassword: P@88w0rd
  userSearchBase: ou=Users,dc=example,dc=org
  groupSearchBase: ou=Groups,dc=example,dc=org
redis:
  host: 10.10.111.110
  port: 6379
  password: ""
  db: 0
s3:
  endpoint: http://minio.openpitrix-system.svc
  region: us-east-1
  disableSSL: true
  accessKeyID: ABCDEFGHIJKLMN
  secretAccessKey: OPQRSTUVWXYZ
  sessionToken: abcdefghijklmn
  bucket: ssss
openpitrix:
  apiServer: http://api-gateway.openpitrix-system.svc
  token: ABCDEFGHIJKLMN
monitoring:
  endpoint: http://prometheus.kubesphere-monitoring-system.svc
  secondaryEndpoint: http://prometheus.kubesphere-monitoring-system.svc

[gitlab-1180] Scale-restart s2ioperator in the final stage of the installer

 kubectl scale -n kubesphere-devops-system statefulset --replicas=0 s2ioperator
 kubectl scale -n kubesphere-devops-system statefulset --replicas=1 s2ioperator

This fixes the following problem:

Internal error occurred: failed calling webhook "validating-create-update-s2ibuilder.kubesphere.io": Post https://webhook-server-service.kubesphere-devops-system.svc:443/validating-create-update-s2ibuilder?timeout=30s: x509: certificate signed by unknown authority

Offline installation: image registry reports connection refused

Offline deployment reports connect: connection refused when pulling images.

Workaround:
Edit scripts/os/tag.sh and replace registry:2.7.1 with $(sudo docker images | awk '{if(NR>1){print $1":"$2}}' | grep registry:2.7.1), then re-run the installation.

A problem during installation: could not find a ready tiller pod

I ran into this problem during installation:
Wednesday 18 September 2019 01:24:28 -0400 (0:00:00.639) 0:02:32.879 ***
fatal: [ks-allinone]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-sonarqube /etc/kubesphere/sonarqube/sonarqube-0.13.5.tgz -f /etc/kubesphere/sonarqube/custom-values-sonarqube.yaml --namespace kubesphere-devops-system", "delta": "0:00:00.203654", "end": "2019-09-18 01:24:29.367462", "msg": "non-zero return code", "rc": 1, "start": "2019-09-18 01:24:29.163808", "stderr": "Error: could not find a ready tiller pod", "stderr_lines": ["Error: could not find a ready tiller pod"], "stdout": "", "stdout_lines": []}

PLAY RECAP **************************************************************************************************************************************************************************************************************************************
ks-allinone : ok=172 changed=7 unreachable=0 failed=1

Originally posted by @yeyouqun in #23 (comment)
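
Possible checks (my own suggestions, not from the original thread): verify that the Tiller deployment in kube-system actually came up, and if it never did, re-initialize it with a service account before retrying the installer:

kubectl -n kube-system get pods -l app=helm,name=tiller
kubectl -n kube-system describe deployment tiller-deploy
# only if Tiller is missing or broken; assumes a 'tiller' ServiceAccount already exists
helm init --service-account tiller --upgrade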

Account problem after reinstallation

After reinstalling on the server, the admin account still has the password from the previous installation,
and after logging in it is only an ordinary user.
Where can the account information be reset?

Add a new component: minio

  1. Add new parameters to ks-apiserver and ks-controller-manager:
flag.StringVar(&s3Region, "s2i-s3-region", "us-east-1", "region of s2i s3")
flag.StringVar(&s3Endpoint, "s2i-s3-endpoint", "http://ks-minio.kubesphere-system.svc", "endpoint of s2i s3")
flag.StringVar(&s3AccessKeyID, "s2i-s3-access-key-id", "AKIAIOSFODNN7EXAMPLE", "access key of s2i s3")
flag.StringVar(&s3SecretAccessKey, "s2i-s3-secret-access-key", "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "secret access key of s2i s3")
flag.StringVar(&s3SessionToken, "s2i-s3-session-token", "", "session token of s2i s3")
flag.StringVar(&s3Bucket, "s2i-s3-bucket", "s2i-binaries", "bucket name of s2i s3")
flag.BoolVar(&s3DisableSSL, "s2i-s3-disable-SSL", true, "disable ssl")
flag.BoolVar(&s3ForcePathStyle, "s2i-s3-force-path-style", true, "force path style")

2. Users can choose to use their own object storage or the built-in minio; if the built-in minio is used, minio needs to be deployed.

Job fails when openpitrix_enable is set to false

TASK [ks-core/prepare : KubeSphere | Getting OP Secret] ************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "cat /etc/kubesphere/openpitrix/kubernetes/iam-config/secret-key.txt", "delta": "0:00:00.102802", "end": "2019-08-20 06:37:55.175247", "msg": "non-zero return code", "rc": 1, "start": "2019-08-20 06:37:55.072445", "stderr": "cat: /etc/kubesphere/openpitrix/kubernetes/iam-config/secret-key.txt: No such file or directory", "stderr_lines": ["cat: /etc/kubesphere/openpitrix/kubernetes/iam-config/secret-key.txt: No such file or directory"], "stdout": "", "stdout_lines": []}

After I ran the script, two jobs reported almost identical errors. Is this a database problem?

Current version of schema app: 0.7
ERROR: Unexpected error
org.flywaydb.core.api.FlywayException: Schema app contains a failed migration to version 0.7 !
at org.flywaydb.core.internal.command.DbMigrate.migrateGroup(DbMigrate.java:227)
at org.flywaydb.core.internal.command.DbMigrate.access$100(DbMigrate.java:53)
at org.flywaydb.core.internal.command.DbMigrate$2.call(DbMigrate.java:163)
at org.flywaydb.core.internal.command.DbMigrate$2.call(DbMigrate.java:160)
at org.flywaydb.core.internal.database.mysql.MySQLNamedLockTemplate.execute(MySQLNamedLockTemplate.java:60)
at org.flywaydb.core.internal.database.mysql.MySQLConnection.lock(MySQLConnection.java:80)
at org.flywaydb.core.internal.schemahistory.JdbcTableSchemaHistory.lock(JdbcTableSchemaHistory.java:150)
at org.flywaydb.core.internal.command.DbMigrate.migrateAll(DbMigrate.java:160)
at org.flywaydb.core.internal.command.DbMigrate.migrate(DbMigrate.java:138)
at org.flywaydb.core.Flyway$1.execute(Flyway.java:947)
at org.flywaydb.core.Flyway$1.execute(Flyway.java:910)
at org.flywaydb.core.Flyway.execute(Flyway.java:1238)
at org.flywaydb.core.Flyway.migrate(Flyway.java:910)
at org.flywaydb.commandline.Main.executeOperation(Main.java:161)
at org.flywaydb.commandline.Main.main(Main.java:108)

I am not sure what happened; this came from the job running on my local setup.
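
One possible direction (an assumption, not a confirmed fix): Flyway refuses to migrate while a failed migration record exists in the schema history, so the failed version 0.7 entry has to be repaired or removed before the job is re-run, for example with the Flyway CLI (the <db-host> and <password> placeholders are illustrative, not values from this issue):

flyway -url=jdbc:mysql://<db-host>:3306/app -user=root -password=<password> repair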

An error at the metrics-server step during the KubeSphere installation; details below

TASK [metrics-server : Metrics-Server | Getting metrics-server installation files] ***
changed: [localhost]

TASK [metrics-server : Metrics-Server | Creating manifests] ********************
changed: [localhost] => (item={u'type': u'config', u'name': u'values', u'file': u'values.yaml'})

TASK [metrics-server : Metrics-Server | Installing metrics-server] *************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install metrics-server /etc/kubesphere/metrics-server --namespace kube-system\n", "delta": "0:00:00.267260", "end": "2019-09-18 06:05:25.431297", "msg": "non-zero return code", "rc": 1, "start": "2019-09-18 06:05:25.164037", "stderr": "Error: UPGRADE FAILED: "metrics-server" has no deployed releases", "stderr_lines": ["Error: UPGRADE FAILED: "metrics-server" has no deployed releases"], "stdout": "", "stdout_lines": []}

PLAY RECAP *********************************************************************
localhost : ok=36 changed=28 unreachable=0 failed=1 skipped=6 rescued=0 ignored=0
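
A common workaround for Helm 2's "UPGRADE FAILED: ... has no deployed releases" (my suggestion, not verified against this issue) is to purge the failed release and then re-run the installer:

helm delete metrics-server --purge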

openpitrix-iam-service-deployment fails to start; it looks like the data was not initialized

[root@test-k8s-master01 ~]# kubectl logs openpitrix-iam-service-deployment-768555b66d-mpps2 -n openpitrix-system
2019-08-12 09:59:24.91467 -INFO- Release OpVersion: [v0.3.5] (grpc_server.go:79)
2019-08-12 09:59:24.9147 -INFO- Git Commit Hash: [9333eae7354fc6634785df7a2fef35e74ed8dce5] (grpc_server.go:79)
2019-08-12 09:59:24.91471 -INFO- Build Time: [2018-11-22 06:37:13] (grpc_server.go:79)
2019-08-12 09:59:24.91471 -INFO- Service [iam-service] start listen at port [9115] (grpc_server.go:81)
2019-08-12 09:59:24.91657 -ERROR- Error 1049: Unknown database 'iam' (event.go:36)
2019-08-12 09:59:24.91661 -ERROR- dbr.select.load.query: map[sql:SELECT user_id, username, password, email, role, description, status, create_time, update_time, status_time FROM user WHERE (email = '[email protected]')] (event.go:37)
panic: Error 1049: Unknown database 'iam'

goroutine 52 [running]:
openpitrix.io/openpitrix/pkg/service/iam.initIAMAccount()
/go/src/openpitrix.io/openpitrix/pkg/service/iam/init.go:78 +0xdc3
created by openpitrix.io/openpitrix/pkg/service/iam.Serve
/go/src/openpitrix.io/openpitrix/pkg/service/iam/server.go:25 +0x5f

I first ran kubectl apply -f kubesphere.yaml to install, then found that some pods would not start. The logs showed that storage was required, so I created a StorageClass and then deleted and recreated those PVCs.
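
A possible check (an assumption based on the job names that appear elsewhere on this page): the iam database is created by the OpenPitrix db init/ctrl jobs, so confirm those jobs completed successfully before the iam service started:

kubectl -n openpitrix-system get jobs
kubectl -n openpitrix-system logs job/openpitrix-iam-db-ctrl-job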

Could the documentation state which svc should be accessed?

kubesphere-controls-system default-http-backend ClusterIP 10.106.49.78 80/TCP 66m
kubesphere-devops-system controller-manager-metrics-service ClusterIP 10.97.112.126 8443/TCP 66m
kubesphere-devops-system s2ioperator ClusterIP 10.100.63.204 443/TCP 66m
kubesphere-devops-system webhook-server-service ClusterIP 10.102.71.7 443/TCP 64m
kubesphere-monitoring-system kube-state-metrics ClusterIP None 8443/TCP,9443/TCP 65m
kubesphere-monitoring-system node-exporter ClusterIP None 9100/TCP 65m
kubesphere-monitoring-system prometheus-k8s ClusterIP None 9090/TCP 65m
kubesphere-monitoring-system prometheus-k8s-system ClusterIP None 9090/TCP 65m
kubesphere-monitoring-system prometheus-operated ClusterIP None 9090/TCP 65m
kubesphere-monitoring-system prometheus-operator ClusterIP None 8080/TCP 65m
kubesphere-system ks-account ClusterIP 10.108.116.167 80/TCP 66m
kubesphere-system ks-apigateway ClusterIP 10.97.54.166 80/TCP 66m
kubesphere-system openldap ClusterIP 10.104.29.200 389/TCP 66m
kubesphere-system redis ClusterIP 10.108.107.71 6379/TCP 66m
openpitrix-system openpitrix-api-gateway NodePort 10.105.177.150 9100:32054/TCP 66m
openpitrix-system openpitrix-app-manager ClusterIP 10.103.229.36 9102/TCP 66m
openpitrix-system openpitrix-category-manager ClusterIP 10.100.48.4 9113/TCP 66m
openpitrix-system openpitrix-cluster-manager ClusterIP 10.108.236.28 9104/TCP 66m
openpitrix-system openpitrix-db ClusterIP 10.106.119.233 3306/TCP 66m
openpitrix-system openpitrix-etcd ClusterIP 10.108.109.223 2379/TCP 66m
openpitrix-system openpitrix-iam-service ClusterIP 10.98.189.159 9115/TCP 66m
openpitrix-system openpitrix-job-manager ClusterIP 10.101.215.83 9106/TCP 66m
openpitrix-system openpitrix-minio ClusterIP 10.101.197.177 9000/TCP 66m
openpitrix-system openpitrix-repo-indexer ClusterIP 10.99.109.56 9108/TCP 66m
openpitrix-system openpitrix-repo-manager ClusterIP 10.110.30.231 9101/TCP 66m
openpitrix-system openpitrix-runtime-manager ClusterIP 10.100.62.84 9103/TCP 66m
openpitrix-system openpitrix-task-manager ClusterIP 10.102.198.80 9107/TCP 66m

Which svc can be exposed?

ImagePullBackOff

Some pods in kubesphere-monitoring-system are not ready because images could not be pulled from dockerhub.qingcloud.com.

kubectl get pod -n kubesphere-monitoring-system

NAME                                   READY   STATUS             RESTARTS   AGE
kube-state-metrics-55cdd5b576-sjv4r    4/4     Running            0          60m
kube-state-metrics-74597f476b-9c64h    2/4     ImagePullBackOff   0          58m
node-exporter-47ttm                    2/2     Running            0          60m
node-exporter-48zhk                    2/2     Running            0          60m
node-exporter-4pm6p                    2/2     Running            0          60m
node-exporter-5xmkd                    1/2     ImagePullBackOff   0          60m
node-exporter-6242c                    2/2     Running            0          60m
node-exporter-62c6z                    2/2     Running            0          60m
node-exporter-6nq6r                    2/2     Running            0          60m
node-exporter-7fzpz                    2/2     Running            0          60m
node-exporter-7hdn4                    2/2     Running            0          60m
node-exporter-7qmzb                    2/2     Running            0          60m
node-exporter-84fcz                    2/2     Running            0          60m
node-exporter-8j5lw                    2/2     Running            0          60m
node-exporter-97skr                    2/2     Running            0          60m
node-exporter-9ff5w                    2/2     Running            0          60m
node-exporter-9qrth                    2/2     Running            0          60m
node-exporter-9rjr5                    2/2     Running            0          60m
node-exporter-9s4vz                    2/2     Running            0          60m
node-exporter-9st48                    2/2     Running            0          60m
node-exporter-9t2jm                    2/2     Running            0          60m
node-exporter-bb9mh                    2/2     Running            0          60m
node-exporter-bq5xt                    2/2     Running            0          60m
node-exporter-c4vrj                    2/2     Running            0          60m
node-exporter-cr75t                    2/2     Running            0          60m
node-exporter-d2lxp                    2/2     Running            0          60m
node-exporter-d6fh9                    2/2     Running            0          60m
node-exporter-f5kbc                    2/2     Running            0          60m
node-exporter-f6rld                    2/2     Running            0          60m
node-exporter-ftrmb                    2/2     Running            0          60m
node-exporter-fvfc7                    2/2     Running            0          60m
node-exporter-g6m24                    2/2     Running            0          60m
node-exporter-gfpsc                    2/2     Running            0          60m
node-exporter-hf8dp                    2/2     Running            0          60m
node-exporter-hnnl5                    2/2     Running            0          60m
node-exporter-hnzc5                    2/2     Running            0          60m
node-exporter-hz8xj                    2/2     Running            0          60m
node-exporter-j75dk                    2/2     Running            0          60m
node-exporter-jbzh2                    2/2     Running            0          60m
node-exporter-jmt4r                    2/2     Running            0          60m
node-exporter-k5ff9                    2/2     Running            0          60m
node-exporter-klcbc                    2/2     Running            0          60m
node-exporter-krzd2                    2/2     Running            0          60m
node-exporter-lvgsq                    2/2     Running            0          60m
node-exporter-lvj7n                    2/2     Running            0          60m
node-exporter-m6bd9                    2/2     Running            0          60m
node-exporter-m6jlt                    2/2     Running            0          60m
node-exporter-m7zsd                    2/2     Running            0          60m
node-exporter-m9fvx                    2/2     Running            0          60m
node-exporter-mgxhp                    2/2     Running            0          60m
node-exporter-mj49z                    2/2     Running            0          60m
node-exporter-mrsch                    2/2     Running            0          60m
node-exporter-mxg6c                    2/2     Running            0          60m
node-exporter-ngfrd                    2/2     Running            0          60m
node-exporter-nxn2l                    2/2     Running            0          60m
node-exporter-p66c2                    2/2     Running            0          60m
node-exporter-p78h9                    2/2     Running            0          60m
node-exporter-p87wh                    2/2     Running            0          60m
node-exporter-p8wfk                    2/2     Running            0          60m
node-exporter-pb95p                    2/2     Running            0          60m
node-exporter-pmvnj                    2/2     Running            0          60m
node-exporter-pvz2m                    2/2     Running            0          60m
node-exporter-pxt8g                    2/2     Running            0          60m
node-exporter-q4rgp                    2/2     Running            0          60m
node-exporter-qpxwk                    2/2     Running            0          60m
node-exporter-r6kkh                    2/2     Running            0          60m
node-exporter-rk2c4                    2/2     Running            0          60m
node-exporter-rtxgv                    2/2     Running            0          60m
node-exporter-s97jw                    2/2     Running            0          60m
node-exporter-sjqlv                    2/2     Running            0          60m
node-exporter-sjsbc                    2/2     Running            0          60m
node-exporter-tp547                    2/2     Running            0          60m
node-exporter-tpjms                    2/2     Running            0          60m
node-exporter-tzx2l                    2/2     Running            0          60m
node-exporter-vg9p2                    2/2     Running            0          60m
node-exporter-vnffm                    2/2     Running            0          60m
node-exporter-vq2kp                    2/2     Running            0          60m
node-exporter-vq78d                    2/2     Running            0          60m
node-exporter-w5lfc                    2/2     Running            0          60m
node-exporter-wqjll                    2/2     Running            0          60m
node-exporter-x6tmd                    2/2     Running            0          60m
node-exporter-xk7m5                    2/2     Running            0          60m
node-exporter-xp7f9                    2/2     Running            0          60m
node-exporter-xprp5                    2/2     Running            0          60m
node-exporter-xqk77                    2/2     Running            0          60m
node-exporter-z74gw                    2/2     Running            0          60m
node-exporter-zdddp                    2/2     Running            0          60m
node-exporter-zk4tx                    2/2     Running            0          60m
node-exporter-zlhvc                    2/2     Running            0          60m
node-exporter-zn8zx                    2/2     Running            0          60m
prometheus-k8s-0                       0/3     CrashLoopBackOff   16         60m
prometheus-k8s-1                       0/3     CrashLoopBackOff   16         60m
prometheus-k8s-system-0                0/3     CrashLoopBackOff   16         60m
prometheus-k8s-system-1                0/3     CrashLoopBackOff   16         60m
prometheus-operator-799455496d-6jq4v   1/1     Running            1          60m
kubectl describe pod prometheus-k8s-0 -n kubesphere-monitoring-system

Events:
  Type     Reason   Age                  From             Message
  ----     ------   ----                 ----             -------
  Normal   Pulled   59m                  kubelet, kube05  Successfully pulled image "dockerhub.qingcloud.com/prometheus/prometheus:v2.5.0"
  Normal   Pulling  59m                  kubelet, kube05  Pulling image "dockerhub.qingcloud.com/coreos/prometheus-config-reloader:v0.27.1"
  Normal   Pulled   59m                  kubelet, kube05  Container image "dockerhub.qingcloud.com/prometheus/prometheus:v2.5.0" already present on machine
  Warning  Failed   59m                  kubelet, kube05  Error: ErrImagePull
  Warning  Failed   59m                  kubelet, kube05  Failed to pull image "dockerhub.qingcloud.com/coreos/prometheus-config-reloader:v0.27.1": rpc error: code = Unknown desc = Error response from daemon: Get https://dockerhub.qingcloud.com/v2/coreos/prometheus-config-reloader/manifests/v0.27.1: error parsing HTTP 403 response body: no error details found in HTTP response body: "{\"err\":\"user is not active\"}\n"
  Warning  Failed   59m                  kubelet, kube05  Error: ErrImagePull
  Normal   Pulling  59m                  kubelet, kube05  Pulling image "dockerhub.qingcloud.com/coreos/configmap-reload:v0.0.1"
  Warning  Failed   59m                  kubelet, kube05  Failed to pull image "dockerhub.qingcloud.com/coreos/configmap-reload:v0.0.1": rpc error: code = Unknown desc = Error response from daemon: Get https://dockerhub.qingcloud.com/v2/coreos/configmap-reload/manifests/v0.0.1: error parsing HTTP 403 response body: no error details found in HTTP response body: "{\"err\":\"user is not active\"}\n"
  Normal   Started  59m (x2 over 59m)    kubelet, kube05  Started container prometheus
  Normal   Created  59m (x2 over 59m)    kubelet, kube05  Created container prometheus
  Normal   BackOff  59m (x2 over 59m)    kubelet, kube05  Back-off pulling image "dockerhub.qingcloud.com/coreos/configmap-reload:v0.0.1"
  Warning  Failed   59m (x2 over 59m)    kubelet, kube05  Error: ImagePullBackOff
  Normal   BackOff  59m (x3 over 59m)    kubelet, kube05  Back-off pulling image "dockerhub.qingcloud.com/coreos/prometheus-config-reloader:v0.27.1"
  Warning  Failed   59m (x3 over 59m)    kubelet, kube05  Error: ImagePullBackOff
  Warning  BackOff  47s (x285 over 59m)  kubelet, kube05  Back-off restarting failed container

I also tried docker login with guest/guest; the login succeeds, but pulling still fails with "user is not active".

$ docker login dockerhub.qingcloud.com

Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

$ docker pull dockerhub.qingcloud.com/coreos/configmap-reload:v0.0.1
Error response from daemon: Get https://dockerhub.qingcloud.com/v2/coreos/configmap-reload/manifests/v0.0.1: error parsing HTTP 403 response body: no error details found in HTTP response body: "{\"err\":\"user is not active\"}\n"

Uninstall does not clean up completely

My KubeSphere does not seem to have been uninstalled cleanly. Now any Pod I create fails with: Internal error occurred: failed calling webhook "sidecar-injector.istio.io": Post https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s: service "istio-sidecar-injector" not found; Deployment does not have minimum availability.

Temporary workaround: kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io istio-sidecar-injector
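In the same spirit, it may be worth checking for other admission webhooks left behind by the uninstall (a hedged suggestion, not an official cleanup procedure):

# list any leftover admission webhooks that still reference services removed by the uninstall
$ kubectl get mutatingwebhookconfigurations,validatingwebhookconfigurations
# delete any that still point at kubesphere/istio services that no longer exist, as in the workaround above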

Installation issue on Alibaba Cloud ECS

Alibaba Cloud ECS, CentOS 7.6

Error: ansible!=2.7.0
Remove python-pip from the scripts/os/centos7.sh script.
Before:
sudo yum install epel-release -y && yum install python-pip python-netaddr sshpass -y
After:
sudo yum install epel-release -y && yum install python-netaddr sshpass -y
Alternatively, keep
sudo yum install epel-release -y && yum install python-pip python-netaddr sshpass -y
and add after it
pip install --upgrade pip

Error about ipaddress or netaddr
Run manually:
pip install --ignore-installed ipaddress
pip install --ignore-installed netaddr

Or add them in scripts/os/centos7.sh before pip install -r os/requirements.txt:
Before:
sudo yum install epel-release -y && yum install python-pip python-netaddr sshpass -y
pip install -r os/requirements.txt
After:
sudo yum install epel-release -y && yum install python-pip python-netaddr sshpass -y
pip install --upgrade pip
pip install --ignore-installed ipaddress
pip install --ignore-installed netaddr
pip install -r os/requirements.txt
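A quick sanity check after applying either change (a minimal sketch; it only verifies that the Python modules the installer scripts rely on are importable and which Ansible ended up installed):

$ python -c "import netaddr, ipaddress; print('ok')"
$ pip show ansible | grep -i version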

ks-alerting | Waiting for alerting-db-init

TASK [ks-alerting : ks-alerting | Waiting for alerting-db-init] ****************
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (30 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (29 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (28 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (27 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (26 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (25 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (24 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (23 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (22 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (21 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (20 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (19 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (18 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (17 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (16 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (15 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (14 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (13 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (12 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (11 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (10 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (9 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (8 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (7 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (6 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (5 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (4 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (3 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (2 retries left).
FAILED - RETRYING: ks-alerting | Waiting for alerting-db-init (1 retries left).
fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "/usr/local/bin/kubectl -n kubesphere-alerting-system get pod | grep alerting-db-init | awk '{print $3}'", "delta": "0:00:00.102598", "end": "2019-08-29 10:36:17.931938", "rc": 0, "start": "2019-08-29 10:36:17.829340", "stderr": "No resources found.", "stderr_lines": ["No resources found."], "stdout": "", "stdout_lines": []}

PLAY RECAP *********************************************************************
localhost : ok=104 changed=94 unreachable=0 failed=1 skipped=41 rescued=0 ignored=2
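The failing task only greps for a Pod whose name contains alerting-db-init, and the stderr "No resources found." suggests that Pod was never created. A few hedged diagnostic commands (the job name is assumed from the Pod name the playbook greps for):

# is the namespace there, and were any alerting workloads created at all?
$ kubectl get ns kubesphere-alerting-system
$ kubectl -n kubesphere-alerting-system get pods,jobs
# if the init job exists but has no Pods, its events usually explain why
$ kubectl -n kubesphere-alerting-system describe job alerting-db-init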

Installer domain names cannot be resolved in a VM built with VMware

  • The local /etc/hosts file does not have the following domain-to-IP mappings configured, yet the local machine can ping the domains;
  • The VM built locally with VMware, however, cannot ping them:
    kubernetes-istio.pek3b.qingstor.com
    kubernetes-helm.pek3b.qingstor.com
    kubesphere-installer.pek3b.qingstor.com
    kubesphere-release.pek3b.qingstor.com
    kubesphere-containernetworking.pek3b.qingstor.com
    Workaround: add the domain-to-IP mappings to the machine's /etc/hosts, as listed below.
139.198.8.203  kubernetes-istio.pek3b.qingstor.com
139.198.8.203  kubernetes-helm.pek3b.qingstor.com
139.198.8.202  kubesphere-installer.pek3b.qingstor.com
139.198.8.205  kubesphere-release.pek3b.qingstor.com
139.198.8.201  kubesphere-containernetworking.pek3b.qingstor.com
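After editing /etc/hosts inside the VM, a quick way to confirm the names now resolve (a minimal check; any of the five hosts can be substituted):

$ getent hosts kubesphere-installer.pek3b.qingstor.com
$ ping -c 1 kubesphere-installer.pek3b.qingstor.com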

Too heavyweight; please consider restructuring the architecture

Persistent storage should be an optional part of the installation, Prometheus monitoring should be optional or able to reuse an existing deployment, and the installation should be simplified as much as possible, aiming for something as lightweight as Rancher.

kubectl logs -n kubesphere-system reports an error

[root@master deploy]# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l job-name=kubesphere-installer -o jsonpath='{.items[0].metadata.name}') -f
error: error executing jsonpath "{.items[0].metadata.name}": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template:
template was:
{.items[0].metadata.name}
object given to jsonpath engine was:
map[string]interface {}{"apiVersion":"v1", "items":[]interface {}{}, "kind":"List", "metadata":map[string]interface {}{"resourceVersion":"", "selfLink":""}}

error: expected 'logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]'.
POD or TYPE/NAME is a required argument for the logs command
See 'kubectl logs -h' for help and example
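This jsonpath error simply means the label selector matched no Pods, so there is nothing to take the name from. A hedged way to find the right Pod is to list what is actually running in the namespace and read the log by its real name:

# see which installer-related Pods exist and what labels they carry
$ kubectl get pods -n kubesphere-system --show-labels
# then tail the installer Pod by name (the Pod name below is illustrative)
$ kubectl logs -n kubesphere-system <ks-installer-pod-name> -f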

File access problem when installing KubeSphere with NetApp as backend storage

While installing KubeSphere with the NFS client provisioner, using NetApp ONTAP Release 8.0.5 as the backend storage, we hit the following problem:

kubectl logs -n openpitrix-system openpitrix-db-deployment-6999f7c49f-krk52 -p
chown: changing ownership of '/var/lib/mysql/.snapshot/hourly.0': Read-only file system
chown: changing ownership of '/var/lib/mysql/.snapshot/hourly.1': Read-only file system
chown: changing ownership of '/var/lib/mysql/.snapshot/hourly.2': Read-only file system
chown: changing ownership of '/var/lib/mysql/.snapshot/nightly.0': Read-only file system
chown: changing ownership of '/var/lib/mysql/.snapshot/hourly.3': Read-only file system
chown: changing ownership of '/var/lib/mysql/.snapshot/hourly.4': Read-only file system
chown: changing ownership of '/var/lib/mysql/.snapshot/hourly.5': Read-only file system
chown: changing ownership of '/var/lib/mysql/.snapshot/nightly.1': Read-only file system
chown: changing ownership of '/var/lib/mysql/.snapshot': Read-only file system

With some searching, we found the solution here:
https://kb.netapp.com/app/answers/answer_view/a_id/1035995

So there are a few options to work around this issue:

  1. hide the .snapshot folder (see the sketch below)
  2. disable the snapshot feature
  3. make the .snapshot folder read-write
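For option 1, on 7-Mode ONTAP the .snapshot directory can be hidden per volume; a minimal sketch, assuming CLI access to the filer and a hypothetical volume name vol_mysql:

# hide the .snapshot directory from NFS clients for this volume
filer> vol options vol_mysql nosnapdir on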

Errors reported when executing 'kubectl apply -f kubesphere.yaml' and it reaches this step

TASK [ks-devops/sonarqube : Sonarqube | Deploy Sonarqube] **********************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-sonarqube /etc/kubesphere/sonarqube/sonarqube-0.13.5.tgz -f /etc/kubesphere/sonarqube/custom-values-sonarqube.yaml --namespace kubesphere-devops-system\n", "delta": "0:00:00.424497", "end": "2019-08-24 20:20:58.029648", "msg": "non-zero return code", "rc": 1, "start": "2019-08-24 20:20:57.605151", "stderr": "Error: UPGRADE FAILED: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"", "stderr_lines": ["Error: UPGRADE FAILED: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system""], "stdout": "", "stdout_lines": []}
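The underlying failure is an RBAC one: Tiller is running under kube-system:default, which is not allowed to manage resources in kube-system. A commonly used (but very permissive) workaround, offered here as an assumption rather than the installer's documented fix, is to grant that service account cluster-admin:

# grant Helm 2's Tiller (running as kube-system:default) cluster-admin
$ kubectl create clusterrolebinding tiller-cluster-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:default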

configmap setting did not take effect

I modified the ConfigMap in the kubesphere.yaml configuration file so that the ks-logging related Pods would not be deployed, but it did not take effect; the es and fluent related Pods were still started.
logging_enable: False

According to the configuration in the ansible-playbook, the ks-logging related services should be deployed only when the value is true:

  • { role: ks-logging, when: "logging_enable == true" }

Check the related ks-logging pods:
$ kubectl get pods -n kubesphere-logging-system
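Two things worth checking here, both assumptions rather than confirmed root causes: the role condition compares against the lowercase string true, so a lowercase boolean is the safer form, and re-running the installer does not necessarily remove workloads created by an earlier run, so leftover logging Pods may need manual cleanup.

# use a lowercase boolean in the configuration (hypothetical snippet of kubesphere.yaml)
logging_enable: false
# remove logging workloads left over from a previous run (only if logging is no longer wanted)
$ kubectl delete namespace kubesphere-logging-system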

etcd is not being monitored

etcd does not show up in monitoring; I tested with both an in-cluster etcd and an external etcd, and neither is monitored. See the screenshot below:
(screenshot: 微信截图_20190823150817.png)

Installing KubeSphere on an existing k8s cluster: the English document duplicates content across step 2 and step 3; please fix both steps

Create the Secret of the CA certificate of your current Kubernetes cluster.
Note: Follow the certificate paths of ca.crt and ca.key of your current cluster to create this secret.

kubectl -n kubesphere-system create secret generic kubesphere-ca
--from-file=ca.crt=/etc/kubernetes/pki/ca.crt
--from-file=ca.key=/etc/kubernetes/pki/ca.key
Create the Secret of the certificate for etcd in your Kubernetes cluster.
Note: Create it with the actual etcd certificate location of the cluster; if etcd does not have a certificate configured, create an empty secret (the following command applies to clusters created by kubeadm).

Note: Create the secret according to your actual etcd certificate paths for the k8s cluster.

If etcd has been configured with certificates, the document currently repeats the step-2 command here:
$ kubectl -n kubesphere-system create secret generic kubesphere-ca
--from-file=ca.crt=/etc/kubernetes/pki/ca.crt
--from-file=ca.key=/etc/kubernetes/pki/ca.key
If etcd has not been configured with certificates.
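For reference, step 3 presumably intends an etcd client-certificate secret rather than a repeat of the step-2 command; on a kubeadm cluster that would look roughly like the following (secret name, namespace and file paths are assumptions based on current KubeSphere documentation, not on the text quoted above):

$ kubectl -n kubesphere-monitoring-system create secret generic kube-etcd-client-certs \
    --from-file=etcd-client-ca.crt=/etc/kubernetes/pki/etcd/ca.crt \
    --from-file=etcd-client.crt=/etc/kubernetes/pki/apiserver-etcd-client.crt \
    --from-file=etcd-client.key=/etc/kubernetes/pki/apiserver-etcd-client.key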

Deploying KubeSphere on k8s: should an Ingress associated with default-http-backend be created?

I deployed on Tencent Cloud's k8s following https://github.com/kubesphere/ks-installer; the deployment basically succeeded, but the console cannot be accessed (screenshot omitted).

(1) Should an Ingress associated with default-http-backend be created?

  • service
# kubectl get services -n kubesphere-controls-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
default-http-backend   LoadBalancer   172.16.255.206   10.0.0.39     80:31380/TCP   1d

(2) Is Kubernetes version 1.13.0+ strictly required?

The cluster currently runs k8s 1.10 with Helm 2.10.0, deployed last year and not yet upgraded.

Just remembered the prerequisites:

kubernetes version: 1.13.0+
helm version: 2.10.0+
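To check both prerequisites quickly before re-installing (a minimal sketch; the flags are those of kubectl and Helm 2 of that era):

$ kubectl version --short
$ helm version --short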

KubeSphere 2.0.2 installation test results on cloud platforms

Three install options:

  1. online
  2. offline
  3. KubeSphere only, i.e., installing KubeSphere on an existing k8s cluster or k8s-based distribution.

All are 64-bit OSes (x86).

Clouds covered: qingcloud, huawei cloud, aliyun, tencent cloud, aws, azure, Google Cloud (each ✓ carries a superscript that maps to the notes below).

Ubuntu 16.04, online, all-in-one: huawei cloud √2, tencent cloud √4
Ubuntu 16.04, online, 1 master 2 nodes: huawei cloud √2, tencent cloud √4
Ubuntu 18.04, online, all-in-one: huawei cloud √2
Ubuntu 18.04, online, 1 master 2 nodes: huawei cloud √2
CentOS 7.6, online, all-in-one: aliyun √5, aws √1
CentOS 7.6, online, 1 master 2 nodes: aliyun √5, aws √1
RHEL 7.4, online, all-in-one: aws √3
RHEL 7.4, online, 1 master 2 nodes: aws √3
  1. Image from AWS Marketplace
  2. Installation errors on huawei cloud and their fixes: #19 (pip installation), #20 (coredns fails to start)
  3. Image from an AWS community AMI; online installation fails, so the offline installation package is needed and ebtables must be commented out, see #23 (comment)
  4. The Ubuntu 16.04 image on tencent cloud is relatively old (16.04.1) and may hit "illegal instruction (core dump)"; for a workaround see #21
  5. Installing on CentOS 7 on aliyun may hit a pip version issue; see #16 for a fix

Online installation fails on Tencent Cloud Ubuntu Server 16.04.1 LTS

Installation environment:
Tencent Cloud: Ubuntu Server 16.04.1 LTS 64-bit, Beijing Zone 3

Installing all-in-one with the online installation package fails:
./all-in-one.sh: line 67: 30166 Illegal instruction (core dumped) ansible-playbook -i $BASE_FOLDER/../k8s/inventory/local/hosts.ini $BASE_FOLDER/../kubesphere/kubesphere.yml -b -e logging_enable=true -e is_allinone=true -e prometheus_replica=1 -e ks_console_replicas=1 -e JavaOpts_Xms='-Xms512m' -e JavaOpts_Xmx='-Xmx512m' -e jenkins_memory_lim="2Gi" -e jenkins_memory_req="800Mi" -e elasticsearch_data_replica=1

(screenshot omitted)

Why does this problem appear when installing on an existing k8s cluster?

k8s version:
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Error message:
TASK [ks-devops/sonarqube : Sonarqube | Deploy Sonarqube] **********************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/helm upgrade --install ks-sonarqube /etc/kubesphere/sonarqube/sonarqube-0.13.5.tgz -f /etc/kubesphere/sonarqube/custom-values-sonarqube.yaml --namespace kubesphere-devops-system\n", "delta": "0:00:00.282004", "end": "2019-09-16 17:24:11.206703", "msg": "non-zero return code", "rc": 1, "start": "2019-09-16 17:24:10.924699", "stderr": "Error: UPGRADE FAILED: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"", "stderr_lines": ["Error: UPGRADE FAILED: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system""], "stdout": "", "stdout_lines": []}

PLAY RECAP *********************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=1 skipped=4 rescued=0 ignored=0
