
kubesphere / kubekey

Install Kubernetes/K3s only, or both Kubernetes/K3s and KubeSphere, together with related cloud-native add-ons. Supports all-in-one, multi-node, and HA installation. 🔥 ⎈ 🐳

Home Page: https://kubesphere.io

License: Apache License 2.0

Go 95.06% Shell 3.58% Dockerfile 0.06% Makefile 1.16% Python 0.13%
installer kubernetes kubeadm k8s kubernetes-deployment kubernetes-cluster hacktoberfest

kubekey's Introduction

CI

English | 中文

👋 Welcome to KubeKey!

KubeKey is an open-source lightweight tool for deploying Kubernetes clusters. It provides a flexible, rapid, and convenient way to install Kubernetes/K3s only, both Kubernetes/K3s and KubeSphere, and related cloud-native add-ons. It is also an efficient tool to scale and upgrade your cluster.

In addition, KubeKey supports building customized air-gapped packages, which makes it convenient to deploy clusters quickly in offline environments.

KubeKey has passed the CNCF Kubernetes conformance verification.

KubeKey can be used in the following three scenarios:

  • Install Kubernetes/K3s only
  • Install Kubernetes/K3s and KubeSphere together in one command
  • Install Kubernetes/K3s first, then deploy KubeSphere on it using ks-installer

Important: If you have existing Kubernetes clusters, please refer to ks-installer (Install KubeSphere on existing Kubernetes cluster).

Supported Environment

Linux Distributions

  • Ubuntu 16.04, 18.04, 20.04, 22.04
  • Debian Bullseye, Buster, Stretch
  • CentOS/RHEL 7
  • AlmaLinux 9.0
  • SUSE Linux Enterprise Server 15

Recommended Linux kernel version: 4.15 or later. You can run the uname -srm command to check the kernel version.
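A quick way to run that check, and to pull the major version out for scripting:

```shell
# Print kernel release and architecture; the release should be 4.15 or later
uname -srm
# Extract just the major version number for a quick scripted comparison
kernel_major=$(uname -r | cut -d. -f1)
echo "kernel major version: ${kernel_major}"
```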

Kubernetes Versions

  • v1.19: v1.19.15
  • v1.20: v1.20.10
  • v1.21: v1.21.14
  • v1.22: v1.22.15
  • v1.23: v1.23.10 (default)
  • v1.24: v1.24.7
  • v1.25: v1.25.3

Looking for more supported versions:
Kubernetes Versions
K3s Versions

Container Manager

  • Docker / containerd / CRI-O / iSula

KubeKey can be set to automatically install Kata Containers and configure a runtime class for it when the container manager is containerd or CRI-O.

Network Plugins

  • Calico / Flannel / Cilium / Kube-OVN / Multus-CNI

KubeKey also allows setting the network plugin to none if you need a custom network plugin.

Requirements and Recommendations

  • Minimum resource requirements (for a minimal installation of KubeSphere only):
    • 2 vCPUs
    • 4 GB RAM
    • 20 GB Storage

/var/lib/docker is mainly used to store container data and will gradually grow during use and operation. For a production environment, it is recommended to mount a separate drive for /var/lib/docker.
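A quick way to see which filesystem backs the Docker data root and how much space it has (adjust the path if your data root differs):

```shell
# Show the filesystem and free space backing the Docker data root;
# fall back to /var/lib if /var/lib/docker does not exist yet
df -h /var/lib/docker 2>/dev/null || df -h /var/lib
```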

  • OS requirements:
    • SSH access to all nodes.
    • Time synchronization across all nodes.
    • sudo, curl, and openssl must be available on all nodes.
    • docker can be installed by yourself or by KubeKey.
    • Red Hat includes SELinux in its releases. It is recommended to disable SELinux or switch it to permissive mode.
  • It's recommended that your OS be clean (without any other software installed); otherwise there may be conflicts.
  • A container image mirror (accelerator) is recommended if you have trouble downloading images from Docker Hub. Configure registry-mirrors for the Docker daemon.
  • By default, KubeKey installs OpenEBS to provision LocalPV for development and testing environments, which is convenient for new users. For production, please use NFS / Ceph / GlusterFS or commercial products as persistent storage, and install the relevant client on all nodes.
  • If you encounter Permission denied when copying, it is recommended to check SELinux and disable it first.
  • Dependency requirements:

KubeKey can install Kubernetes and KubeSphere together. For Kubernetes versions later than 1.18, some dependencies must be installed beforehand. You can refer to the list below to check and install the relevant dependencies on your nodes in advance.

For Kubernetes version ≥ 1.18:

  • socat: Required
  • conntrack: Required
  • ebtables: Optional but recommended
  • ipset: Optional but recommended
  • ipvsadm: Optional but recommended
  • Networking and DNS requirements:
    • Make sure the DNS address in /etc/resolv.conf is available. Otherwise, it may cause DNS issues in the cluster.
    • If your network configuration uses a firewall or security group, you must ensure infrastructure components can communicate with each other through specific ports. It's recommended that you turn off the firewall or follow the port requirements in NetworkAccess.
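Several of the checks above can be scripted before running kk. A minimal preflight sketch (check_deps is a hypothetical helper for illustration, not part of KubeKey):

```shell
#!/bin/sh
# Report which of the listed commands are missing from PATH
check_deps() {
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
  done
}

# Base tools plus the dependencies required for Kubernetes >= 1.18
check_deps sudo curl openssl socat conntrack ebtables ipset ipvsadm

# On Red Hat family systems, SELinux should be disabled or permissive
command -v getenforce >/dev/null 2>&1 && getenforce || true
```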

Usage

Get the KubeKey Executable File

  • The fastest way to get KubeKey is to use the script:

    curl -sfL https://get-kk.kubesphere.io | sh -
    
  • Binary downloads of KubeKey can also be found on the Releases page. Unpack the binary and you are good to go!

  • Build Binary from Source Code

    git clone https://github.com/kubesphere/kubekey.git
    cd kubekey
    make kk

Create a Cluster

Quick Start

Quick Start is for all-in-one installation which is a good start to get familiar with Kubernetes and KubeSphere.

Note: Since Kubernetes does not currently support uppercase node names, uppercase letters in the hostname will lead to subsequent installation errors.

Command

If you have trouble accessing https://storage.googleapis.com, run export KKZONE=cn first.

./kk create cluster [--with-kubernetes version] [--with-kubesphere version]
Examples
  • Create a pure Kubernetes cluster with default version (Kubernetes v1.23.10).

    ./kk create cluster
  • Create a Kubernetes cluster with a specified version.

    ./kk create cluster --with-kubernetes v1.24.1 --container-manager containerd
  • Create a Kubernetes cluster with KubeSphere installed.

    ./kk create cluster --with-kubesphere v3.2.1
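The KKZONE setting mentioned earlier is an ordinary environment variable, so a CN-mirror install is simply:

```shell
# Route component downloads through the CN mirror instead of storage.googleapis.com
export KKZONE=cn
# then run the installer as usual, for example:
# ./kk create cluster --with-kubesphere v3.2.1
```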

Advanced

The advanced installation gives you more control to customize parameters or create a multi-node cluster. Specifically, you create a cluster by specifying a configuration file.

If you have trouble accessing https://storage.googleapis.com, run export KKZONE=cn first.

  1. First, create an example configuration file

    ./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --filename) path]

    examples:

    • create an example config file with default configurations. You can also specify a different filename or a different folder.
    ./kk create config [-f ~/myfolder/abc.yaml]
    • with KubeSphere
    ./kk create config --with-kubesphere v3.2.1
  2. Modify the file config-sample.yaml according to your environment

Note: Since Kubernetes does not currently support uppercase node names, uppercase letters in a worker node's name will lead to subsequent installation errors.

Persistent storage is required in the cluster when KubeSphere is to be installed. The local volume is used by default. If you want to use other persistent storage, please refer to addons.

  3. Create a cluster using the configuration file

    ./kk create cluster -f config-sample.yaml
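For reference, the generated config-sample.yaml generally starts like the sketch below (host names, addresses, and credentials are placeholders; field names follow the KubeKey 2.x sample format, so consult the file kk actually generates for your version):

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master1, address: 192.168.100.2, internalAddress: 192.168.100.2, user: ubuntu, password: "Your-Password"}
  - {name: node1, address: 192.168.100.5, internalAddress: 192.168.100.5, user: ubuntu, password: "Your-Password"}
  roleGroups:
    etcd:
    - master1
    control-plane:
    - master1
    worker:
    - node1
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
```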

Enable Multi-cluster Management

By default, KubeKey will only install a solo cluster without Kubernetes federation. If you want to set up a multi-cluster control plane to centrally manage multiple clusters using KubeSphere, you need to set the ClusterRole in config-sample.yaml. For the multi-cluster user guide, please refer to How to Enable the Multi-cluster Feature.

Enable Pluggable Components

KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable which means you can enable them either before or after installation. By default, KubeSphere will be started with a minimal installation if you do not enable them.

You can enable any of them according to your demands. It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere. Please ensure your machines have sufficient CPU and memory before enabling them. See Enable Pluggable Components for the details.

Add Nodes

Add new node's information to the cluster config file, then apply the changes.

./kk add nodes -f config-sample.yaml

Delete Nodes

You can delete a node with the following command, specifying the name of the node to be removed.

./kk delete node <nodeName> -f config-sample.yaml

Delete Cluster

You can delete the cluster by the following command:

  • If you started with the quick start (all-in-one):
./kk delete cluster
  • If you started with the advanced (created with a configuration file):
./kk delete cluster [-f config-sample.yaml]

Upgrade Cluster

Allinone

Upgrade your cluster to a specified version.

./kk upgrade [--with-kubernetes version] [--with-kubesphere version] 
  • Support upgrading Kubernetes only.
  • Support upgrading KubeSphere only.
  • Support upgrading Kubernetes and KubeSphere.

Multi-nodes

Upgrade your cluster using a specified configuration file.

./kk upgrade [--with-kubernetes version] [--with-kubesphere version] [(-f | --filename) path]
  • If --with-kubernetes or --with-kubesphere is specified, the configuration file will also be updated.
  • Use -f to specify the configuration file which was generated for cluster creation.

Note: Upgrading a multi-node cluster requires a specified configuration file. If the cluster was installed without KubeKey, or the configuration file used for installation cannot be found, you need to create the configuration file yourself or generate it with the following command.

Get cluster info and generate KubeKey's configuration file (optional).

./kk create config [--from-cluster] [(-f | --filename) path] [--kubeconfig path]
  • --from-cluster means fetching the cluster's information from an existing cluster.
  • -f refers to the path where the configuration file is generated.
  • --kubeconfig refers to the path of the kubeconfig file.
  • After generating the configuration file, some parameters need to be filled in, such as the ssh information of the nodes.

Documents

Contributors ✨

Thanks goes to these wonderful people (emoji key):

  • pixiake 💻 📖
  • Forest 💻 📖
  • rayzhou2017 💻 📖
  • shaowenchen 💻 📖
  • Zhao Xiaojie 💻 📖
  • Zack Zhang 💻
  • Akhil Mohan 💻
  • pengfei 📖
  • min zhang 💻 📖
  • zgldh 💻
  • xrjk 💻
  • yonghongshi 💻
  • Honglei 📖
  • liucy1983 💻
  • Lien 📖
  • Tony Wang 📖
  • Hongliang Wang 💻
  • dawn 💻
  • Duan Jiong 💻
  • calvinyv 📖
  • Benjamin Huo 📖
  • Sherlock113 📖
  • fu_changjie 📖
  • yuswift 💻
  • ruiyaoOps 📖
  • LXM 📖
  • sbhnet 💻
  • misteruly 💻
  • John Niang 📖
  • Michael Li 💻
  • 独孤昊天 💻
  • Liu Shaohui 💻
  • Leo Li 💻
  • Roland 💻
  • Vinson Zou 📖
  • tag_gee_y 💻
  • codebee 💻
  • Daniel Owen van Dommelen 🤔
  • Naidile P N 💻
  • Haiker Sun 💻
  • Jing Yu 💻
  • Chauncey 💻
  • Tan Guofu 💻
  • lvillis 📖
  • Vincent He 💻
  • laminar 💻
  • tongjin 💻
  • Reimu 💻
  • Ikko Ashimine 📖
  • Ben Ye 💻
  • yinheli 💻
  • hellocn9 💻
  • Brandan Schmitz 💻
  • yjqg6666 📖 💻
  • 失眠是真滴难受 💻
  • mango 👀
  • wenwutang 💻
  • Shiny Hou 💻
  • zhouqiu0103 💻
  • 77yu77 💻
  • hzhhong 💻
  • zhang-wei 💻
  • Deshi Xiao 💻 📖
  • besscroft 📖
  • 张志强 💻
  • lwabish 💻 📖
  • qyz87 💻
  • ZhengJin Fang 💻
  • Eric_Lian 💻
  • nicognaw 💻
  • 单德庆 💻
  • littleplus 💻
  • Konstantin 🤔
  • kiragoo 💻
  • jojotong 💻
  • littleBlackHouse 💻 📖
  • guangwu 💻 📖
  • wongearl 💻
  • wenwenxiong 💻
  • 柒喵Sakura 💻
  • cui fliter 📖
  • 刘旭 💻
  • yuyu 💻
  • chilianyi 💻
  • Ronald Fletcher 💻
  • baikjy0215 💻
  • knowmost 📖

This project follows the all-contributors specification. Contributions of any kind welcome!


kubekey's Issues

need to judge if config-sample.yaml exists or not

When using ./kk create config to create a default sample file named config-sample.yaml, it would be better to check whether such a file already exists and, if so, ask the user whether to overwrite it or abort.
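The requested behavior could be sketched in shell like this (confirm_overwrite is a hypothetical helper for illustration, not KubeKey code):

```shell
# Sketch only: refuse to clobber an existing config file unless the user confirms
confirm_overwrite() {
  # $1 is the target file; reads y/N from stdin when the file already exists
  if [ -e "$1" ]; then
    printf '%s already exists. Overwrite? [y/N] ' "$1"
    read -r answer
    [ "$answer" = "y" ]
  fi
}
# usage: confirm_overwrite config-sample.yaml && ./kk create config
```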

kubekey test items

notes:

  1. Only the main test operation commands are given below. For other requirements and recommendations, see https://github.com/kubesphere/kubekey.

  2. If you install with a config file, you should modify config.yaml according to your environment.

Allinone

reference guide:
https://github.com/kubesphere/kubekey#allinone

precondition:
The kk executable has been downloaded to your test machine.

test items:

  • to do 1: Create a pure Kubernetes cluster with default version in allinone mode
    command: ./kk create cluster

  • to do 2: Create a Kubernetes cluster with a specified version in allinone mode
    command: ./kk create cluster --with-kubernetes v1.17.6

  • to do 3: Create a Kubernetes cluster with KubeSphere installed in allinone mode
    command: ./kk create cluster --with-kubesphere v3.0.0

  • to do 4: Install Kubernetes only with config file in allinone mode
    create config file command: ./kk create config [--with-kubernetes version]
    #The config file contains only Kubernetes information
    install command: ./kk create cluster -f config-sample.yaml

  • to do 5: Install Kubernetes and KubeSphere with a config file in the current path
    create config file command: ./kk create config [--with-kubernetes version] [--with-storage plugins] [--with-kubesphere version]
    #The config file contains Kubernetes and kubesphere information
    install command: ./kk create cluster -f config-sample.yaml

  • to do 6: Install Kubernetes and KubeSphere with a config file in a different path
    create config file command: ./kk create config [--with-kubernetes version] [--with-storage plugins] [--with-kubesphere version] [(-f | --file) path]
    #The config file contains Kubernetes and kubesphere information
    install command: ./kk create cluster -f config-sample.yaml

  • to do 7: install ks after creating a pure Kubernetes cluster in allinone mode
    reference guide: https://github.com/kubesphere/ks-installer

  • to do 8: Create a Kubernetes cluster with KubeSphere installed on a remote host
    #configure host information as the remote host in the config file
    command: ./kk create cluster -f config.yaml

  • to do 9: add a node after an allinone install
    #Add new node's information to the cluster config file, then apply the changes.
    command: ./kk scale -f config-sample.yaml

  • to do 10: remove the cluster installed by allinone
    command: ./kk delete cluster [-f config-sample.yaml]

Multi-node

reference guide:
https://github.com/kubesphere/kubekey#multi-node

precondition:
The kk executable has been downloaded to your test machine.

test items:

  • to do 1: Install Kubernetes only in multi-node mode
    create config file command: ./kk create config [--with-kubernetes version]
    #The config file contains only Kubernetes information
    install command: ./kk create cluster -f config.yaml

  • to do 2: Install Kubernetes and KubeSphere in multi-node mode
    create config file command: ./kk create config [--with-kubernetes version] [--with-storage plugins] [--with-kubesphere version]
    #The config file contains Kubernetes and kubesphere information
    install command: ./kk create cluster -f config.yaml

  • to do 3: install ks after installing only k8s in multi-node mode
    reference guide: https://github.com/kubesphere/ks-installer

  • to do 4: add a node after a multi-node install
    #Add new node's information to the cluster config file, then apply the changes.
    command: ./kk scale -f config-sample.yaml

  • to do 5: Change the multiple nodes to HA
    #Add the new node's information to the cluster config file
    #add the new node to the master role
    #Set the address in the controlPlaneEndpoint
    command: ./kk scale -f config-sample.yaml

  • to do 6: remove the cluster installed in multi-node mode
    command: ./kk delete cluster [-f config-sample.yaml]

storage type

reference guide:
https://github.com/kubesphere/kubekey#multi-node

precondition:
The kk executable has been downloaded to your test machine.

test items:

  • to do 1: Install Kubernetes and KubeSphere with nfsClient in multi-node mode
    create config file command: ./kk create config --with-storage nfsClient --with-kubesphere
    install command: ./kk create cluster -f config.yaml

  • to do 2: Install Kubernetes and KubeSphere with rbd in multi-node mode
    create config file command: ./kk create config --with-storage rbd --with-kubesphere
    install command: ./kk create cluster -f config.yaml

  • to do 3: Install Kubernetes and KubeSphere with glusterfs in multi-node mode
    create config file command: ./kk create config --with-storage glusterfs --with-kubesphere
    install command: ./kk create cluster -f config.yaml

os

test purpose:
Install Kubernetes and KubeSphere on different operating systems using KubeKey.

precondition:
Create 3 test hosts based on the operating systems in the test items below.

common steps:

  1. On one of the 3 hosts, execute the command:
    curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
    chmod +x kk

  2. On the same host, execute the command:
    ./kk create config [--with-kubernetes version] [--with-storage plugins] [--with-kubesphere version]

  3. Modify the file config.yaml according to your environment

  4. On the same host, execute the command:
    ./kk create cluster -f config.yaml

test items:

  • to do 1: Install Kubernetes and KubeSphere on CentOS 7 in multi-node mode
  • to do 2: Install Kubernetes and KubeSphere on RHEL 7 in multi-node mode
  • to do 3: Install Kubernetes and KubeSphere on Ubuntu 16.04 in multi-node mode
  • to do 4: Install Kubernetes and KubeSphere on Ubuntu 18.04 in multi-node mode
  • to do 5: Install Kubernetes and KubeSphere on Debian Buster in multi-node mode
  • to do 6: Install Kubernetes and KubeSphere on Debian Stretch in multi-node mode

platform

test purpose:
Install Kubernetes and KubeSphere on different platforms using KubeKey.

precondition:
Create 3 test hosts based on the platforms in the test items below.

common steps:

  1. On one of the 3 hosts, execute the command:
    curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
    chmod +x kk

  2. On the same host, execute the command:
    ./kk create config [--with-kubernetes version] [--with-storage plugins] [--with-kubesphere version]

  3. Modify the file config.yaml according to your environment

  4. On the same host, execute the command:
    ./kk create cluster -f config.yaml

test items:

  • to do 1: Install Kubernetes and KubeSphere on Aliyun in multi-node mode
  • to do 2: Install Kubernetes and KubeSphere on AWS in multi-node mode
  • to do 3: Install Kubernetes and KubeSphere on Azure in multi-node mode
  • to do 4: Install Kubernetes and KubeSphere on GCE in multi-node mode
  • to do 5: Install Kubernetes and KubeSphere on Huawei Cloud in multi-node mode
  • to do 6: Install Kubernetes and KubeSphere on OpenStack in multi-node mode
  • to do 7: Install Kubernetes and KubeSphere on QingCloud in multi-node mode
  • to do 8: Install Kubernetes and KubeSphere on Tencent Cloud in multi-node mode
  • to do 9: Install Kubernetes and KubeSphere on VMware in multi-node mode
  • to do 10: Install Kubernetes and KubeSphere on a physical host in multi-node mode

k8s version

test purpose:
Install different versions of K8s via KubeKey.

precondition:
Create 3 test hosts.

common steps:

  1. On one of the 3 hosts, execute the command:
    curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
    chmod +x kk

  2. On the same host, execute the command:
    ./kk create config [--with-kubernetes version] [--with-storage plugins] [--with-kubesphere version]

  3. Modify the file config.yaml according to your environment, and set the version of k8s according to the version in the test item below

  4. On the same host, execute the command:
    ./kk create cluster -f config.yaml

test items:

  • to do 1: Install Kubernetes(v1.15.12) and KubeSphere in multi-node mode
  • to do 2: Install Kubernetes(v1.16.10) and KubeSphere in multi-node mode
  • to do 3: Install Kubernetes(v1.17.6) and KubeSphere in multi-node mode
  • to do 4: Install Kubernetes(v1.18.3) and KubeSphere in multi-node mode

provide -y

Users may have DevOps requirements for automated testing. It would be better to provide a command such as ./kk create cluster -f xxxx.yaml -y so that the installation continues without manually typing yes.
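Until such a flag exists, one shell workaround is to feed the confirmation on stdin (whether this works depends on how kk reads its prompt):

```shell
# "yes yes" prints the word yes forever; head -n 1 just samples the stream here
yes yes | head -n 1
# a possible workaround for the interactive prompt:
# yes yes | ./kk create cluster -f config-sample.yaml
```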

all-in-one install failed on centos7.7

I have installed sudo/curl/openssl/ebtables/socat/ipset/conntrack.
Linux kernel 3.10, a clean environment without Docker.

exec

curl -O -k https://kubernetes.pek3b.qingstor.com/tools/kubekey/kk
chmod +x ./kk
./kk create cluster --with-kubesphere v3.0.0

return

clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged


INFO[16:38:40 CST] Congradulations! Installation is successful.

exec
kubectl logs -n kubesphere-system ks-installer-847c9d85b9-qgkdv -f
return

TASK [ks-core/prepare : KubeSphere | Init KubeSphere] **************************
cated unchanged\nuser.iam.kubesphere.io/admin unchanged\nworkspacerole.iam.kubesphere.io/role-template-view-projects unchanged\nworkspacerole.iam.kubesphere.io/role-template-create-projects unchanged\nworkspacerole.iam.kubesphere.io/role-template-manage-projects 
..........
"rolebase.iam.kubesphere.io/role-template-view-secrets unchanged"]}
changed: [localhost] => (item=webhook-secret.yaml)
changed: [localhost] => (item=kubesphere-config.yaml)

PLAY RECAP *********************************************************************
localhost                  : ok=10   changed=6    unreachable=0    failed=1    skipped=3    rescued=0    ignored=0

it has a same result after i delete the ks-installer pod.

exec
kubectl get po -A
return

NAMESPACE           NAME                                           READY   STATUS    RESTARTS   AGE
kube-system         calico-kube-controllers-68dc4cf88f-bhgxm       1/1     Running   0          45m
kube-system         calico-node-ch8pg                              1/1     Running   0          45m
kube-system         coredns-79c6f6447f-cksk6                       1/1     Running   0          45m
kube-system         coredns-79c6f6447f-wqpw2                       1/1     Running   0          45m
kube-system         kube-apiserver-ks-allinone                     1/1     Running   0          45m
kube-system         kube-controller-manager-ks-allinone            1/1     Running   0          45m
kube-system         kube-proxy-qgrl4                               1/1     Running   0          45m
kube-system         kube-scheduler-ks-allinone                     1/1     Running   0          45m
kube-system         nodelocaldns-f2d8c                             1/1     Running   0          45m
kube-system         openebs-localpv-provisioner-84956ddb89-8tqp2   1/1     Running   0          38m
kube-system         openebs-ndm-bg4qg                              1/1     Running   0          38m
kube-system         openebs-ndm-operator-64c57ccccf-42cf2          1/1     Running   1          38m
kubesphere-system   ks-installer-847c9d85b9-qgkdv                  1/1     Running   0          14m
kubesphere-system   openldap-0                                     1/1     Running   0          32m
kubesphere-system   redis-6fd6c6d6f9-5q584                         1/1     Running   0          33m
[root@localhost ~]# kubectl get sc -A
NAME              PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local (default)   openebs.io/local   Delete          WaitForFirstConsumer   false                  52m

[root@localhost ~]# kubectl get crds
NAME                                             CREATED AT
applications.app.k8s.io                          2020-06-18T08:10:47Z
bgpconfigurations.crd.projectcalico.org          2020-06-18T07:58:43Z
bgppeers.crd.projectcalico.org                   2020-06-18T07:58:43Z
blockaffinities.crd.projectcalico.org            2020-06-18T07:58:43Z
blockdeviceclaims.openebs.io                     2020-06-18T08:05:22Z
blockdevices.openebs.io                          2020-06-18T08:05:18Z
clusterinformations.crd.projectcalico.org        2020-06-18T07:58:43Z
clusters.cluster.kubesphere.io                   2020-06-18T08:10:47Z
destinationrules.istio.kubesphere.io             2020-06-18T08:10:47Z
devopsprojects.devops.kubesphere.io              2020-06-18T08:10:47Z
felixconfigurations.crd.projectcalico.org        2020-06-18T07:58:43Z
gateways.istio.kubesphere.io                     2020-06-18T08:10:47Z
globalnetworkpolicies.crd.projectcalico.org      2020-06-18T07:58:43Z
globalnetworksets.crd.projectcalico.org          2020-06-18T07:58:43Z
globalrolebindings.iam.kubesphere.io             2020-06-18T08:10:47Z
globalroles.iam.kubesphere.io                    2020-06-18T08:10:47Z
hostendpoints.crd.projectcalico.org              2020-06-18T07:58:43Z
ipamblocks.crd.projectcalico.org                 2020-06-18T07:58:43Z
ipamconfigs.crd.projectcalico.org                2020-06-18T07:58:43Z
ipamhandles.crd.projectcalico.org                2020-06-18T07:58:43Z
ippools.crd.projectcalico.org                    2020-06-18T07:58:43Z
namespacenetworkpolicies.network.kubesphere.io   2020-06-18T08:10:48Z
networkpolicies.crd.projectcalico.org            2020-06-18T07:58:43Z
networksets.crd.projectcalico.org                2020-06-18T07:58:43Z
pipelines.devops.kubesphere.io                   2020-06-18T08:10:47Z
rolebases.iam.kubesphere.io                      2020-06-18T08:10:47Z
s2ibinaries.devops.kubesphere.io                 2020-06-18T08:10:47Z
s2ibuilders.devops.kubesphere.io                 2020-06-18T08:10:47Z
s2ibuildertemplates.devops.kubesphere.io         2020-06-18T08:10:47Z
s2iruns.devops.kubesphere.io                     2020-06-18T08:10:47Z
servicepolicies.servicemesh.kubesphere.io        2020-06-18T08:10:48Z
storageclasscapabilities.storage.kubesphere.io   2020-06-18T08:10:48Z
strategies.servicemesh.kubesphere.io             2020-06-18T08:10:48Z
users.iam.kubesphere.io                          2020-06-18T08:10:47Z
virtualservices.istio.kubesphere.io              2020-06-18T08:10:47Z
volumesnapshotclasses.snapshot.storage.k8s.io    2020-06-18T08:10:48Z
volumesnapshotcontents.snapshot.storage.k8s.io   2020-06-18T08:10:48Z
volumesnapshots.snapshot.storage.k8s.io          2020-06-18T08:10:48Z
workspacerolebindings.iam.kubesphere.io          2020-06-18T08:10:47Z
workspaceroles.iam.kubesphere.io                 2020-06-18T08:10:47Z
workspaces.tenant.kubesphere.io                  2020-06-18T08:10:48Z
workspacetemplates.tenant.kubesphere.io          2020-06-18T08:10:48Z

allinone install failed

root@i-jxd66w7b:~# ./kk create cluster
INFO[14:30:41 UTC] Install Files Download
INFO[14:30:41 UTC] Kubeadm being download ...
INFO[14:30:41 UTC] Kubelet being download ...
INFO[14:30:43 UTC] Kubectl being download ...
INFO[14:30:44 UTC] KubeCni being download ...
INFO[14:30:45 UTC] Helm being download ...
INFO[14:30:46 UTC] Initialize operating system
WARN[14:30:46 UTC] Task failed…
WARN[14:30:46 UTC] error was: failed to connect to 172.23.0.2: could not establish connection to 172.23.0.2:22: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Error: failed to download kube binaries: failed to connect to 172.23.0.2: could not establish connection to 172.23.0.2:22: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Usage:
  kk create cluster [flags]

Flags:
      --add string      add plugins
  -f, --config string   cluster info config
      --debug           debug info (default true)
  -h, --help            help for cluster
      --pkg string      release package (offline)
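The handshake failure above means kk could not authenticate over SSH to the node. A common remedy (an assumption, not from the original report) is to install a public key for the connecting user, or to put the node's password in the hosts section of the config file:

```shell
# Create a key pair if one does not exist yet, then install it on the node;
# the address below is the node from the error message
mkdir -p ~/.ssh
[ -f ~/.ssh/id_ed25519 ] || ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519 -q
# ssh-copy-id root@172.23.0.2   # requires password auth to be enabled once
```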

Check the environment before installing docker

INFO[14:41:47 UTC] Installing docker……
ERRO[14:41:52 UTC] failed to install docker: failed to exec command: sudo sh -c "[ -z $(which docker) ] && curl https://kubernetes.pek3b.qingstor.com/tools/kubekey/docker-install.sh | sh ; systemctl enable docker": Process exited with status 1  node=172.23.0.2
WARN[14:41:52 UTC] Task failed…
WARN[14:41:52 UTC] error was: interrupted by error
Error: failed to install docker: interrupted by error

Root cause:

root@i-jxd66w7b:~# curl get.docker.com -L | bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   183  100   183    0     0    375      0 --:--:-- --:--:-- --:--:--   374
100 13328  100 13328    0     0   6690      0  0:00:01  0:00:01 --:--:-- 22029
# Executing docker install script, commit: 442e66405c304fa92af8aadaa1d9b31bf4b0ad94
+ sh -c 'apt-get update -qq >/dev/null'
+ sh -c 'DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null'
E: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?

How to add multiple plugins

For example, I want to create config file with several plugins. Is it like this one?

./kk create config --add kubesphere,localVolume

Ubuntu multi-node install failed

hosts:

  • {name: master1, address: 192.168.100.2, internalAddress: 192.168.100.2, user: ubuntu, password: xxxx}
  • {name: master2, address: 192.168.100.3, internalAddress: 192.168.100.3, user: ubuntu, password: xxxxx}
  • {name: master3, address: 192.168.100.4, internalAddress: 192.168.100.4, user: ubuntu, password: xxxxx}
  • {name: node1, address: 192.168.100.5, internalAddress: 192.168.100.5, user: ubuntu, password: xxxxx}
  • {name: node2, address: 192.168.100.9, internalAddress: 192.168.100.9, user: ubuntu, password: xxxx}
  • {name: node3, address: 192.168.100.10, internalAddress: 192.168.100.10, user: ubuntu, password: xxxx}

The error is:

INFO[03:31:48 UTC] Generating etcd certs
ERRO[03:31:49 UTC] Failed to write etcd certs content: Failed to exec command: sudo -E /bin/sh -c "echo cat: /etc/ssl/etcd/ssl/ca-key.pem: Permission denied | base64 -d > /etc/ssl/etcd/ssl/ca-key.pem": Process exited with status 1 node=192.168.100.4
ERRO[03:31:49 UTC] Failed to write etcd certs content: Failed to exec command: sudo -E /bin/sh -c "echo cat: /etc/ssl/etcd/ssl/admin-master1-key.pem: Permission denied | base64 -d > /etc/ssl/etcd/ssl/admin-master1-key.pem": Process exited with status 1 node=192.168.100.3
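Note the embedded "cat: ... Permission denied": the cert files are read on the machine where kk itself runs before being written to the remote nodes. A likely fix (an assumption based on that message): run kk with enough privileges to read /etc/ssl/etcd/ssl:

```shell
# Check who owns the locally generated etcd certs
sudo ls -l /etc/ssl/etcd/ssl/

# Re-run kk as a user that can read them
sudo ./kk create cluster -f config-sample.yaml
```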

failed to install ks+k8s v1.18.5 using binary

I built the binary from the kubekey Git repo, but the installation fails with the following errors:

$ ./kk create cluster -f config-sample.yaml
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name  | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node2 | y    | y    | y       | y        | y     | y     | y         | y      |            |             |                  | CST 19:11:16 |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[19:11:22 CST] Downloading Installation Files
INFO[19:11:22 CST] Downloading kubeadm ...
WARN[19:11:23 CST] Task failed ...
WARN[19:11:23 CST] error: Failed to load kube binaries: No SHA256 found for v1.18.5. v1.18.5 is not supported.
Error: Failed to download kube binaries: Failed to load kube binaries: No SHA256 found for v1.18.5. v1.18.5 is not supported.
Usage:
  kk create cluster [flags]

Flags:
  -f, --file string              Path to a configuration file
  -h, --help                     help for cluster
      --with-kubernetes string   Specify a supported version of kubernetes (default "v1.17.6")
      --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)

Global Flags:
      --debug   Print detailed information (default true)

Failed to download kube binaries: Failed to load kube binaries: No SHA256 found for v1.18.5. v1.18.5 is not supported.
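A kk binary only ships SHA256 checksums for the Kubernetes versions it was built to support, so v1.18.5 is rejected here. Before picking a version, it may help to list what your build supports; the flag below exists in later kk releases, so treat its availability in any given build as an assumption:

```shell
# List the Kubernetes versions this kk build can install
./kk version --show-supported-k8s
```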

allinone install failed because of missing sc

./kk create cluster --with-kubernetes v1.18.3 failed. The error message is shown below. But I think all-in-one should have localVolume as the default StorageClass.

TASK [preinstall : Stop if defaultStorageClass was not found] ******************
fatal: [localhost]: FAILED! => {
"assertion": ""(default)" in default_storage_class_check.stdout",
"changed": false,
"evaluated_to": false,
"msg": "Default StorageClass was not found !"
}
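The assertion fails because no StorageClass in the cluster is annotated as the default. A hedged fix sketch using standard kubectl commands (the class name "local" is illustrative; substitute whatever kubectl get storageclass reports):

```shell
# See which StorageClasses exist and whether any is marked "(default)"
kubectl get storageclass

# Mark an existing class as the default (replace "local" with yours)
kubectl patch storageclass local -p \
  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```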

kk cluster all-in-one install failed

install allinone failed (CentOS Linux release 7.7.1908 (Core))

root@ks-allinone:/root # kk create cluster --with-kubesphere
...
  /usr/local/bin/kubectl apply -f /etc/kubernetes/calico.yaml
....
The Deployment "calico-kube-controllers" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"k8s-app":"calico-kube-controllers"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
ERRO[17:45:11 CST] Failed to deploy calico: Failed to exec command: /usr/local/bin/kubectl apply -f /etc/kubernetes/calico.yaml: Process exited with status 1  node=10.160.17.3
WARN[17:45:11 CST] Task failed ...
WARN[17:45:11 CST] error: interrupted by error
Error: Failed to deploy network plugin: interrupted by error
Usage:
  kk create cluster [flags]

Flags:
  -f, --file string              Path to a configuration file
  -h, --help                     help for cluster
      --with-kubernetes string   Specify a supported version of kubernetes (default "v1.17.6")
      --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)

Global Flags:
      --debug   Print detailed information (default true)

Failed to deploy network plugin: interrupted by error

Calico failed to install. I then ran the same kubectl command manually, and it produced the same error.

The calico.yaml as follows:

...
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
...
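spec.selector on a Deployment is immutable, so kubectl apply cannot update a pre-existing calico-kube-controllers Deployment that was created with a different selector. A common workaround (hedged: this briefly recreates the controller, which is tolerable since it has a single replica):

```shell
# Remove the old Deployment whose selector no longer matches
kubectl -n kube-system delete deployment calico-kube-controllers

# Re-apply the manifest generated by kk
kubectl apply -f /etc/kubernetes/calico.yaml
```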

improvement suggestion

I found the deployment of ks-installer takes significant time. It would be better to print flashing dots like ....... as the previous installer does, so users know they should wait instead of suspecting it's dead.

INFO[10:12:33 CST] Deploying KubeSphere ...
[ks-allinone 192.168.100.2] MSG:
namespace/kubesphere-system created
configmap/ks-installer created
serviceaccount/ks-installer created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created

Failed to scale multi-node (1 master, 2 nodes) to HA (3 masters, 3 nodes)

The failure message is as follows:

Waiting for etcd to start
INFO[18:11:54 CST] Get cluster status
[node1 192.168.0.2] MSG:
/etc/kubernetes/admin.conf
[node1 192.168.0.2] MSG:
v1.17.6
WARN[18:14:10 CST] Task failed ...
WARN[18:14:10 CST] error: Failed to upload kubeadm certs: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init phase upload-certs --upload-certs": Process exited with status 1
Error: Failed to get cluster status: Failed to upload kubeadm certs: Failed to exec command: sudo -E /bin/sh -c "/usr/local/bin/kubeadm init phase upload-certs --upload-certs": Process exited with status 1
Usage:
  kk scale [flags]

Flags:
      --debug          (default true)
  -f, --file string   configuration file name
  -h, --help          help for scale

My current config-sample.yaml:

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: config-sample
spec:
  hosts:
  - {name: node1, address: 192.168.0.2, internalAddress: 192.168.0.2, password: Qcloud@123}
  - {name: node2, address: 192.168.0.3, internalAddress: 192.168.0.3, password: Qcloud@123}
  - {name: node3, address: 192.168.0.4, internalAddress: 192.168.0.4, password: Qcloud@123}
  - {name: node4, address: 192.168.0.5, internalAddress: 192.168.0.5, password: Qcloud@123}
  - {name: node5, address: 192.168.0.6, internalAddress: 192.168.0.6, password: Qcloud@123}
  - {name: node6, address: 192.168.0.7, internalAddress: 192.168.0.7, password: Qcloud@123}
  roleGroups:
    etcd:
    - node1
    - node4
    - node6
    master:
    - node1
    - node4
    - node6
    worker:
    - node2
    - node3
    - node5
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "192.168.0.253"
    port: "6443"
  kubernetes:
    version: v1.17.6
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kube_pods_cidr: 10.233.64.0/18
    kube_service_cidr: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []

My previous config-sample.yaml:

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: config-sample
spec:
  hosts:
  - {name: node1, address: 192.168.0.2, internalAddress: 192.168.0.2, password: Qcloud@123}
  - {name: node2, address: 192.168.0.3, internalAddress: 192.168.0.3, password: Qcloud@123}
  - {name: node3, address: 192.168.0.4, internalAddress: 192.168.0.4, password: Qcloud@123}
  roleGroups:
    etcd:
    - node1
    master:
    - node1
    worker:
    - node2
    - node3
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: "6443"
  kubernetes:
    version: v1.17.6
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kube_pods_cidr: 10.233.64.0/18
    kube_service_cidr: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []

Install component support

KubeKey should not only install K8s or KubeSphere, but also support installing individual components. It could look like:

kubekey install kubesphere/devops
kubekey install isv/component

/enhancement

go build failed

Building with build.sh fails:

root@ks-allinone:/root/kubekey git:(master) # sh build.sh
go: github.com/kballard/go-shellquote@v0.0.0-20180428030007-95032a82bc51: Get "https://proxy.golang.org/github.com/kballard/go-shellquote/@v/v0.0.0-20180428030007-95032a82bc51.mod": dial tcp: i/o timeout

root@ks-allinone:/root/kubekey git:(master) # curl -I https://proxy.golang.org/github.com/kballard/go-shellquote/@v/v0.0.0-20180428030007-95032a82bc51.mod
HTTP/1.1 200 OK

root@ks-allinone:/root/kubekey git:(master) # curl -I www.google.com
HTTP/1.1 200 OK

Plain go build also fails:

root@ks-allinone:/root/kubekey git:(master) # go build -v
get "sigs.k8s.io/structured-merge-diff/v3": found meta tag get.metaImport{Prefix:"sigs.k8s.io/structured-merge-diff", VCS:"git", RepoRoot:"https://github.com/kubernetes-sigs/structured-merge-diff"} at //sigs.k8s.io/structured-merge-diff/v3?go-get=1
get "sigs.k8s.io/structured-merge-diff/v3": verifying non-authoritative meta tag
go: github.com/spf13/[email protected] requires
        github.com/spf13/[email protected] requires
        github.com/xordataexchange/[email protected]: invalid pseudo-version: git fetch --unshallow -f https://github.com/xordataexchange/crypt in /root/go/pkg/mod/cache/vcs/d29aab0f2290694a8954e7fb32dffd4c47eb9d61dbe1217bcc89ef3a6e89f32d: exit status 128:
        fatal: git fetch-pack: expected shallow list

From a machine in China:

hugo@zack:/Users/hugo/go/src/kubesphere.io/kubekey git:(master*) $ ./build.sh -p
go: github.com/kballard/go-shellquote@v0.0.0-20180428030007-95032a82bc51: Get "https://goproxy.cn/github.com/kballard/go-shellquote/@v/v0.0.0-20180428030007-95032a82bc51.mod": proxyconnect tcp: dial tcp 139.198.121.163:81: i/o timeout
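Both failures point at an unreachable module proxy; the second one additionally goes through a broken local HTTP proxy (port 81). A build-environment sketch, assuming Go 1.13 or later:

```shell
# Point the Go toolchain at a reachable module proxy
go env -w GOPROXY=https://goproxy.cn,direct

# Make sure no stale HTTP proxy variable hijacks the connection
unset HTTP_PROXY HTTPS_PROXY http_proxy https_proxy

# Then retry the build
./build.sh -p
```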

Failed to install K8s in CentOS 7.6 on QingCloud

See the failed logs:

[ks-allinone] Downloading image: calico/pod2daemon-flexvol:v3.13.0
INFO[23:51:06 CST] Generating etcd certs
ERRO[23:51:06 CST] Failed to generate etcd certs: Failed to exec command: sudo -E /bin/sh -c "mkdir -p /etc/ssl/etcd/ssl && /bin/bash -x /tmp/kubekey/make-ssl-etcd.sh -f /tmp/kubekey/openssl.conf -d /etc/ssl/etcd/ssl": Process exited with status 127  node=192.168.0.2
WARN[23:51:06 CST] Task failed ...
WARN[23:51:06 CST] error: interrupted by error
Error: Failed to generate etcd certs: interrupted by error
Usage:
  kk create cluster [flags]

Flags:
      --all           deploy kubernetes and kubesphere
      --debug         debug info (default true)
  -f, --file string   configuration file name
  -h, --help          help for cluster

RHEL support

Lots of large enterprises are using RHEL. So please support such OS.

failed on ubuntu 18.04

The error is:

ERRO[16:06:41 CST] Failed to install docker: Failed to exec command: sudo -E /bin/sh -c "if [ -z $(which docker) ]; then curl https://kubernetes.pek3b.qingstor.com/tools/kubekey/docker-install.sh | sh && systemctl enable docker && echo ewogICJsb2ctb3B0cyI6IHsKICAgICJtYXgtc2l6ZSI6ICI1bSIsCiAgICAibWF4LWZpbGUiOiIzIgogIH0sCiAgImV4ZWMtb3B0cyI6IFsibmF0aXZlLmNncm91cGRyaXZlcj1zeXN0ZW1kIl0KfQo= | base64 -d > /etc/docker/daemon.json && systemctl reload docker; fi": Process exited with status 100 node=192.168.100.2
WARN[16:06:41 CST] Task failed ...
WARN[16:06:41 CST] error: interrupted by error
Error: Failed to install docker: interrupted by error
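For reference, the base64 payload embedded in the failing command is just the daemon.json that kk writes to /etc/docker/daemon.json; decoding it locally shows what the installer was trying to configure:

```shell
# The exact payload from the error message above
payload='ewogICJsb2ctb3B0cyI6IHsKICAgICJtYXgtc2l6ZSI6ICI1bSIsCiAgICAibWF4LWZpbGUiOiIzIgogIH0sCiAgImV4ZWMtb3B0cyI6IFsibmF0aXZlLmNncm91cGRyaXZlcj1zeXN0ZW1kIl0KfQo='

# Decode to see the daemon.json kk would place on the node
echo "$payload" | base64 -d
```

It sets log rotation options and the systemd cgroup driver; the exit status 100 itself comes from apt, as the dpkg error below shows.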

After running the following:

E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.
root@ks-allinone:/home/ubuntu# dpkg --configure -a

It seems to work.

Install failed because the binary sha256 does not match

[root@ip-172-31-2-146 centos]# ./kk create cluster -f config-sample.yaml
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| name  | sudo | curl | openssl | ebtables | socat | ipset | conntrack | docker | nfs client | ceph client | glusterfs client | time         |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+
| node1 | y    | y    | y       | y        | y     | y     | y         | y      | y          | y           | y                | UTC 02:12:18 |
+-------+------+------+---------+----------+-------+-------+-----------+--------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, you should ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: yes
INFO[02:12:21 UTC] Downloading Installation Files
INFO[02:12:21 UTC] Downloading kubeadm ...
WARN[02:12:22 UTC] Task failed ...
WARN[02:12:22 UTC] error: Failed to load kube binaries: SHA256 no match. d4cfc9a0a734ba015594974ee4253b8965b95cdb6e83d8a6a946675aad418b40 not in d2149f7261e055efcbcf34fc803a1549e1262054afe2ff88f0e0c2602b972fcb  /home/centos/kubekey/v1.17.6/amd64/kubeadm
Error: Failed to download kube binaries: Failed to load kube binaries: SHA256 no match. d4cfc9a0a734ba015594974ee4253b8965b95cdb6e83d8a6a946675aad418b40 not in d2149f7261e055efcbcf34fc803a1549e1262054afe2ff88f0e0c2602b972fcb  /home/centos/kubekey/v1.17.6/amd64/kubeadm
Usage:
  kk create cluster [flags]

Flags:
  -f, --file string              Path to a configuration file
  -h, --help                     help for cluster
      --with-kubernetes string   Specify a supported version of kubernetes (default "v1.17.6")
      --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)

Global Flags:
      --debug   Print detailed information (default true)

Failed to download kube binaries: Failed to load kube binaries: SHA256 no match. d4cfc9a0a734ba015594974ee4253b8965b95cdb6e83d8a6a946675aad418b40 not in d2149f7261e055efcbcf34fc803a1549e1262054afe2ff88f0e0c2602b972fcb  /home/centos/kubekey/v1.17.6/amd64/kubeadm
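The mismatch usually means a truncated or corrupted download, e.g. a proxy returning an HTML error page instead of the binary. Deleting the cached file and re-running kk forces a fresh download; a small helper like this (illustrative, not part of kk) can confirm the file before retrying:

```shell
# Hypothetical helper: compare a file's SHA256 against an expected digest
verify_sha256() {
    file="$1"; expected="$2"
    actual=$(sha256sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "OK"
    else
        echo "MISMATCH: got $actual"
        return 1
    fi
}

# Example: drop the bad cached binary so kk downloads it again
# rm -f /home/centos/kubekey/v1.17.6/amd64/kubeadm
```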

Installer failed due to network

When the network is not very good, the installation may be interrupted and exit. A retry mechanism may be needed.

[ks-allinone] Downloading image: calico/pod2daemon-flexvol:v3.14.1
ERRO[18:03:30 CST] Failed to download image: calico/pod2daemon-flexvol:v3.14.1: Failed to exec command: sudo -E docker pull calico/pod2daemon-f
lexvol:v3.14.1: Process exited with status 1  node=10.160.16.30
WARN[18:03:30 CST] Task failed ...
WARN[18:03:30 CST] error: interrupted by error
Error: Failed to pre-pull images: interrupted by error
Usage:
  kk create cluster [flags]

Flags:
  -f, --file string              Path to a configuration file
  -h, --help                     help for cluster
      --with-kubernetes string   Specify a supported version of kubernetes (default "v1.17.6")
      --with-kubesphere          Deploy a specific version of kubesphere (default v3.0.0)

Global Flags:
      --debug   Print detailed information (default true)

Failed to pre-pull images: interrupted by error
[root@i-5872n6z9 ~]# sudo -E docker pull calico/pod2daemon-flexvol:v3.14.1
v3.14.1: Pulling from calico/pod2daemon-flexvol
Digest: sha256:d125b9f3c24133bdaf90eaf2bee1d506240d39a77bda712eda3991b6b5d443f0
Status: Image is up to date for calico/pod2daemon-flexvol:v3.14.1
docker.io/calico/pod2daemon-flexvol:v3.14.1
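Until kk retries on its own, flaky pulls can be wrapped in a small retry helper before re-running the installer (the helper below is illustrative, not part of kk):

```shell
# Run a command up to N times, sleeping briefly between attempts
retry() {
    attempts="$1"; shift
    i=1
    while [ "$i" -le "$attempts" ]; do
        "$@" && return 0
        echo "attempt $i/$attempts failed: $*" >&2
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Example: pre-pull a flaky image before re-running kk
# retry 5 sudo -E docker pull calico/pod2daemon-flexvol:v3.14.1
```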

install failed with latest version

git version

commit a63a700cd675c3ca542d9dc506389e72cbf759fc (HEAD -> master, origin/master, origin/HEAD)
Merge: 09a2d03 47a4164
Author: pixiake <[email protected]>
Date:   Thu Jul 2 09:44:44 2020 +0800

Install finished with no errors, but some pods are not in Running status:

kube-system         openebs-ndm-operator-6456dc9db-gksjk           0/1     Error               0          11m
kube-system         openebs-ndm-xbmpw                              1/1     Running             0          11m
kubesphere-system   ks-installer-5b988669b9-kml65                  0/1     ContainerCreating   0          11m

kubectl logs openebs-ndm-operator-6456dc9db-gksjk -n kube-system

{"level":"info","ts":1593671683.6472864,"logger":"ndm-operator","msg":"Go Version: go1.12.7"}
{"level":"info","ts":1593671683.6473246,"logger":"ndm-operator","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1593671683.647328,"logger":"ndm-operator","msg":"operator-sdk Version: v0.5.0"}
{"level":"info","ts":1593671683.6473305,"logger":"ndm-operator","msg":"Version Tag: v0.5.0"}
{"level":"info","ts":1593671683.647333,"logger":"ndm-operator","msg":"Git Commit: 63ca87a283e5feea87821c4f96c8b38c038343e3"}
{"level":"info","ts":1593671683.6478827,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1593671683.67723,"logger":"leader","msg":"No pre-existing lock was found."}
{"level":"info","ts":1593671683.6797564,"logger":"leader","msg":"Became the leader."}
{"level":"info","ts":1593671683.697341,"logger":"ndm-operator","msg":"Installing the components"}
{"level":"info","ts":1593671691.7111866,"logger":"ndm-operator","msg":"Registering Components"}
{"level":"info","ts":1593671691.729227,"logger":"ndm-operator","msg":"Check if CR has to be upgraded, and perform upgrade"}
{"level":"info","ts":1593671691.7558234,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"blockdevice-controller","source":"kind source: /, Kind="}
{"level":"error","ts":1593671691.7559855,"logger":"kubebuilder.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"BlockDevice.openebs.io","error":"no matches for kind \"BlockDevice\" in version \"openebs.io/v1alpha1\"","stacktrace":"github.com/openebs/node-disk-manager/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/travis/gopath/src/github.com/openebs/node-disk-manager/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/openebs/node-disk-manager/vendor/sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start\n\t/home/travis/gopath/src/github.com/openebs/node-disk-manager/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:89\ngithub.com/openebs/node-disk-manager/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Watch\n\t/home/travis/gopath/src/github.com/openebs/node-disk-manager/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:122\ngithub.com/openebs/node-disk-manager/pkg/controller/blockdevice.add\n\t/home/travis/gopath/src/github.com/openebs/node-disk-manager/pkg/controller/blockdevice/blockdevice_controller.go:66\ngithub.com/openebs/node-disk-manager/pkg/controller/blockdevice.Add\n\t/home/travis/gopath/src/github.com/openebs/node-disk-manager/pkg/controller/blockdevice/blockdevice_controller.go:49\ngithub.com/openebs/node-disk-manager/pkg/controller.AddToManager\n\t/home/travis/gopath/src/github.com/openebs/node-disk-manager/pkg/controller/controller.go:29\nmain.main\n\t/home/travis/gopath/src/github.com/openebs/node-disk-manager/cmd/manager/main.go:145\nruntime.main\n\t/home/travis/.gimme/versions/go1.12.7.linux.amd64/src/runtime/proc.go:200"}
{"level":"error","ts":1593671691.7560604,"logger":"ndm-operator","msg":"","error":"no matches for kind \"BlockDevice\" in version \"openebs.io/v1alpha1\"","stacktrace":"github.com/openebs/node-disk-manager/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/travis/gopath/src/github.com/openebs/node-disk-manager/vendor/github.com/go-logr/zapr/zapr.go:128\nmain.main\n\t/home/travis/gopath/src/github.com/openebs/node-disk-manager/cmd/manager/main.go:146\nruntime.main\n\t/home/travis/.gimme/versions/go1.12.7.linux.amd64/src/runtime/proc.go:200"}

failed to deploy on ubuntu 18.04

W0701 10:55:19.069649 17684 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join lb.kubesphere.local:6443 --token ays4v3.9exhj40g2ljmvg0d --discovery-token-ca-cert-hash sha256:1472bc2f7147084b12a3e0aed32483187ac302d9e7ae82f1c1fc23ac284d7b2e
[ks-node1 68.79.24.116] MSG:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ks-node1 NotReady master,worker 2m44s v1.18.3 172.31.2.153 Ubuntu 18.04.4 LTS 4.15.0-1063-aws docker://19.3.8
INFO[18:55:19 CST] Synchronizing kube binaries
Push /Users/luxingmin/Documents/dev/pkgs/kubekey/output/kubekey/v1.18.3/amd64/kubeadm to 52.82.70.125:/tmp/kubekey/kubeadm Done
Push /Users/luxingmin/Documents/dev/pkgs/kubekey/output/kubekey/v1.18.3/amd64/kubeadm to 52.82.27.246:/tmp/kubekey/kubeadm Done
Push /Users/luxingmin/Documents/dev/pkgs/kubekey/output/kubekey/v1.18.3/amd64/kubelet to 52.82.70.125:/tmp/kubekey/kubelet Done
Push /Users/luxingmin/Documents/dev/pkgs/kubekey/output/kubekey/v1.18.3/amd64/kubectl to 52.82.70.125:/tmp/kubekey/kubectl Done
Push /Users/luxingmin/Documents/dev/pkgs/kubekey/output/kubekey/v1.18.3/amd64/kubelet to 52.82.27.246:/tmp/kubekey/kubelet Done
Push /Users/luxingmin/Documents/dev/pkgs/kubekey/output/kubekey/v1.18.3/amd64/helm to 52.82.70.125:/tmp/kubekey/helm Done
Push /Users/luxingmin/Documents/dev/pkgs/kubekey/output/kubekey/v1.18.3/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 52.82.70.125:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
Push /Users/luxingmin/Documents/dev/pkgs/kubekey/output/kubekey/v1.18.3/amd64/kubectl to 52.82.27.246:/tmp/kubekey/kubectl Done
Push /Users/luxingmin/Documents/dev/pkgs/kubekey/output/kubekey/v1.18.3/amd64/helm to 52.82.27.246:/tmp/kubekey/helm Done
Push /Users/luxingmin/Documents/dev/pkgs/kubekey/output/kubekey/v1.18.3/amd64/cni-plugins-linux-amd64-v0.8.6.tgz to 52.82.27.246:/tmp/kubekey/cni-plugins-linux-amd64-v0.8.6.tgz Done
INFO[18:57:57 CST] Initializing kubernetes cluster
INFO[18:57:57 CST] Deploying network plugin ...
ERRO[18:57:57 CST] Failed to read calico manifests: exit status 64 node=68.79.24.116
WARN[18:57:57 CST] Task failed ...
WARN[18:57:57 CST] error: interrupted by error
Error: Failed to deploy network plugin: interrupted by error

Failed to scale all-in-one to multi-node

When I try to scale all-in-one to multi-node, it fails with: Failed to exec command: sudo -E docker pull kubekey/kube-proxy:v1.17.6: Process exited with status 1.

Do I need to install docker on each node?

[node1] Downloading image: kubekey/kube-apiserver:v1.17.6
ERRO[13:44:44 CST] Failed to download image: kubekey/kube-proxy:v1.17.6: Failed to exec command: sudo -E docker pull kubekey/kube-proxy:v1.17.6: Process exited with status 1  node=192.168.0.3
ERRO[13:44:45 CST] Failed to download image: kubekey/kube-apiserver:v1.17.6: Failed to exec command: sudo -E docker pull kubekey/kube-apiserver:v1.17.6: Process exited with status 1  node=192.168.0.2
ERRO[13:44:58 CST] Failed to download image: kubekey/kube-proxy:v1.17.6: Failed to exec command: sudo -E docker pull kubekey/kube-proxy:v1.17.6: Process exited with status 1  node=192.168.0.4
WARN[13:44:58 CST] Task failed ...
WARN[13:44:58 CST] error: interrupted by error
Error: Failed to pre-download images: interrupted by error
Usage:
  kk scale [flags]

My config.yaml as follows:

apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: example
spec:
  hosts:
  - {name: node1, address: 192.168.0.2, internalAddress: 192.168.0.2, password: Qcloud@123}
  - {name: node2, address: 192.168.0.3, internalAddress: 192.168.0.3, password: Qcloud@123}
  - {name: node3, address: 192.168.0.4, internalAddress: 192.168.0.4, password: Qcloud@123}
  roleGroups:
    etcd:
     - node1
    master:
     - node1
    worker:
     - node1
     - node2
     - node3
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: "6443"
  kubernetes:
    version: v1.17.6
    imageRepo: kubekey
    clusterName: cluster.local
  network:
    plugin: calico
    podNetworkCidr: 10.233.64.0/18
    serviceNetworkCidr: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  storage:
    defaultStorageClass: localVolume
    nfsClient:
      nfsServer: 172.16.0.2
      nfsPath: /mnt/nfs
      nfsVrs3Enabled: false
      nfsArchiveOnDelete: false
  kubesphere:
    console:
      enableMultiLogin: false  # enable/disable multi login
      port: 30880
    common:
      mysqlVolumeSize: 20Gi
      minioVolumeSize: 20Gi
      etcdVolumeSize: 20Gi
      openldapVolumeSize: 2Gi
      redisVolumSize: 2Gi
    monitoring:
      prometheusReplicas: 1
      prometheusMemoryRequest: 400Mi
      prometheusVolumeSize: 20Gi
      grafana:
        enabled: false
    logging:
      enabled: false
      elasticsearchMasterReplicas: 1
      elasticsearchDataReplicas: 1
      logsidecarReplicas: 2
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      containersLogMountedPath: ""
      kibana:
        enabled: false
    openpitrix:
      enabled: false
    devops:
      enabled: false
      jenkinsMemoryLim: 2Gi
      jenkinsMemoryReq: 1500Mi
      jenkinsVolumeSize: 8Gi
      jenkinsJavaOpts_Xms: 512m
      jenkinsJavaOpts_Xmx: 512m
      jenkinsJavaOpts_MaxRAM: 2g
      sonarqube:
        enabled: false
        postgresqlVolumeSize: 8Gi
    notification:
      enabled: false
    alerting:
      enabled: false
    serviceMesh:
      enabled: false
    metricsServer:
      enabled: false
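Docker is indeed required on every node, but the pull failure may also come from the config itself: this file sets imageRepo: kubekey, while the working configs elsewhere on this page use imageRepo: kubesphere. A hedged check on a failing node (image names taken from the log above):

```shell
# Try the pull from both namespaces to see which one actually exists
sudo -E docker pull kubekey/kube-proxy:v1.17.6 || echo "kubekey/ repo not reachable"
sudo -E docker pull kubesphere/kube-proxy:v1.17.6 && echo "try imageRepo: kubesphere"
```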

Failed to install on ubuntu 18.04

The error:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
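When the 10248 health check never comes up, the kubelet log usually names the cause; a mismatch between the docker and kubelet cgroup drivers is a common one. A hedged triage sketch:

```shell
# Inspect why the kubelet is not running or keeps restarting
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 50

# kk configures the systemd cgroup driver for docker; the two must match
docker info --format '{{.CgroupDriver}}'   # expect: systemd
```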

install msg refine

./kk create cluster should create a Kubernetes cluster only, but it shows "kubesphere" has been added to your repositories. This will confuse users.

install minio failed and the message is not correct

TASK [common : debug] **********************************************************
ok: [localhost] => {
    "msg": [
        "1. check the storage configuration and storage server",
        "2. make sure the DNS address in /etc/resolv.conf is available.",
        "3. execute 'helm del --purge ks-minio && kubectl delete job -n kubesphere-system ks-minio-make-bucket-job'",
        "4. Restart the installer pod in kubesphere-system namespace"
    ]
}

The message is not correct, since it's Helm 3 now:

[root@node1 ~]# helm uninstall ks-minio -n kubesphere-system && kubectl delete job -n kubesphere-system ks-minio-make-bucket-job
release "ks-minio" uninstalled
Error from server (NotFound): jobs.batch "ks-minio-make-bucket-job" not found

allinone install Failed to find /etc/kubernetes/admin.conf

OS: centos7.6

INFO[09:32:44 CST] Starting etcd cluster
[ks-allinone 192.168.1.101] MSG:
Configuration file will be created
INFO[09:32:45 CST] Refreshing etcd configuration
Waiting for etcd to start
INFO[09:32:51 CST] Get cluster status
[ks-allinone 192.168.1.101] MSG:
ls: cannot access /etc/kubernetes/admin.conf: No such file or directory
WARN[09:32:51 CST] Task failed ...
WARN[09:32:51 CST] error: Failed to find /etc/kubernetes/admin.conf: Failed to exec command: ls /etc/kubernetes/admin.conf: Process exited with status 2
Error: Failed to get cluster status: Failed to find /etc/kubernetes/admin.conf: Failed to exec command: ls /etc/kubernetes/admin.conf: Process exited with status 2

Parse with-kubesphere argument failed

root@master:~# ./kk create config --with-kubernetes=v1.18.3 --with-kubesphere=v3.0.0 --with-storage=localVolume
Error: invalid argument "v3.0.0" for "--with-kubesphere" flag: strconv.ParseBool: parsing "v3.0.0": invalid syntax

config-sample storage doc

Need a doc about how to create a StorageClass for common storage providers such as Ceph, GlusterFS, and NFS, as well as those from major cloud providers.
