
Use a shell script to install a Kubernetes (k8s) high-availability cluster and addon components based on kubeadm with one click.


kainstall's Introduction

kainstall = kubeadm install kubernetes


Deploy a Kubernetes HA cluster with one command, using a shell script based on kubeadm.

Why

Why build this? Isn't an Ansible playbook good enough?

Because I'm lazy. Ansible playbooks are great for orchestration, but they require installing Python and Ansible and downloading multiple YAML files. Because I'm lazy, I wanted a simpler way to quickly deploy a distributed Kubernetes HA cluster. A shell script can run directly on the server without any external dependencies, saving time and effort. The script is a single file of roughly 100 KB, small enough to give the experience of installing a whole cluster with one command, and combined with the offline package it can install a cluster without internet access, which is a very pleasant experience.

Requirements

OS: CentOS 7.x x64, CentOS 8.x x64, Debian 9.x x64, Debian 10.x x64, Ubuntu 20.04 x64, Ubuntu 20.10 x64, Ubuntu 21.04 x64

CPU: 2C

MEM: 4G

Authentication: all cluster nodes must share the same credentials. With password authentication, every node must use the same username and password; with key authentication, every node must be reachable with the same private key file (a key-distribution sketch follows below).

When no offline package is specified, internet access is required to download the kube components and docker images.
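For key authentication, one common way to satisfy the "same key on every node" requirement is to generate a single key pair on the management host and copy it to all nodes. A minimal sketch using standard OpenSSH tools; the node addresses are placeholders:

# generate one key pair on the management host (skip if one already exists)
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""

# copy the same public key to every master and worker node
for ip in 192.168.77.130 192.168.77.131 192.168.77.132 192.168.77.133; do
  ssh-copy-id -i ~/.ssh/id_rsa.pub root@"$ip"
done

# then point kainstall at the key with --private-key ~/.ssh/id_rsa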

Architecture

[Architecture diagram: k8s-node-ha]

For a step-by-step cluster installation, see https://lework.github.io/2019/10/01/kubeadm-install/

Features

  • Server initialization.
    • Disable selinux
    • Disable swap
    • Disable firewalld
    • Configure epel
    • Adjust limits
    • Configure kernel parameters
    • Configure shell history recording
    • Configure journal logging
    • Configure chrony time synchronization
    • Add an ssh-login-info banner
    • Configure audit
    • Install the ipvs modules
    • Upgrade the kernel
  • Install the kube components.
  • Initialize the kubernetes cluster, and add or remove nodes.
  • Install an ingress component; options: nginx, traefik.
  • Install a network component; options: flannel, calico, cilium.
  • Install a monitor component; options: prometheus.
  • Install a log component; options: elasticsearch.
  • Install a storage component; options: rook, longhorn.
  • Install a web UI component; options: dashboard, kubesphere.
  • Install addon components; options: metrics-server, nodelocaldns.
  • Install a cri component; options: docker, containerd, cri-o.
  • Upgrade kubernetes to a specified version.
  • Renew cluster certificates.
  • Operations tasks, such as backing up etcd snapshots.
  • Offline deployment support.
  • sudo privilege support.
  • 10-year certificate validity support.
  • Script self-update support.

Default versions

Category  Software                   kainstall default  Latest version
common    containerd                 latest             docker-ce release
common    kubernetes                 latest             kubernetes release
network   flannel                    0.24.0             flannel release
network   calico                     3.27.0             calico release
network   cilium                     1.14.5             cilium release
addons    metrics-server             0.6.4              metrics-server release
addons    nodelocaldns               latest             1.22.28
ingress   ingress nginx controller   1.9.5              ingress-nginx release
ingress   traefik                    2.10.7             traefik release
monitor   kube_prometheus            0.13.0             kube-prometheus release
log       elasticsearch              8.11.3             elasticsearch release
storage   rook                       1.13.1             rook release
storage   longhorn                   1.5.3              longhorn release
ui        kubernetes_dashboard       2.7.0              kubernetes dashboard release
ui        kubesphere                 3.3.0              kubesphere release

The kube component version can be specified with the --version flag; the versions of the other components must be set inside the script.
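As the Default settings section below notes, those in-script versions can also be overridden by exporting an environment variable with the same name before running the script. A minimal sketch; the pinned versions here are illustrative only:

# pin different component releases without editing the script (illustrative values)
export FLANNEL_VERSION="0.22.3"
export METRICS_SERVER_VERSION="0.6.3"
bash kainstall-centos.sh init --master 192.168.77.130 --worker 192.168.77.133 --user root --password 123456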

Usage

For worked examples, see: https://lework.github.io/2020/09/26/kainstall

Download the script

# centos
wget https://ghproxy.com/https://raw.githubusercontent.com/lework/kainstall/master/kainstall-centos.sh

# debian
wget https://ghproxy.com/https://raw.githubusercontent.com/lework/kainstall/master/kainstall-debian.sh

# ubuntu
wget https://ghproxy.com/https://raw.githubusercontent.com/lework/kainstall/master/kainstall-ubuntu.sh

Help information

# bash kainstall-centos.sh


Install kubernetes cluster using kubeadm.

Usage:
  kainstall-centos.sh [command]

Available Commands:
  init            Init Kubernetes cluster.
  reset           Reset Kubernetes cluster.
  add             Add nodes to the cluster.
  del             Remove node from the cluster.
  renew-cert      Renew all available certificates.
  upgrade         Upgrading kubeadm clusters.
  update          Update script file.

Flag:
  -m,--master          master node, default: ''
  -w,--worker          work node, default: ''
  -u,--user            ssh user, default: root
  -p,--password        ssh password
     --private-key     ssh private key
  -P,--port            ssh port, default: 22
  -v,--version         kube version, default: latest
  -n,--network         cluster network, choose: [flannel,calico,cilium], default: flannel
  -i,--ingress         ingress controller, choose: [nginx,traefik], default: nginx
  -ui,--ui             cluster web ui, choose: [dashboard,kubesphere], default: dashboard
  -a,--addon           cluster add-ons, choose: [metrics-server,nodelocaldns], default: metrics-server
  -M,--monitor         cluster monitor, choose: [prometheus]
  -l,--log             cluster log, choose: [elasticsearch]
  -s,--storage         cluster storage, choose: [rook,longhorn]
     --cri             cri runtime, choose: [docker,containerd,cri-o], default: containerd
     --cri-version     cri version, default: latest
     --cri-endpoint    cri endpoint, default: /var/run/dockershim.sock
  -U,--upgrade-kernel  upgrade kernel
  -of,--offline-file   specify the offline package file to load
      --10years        the certificate period is 10 years.
      --sudo           sudo mode
      --sudo-user      sudo user
      --sudo-password  sudo user password

Example:
  [init cluster]
  kainstall-centos.sh init \
  --master 192.168.77.130,192.168.77.131,192.168.77.132 \
  --worker 192.168.77.133,192.168.77.134,192.168.77.135 \
  --user root \
  --password 123456 \
  --version 1.20.6

  [reset cluster]
  kainstall-centos.sh reset \
  --user root \
  --password 123456

  [add node]
  kainstall-centos.sh add \
  --master 192.168.77.140,192.168.77.141 \
  --worker 192.168.77.143,192.168.77.144 \
  --user root \
  --password 123456 \
  --version 1.20.6

  [del node]
  kainstall-centos.sh del \
  --master 192.168.77.140,192.168.77.141 \
  --worker 192.168.77.143,192.168.77.144 \
  --user root \
  --password 123456
 
  [other]
  kainstall-centos.sh renew-cert --user root --password 123456
  kainstall-centos.sh upgrade --version 1.20.6 --user root --password 123456
  kainstall-centos.sh update
  kainstall-centos.sh add --ingress traefik
  kainstall-centos.sh add --monitor prometheus
  kainstall-centos.sh add --log elasticsearch
  kainstall-centos.sh add --storage rook
  kainstall-centos.sh add --ui dashboard
  kainstall-centos.sh add --addon nodelocaldns

Initialize the cluster

# using script flags
bash kainstall-centos.sh init \
  --master 192.168.77.130,192.168.77.131,192.168.77.132 \
  --worker 192.168.77.133,192.168.77.134 \
  --user root \
  --password 123456 \
  --port 22 \
  --version 1.20.6

# using environment variables
export MASTER_NODES="192.168.77.130,192.168.77.131,192.168.77.132"
export WORKER_NODES="192.168.77.133,192.168.77.134"
export SSH_USER="root"
export SSH_PASSWORD="123456"
export SSH_PORT="22"
export KUBE_VERSION="1.20.6"
bash kainstall-centos.sh init

By default, in addition to initializing the cluster, two components are installed: ingress: nginx and ui: dashboard.

You can also use the one-liner form, which skips the download step entirely.

bash -c "$(curl -sSL https://ghproxy.com/https://raw.githubusercontent.com/lework/kainstall/master/kainstall-centos.sh)"  \
  - init \
  --master 192.168.77.130,192.168.77.131,192.168.77.132 \
  --worker 192.168.77.133,192.168.77.134 \
  --user root \
  --password 123456 \
  --port 22 \
  --version 1.20.6

Add nodes

This operation must be run on a k8s master node; specify the ssh connection details if they differ from the defaults.

# add a single master node
bash kainstall-centos.sh add --master 192.168.77.135

# add a single worker node
bash kainstall-centos.sh add --worker 192.168.77.134

# add both at the same time
bash kainstall-centos.sh add --master 192.168.77.135,192.168.77.136 --worker 192.168.77.137,192.168.77.138

Delete nodes

This operation must be run on a k8s master node; specify the ssh connection details if they differ from the defaults.

# delete a single master node
bash kainstall-centos.sh del --master 192.168.77.135

# delete a single worker node
bash kainstall-centos.sh del --worker 192.168.77.134

# delete both at the same time
bash kainstall-centos.sh del --master 192.168.77.135,192.168.77.136 --worker 192.168.77.137,192.168.77.138

Reset the cluster

bash kainstall-centos.sh reset \
  --user root \
  --password 123456 \
  --port 22

Other operations

These operations must be run on a k8s master node; specify the ssh connection details if they differ from the defaults. Note: when adding components, keep at least 2 CPU cores and 4 GB of memory free on each node; otherwise nodes may go offline and the servers may hang.

# add nginx ingress
bash kainstall-centos.sh add --ingress nginx

# add prometheus
bash kainstall-centos.sh add --monitor prometheus

# add elasticsearch
bash kainstall-centos.sh add --log elasticsearch

# add rook
bash kainstall-centos.sh add --storage rook

# add nodelocaldns
bash kainstall-centos.sh add --addon nodelocaldns

# upgrade the kubernetes version
bash kainstall-centos.sh upgrade --version 1.20.6

# renew certificates
bash kainstall-centos.sh renew-cert

# debug mode
DEBUG=1 bash kainstall-centos.sh

# update the script
bash kainstall-centos.sh update

# use the containerd cri runtime
bash kainstall-centos.sh init \
  --master 192.168.77.130,192.168.77.131,192.168.77.132 \
  --worker 192.168.77.133,192.168.77.134,192.168.77.135 \
  --user root \
  --password 123456 \
  --cri containerd
  
# use the cri-o cri runtime
bash kainstall-centos.sh init \
  --master 192.168.77.130,192.168.77.131,192.168.77.132 \
  --worker 192.168.77.133,192.168.77.134,192.168.77.135 \
  --user root \
  --password 123456 \
  --cri cri-o

Default settings

Note: the following variables live in the environment configuration section of the script file. Modify them as needed, or override a variable's default by setting an environment variable with the same name; a short example follows the block below.

# versions
KUBE_VERSION="${KUBE_VERSION:-latest}"
FLANNEL_VERSION="${FLANNEL_VERSION:-0.24.0}"
METRICS_SERVER_VERSION="${METRICS_SERVER_VERSION:-0.6.4}"
INGRESS_NGINX="${INGRESS_NGINX:-1.9.5}"
TRAEFIK_VERSION="${TRAEFIK_VERSION:-2.10.7}"
CALICO_VERSION="${CALICO_VERSION:-3.27.0}"
CILIUM_VERSION="${CILIUM_VERSION:-1.14.5}"
KUBE_PROMETHEUS_VERSION="${KUBE_PROMETHEUS_VERSION:-0.13.0}"
ELASTICSEARCH_VERSION="${ELASTICSEARCH_VERSION:-8.11.3}"
ROOK_VERSION="${ROOK_VERSION:-1.9.13}"
LONGHORN_VERSION="${LONGHORN_VERSION:-1.5.3}"
KUBERNETES_DASHBOARD_VERSION="${KUBERNETES_DASHBOARD_VERSION:-2.7.0}"
KUBESPHERE_VERSION="${KUBESPHERE_VERSION:-3.3.2}"

# cluster configuration
KUBE_DNSDOMAIN="${KUBE_DNSDOMAIN:-cluster.local}"
KUBE_APISERVER="${KUBE_APISERVER:-apiserver.$KUBE_DNSDOMAIN}"
KUBE_POD_SUBNET="${KUBE_POD_SUBNET:-10.244.0.0/16}"
KUBE_SERVICE_SUBNET="${KUBE_SERVICE_SUBNET:-10.96.0.0/16}"
KUBE_IMAGE_REPO="${KUBE_IMAGE_REPO:-registry.cn-hangzhou.aliyuncs.com/kainstall}"
KUBE_NETWORK="${KUBE_NETWORK:-flannel}"
KUBE_INGRESS="${KUBE_INGRESS:-nginx}"
KUBE_MONITOR="${KUBE_MONITOR:-prometheus}"
KUBE_STORAGE="${KUBE_STORAGE:-rook}"
KUBE_LOG="${KUBE_LOG:-elasticsearch}"
KUBE_UI="${KUBE_UI:-dashboard}"
KUBE_ADDON="${KUBE_ADDON:-metrics-server}"
KUBE_FLANNEL_TYPE="${KUBE_FLANNEL_TYPE:-vxlan}"
KUBE_CRI="${KUBE_CRI:-containerd}"
KUBE_CRI_VERSION="${KUBE_CRI_VERSION:-latest}"
KUBE_CRI_ENDPOINT="${KUBE_CRI_ENDPOINT:-unix:///run/containerd/containerd.sock}"

# master and worker node addresses, comma separated
MASTER_NODES="${MASTER_NODES:-}"
WORKER_NODES="${WORKER_NODES:-}"

# the node on which operations are performed
MGMT_NODE="${MGMT_NODE:-127.0.0.1}"

# node connection information
SSH_USER="${SSH_USER:-root}"
SSH_PASSWORD="${SSH_PASSWORD:-}"
SSH_PRIVATE_KEY="${SSH_PRIVATE_KEY:-}"
SSH_PORT="${SSH_PORT:-22}"
SUDO_USER="${SUDO_USER:-root}"

# node settings
HOSTNAME_PREFIX="${HOSTNAME_PREFIX:-k8s}"

# script settings
TMP_DIR="$(rm -rf /tmp/kainstall* && mktemp -d -t kainstall.XXXXXXXXXX)"
LOG_FILE="${TMP_DIR}/kainstall.log"
SSH_OPTIONS="-o ConnectTimeout=600 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
ERROR_INFO="\n\033[31mERROR Summary: \033[0m\n  "
ACCESS_INFO="\n\033[32mACCESS Summary: \033[0m\n  "
COMMAND_OUTPUT=""
SCRIPT_PARAMETER="$*"
OFFLINE_DIR="/tmp/kainstall-offline-file/"
OFFLINE_FILE=""
OS_SUPPORT="centos7 centos8"
GITHUB_PROXY="${GITHUB_PROXY:-https://mirror.ghproxy.com/}"
GCR_PROXY="${GCR_PROXY:-k8sgcr.lework.workers.dev}"
SKIP_UPGRADE_PLAN=${SKIP_UPGRADE_PLAN:-false}
SKIP_SET_OS_REPO=${SKIP_SET_OS_REPO:-false}

Offline deployment

Note: the host that runs the script needs the tar command installed in order to unpack the offline package (a quick sanity check is sketched after the steps below). For detailed deployment instructions, see: https://lework.github.io/2020/10/18/kainstall-offline/

  1. Download the offline package for the desired version

    wget https://github.com/lework/kainstall-offline/releases/download/1.20.6/1.20.6_centos7.tgz

    For more offline packages, see the kainstall-offline repository

  2. Initialize the cluster

    Specify the --offline-file flag.

    bash kainstall-centos.sh init \
      --master 192.168.77.130,192.168.77.131,192.168.77.132 \
      --worker 192.168.77.133,192.168.77.134 \
      --user root \
      --password 123456 \
      --version 1.20.6 \
      --upgrade-kernel \
      --10years \
      --offline-file 1.20.6_centos7.tgz
  3. Add nodes

    Specify the --offline-file flag.

    bash kainstall-centos.sh add \
      --master 192.168.77.135 \
      --worker 192.168.77.136 \
      --user root \
      --password 123456 \
      --version 1.20.6 \
      --offline-file 1.20.6_centos7.tgz
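Before running the install, it can be worth confirming that tar is available and that the downloaded package unpacks cleanly. A small sketch; the file name matches the example above:

# confirm tar exists on the host that runs the script
command -v tar || yum install -y tar

# list the contents of the offline package without extracting it
tar -tzf 1.20.6_centos7.tgz | head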

sudo privileges

Create a sudo user

useradd test
passwd test --stdin <<< "12345678"
echo 'test    ALL=(ALL)   NOPASSWD:ALL' >> /etc/sudoers
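The commands above are CentOS-flavoured (passwd --stdin is not available on Debian/Ubuntu). A hedged equivalent sketch for the Debian/Ubuntu variants of the script:

# create the user and set its password non-interactively (Debian/Ubuntu)
useradd -m test
echo 'test:12345678' | chpasswd
echo 'test    ALL=(ALL)   NOPASSWD:ALL' >> /etc/sudoers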

sudo flags

  • --sudo enables sudo privileges
  • --sudo-user specifies the sudo target user; the default is root
  • --sudo-password specifies the sudo user's password

Example

# init
bash kainstall-centos.sh init \
  --master 192.168.77.130,192.168.77.131,192.168.77.132 \
  --worker 192.168.77.133,192.168.77.134 \
  --user test \
  --password 12345678 \
  --port 22 \
  --version 1.20.6 \
  --sudo \
  --sudo-user root \
  --sudo-password 12345678

# add
bash kainstall-centos.sh add \
  --master 192.168.77.135 \
  --worker 192.168.77.136 \
  --user test \
  --password 12345678 \
  --port 22 \
  --version 1.20.6 \
  --sudo \
  --sudo-user root \
  --sudo-password 12345678

# update the script file
bash kainstall-centos.sh update

10-year certificate validity

Note: this operation requires internet access for the download.

It uses the kubeadm client built by the kubeadm-certs project, which patches the kubeadm source to change the 1-year certificate validity to 10 years; see that repository for details.

Add the --10years flag when initializing or adding nodes to use the 10-year kubeadm client.
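Once the cluster is up, the extended validity can be verified on a master node, for example by inspecting the API server certificate with openssl. A sketch; recent kubeadm releases also provide kubeadm certs check-expiration:

# print the expiry date of the API server certificate
openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt

# or, on recent kubeadm versions, list the expiry of all managed certificates
kubeadm certs check-expiration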

Example

# init
bash kainstall-centos.sh init \
  --master 192.168.77.130,192.168.77.131,192.168.77.132 \
  --worker 192.168.77.133,192.168.77.134 \
  --user root \
  --password 123456 \
  --port 22 \
  --version 1.20.6 \
  --10years
  
# add
bash kainstall-centos.sh add \
  --master 192.168.77.135 \
  --worker 192.168.77.136 \
  --user root \
  --password 123456 \
  --port 22 \
  --version 1.20.6 \
  --10years

Contact

License

MIT

kainstall's People

Contributors

120742056, hikariion, hxx258456, jelen56, lework, sincerexia


kainstall's Issues

redhat has already removed the sshpass from epel8

Hi lework, I got a package installation error when using your script to init the cluster, like this:

[root@node03 kainstall]# bash kainstall-centos.sh init --master 192.168.1.10 --worker 192.168.1.20 --user  root --password **** --upgrade-kernel
[2023-03-03T18:38:12.099311833+0100]: INFO:    [start] bash kainstall-centos.sh init --master 192.168.1.10 --worker 192.168.1.20 --user root --password **** --upgrade-kernel
[2023-03-03T18:38:12.103986530+0100]: INFO:    [check] ssh command exists.
[2023-03-03T18:38:12.105655497+0100]: WARNING: [check] I require sshpass but it's not installed.
[2023-03-03T18:38:12.107243514+0100]: WARNING: [check] install sshpass package.
[2023-03-03T18:38:13.382721161+0100]: ERROR:   [check] sshpass install failed.

ERROR Summary: 
  [2023-03-03T18:38:13.382721161+0800]: ERROR:   [check] sshpass install failed.
 
  See detailed log >>> /tmp/kainstall.t5ov18pkMG/kainstall.log 

The detailed log is:

[root@node03 kainstall]# tail -f /tmp/kainstall.t5ov18pkMG/kainstall.log 
[2023-03-03T18:38:12.099311833+0800]: INFO:    [start] bash kainstall-centos.sh init --master 192.168.1.10 --worker 192.168.1.20 --user root --password **** --upgrade-kernel
[2023-03-03T18:38:12.103986530+0800]: INFO:    [check] ssh command exists.
[2023-03-03T18:38:12.105655497+0800]: WARNING: [check] I require sshpass but it's not installed.
[2023-03-03T18:38:12.107243514+0800]: WARNING: [check] install sshpass package.
[2023-03-03T18:38:12.114142347+0800]: EXEC:    [command] bash -c 'yum install -y sshpass'
Last metadata expiration check: 0:40:54 ago on Sat 03 Mar 2023 07:57:18 AM CST.
No match for argument: sshpass
Error: Unable to find a match: sshpass
[2023-03-03T18:38:13.382721161+0800]: ERROR:   [check] sshpass install failed.

I have tried to install the sshpass package in several different ways, and they all produce the same error as above:

1. using rpm to install
2. using yum to install
3. using dnf to install

Finally, I found that Red Hat has already removed sshpass from EPEL 8, and I cannot find the sshpass package in the EPEL repo with this command:

dnf repository-packages epel list | grep -i sshpass

So I suggest you consider this sshpass compatibility issue on CentOS (my CentOS version is 8.3+).

CentOS 7 kubeadm init failed

Installing 1.19.1 with the following commands works fine:

wget https://cdn.jsdelivr.net/gh/lework/kainstall/kainstall.sh
bash kainstall.sh init \
  --master 192.168.36.40 \
  --worker 192.168.36.41 \
  --user root \
  --password kkkzoz \
  --port 22 \
  --version 1.19.1 \
  --upgrade-kernel

bash kainstall.sh init \
  --master 192.168.36.40 \
  --worker 192.168.36.41 \
  --user root \
  --password kkkzoz \
  --port 22 \
  --version 1.19.1

If I change the version to 1.22.5, the following error is reported:

[2022-08-01T22:09:15.048465870+0800]: EXEC:    [command] sshpass -p "zzzzzz" ssh -o ConnectTimeout=600 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null [email protected] -p 22 bash -c 'kubeadm init --config=/etc/kubernetes/kubeadmcfg.yaml --upload-certs'
Warning: Permanently added '192.168.36.40' (ECDSA) to the list of known hosts.
your configuration file uses an old API spec: "kubeadm.k8s.io/v1beta1". Please use kubeadm v1.15 instead and run 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
To see the stack trace of this error execute with --v=5 or higher
[2022-08-01T22:09:15.573842225+0800]: ERROR:   [kubeadm init] 192.168.36.40: kubeadm init failed.

Here is the log file:
kainstall.log

[ERROR Port-6443]: Port 6443 is in use

bash kainstall-centos.sh init \
  --master 10.39.80.188,10.39.80.187,10.39.80.189 \
  --worker 10.39.80.190 \
  --user root \
  --password 1qazXSW@ \
  --port 22 \
  --version 1.23.3

Warning: Permanently added '10.39.80.188' (ECDSA) to the list of known hosts.
[2022-06-26T22:28:19.157826640+0800]: INFO: [kubeadm init] 10.39.80.188: set kubeadmcfg.yaml succeeded.
[2022-06-26T22:28:19.160159747+0800]: INFO: [kubeadm init] 10.39.80.188: kubeadm init start.
[2022-06-26T22:28:19.170729651+0800]: EXEC: [command] sshpass -p "zzzzzz" ssh -o ConnectTimeout=600 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null [email protected] -p 22 bash -c 'kubeadm init --config=/etc/kubernetes/kubeadmcfg.yaml --upload-certs'
Warning: Permanently added '10.39.80.188' (ECDSA) to the list of known hosts.
[init] Using Kubernetes version: v1.23.3
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher
[2022-06-26T22:28:19.719600219+0800]: ERROR: [kubeadm init] 10.39.80.188: kubeadm init failed.

The password is wrong: zzzzzz
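A common way to see what is holding port 6443 and to clean up a half-initialized node before retrying. A sketch using standard tools, not a kainstall feature:

# show which process is listening on 6443 (often a kube-apiserver left over from a previous attempt)
ss -lntp | grep 6443

# wipe the previous kubeadm state on that node (destructive), then rerun the init
kubeadm reset -f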

Command executed:

bash kainstall-centos.sh init \
  --master oc01.com,oc02.com \
  --user root \
  --password admin@123 \
  --port 22 \
  --10years \
  --version 1.20.6

Error reported:

Warning: Permanently added 'oc01.com,172.30.60.74' (ECDSA) to the list of known hosts.
[2021-06-04T18:12:42.251836149+0800]: INFO:    [kubeadm init] oc01.com: set kubeadmcfg.yaml succeeded.
[2021-06-04T18:12:42.253314948+0800]: INFO:    [kubeadm init] oc01.com: kubeadm init start.
[2021-06-04T18:12:42.261850341+0800]: EXEC:    [command] sshpass -p "zzzzzz" ssh -o ConnectTimeout=600 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null [email protected] -p 22 bash -c 'kubeadm init --config=/etc/kubernetes/kubeadmcfg.yaml --upload-certs'
Warning: Permanently added 'oc01.com,172.30.60.74' (ECDSA) to the list of known hosts.
[init] Using Kubernetes version: v1.20.6
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
	[WARNING Hostname]: hostname "k8s-master-node1" could not be reached
	[WARNING Hostname]: hostname "k8s-master-node1": lookup k8s-master-node1 on 172.24.208.1:53: read udp 172.30.60.74:57720->172.24.208.1:53: i/o timeout
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[2021-06-04T18:13:06.589300224+0800]: ERROR:   [kubeadm init] oc01.com: kubeadm init failed.

kube-scheduler insecure port issue

Good morning.

I recently started evaluating k8s deployment tools; thanks for kainstall's shell-script approach to deployment.

Yesterday afternoon I created three new VMs running CentOS 7.9 x64: the master has 4 cores / 12 GB of RAM, and the two nodes have 4 cores / 8 GB.

Init command: bash kainstall-centos.sh init --master 127.0.0.1 --worker 10.6.62.242,10.6.62.243 --user root --password xxx --network calico --cri containerd (PS: the first time I used the host's LAN IP for the master, ssh failed, possibly because the password contains a $ symbol; after that I used 127.0.0.1 and did not retry the LAN IP.)

This morning all the containers and processes look normal, except that the scheduler's insecure port cannot be reached:

# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Healthy     ok                                                                                            
etcd-0               Healthy     {"health":"true","reason":""}

# ps aux|grep scheduler
root      1991  2.0  0.3 754012 40016 ?        Ssl  11:03   0:03 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=0.0.0.0 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true --port=0

# netstat -anptl|grep LISTEN|grep kube
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      1095/kubelet        
tcp        0      0 0.0.0.0:32297           0.0.0.0:*               LISTEN      2234/kube-proxy     
tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      2234/kube-proxy     
tcp        0      0 0.0.0.0:40721           0.0.0.0:*               LISTEN      2234/kube-proxy     
tcp6       0      0 :::10250                :::*                    LISTEN      1095/kubelet        
tcp6       0      0 :::6443                 :::*                    LISTEN      2021/kube-apiserver 
tcp6       0      0 :::10256                :::*                    LISTEN      2234/kube-proxy     
tcp6       0      0 :::10257                :::*                    LISTEN      2009/kube-controlle 
tcp6       0      0 :::10259                :::*                    LISTEN      1991/kube-scheduler

The process exists and listens on the secure port 10259, with the insecure port disabled. I looked into it: kubeadm does behave this way, and the controller-manager should be the same, yet after a kainstall install the controller-manager status is healthy (its insecure port is also disabled), so I suspect the scheduler configuration is missing items such as certificate settings.

I'm not familiar with the details of kubeadm-based deployment, so could the author please take a look at how to fix this?
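For reference, the Unhealthy status here comes from kubectl get cs probing the legacy insecure port 10251, which kubeadm disables with --port=0 on this version; the componentstatus API itself is deprecated in v1.19+. If the insecure healthz endpoint is really wanted, a common workaround (a hedged sketch, applicable only to releases where the scheduler still accepts --port) is to drop that flag from the static pod manifest on the master:

# remove the --port=0 flag from the scheduler static pod; kubelet restarts the pod automatically
sed -i '/- --port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml

# afterwards the insecure healthz endpoint should answer again
curl -s http://127.0.0.1:10251/healthz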

kube-flannel has changed its namespace from 'kube-system' to 'kube-flannel'

If you install a recent version of flannel with this project, you may hit a failure like [waiting] flannel pods ready failed while the flannel plugin is being installed. This is because kube-flannel has moved its namespace from kube-system to kube-flannel; you should change kube::wait "flannel" "kube-system" "pods" "app=flannel" to kube::wait "flannel" "kube-flannel" "pods" "app=flannel" in the shell file.
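The edit described above can be applied directly to the downloaded script, for example with sed. A sketch; adjust the file name for the debian/ubuntu variants:

# point the wait check at the new kube-flannel namespace, as described in this issue
sed -i 's/kube::wait "flannel" "kube-system"/kube::wait "flannel" "kube-flannel"/' kainstall-centos.sh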

Adding prometheus failed

[root@node-1 ~]# bash kainstall-centos.sh add --monitor prometheus
[2023-03-03T10:09:52.018600276+0800]: INFO:    [start] bash kainstall-centos.sh add --monitor prometheus
[2023-03-03T10:09:52.023690883+0800]: INFO:    [check] ssh command exists.
[2023-03-03T10:09:52.025838288+0800]: INFO:    [check] sshpass command exists.
[2023-03-03T10:09:52.028290945+0800]: INFO:    [check] wget command exists.
[2023-03-03T10:09:52.197935107+0800]: INFO:    [check] ssh 10.0.1.201 connection succeeded.
[2023-03-03T10:09:52.357239163+0800]: INFO:    [check] ssh 10.0.1.202 connection succeeded.
[2023-03-03T10:09:52.518530772+0800]: INFO:    [check] ssh 10.0.1.203 connection succeeded.
[2023-03-03T10:09:52.520397483+0800]: INFO:    [check] os support: centos7 centos8
[2023-03-03T10:09:52.681127164+0800]: INFO:    [check] 10.0.1.201 os support succeeded.
[2023-03-03T10:09:52.854603538+0800]: INFO:    [check] 10.0.1.202 os support succeeded.
[2023-03-03T10:09:53.033822227+0800]: INFO:    [check] 10.0.1.203 os support succeeded.
[2023-03-03T10:09:53.119607019+0800]: INFO:    [check] conn apiserver succeeded.
[2023-03-03T10:09:53.121602751+0800]: INFO:    [monitor] add prometheus
[2023-03-03T10:09:53.127663719+0800]: INFO:    [download] prometheus.zip
[2023-03-03T10:09:56.455474934+0800]: INFO:    [download] prometheus.zip succeeded.
[2023-03-03T10:09:56.457851180+0800]: INFO:    [monitor] apply prometheus manifests
[2023-03-03T10:10:34.536135617+0800]: ERROR:   [apply] add prometheus failed.
[2023-03-03T10:10:37.541904371+0800]: INFO:    [waiting] waiting prometheus
[2023-03-03T10:11:08.915090034+0800]: ERROR:   [waiting] prometheus pods --all ready failed.
[2023-03-03T10:11:08.919343622+0800]: INFO:    [apply] controller-manager and scheduler prometheus discovery service
[2023-03-03T10:11:09.137725749+0800]: INFO:    [apply] add controller-manager and scheduler prometheus discovery service succeeded.
[2023-03-03T10:11:09.139441258+0800]: INFO:    [monitor] add prometheus ingress
[2023-03-03T10:11:09.141863365+0800]: INFO:    [apply] prometheus ingress
[2023-03-03T10:11:09.347701733+0800]: INFO:    [apply] add prometheus ingress succeeded.
[2023-03-03T10:11:09.431774898+0800]: INFO:    [command] get node_ip value succeeded.
[2023-03-03T10:11:09.526868024+0800]: INFO:    [command] get node_port value succeeded.
[2023-03-03T10:11:09.540782557+0800]: INFO:    [ingress] curl -H 'Host:grafana.monitoring.cluster.local' http://10.0.1.203:49180; auth: admin/admin
[2023-03-03T10:11:09.546959366+0800]: INFO:    [ingress] curl -H 'Host:prometheus.monitoring.cluster.local' http://10.0.1.203:49180
[2023-03-03T10:11:09.549969520+0800]: INFO:    [ingress] curl -H 'Host:alertmanager.monitoring.cluster.local' http://10.0.1.203:49180
[2023-03-03T10:11:09.639805208+0800]: INFO:    [command] get MGMT_NODE value succeeded.
[2023-03-03T10:11:09.873670581+0800]: INFO:    [command] get node_hosts value succeeded.
[2023-03-03T10:11:09.875918854+0800]: ERROR:   [init] The host 10.0.1.201 is already in the cluster!

ERROR Summary: 
  [2023-03-03T10:10:34.536135617+0800]: ERROR:   [apply] add prometheus failed.
  [2023-03-03T10:11:08.915090034+0800]: ERROR:   [waiting] prometheus pods --all ready failed.
  [2023-03-03T10:11:09.875918854+0800]: ERROR:   [init] The host 10.0.1.201 is already in the cluster!
  

ACCESS Summary: 
  [ingress] curl -H 'Host:grafana.monitoring.cluster.local' http://10.0.1.203:49180; auth: admin/admin
  [ingress] curl -H 'Host:prometheus.monitoring.cluster.local' http://10.0.1.203:49180
  [ingress] curl -H 'Host:alertmanager.monitoring.cluster.local' http://10.0.1.203:49180
  


  See detailed log >>> /tmp/kainstall.qgxy6Q1ykj/kainstall.log 

no matches for kind "CronJob" in version "batch/v1beta1"

Since Kubernetes 1.25, the batch/v1beta1 API no longer serves CronJob.
This causes an error when installing cilium; after checking the YAML for the 1.9.* cilium version, I found that line 3266 of the script uses the deprecated API:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: etcd-snapshot
  namespace: kube-system
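Since Kubernetes 1.21 the CronJob resource is served by the stable batch/v1 API, so one hedged fix is to bump the apiVersion embedded in the script. A sketch, assuming the only batch/v1beta1 object in the script is this etcd-snapshot CronJob; adjust the file name for your distro variant:

# switch the embedded etcd-snapshot CronJob to the stable API group
sed -i 's|apiVersion: batch/v1beta1|apiVersion: batch/v1|' kainstall-centos.sh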

DNS record

kubernetes-dashboard.cluster.local
app.demo.com

Can these two domain names be customized? I'd like to use my own domain to reach the internal network, because many people use the test cluster on the intranet and having everyone bind hosts entries is inconvenient.

Installation with --network cilium reports an error

Install command:
bash kainstall-centos.sh init --master 192.168.122.91 --worker 192.168.122.94,192.168.122.95 --user root --port 22 --password 123456 --network cilium --version 1.20.6
Error message:

timed out waiting for the condition on pods/cilium-bjvlk
timed out waiting for the condition on pods/cilium-mks7d
timed out waiting for the condition on pods/cilium-ppvk2
Retry 1/6 exited 1, retrying in 1 seconds...

System error log:

Feb  8 11:28:17 localhost containerd: time="2022-02-08T11:28:17.538728720+08:00" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed"
Feb  8 11:28:20 localhost kubelet: I0208 11:28:20.180676    3736 scope.go:111] [topologymanager] RemoveContainer - Container ID: fd67367ec3db6863f7e25c268f91adc5e00df2b80e124eda51c1de92f20628a7
Feb  8 11:28:22 localhost kubelet: E0208 11:28:22.203685    3736 remote_runtime.go:332] ContainerStatus "cd7099b11ca1ad381c89858d00116a382fc9fe6964843af01fc53d7e751c0c66" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: cd7099b11ca1ad381c89858d00116a382fc9fe6964843af01fc53d7e751c0c66
Feb  8 11:28:22 localhost kubelet: E0208 11:28:22.203749    3736 kuberuntime_manager.go:980] getPodContainerStatuses for pod "cilium-mks7d_kube-system(73fe1803-0e75-48d5-a547-5feb7e6f5a11)" failed: rpc error: code = Unknown desc = Error: No such container: cd7099b11ca1ad381c89858d00116a382fc9fe6964843af01fc53d7e751c0c66
Feb  8 11:28:22 localhost kubelet: I0208 11:28:22.243401    3736 topology_manager.go:187] [topologymanager] Topology Admit Handler
Feb  8 11:28:22 localhost kubelet: I0208 11:28:22.243990    3736 topology_manager.go:187] [topologymanager] Topology Admit Handler
Feb  8 11:28:22 localhost systemd: Created slice libcontainer container kubepods-burstable-pod3e70fe3d_66a2_4177_bb46_f1729f11d16b.slice.
Feb  8 11:28:22 localhost systemd: Created slice libcontainer container kubepods-burstable-pod8313d926_3ede_4d55_997a_848c590768c6.slice.
Feb  8 11:28:22 localhost kubelet: I0208 11:28:22.418662    3736 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-t52t8" (UniqueName: "kubernetes.io/secret/3e70fe3d-66a2-4177-bb46-f1729f11d16b-coredns-token-t52t8") pod "coredns-85bb79f4b4-kwb8z" (UID: "3e70fe3d-66a2-4177-bb46-f1729f11d16b")
Feb  8 11:28:22 localhost kubelet: I0208 11:28:22.418747    3736 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8313d926-3ede-4d55-997a-848c590768c6-config-volume") pod "coredns-85bb79f4b4-jddhw" (UID: "8313d926-3ede-4d55-997a-848c590768c6")
Feb  8 11:28:22 localhost kubelet: I0208 11:28:22.418826    3736 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-t52t8" (UniqueName: "kubernetes.io/secret/8313d926-3ede-4d55-997a-848c590768c6-coredns-token-t52t8") pod "coredns-85bb79f4b4-jddhw" (UID: "8313d926-3ede-4d55-997a-848c590768c6")
Feb  8 11:28:22 localhost kubelet: I0208 11:28:22.418894    3736 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3e70fe3d-66a2-4177-bb46-f1729f11d16b-config-volume") pod "coredns-85bb79f4b4-kwb8z" (UID: "3e70fe3d-66a2-4177-bb46-f1729f11d16b")
Feb  8 11:28:24 localhost containerd: time="2022-02-08T11:28:24.825457447+08:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/cd7099b11ca1ad381c89858d00116a382fc9fe6964843af01fc53d7e751c0c66 pid=7842
Feb  8 11:28:24 localhost systemd: Started libcontainer container cd7099b11ca1ad381c89858d00116a382fc9fe6964843af01fc53d7e751c0c66.
Feb  8 11:28:26 localhost kubelet: E0208 11:28:26.429653    3736 kuberuntime_manager.go:965] PodSandboxStatus of sandbox "7f39c7f3bff44ec7c1608d75770f4de2868ae9a3c08ff0ea243b7b8af2670071" for pod "coredns-85bb79f4b4-jddhw_kube-system(8313d926-3ede-4d55-997a-848c590768c6)" error: rpc error: code = Unknown desc = Error: No such container: 7f39c7f3bff44ec7c1608d75770f4de2868ae9a3c08ff0ea243b7b8af2670071
Feb  8 11:28:26 localhost kubelet: E0208 11:28:26.458018    3736 kuberuntime_manager.go:965] PodSandboxStatus of sandbox "d5329aa45c09e5c2d2df343ef3fac34ce367c4b95fa213d491ef372368846a57" for pod "coredns-85bb79f4b4-kwb8z_kube-system(3e70fe3d-66a2-4177-bb46-f1729f11d16b)" error: rpc error: code = Unknown desc = Error: No such container: d5329aa45c09e5c2d2df343ef3fac34ce367c4b95fa213d491ef372368846a57
Feb  8 11:28:27 localhost kubelet: E0208 11:28:27.467708    3736 kuberuntime_manager.go:965] PodSandboxStatus of sandbox "d5329aa45c09e5c2d2df343ef3fac34ce367c4b95fa213d491ef372368846a57" for pod "coredns-85bb79f4b4-kwb8z_kube-system(3e70fe3d-66a2-4177-bb46-f1729f11d16b)" error: rpc error: code = Unknown desc = Error: No such container: d5329aa45c09e5c2d2df343ef3fac34ce367c4b95fa213d491ef372368846a57
Feb  8 11:28:27 localhost kubelet: E0208 11:28:27.469564    3736 kuberuntime_manager.go:965] PodSandboxStatus of sandbox "7f39c7f3bff44ec7c1608d75770f4de2868ae9a3c08ff0ea243b7b8af2670071" for pod "coredns-85bb79f4b4-jddhw_kube-system(8313d926-3ede-4d55-997a-848c590768c6)" error: rpc error: code = Unknown desc = Error: No such container: 7f39c7f3bff44ec7c1608d75770f4de2868ae9a3c08ff0ea243b7b8af2670071
Feb  8 11:28:27 localhost kubelet: E0208 11:28:27.471422    3736 kuberuntime_manager.go:965] PodSandboxStatus of sandbox "d5329aa45c09e5c2d2df343ef3fac34ce367c4b95fa213d491ef372368846a57" for pod "coredns-85bb79f4b4-kwb8z_kube-system(3e70fe3d-66a2-4177-bb46-f1729f11d16b)" error: rpc error: code = Unknown desc = Error: No such container: d5329aa45c09e5c2d2df343ef3fac34ce367c4b95fa213d491ef372368846a57
Feb  8 11:28:27 localhost kubelet: E0208 11:28:27.473238    3736 kuberuntime_manager.go:965] PodSandboxStatus of sandbox "7f39c7f3bff44ec7c1608d75770f4de2868ae9a3c08ff0ea243b7b8af2670071" for pod "coredns-85bb79f4b4-jddhw_kube-system(8313d926-3ede-4d55-997a-848c590768c6)" error: rpc error: code = Unknown desc = Error: No such container: 7f39c7f3bff44ec7c1608d75770f4de2868ae9a3c08ff0ea243b7b8af2670071
Feb  8 11:28:29 localhost containerd: time="2022-02-08T11:28:29.743079878+08:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/d5329aa45c09e5c2d2df343ef3fac34ce367c4b95fa213d491ef372368846a57 pid=8023
Feb  8 11:28:29 localhost systemd: Started libcontainer container d5329aa45c09e5c2d2df343ef3fac34ce367c4b95fa213d491ef372368846a57.
Feb  8 11:28:29 localhost systemd: Couldn't stat device /dev/char/10:200
Feb  8 11:28:30 localhost containerd: time="2022-02-08T11:28:30.480013207+08:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/7f39c7f3bff44ec7c1608d75770f4de2868ae9a3c08ff0ea243b7b8af2670071 pid=8082
Feb  8 11:28:30 localhost systemd: Started libcontainer container 7f39c7f3bff44ec7c1608d75770f4de2868ae9a3c08ff0ea243b7b8af2670071.
Feb  8 11:28:30 localhost systemd: Couldn't stat device /dev/char/10:200
Feb  8 11:28:30 localhost kubelet: W0208 11:28:30.749269    3736 pod_container_deletor.go:79] Container "d5329aa45c09e5c2d2df343ef3fac34ce367c4b95fa213d491ef372368846a57" not found in pod's containers
Feb  8 11:28:31 localhost kubelet: W0208 11:28:31.168093    3736 pod_container_deletor.go:79] Container "7f39c7f3bff44ec7c1608d75770f4de2868ae9a3c08ff0ea243b7b8af2670071" not found in pod's containers
Feb  8 11:28:38 localhost systemd-logind: New session 1437 of user root.
Feb  8 11:28:38 localhost systemd: Started Session 1437 of user root.
Feb  8 11:28:48 localhost containerd: time="2022-02-08T11:28:48.346419616+08:00" level=info msg="shim disconnected" id=cd7099b11ca1ad381c89858d00116a382fc9fe6964843af01fc53d7e751c0c66
Feb  8 11:28:48 localhost containerd: time="2022-02-08T11:28:48.347154529+08:00" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed"
Feb  8 11:28:48 localhost dockerd: time="2022-02-08T11:28:48.345725065+08:00" level=info msg="ignoring event" container=cd7099b11ca1ad381c89858d00116a382fc9fe6964843af01fc53d7e751c0c66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Feb  8 11:28:50 localhost kubelet: I0208 11:28:50.388930    3736 scope.go:111] [topologymanager] RemoveContainer - Container ID: cd7099b11ca1ad381c89858d00116a382fc9fe6964843af01fc53d7e751c0c66
Feb  8 11:28:50 localhost kubelet: E0208 11:28:50.390600    3736 pod_workers.go:191] Error syncing pod 73fe1803-0e75-48d5-a547-5feb7e6f5a11 ("cilium-mks7d_kube-system(73fe1803-0e75-48d5-a547-5feb7e6f5a11)"), skipping: failed to "StartContainer" for "cilium-agent" with CrashLoopBackOff: "back-off 10s restarting failed container=cilium-agent pod=cilium-mks7d_kube-system(73fe1803-0e75-48d5-a547-5feb7e6f5a11)"
Feb  8 11:28:50 localhost kubelet: I0208 11:28:50.391174    3736 scope.go:111] [topologymanager] RemoveContainer - Container ID: fd67367ec3db6863f7e25c268f91adc5e00df2b80e124eda51c1de92f20628a7
Feb  8 11:29:00 localhost kubelet: E0208 11:29:00.860877    3736 cni.go:366] Error adding kube-system_coredns-85bb79f4b4-kwb8z/d5329aa45c09e5c2d2df343ef3fac34ce367c4b95fa213d491ef372368846a57 to network cilium-cni/cilium: unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Feb  8 11:29:00 localhost kubelet: Is the agent running?
Feb  8 11:29:01 localhost kubelet: I0208 11:29:01.030233    3736 scope.go:111] [topologymanager] RemoveContainer - Container ID: cd7099b11ca1ad381c89858d00116a382fc9fe6964843af01fc53d7e751c0c66
Feb  8 11:29:01 localhost kubelet: E0208 11:29:01.287706    3736 cni.go:366] Error adding kube-system_coredns-85bb79f4b4-jddhw/7f39c7f3bff44ec7c1608d75770f4de2868ae9a3c08ff0ea243b7b8af2670071 to network cilium-cni/cilium: unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory
Feb  8 11:29:01 localhost kubelet: Is the agent running?
Feb  8 11:29:01 localhost systemd: Started Session 1438 of user root.
Feb  8 11:29:01 localhost crond: sendmail: fatal: parameter inet_interfaces: no local interface found for ::1
Feb  8 11:29:03 localhost containerd: time="2022-02-08T11:29:03.123942236+08:00" level=info msg="starting signal loop" namespace=moby path=/run/containerd/io.containerd.runtime.v2.task/moby/096c99288792d04b6cc31705105c70a63bd1d02af740c462c5438a2a9d0f0e4f pid=8912
Feb  8 11:29:03 localhost systemd: Started libcontainer container 096c99288792d04b6cc31705105c70a63bd1d02af740c462c5438a2a9d0f0e4f.
ERROR Summary: 
  [2022-02-08T11:42:03.367923589+0800]: ERROR:   [waiting] cilium-node pods ready failed.
  [2022-02-08T11:48:46.124565883+0800]: ERROR:   [waiting] hubble-relay pods ready failed.
  [2022-02-08T11:48:49.720552849+0800]: ERROR:   [command] get node_port value failed.
  [2022-02-08T11:50:47.687143866+0800]: ERROR:   [download] kubernetes-dashboard.yml failed.
  [2022-02-08T11:51:22.444082068+0800]: ERROR:   [apply] add /tmp/kainstall-offline-file//manifests/kubernetes-dashboard.yml failed.
  [2022-02-08T11:51:58.267282739+0800]: ERROR:   [apply] add kubernetes dashboard ingress failed.
  

ACCESS Summary: 
  [ingress] curl -H 'Host:hubble-ui.cluster.local' http://192.168.122.95:nodePort
  [ingress] curl -H 'Host:app.demo.com' http://192.168.122.95:30385
  [ops] etcd backup directory: /var/lib/etcd/backups

kainstall.log

kubeadm init failed.

Downloaded the script:

wget https://ghproxy.com/https://raw.githubusercontent.com/lework/kainstall/master/kainstall-ubuntu.sh

It reports the following error; could you help figure out the cause?

error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image registry.cn-hangzhou.aliyuncs.com/kainstall/kube-apiserver:v1.25.0: output: E0831 09:51:30.930600   27372 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-apiserver:v1.25.0\": failed to resolve reference \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-apiserver:v1.25.0\": registry.cn-hangzhou.aliyuncs.com/kainstall/kube-apiserver:v1.25.0: not found" image="registry.cn-hangzhou.aliyuncs.com/kainstall/kube-apiserver:v1.25.0"
time="2022-08-31T09:51:30+08:00" level=fatal msg="pulling image: rpc error: code = NotFound desc = failed to pull and unpack image \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-apiserver:v1.25.0\": failed to resolve reference \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-apiserver:v1.25.0\": registry.cn-hangzhou.aliyuncs.com/kainstall/kube-apiserver:v1.25.0: not found"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image registry.cn-hangzhou.aliyuncs.com/kainstall/kube-controller-manager:v1.25.0: output: E0831 09:51:32.328803   27397 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-controller-manager:v1.25.0\": failed to resolve reference \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-controller-manager:v1.25.0\": registry.cn-hangzhou.aliyuncs.com/kainstall/kube-controller-manager:v1.25.0: not found" image="registry.cn-hangzhou.aliyuncs.com/kainstall/kube-controller-manager:v1.25.0"
time="2022-08-31T09:51:32+08:00" level=fatal msg="pulling image: rpc error: code = NotFound desc = failed to pull and unpack image \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-controller-manager:v1.25.0\": failed to resolve reference \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-controller-manager:v1.25.0\": registry.cn-hangzhou.aliyuncs.com/kainstall/kube-controller-manager:v1.25.0: not found"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image registry.cn-hangzhou.aliyuncs.com/kainstall/kube-scheduler:v1.25.0: output: E0831 09:51:33.818145   27422 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-scheduler:v1.25.0\": failed to resolve reference \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-scheduler:v1.25.0\": registry.cn-hangzhou.aliyuncs.com/kainstall/kube-scheduler:v1.25.0: not found" image="registry.cn-hangzhou.aliyuncs.com/kainstall/kube-scheduler:v1.25.0"
time="2022-08-31T09:51:33+08:00" level=fatal msg="pulling image: rpc error: code = NotFound desc = failed to pull and unpack image \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-scheduler:v1.25.0\": failed to resolve reference \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-scheduler:v1.25.0\": registry.cn-hangzhou.aliyuncs.com/kainstall/kube-scheduler:v1.25.0: not found"
, error: exit status 1
        [ERROR ImagePull]: failed to pull image registry.cn-hangzhou.aliyuncs.com/kainstall/kube-proxy:v1.25.0: output: E0831 09:51:35.166799   27447 remote_image.go:238] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-proxy:v1.25.0\": failed to resolve reference \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-proxy:v1.25.0\": registry.cn-hangzhou.aliyuncs.com/kainstall/kube-proxy:v1.25.0: not found" image="registry.cn-hangzhou.aliyuncs.com/kainstall/kube-proxy:v1.25.0"
time="2022-08-31T09:51:35+08:00" level=fatal msg="pulling image: rpc error: code = NotFound desc = failed to pull and unpack image \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-proxy:v1.25.0\": failed to resolve reference \"registry.cn-hangzhou.aliyuncs.com/kainstall/kube-proxy:v1.25.0\": registry.cn-hangzhou.aliyuncs.com/kainstall/kube-proxy:v1.25.0: not found"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
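For context, the mirror registry registry.cn-hangzhou.aliyuncs.com/kainstall simply does not carry the v1.25.0 control-plane images here. One hedged workaround, assuming the script passes the KUBE_IMAGE_REPO variable from the Default settings section through to kubeadm's image repository, is to point it at a registry that does carry them (the value below is illustrative):

# use a different image repository for the control-plane images (illustrative value)
export KUBE_IMAGE_REPO="registry.aliyuncs.com/google_containers"
bash kainstall-ubuntu.sh init --master 192.168.77.130 --worker 192.168.77.133 --user root --password 123456 --version 1.25.0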

Two errors appear after deployment completes

ERROR Summary:
[2021-02-10T10:53:31.694598032+0800]: ERROR: [apply] add ingress-demo-app failed.
[2021-02-10T10:55:05.062855606+0800]: ERROR: [apply] add kubernetes dashboard ingress failed.

Ubuntu 20.04: metrics-server does not run properly and the cluster installation gets stuck at kubesphere

metrics-server is not running properly and the cluster installation is stuck at kubesphere.

[2021-08-08T21:38:49.956194608+0800]: INFO:    [apply] add /tmp/kainstall-offline-file//manifests/kubesphere-installer.yaml succeeded.
[2021-08-08T21:38:49.963119829+0800]: INFO:    [apply] /tmp/kainstall-offline-file//manifests/cluster-configuration.yaml
[2021-08-08T21:38:52.437071649+0800]: INFO:    [apply] add /tmp/kainstall-offline-file//manifests/cluster-configuration.yaml succeeded.
[2021-08-08T21:39:55.449887893+0800]: INFO:    [waiting] waiting ks-installer
[2021-08-08T21:40:01.944030259+0800]: INFO:    [waiting] ks-installer pods ready succeeded.

Node information

Information as of: 2021-08-08 13:15:24
 
 Product............: VMware Virtual Platform None
 OS.................: Ubuntu 20.04.1 LTS (bullseye/sid)
 Kernel.............: Linux 5.4.0-80-generic x86_64 GNU/Linux
 CPU................: Intel(R) Xeon(R) Silver 4214R CPU @ 2.40GHz 6P 1C 6L

 Hostname...........: k8s-master-node1
 IP Addresses.......: xxx.xxx.xxx.1

 Uptime.............: 0 days, 00h 00m 12s
 Memory.............: 0.61GiB of 7.75GiB RAM used (7.91%)
 Load Averages......: 0.07 / 0.02 / 0.00 with 6 core(s) at 2394.374Hz
 Disk Usage.........: 13G of 1.2T disk space used (2%) 

 Users online.......: 1
 Running Processes..: 309
 Container Info.....: Images:0

Cluster initialization command

bash -c "$(curl -sSL https://cdn.jsdelivr.net/gh/lework/kainstall@master/kainstall-ubuntu.sh)" - init \
  --master xxxx.xxxx.xxx.1 \
  --worker xxxx.xxxx.xxx.2,xxxx.xxxx.xxx.3 \
  --user root --password zzzzzzz \
  --10years --version 1.21.3 \
  --network flannel --ingress nginx --ui kubesphere --addon metrics-server --monitor prometheus

metrics-server-79bf7dcc6f-wmbj9 not ready

[root@k8s-master-node1 /]# kubectl get pod -A
NAMESPACE           NAME                                        READY   STATUS             RESTARTS   AGE
default             ingress-demo-app-694bf5d965-6rqds           1/1     Running            0          30m
default             ingress-demo-app-694bf5d965-nkdvb           1/1     Running            0          30m
ingress-nginx       ingress-nginx-admission-create-r857p        0/1     Completed          0          31m
ingress-nginx       ingress-nginx-admission-patch-tkxp5         0/1     Completed          0          31m
ingress-nginx       ingress-nginx-controller-76d9d9fbf5-n5jxf   1/1     Running            0          31m
kube-system         coredns-56c5f6b585-2422r                    1/1     Running            0          32m
kube-system         coredns-56c5f6b585-srp4j                    1/1     Running            0          32m
kube-system         default-http-backend-6c67944995-fpmcq       1/1     Running            0          30m
kube-system         etcd-k8s-master-node1                       1/1     Running            0          32m
kube-system         kube-apiserver-k8s-master-node1             1/1     Running            0          32m
kube-system         kube-controller-manager-k8s-master-node1    1/1     Running            0          32m
kube-system         kube-flannel-ds-fh8zh                       1/1     Running            0          32m
kube-system         kube-flannel-ds-nb6kl                       1/1     Running            0          32m
kube-system         kube-flannel-ds-x78rn                       1/1     Running            0          32m
kube-system         kube-proxy-7pgps                            1/1     Running            0          32m
kube-system         kube-proxy-hnv6x                            1/1     Running            0          32m
kube-system         kube-proxy-nzpnq                            1/1     Running            0          32m
kube-system         kube-scheduler-k8s-master-node1             1/1     Running            0          32m
kube-system         metrics-server-79bf7dcc6f-wmbj9             0/1     Running            0          31m
kubesphere-system   ks-installer-ff7d7698d-bppv6                0/1     CrashLoopBackOff   9          30m

metrics-server pod details and logs

[root@k8s-master-node1 /]# kubectl logs -n kube-system   metrics-server-79bf7dcc6f-wmbj9
I0808 13:37:21.566776       1 serving.go:341] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0808 13:37:22.307172       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0808 13:37:22.307189       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0808 13:37:22.307209       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0808 13:37:22.307214       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0808 13:37:22.307226       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0808 13:37:22.307229       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0808 13:37:22.307685       1 secure_serving.go:197] Serving securely on [::]:443
I0808 13:37:22.307755       1 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key
I0808 13:37:22.307775       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0808 13:37:22.407917       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I0808 13:37:22.407924       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0808 13:37:22.407940       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
E0808 13:37:27.504502       1 scraper.go:139] "Failed to scrape node" err="Get \"https://k8s-worker-node2:10250/stats/summary?only_cpu_and_memory=true\": EOF" node="k8s-worker-node2"
E0808 13:37:27.522212       1 scraper.go:139] "Failed to scrape node" err="Get \"https://k8s-master-node1:10250/stats/summary?only_cpu_and_memory=true\": EOF" node="k8s-master-node1"
[root@k8s-master-node1 /]# kubectl describe -n kube-system pod  metrics-server-79bf7dcc6f-wmbj9
Name:                 metrics-server-79bf7dcc6f-wmbj9
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 k8s-worker-node2/xxx.xxx.xxx.xxx
Start Time:           Sun, 08 Aug 2021 13:37:16 +0000
Labels:               k8s-app=metrics-server
                      pod-template-hash=79bf7dcc6f
Annotations:          <none>
Status:               Running
IP:                   10.244.2.2
IPs:
  IP:           10.244.2.2
Controlled By:  ReplicaSet/metrics-server-79bf7dcc6f
Containers:
  metrics-server:
    Container ID:  docker://10e9a176e588a454066608bdbec5adddd39de942ee771c62a6f99e7c079e68a0
    Image:         registry.cn-hangzhou.aliyuncs.com/kainstall/metrics-server:v0.5.0
    Image ID:      docker-pullable://registry.cn-hangzhou.aliyuncs.com/kainstall/metrics-server@sha256:05bf9f4bf8d9de19da59d3e1543fd5c140a8d42a5e1b92421e36e5c2d74395eb
    Port:          443/TCP
    Host Port:     0/TCP
    Args:
      --cert-dir=/tmp
      --secure-port=443
      --kubelet-use-node-status-port
      --metric-resolution=15s
    State:          Running
      Started:      Sun, 08 Aug 2021 13:37:21 +0000
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        100m
      memory:     200Mi
    Liveness:     http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /tmp from tmp-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nnnnm (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  tmp-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-nnnnm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  4m59s                 default-scheduler  Successfully assigned kube-system/metrics-server-79bf7dcc6f-wmbj9 to k8s-worker-node2
  Normal   Pulling    4m57s                 kubelet            Pulling image "registry.cn-hangzhou.aliyuncs.com/kainstall/metrics-server:v0.5.0"
  Normal   Pulled     4m54s                 kubelet            Successfully pulled image "registry.cn-hangzhou.aliyuncs.com/kainstall/metrics-server:v0.5.0" in 3.841978824s
  Normal   Created    4m53s                 kubelet            Created container metrics-server
  Normal   Started    4m53s                 kubelet            Started container metrics-server
  Warning  Unhealthy  68s (x21 over 4m28s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 500
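For reference, the "Failed to scrape node ... EOF" messages together with the failing readiness probe usually mean metrics-server cannot complete TLS connections to the kubelets on port 10250. A common workaround (a hedged sketch, not a kainstall feature) is to allow insecure kubelet TLS on the metrics-server deployment and let it restart:

# add the --kubelet-insecure-tls argument to the metrics-server container
kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]'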

Errors during installation

[2021-08-08T21:36:14.184345230+0800]: ^[[32mINFO:    ^[[0m[kubeadm init] xxx.xxx.xxx.xxx: set kube config succeeded.
[2021-08-08T21:36:14.196881865+0800]: ^[[32mINFO:    ^[[0m[kubeadm init] xxx.xxx.xxx.xxx: delete master taint
[2021-08-08T21:36:14.223645005+0800]: ^[[34mEXEC:    ^[[0m[command] bash -c 'kubectl taint nodes --all node-role.kubernetes.io/master-'
bash: kubectl: command not found
[2021-08-08T21:36:14.237175207+0800]: ^[[31mERROR:   ^[[0m[kubeadm init] xxx.xxx.xxx.xxx: delete master taint failed.

CentOS 7 installation error

W0330 18:49:18.277988 3616 strict.go:47] unknown configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta3", Kind:"InitConfiguration"} for scheme definitions in "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/scheme/scheme.go:31" and "k8s.io/kubernetes/cmd/kubeadm/app/componentconfigs/scheme.go:28"
no kind "InitConfiguration" is registered for version "kubeadm.k8s.io/v1beta3" in scheme "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/scheme/scheme.go:31"
To see the stack trace of this error execute with --v=5 or higher

CentOS 8.5 installation error: [download] kubeadm-linux-amd64 failed.

OS version:
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 8.5.2111
[root@localhost ~]# uname -a
Linux k8s-master-node1 4.18.0-348.el8.x86_64 #1 SMP Tue Oct 19 15:14:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]#

Errors during installation:
[root@localhost ~]# bash kainstall-centos.sh init --master 192.168.3.168 --worker 192.168.3.169,192.168.3.170 --user root --password 1qaz2wsx --port 22 --10years --version 1.25.2
[2022-10-12T14:04:20.996282077+0800]: INFO: [start] bash kainstall-centos.sh init --master 192.168.3.168 --worker 192.168.3.169,192.168.3.170 --user root --password zzzzzz --port 22 --10years --version 1.25.2
[2022-10-12T14:04:20.999787480+0800]: INFO: [check] ssh command exists.
[2022-10-12T14:04:21.001026364+0800]: INFO: [check] sshpass command exists.
[2022-10-12T14:04:21.002241683+0800]: INFO: [check] wget command exists.
[2022-10-12T14:04:21.168360384+0800]: INFO: [check] ssh 192.168.3.168 connection succeeded.
[2022-10-12T14:04:21.347292239+0800]: INFO: [check] ssh 192.168.3.169 connection succeeded.
[2022-10-12T14:04:21.514996945+0800]: INFO: [check] ssh 192.168.3.170 connection succeeded.
[2022-10-12T14:04:21.516225001+0800]: INFO: [check] os support: centos7 centos8
[2022-10-12T14:04:21.830656951+0800]: INFO: [check] 192.168.3.168 os support succeeded.
[2022-10-12T14:04:22.187268224+0800]: INFO: [check] 192.168.3.169 os support succeeded.
[2022-10-12T14:04:22.353513229+0800]: INFO: [check] 192.168.3.170 os support succeeded.
[2022-10-12T14:04:22.357449687+0800]: INFO: [init] Get 192.168.3.168 InternalIP.
[2022-10-12T14:04:22.529640920+0800]: INFO: [command] get MGMT_NODE_IP value succeeded.
[2022-10-12T14:04:22.532309834+0800]: INFO: [init] master: 192.168.3.168
[2022-10-12T14:04:27.893130188+0800]: INFO: [init] init master 192.168.3.168 succeeded.
[2022-10-12T14:04:28.159972191+0800]: INFO: [init] 192.168.3.168 set hostname and hostname resolution succeeded.
[2022-10-12T14:04:28.165045054+0800]: INFO: [init] 192.168.3.168: set audit-policy file.
[2022-10-12T14:04:28.752052220+0800]: INFO: [init] 192.168.3.168: set audit-policy file succeeded.
[2022-10-12T14:04:28.760281853+0800]: INFO: [init] worker: 192.168.3.169
[2022-10-12T14:04:36.178754215+0800]: INFO: [init] init worker 192.168.3.169 succeeded.
[2022-10-12T14:04:36.527092785+0800]: INFO: [init] worker: 192.168.3.170
[2022-10-12T14:04:44.367455371+0800]: INFO: [init] init worker 192.168.3.170 succeeded.
[2022-10-12T14:04:44.647374276+0800]: INFO: [install] install containerd on 192.168.3.168.
[2022-10-12T14:04:45.141627418+0800]: ERROR: [install] install containerd on 192.168.3.168 failed.
[2022-10-12T14:04:45.145089853+0800]: INFO: [install] install kube on 192.168.3.168
[2022-10-12T14:04:45.746902911+0800]: ERROR: [install] install kube on 192.168.3.168 failed.
[2022-10-12T14:04:45.750514840+0800]: INFO: [install] install containerd on 192.168.3.169.
[2022-10-12T14:04:47.171355604+0800]: ERROR: [install] install containerd on 192.168.3.169 failed.
[2022-10-12T14:04:47.173495643+0800]: INFO: [install] install kube on 192.168.3.169
[2022-10-12T14:04:48.838902091+0800]: ERROR: [install] install kube on 192.168.3.169 failed.
[2022-10-12T14:04:48.841149112+0800]: INFO: [install] install containerd on 192.168.3.170.
[2022-10-12T14:04:49.725763134+0800]: ERROR: [install] install containerd on 192.168.3.170 failed.
[2022-10-12T14:04:49.727968657+0800]: INFO: [install] install kube on 192.168.3.170
[2022-10-12T14:04:51.247454388+0800]: ERROR: [install] install kube on 192.168.3.170 failed.
[2022-10-12T14:04:51.252089870+0800]: INFO: [install] install haproxy on 192.168.3.169
[2022-10-12T14:04:55.054083266+0800]: ERROR: [install] install haproxy on 192.168.3.169 failed.
[2022-10-12T14:04:55.057268774+0800]: INFO: [install] install haproxy on 192.168.3.170
[2022-10-12T14:04:56.110504724+0800]: ERROR: [install] install haproxy on 192.168.3.170 failed.
[2022-10-12T14:04:56.113846768+0800]: INFO: [install] download kubeadm 10 years certs client
[2022-10-12T14:04:56.117792342+0800]: INFO: [download] kubeadm-linux-amd64
[2022-10-12T14:04:56.157987417+0800]: ERROR: [download] kubeadm-linux-amd64 failed.

ERROR Summary:
[2022-10-12T14:04:45.141627418+0800]: ERROR: [install] install containerd on 192.168.3.168 failed.
[2022-10-12T14:04:45.746902911+0800]: ERROR: [install] install kube on 192.168.3.168 failed.
[2022-10-12T14:04:47.171355604+0800]: ERROR: [install] install containerd on 192.168.3.169 failed.
[2022-10-12T14:04:48.838902091+0800]: ERROR: [install] install kube on 192.168.3.169 failed.
[2022-10-12T14:04:49.725763134+0800]: ERROR: [install] install containerd on 192.168.3.170 failed.
[2022-10-12T14:04:51.247454388+0800]: ERROR: [install] install kube on 192.168.3.170 failed.
[2022-10-12T14:04:55.054083266+0800]: ERROR: [install] install haproxy on 192.168.3.169 failed.
[2022-10-12T14:04:56.110504724+0800]: ERROR: [install] install haproxy on 192.168.3.170 failed.
[2022-10-12T14:04:56.157987417+0800]: ERROR: [download] kubeadm-linux-amd64 failed.

See detailed log >>> /tmp/kainstall.coZgEOFuwO/kainstall.log
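Since every install and download step in this run failed at once, a hedged first check (before reading the per-step details in the log above) is basic repository and network health on the node running the script; ghproxy.com here is only an example endpoint, taken from the project's own download instructions:

# Verify that yum repositories resolve and that the download proxy is reachable.
yum makecache
curl -fsSI https://ghproxy.com >/dev/null && echo "proxy reachable" || echo "proxy unreachable"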

install container failed

(screenshot attached in the original issue)

The container runtime installation failed, and afterwards the Docker service disappeared from all nodes; Docker would not even start. What happened?

No match for argument: containernetworking

Script

## install_containerd()
if [[ "${OFFLINE_TAG:-}" != "1" ]]; then
  [ -f "$(which runc)" ] && yum remove -y runc
  [ -f "$(which containerd)" ] && yum remove -y containerd.io
  yum install -y containerd.io"${version}" containernetworking-plugins bash-completion
fi
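The "No match for argument: containernetworking" message likely means yum/dnf cannot find the containernetworking-plugins package in any enabled repository. A hedged sketch (not the script's actual behavior) that checks for the package first and reports the missing repository explicitly instead of failing the whole step:

# Install containerd, and only add containernetworking-plugins if a repo provides it.
if yum list available containernetworking-plugins >/dev/null 2>&1; then
  yum install -y containerd.io"${version}" containernetworking-plugins bash-completion
else
  echo "containernetworking-plugins not found in any enabled repo; check the docker-ce repo configuration"
  yum install -y containerd.io"${version}" bash-completion
fi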

Environment and error

(screenshot attached in the original issue)

KubeSphere installation error

Error message:
ERROR Summary:
[2022-05-09T12:37:51.775037767+0800]: ERROR: [ui] set statefulset to worker node failed.
[2022-05-09T12:39:30.009510299+0800]: ERROR: [waiting] kubesphere-controls-system pods --all ready failed.
[2022-05-09T12:40:04.902137560+0800]: ERROR: [waiting] kubesphere-monitoring-system pods --all ready failed.
Install command:
bash kainstall-ubuntu.sh init \
  --master 192.168.1.200 \
  --worker 192.168.1.201 \
  --user root \
  --password 0 \
  --port 22 \
  --version 1.22.5 \
  --ui kubesphere \
  --10years
I also tried several runs with --storage added; they fail with the same errors.
Log attached.

kubeadm init failed

unknown configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta3", Kind:"InitConfiguration"} for scheme definitions in "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/scheme/scheme.go:31" and "k8s.io/kubernetes/cmd/kubeadm/app/componentconfigs/scheme.go:28"
no kind "InitConfiguration" is registered for version "kubeadm.k8s.io/v1beta3" in scheme "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/scheme/scheme.go:31"
To see the stack trace of this error execute with --v=5 or higher

How can this be fixed?
(screenshot attached in the original issue)

[bug] kubelet incorrectly applies a resource limit to system.slice, causing OOM

System environment

  • OS: Ubuntu 20.04, kernel 5.4.0-91-generic
  • Install command:
./kainstall-ubuntu.sh init \
  --master 192.168.7.140,192.168.7.141,192.168.7.142 \
  --worker 192.168.7.143,192.168.7.144,192.168.7.145,192.168.7.146,192.168.7.147,192.168.7.148,192.168.7.149 \
  --port 22 \
  --network calico \
  --version 1.21.8

Reproduction

After the k8s deployment completed, haproxy and a large number of other system processes were killed on some nodes:

[152588.479431] Memory cgroup out of memory: Killed process 897 (haproxy) total-vm:71548kB, anon-rss:41156kB, file-rss:7324kB, shmem-rss:0kB, UID:0 pgtables:156kB oom_score_adj:0

The cause is that the memory available to system.slice was limited to 512M:

● system.slice - System Slice
     Loaded: loaded
    Drop-In: /run/systemd/system.control/system.slice.d
             └─50-MemoryLimit.conf, 50-CPUShares.conf
     Active: active since Mon 2022-01-03 17:12:02 CST; 5 days ago
       Docs: man:systemd.special(7)
      Tasks: 476
     Memory: 503.4M (limit: 512.0M)
     CGroup: /system.slice
             ├─accounts-daemon.service
             │ └─659 /usr/lib/accountsservice/accounts-daemon
             ├─atd.service
             │ └─696 /usr/sbin/atd -f
             ├─auditd.service
             │ └─4896 /sbin/auditd

Root cause

The kubelet configuration is wrong:

# Node resource reservation
kubeReserved:
  cpu: 200m\$(if [[ \$(cat /proc/meminfo | awk '/MemTotal/ {print \$2}') -gt 3670016 ]]; then echo -e '\n  memory: 256Mi';fi)
  ephemeral-storage: 1Gi
systemReserved:
  cpu: 300m\$(if [[ \$(cat /proc/meminfo | awk '/MemTotal/ {print \$2}') -gt 3670016 ]]; then echo -e '\n  memory: 512Mi';fi)
  ephemeral-storage: 1Gi
kubeReservedCgroup: /kube.slice
systemReservedCgroup: /system.slice
enforceNodeAllocatable: 
- pods
- kube-reserved
- system-reserved

Because enforceNodeAllocatable here includes system-reserved and kube-reserved, the resources that were meant to be reserved are instead enforced as hard limits on the system.slice cgroup.

Reference

This issue reproduces consistently on Ubuntu 20.04; after adjusting the configuration file and removing system-reserved and kube-reserved, the problem is resolved. It is unclear whether other systems are affected.
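A minimal remediation sketch for an already-affected node, assuming the kubeadm default kubelet config path /var/lib/kubelet/config.yaml (the exact path and the runtime limit reset may differ per setup):

# Stop enforcing the reservations as cgroup limits; keep only 'pods'.
sed -i -e '/^- kube-reserved$/d' -e '/^- system-reserved$/d' \
       -e '/^kubeReservedCgroup:/d' -e '/^systemReservedCgroup:/d' \
       /var/lib/kubelet/config.yaml
systemctl restart kubelet
# Lift the 512M limit already applied to system.slice for the current boot.
systemctl set-property --runtime system.slice MemoryLimit=infinity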

DNS problem

After a Service is created successfully, nslookup cannot resolve it; the default system Services cannot be resolved either.
The coredns logs look normal.
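A hedged debugging sketch for this symptom: run nslookup from a throwaway pod so the query goes through the cluster DNS rather than the node's resolver (busybox:1.28 is used because its nslookup output is dependable; the service name is just an example), and confirm the kube-dns Service actually has endpoints:

# Query a known service through the cluster DNS from inside a pod.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup kubernetes.default.svc.cluster.local
# Check that coredns pods are backing the kube-dns Service.
kubectl -n kube-system get svc,endpoints kube-dns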

Offline install of 1.21.1: wrong pause image version

The pause image in the offline package is 3.4.1, while kainstall-centos.sh expects 3.2 (a re-tag sketch follows the install command below).
During offline installation, kubesphere also needs to download content online, which causes the installation to fail.

bash kainstall-centos.sh init \
  --master 10.30.44.110,10.30.44.111,10.30.44.112 \
  --worker 10.30.44.113,10.30.44.114 \
  --user root \
  --password 123456 \
  --version 1.21.1 \
  --ui kubesphere \
  --offline-file 1.21.1_centos7.tgz
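A hedged re-tag sketch for the pause mismatch; the repository prefix is assumed to match the kainstall mirror used for the other offline images, so adjust it to whatever `docker images` actually shows on the nodes:

# Make the loaded pause image available under the tag the script expects.
docker tag registry.cn-hangzhou.aliyuncs.com/kainstall/pause:3.4.1 \
  registry.cn-hangzhou.aliyuncs.com/kainstall/pause:3.2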

kainstall-ubuntu.sh uses deprecated parameter 'pod-eviction-timeout' in kube-controller-manager configuration, causing kubeadm init failure

Hello,

I have found an issue in the kainstall-ubuntu.sh script related to the configuration of kube-controller-manager. The script is currently using the deprecated parameter pod-eviction-timeout, which leads to an incorrect configuration file being generated by kubeadm, ultimately causing the kubeadm init process to fail.

To fix this issue, I suggest removing or commenting out the line with the pod-eviction-timeout parameter in the kube-controller-manager configuration:

# pod-eviction-timeout: '2m' (the `pod-eviction-timeout` flag is deprecated for v1.26 and later versions )
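A hedged one-liner that applies the "remove" variant of this suggestion to a local copy of the script before running it:

# Drop the deprecated controller-manager flag from the generated kubeadm config.
sed -i '/pod-eviction-timeout/d' kainstall-ubuntu.sh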

Please consider updating the script to address this issue, ensuring that it is compatible with newer Kubernetes versions.

Thank you for your attention to this matter.

Offline deployment: problem with the coredns image name

Offline package version: 1.23.1 centos7

The coredns image name seems to be wrong.
The image loaded by the script is named registry.cn-hangzhou.aliyuncs.com/kainstall/coredns:v1.8.6, but kubeadm actually uses
registry.cn-hangzhou.aliyuncs.com/kainstall/coredns/coredns:v1.8.6, so coredns is still downloaded during an offline install.

My temporary workaround is to re-tag the image to the latter name while the script is running; that fixes it.
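A hedged sketch of that re-tag workaround, with the image names taken from the report above (run it on each node after the offline images are loaded, before kubeadm starts pulling):

# Give the loaded coredns image the repository path kubeadm actually requests.
docker tag registry.cn-hangzhou.aliyuncs.com/kainstall/coredns:v1.8.6 \
  registry.cn-hangzhou.aliyuncs.com/kainstall/coredns/coredns:v1.8.6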

Debian 10 error: ERROR: [download] kube-flannel.yml failed.

Command executed

bash kainstall-debian.sh init \
  --master 192.168.2.100 \
  --worker 192.168.2.101,192.168.2.102 \
  --user root \
  --password root \
  --port 22

Result

[2022-02-05T20:45:13.389986952+0800]: INFO:    [cluster] cluster status

NAME               STATUS     ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION    CONTAINER-RUNTIME
k8s-master-node1   NotReady   control-plane,master   3m28s   v1.23.3   192.168.2.100   <none>        Debian GNU/Linux 10 (buster)   4.19.0-18-amd64   docker://20.10.12
k8s-worker-node1   NotReady   worker                 3m7s    v1.23.3   192.168.2.101   <none>        Debian GNU/Linux 10 (buster)   4.19.0-18-amd64   docker://20.10.12
k8s-worker-node2   NotReady   worker                 2m59s   v1.23.3   192.168.2.102   <none>        Debian GNU/Linux 10 (buster)   4.19.0-18-amd64   docker://20.10.12

NAMESPACE              NAME                                         READY   STATUS      RESTARTS   AGE
kube-system            coredns-5f5cf4bc85-7gblv                     0/1     Pending     0          3m11s
kube-system            coredns-f56b66bdc-75v7q                      0/1     Pending     0          11s
kube-system            coredns-f56b66bdc-jhklt                      0/1     Pending     0          11s
kube-system            etcd-k8s-master-node1                        1/1     Running     0          3m24s
kube-system            etcd-snapshot-1644065106-g2t5r               0/1     Completed   0          7s
kube-system            kube-apiserver-k8s-master-node1              1/1     Running     0          3m24s
kube-system            kube-controller-manager-k8s-master-node1     1/1     Running     0          3m24s
kube-system            kube-proxy-bm55l                             1/1     Running     0          3m7s
kube-system            kube-proxy-nr4sw                             1/1     Running     0          2m59s
kube-system            kube-proxy-qc4f7                             1/1     Running     0          3m11s
kube-system            kube-scheduler-k8s-master-node1              1/1     Running     0          3m24s
kube-system            metrics-server-765f8cbc4c-tjzwj              0/1     Pending     0          93s
kubernetes-dashboard   dashboard-metrics-scraper-799d786dbf-42zgl   0/1     Pending     0          13s
kubernetes-dashboard   kubernetes-dashboard-6b6b86c4c5-z6q4s        0/1     Pending     0          13s
ERROR Summary:
  [2022-02-05T20:42:29.561835327+0800]: ERROR:   [download] kube-flannel.yml failed.
  [2022-02-05T20:43:02.061428057+0800]: ERROR:   [apply] add /tmp/kainstall-offline-file//manifests/kube-flannel.yml failed.
  [2022-02-05T20:43:36.519914741+0800]: ERROR:   [waiting] flannel pods ready failed.
  [2022-02-05T20:43:51.374782080+0800]: ERROR:   [download] ingress-nginx.yml failed.
  [2022-02-05T20:44:24.257154049+0800]: ERROR:   [apply] add /tmp/kainstall-offline-file//manifests/ingress-nginx.yml failed.
  [2022-02-05T20:44:58.689684161+0800]: ERROR:   [waiting] ingress-nginx pod ready failed.
  [2022-02-05T20:44:58.888762906+0800]: ERROR:   [ingress] delete ingress-ngin ValidatingWebhookConfiguration failed.
  [2022-02-05T20:45:01.216912412+0800]: ERROR:   [command] get node_ip value failed.
  [2022-02-05T20:45:01.426682858+0800]: ERROR:   [command] get node_port value failed.


ACCESS Summary:
  [ingress] curl --insecure -H 'Host:kubernetes-dashboard.cluster.local' https://nodeIP:nodePort
  [Token] eyJhbGciOiJSUzI1NiIsImtpZCI6IjdqdUdvMWtyQUpKQUxaVjlpbnkyRFI0eUNyU19mT1BsQndLb2ttNHBmRFkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi1zYS10b2tlbi1memNkNCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi1zYSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImQzM2Y3ZjIwLWYzZjYtNGRmYy05YTYwLWY1ZTViNWU1YzNhMyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi1zYSJ9.HDrhJWtxznrXIyLAlhE49gmVudWKYfsth-Y3pLBUkVZIc60KudGmaRQYZQxtKLrKF7_TPW4yytUYSr9NOCFdqkM_Sdp3y-Iht_ZHv4EMTnuj4vKtD3xzc_EEOldP6Ub1EnHAPMUocJ1BdHCMiSSsiP9IBTxhl8b6e6sIo-yLU4haBk-I2kXd1NwkKP78H40g2fD1HsKAD4V4Fiz60vur5oSBP0yWV2jJyEpLaojGyIshtjutv0xE7Q8D1Ghq9uHSVGoyquyVi5HVYPQ5SyCB4VfYWmOpprcyW-3JsAZj5XhlT-FhTOBVKd8ty8dQ8pFbk00aSPyUKBHrkfklgwbXxQ
  [ops] etcd backup directory: /var/lib/etcd/backups



  See detailed log >>> /tmp/kainstall.YWbrRUgrDO/kainstall.log


Here is the error log:
kainstall.log

I am on Arch Linux running VirtualBox, with three Debian VMs: master (192.168.2.100), node1 (192.168.2.101), node2 (192.168.2.102). The master has 4 GB of memory, the others 2 GB each, and they can all ping each other.

Installing the default latest k8s version 1.25.2 fails: kubeadm init failed.

Detailed log:
tail -n 1000 /tmp/kainstall.spQxQRt7s8/kainstall.log

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
[2022-11-25T09:52:13.163847100+0800]: ERROR: [kubeadm init] 172.30.183.50: kubeadm init failed.
systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 11-cgroup.conf
Active: active (running) since Fri 2022-11-25 09:48:13 CST; 17min ago
Docs: https://kubernetes.io/docs/
Process: 5586 ExecStartPre=/bin/bash -c /bin/mkdir -p /sys/fs/cgroup/{cpuset,memory,hugetlb,systemd,pids,"cpu,cpuacct"}/{system,kube,kubepods}.slice||: (code=exited, status=0/SUCCESS)
Main PID: 5590 (kubelet)
Memory: 40.1M
CGroup: /kube.slice/kubelet.service
└─5590 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/conf...

Nov 25 10:06:04 k8s-master-node1 kubelet[5590]: E1125 10:06:04.884308 5590 pod_workers.go:965] "Error syncing pod, skipping" err="failed to "CreatePodSandb...be-system(
Nov 25 10:06:04 k8s-master-node1 kubelet[5590]: E1125 10:06:04.907265 5590 kubelet.go:2448] "Error getting node" err="node "k8s-master-node1" not found"
Nov 25 10:06:05 k8s-master-node1 kubelet[5590]: E1125 10:06:05.007627 5590 kubelet.go:2448] "Error getting node" err="node "k8s-master-node1" not found"
Nov 25 10:06:05 k8s-master-node1 kubelet[5590]: E1125 10:06:05.108374 5590 kubelet.go:2448] "Error getting node" err="node "k8s-master-node1" not found"
Nov 25 10:06:05 k8s-master-node1 kubelet[5590]: E1125 10:06:05.208919 5590 kubelet.go:2448] "Error getting node" err="node "k8s-master-node1" not found"
Nov 25 10:06:05 k8s-master-node1 kubelet[5590]: E1125 10:06:05.309319 5590 kubelet.go:2448] "Error getting node" err="node "k8s-master-node1" not found"
Nov 25 10:06:05 k8s-master-node1 kubelet[5590]: E1125 10:06:05.409774 5590 kubelet.go:2448] "Error getting node" err="node "k8s-master-node1" not found"
Nov 25 10:06:05 k8s-master-node1 kubelet[5590]: E1125 10:06:05.510834 5590 kubelet.go:2448] "Error getting node" err="node "k8s-master-node1" not found"
Nov 25 10:06:05 k8s-master-node1 kubelet[5590]: E1125 10:06:05.611591 5590 kubelet.go:2448] "Error getting node" err="node "k8s-master-node1" not found"
Nov 25 10:06:05 k8s-master-node1 kubelet[5590]: E1125 10:06:05.712011 5590 kubelet.go:2448] "Error getting node" err="node "k8s-master-node1" not found"
Please help check whether version 1.25 cannot be installed.

metrics-server.yml failed.

ERROR Summary:
[2021-05-03T12:45:52.740372494+0800]: ERROR: [download] metrics-server.yml failed.
[2021-05-03T12:46:25.101696686+0800]: ERROR: [apply] add /tmp/kainstall-offline-file//manifests/metrics-server.yml failed.

How can this be resolved? I don't know which machine had the problem.

Script bug when using cilium as the network plugin

[[ "${KUBE_NETWORK:-}" == "cilium" ]] && check::kernel 4.9.17
After this is passed to the check::kernel function, the version variable's value becomes 4197, so when the OS kernel version is something like 4.18.0-xxx the check fails, even though the kernel version is actually sufficient.
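A hedged sketch of a dotted-version comparison that does not trip over kernels like 4.18.0-xxx; the function name mirrors the script's check::kernel, but the body is only an illustration, not the script's actual implementation:

# Compare dotted kernel versions with sort -V instead of stripping the dots.
check::kernel() {
  local required="$1"
  local current
  current="$(uname -r | cut -d- -f1)"   # e.g. 4.18.0-348.el8 -> 4.18.0
  if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "kernel ${current} >= ${required}: ok"
  else
    echo "kernel ${current} < ${required}: not supported"
    return 1
  fi
}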

kubeadm init failed

Command executed

bash kainstall-centos.sh init \
  --master 192.168.0.59 \
  --worker 192.168.0.59 \
  --user root \
  --password P@ssw0rd123 \
  --version 1.22.3 \
  --10years \
  --offline-file 1.22.3_centos7.tgz

The last error in the log

[2021-11-08T09:06:55.764277829+0800]: EXEC:    [command] sshpass -p "zzzzzz" ssh -o ConnectTimeout=600 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null root@192.168.0.59 -p 22 bash -c 'kubeadm init --config=/etc/kubernetes/kubeadmcfg.yaml --upload-certs'
Warning: Permanently added '192.168.0.59' (ECDSA) to the list of known hosts.
invalid or incomplete external CA: failure loading key for apiserver: couldn't load the private key file /etc/kubernetes/pki/apiserver.key: open /etc/kubernetes/pki/apiserver.key: no such file or directory
To see the stack trace of this error execute with --v=5 or higher
[2021-11-08T09:06:56.107721125+0800]: ERROR:   [kubeadm init] 192.168.0.59: kubeadm init failed.

install docker failed!

ERROR: [install] install docker on 192.168.0.201 failed.
Is there a way to view the logs to determine the detailed cause?
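The script prints the path of a per-run log at the end of its output (the "See detailed log >>> /tmp/kainstall.XXXX/kainstall.log" lines elsewhere on this page); a hedged sketch for inspecting the most recent one on the machine that ran the script:

# Show the tail of the newest kainstall run log.
tail -n 200 "$(ls -t /tmp/kainstall.*/kainstall.log 2>/dev/null | head -n 1)"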

kubeadm init failed error

Running kainstall.sh init --master 192.168.2.52 --user root --password psd@123 --port 22 --version 1.19.7 reports kubeadm init failed; the log shows the following error:
k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod : Get "https://apiserver.cluster.local:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master-node1&limit=500&resourceVersion=0": dial tcp 192.168.2.52:6443: connect: connection refused
Could someone advise on this issue?
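The "connection refused" on 192.168.2.52:6443 usually means the kube-apiserver never came up or crashed; a hedged first-pass check on the master, along the lines of the kubeadm troubleshooting hints quoted earlier on this page (docker is assumed as the runtime for this 1.19.x install):

# Check kubelet health and whether the apiserver container started at all.
systemctl status kubelet --no-pager
journalctl -u kubelet --no-pager | tail -n 50
docker ps -a | grep kube-apiserver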
