
Breeze

  • Deploy a production-ready Kubernetes cluster with a graphical interface


English | 中文

Note: Branches may be in an unstable or even broken state during development. Please use releases instead of those branches in order to get stable binaries.

Refer to User Guide for more details on how to use Breeze.

Breeze

Project Breeze is an open-source, trusted solution that lets you create Kubernetes clusters on your internal, secure cloud network through a graphical user interface. As a cloud-native installer, Breeze is listed in the CNCF Cloud Native Interactive Landscape.


Features

  • Easy to run: Breeze bundles all the resources you need, such as Kubernetes component images and the Ansible playbooks for deploying Kubernetes clusters, into a single Docker image (wise2c/playbook). It also works as a local RHEL/CentOS yum and Ubuntu apt repository server. All you need to run Breeze is a Linux server with docker and docker-compose installed.

  • Simplified cluster deployment: With a few simple commands you can get Breeze running, then finish the rest of the deployment through the graphical interface.

  • Offline deployment: Once the five images (playbook, yum-repo/apt-source, pagoda, deploy-ui) have been loaded on the deploy server, Kubernetes clusters can be set up without internet access. Breeze acts as a yum/apt repository server, deploys a local Harbor registry, and uses kubeadm to set up the clusters. All Docker images are then pulled from the local Harbor registry.

  • Multi-cluster support: Breeze can deploy multiple Kubernetes clusters.

  • High-availability architecture: With Breeze, you can set up Kubernetes clusters with three master servers and three etcd servers, combined with HAProxy and Keepalived. All worker nodes use the virtual floating IP address to communicate with the master servers.
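The HA setup described above can be sketched as a minimal Keepalived configuration. This is illustrative only: the interface name, router ID, priority, and virtual IP are placeholder assumptions, not values Breeze generates.

```
# keepalived.conf -- hypothetical sketch of the VRRP setup Breeze's HA
# mode relies on; all values below are placeholders.
vrrp_instance K8S_MASTER {
    state MASTER              # BACKUP on the other two masters
    interface eth0            # assumed NIC name
    virtual_router_id 51
    priority 100              # lower priority on the BACKUP nodes
    advert_int 1
    virtual_ipaddress {
        192.168.1.200         # the floating IP that workers connect to
    }
}
```

HAProxy running on the same master hosts would then forward port 6443 on the floating IP to the three kube-apiserver instances, so a master failure only moves the VRRP address rather than breaking worker connectivity.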

Architecture


Components

  • breeze: Ansible playbooks for deploying docker, Harbor, HAProxy + Keepalived, etcd, and Kubernetes.

  • yum-repo: RHEL/CentOS yum repository for docker, docker-compose, kubelet, kubectl, kubeadm, kubernetes-cni, etc.

  • apt-source: Ubuntu apt source repository for docker, docker-compose, kubelet, kubectl, kubeadm, kubernetes-cni, etc.

  • deploy-ui: Graphical user interface.

  • pagoda: Server that offers the API for operating the Ansible playbooks.

  • kubeadm-version: Gets the Kubernetes component image version list via the `kubeadm config` command.

Install & Run

System requirements:

Deploy server: docker 1.13.1+ and docker-compose 1.12.0+.

Kubernetes cluster servers: RHEL/CentOS/Rocky Linux/AlmaLinux/Oracle Linux 8.4+, Ubuntu 20.04/22.04 LTS, or openEuler 22.03 LTS is required; a minimal installation is recommended.
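To illustrate what running Breeze on the deploy server involves, the setup could look roughly like the docker-compose fragment below. This is a hypothetical sketch: the service names, ports, and untagged images are placeholder assumptions, and the authoritative compose file ships with Breeze itself (refer to the User Guide).

```yaml
# docker-compose.yml -- hypothetical sketch of the Breeze deploy server;
# ports and image tags are placeholders, not the project's published file.
version: "2"
services:
  deploy-ui:
    image: wise2c/deploy-ui        # graphical user interface
    ports:
      - "88:80"                    # assumed UI port
  pagoda:
    image: wise2c/pagoda           # API server driving the playbooks
  playbook:
    image: wise2c/playbook         # bundled images + Ansible playbooks
  yum-repo:
    image: wise2c/yum-repo         # local package repository
    ports:
      - "2009:2009"                # assumed repository port
```

With a file like this in place, `docker-compose up -d` would start the stack and the rest of the deployment proceeds in the web UI.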

Refer to User Guide for more details on how to use Breeze.

Community

  • Slack: Join the Breeze community for discussion and questions: Breeze Slack, channel #general

License

Breeze is available under the Apache 2.0 license.

breeze's People

Contributors

alanpeng, andyyoung01, asxe, hymian, shonge, wise2ck8s


breeze's Issues

read file dir inherent property error

[GIN] 2018/10/24 - 01:56:37 | 400 | 3.686223ms | 118.26.3.3 | PUT /v1/clusters/b304f020-ec8d-4a3e-9aa9-c8e0c2100211/deployment
W1024 01:56:42.054962 1 stat.go:57] read file dir inherent property error: open /workspace/docker-p

etcd

Oct 11 2018 11:19:10: [etcd] [192.168.1.201] task: load etcd image - failed, message: Failed to import docker-py - No module named 'requests.packages.urllib3'. Try pip install docker-py
Oct 11 2018 11:19:10: [etcd] [192.168.1.202] task: load etcd image - failed, message: Failed to import docker-py - No module named 'requests.packages.urllib3'. Try pip install docker-py
Oct 11 2018 11:19:10: [etcd] [all] task: ending - failed, message:
Oct 11 2018 11:19:10: [etcd] [192.168.1.203] task: load etcd image - failed, message: Failed to import docker-py - No module named 'requests.packages.urllib3'. Try pip install docker-py

Installation failed

Server configuration:
192.168.1.249 Harbor
192.168.1.250 Master
192.168.1.100 Node
192.168.1.248 Work (runs Breeze)
All machines run CentOS 7.5 Minimal (https://mirrors.aliyun.com/centos/7.5.1804/isos/x86_64/CentOS-7-x86_64-Minimal-1804.iso).
All steps were carried out strictly following the MD instructions.
The Work machine has Docker 1.13.1 installed.
Of the four components, Docker/Etcd/Registry installed successfully and Kubernetes failed.
The Harbor server's public repository already holds the eight images, including library/kube-controller-manager.
Log:
Oct 14 2018 16:09:48: [docker] [192.168.2.249] task: Gathering Facts - ok, message:
Oct 14 2018 16:09:49: [docker] [192.168.2.100] task: set hostname - ok, message:
Oct 14 2018 16:09:49: [docker] [192.168.2.249] task: set hostname - ok, message:
Oct 14 2018 16:09:50: [docker] [192.168.2.100] task: get seed ip - ok, message:
Oct 14 2018 16:09:48: [docker] [192.168.2.250] task: Gathering Facts - ok, message:
Oct 14 2018 16:09:50: [docker] [192.168.2.250] task: get seed ip - ok, message:
Oct 14 2018 16:09:49: [docker] [192.168.2.250] task: set hostname - ok, message:
Oct 14 2018 16:09:50: [docker] [192.168.2.249] task: get seed ip - ok, message:
Oct 14 2018 16:09:48: [docker] [192.168.2.100] task: Gathering Facts - ok, message:
Oct 14 2018 16:09:50: [docker] [192.168.2.249] task: add seed to /etc/hosts - ok, message: Block inserted
Oct 14 2018 16:09:53: [docker] [192.168.2.250] task: add to /etc/hosts - ok, message: All items completed
Oct 14 2018 16:09:52: [docker] [192.168.2.100] task: add to /etc/hosts - ok, message: All items completed
Oct 14 2018 16:09:53: [docker] [192.168.2.250] task: disabled selinux - ok, message: Config SELinux state changed from 'enforcing' to 'disabled'
Oct 14 2018 16:09:52: [docker] [192.168.2.249] task: add to /etc/hosts - ok, message: All items completed
Oct 14 2018 16:09:54: [docker] [192.168.2.100] task: start firewalld - ok, message:
Oct 14 2018 16:09:53: [docker] [192.168.2.100] task: disabled selinux - ok, message: Config SELinux state changed from 'enforcing' to 'disabled'
Oct 14 2018 16:09:54: [docker] [192.168.2.250] task: start firewalld - ok, message:
Oct 14 2018 16:09:50: [docker] [192.168.2.100] task: add seed to /etc/hosts - ok, message: Block inserted
Oct 14 2018 16:09:50: [docker] [192.168.2.250] task: add seed to /etc/hosts - ok, message: Block inserted
Oct 14 2018 16:09:53: [docker] [192.168.2.249] task: disabled selinux - ok, message: Config SELinux state changed from 'enforcing' to 'disabled'
Oct 14 2018 16:09:56: [docker] [192.168.2.249] task: config firewalld - ok, message:
Oct 14 2018 16:09:56: [docker] [192.168.2.100] task: config firewalld - ok, message:
Oct 14 2018 16:09:57: [docker] [192.168.2.100] task: distribute wise2c repo - ok, message: All items completed
Oct 14 2018 16:09:58: [docker] [192.168.2.100] task: distribute ipvs bootload file - ok, message: All items completed
Oct 14 2018 16:09:56: [docker] [192.168.2.250] task: config firewalld - ok, message:
Oct 14 2018 16:09:54: [docker] [192.168.2.249] task: start firewalld - ok, message:
Oct 14 2018 16:09:58: [docker] [192.168.2.250] task: distribute ipvs bootload file - ok, message: All items completed
Oct 14 2018 16:09:58: [docker] [192.168.2.249] task: distribute ipvs bootload file - ok, message: All items completed
Oct 14 2018 16:09:57: [docker] [192.168.2.250] task: distribute wise2c repo - ok, message: All items completed
Oct 14 2018 16:09:57: [docker] [192.168.2.249] task: distribute wise2c repo - ok, message: All items completed
Oct 14 2018 16:14:32: [docker] [192.168.2.250] task: install docker - ok, message: All items completed
Oct 14 2018 16:14:33: [docker] [192.168.2.100] task: install docker - ok, message: All items completed
Oct 14 2018 16:14:33: [docker] [192.168.2.250] task: distribute chrony server config - skipped, message: All items completed
Oct 14 2018 16:14:33: [docker] [192.168.2.100] task: distribute chrony server config - ok, message: All items completed
Oct 14 2018 16:14:33: [docker] [192.168.2.249] task: install docker - ok, message: All items completed
Oct 14 2018 16:14:34: [docker] [192.168.2.249] task: distribute chrony client config - ok, message: All items completed
Oct 14 2018 16:14:34: [docker] [192.168.2.100] task: start chrony - ok, message:
Oct 14 2018 16:14:35: [docker] [192.168.2.250] task: check docker - ok, message:
Oct 14 2018 16:14:33: [docker] [192.168.2.100] task: distribute chrony client config - skipped, message: All items completed
Oct 14 2018 16:14:33: [docker] [192.168.2.249] task: distribute chrony server config - skipped, message: All items completed
Oct 14 2018 16:14:34: [docker] [192.168.2.250] task: distribute chrony client config - ok, message: All items completed
Oct 14 2018 16:14:34: [docker] [192.168.2.250] task: start chrony - ok, message:
Oct 14 2018 16:14:35: [docker] [192.168.2.249] task: check docker - ok, message:
Oct 14 2018 16:14:37: [docker] [192.168.2.100] task: clear docker config - ok, message: All items completed
Oct 14 2018 16:14:38: [docker] [192.168.2.249] task: clear docker config - ok, message: All items completed
Oct 14 2018 16:14:35: [docker] [192.168.2.100] task: check docker - ok, message:
Oct 14 2018 16:14:38: [docker] [192.168.2.100] task: distribute docker config - ok, message: All items completed
Oct 14 2018 16:14:38: [docker] [192.168.2.250] task: clear docker config - ok, message: All items completed
Oct 14 2018 16:14:34: [docker] [192.168.2.249] task: start chrony - ok, message:
Oct 14 2018 16:14:38: [docker] [192.168.2.250] task: distribute docker config - ok, message: All items completed
Oct 14 2018 16:14:38: [docker] [192.168.2.249] task: distribute docker config - ok, message: All items completed
Oct 14 2018 16:14:44: [docker] [192.168.2.250] task: reload & restart docker - ok, message:
Oct 14 2018 16:14:44: [docker] [192.168.2.249] task: reload & restart docker - ok, message:
Oct 14 2018 16:14:44: [docker] [192.168.2.100] task: reload & restart docker - ok, message:
Oct 14 2018 16:14:45: [docker] [192.168.2.100] task: set sysctl - ok, message: All items completed
Oct 14 2018 16:14:45: [docker] [192.168.2.249] task: set sysctl - ok, message: All items completed
Oct 14 2018 16:14:47: [registry] [192.168.2.249] task: Gathering Facts - ok, message:
Oct 14 2018 16:14:45: [docker] [192.168.2.250] task: set sysctl - ok, message: All items completed
Oct 14 2018 16:14:48: [registry] [192.168.2.249] task: make registry dir - ok, message: All items completed
Oct 14 2018 16:14:45: [docker] [all] task: ending - ok, message:
Oct 14 2018 16:15:46: [registry] [192.168.2.249] task: unarchive harbor - ok, message:
Oct 14 2018 16:15:48: [registry] [192.168.2.249] task: generate registry config - ok, message: All items completed
Oct 14 2018 16:18:20: [registry] [192.168.2.249] task: launch registry - ok, message:
Oct 14 2018 16:18:20: [registry] [all] task: ending - ok, message:
Oct 14 2018 16:18:23: [etcd] [192.168.2.250] task: Gathering Facts - ok, message:
Oct 14 2018 16:18:23: [etcd] [192.168.2.250] task: make etcd dir - ok, message: All items completed
Oct 14 2018 16:18:27: [etcd] [192.168.2.250] task: copy etcd image - ok, message: All items completed
Oct 14 2018 16:18:45: [etcd] [192.168.2.250] task: load etcd image - ok, message:
Oct 14 2018 16:18:47: [etcd] [192.168.2.250] task: run etcd - ok, message:
Oct 14 2018 16:18:47: [etcd] [all] task: ending - ok, message:
Oct 14 2018 16:18:50: [kubernetes] [192.168.2.250] task: Gathering Facts - ok, message:
Oct 14 2018 16:18:50: [kubernetes] [192.168.2.100] task: Gathering Facts - ok, message:
Oct 14 2018 16:18:51: [kubernetes] [192.168.2.100] task: make k8s master dir - ok, message: All items completed
Oct 14 2018 16:18:51: [kubernetes] [192.168.2.100] task: check kubernetes - ok, message:
Oct 14 2018 16:18:52: [kubernetes] [192.168.2.100] task: remove swapfile from /etc/fstab - ok, message:
Oct 14 2018 16:18:52: [kubernetes] [192.168.2.250] task: disable swap - ok, message:
Oct 14 2018 16:18:52: [kubernetes] [192.168.2.250] task: remove swapfile from /etc/fstab - ok, message:
Oct 14 2018 16:18:51: [kubernetes] [192.168.2.250] task: check kubernetes - ok, message:
Oct 14 2018 16:18:51: [kubernetes] [192.168.2.250] task: make k8s master dir - ok, message: All items completed
Oct 14 2018 16:18:52: [kubernetes] [192.168.2.100] task: disable swap - ok, message:
Oct 14 2018 16:19:17: [kubernetes] [192.168.2.250] task: install kubernetes components - ok, message: All items completed
Oct 14 2018 16:19:18: [kubernetes] [192.168.2.250] task: distribute kubelet config - ok, message: All items completed
Oct 14 2018 16:19:19: [kubernetes] [192.168.2.100] task: distribute kubelet config - ok, message: All items completed
Oct 14 2018 16:19:17: [kubernetes] [192.168.2.100] task: install kubernetes components - ok, message: All items completed
Oct 14 2018 16:19:20: [kubernetes] [192.168.2.100] task: reload & enable kubelet - ok, message:
Oct 14 2018 16:19:21: [kubernetes] [192.168.2.250] task: reload & enable kubelet - ok, message:
Oct 14 2018 16:19:22: [kubernetes] [192.168.2.100] task: set sysctl - ok, message: All items completed
Oct 14 2018 16:19:22: [kubernetes] [192.168.2.250] task: set sysctl - ok, message: All items completed
Oct 14 2018 16:19:22: [kubernetes] [192.168.2.100] task: setup master - skipped, message: All items completed
Oct 14 2018 16:19:22: [kubernetes] [192.168.2.250] task: setup master - ok, message:
Oct 14 2018 16:19:33: [kubernetes] [192.168.2.250] task: copy k8s images - ok, message: All items completed
Oct 14 2018 16:20:25: [kubernetes] [192.168.2.250] task: load k8s images - ok, message: All items completed
Oct 14 2018 16:20:26: [kubernetes] [192.168.2.250] task: docker login - ok, message:
Oct 14 2018 16:20:29: [kubernetes] [192.168.2.250] task: tag images - ok, message: All items completed
Oct 14 2018 16:21:17: [kubernetes] [192.168.2.250] task: push images - ok, message: All items completed
Oct 14 2018 16:21:17: [kubernetes] [192.168.2.250] task: pull k8s pause images - ok, message: All items completed
Oct 14 2018 16:21:18: [kubernetes] [192.168.2.250] task: tag k8s pause images - ok, message: All items completed
Oct 14 2018 16:21:20: [kubernetes] [192.168.2.250] task: generate kubeadm config - ok, message: All items completed
Oct 14 2018 16:21:24: [kubernetes] [192.168.2.250] task: copy crt & key - ok, message: All items completed
Oct 14 2018 16:26:31: [kubernetes] [192.168.2.250] task: setup - failed, message: non-zero return code
Oct 14 2018 16:26:31: [kubernetes] [192.168.2.100] task: setup node - ok, message:
Oct 14 2018 16:26:44: [kubernetes] [192.168.2.100] task: pull k8s pause images - ok, message: All items completed
Oct 14 2018 16:26:46: [kubernetes] [192.168.2.100] task: copy k8s admin.conf - failed, message: All items completed
Oct 14 2018 16:26:46: [kubernetes] [all] task: ending - failed, message:
Oct 14 2018 16:26:45: [kubernetes] [192.168.2.100] task: tag k8s pause images - ok, message: All items completed
error.log

1.13.0: Kubernetes installation failed

Based on CentOS 7.5 1804 Minimal, with the kernel and related components upgraded; the final installation environment is as follows:
3 masters, 2 minion nodes
Linux localhost.localdomain 4.19.6-1.el7.elrepo.x86_64 #1 SMP Sat Dec 1 11:58:18 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
CentOS Linux release 7.6.1810 (Core)

All the other components installed successfully; the final Kubernetes step failed. Log:
Dec 05 2018 13:43:23: [kubernetes] [192.168.99.122] task: Gathering Facts - ok, message:
Dec 05 2018 13:43:23: [kubernetes] [192.168.99.111] task: Gathering Facts - ok, message:
Dec 05 2018 13:43:23: [kubernetes] [192.168.99.121] task: Gathering Facts - ok, message:
Dec 05 2018 13:43:24: [kubernetes] [192.168.99.112] task: Gathering Facts - ok, message:
Dec 05 2018 13:43:24: [kubernetes] [192.168.99.113] task: Gathering Facts - ok, message:
Dec 05 2018 13:43:25: [kubernetes] [192.168.99.111] task: make k8s master dir - ok, message: All items completed
Dec 05 2018 13:43:25: [kubernetes] [192.168.99.122] task: make k8s master dir - ok, message: All items completed
Dec 05 2018 13:43:25: [kubernetes] [192.168.99.112] task: make k8s master dir - ok, message: All items completed
Dec 05 2018 13:43:26: [kubernetes] [192.168.99.113] task: make k8s master dir - ok, message: All items completed
Dec 05 2018 13:43:26: [kubernetes] [192.168.99.111] task: check kubernetes - ok, message:
Dec 05 2018 13:43:26: [kubernetes] [192.168.99.113] task: check kubernetes - ok, message:
Dec 05 2018 13:43:26: [kubernetes] [192.168.99.121] task: check kubernetes - ok, message:
Dec 05 2018 13:43:26: [kubernetes] [192.168.99.121] task: make k8s master dir - ok, message: All items completed
Dec 05 2018 13:43:26: [kubernetes] [192.168.99.112] task: check kubernetes - ok, message:
Dec 05 2018 13:43:27: [kubernetes] [192.168.99.122] task: check kubernetes - ok, message:
Dec 05 2018 13:43:28: [kubernetes] [192.168.99.121] task: remove swapfile from /etc/fstab - ok, message:
Dec 05 2018 13:43:28: [kubernetes] [192.168.99.112] task: remove swapfile from /etc/fstab - ok, message:
Dec 05 2018 13:43:28: [kubernetes] [192.168.99.122] task: remove swapfile from /etc/fstab - ok, message:
Dec 05 2018 13:43:28: [kubernetes] [192.168.99.111] task: remove swapfile from /etc/fstab - ok, message:
Dec 05 2018 13:43:28: [kubernetes] [192.168.99.113] task: remove swapfile from /etc/fstab - ok, message:
Dec 05 2018 13:43:29: [kubernetes] [192.168.99.121] task: disable swap - ok, message:
Dec 05 2018 13:43:29: [kubernetes] [192.168.99.122] task: disable swap - ok, message:
Dec 05 2018 13:43:29: [kubernetes] [192.168.99.111] task: disable swap - ok, message:
Dec 05 2018 13:43:29: [kubernetes] [192.168.99.112] task: disable swap - ok, message:
Dec 05 2018 13:43:29: [kubernetes] [192.168.99.113] task: disable swap - ok, message:
Dec 05 2018 13:43:31: [kubernetes] [192.168.99.111] task: install kubernetes components - ok, message: All items completed
Dec 05 2018 13:43:31: [kubernetes] [192.168.99.113] task: install kubernetes components - ok, message: All items completed
Dec 05 2018 13:43:31: [kubernetes] [192.168.99.121] task: install kubernetes components - ok, message: All items completed
Dec 05 2018 13:43:31: [kubernetes] [192.168.99.112] task: install kubernetes components - ok, message: All items completed
Dec 05 2018 13:43:31: [kubernetes] [192.168.99.122] task: install kubernetes components - ok, message: All items completed
Dec 05 2018 13:43:35: [kubernetes] [192.168.99.112] task: unarchive cfssl tool - ok, message:
Dec 05 2018 13:43:35: [kubernetes] [192.168.99.111] task: unarchive cfssl tool - ok, message:
Dec 05 2018 13:43:35: [kubernetes] [192.168.99.122] task: unarchive cfssl tool - ok, message:
Dec 05 2018 13:43:35: [kubernetes] [192.168.99.113] task: unarchive cfssl tool - ok, message:
Dec 05 2018 13:43:35: [kubernetes] [192.168.99.121] task: unarchive cfssl tool - ok, message:
Dec 05 2018 13:43:36: [kubernetes] [192.168.99.111] task: distribute kubelet config - ok, message: All items completed
Dec 05 2018 13:43:36: [kubernetes] [192.168.99.112] task: distribute kubelet config - ok, message: All items completed
Dec 05 2018 13:43:36: [kubernetes] [192.168.99.113] task: distribute kubelet config - ok, message: All items completed
Dec 05 2018 13:43:36: [kubernetes] [192.168.99.121] task: distribute kubelet config - ok, message: All items completed
Dec 05 2018 13:43:36: [kubernetes] [192.168.99.122] task: distribute kubelet config - ok, message: All items completed
Dec 05 2018 13:43:38: [kubernetes] [192.168.99.112] task: reload & enable kubelet - ok, message:
Dec 05 2018 13:43:38: [kubernetes] [192.168.99.122] task: reload & enable kubelet - ok, message:
Dec 05 2018 13:43:38: [kubernetes] [192.168.99.111] task: reload & enable kubelet - ok, message:
Dec 05 2018 13:43:38: [kubernetes] [192.168.99.121] task: reload & enable kubelet - ok, message:
Dec 05 2018 13:43:38: [kubernetes] [192.168.99.113] task: reload & enable kubelet - ok, message:
Dec 05 2018 13:43:40: [kubernetes] [192.168.99.113] task: set sysctl - ok, message: All items completed
Dec 05 2018 13:43:40: [kubernetes] [192.168.99.111] task: setup master - ok, message:
Dec 05 2018 13:43:40: [kubernetes] [192.168.99.112] task: setup master - ok, message:
Dec 05 2018 13:43:40: [kubernetes] [192.168.99.113] task: setup master - ok, message:
Dec 05 2018 13:43:40: [kubernetes] [192.168.99.121] task: setup master - skipped, message:
Dec 05 2018 13:43:40: [kubernetes] [192.168.99.122] task: setup master - skipped, message:
Dec 05 2018 13:43:40: [kubernetes] [192.168.99.122] task: set sysctl - ok, message: All items completed
Dec 05 2018 13:43:40: [kubernetes] [192.168.99.112] task: set sysctl - ok, message: All items completed
Dec 05 2018 13:43:40: [kubernetes] [192.168.99.121] task: set sysctl - ok, message: All items completed
Dec 05 2018 13:43:40: [kubernetes] [192.168.99.111] task: set sysctl - ok, message: All items completed
Dec 05 2018 13:43:42: [kubernetes] [192.168.99.111] task: copy k8s images - ok, message: All items completed
Dec 05 2018 13:44:12: [kubernetes] [192.168.99.111] task: load k8s images - ok, message: All items completed
Dec 05 2018 13:44:13: [kubernetes] [192.168.99.111] task: docker login - ok, message:
Dec 05 2018 13:44:15: [kubernetes] [192.168.99.111] task: tag images - ok, message: All items completed
Dec 05 2018 13:44:19: [kubernetes] [192.168.99.111] task: push images - ok, message: All items completed
Dec 05 2018 13:44:19: [kubernetes] [192.168.99.111] task: pull k8s pause images - ok, message: All items completed
Dec 05 2018 13:44:20: [kubernetes] [192.168.99.112] task: pull k8s pause images - ok, message: All items completed
Dec 05 2018 13:44:20: [kubernetes] [192.168.99.113] task: pull k8s pause images - ok, message: All items completed
Dec 05 2018 13:44:20: [kubernetes] [192.168.99.111] task: tag k8s pause images - ok, message: All items completed
Dec 05 2018 13:44:20: [kubernetes] [192.168.99.112] task: tag k8s pause images - ok, message: All items completed
Dec 05 2018 13:44:20: [kubernetes] [192.168.99.113] task: tag k8s pause images - ok, message: All items completed
Dec 05 2018 13:44:22: [kubernetes] [192.168.99.111] task: generate kubeadm config - ok, message: All items completed
Dec 05 2018 13:44:22: [kubernetes] [192.168.99.112] task: generate kubeadm config - ok, message: All items completed
Dec 05 2018 13:44:22: [kubernetes] [192.168.99.113] task: generate kubeadm config - ok, message: All items completed
Dec 05 2018 13:44:25: [kubernetes] [192.168.99.111] task: copy certificates JSON files - ok, message: All items completed
Dec 05 2018 13:44:26: [kubernetes] [192.168.99.112] task: copy certificates JSON files - ok, message: All items completed
Dec 05 2018 13:44:26: [kubernetes] [192.168.99.113] task: copy certificates JSON files - ok, message: All items completed
Dec 05 2018 13:44:27: [kubernetes] [192.168.99.111] task: copy certificates generation scripts - ok, message: All items completed
Dec 05 2018 13:44:27: [kubernetes] [192.168.99.112] task: copy certificates generation scripts - ok, message: All items completed
Dec 05 2018 13:44:27: [kubernetes] [192.168.99.113] task: copy certificates generation scripts - ok, message: All items completed
Dec 05 2018 13:44:29: [kubernetes] [192.168.99.111] task: generate other certificates - ok, message:
Dec 05 2018 13:44:32: [kubernetes] [192.168.99.111] task: fetch certificates - ok, message: All items completed
Dec 05 2018 13:44:37: [kubernetes] [192.168.99.111] task: copy certificates - ok, message: All items completed
Dec 05 2018 13:44:38: [kubernetes] [192.168.99.113] task: copy certificates - ok, message: All items completed
Dec 05 2018 13:44:38: [kubernetes] [192.168.99.112] task: copy certificates - ok, message: All items completed
Dec 05 2018 13:44:39: [kubernetes] [192.168.99.111] task: generate kube-apiserver certificate - ok, message:
Dec 05 2018 13:44:39: [kubernetes] [192.168.99.112] task: generate kube-apiserver certificate - ok, message:
Dec 05 2018 13:44:39: [kubernetes] [192.168.99.113] task: generate kube-apiserver certificate - ok, message:
Dec 05 2018 13:44:40: [kubernetes] [192.168.99.111] task: kubeadm init - failed, message: non-zero return code
Dec 05 2018 13:44:40: [kubernetes] [192.168.99.112] task: kubeadm init - failed, message: non-zero return code
Dec 05 2018 13:44:41: [kubernetes] [192.168.99.113] task: kubeadm init - failed, message: non-zero return code
Dec 05 2018 13:44:41: [kubernetes] [192.168.99.121] task: setup node - ok, message:
Dec 05 2018 13:44:41: [kubernetes] [192.168.99.122] task: setup node - ok, message:
Dec 05 2018 13:44:42: [kubernetes] [192.168.99.121] task: pull k8s pause images - ok, message: All items completed
Dec 05 2018 13:44:42: [kubernetes] [192.168.99.122] task: pull k8s pause images - ok, message: All items completed
Dec 05 2018 13:44:43: [kubernetes] [192.168.99.121] task: tag k8s pause images - ok, message: All items completed
Dec 05 2018 13:44:43: [kubernetes] [192.168.99.122] task: tag k8s pause images - ok, message: All items completed
Dec 05 2018 13:44:43: [kubernetes] [192.168.99.121] task: copy k8s admin.conf - failed, message: All items completed
Dec 05 2018 13:44:43: [kubernetes] [192.168.99.122] task: copy k8s admin.conf - failed, message: All items completed
Dec 05 2018 13:44:43: [kubernetes] [all] task: ending - failed, message:

1.11.2: Kubernetes installation failed

CentOS 7.5 Minimal, with only chrony additionally installed
181-182: masters
183-185: nodes
186: registry

Oct 14 2018 14:56:50: [kubernetes] [192.168.88.181] task: pull k8s pause images - ok, message: All items completed
Oct 14 2018 14:56:50: [kubernetes] [192.168.88.182] task: pull k8s pause images - ok, message: All items completed
Oct 14 2018 14:56:51: [kubernetes] [192.168.88.182] task: tag k8s pause images - ok, message: All items completed
Oct 14 2018 14:56:51: [kubernetes] [192.168.88.181] task: tag k8s pause images - ok, message: All items completed
Oct 14 2018 14:56:53: [kubernetes] [192.168.88.182] task: generate kubeadm config - ok, message: All items completed
Oct 14 2018 14:56:53: [kubernetes] [192.168.88.181] task: generate kubeadm config - ok, message: All items completed
Oct 14 2018 14:56:58: [kubernetes] [192.168.88.181] task: copy crt & key - ok, message: All items completed
Oct 14 2018 14:56:58: [kubernetes] [192.168.88.182] task: copy crt & key - ok, message: All items completed
Oct 14 2018 14:57:02: [kubernetes] [192.168.88.181] task: setup - failed, message: non-zero return code
Oct 14 2018 14:57:16: [kubernetes] [192.168.88.182] task: setup - failed, message: non-zero return code
Oct 14 2018 14:57:16: [kubernetes] [192.168.88.183] task: setup node - ok, message:
Oct 14 2018 14:57:16: [kubernetes] [192.168.88.185] task: setup node - ok, message:
Oct 14 2018 14:57:16: [kubernetes] [192.168.88.184] task: setup node - ok, message:
Oct 14 2018 14:57:17: [kubernetes] [192.168.88.183] task: pull k8s pause images - ok, message: All items completed
Oct 14 2018 14:57:17: [kubernetes] [192.168.88.184] task: pull k8s pause images - ok, message: All items completed
Oct 14 2018 14:57:17: [kubernetes] [192.168.88.185] task: pull k8s pause images - ok, message: All items completed
Oct 14 2018 14:57:18: [kubernetes] [192.168.88.184] task: tag k8s pause images - ok, message: All items completed
Oct 14 2018 14:57:18: [kubernetes] [192.168.88.184] task: copy k8s admin.conf - failed, message: All items completed
Oct 14 2018 14:57:18: [kubernetes] [192.168.88.183] task: copy k8s admin.conf - failed, message: All items completed
Oct 14 2018 14:57:18: [kubernetes] [192.168.88.183] task: tag k8s pause images - ok, message: All items completed
Oct 14 2018 14:57:18: [kubernetes] [192.168.88.185] task: tag k8s pause images - ok, message: All items completed
Oct 14 2018 14:57:18: [kubernetes] [192.168.88.185] task: copy k8s admin.conf - failed, message: All items completed
Oct 14 2018 14:57:18: [kubernetes] [all] task: ending - failed, message:


TASK [setup] *******************************************************************
fatal: [192.168.88.181]: FAILED! => {"changed": true, "cmd": "kubeadm init --config /var/tmp/wise2c/kubernetes/kubeadm.conf", "delta": "0:00:03.344911", "end": "2018-10-14 22:57:02.361991", "msg": "non-zero return code", "rc": 2, "start": "2018-10-14 22:56:59.017080", "stderr": "\t[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly\nI1014 22:56:59.673822 26982 kernel_validator.go:81] Validating kernel version\nI1014 22:56:59.673936 26982 kernel_validator.go:96] Validating kernel config\n[preflight] Some fatal errors occurred:\n\t[ERROR ImagePull]: failed to pull image [192.168.88.186/library/coredns:1.1.3]: exit status 1\n[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...", "stderr_lines": ["\t[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly", "I1014 22:56:59.673822 26982 kernel_validator.go:81] Validating kernel version", "I1014 22:56:59.673936 26982 kernel_validator.go:96] Validating kernel config", "[preflight] Some fatal errors occurred:", "\t[ERROR ImagePull]: failed to pull image [192.168.88.186/library/coredns:1.1.3]: exit status 1", "[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=..."], "stdout": "[init] using Kubernetes version: v1.11.2\n[preflight] running pre-flight checks\n[preflight/images] Pulling images required for setting up a Kubernetes cluster\n[preflight/images] This might take a minute or two, depending on the speed of your internet connection\n[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'", "stdout_lines": ["[init] using Kubernetes version: v1.11.2", "[preflight] running pre-flight checks", "[preflight/images] Pulling images required for setting up a Kubernetes cluster", "[preflight/images] This might take a minute or two, 
depending on the speed of your internet connection", "[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'"]}
fatal: [192.168.88.182]: FAILED! => {"changed": true, "cmd": "kubeadm init --config /var/tmp/wise2c/kubernetes/kubeadm.conf", "delta": "0:00:17.151342", "end": "2018-10-14 22:57:16.164721", "msg": "non-zero return code", "rc": 2, "start": "2018-10-14 22:56:59.013379", "stderr": "\t[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly\nI1014 22:56:59.086099 22227 kernel_validator.go:81] Validating kernel version\nI1014 22:56:59.086237 22227 kernel_validator.go:96] Validating kernel config\n[preflight] Some fatal errors occurred:\n\t[ERROR ImagePull]: failed to pull image [192.168.88.186/library/coredns:1.1.3]: exit status 1\n[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...", "stderr_lines": ["\t[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly", "I1014 22:56:59.086099 22227 kernel_validator.go:81] Validating kernel version", "I1014 22:56:59.086237 22227 kernel_validator.go:96] Validating kernel config", "[preflight] Some fatal errors occurred:", "\t[ERROR ImagePull]: failed to pull image [192.168.88.186/library/coredns:1.1.3]: exit status 1", "[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=..."], "stdout": "[init] using Kubernetes version: v1.11.2\n[preflight] running pre-flight checks\n[preflight/images] Pulling images required for setting up a Kubernetes cluster\n[preflight/images] This might take a minute or two, depending on the speed of your internet connection\n[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'", "stdout_lines": ["[init] using Kubernetes version: v1.11.2", "[preflight] running pre-flight checks", "[preflight/images] Pulling images required for setting up a Kubernetes cluster", "[preflight/images] This might take a minute or two, 
depending on the speed of your internet connection", "[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'"]}
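The preflight failure above is an image pull error: 192.168.88.186/library/coredns:1.1.3 does not exist in the local Harbor registry. A hedged sketch of manually mirroring the missing image into Harbor before rerunning the deployment (the upstream repository name coredns/coredns is an assumption; the Breeze playbook may source this image differently):

```shell
# Assumed upstream repo; adjust to wherever the image is actually available.
docker pull coredns/coredns:1.1.3
docker tag coredns/coredns:1.1.3 192.168.88.186/library/coredns:1.1.3
docker login 192.168.88.186            # log in with the Harbor credentials
docker push 192.168.88.186/library/coredns:1.1.3
```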
TASK [setup node] **************************************************************
included: /workspace/kubernetes-playbook/v1.11.2/node.ansible for 192.168.88.183, 192.168.88.184, 192.168.88.185
TASK [pull k8s pause images] ***************************************************
changed: [192.168.88.183] => (item={u'repo': u'gcr.io/google_containers', u'tag': 3.1, u'name': u'pause'})
changed: [192.168.88.184] => (item={u'repo': u'gcr.io/google_containers', u'tag': 3.1, u'name': u'pause'})
changed: [192.168.88.185] => (item={u'repo': u'gcr.io/google_containers', u'tag': 3.1, u'name': u'pause'})
TASK [tag k8s pause images] ****************************************************
changed: [192.168.88.184] => (item={u'repo': u'192.168.88.186/library', u'tag': 3.1, u'name': u'pause'})
changed: [192.168.88.183] => (item={u'repo': u'192.168.88.186/library', u'tag': 3.1, u'name': u'pause'})
changed: [192.168.88.185] => (item={u'repo': u'192.168.88.186/library', u'tag': 3.1, u'name': u'pause'})
TASK [copy k8s admin.conf] *****************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: /workspace/kubernetes-playbook/v1.11.2/file/admin.conf
failed: [192.168.88.184] (item={u'dest': u'/root/.kube/config', u'src': u'file/admin.conf'}) => {"changed": false, "item": {"dest": "/root/.kube/config", "src": "file/admin.conf"}, "msg": "Could not find or access 'file/admin.conf'\nSearched in:\n\t/workspace/kubernetes-playbook/v1.11.2/files/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.11.2/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.11.2/files/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.11.2/file/admin.conf"}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: /workspace/kubernetes-playbook/v1.11.2/file/admin.conf
failed: [192.168.88.183] (item={u'dest': u'/root/.kube/config', u'src': u'file/admin.conf'}) => {"changed": false, "item": {"dest": "/root/.kube/config", "src": "file/admin.conf"}, "msg": "Could not find or access 'file/admin.conf'\nSearched in:\n\t/workspace/kubernetes-playbook/v1.11.2/files/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.11.2/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.11.2/files/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.11.2/file/admin.conf"}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: /workspace/kubernetes-playbook/v1.11.2/file/admin.conf
failed: [192.168.88.185] (item={u'dest': u'/root/.kube/config', u'src': u'file/admin.conf'}) => {"changed": false, "item": {"dest": "/root/.kube/config", "src": "file/admin.conf"}, "msg": "Could not find or access 'file/admin.conf'\nSearched in:\n\t/workspace/kubernetes-playbook/v1.11.2/files/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.11.2/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.11.2/files/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.11.2/file/admin.conf"}
PLAY RECAP *********************************************************************
192.168.88.181 : ok=19 changed=8 unreachable=0 failed=1
192.168.88.182 : ok=14 changed=6 unreachable=0 failed=1
192.168.88.183 : ok=12 changed=4 unreachable=0 failed=1
192.168.88.184 : ok=12 changed=4 unreachable=0 failed=1
192.168.88.185 : ok=12 changed=4 unreachable=0 failed=1
I1014 14:57:18.718069 5 command.go:239] failed at a step
[GIN] 2018/10/14 - 14:57:19 | 200 | 1.004762609s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/14 - 14:57:19 | 200 | 1.01017254s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/14 - 14:57:19 | 200 | 1.00447124s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/14 - 14:57:19 | 200 | 1.012282531s | 127.0.0.1 | POST /v1/notify

V1.13.2.3: kubernetes init failed first; reinstalling then produced the errors below

254 Jan 31 2019 12:46:00: [kubernetes] [all] task: generate other certificates - starting, message:
255 Jan 31 2019 12:46:00: [kubernetes] [172.31.19.11] task: generate other certificates - failed, message: non-zero return code
256 Jan 31 2019 12:46:00: [kubernetes] [all] task: ending - failed, message:

The "kubeadm init - starting" step reports an error

199 Feb 13 2019 06:34:51: [kubernetes] [all] task: kubeadm init - starting, message:
200 Feb 13 2019 06:34:55: [kubernetes] [10.254.253.33] task: kubeadm init - failed, message: non-zero return code
201 Feb 13 2019 06:34:55: [kubernetes] [10.254.253.35] task: kubeadm init - failed, message: non-zero return code
202 Feb 13 2019 06:34:56: [kubernetes] [10.254.253.34] task: kubeadm init - failed, message: non-zero return code
203 Feb 13 2019 06:34:56: [kubernetes] [all] task: ending - failed, message:


TASK [kubeadm init] ************************************************************
fatal: [10.254.253.33]: FAILED! => {"changed": true, "cmd": "kubeadm init --config /var/tmp/wise2c/kubernetes/kubeadm.conf", "delta": "0:00:03.256467", "end": "2019-02-13 14:34:54.228830", "msg": "non-zero return code", "rc": 1, "start": "2019-02-13 14:34:50.972363", "stderr": "W0213 14:34:51.014410   17525 common.go:159] WARNING: overriding requested API server bind address: requested \"127.0.0.1\", actual \"10.254.253.33\"\n\t[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly\nerror execution phase preflight: [preflight] Some fatal errors occurred:\n\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-apiserver:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-apiserver:v1.13.3 not found\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-controller-manager:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-controller-manager:v1.13.3 not found\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-scheduler:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-scheduler:v1.13.3 not found\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-proxy:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-proxy:v1.13.3 not found\n, error: exit status 1\n[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`", "stderr_lines": ["W0213 14:34:51.014410   17525 common.go:159] WARNING: overriding requested API server bind address: requested \"127.0.0.1\", actual \"10.254.253.33\"", "\t[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly", "error execution phase preflight: [preflight] Some fatal errors 
occurred:", "\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-apiserver:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-apiserver:v1.13.3 not found", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-controller-manager:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-controller-manager:v1.13.3 not found", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-scheduler:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-scheduler:v1.13.3 not found", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-proxy:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-proxy:v1.13.3 not found", ", error: exit status 1", "[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`"], "stdout": "[init] Using Kubernetes version: v1.13.3\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'", "stdout_lines": ["[init] Using Kubernetes version: v1.13.3", "[preflight] Running pre-flight checks", "[preflight] Pulling images required for setting up a Kubernetes cluster", "[preflight] This might take a minute or two, depending on the speed of your internet connection", "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"]}
fatal: [10.254.253.35]: FAILED! => {"changed": true, "cmd": "kubeadm init --config /var/tmp/wise2c/kubernetes/kubeadm.conf", "delta": "0:00:03.697065", "end": "2019-02-13 14:34:54.769929", "msg": "non-zero return code", "rc": 1, "start": "2019-02-13 14:34:51.072864", "stderr": "W0213 14:34:51.109321   13744 common.go:159] WARNING: overriding requested API server bind address: requested \"127.0.0.1\", actual \"10.254.253.35\"\n\t[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly\nerror execution phase preflight: [preflight] Some fatal errors occurred:\n\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-apiserver:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-apiserver:v1.13.3 not found\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-controller-manager:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-controller-manager:v1.13.3 not found\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-scheduler:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-scheduler:v1.13.3 not found\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-proxy:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-proxy:v1.13.3 not found\n, error: exit status 1\n[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`", "stderr_lines": ["W0213 14:34:51.109321   13744 common.go:159] WARNING: overriding requested API server bind address: requested \"127.0.0.1\", actual \"10.254.253.35\"", "\t[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly", "error execution phase preflight: [preflight] Some fatal errors 
occurred:", "\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-apiserver:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-apiserver:v1.13.3 not found", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-controller-manager:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-controller-manager:v1.13.3 not found", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-scheduler:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-scheduler:v1.13.3 not found", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-proxy:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-proxy:v1.13.3 not found", ", error: exit status 1", "[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`"], "stdout": "[init] Using Kubernetes version: v1.13.3\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'", "stdout_lines": ["[init] Using Kubernetes version: v1.13.3", "[preflight] Running pre-flight checks", "[preflight] Pulling images required for setting up a Kubernetes cluster", "[preflight] This might take a minute or two, depending on the speed of your internet connection", "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"]}
fatal: [10.254.253.34]: FAILED! => {"changed": true, "cmd": "kubeadm init --config /var/tmp/wise2c/kubernetes/kubeadm.conf", "delta": "0:00:04.305117", "end": "2019-02-13 14:34:55.316278", "msg": "non-zero return code", "rc": 1, "start": "2019-02-13 14:34:51.011161", "stderr": "W0213 14:34:51.042027   13720 common.go:159] WARNING: overriding requested API server bind address: requested \"127.0.0.1\", actual \"10.254.253.34\"\n\t[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly\nerror execution phase preflight: [preflight] Some fatal errors occurred:\n\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-apiserver:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-apiserver:v1.13.3 not found\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-controller-manager:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-controller-manager:v1.13.3 not found\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-scheduler:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-scheduler:v1.13.3 not found\n, error: exit status 1\n\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-proxy:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-proxy:v1.13.3 not found\n, error: exit status 1\n[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`", "stderr_lines": ["W0213 14:34:51.042027   13720 common.go:159] WARNING: overriding requested API server bind address: requested \"127.0.0.1\", actual \"10.254.253.34\"", "\t[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly", "error execution phase preflight: [preflight] Some fatal errors 
occurred:", "\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-apiserver:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-apiserver:v1.13.3 not found", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-controller-manager:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-controller-manager:v1.13.3 not found", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-scheduler:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-scheduler:v1.13.3 not found", ", error: exit status 1", "\t[ERROR ImagePull]: failed to pull image 10.254.253.31/library/kube-proxy:v1.13.3: output: Error response from daemon: manifest for 10.254.253.31/library/kube-proxy:v1.13.3 not found", ", error: exit status 1", "[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`"], "stdout": "[init] Using Kubernetes version: v1.13.3\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'", "stdout_lines": ["[init] Using Kubernetes version: v1.13.3", "[preflight] Running pre-flight checks", "[preflight] Pulling images required for setting up a Kubernetes cluster", "[preflight] This might take a minute or two, depending on the speed of your internet connection", "[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"]}
PLAY RECAP *********************************************************************
10.254.253.33              : ok=25   changed=8    unreachable=0    failed=1
10.254.253.34              : ok=18   changed=4    unreachable=0    failed=1
10.254.253.35              : ok=18   changed=4    unreachable=0    failed=1
I0213 06:34:56.371922       1 command.go:239] failed at a step
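The four "manifest ... not found" errors above mean the kube-* images were never pushed to the local Harbor project. Before rerunning kubeadm it can help to confirm what the registry actually holds, e.g. via the Docker Registry v2 API (admin/Harbor12345 are Harbor's documented defaults; adjust the address and credentials to your setup):

```shell
REGISTRY=10.254.253.31
for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
  # An empty or error response means the registry step never pushed this image.
  curl -sk -u admin:Harbor12345 "https://${REGISTRY}/v2/library/${img}/tags/list"
  echo
done
```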

docker login failed

23 Oct 19 2018 10:29:05: [kubernetes] [192.168.2.195] task: docker login - failed, message: Logging into ******** for user ******** failed - 500 Server Error: Internal Server Error ("{"message":"unable to parse server address: parse https://********: invalid character " " in host name"}")
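The message `unable to parse server address: ... invalid character " " in host name` indicates the configured Harbor address contains stray whitespace (a space or newline). A small illustration of how to make hidden whitespace visible (the address below is made up):

```shell
REGISTRY='192.168.2.100 '       # hypothetical value with a trailing space
printf '[%s]\n' "$REGISTRY"     # brackets expose any stray whitespace
# prints: [192.168.2.100 ]
```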

Installing prometheus fails every time

Jan 08 2019 07:02:10: [prometheus] [all] task: Gathering Facts - starting, message:
2 Jan 08 2019 07:02:13: [prometheus] [192.168.2.200] task: Gathering Facts - ok, message:
3 Jan 08 2019 07:02:13: [prometheus] [all] task: make prometheus dir - starting, message:
4 Jan 08 2019 07:02:14: [prometheus] [192.168.2.200] task: make prometheus dir - ok, message: All items completed
5 Jan 08 2019 07:02:14: [prometheus] [all] task: unarchive prometheus - starting, message:
6 Jan 08 2019 07:02:31: [prometheus] [192.168.2.200] task: unarchive prometheus - ok, message:
7 Jan 08 2019 07:02:31: [prometheus] [all] task: copy prometheus operator images - starting, message:
8 Jan 08 2019 07:02:34: [prometheus] [192.168.2.200] task: copy prometheus operator images - ok, message: All items completed
9 Jan 08 2019 07:02:34: [prometheus] [all] task: copy prometheus operator deploy and reset script - starting, message:
10 Jan 08 2019 07:02:37: [prometheus] [192.168.2.200] task: copy prometheus operator deploy and reset script - ok, message: All items completed
11 Jan 08 2019 07:02:37: [prometheus] [all] task: copy prometheus operator deploy script dependance file - starting, message:
12 Jan 08 2019 07:02:46: [prometheus] [192.168.2.200] task: copy prometheus operator deploy script dependance file - ok, message: All items completed
13 Jan 08 2019 07:02:46: [prometheus] [all] task: load prometheus operator images - starting, message:
14 Jan 08 2019 07:03:56: [prometheus] [192.168.2.200] task: load prometheus operator images - ok, message: All items completed
15 Jan 08 2019 07:03:56: [prometheus] [all] task: docker login - starting, message:
16 Jan 08 2019 07:03:58: [prometheus] [192.168.2.200] task: docker login - ok, message:
17 Jan 08 2019 07:03:58: [prometheus] [all] task: set harbor address for deploy script - starting, message:
18 Jan 08 2019 07:03:59: [prometheus] [192.168.2.200] task: set harbor address for deploy script - ok, message: All items completed
19 Jan 08 2019 07:03:59: [prometheus] [all] task: set etcd address for deploy script - starting, message:
20 Jan 08 2019 07:04:00: [prometheus] [192.168.2.200] task: set etcd address for deploy script - ok, message: All items completed
21 Jan 08 2019 07:04:00: [prometheus] [all] task: set nodeport for prometheus alertmanager and grafana service - starting, message:
22 Jan 08 2019 07:04:02: [prometheus] [192.168.2.200] task: set nodeport for prometheus alertmanager and grafana service - ok, message: All items completed
23 Jan 08 2019 07:04:03: [prometheus] [all] task: prometheus operator deploy - starting, message:
24 Jan 08 2019 07:04:12: [prometheus] [192.168.2.200] task: prometheus operator deploy - failed, message: non-zero return code
25 Jan 08 2019 07:04:12: [prometheus] [all] task: ending - failed, message:

Passwordless SSH problem

SSH from the deploy server's own shell works fine, but during deployment I get Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
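Ansible inside the deploy container authenticates with the keys mounted from $HOME/.ssh on the deploy server, so password-based logins that work interactively are not enough. A hedged checklist for getting key-based login working (the host address is a placeholder):

```shell
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # only if no key pair exists yet
ssh-copy-id root@192.168.0.10              # repeat for every target host
ssh root@192.168.0.10 hostname             # should now log in without a password
```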

The bundled Ceph seems to have version problems and does not integrate smoothly with k8s

First of all, many thanks for providing such a convenient deployment tool. I still ran into a few pitfalls during actual deployment and am reporting them to the project team in the hope they can be fixed. The Ceph version bundled with this project is 12.2.9 Luminous, which works fine when PVCs are created manually.
However, dynamic provisioning through a StorageClass has some problems:
1. The CephFS Volume Provisioners currently available are all based on the Mimic release; see:
https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/cephfs
Image:
quay.io/external_storage/cephfs-provisioner:latest

2. When this CephFS Volume Provisioner dynamically provisions a PVC, the newly created directory cannot be read or written; the error is shown below (screenshot omitted).

3. Directories created manually in the Ceph system can be read and written normally.

task: setup - failed, message: non-zero return code

The error is: task: setup - failed, message: non-zero return code
There are no further logs.
System log on the master:
k8s02-master kubelet: E1017 09:59:52.596598 17573 azure_dd.go:147] failed to get azure cloud in GetVolumeLimits, plugin.host: k8s02-master

I don't know how to resolve this and would appreciate some help, thanks.

Failed to fetch loadbalancer component properties

When I chose to install the loadbalancer component, the following prompt appeared after selecting a version: failed to fetch loadbalancer component properties.
Version: master branch

version: '2'
services:
  deploy:
    container_name: deploy-main
    image: wise2c/pagoda:v1.0
    restart: always
    entrypoint: sh
    command:
    - -c
    - "/root/pagoda -logtostderr -v 4 -w /workspace"
    ports:
    - 88:80
    - 8088:8080
    volumes:
    - $HOME/.ssh:/root/.ssh
    - $PWD/deploy:/deploy
    volumes_from:
    - playbook
  ui:
    container_name: deploy-ui
    image: wise2c/deploy-ui:v1.2
    restart: always
    network_mode: "service:deploy"
  playbook:
    container_name: deploy-playbook
    image: wise2c/playbook:v1.13
    volumes:
    - playbook:/workspace
  yum-repo:
    container_name: deploy-yumrepo
    image: wise2c/yum-repo:v1.13
    ports:
    - 2009:2009
    restart: always
volumes:
  playbook:
    external: false

Can the network component be replaced with Calico after installation?

The current network component is flanneld. Can it be replaced with Calico? What I did was delete flanneld after installation and then install Calico, but it reports this error:
Readiness probe failed: calico/node is not ready: BIRD is not ready: BGP not established with 192.168.112.100,10.244.1.1
Please advise, thanks.
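A few hedged checks when swapping flannel for Calico by hand; leftover flannel interfaces or stale CNI config files commonly keep BGP peering from establishing (calicoctl must be installed separately, and the exact names will vary per cluster):

```shell
kubectl -n kube-system get pods -o wide | grep calico   # all calico-node pods Running?
calicoctl node status                                   # per-node BGP peer state
ip link show flannel.1                                  # leftover flannel VXLAN device?
ls /etc/cni/net.d/                                      # stale flannel CNI config files?
```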

Docker containers fail to start properly

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

d3090cff6ce7 goharbor/harbor-jobservice:v1.6.2 "/harbor/start.sh" 32 minutes ago Up 32 minutes harbor-jobservice
530939cedbb6 goharbor/nginx-photon:v1.6.2 "nginx -g 'daemon ..." 32 minutes ago Up 32 minutes (unhealthy) 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:4443->4443/tcp nginx
f66b394d8278 goharbor/harbor-ui:v1.6.2 "/harbor/start.sh" 32 minutes ago Up 32 minutes (unhealthy) harbor-ui
45c0dea25dd2 goharbor/harbor-db:v1.6.2 "/entrypoint.sh po..." 32 minutes ago Up 32 minutes (unhealthy) 5432/tcp harbor-db
c7624b34fedf goharbor/harbor-adminserver:v1.6.2 "/harbor/start.sh" 32 minutes ago Up 32 minutes (unhealthy) harbor-adminserver
1789e586d4f8 goharbor/redis-photon:v1.6.2 "docker-entrypoint..." 32 minutes ago Up 32 minutes 6379/tcp redis
9cace8419b9b goharbor/harbor-log:v1.6.2 "/bin/sh -c /usr/l..." 32 minutes ago Up 32 minutes (unhealthy) 127.0.0.1:1514->10514/tcp harbor-log
f396caf77a1f k8s.gcr.io/etcd-amd64:3.2.24 "etcd --name etcd0..." 44 minutes ago Up 44 minutes

What causes these unhealthy states?
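To find out why Docker reports a Harbor container as unhealthy, the health-check log that Docker records is usually the quickest lead (container names taken from the listing above; repeat for each unhealthy container):

```shell
# Show the last health-check results Docker recorded for the container:
docker inspect --format '{{json .State.Health}}' harbor-db
# And the container's own log output:
docker logs --tail 50 harbor-db
```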

After a failed 1.11.3 deployment and switching to 1.13, the UI still shows kubernetes 1.11.3

Deploying 1.11.3 failed with admin.conf not found, so I switched to the latest docker-compose.yml, pulled the images, and cleaned up:

rm -rf deploy/
docker rm deploy-ui
docker rm deploy-main
docker rm deploy-playbook
docker rm deploy-yumrepo

After starting everything again, the UI still shows kubernetes version 1.11.3 (screenshot omitted).

docker ps shows the newly created containers (screenshot omitted).

To check whether the playbook was at fault, I entered the container and looked at the kubernetes version it ships (screenshot omitted).

It turns out that the /v1/components/kubernetes/versions endpoint called by deploy-ui still returns 1.11.3...
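A likely cause: the docker-compose file shown above keeps the playbooks in a named volume (playbook), and docker rm on the containers does not delete named volumes, so the 1.11.3 playbooks survive the cleanup. A hedged sketch of a full reset (the actual volume name carries the compose project prefix; check with docker volume ls first):

```shell
docker-compose down                # stop and remove the deploy containers
docker volume ls | grep playbook   # find the exact volume name
docker volume rm deploy_playbook   # adjust to the name printed above
docker-compose up -d               # recreate from the v1.13 playbook image
```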

It fails every time. Does it matter that I deployed Harbor on a master node?

430 Jan 08 2019 09:22:39: [kubernetes] [10.240.0.13] task: restart kubelet - ok, message:
431 Jan 08 2019 09:22:39: [kubernetes] [10.240.0.14] task: restart kubelet - ok, message:
432 Jan 08 2019 09:22:39: [kubernetes] [10.240.0.12] task: restart kubelet - ok, message:
433 Jan 08 2019 09:22:39: [kubernetes] [all] task: execute prometheus-fix-worker-nodes script - starting, message:
434 Jan 08 2019 09:22:40: [kubernetes] [10.240.0.11] task: execute prometheus-fix-worker-nodes script - ok, message:
435 Jan 08 2019 09:22:40: [kubernetes] [10.240.0.14] task: execute prometheus-fix-worker-nodes script - ok, message:
436 Jan 08 2019 09:22:40: [kubernetes] [10.240.0.13] task: execute prometheus-fix-worker-nodes script - ok, message:
437 Jan 08 2019 09:22:40: [kubernetes] [10.240.0.12] task: execute prometheus-fix-worker-nodes script - ok, message:
438 Jan 08 2019 09:22:40: [kubernetes] [all] task: ending - failed, message:

Also, would upgrading the kernel to the latest version before installing cause any problems?

Installation error: 'inventory_hostname == ansible_play_batch[0]' failed

Freshly installed CentOS; the installation fails as follows: 'inventory_hostname == ansible_play_batch[0]' failed.

Jan 25 2019 03:03:50: [docker] [192.168.112.100] task: distribute chrony server config - failed, message: The conditional check 'inventory_hostname == ansible_play_batch[0]' failed. The error was: error while evaluating conditional (inventory_hostname == ansible_play_batch[0]): list object has no element 0 The error appears to have been in '/workspace/docker-playbook/18.06.1-CE/docker.ansible': line 69, column 5, but may be elsewhere in the file depending on the exact syntax problem. The offending line appears to be: - name: distribute chrony server config ^ here

Thanks.

Oct 11 2018 06:32:20: [registry] [192.168.21.96] task: Gathering Facts - unreachable, message: Failed to connect to the host via ssh: Control socket connect(/root/.ansible/cp/2fd8118b33): Connection refused Failed to connect to new control master


How can this be resolved? Passwordless login works fine with the plain ssh command.
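"Connection refused" on the control socket usually points at a stale SSH ControlMaster socket cached inside the deploy container, rather than a real authentication problem. A hedged fix (the path is Ansible's default ControlPath directory for root):

```shell
# Remove cached multiplexing sockets so Ansible opens fresh SSH connections:
rm -rf /root/.ansible/cp/*
```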

Started the installation, but the installation UI shows no logs at all


Can this tool install k8s in an environment that already has Docker and Harbor?

According to the installation guide, both of them get reinstalled, but Docker and Harbor are already in use in my environment. Is installing on top of them supported? I tried it and got these errors:
17 Nov 05 2018 08:40:34: [kubernetes] [192.168.3.220] task: install kubernetes components - failed, message: All items completed
18 Nov 05 2018 08:40:34: [kubernetes] [192.168.1.118] task: install kubernetes components - failed, message: All items completed
19 Nov 05 2018 08:40:34: [kubernetes] [all] task: ending - failed, message:

Why only Ansible? A few things could be done over SSH directly.

Is your feature request related to a problem? Please describe.
I don't want to run this on every host each time:

yum -y install python-docker-py

Describe the solution you'd like
Since the deploy machine can use Ansible, could it not also just use SSH directly?

Describe alternatives you've considered

ssh host yum -y install python-docker-py

Docker containers fail to start after the deploy host loses power and reboots

The data center lost power unexpectedly; after the VM rebooted, the deployed containers did not start automatically, and starting them manually reports these errors:

Error response from daemon: error creating overlay mount to /var/lib/docker/overlay2/da81325611865416d190f29cedfd3b17e928d4399ad62752f2457faf0f3f1025/merged: invalid argument
Error response from daemon: error creating overlay mount to /var/lib/docker/overlay2/f8a03f773116299ab4a369b8dcbf2298148440e9bb030eb66882442df45f3e03/merged: invalid argument
Error response from daemon: error creating overlay mount to /var/lib/docker/overlay2/8629137bb019826e1f8cda5e72598ec1a3e55097a5c7d453cd9987474c9d0dac/merged: invalid argument
Error response from daemon: error creating overlay mount to /var/lib/docker/overlay2/836e4b937276bdbfb5d8d9eb4c5d1285bd44816ecbcb55c079bb054a862a71e6/merged: invalid argument
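"overlay mount ... invalid argument" after an unclean shutdown is often either corrupted overlay2 layer directories or an XFS backing filesystem created without d_type support. A couple of hedged checks (paths taken from the errors above):

```shell
# overlay2 requires d_type support; on XFS that means ftype=1:
xfs_info /var/lib/docker | grep ftype
# Inspect one of the failing layer directories for power-loss corruption:
ls /var/lib/docker/overlay2/da81325611865416d190f29cedfd3b17e928d4399ad62752f2457faf0f3f1025/
```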

step docker failed: exit status 250 with output

Installed on CentOS 7 following the documented procedure; the final deployment step fails.
From /var/log/messages:
journal: W1018 12:18:53.020032 1 stat.go:57] read file dir inherent property error: open /workspace/docker-playbook/1.13.1/inherent.yaml: no such file or directory
Oct 18 20:18:53 10-9-85-3 journal: I1018 12:18:53.020662 1 stat.go:73] genereate inherent property: endpoint: http://10.9.136.202:2379,http://10.10.18.148:2379
Oct 18 20:18:53 10-9-85-3 journal: I1018 12:18:53.021220 1 stat.go:73] genereate inherent property: endpoint: 10.9.136.202:6443
Oct 18 20:18:53 10-9-85-3 journal: I1018 12:18:53.021582 1 stat.go:73] genereate inherent property: endpoint: 10.9.85.3
Oct 18 20:18:53 10-9-85-3 journal: user: admin
Oct 18 20:18:53 10-9-85-3 journal: password: Harbor12345
Oct 18 20:18:53 10-9-85-3 journal: I1018 12:18:53.023654 1 command.go:180] begin to install cluster test
Oct 18 20:18:53 10-9-85-3 journal: [GIN] 2018/10/18 - 12:18:53 | 200 #33[0m| 23.978612ms | 10.10.57.120 | PUT #33[0m /v1/clusters/24e5c7ed-de22-4d8a-90e7-73a8f21a493f/deployment
Oct 18 20:18:53 10-9-85-3 journal: 10.10.57.120 - - [18/Oct/2018:12:18:53 +0000] "PUT /v1/clusters/24e5c7ed-de22-4d8a-90e7-73a8f21a493f/deployment HTTP/1.0" 200 48
Oct 18 20:18:53 10-9-85-3 journal: [GIN] 2018/10/18 - 12:18:53 | 200 #33[0m| 1.055677ms | 10.10.57.120 | GET #33[0m /v1/clusters/24e5c7ed-de22-4d8a-90e7-73a8f21a493f
Oct 18 20:18:53 10-9-85-3 journal: 10.10.57.120 - - [18/Oct/2018:12:18:53 +0000] "GET /v1/clusters/24e5c7ed-de22-4d8a-90e7-73a8f21a493f HTTP/1.0" 200 537
Oct 18 20:18:53 10-9-85-3 journal: E1018 12:18:53.150637 1 main.go:190] upgrade error: websocket: not a websocket handshake: 'upgrade' token not found in 'Connection' header
Oct 18 20:18:53 10-9-85-3 journal: [GIN] 2018/10/18 - 12:18:53 | 400 #33[0m| 85.745µs | 10.10.57.120 | GET #33[0m /v1/stats
Oct 18 20:18:53 10-9-85-3 journal: 10.10.57.120 - - [18/Oct/2018:12:18:53 +0000] "GET /v1/stats HTTP/1.0" 400 36
Oct 18 20:18:54 10-9-85-3 journal: I1018 12:18:54.024367 1 command.go:203] start step docker
Oct 18 20:18:55 10-9-85-3 journal: I1018 12:18:55.042038 1 command.go:206] step docker failed: exit status 250 with output: ERROR! Unexpected Exception, this is probably a bug: [Errno 2] No such file or directory: '/opt/ansible/ansible/lib/ansible/plugins/callback/cgroup_memory_recap.py'
Oct 18 20:18:55 10-9-85-3 journal: to see the full traceback, use -vvv
Oct 18 20:18:55 10-9-85-3 journal: I1018 12:18:55.042134 1 command.go:239] failed at a step

Ansible SSH issue

When the deployment tasks are dispatched, the error mux_client_request_session: read from master failed: Broken pipe occurs.
Inspecting the processes shows that the wise2c/pagoda:v1.0 container has spawned a large number of ssh processes. How can this be resolved?
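
One workaround worth trying (an assumption on our side, not an official Breeze fix) is to turn off SSH connection multiplexing for Ansible inside the pagoda container, so a stale ControlMaster socket can no longer break the pipe:

```ini
# ansible.cfg (inside the pagoda container) -- disable ControlMaster
# multiplexing so every task opens a fresh SSH connection instead of
# reusing a shared control socket
[ssh_connection]
ssh_args = -o ControlMaster=no -o ControlPath=none
```

Leftover multiplexing master processes from earlier runs (they show up in `ps` as `ssh: ... [mux]`) may also need to be killed manually on the deploy host.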

Harbor installation failed

The following error is reported during installation:

task: unarchive harbor - failed, message: Failed to find handler for "/root/.ansible/tmp/ansible-tmp-1545811631.65-227516696061090/source". Make sure the required command to extract the file is installed. Command "/usr/bin/unzip" could not handle archive. Command "/usr/bin/gtar" could not handle archive.
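
Ansible's `unarchive` module shells out to `unzip` and `gtar` on the target host, and CentOS Minimal installs often ship without them. A minimal pre-flight check to run on each target node (assuming the node can reach the Breeze yum repository):

```shell
# Install the extraction tools that the Ansible unarchive module needs;
# without them the "unarchive harbor" task fails exactly as shown above.
for tool in unzip tar; do
  command -v "$tool" >/dev/null 2>&1 || yum install -y "$tool"
done
```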

1.12.1 installed successfully, but the Dashboard page cannot be accessed

Environment: CentOS 7.5 Minimal, with chrony-3.2-2.el7.x86_64 installed offline
Deploy server: 130
Masters: 131, 132
Nodes: 133, 134
Registry: 135

https://192.168.0.133:30300 cannot be accessed from the 360 browser in speed mode.
The token can be retrieved:
[root@localhost ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name: admin-user-token-bg96d
Namespace: kube-system
Labels:
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: 78c3350d-d034-11e8-914a-00505688224a

Type: kubernetes.io/service-account-token

Data

ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWJnOTZkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3OGMzMzUwZC1kMDM0LTExZTgtOTE0YS0wMDUwNTY4ODIyNGEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.bTLBEfqBA0aao6usWoVLYtGgmx7uc98ND_dO9Vf2p2f59YwiLencNqiLlksHUJRXl79SY0rCTADehUzrSU7zH9LtbATrBMxQjwsbv1nra1-030T7gqyTyeJ4-ge1piRjpPodZF82esRoZi7b7EKS-T1wOlX5Inif31oQ34XPl8JEck0IcDgxMAyxrvCOfiM7i76Vf5mZbtTkx6UgCYKgJN9sG9mxcoO54zxkzvOcVWEY6p4Y3k_Qfdcop8HtlbexbI0av-TtEE5aL9bQaaCVf8Avubomu0IR44M656MS4uOt_yfIvSuJr0SFcKsdvNCl5kMsq2ytIfCdrncrq1LdKQ

-------------------------- Installation log ----------------------------------
I1015 04:37:37.046323 5 command.go:203] start step kubernetes
[GIN] 2018/10/15 - 04:37:38 | 200 | 1.055335755s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:37:39 | 200 | 2.003860327s | 127.0.0.1 | POST /v1/notify
I1015 04:37:39.364885 5 main.go:181] &{map[msg: changed:false] Oct 15 2018 04:37:39 {Gathering Facts ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:37:39 | 200 | 16.224346ms | 127.0.0.1 | POST /v1/notify
I1015 04:37:39.423863 5 main.go:181] &{map[msg: changed:false] Oct 15 2018 04:37:39 {Gathering Facts ok} kubernetes processing 192.168.0.132}
I1015 04:37:39.447148 5 main.go:181] &{map[changed:false msg:] Oct 15 2018 04:37:39 {Gathering Facts ok} kubernetes processing 192.168.0.134}
[GIN] 2018/10/15 - 04:37:39 | 200 | 23.06805ms | 127.0.0.1 | POST /v1/notify
I1015 04:37:39.475838 5 main.go:181] &{map[changed:false msg:] Oct 15 2018 04:37:39 {Gathering Facts ok} kubernetes processing 192.168.0.133}
[GIN] 2018/10/15 - 04:37:39 | 200 | 28.481717ms | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:37:39 | 200 | 23.325968ms | 127.0.0.1 | POST /v1/notify
I1015 04:37:41.181181 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:37:41 {make k8s master dir ok} kubernetes processing 192.168.0.134}
I1015 04:37:41.185644 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:37:41 {make k8s master dir ok} kubernetes processing 192.168.0.131}
I1015 04:37:41.188946 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:37:41 {make k8s master dir ok} kubernetes processing 192.168.0.133}
I1015 04:37:41.192531 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:37:41 {make k8s master dir ok} kubernetes processing 192.168.0.132}
[GIN] 2018/10/15 - 04:37:41 | 200 | 184.284376ms | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:37:42 | 200 | 1.00195158s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:37:42 | 200 | 1.00234625s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:37:42 | 200 | 1.002089048s | 127.0.0.1 | POST /v1/notify
I1015 04:37:42.471899 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:37:42 {check kubernetes ok} kubernetes processing 192.168.0.132}
I1015 04:37:42.476607 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:37:42 {check kubernetes ok} kubernetes processing 192.168.0.133}
I1015 04:37:42.483173 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:37:42 {check kubernetes ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:37:42 | 200 | 11.583682ms | 127.0.0.1 | POST /v1/notify
I1015 04:37:42.488818 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:37:42 {check kubernetes ok} kubernetes processing 192.168.0.134}
[GIN] 2018/10/15 - 04:37:42 | 200 | 18.975963ms | 127.0.0.1 | POST /v1/notify
I1015 04:37:43.245395 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:37:43 {remove swapfile from /etc/fstab ok} kubernetes processing 192.168.0.132}
I1015 04:37:43.252895 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:37:43 {remove swapfile from /etc/fstab ok} kubernetes processing 192.168.0.131}
I1015 04:37:43.259129 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:37:43 {remove swapfile from /etc/fstab ok} kubernetes processing 192.168.0.133}
[GIN] 2018/10/15 - 04:37:43 | 200 | 17.232876ms | 127.0.0.1 | POST /v1/notify
I1015 04:37:43.268850 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:37:43 {remove swapfile from /etc/fstab ok} kubernetes processing 192.168.0.134}
[GIN] 2018/10/15 - 04:37:43 | 200 | 22.98119ms | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:37:43 | 200 | 1.025545007s | 127.0.0.1 | POST /v1/notify
I1015 04:37:44.038500 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:37:44 {disable swap ok} kubernetes processing 192.168.0.131}
I1015 04:37:44.043821 5 main.go:181] &{map[changed:true msg:] Oct 15 2018 04:37:44 {disable swap ok} kubernetes processing 192.168.0.134}
I1015 04:37:44.049691 5 main.go:181] &{map[changed:true msg:] Oct 15 2018 04:37:44 {disable swap ok} kubernetes processing 192.168.0.133}
[GIN] 2018/10/15 - 04:37:44 | 200 | 14.442129ms | 127.0.0.1 | POST /v1/notify
I1015 04:37:44.059849 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:37:44 {disable swap ok} kubernetes processing 192.168.0.132}
[GIN] 2018/10/15 - 04:37:44 | 200 | 23.470467ms | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:37:44 | 200 | 1.032286625s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:37:44 | 200 | 2.037417503s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:37:45 | 200 | 1.0018219s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:37:45 | 200 | 1.00171608s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:37:45 | 200 | 2.003655322s | 127.0.0.1 | POST /v1/notify
I1015 04:37:56.952443 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:37:56 {install kubernetes components ok} kubernetes processing 192.168.0.134}
[GIN] 2018/10/15 - 04:37:56 | 200 | 2.807791ms | 127.0.0.1 | POST /v1/notify
I1015 04:37:57.001857 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:37:57 {install kubernetes components ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:37:57 | 200 | 2.690951ms | 127.0.0.1 | POST /v1/notify
I1015 04:37:57.055466 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:37:57 {install kubernetes components ok} kubernetes processing 192.168.0.133}
[GIN] 2018/10/15 - 04:37:57 | 200 | 2.557363ms | 127.0.0.1 | POST /v1/notify
I1015 04:37:57.146871 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:37:57 {install kubernetes components ok} kubernetes processing 192.168.0.132}
[GIN] 2018/10/15 - 04:37:57 | 200 | 27.26785ms | 127.0.0.1 | POST /v1/notify
I1015 04:37:58.412884 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:37:58 {distribute kubelet config ok} kubernetes processing 192.168.0.134}
I1015 04:37:58.428622 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:37:58 {distribute kubelet config ok} kubernetes processing 192.168.0.133}
I1015 04:37:58.434659 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:37:58 {distribute kubelet config ok} kubernetes processing 192.168.0.132}
[GIN] 2018/10/15 - 04:37:58 | 200 | 22.978276ms | 127.0.0.1 | POST /v1/notify
I1015 04:37:58.439617 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:37:58 {distribute kubelet config ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:37:58 | 200 | 27.472808ms | 127.0.0.1 | POST /v1/notify
I1015 04:37:59.323093 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:37:59 {reload & enable kubelet ok} kubernetes processing 192.168.0.134}
I1015 04:37:59.329845 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:37:59 {reload & enable kubelet ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:37:59 | 200 | 13.481755ms | 127.0.0.1 | POST /v1/notify
I1015 04:37:59.341644 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:37:59 {reload & enable kubelet ok} kubernetes processing 192.168.0.133}
I1015 04:37:59.345398 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:37:59 {reload & enable kubelet ok} kubernetes processing 192.168.0.132}
[GIN] 2018/10/15 - 04:37:59 | 200 | 24.451104ms | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:37:59 | 200 | 1.027527652s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:38:00 | 200 | 1.00240291s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:38:00 | 200 | 1.031484406s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:38:00 | 200 | 2.006752696s | 127.0.0.1 | POST /v1/notify
I1015 04:38:00.599378 5 main.go:181] &{map[msg:All items completed changed:false] Oct 15 2018 04:38:00 {set sysctl ok} kubernetes processing 192.168.0.131}
I1015 04:38:00.602423 5 main.go:181] &{map[msg:All items completed changed:false] Oct 15 2018 04:38:00 {set sysctl ok} kubernetes processing 192.168.0.134}
I1015 04:38:00.604964 5 main.go:181] &{map[msg:All items completed changed:false] Oct 15 2018 04:38:00 {set sysctl ok} kubernetes processing 192.168.0.133}
I1015 04:38:00.608767 5 main.go:181] &{map[msg:All items completed changed:false] Oct 15 2018 04:38:00 {set sysctl ok} kubernetes processing 192.168.0.132}
[GIN] 2018/10/15 - 04:38:00 | 200 | 12.163647ms | 127.0.0.1 | POST /v1/notify
I1015 04:38:00.714765 5 main.go:181] &{map[msg:All items completed changed:false] Oct 15 2018 04:38:00 {setup master skipped} kubernetes processing 192.168.0.133}
I1015 04:38:00.717031 5 main.go:181] &{map[changed:false msg:All items completed] Oct 15 2018 04:38:00 {setup master skipped} kubernetes processing 192.168.0.134}
I1015 04:38:00.720482 5 main.go:181] &{map[msg: changed:false] Oct 15 2018 04:38:00 {setup master ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:38:00 | 200 | 16.443274ms | 127.0.0.1 | POST /v1/notify
I1015 04:38:00.745856 5 main.go:181] &{map[msg: changed:false] Oct 15 2018 04:38:00 {setup master ok} kubernetes processing 192.168.0.132}
[GIN] 2018/10/15 - 04:38:00 | 200 | 10.989605ms | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:38:01 | 200 | 1.035743058s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:38:01 | 200 | 1.034316039s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:38:02 | 200 | 2.040325145s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:38:02 | 200 | 2.038823671s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:38:03 | 200 | 3.03757271s | 127.0.0.1 | POST /v1/notify
I1015 04:38:12.034886 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:38:12 {copy k8s images ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:38:12 | 200 | 30.29899ms | 127.0.0.1 | POST /v1/notify
I1015 04:38:57.336948 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:38:57 {load k8s images ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:38:57 | 200 | 28.27225ms | 127.0.0.1 | POST /v1/notify
I1015 04:38:57.957882 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:38:57 {docker login ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:38:58 | 200 | 551.218509ms | 127.0.0.1 | POST /v1/notify
I1015 04:39:01.444873 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:39:01 {tag images ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:39:01 | 200 | 24.309637ms | 127.0.0.1 | POST /v1/notify
I1015 04:39:46.044901 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:39:46 {push images ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:39:46 | 200 | 28.223941ms | 127.0.0.1 | POST /v1/notify
I1015 04:39:46.484278 5 main.go:181] &{map[msg:All items completed changed:false] Oct 15 2018 04:39:46 {pull k8s pause images ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:39:46 | 200 | 2.611479ms | 127.0.0.1 | POST /v1/notify
I1015 04:39:46.869887 5 main.go:181] &{map[changed:true msg:All items completed] Oct 15 2018 04:39:46 {pull k8s pause images ok} kubernetes processing 192.168.0.132}
[GIN] 2018/10/15 - 04:39:46 | 200 | 26.217033ms | 127.0.0.1 | POST /v1/notify
I1015 04:39:47.350859 5 main.go:181] &{map[msg:All items completed changed:false] Oct 15 2018 04:39:47 {tag k8s pause images ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:39:47 | 200 | 2.459367ms | 127.0.0.1 | POST /v1/notify
I1015 04:39:47.379848 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:39:47 {tag k8s pause images ok} kubernetes processing 192.168.0.132}
[GIN] 2018/10/15 - 04:39:47 | 200 | 27.332567ms | 127.0.0.1 | POST /v1/notify
I1015 04:39:49.338295 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:39:49 {generate kubeadm config ok} kubernetes processing 192.168.0.131}
I1015 04:39:49.341578 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:39:49 {generate kubeadm config ok} kubernetes processing 192.168.0.132}
[GIN] 2018/10/15 - 04:39:49 | 200 | 24.843502ms | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:39:50 | 200 | 1.015983954s | 127.0.0.1 | POST /v1/notify
I1015 04:39:54.237892 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:39:54 {copy crt & key ok} kubernetes processing 192.168.0.131}
I1015 04:39:54.244834 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:39:54 {copy crt & key ok} kubernetes processing 192.168.0.132}
[GIN] 2018/10/15 - 04:39:54 | 200 | 22.256883ms | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:39:55 | 200 | 1.001620631s | 127.0.0.1 | POST /v1/notify
I1015 04:40:29.139801 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:40:29 {setup ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:40:29 | 200 | 2.706475ms | 127.0.0.1 | POST /v1/notify
I1015 04:40:39.387943 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:40:39 {setup ok} kubernetes processing 192.168.0.132}
[GIN] 2018/10/15 - 04:40:39 | 200 | 20.419044ms | 127.0.0.1 | POST /v1/notify
I1015 04:40:39.776862 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:40:39 {fetch admin.conf ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:40:39 | 200 | 27.26219ms | 127.0.0.1 | POST /v1/notify
I1015 04:40:40.185200 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:40:40 {config kubectl ok} kubernetes processing 192.168.0.132}
[GIN] 2018/10/15 - 04:40:40 | 200 | 2.29179ms | 127.0.0.1 | POST /v1/notify
I1015 04:40:40.203944 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:40:40 {config kubectl ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:40:40 | 200 | 29.342239ms | 127.0.0.1 | POST /v1/notify
I1015 04:40:41.361869 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:40:41 {apply addons ok} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:40:41 | 200 | 24.222082ms | 127.0.0.1 | POST /v1/notify
I1015 04:40:41.471127 5 main.go:181] &{map[msg: changed:false] Oct 15 2018 04:40:41 {setup node ok} kubernetes processing 192.168.0.133}
I1015 04:40:41.471969 5 main.go:181] &{map[msg: changed:false] Oct 15 2018 04:40:41 {setup node ok} kubernetes processing 192.168.0.134}
I1015 04:40:41.481242 5 main.go:181] &{map[msg: changed:false] Oct 15 2018 04:40:41 {setup node skipped} kubernetes processing 192.168.0.131}
[GIN] 2018/10/15 - 04:40:41 | 200 | 12.652041ms | 127.0.0.1 | POST /v1/notify
I1015 04:40:41.497903 5 main.go:181] &{map[changed:false msg:] Oct 15 2018 04:40:41 {setup node skipped} kubernetes processing 192.168.0.132}
[GIN] 2018/10/15 - 04:40:41 | 200 | 17.733072ms | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:40:42 | 200 | 1.001587855s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:40:42 | 200 | 1.001793657s | 127.0.0.1 | POST /v1/notify
I1015 04:40:42.588043 5 main.go:181] &{map[changed:true msg:All items completed] Oct 15 2018 04:40:42 {pull k8s pause images ok} kubernetes processing 192.168.0.133}
[GIN] 2018/10/15 - 04:40:42 | 200 | 2.510756ms | 127.0.0.1 | POST /v1/notify
I1015 04:40:42.641864 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:40:42 {pull k8s pause images ok} kubernetes processing 192.168.0.134}
[GIN] 2018/10/15 - 04:40:42 | 200 | 28.350261ms | 127.0.0.1 | POST /v1/notify
I1015 04:40:43.249586 5 main.go:181] &{map[changed:true msg:All items completed] Oct 15 2018 04:40:43 {tag k8s pause images ok} kubernetes processing 192.168.0.133}
I1015 04:40:43.253643 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:40:43 {tag k8s pause images ok} kubernetes processing 192.168.0.134}
[GIN] 2018/10/15 - 04:40:43 | 200 | 24.633907ms | 127.0.0.1 | POST /v1/notify
I1015 04:40:44.010387 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:40:44 {copy k8s admin.conf ok} kubernetes processing 192.168.0.134}
I1015 04:40:44.018334 5 main.go:181] &{map[msg:All items completed changed:true] Oct 15 2018 04:40:44 {copy k8s admin.conf ok} kubernetes processing 192.168.0.133}
[GIN] 2018/10/15 - 04:40:44 | 200 | 26.821056ms | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:40:44 | 200 | 1.00166525s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 04:40:45 | 200 | 1.002723196s | 127.0.0.1 | POST /v1/notify
I1015 04:40:46.698566 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:40:46 {setup node ok} kubernetes processing 192.168.0.133}
[GIN] 2018/10/15 - 04:40:46 | 200 | 2.350085ms | 127.0.0.1 | POST /v1/notify
I1015 04:40:47.204877 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:40:47 {setup node ok} kubernetes processing 192.168.0.134}
[GIN] 2018/10/15 - 04:40:47 | 200 | 28.291317ms | 127.0.0.1 | POST /v1/notify
I1015 04:40:47.711556 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:40:47 {restart kubelet ok} kubernetes started 192.168.0.134}
[GIN] 2018/10/15 - 04:40:47 | 200 | 8.131226ms | 127.0.0.1 | POST /v1/notify
I1015 04:40:48.066233 5 main.go:181] &{map[msg: changed:true] Oct 15 2018 04:40:48 {restart kubelet ok} kubernetes started 192.168.0.133}
I1015 04:40:48.071512 5 main.go:181] &{map[changed:false msg:] Oct 15 2018 04:40:48 {ending ok} kubernetes ok all}
[GIN] 2018/10/15 - 04:40:48 | 200 | 35.988748ms | 127.0.0.1 | POST /v1/notify
I1015 04:40:48.121952 5 command.go:209] step kubernetes compeleted
I1015 04:40:48.121981 5 command.go:223] complete all install step
[GIN] 2018/10/15 - 04:40:49 | 200 | 1.00173137s | 127.0.0.1 | POST /v1/notify
[GIN] 2018/10/15 - 05:02:28 | 200 | 1.361402ms | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/hosts
[GIN] 2018/10/15 - 05:02:28 | 200 | 1.727145ms | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad
[GIN] 2018/10/15 - 05:02:29 | 200 | 1.763666ms | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/components
[GIN] 2018/10/15 - 05:02:29 | 200 | 267.192µs | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/hosts
[GIN] 2018/10/15 - 05:02:29 | 200 | 469.592µs | 192.168.6.239 | GET /v1/components
[GIN] 2018/10/15 - 05:02:30 | 200 | 5.870199ms | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/logs
[GIN] 2018/10/15 - 05:05:19 | 200 | 663.683µs | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/hosts
[GIN] 2018/10/15 - 05:05:19 | 200 | 2.249758ms | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/components
[GIN] 2018/10/15 - 05:05:19 | 200 | 74.05µs | 192.168.6.239 | GET /v1/components
[GIN] 2018/10/15 - 05:07:34 | 200 | 6.223676ms | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/logs
[GIN] 2018/10/15 - 05:07:41 | 200 | 775.28µs | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/hosts
[GIN] 2018/10/15 - 05:07:41 | 200 | 89.522µs | 192.168.6.239 | GET /v1/components
[GIN] 2018/10/15 - 05:07:41 | 200 | 2.734292ms | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/components
[GIN] 2018/10/15 - 05:08:31 | 200 | 6.11646ms | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/logs
[GIN] 2018/10/15 - 05:08:31 | 200 | 489.479µs | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/hosts
[GIN] 2018/10/15 - 05:08:33 | 200 | 501.504µs | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/hosts
[GIN] 2018/10/15 - 05:08:33 | 200 | 54.818µs | 192.168.6.239 | GET /v1/components
[GIN] 2018/10/15 - 05:08:33 | 200 | 4.160305ms | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/components
[GIN] 2018/10/15 - 05:08:35 | 200 | 6.706555ms | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/logs
[GIN] 2018/10/15 - 05:08:36 | 200 | 91.999µs | 192.168.6.239 | GET /v1/components
[GIN] 2018/10/15 - 05:08:36 | 200 | 1.498564ms | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/components
[GIN] 2018/10/15 - 05:08:36 | 200 | 306.781µs | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/hosts
[GIN] 2018/10/15 - 05:27:40 | 200 | 6.616749ms | 192.168.6.239 | GET /v1/clusters/2ec87480-cd93-4136-8810-523edef2e7ad/logs
------------------------------------- Dashboard container log ------------------------------
2018/10/15 04:41:07 Using in-cluster config to connect to apiserver
2018/10/15 04:41:07 Using service account token for csrf signing
2018/10/15 04:41:07 No request provided. Skipping authorization
2018/10/15 04:41:07 Starting overwatch
2018/10/15 04:41:07 Successful initial request to the apiserver, version: v1.12.1
2018/10/15 04:41:07 Generating JWE encryption key
2018/10/15 04:41:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2018/10/15 04:41:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2018/10/15 04:41:08 Storing encryption key in a secret
2018/10/15 04:41:08 Creating in-cluster Heapster client
2018/10/15 04:41:08 Auto-generating certificates
2018/10/15 04:41:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:41:08 Successfully created certificates
2018/10/15 04:41:08 Serving securely on HTTPS port: 8443
2018/10/15 04:41:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:42:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:42:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:43:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:43:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:44:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:44:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:45:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:45:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:46:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:46:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:47:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:47:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:48:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:48:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:49:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:49:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:50:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:50:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:51:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:51:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:52:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:52:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:53:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:53:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:54:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:54:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:55:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:55:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:56:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:56:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:57:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:57:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:58:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:58:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:59:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 04:59:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:00:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:00:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:01:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:01:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:02:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:02:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:03:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:03:35 http: TLS handshake error from 10.244.2.0:65059: tls: first record does not look like a TLS handshake
2018/10/15 05:03:35 http: TLS handshake error from 10.244.2.0:65060: tls: first record does not look like a TLS handshake
2018/10/15 05:03:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:03:45 http: TLS handshake error from 10.244.2.0:65066: EOF
2018/10/15 05:03:45 http: TLS handshake error from 10.244.2.0:65068: EOF
2018/10/15 05:03:45 http: TLS handshake error from 10.244.2.0:65069: EOF
2018/10/15 05:03:45 http: TLS handshake error from 10.244.2.0:65057: EOF
2018/10/15 05:03:45 http: TLS handshake error from 10.244.2.0:65056: EOF
2018/10/15 05:04:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:04:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:05:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:05:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:06:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:06:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:07:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:07:10 http: TLS handshake error from 10.244.3.0:63859: EOF
2018/10/15 05:07:11 http: TLS handshake error from 10.244.3.0:63860: EOF
2018/10/15 05:07:11 http: TLS handshake error from 10.244.3.0:63861: EOF
2018/10/15 05:07:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:08:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:08:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:08:51 http: TLS handshake error from 10.244.2.0:57173: tls: first record does not look like a TLS handshake
2018/10/15 05:09:06 http: TLS handshake error from 10.244.2.0:57174: EOF
2018/10/15 05:09:06 http: TLS handshake error from 10.244.2.0:57194: tls: first record does not look like a TLS handshake
2018/10/15 05:09:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:09:19 http: TLS handshake error from 10.244.2.0:57195: EOF
2018/10/15 05:09:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:10:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:10:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:11:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:11:16 http: TLS handshake error from 10.244.0.0:57246: tls: first record does not look like a TLS handshake
2018/10/15 05:11:16 http: TLS handshake error from 10.244.0.0:57248: tls: first record does not look like a TLS handshake
2018/10/15 05:11:18 http: TLS handshake error from 10.244.1.1:57319: tls: first record does not look like a TLS handshake
2018/10/15 05:11:19 http: TLS handshake error from 10.244.1.1:57323: tls: first record does not look like a TLS handshake
2018/10/15 05:11:19 http: TLS handshake error from 10.244.1.1:57324: tls: first record does not look like a TLS handshake
2018/10/15 05:11:28 http: TLS handshake error from 10.244.0.0:57247: EOF
2018/10/15 05:11:28 http: TLS handshake error from 10.244.1.1:57320: tls: first record does not look like a TLS handshake
2018/10/15 05:11:28 http: TLS handshake error from 10.244.1.1:57321: EOF
2018/10/15 05:11:31 http: TLS handshake error from 10.244.1.1:57375: tls: first record does not look like a TLS handshake
2018/10/15 05:11:38 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
2018/10/15 05:11:42 http: TLS handshake error from 10.244.1.1:57376: EOF
2018/10/15 05:11:42 http: TLS handshake error from 10.244.3.0:57429: EOF
2018/10/15 05:11:42 http: TLS handshake error from 10.244.3.0:57430: EOF
2018/10/15 05:11:42 http: TLS handshake error from 10.244.3.0:57431: EOF
2018/10/15 05:12:08 Metric client health check failed: unknown (get services heapster). Retrying in 30 seconds.
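The `first record does not look like a TLS handshake` lines are Go's standard complaint when a plain-text client connects to an HTTPS listener (here, the dashboard's secure port). A TLS ClientHello always begins with record-type byte `0x16`, while an HTTP request begins with printable ASCII such as `0x47` (the `G` in `GET`). A minimal sketch of that distinction (illustrative only, not dashboard code):

```shell
# Print the hex value of the first byte arriving on stdin.
# 16 => TLS handshake record; printable ASCII (e.g. 47) => plain-text HTTP.
first_byte_hex() { head -c1 | od -An -tx1 | tr -d ' \n'; }

printf '\026\003\001' | first_byte_hex    # start of a TLS ClientHello
echo " <- 16 = TLS handshake record"
printf 'GET / HTTP/1.1' | first_byte_hex  # start of an HTTP request
echo " <- 47 = plain-text HTTP"
```

So these log lines usually mean a probe or browser is speaking `http://` to the dashboard's `https://` port, not that the certificate itself is broken.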

Error at the kubernetes installation step

Environment:
Four servers, all CentOS 7.5 Minimal installs with nothing else done to them.
The layout is 1 master and 3 workers.
The deployment still ends with an error.

Oct 13 2018 06:55:37: [kubernetes] [192.168.2.245] task: set sysctl - ok, message: All items completed
Oct 13 2018 06:55:37: [kubernetes] [192.168.2.244] task: set sysctl - ok, message: All items completed
Oct 13 2018 06:55:39: [kubernetes] [192.168.2.245] task: pull k8s pause images - failed, message: All items completed
Oct 13 2018 06:55:39: [kubernetes] [192.168.2.244] task: pull k8s pause images - failed, message: All items completed
Oct 13 2018 06:55:39: [kubernetes] [all] task: ending - failed, message:
Oct 13 2018 06:55:37: [kubernetes] [192.168.2.245] task: setup master - skipped, message: All items completed
Oct 13 2018 06:55:37: [kubernetes] [192.168.2.244] task: setup node - ok, message:
Oct 13 2018 06:55:37: [kubernetes] [192.168.2.245] task: setup node - ok, message:
Oct 13 2018 06:55:37: [kubernetes] [192.168.2.244] task: setup master - skipped, message: All items completed
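A `pull k8s pause images - failed` on every node usually means the nodes cannot reach (or log in to) the local Harbor registry that Breeze deploys, rather than a problem with the images themselves. A hedged reachability check, run from a failing node (the registry address below is a placeholder; substitute the Harbor host configured in Breeze):

```shell
# Placeholder registry address - replace with your Harbor host.
REGISTRY=${REGISTRY:-192.168.2.200}
if command -v curl >/dev/null 2>&1; then
  # /v2/ answers on any Docker-registry-compatible endpoint, Harbor included.
  reach=$(curl -sk --max-time 5 "https://${REGISTRY}/v2/" >/dev/null 2>&1 && echo yes || echo no)
else
  reach=unknown
fi
echo "registry reachable: ${reach}"
```

If the endpoint is unreachable, check firewalld rules and /etc/hosts entries on the node before re-running the kubernetes step.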

Missing admin.conf

TASK [copy k8s admin.conf] *****************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: /workspace/kubernetes-playbook/v1.13.0/file/admin.conf
failed: [10.7.111.208] (item={u'dest': u'/root/.kube/config', u'src': u'file/admin.conf'}) => {"changed": false, "item": {"dest": "/root/.kube/config", "src": "file/admin.conf"}, "msg": "Could not find or access 'file/admin.conf'\nSearched in:\n\t/workspace/kubernetes-playbook/v1.13.0/files/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.13.0/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.13.0/files/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.13.0/file/admin.conf"}
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: /workspace/kubernetes-playbook/v1.13.0/file/admin.conf
failed: [10.7.111.209] (item={u'dest': u'/root/.kube/config', u'src': u'file/admin.conf'}) => {"changed": false, "item": {"dest": "/root/.kube/config", "src": "file/admin.conf"}, "msg": "Could not find or access 'file/admin.conf'\nSearched in:\n\t/workspace/kubernetes-playbook/v1.13.0/files/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.13.0/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.13.0/files/file/admin.conf\n\t/workspace/kubernetes-playbook/v1.13.0/file/admin.conf"}
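The search paths in the error show that the `copy k8s admin.conf` task expects `file/admin.conf` inside the playbook workspace; that file is produced earlier by the `fetch admin.conf` task on the first master, so the usual cause is that earlier task failing or being skipped. A hypothetical helper (the function name is made up; the two paths are taken verbatim from the error message) that mirrors the search order:

```shell
# Hypothetical helper: reproduce Ansible's search order from the error.
admin_conf_path() {
  workdir=$1
  for p in "$workdir/files/file/admin.conf" "$workdir/file/admin.conf"; do
    if [ -f "$p" ]; then
      echo "$p"
      return 0
    fi
  done
  return 1
}

# Inside the deploy container one would run, e.g.:
#   admin_conf_path /workspace/kubernetes-playbook/v1.13.0 \
#     || echo "admin.conf missing - rerun the master setup so 'fetch admin.conf' succeeds"
```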

Harbor server: harbor-db and harbor-adminserver fail to start after installation

Where do these two services keep their logs? Any pointers on how to troubleshoot this would be appreciated; many thanks for your contributions.
153b8824f2af goharbor/harbor-db:v1.6.2 "/entrypoint.sh po..." 20 minutes ago Exited (255) 14 minutes ago harbor-db
d3642b4c541e goharbor/harbor-adminserver:v1.6.2 "/harbor/start.sh" 20 minutes ago Exited (137) 14 minutes ago harbor-adminserver
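On Harbor 1.x the components log through the harbor-log container to a directory on the Harbor host, `/var/log/harbor` by default (configurable in harbor.cfg). Because the containers use the syslog log driver, `docker logs harbor-db` can come back empty, so look at the collected files instead. A hedged sketch of where to look:

```shell
# Default Harbor 1.x log directory on the Harbor host; adjust if
# harbor.cfg changed it.
LOGDIR=/var/log/harbor
if [ -d "$LOGDIR" ]; then
  ls "$LOGDIR"   # one file per component, e.g. adminserver.log
  # then read the relevant files, e.g.:
  #   tail -n 100 /var/log/harbor/adminserver.log
  status=found-harbor-logs
else
  status=no-harbor-logs-on-this-host
fi
echo "$status"
```

Exit code 137 on harbor-adminserver often indicates the container was killed (e.g. OOM), so checking host memory alongside the logs may help.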

Unable to access services through the (HAProxy+Keepalived) VIP

Linux localhost.localdomain 4.19.6-1.el7.elrepo.x86_64 #1 SMP Sat Dec 1 11:58:18 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
CentOS Linux release 7.6.1810 (Core)

3 masters and 2 worker nodes; everything installed successfully.
The virtual IP configured for the three masters with HAProxy+Keepalived is 192.168.99.200. A Service port (30000) was successfully exposed through NodePort and is reachable directly on port 30000 of any master or node IP, but it cannot be reached through the virtual IP at 192.168.99.200:30000: the page stays blank and keeps showing a loading state.
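One plausible explanation, assuming the haproxy instances that carry the VIP only front the apiserver port (6443): a connection to VIP:30000 lands on haproxy, which has no listener for that port, so the page hangs. A hedged workaround sketch is to add a TCP frontend/backend for the NodePort on the VIP holders (node IPs below are assumed examples; the snippet is written to a temp file here, but on the servers it would be appended to /etc/haproxy/haproxy.cfg and followed by an haproxy reload):

```shell
# Sketch only: forward the NodePort through the proxy that owns the VIP.
CFG=$(mktemp)   # in practice: /etc/haproxy/haproxy.cfg on the VIP holders
cat > "$CFG" <<'EOF'
frontend svc-30000
    bind *:30000
    mode tcp
    default_backend svc-30000-nodes

backend svc-30000-nodes
    mode tcp
    balance roundrobin
    server node1 192.168.99.11:30000 check
    server node2 192.168.99.12:30000 check
EOF
grep -c '^backend' "$CFG"   # sanity check: exactly one backend section
```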

Installation succeeded, but the Dashboard page cannot be accessed.

After the installation completed, the Dashboard page fails to load:

The VIP is listening on port 30300:
[root@k8s01 ~]# netstat -anotlp | grep 30300
tcp6 0 0 :::30300 :::* LISTEN 9339/kube-proxy off (0.00/0/0)

Opening the page shows the following (screenshot not included):

A token can be generated successfully (screenshot not included):

The installation log follows:
Nov 14 2018 01:03:49: [docker] [10.0.18.53] task: Gathering Facts - ok, message:
Nov 14 2018 01:03:49: [docker] [10.0.18.52] task: Gathering Facts - ok, message:
Nov 14 2018 01:03:49: [docker] [10.0.18.46] task: Gathering Facts - ok, message:
Nov 14 2018 01:03:49: [docker] [10.0.18.47] task: Gathering Facts - ok, message:
Nov 14 2018 01:03:50: [docker] [10.0.18.51] task: Gathering Facts - ok, message:
Nov 14 2018 01:03:50: [docker] [10.0.18.53] task: set hostname - ok, message:
Nov 14 2018 01:03:50: [docker] [10.0.18.47] task: set hostname - ok, message:
Nov 14 2018 01:03:50: [docker] [10.0.18.51] task: set hostname - ok, message:
Nov 14 2018 01:03:50: [docker] [10.0.18.52] task: set hostname - ok, message:
Nov 14 2018 01:03:50: [docker] [10.0.18.46] task: set hostname - ok, message:
Nov 14 2018 01:03:51: [docker] [10.0.18.46] task: get seed ip - ok, message:
Nov 14 2018 01:03:51: [docker] [10.0.18.47] task: get seed ip - ok, message:
Nov 14 2018 01:03:51: [docker] [10.0.18.52] task: add seed to /etc/hosts - ok, message: Block inserted
Nov 14 2018 01:03:51: [docker] [10.0.18.46] task: add seed to /etc/hosts - ok, message: Block inserted
Nov 14 2018 01:03:51: [docker] [10.0.18.53] task: add seed to /etc/hosts - ok, message: Block inserted
Nov 14 2018 01:03:51: [docker] [10.0.18.51] task: get seed ip - ok, message:
Nov 14 2018 01:03:51: [docker] [10.0.18.53] task: get seed ip - ok, message:
Nov 14 2018 01:03:51: [docker] [10.0.18.52] task: get seed ip - ok, message:
Nov 14 2018 01:03:51: [docker] [10.0.18.47] task: add seed to /etc/hosts - ok, message: Block inserted
Nov 14 2018 01:03:51: [docker] [10.0.18.51] task: add seed to /etc/hosts - ok, message: Block inserted
Nov 14 2018 01:03:53: [docker] [10.0.18.46] task: add to /etc/hosts - ok, message: All items completed
Nov 14 2018 01:03:53: [docker] [10.0.18.47] task: add to /etc/hosts - ok, message: All items completed
Nov 14 2018 01:03:53: [docker] [10.0.18.52] task: add to /etc/hosts - ok, message: All items completed
Nov 14 2018 01:03:53: [docker] [10.0.18.51] task: add to /etc/hosts - ok, message: All items completed
Nov 14 2018 01:03:53: [docker] [10.0.18.53] task: add to /etc/hosts - ok, message: All items completed
Nov 14 2018 01:03:54: [docker] [10.0.18.51] task: disabled selinux - ok, message:
Nov 14 2018 01:03:54: [docker] [10.0.18.47] task: disabled selinux - ok, message:
Nov 14 2018 01:03:54: [docker] [10.0.18.46] task: disabled selinux - ok, message:
Nov 14 2018 01:03:54: [docker] [10.0.18.53] task: disabled selinux - ok, message:
Nov 14 2018 01:03:54: [docker] [10.0.18.52] task: disabled selinux - ok, message:
Nov 14 2018 01:03:55: [docker] [10.0.18.46] task: start firewalld - ok, message:
Nov 14 2018 01:03:55: [docker] [10.0.18.51] task: start firewalld - ok, message:
Nov 14 2018 01:03:55: [docker] [10.0.18.53] task: start firewalld - ok, message:
Nov 14 2018 01:03:55: [docker] [10.0.18.52] task: start firewalld - ok, message:
Nov 14 2018 01:03:55: [docker] [10.0.18.47] task: start firewalld - ok, message:
Nov 14 2018 01:03:58: [docker] [10.0.18.47] task: config firewalld - ok, message:
Nov 14 2018 01:03:58: [docker] [10.0.18.46] task: config firewalld - ok, message:
Nov 14 2018 01:03:58: [docker] [10.0.18.52] task: config firewalld - ok, message:
Nov 14 2018 01:03:58: [docker] [10.0.18.51] task: config firewalld - ok, message:
Nov 14 2018 01:03:58: [docker] [10.0.18.53] task: config firewalld - ok, message:
Nov 14 2018 01:03:59: [docker] [10.0.18.46] task: distribute wise2c repo - ok, message: All items completed
Nov 14 2018 01:03:59: [docker] [10.0.18.51] task: distribute wise2c repo - ok, message: All items completed
Nov 14 2018 01:03:59: [docker] [10.0.18.47] task: distribute wise2c repo - ok, message: All items completed
Nov 14 2018 01:03:59: [docker] [10.0.18.52] task: distribute wise2c repo - ok, message: All items completed
Nov 14 2018 01:03:59: [docker] [10.0.18.53] task: distribute wise2c repo - ok, message: All items completed
Nov 14 2018 01:04:00: [docker] [10.0.18.46] task: distribute ipvs bootload file - ok, message: All items completed
Nov 14 2018 01:04:00: [docker] [10.0.18.52] task: distribute ipvs bootload file - ok, message: All items completed
Nov 14 2018 01:04:00: [docker] [10.0.18.51] task: distribute ipvs bootload file - ok, message: All items completed
Nov 14 2018 01:04:00: [docker] [10.0.18.53] task: distribute ipvs bootload file - ok, message: All items completed
Nov 14 2018 01:04:00: [docker] [10.0.18.47] task: distribute ipvs bootload file - ok, message: All items completed
Nov 14 2018 01:04:27: [docker] [10.0.18.51] task: install docker - ok, message: All items completed
Nov 14 2018 01:04:29: [docker] [10.0.18.47] task: install docker - ok, message: All items completed
Nov 14 2018 01:04:29: [docker] [10.0.18.52] task: install docker - ok, message: All items completed
Nov 14 2018 01:04:29: [docker] [10.0.18.53] task: install docker - ok, message: All items completed
Nov 14 2018 01:04:29: [docker] [10.0.18.46] task: install docker - ok, message: All items completed
Nov 14 2018 01:04:29: [docker] [10.0.18.47] task: distribute chrony server config - skipped, message: All items completed
Nov 14 2018 01:04:29: [docker] [10.0.18.53] task: distribute chrony server config - skipped, message: All items completed
Nov 14 2018 01:04:29: [docker] [10.0.18.51] task: distribute chrony server config - skipped, message: All items completed
Nov 14 2018 01:04:29: [docker] [10.0.18.52] task: distribute chrony server config - skipped, message: All items completed
Nov 14 2018 01:04:30: [docker] [10.0.18.46] task: distribute chrony server config - ok, message: All items completed
Nov 14 2018 01:04:30: [docker] [10.0.18.46] task: distribute chrony client config - skipped, message: All items completed
Nov 14 2018 01:04:30: [docker] [10.0.18.52] task: distribute chrony client config - ok, message: All items completed
Nov 14 2018 01:04:30: [docker] [10.0.18.51] task: distribute chrony client config - ok, message: All items completed
Nov 14 2018 01:04:30: [docker] [10.0.18.53] task: distribute chrony client config - ok, message: All items completed
Nov 14 2018 01:04:30: [docker] [10.0.18.47] task: distribute chrony client config - ok, message: All items completed
Nov 14 2018 01:04:31: [docker] [10.0.18.46] task: check docker - ok, message:
Nov 14 2018 01:04:31: [docker] [10.0.18.53] task: check docker - ok, message:
Nov 14 2018 01:04:31: [docker] [10.0.18.52] task: check docker - ok, message:
Nov 14 2018 01:04:31: [docker] [10.0.18.51] task: check docker - ok, message:
Nov 14 2018 01:04:31: [docker] [10.0.18.51] task: start chrony - ok, message:
Nov 14 2018 01:04:31: [docker] [10.0.18.53] task: start chrony - ok, message:
Nov 14 2018 01:04:31: [docker] [10.0.18.47] task: check docker - ok, message:
Nov 14 2018 01:04:31: [docker] [10.0.18.46] task: start chrony - ok, message:
Nov 14 2018 01:04:31: [docker] [10.0.18.52] task: start chrony - ok, message:
Nov 14 2018 01:04:31: [docker] [10.0.18.47] task: start chrony - ok, message:
Nov 14 2018 01:04:34: [docker] [10.0.18.52] task: clear docker config - ok, message: All items completed
Nov 14 2018 01:04:34: [docker] [10.0.18.47] task: clear docker config - ok, message: All items completed
Nov 14 2018 01:04:34: [docker] [10.0.18.46] task: clear docker config - ok, message: All items completed
Nov 14 2018 01:04:34: [docker] [10.0.18.51] task: clear docker config - ok, message: All items completed
Nov 14 2018 01:04:34: [docker] [10.0.18.46] task: distribute docker config - ok, message: All items completed
Nov 14 2018 01:04:34: [docker] [10.0.18.52] task: distribute docker config - ok, message: All items completed
Nov 14 2018 01:04:34: [docker] [10.0.18.53] task: distribute docker config - ok, message: All items completed
Nov 14 2018 01:04:34: [docker] [10.0.18.53] task: clear docker config - ok, message: All items completed
Nov 14 2018 01:04:34: [docker] [10.0.18.51] task: distribute docker config - ok, message: All items completed
Nov 14 2018 01:04:34: [docker] [10.0.18.47] task: distribute docker config - ok, message: All items completed
Nov 14 2018 01:04:37: [docker] [10.0.18.46] task: reload & restart docker - ok, message:
Nov 14 2018 01:04:37: [docker] [10.0.18.51] task: reload & restart docker - ok, message:
Nov 14 2018 01:04:37: [docker] [10.0.18.47] task: reload & restart docker - ok, message:
Nov 14 2018 01:04:37: [docker] [10.0.18.52] task: reload & restart docker - ok, message:
Nov 14 2018 01:04:37: [docker] [10.0.18.53] task: reload & restart docker - ok, message:
Nov 14 2018 01:04:38: [docker] [10.0.18.51] task: set sysctl - ok, message: All items completed
Nov 14 2018 01:04:38: [docker] [10.0.18.46] task: set sysctl - ok, message: All items completed
Nov 14 2018 01:04:38: [docker] [10.0.18.47] task: set sysctl - ok, message: All items completed
Nov 14 2018 01:04:38: [docker] [10.0.18.53] task: set sysctl - ok, message: All items completed
Nov 14 2018 01:04:38: [docker] [10.0.18.52] task: set sysctl - ok, message: All items completed
Nov 14 2018 01:04:38: [docker] [all] task: ending - ok, message:
Nov 14 2018 01:04:40: [registry] [10.0.18.53] task: Gathering Facts - ok, message:
Nov 14 2018 01:04:41: [registry] [10.0.18.53] task: make registry dir - ok, message: All items completed
Nov 14 2018 01:05:24: [registry] [10.0.18.53] task: unarchive harbor - ok, message:
Nov 14 2018 01:05:24: [registry] [10.0.18.53] task: generate registry config - ok, message: All items completed
Nov 14 2018 01:06:09: [registry] [10.0.18.53] task: launch registry - ok, message:
Nov 14 2018 01:06:09: [registry] [all] task: ending - ok, message:
Nov 14 2018 01:06:11: [etcd] [10.0.18.47] task: Gathering Facts - ok, message:
Nov 14 2018 01:06:11: [etcd] [10.0.18.51] task: Gathering Facts - ok, message:
Nov 14 2018 01:06:11: [etcd] [10.0.18.46] task: Gathering Facts - ok, message:
Nov 14 2018 01:06:11: [etcd] [10.0.18.46] task: make etcd dir - ok, message: All items completed
Nov 14 2018 01:06:11: [etcd] [10.0.18.47] task: make etcd dir - ok, message: All items completed
Nov 14 2018 01:06:11: [etcd] [10.0.18.51] task: make etcd dir - ok, message: All items completed
Nov 14 2018 01:06:15: [etcd] [10.0.18.51] task: copy etcd image - ok, message: All items completed
Nov 14 2018 01:06:15: [etcd] [10.0.18.47] task: copy etcd image - ok, message: All items completed
Nov 14 2018 01:06:15: [etcd] [10.0.18.46] task: copy etcd image - ok, message: All items completed
Nov 14 2018 01:06:26: [etcd] [10.0.18.51] task: load etcd image - ok, message:
Nov 14 2018 01:06:26: [etcd] [10.0.18.47] task: load etcd image - ok, message:
Nov 14 2018 01:06:27: [etcd] [10.0.18.46] task: load etcd image - ok, message:
Nov 14 2018 01:06:28: [etcd] [10.0.18.46] task: run etcd - ok, message:
Nov 14 2018 01:06:28: [etcd] [10.0.18.47] task: run etcd - ok, message:
Nov 14 2018 01:06:28: [etcd] [10.0.18.51] task: run etcd - ok, message:
Nov 14 2018 01:06:28: [etcd] [all] task: ending - ok, message:
Nov 14 2018 01:06:30: [kubernetes] [10.0.18.51] task: Gathering Facts - ok, message:
Nov 14 2018 01:06:30: [kubernetes] [10.0.18.46] task: Gathering Facts - ok, message:
Nov 14 2018 01:06:30: [kubernetes] [10.0.18.47] task: Gathering Facts - ok, message:
Nov 14 2018 01:06:30: [kubernetes] [10.0.18.52] task: Gathering Facts - ok, message:
Nov 14 2018 01:06:31: [kubernetes] [10.0.18.46] task: make k8s master dir - ok, message: All items completed
Nov 14 2018 01:06:31: [kubernetes] [10.0.18.47] task: make k8s master dir - ok, message: All items completed
Nov 14 2018 01:06:31: [kubernetes] [10.0.18.52] task: make k8s master dir - ok, message: All items completed
Nov 14 2018 01:06:31: [kubernetes] [10.0.18.51] task: make k8s master dir - ok, message: All items completed
Nov 14 2018 01:06:31: [kubernetes] [10.0.18.52] task: check kubernetes - ok, message:
Nov 14 2018 01:06:31: [kubernetes] [10.0.18.46] task: check kubernetes - ok, message:
Nov 14 2018 01:06:31: [kubernetes] [10.0.18.51] task: check kubernetes - ok, message:
Nov 14 2018 01:06:31: [kubernetes] [10.0.18.47] task: check kubernetes - ok, message:
Nov 14 2018 01:06:32: [kubernetes] [10.0.18.52] task: remove swapfile from /etc/fstab - ok, message:
Nov 14 2018 01:06:32: [kubernetes] [10.0.18.47] task: remove swapfile from /etc/fstab - ok, message:
Nov 14 2018 01:06:32: [kubernetes] [10.0.18.46] task: disable swap - ok, message:
Nov 14 2018 01:06:32: [kubernetes] [10.0.18.46] task: remove swapfile from /etc/fstab - ok, message:
Nov 14 2018 01:06:32: [kubernetes] [10.0.18.51] task: remove swapfile from /etc/fstab - ok, message:
Nov 14 2018 01:06:32: [kubernetes] [10.0.18.52] task: disable swap - ok, message:
Nov 14 2018 01:06:32: [kubernetes] [10.0.18.51] task: disable swap - ok, message:
Nov 14 2018 01:06:32: [kubernetes] [10.0.18.47] task: disable swap - ok, message:
Nov 14 2018 01:06:42: [kubernetes] [10.0.18.52] task: install kubernetes components - ok, message: All items completed
Nov 14 2018 01:06:42: [kubernetes] [10.0.18.51] task: install kubernetes components - ok, message: All items completed
Nov 14 2018 01:06:42: [kubernetes] [10.0.18.46] task: install kubernetes components - ok, message: All items completed
Nov 14 2018 01:06:42: [kubernetes] [10.0.18.47] task: install kubernetes components - ok, message: All items completed
Nov 14 2018 01:06:43: [kubernetes] [10.0.18.51] task: distribute kubelet config - ok, message: All items completed
Nov 14 2018 01:06:43: [kubernetes] [10.0.18.47] task: distribute kubelet config - ok, message: All items completed
Nov 14 2018 01:06:43: [kubernetes] [10.0.18.52] task: distribute kubelet config - ok, message: All items completed
Nov 14 2018 01:06:43: [kubernetes] [10.0.18.46] task: distribute kubelet config - ok, message: All items completed
Nov 14 2018 01:06:44: [kubernetes] [10.0.18.52] task: reload & enable kubelet - ok, message:
Nov 14 2018 01:06:44: [kubernetes] [10.0.18.51] task: reload & enable kubelet - ok, message:
Nov 14 2018 01:06:44: [kubernetes] [10.0.18.47] task: reload & enable kubelet - ok, message:
Nov 14 2018 01:06:44: [kubernetes] [10.0.18.46] task: reload & enable kubelet - ok, message:
Nov 14 2018 01:06:45: [kubernetes] [10.0.18.46] task: set sysctl - ok, message: All items completed
Nov 14 2018 01:06:45: [kubernetes] [10.0.18.51] task: set sysctl - ok, message: All items completed
Nov 14 2018 01:06:45: [kubernetes] [10.0.18.47] task: set sysctl - ok, message: All items completed
Nov 14 2018 01:06:45: [kubernetes] [10.0.18.52] task: set sysctl - ok, message: All items completed
Nov 14 2018 01:06:45: [kubernetes] [10.0.18.52] task: setup master - skipped, message: All items completed
Nov 14 2018 01:06:45: [kubernetes] [10.0.18.46] task: setup master - ok, message:
Nov 14 2018 01:06:45: [kubernetes] [10.0.18.51] task: setup master - ok, message:
Nov 14 2018 01:06:45: [kubernetes] [10.0.18.47] task: setup master - ok, message:
Nov 14 2018 01:06:55: [kubernetes] [10.0.18.46] task: copy k8s images - ok, message: All items completed
Nov 14 2018 01:07:28: [kubernetes] [10.0.18.46] task: load k8s images - ok, message: All items completed
Nov 14 2018 01:07:29: [kubernetes] [10.0.18.46] task: docker login - ok, message:
Nov 14 2018 01:07:33: [kubernetes] [10.0.18.46] task: tag images - ok, message: All items completed
Nov 14 2018 01:08:21: [kubernetes] [10.0.18.46] task: push images - ok, message: All items completed
Nov 14 2018 01:08:22: [kubernetes] [10.0.18.46] task: pull k8s pause images - ok, message: All items completed
Nov 14 2018 01:08:22: [kubernetes] [10.0.18.47] task: pull k8s pause images - ok, message: All items completed
Nov 14 2018 01:08:22: [kubernetes] [10.0.18.51] task: pull k8s pause images - ok, message: All items completed
Nov 14 2018 01:08:23: [kubernetes] [10.0.18.46] task: tag k8s pause images - ok, message: All items completed
Nov 14 2018 01:08:23: [kubernetes] [10.0.18.51] task: tag k8s pause images - ok, message: All items completed
Nov 14 2018 01:08:23: [kubernetes] [10.0.18.47] task: tag k8s pause images - ok, message: All items completed
Nov 14 2018 01:08:25: [kubernetes] [10.0.18.47] task: generate kubeadm config - ok, message: All items completed
Nov 14 2018 01:08:25: [kubernetes] [10.0.18.51] task: generate kubeadm config - ok, message: All items completed
Nov 14 2018 01:08:25: [kubernetes] [10.0.18.46] task: generate kubeadm config - ok, message: All items completed
Nov 14 2018 01:08:30: [kubernetes] [10.0.18.51] task: copy crt & key - ok, message: All items completed
Nov 14 2018 01:08:30: [kubernetes] [10.0.18.46] task: copy crt & key - ok, message: All items completed
Nov 14 2018 01:08:30: [kubernetes] [10.0.18.47] task: copy crt & key - ok, message: All items completed
Nov 14 2018 01:08:55: [kubernetes] [10.0.18.46] task: setup - ok, message:
Nov 14 2018 01:09:02: [kubernetes] [10.0.18.51] task: setup - ok, message:
Nov 14 2018 01:09:05: [kubernetes] [10.0.18.47] task: setup - ok, message:
Nov 14 2018 01:09:05: [kubernetes] [10.0.18.46] task: fetch admin.conf - ok, message: All items completed
Nov 14 2018 01:09:06: [kubernetes] [10.0.18.51] task: config kubectl - ok, message:
Nov 14 2018 01:09:06: [kubernetes] [10.0.18.46] task: config kubectl - ok, message:
Nov 14 2018 01:09:06: [kubernetes] [10.0.18.47] task: config kubectl - ok, message:
Nov 14 2018 01:09:07: [kubernetes] [10.0.18.46] task: apply addons - ok, message:
Nov 14 2018 01:09:07: [kubernetes] [10.0.18.52] task: setup node - ok, message:
Nov 14 2018 01:09:07: [kubernetes] [10.0.18.46] task: setup node - skipped, message:
Nov 14 2018 01:09:07: [kubernetes] [10.0.18.47] task: setup node - skipped, message:
Nov 14 2018 01:09:07: [kubernetes] [10.0.18.51] task: setup node - skipped, message:
Nov 14 2018 01:09:08: [kubernetes] [10.0.18.52] task: pull k8s pause images - ok, message: All items completed
Nov 14 2018 01:09:09: [kubernetes] [10.0.18.52] task: tag k8s pause images - ok, message: All items completed
Nov 14 2018 01:09:09: [kubernetes] [10.0.18.52] task: copy k8s admin.conf - ok, message: All items completed
Nov 14 2018 01:09:17: [kubernetes] [10.0.18.52] task: setup node - ok, message:
Nov 14 2018 01:09:17: [kubernetes] [10.0.18.52] task: restart kubelet - ok, message:
Nov 14 2018 01:09:17: [kubernetes] [all] task: ending - ok, message:
