
kube-ansible

kube-ansible is a set of Ansible playbooks and roles that allows you to instantiate a vanilla Kubernetes cluster on (primarily) CentOS virtual machines or baremetal.

Additionally, kube-ansible includes CNI pod networking (defaulting to Flannel, with an ability to deploy Weave, Multus and OVN Kubernetes).

The purpose of kube-ansible is to provide a simple lab environment for prototyping and proofs of concept. For staging and production deployments, we recommend that you utilize OpenShift-Ansible.

Playbooks

Playbooks are located in the playbooks/ directory.

| Playbook | Inventory | Purpose |
| -------- | --------- | ------- |
| virthost-setup.yml | ./inventory/virthost/ | Provision a virtual machine host |
| bmhost-setup.yml | ./inventory/bmhost/ | Provision a bare metal host and add to group nodes |
| allhost-setup.yml | ./inventory/allhosts/ | Provision both a virtual machine host and a bare metal host |
| kube-install.yml | ./inventory/all.local.generated | Install and configure a k8s cluster using all hosts in group nodes |
| kube-install-ovn.yml | ./inventory/all.local.generated | Install and configure a k8s cluster with OVN network using all hosts in group nodes |
| kube-teardown.yml | ./inventory/all.local.generated | Runs kubeadm reset on all nodes to tear down k8s |
| vm-teardown.yml | ./inventory/virthost/ | Destroys VMs on the virtual machine host |
| fedora-python-bootstrapper.yml | ./inventory/vms.local.generated | Bootstrapping Python dependencies on cloud images |

(Table generated with markdown tables)

Overview

kube-ansible provides the means to install and set up KVM as a virtual host platform on which virtual machines can be created and used as the foundation of a Kubernetes cluster installation.

kube-ansible Topology Overview

There are generally two steps to this deployment:

  • Installation of KVM on the baremetal system and instantiation of the virtual machines
  • Kubernetes environment installation and setup on the virtual machines

Start by configuring the virthost/ inventory to match the required working environment, including the DNS name or IP address of the baremetal system that will be installed and configured as the KVM platform. Then set up the network (the KVM network, whether that is a bridged interface or a NAT interface), and define the system topology to be deployed (e.g. the number of virtual machines to instantiate).

All of the above configuration is handled by the virthost-setup.yml playbook, which performs the basic virtual host configuration, instantiates the virtual machines, and creates extra virtual disks when persistent storage with GlusterFS is configured.

During virthost-setup.yml, a vms.local.generated inventory file is created with the IP addresses and hostnames of the virtual machines. That file can then be used with the Kubernetes installation playbooks such as kube-install.yml or kube-install-ovn.yml.
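In short, the typical end-to-end flow is two playbook runs (both are covered in detail in the Usage section below):

# 1. Provision the virtual host and instantiate the VMs (generates vms.local.generated)
ansible-playbook -i inventory/virthost/ playbooks/virthost-setup.yml

# 2. Install Kubernetes on the generated inventory of VMs
ansible-playbook -i inventory/vms.local.generated playbooks/kube-install.yml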

Usage

Step 0. Install dependent roles

Install the role dependencies with ansible-galaxy. This step installs the main dependencies (such as go and docker) and also pulls in other roles that are required for setting up the VMs.

ansible-galaxy install -r requirements.yml

Step 1. Create virtual host inventory

Copy the example virthost inventory into a new directory.

cp -r inventory/examples/virthost inventory/virthost/

Modify ./inventory/virthost/virthost.inventory to set up a virtual host (if an inventory is already present, skip this step).
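As a rough sketch, a minimal virthost.inventory looks something like the following (the hostname and address are placeholders; a fuller example appears in the Example section near the end of this README):

my-virthost ansible_host=192.0.2.10 ansible_ssh_user=root

[virthost]
my-virthost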

Step 2. Override the default configuration if required

All the default configuration settings used by kube-ansible playbooks are present in the all.yml file.

For instance, by default kube-ansible creates a setup with one master and two worker nodes only (refer to the list under virtual_machines in all.yml). If an HA cluster deployment (stacked control plane nodes) is required, edit the all.yml file and change the configuration to something along the lines of:

virtual_machines:
  - name: kube-lb
    node_type: lb
  - name: kube-master1
    node_type: master
  - name: kube-master2
    node_type: master_slave
  - name: kube-master3
    node_type: master_slave
  - name: kube-node-1
    node_type: nodes
  - name: kube-node-2
    node_type: nodes

The above configuration change will create an HA cluster with three master nodes, two worker nodes, and an LB node.

You can also define separate vCPU and vRAM values for each of the virtual machines with system_ram_mb and system_cpus. The default values are set via system_default_ram_mb and system_default_cpus, which can also be overridden if you want different defaults. (The current defaults are 2048 MB and 4 vCPUs.)
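For example, a per-VM override in all.yml might look like the sketch below (the sizes are illustrative, not recommendations):

virtual_machines:
  - name: kube-master
    node_type: master
    system_ram_mb: 4096
    system_cpus: 2
  - name: kube-node-1
    node_type: nodes
    # no overrides: falls back to system_default_ram_mb / system_default_cpus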

WARNING

If you're not going to be connecting to the virtual machines from the same network as your source machine, make sure you set ssh_proxy_enabled: true and the other related ssh_proxy_... variables so that the kube-install.yml playbook works properly. See the next NOTE for more information.

Step 3. Create the virtual machines defined in all.yml

Once the default configuration has been adjusted to the setup requirements, run the following command to create the VMs and generate the final inventory with all the details required to install Kubernetes on them.

NOTE

There are a few extra variables you may wish to set against the virtual host. These can be placed in the inventory/virthost/group_vars/virthost.yml file of the local inventory configuration in inventory/virthost/ that you just created.

Primarily, this is for overriding the default variables located in the all.yml file, or overriding the default values associated with the roles.

Some common variables you may wish to override include:

  • bridge_networking: false disable bridge networking setup
  • images_directory: /home/images/kubelab override image directory location
  • spare_disk_location: /home/images/kubelab override spare disk location

The following values are used when generating the final inventory file vms.local.generated (a combined example follows the list below):

  • ssh_proxy_enabled: true proxy via jump host (remote virthost)
  • ssh_proxy_user: root username to SSH into virthost
  • ssh_proxy_host: virthost hostname or IP of virthost
  • ssh_proxy_port: 2222 port of the virthost (optional, default 22)
  • vm_ssh_key_path: /home/lmadsen/.ssh/id_vm_rsa path to local SSH key
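A minimal inventory/virthost/group_vars/virthost.yml sketch combining several of the overrides above (the paths, hostname, and key path are placeholder values taken from the examples in this README):

bridge_networking: false
images_directory: /home/images/kubelab
spare_disk_location: /home/images/kubelab
ssh_proxy_enabled: true
ssh_proxy_user: root
ssh_proxy_host: virthost
# ssh_proxy_port: 2222   # optional, defaults to 22
vm_ssh_key_path: /home/lmadsen/.ssh/id_vm_rsa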

Running on virthost directly

ansible-playbook -i inventory/virthost/ playbooks/virthost-setup.yml

Setting up virthost as a jump host

ansible-playbook -i inventory/virthost/ -e ssh_proxy_enabled=true playbooks/virthost-setup.yml

Both of the commands above will generate a new inventory file, vms.local.generated, in the inventory directory. This inventory file is used by the Kubernetes installation playbooks to install Kubernetes on the provisioned VMs. For instance, the content below is an example vms.local.generated file for a 3-master HA Kubernetes cluster:

kube-lb ansible_host=192.168.122.31
kube-master1 ansible_host=192.168.122.117
kube-master2 ansible_host=192.168.122.160
kube-master3 ansible_host=192.168.122.143
kube-node-1 ansible_host=192.168.122.53
kube-node-2 ansible_host=192.168.122.60

[lb]
kube-lb

[master]
kube-master1

[master_slave]
kube-master2
kube-master3

[nodes]
kube-node-1
kube-node-2


[all:vars]
ansible_user=centos
ansible_ssh_private_key_file=/root/.ssh/dev-server/id_vm_rsa

Tip: You can also override configuration values from the command line:

 # ansible-playbook -i inventory/virthost.inventory -e 'network_type=2nics' playbooks/virthost-setup.yml

Step 4. Install Kubernetes on the instantiated virtual machines

During the execution of Step 3, a local inventory file inventory/vms.local.generated should have been generated. This inventory file contains the virtual machines and their IP addresses. Alternatively, you can ignore the generated inventory, copy the example inventory directory from inventory/examples/vms/, and modify it to your heart's content.

This inventory file needs to be passed to the Kubernetes installation playbooks (kube-install.yml / kube-install-ovn.yml).

ansible-playbook -i inventory/vms.local.generated playbooks/kube-install.yml

NOTE

If you're not running the Ansible playbooks from the virtual host itself, it's possible to connect to the virtual machines via SSH proxy. You can do this by setting up the ssh_proxy_... variables as noted in Step 3.

Options

kube-ansible supports the following options, which can be configured in all.yml:

  • network_type (optional, string): specify the network topology for the virthost; by default each master/worker has a single interface (eth0):
    • 2nics: each master/worker node has two interfaces: eth0 and eth1
    • bridge: add a Linux bridge (cni0) and move eth0 under cni0. This is useful when using the Linux bridge CNI for the Kubernetes pod network
  • container_runtime (optional, string): specify the container runtime that Kubernetes uses. The default is Docker.
    • crio: install cri-o as the container runtime
  • crio_use_copr (optional, boolean): (cri-o only) set to true if the copr cri-o RPM is used
  • ovn_image_repo (optional, string): set the OVN container image (e.g. docker.io/ovnkube/ovn-daemonset-u:latest); override this if the image needs to be pulled from a different location
  • enable_endpointslice (optional, boolean): set to true to use EndpointSlices instead of Endpoints
  • enable_auditlog (optional, boolean): set to true to enable audit logging
  • enable_ovn_raft (optional, boolean): (kube-install-ovn.yml only) set to true to run OVN in raft mode

NOTE

If enable_ovn_raft=True, you need to build your own image from the upstream ovn-kubernetes repo, push it to your own registry account, and configure ovn_image_repo to point to that newly built image, because the current official ovn-kubernetes image does not support raft.

Tip: You can also override the all.yml configuration values from the command line. For example:

  • Install Kubernetes with cri-o runtime, each host has two NICs (eth0, eth1):
# ansible-playbook -i inventory/vms.local.generated -e 'network_type=2nics' -e 'container_runtime=crio' playbooks/kube-install.yml

Once ansible-playbook completes successfully, verify the installation by logging in to the Kubernetes master virtual machine, running kubectl get nodes, and checking that all the nodes are in the Ready state. (It may take some time for everything to coalesce and the nodes to report back to the Kubernetes master node.)

In order to log in to the nodes, you may need to ssh-add ~/.ssh/vmhost/id_vm_rsa. The private key created on the virtual host is automatically fetched to your local machine, allowing you to connect to the nodes when proxying.

Pro Tip

You can create a ~/.bashrc alias to SSH into the virtual machines if you're not executing the Ansible playbooks directly from your virtual host (i.e. from your laptop or desktop). To SSH into the nodes via SSH proxy, add the following alias:

alias ssh-virthost='ssh -o ProxyCommand="ssh -W %h:%p root@virthost"'

It's assumed you're logging into the virtual host as the root user and at hostname virthost. Change as required.

Usage: source ~/.bashrc ; ssh-virthost centos@kube-master

Step 5. Verify the installation

Once you're logged into your Kubernetes master node, run the following command to check the state of your cluster.

$ kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
kube-master1   Ready    master   18h   v1.17.3
kube-master2   Ready    master   18h   v1.17.3
kube-master3   Ready    master   18h   v1.17.3
kube-node-1    Ready    <none>   18h   v1.17.3
kube-node-2    Ready    <none>   18h   v1.17.3

Everything should be marked as ready. If so, you're good to go!

Example Setup and configuration instructions

The following instructions create an HA Kubernetes cluster with two worker nodes and OVN-Kubernetes in raft mode as the CNI. All of these instructions are executed from the physical server where the virtual machines will be created to deploy the Kubernetes cluster.

Install requirements

ansible-galaxy install -r requirements.yml

Create inventory

cp -r inventory/examples/virthost inventory/virthost/

Configure the inventory. Content of inventory/virthost/virthost.inventory:

dev-server ansible_host=127.0.0.1 ansible_ssh_user=root
[virthost]
dev-server

Configure the default values. Overridden configuration values in all.yml:

container_runtime: crio
virtual_machines:
  - name: kube-master1
    node_type: master
  - name: kube-node-1
    node_type: nodes
  - name: kube-node-2
    node_type: nodes
# Uncomment following (lb/master_slave) for k8s master HA cluster
  - name: kube-lb
    node_type: lb
  - name: kube-master2
    node_type: master_slave
  - name: kube-master3
    node_type: master_slave

ovn_image_repo: "docker.io/avishnoi/ovn-kubernetes:latest"
enable_ovn_raft: True

Create Virtual Machines for Kubernetes deployment and generate final inventory

ansible-playbook -i inventory/virthost/ playbooks/virthost-setup.yml

This playbook creates the required VMs and generates the final inventory file (vms.local.generated). virsh list shows all of the created VMs:

# virsh list
 Id    Name                           State
----------------------------------------------------
 4     kube-master1                   running
 5     kube-node-1                    running
 6     kube-node-2                    running
 7     kube-lb                        running
 8     kube-master2                   running
 9     kube-master3                   running

Generated vms.local.generated file

# cat ./inventory/vms.local.generated
kube-lb ansible_host=192.168.122.31
kube-master1 ansible_host=192.168.122.117
kube-master2 ansible_host=192.168.122.160
kube-master3 ansible_host=192.168.122.143
kube-node-1 ansible_host=192.168.122.53
kube-node-2 ansible_host=192.168.122.60

[lb]
kube-lb

[master]
kube-master1

[master_slave]
kube-master2
kube-master3

[nodes]
kube-node-1
kube-node-2


[all:vars]
ansible_user=centos
ansible_ssh_private_key_file=/root/.ssh/dev-server/id_vm_rsa

Install Kubernetes

ansible-playbook -i inventory/vms.local.generated playbooks/kube-install-ovn.yml

Verify Setup: log in to the Kubernetes master node

ssh -i ~/.ssh/dev-server/id_vm_rsa centos@kube-master1

Verify that all the nodes have joined the cluster:

[centos@kube-master1 ~]$ kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
kube-master1   Ready    master   18h   v1.17.3
kube-master2   Ready    master   18h   v1.17.3
kube-master3   Ready    master   18h   v1.17.3
kube-node-1    Ready    <none>   18h   v1.17.3
kube-node-2    Ready    <none>   18h   v1.17.3

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}

About

Initially inspired by:

kube-ansible's People

Contributors

dougbtv, fepan, jianzzha, leifmadsen, nfvpe-robot, pliurh, s1061123, tfherbert, vishnoianil


kube-ansible's Issues

IPv6: Verify deployment process

Unsure if this is an enhancement or a bug, but I have fixes pending for the IPv6 deployment process which are outstanding on top of the base lab as earlier merged in #106

Add ability to spin up in RDO Cloud

Would be really nice to add ability to spin all of this up in an OpenStack based cloud. We'll assume the cloud is RDO Cloud for now. Should be able to support other OpenStack clouds without much (or any) effort.

CRI-O updates / run-through

CRI-O and Buildah seem to be broken as of the #113 PR. I believe there are likely upstream changes to CRI-O and Buildah that may have impacted the functionality of these roles, and that it is not due to the change in method for installing golang.

That being said, it needs to have a run-through to make sure it's functional.

Add `AUTHORS.md` and `CHANGELOG` to repository

We've been doing enough changes and tags that we're probably due to start adding an AUTHORS.md file and CHANGELOG to note any changes from version to version. Something we should try and flesh out soon, as we're likely due for another tag.

CRI-O with Kube 1.7.x

Currently CRI-O with Kube 1.7.x is seemingly busted.

Had trouble coming up with that. So... it's a todo.

Bring in grouper changes

Bring in the changes from the IPv6 branch that starts on the process of parameterizing the virtual machine spin up. This allows me to merge it into master, and iterate, so that ideally we avoid major conflicts on the IPv6 branch when rebasing.

Related to #99

Refactor playbooks to work with AWX (and cloud providers)

Currently we somewhat assume you're deploying into a baremetal / virtual host environment.

These assumptions break the ability to deploy with AWX, and into cloud provider environments (like an OpenStack public cloud).

Work on refactoring the playbooks here so that it's easier to run in multiple environments, and fire this off from AWX.

IPv6: Documentation

Doug has a set of raw notes for the IPv6 deployment. He needs to clean them up generally to use as documentation stubs for the IPv6 deployment process.

Multus CRD documentation

Need to document usage in the readme, and Doug also plans to create a blog post showing a full out scenario and description of usage.

Update to latest Multus CNI

Some work was done in the early summer to update the RBACs used in the default multus config (flannel + passthrough to host bridge for two interfaces demo). However, there has been further progress on Multus and otherwise needs to be spun up and inspected.

Fix our `iptables` setup for Kubernetes

Currently when you run the kube-install.yml you get all these ugly ignored iptables plays. We should fix these so that they are either dealt with more cleanly, or even setup the firewall rules correctly.

TASK [kube-install : Stop iptables :(] ********************************************************************************
fatal: [kube-master]: FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: '__firewall_service' is undefined\n\nThe error appears to have been in '/home/leif/kube-centos-ansible/roles/kube-install/tasks/main.yml': line 9, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: \"Stop iptables :(\"\n  ^ here\nThis one looks easy to fix.  It seems that there is a value started\nwith a quote, and the YAML parser is expecting to see the line ended\nwith the same kind of quote.  For instance:\n\n    when: \"ok\" in result.stdout\n\nCould be written as:\n\n   when: '\"ok\" in result.stdout'\n\nOr equivalently:\n\n   when: \"'ok' in result.stdout\"\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: '__firewall_service' is undefined"}
...ignoring
fatal: [kube-node-1]: FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: '__firewall_service' is undefined\n\nThe error appears to have been in '/home/leif/kube-centos-ansible/roles/kube-install/tasks/main.yml': line 9, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: \"Stop iptables :(\"\n  ^ here\nThis one looks easy to fix.  It seems that there is a value started\nwith a quote, and the YAML parser is expecting to see the line ended\nwith the same kind of quote.  For instance:\n\n    when: \"ok\" in result.stdout\n\nCould be written as:\n\n   when: '\"ok\" in result.stdout'\n\nOr equivalently:\n\n   when: \"'ok' in result.stdout\"\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: '__firewall_service' is undefined"}
...ignoring
fatal: [kube-node-2]: FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: '__firewall_service' is undefined\n\nThe error appears to have been in '/home/leif/kube-centos-ansible/roles/kube-install/tasks/main.yml': line 9, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: \"Stop iptables :(\"\n  ^ here\nThis one looks easy to fix.  It seems that there is a value started\nwith a quote, and the YAML parser is expecting to see the line ended\nwith the same kind of quote.  For instance:\n\n    when: \"ok\" in result.stdout\n\nCould be written as:\n\n   when: '\"ok\" in result.stdout'\n\nOr equivalently:\n\n   when: \"'ok' in result.stdout\"\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: '__firewall_service' is undefined"}
...ignoring
fatal: [kube-node-3]: FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: '__firewall_service' is undefined\n\nThe error appears to have been in '/home/leif/kube-centos-ansible/roles/kube-install/tasks/main.yml': line 9, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: \"Stop iptables :(\"\n  ^ here\nThis one looks easy to fix.  It seems that there is a value started\nwith a quote, and the YAML parser is expecting to see the line ended\nwith the same kind of quote.  For instance:\n\n    when: \"ok\" in result.stdout\n\nCould be written as:\n\n   when: '\"ok\" in result.stdout'\n\nOr equivalently:\n\n   when: \"'ok' in result.stdout\"\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: '__firewall_service' is undefined"}
...ignoring

TASK [kube-install : Disable iptables :(] *****************************************************************************
fatal: [kube-master]: FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: '__firewall_service' is undefined\n\nThe error appears to have been in '/home/leif/kube-centos-ansible/roles/kube-install/tasks/main.yml': line 15, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: \"Disable iptables :(\"\n  ^ here\nThis one looks easy to fix.  It seems that there is a value started\nwith a quote, and the YAML parser is expecting to see the line ended\nwith the same kind of quote.  For instance:\n\n    when: \"ok\" in result.stdout\n\nCould be written as:\n\n   when: '\"ok\" in result.stdout'\n\nOr equivalently:\n\n   when: \"'ok' in result.stdout\"\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: '__firewall_service' is undefined"}
...ignoring
fatal: [kube-node-1]: FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: '__firewall_service' is undefined\n\nThe error appears to have been in '/home/leif/kube-centos-ansible/roles/kube-install/tasks/main.yml': line 15, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: \"Disable iptables :(\"\n  ^ here\nThis one looks easy to fix.  It seems that there is a value started\nwith a quote, and the YAML parser is expecting to see the line ended\nwith the same kind of quote.  For instance:\n\n    when: \"ok\" in result.stdout\n\nCould be written as:\n\n   when: '\"ok\" in result.stdout'\n\nOr equivalently:\n\n   when: \"'ok' in result.stdout\"\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: '__firewall_service' is undefined"}
...ignoring
fatal: [kube-node-2]: FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: '__firewall_service' is undefined\n\nThe error appears to have been in '/home/leif/kube-centos-ansible/roles/kube-install/tasks/main.yml': line 15, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: \"Disable iptables :(\"\n  ^ here\nThis one looks easy to fix.  It seems that there is a value started\nwith a quote, and the YAML parser is expecting to see the line ended\nwith the same kind of quote.  For instance:\n\n    when: \"ok\" in result.stdout\n\nCould be written as:\n\n   when: '\"ok\" in result.stdout'\n\nOr equivalently:\n\n   when: \"'ok' in result.stdout\"\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: '__firewall_service' is undefined"}
...ignoring
fatal: [kube-node-3]: FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: '__firewall_service' is undefined\n\nThe error appears to have been in '/home/leif/kube-centos-ansible/roles/kube-install/tasks/main.yml': line 15, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: \"Disable iptables :(\"\n  ^ here\nThis one looks easy to fix.  It seems that there is a value started\nwith a quote, and the YAML parser is expecting to see the line ended\nwith the same kind of quote.  For instance:\n\n    when: \"ok\" in result.stdout\n\nCould be written as:\n\n   when: '\"ok\" in result.stdout'\n\nOr equivalently:\n\n   when: \"'ok' in result.stdout\"\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: '__firewall_service' is undefined"}

Atomic host compatibility

Likely a good idea, as Leif mentioned (paraphrased), "with the PV setup we have this might start to look more reasonable"

Allow disk size to be independently sized

Currently, you can use the increase_root_size_gigs variable to set the size of the image used for deploying virtual machines.

However, this goes across all virtual machines instantiated. It would be ideal to make this another configuration option to the virtual_machines list, so you can specify a disk size for a single virtual machine.
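Purely as an illustration of the proposal (nothing here is implemented), a per-VM option could take a shape like the sketch below, where disk_size_gigs is a hypothetical key sitting alongside the existing increase_root_size_gigs default:

virtual_machines:
  - name: kube-master
    node_type: master
    disk_size_gigs: 40    # hypothetical per-VM override
  - name: kube-node-1
    node_type: nodes      # would fall back to increase_root_size_gigs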

kubeadm init failed with swap on error

Firstly, thank you very much guys for your awesome work. I recently started working on Kubernetes, came across your repository, and kucean has been really helpful in creating my own Kubernetes cluster.

I picked the latest tag v0.1.6 and started the process of installing the cluster. Then I hit the error below:

fatal: [kube-master]: FAILED! => {
    "changed": true, 
    "cmd": "kubeadm init  --pod-network-cidr 10.244.0.0/16 > /var/log/kubeadm.init.log", 
    "delta": "0:00:00.825112", 
    "end": "2017-12-06 11:35:59.254000", 
    "failed": true, 
    "invocation": {
        "module_args": {
            "_raw_params": "kubeadm init  --pod-network-cidr 10.244.0.0/16 > /var/log/kubeadm.init.log", 
            "_uses_shell": true, 
            "chdir": null, 
            "creates": "/etc/.kubeadm-complete", 
            "executable": null, 
            "removes": null, 
            "warn": true
        }
    }, 
    "rc": 2, 
    "start": "2017-12-06 11:35:58.428888", 
    "stderr": "[preflight] WARNING: Connection to \"https://172.29.123.19:6443\" uses proxy \"http://proxy.esl.cisco.com:80\". If that is not intended, adjust your proxy settings\n[preflight] Some fatal errors occurred:\n\trunning with swap on is not supported. Please disable swap\n[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`", 
    "stderr_lines": [
        "[preflight] WARNING: Connection to \"https://172.29.123.19:6443\" uses proxy \"http://proxy.esl.cisco.com:80\". If that is not intended, adjust your proxy settings", 
        "[preflight] Some fatal errors occurred:", 
        "\trunning with swap on is not supported. Please disable swap", 
        "[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`"
    ], 
    "stdout": "", 
    "stdout_lines": []
}

PLAY RECAP ***********************************************************************************************************************************************************
kube-master                : ok=20   changed=3    unreachable=0    failed=1   
kube-node-1                : ok=15   changed=3    unreachable=0    failed=0   

Can anyone help me out in getting rid of this and getting my Kubernetes cluster up?

Thanks,
Praveen.
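For reference, kubeadm's preflight check fails when swap is enabled; the usual manual workaround on each node (outside of these playbooks) is roughly:

# disable swap immediately
sudo swapoff -a
# keep it disabled across reboots by commenting out swap entries in /etc/fstab
sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab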

Document how to spin up kucean lab

Our documentation and blog posts could really use a refresh. Go through the documentation and create a new set of documentation / clean up README so that people can easily approach the configuration of this environment locally.

Assume we're going to be doing this with a virtual machine using KVM.

Allow virtual machines to be parameterized

When building a virtual_machines list, we should be able to parameterize the list so that you can specify requirements for the VMs, like RAM, CPU, etc independently. This will be required for the build environment work.

Weave with 1.6.4

Tomo has reported that weave isn't working with Kube 1.6.4 with the playbooks -- guessing that it's RBAC?

Scrap the old and busted playbooks

We have a few playbooks that are no longer necessary as we've iterated on deploying GlusterFS, and have a better method. This issue will be used as part of some deprecated playbook removal prior to documentation updates.

Synchronize artifacts from build environment to VM

Blocked by #118

Create a VM that could potentially run a registry, but for now, the scope of this work is to simply create a small VM that can hold the artifacts and allow a synchronize from the build VM to the artifact VM. This adds the functionality we'll need in the next steps.

Build simple script to make running scenarios easier

Right now, when you go and run kube-centos-ansible (more on that later), it starts to get a little wordy. For example:

ansible-playbook -i inventory/virthost/ virthost-setup.yml

It would be a lot nicer to do something like...

kean setup

kean would be shorthand for kube-ansible which is the new proposed name for this project, since we're not strictly limited to centos any more.

Scope of kean

So basically I just want to build a simple (lolz...) bash script which mostly just wraps some commands with simplified versions. The initial version should support the following:

  • setup: virthost-setup.yml
  • deploy: kube-install.yml
  • teardown: vm-teardown.yml
  • build: builder.yml

Some environment variables could be loaded from ~/.kean.cfg which would contain things like the inventory path to use. We could start expanding on things like adding other Ansible variables that would be passed in like -e etc. I don't think that should be strictly necessary for the initial deploy.
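A minimal sketch of what such a wrapper could look like (the script name, config file, and subcommand-to-playbook mapping follow the proposal above; none of this exists yet, and the playbook paths assume the playbooks/ directory layout described in the README):

#!/bin/bash
# kean: a thin wrapper around the kube-ansible playbooks (proposal sketch only)
set -e

# Optional local settings, e.g. INVENTORY=inventory/virthost/
[ -f "$HOME/.kean.cfg" ] && . "$HOME/.kean.cfg"
INVENTORY="${INVENTORY:-inventory/virthost/}"

case "$1" in
  setup)    ansible-playbook -i "$INVENTORY" playbooks/virthost-setup.yml ;;
  deploy)   ansible-playbook -i inventory/vms.local.generated playbooks/kube-install.yml ;;
  teardown) ansible-playbook -i "$INVENTORY" playbooks/vm-teardown.yml ;;
  build)    ansible-playbook -i "$INVENTORY" playbooks/builder.yml ;;
  *)        echo "usage: kean {setup|deploy|teardown|build}" >&2; exit 1 ;;
esac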

Other Thoughts

The purpose is to make iterating a bit easier and to make the barrier of entry a bit lower for people coming to this repository. We don't necessarily want to replace everything and make something incredibly complicated, but rather just build out some shorthand from the larger commands we have now.

Convert use of `all_vms` group to just the `all` group

We shouldn't really need to have an all_vms group, because Ansible already has a built-in group for that called all.

Update the project to use the all group instead, which will make deploying with AWX a bit simpler. Right now you need to manually add the all_vms group that includes the master and nodes groups, but none of that should really be necessary.
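For illustration, the change is essentially swapping the custom group for Ansible's implicit one in the play headers (a sketch, not the actual diff; the role name is a placeholder):

# before: requires an explicitly maintained all_vms group in the inventory
- hosts: all_vms
  roles:
    - some-role   # placeholder

# after: Ansible's built-in "all" group already matches every inventory host
- hosts: all
  roles:
    - some-role   # placeholder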

Ability to install optional packages on hosts

I used to have a play that installed packages not everyone needs, but that I need every single time I spin up a cluster, especially an editor and network tracing tools (e.g. tcpdump).

I'm going to create a playbook that lets you specify those packages if you need them.
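A minimal sketch of such a playbook, assuming a variable named extra_packages (the variable, file name, and package list are hypothetical):

# playbooks/extra-packages.yml (hypothetical)
- hosts: all
  become: true
  vars:
    extra_packages:
      - vim
      - tcpdump
  tasks:
    - name: Install optional convenience packages
      package:
        name: "{{ extra_packages }}"
        state: present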

Deploying with pod_network_type = "multus" results in the master in "NotReady" state

Hi @dougbtv,

Firstly, thank you for your awesome work here. Using your playbooks I'm able to deploy a kubernetes cluster with pod network set to flannel. However, I'm unable to get the multus network plugin to work.

The minion nodes come up, but the master does not. Any help would be greatly appreciated. I have a similar problem with the "weave" pod_network_type as well, for which I'll open another issue.
Thanks.

vm.local.j2 has static elements

The file vm.local.j2 has statically defined names for master and nodes sections:

[master]
kube-master

[nodes]
kube-node-1
kube-node-2
kube-node-3

This causes the generated inventory to be invalid when the user specifies VM names other than the defaults defined in all.yml. The groups and host names here should be generated from variables. It should also allow other groups, as not all VMs are k8s masters or k8s nodes; there could be other helper VMs being created.

I think we should split the virtual_machines variable into multiple variables so we can easily use those names. One possible way:

master_prefix: kube-master
master_count: 1
minion_prefix: kube-node
minion_count: 3
other_vms:
  - my_custom_vm
  - my_other_vm

Deploying a k8's cluster with pod_network_type == "weave" does not work. The nodes do not transition to the ready state.

I tried to deploy a k8's cluster with the pod_network_type set to "weave". When this is deployed the nodes do not transition to the Ready state.

The status of the nodes is shown below:

[centos@k8s-master-1 ~]$ kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
k8s-master-1   NotReady   master   1m    v1.8.1
k8s-minion-1   NotReady   <none>   32s   v1.8.1
k8s-minion-2   NotReady   <none>   32s   v1.8.1

I'm not sure where to start looking. Any pointers will be greatly appreciated. Thank you so much!

Follow-up on CNI bugfix

I'm putting in a work-around of modifying the kubeadm config @ /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Work-around mentioned as:

we worked around it by removing KUBELET_NETWORK_ARGS from kubelet command line. after that kubeadm init worked fine and we were able to install canal cni plugin.

In issue @ kubernetes/kubernetes#43815

I'm putting in a variable to execute this work-around, but I want to return to it.

[root@kube-master centos]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_EXTRA_ARGS

[cleanup] replace shell for copy kube admin.conf

 - name: Copy admin.conf to kubectl user's home
   shell: >
-    cp -f /etc/kubernetes/admin.conf {{ kubectl_home }}/admin.conf
+    cp -f /etc/kubernetes/admin.conf {{ kubectl_home }}/.kube/admin.conf

Leif notes:

You can replace this with synchronize or copy with remote_src: true
Also, you don't need a full shell, just command would also accomplish this.
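A sketch of the suggested replacement using the copy module with remote_src (the destination path follows the proposed fix above):

- name: Copy admin.conf to kubectl user's home
  copy:
    src: /etc/kubernetes/admin.conf
    dest: "{{ kubectl_home }}/.kube/admin.conf"
    remote_src: true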

Run spinup will cause Ansible failure if nodes exist

TASK [vm-spinup : Run spinup for each host that doesn't exist] *****************
failed: [kubehost] (item=kube-master) => {"changed": true, "cmd": "/root/spinup.sh kube-master", "delta": "0:00:00.015187", "end": "2017-05-17 15:51:51.900749", "failed": true, "item": "kube-master", "rc": 1, "start": "2017-05-17 15:51:51.885562", "stderr": "", "stdout": "[WARNING] kube-master already exists.  \nNot overwriting kube-master. Exiting...", "stdout_lines": ["[WARNING] kube-master already exists.  ", "Not overwriting kube-master. Exiting..."], "warnings": []}
failed: [kubehost] (item=kube-minion-1) => {"changed": true, "cmd": "/root/spinup.sh kube-minion-1", "delta": "0:00:00.014157", "end": "2017-05-17 15:51:52.061309", "failed": true, "item": "kube-minion-1", "rc": 1, "start": "2017-05-17 15:51:52.047152", "stderr": "", "stdout": "[WARNING] kube-minion-1 already exists.  \nNot overwriting kube-minion-1. Exiting...", "stdout_lines": ["[WARNING] kube-minion-1 already exists.  ", "Not overwriting kube-minion-1. Exiting..."], "warnings": []}
failed: [kubehost] (item=kube-minion-2) => {"changed": true, "cmd": "/root/spinup.sh kube-minion-2", "delta": "0:00:00.014413", "end": "2017-05-17 15:51:52.221176", "failed": true, "item": "kube-minion-2", "rc": 1, "start": "2017-05-17 15:51:52.206763", "stderr": "", "stdout": "[WARNING] kube-minion-2 already exists.  \nNot overwriting kube-minion-2. Exiting...", "stdout_lines": ["[WARNING] kube-minion-2 already exists.  ", "Not overwriting kube-minion-2. Exiting..."], "warnings": []}
failed: [kubehost] (item=kube-minion-3) => {"changed": true, "cmd": "/root/spinup.sh kube-minion-3", "delta": "0:00:00.014710", "end": "2017-05-17 15:51:52.381280", "failed": true, "item": "kube-minion-3", "rc": 1, "start": "2017-05-17 15:51:52.366570", "stderr": "", "stdout": "[WARNING] kube-minion-3 already exists.  \nNot overwriting kube-minion-3. Exiting...", "stdout_lines": ["[WARNING] kube-minion-3 already exists.  ", "Not overwriting kube-minion-3. Exiting..."], "warnings": []}

Failure with joining, bridge-nf-call-iptables contents are not set to 1

Had an error that looked like this... We might want to, at some point, ensure that the contents of /proc/sys/net/bridge/bridge-nf-call-iptables are set to 1 (a sketch of such a task follows the transcript below).

[root@kube-minion-3 centos]# kubeadm join --token 0a60cc.af9035c8f46a0912 192.168.122.14:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: hostname "kube-minion-3" could not be reached
[preflight] WARNING: hostname "kube-minion-3" lookup kube-minion-3 on 192.168.122.1:53: no such host
[preflight] Some fatal errors occurred:
	/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
[root@kube-minion-3 centos]# 
[root@kube-minion-3 centos]# 
[root@kube-minion-3 centos]# echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@kube-minion-3 centos]# 
[root@kube-minion-3 centos]# 
[root@kube-minion-3 centos]# kubeadm join --token 0a60cc.af9035c8f46a0912 192.168.122.14:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: hostname "kube-minion-3" could not be reached
[preflight] WARNING: hostname "kube-minion-3" lookup kube-minion-3 on 192.168.122.1:53: no such host
[discovery] Trying to connect to API Server "192.168.122.14:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.122.14:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.122.14:6443"
[discovery] Successfully established connection with API Server "192.168.122.14:6443"
[bootstrap] Detected server version: v1.6.0
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
[root@kube-minion-3 centos]# mkdir /etc/.kubeadm-joined
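A sketch of an Ansible task that would enforce this before kubeadm join (using the stock sysctl module):

- name: Ensure bridged traffic passes through iptables
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: "1"
    sysctl_set: yes
    state: present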

Create version that works with Fedora

Need to spin up some Fedora VMs because I want to have buildah working for a whole non-docker workflow. However, buildah complains that there isn't a new enough kernel, so I'm hoping that with a Fedora kernel we can get the features that buildah is looking for.

Add `auto-kube-dev` as build environment via role

We want to start being able to build our own k8s artifacts and consume them from kube-centos-ansible. The first step here is to allow auto-kube-dev to be deployed as a VM, configured as an included role, and then execute the artifacts build automatically.
