redhat-nfvpe / kube-ansible
Spin up a Kubernetes development environment
License: Apache License 2.0
We should likely have an option for spinning up a Docker image registry.
Bring in the changes from the IPv6 branch that start the process of parameterizing the virtual machine spin-up. This lets me merge into master and iterate, so that ideally we avoid major conflicts on the IPv6 branch when rebasing.
Related to #99
We want to start being able to build our own k8s artifacts and consume them from kube-centos-ansible. The first step here is to allow auto-kube-dev to be deployed as a VM, configured via an included role, and then execute the artifact build automatically.
Some work was done in the early summer to update the RBACs used in the default Multus config (flannel + passthrough to host bridge for the two-interfaces demo). However, there has been further progress on Multus since then, and the config needs to be spun up and inspected again.
Tomo has reported that Weave isn't working with Kube 1.6.4 via the playbooks -- guessing that it's RBAC?
Find out the things that differentiate go-install in the kube-centos-ansible repo, migrate those changes into ansible-role-install-go, then consume it.
It would be really nice to add the ability to spin all of this up in an OpenStack-based cloud. We'll assume the cloud is RDO Cloud for now. It should be possible to support other OpenStack clouds without much (or any) effort.
{see Tomo's notes}
Had an error that looked like this... We might want to ensure at some point that /proc/sys/net/bridge/bridge-nf-call-iptables contains 1.
[root@kube-minion-3 centos]# kubeadm join --token 0a60cc.af9035c8f46a0912 192.168.122.14:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: hostname "kube-minion-3" could not be reached
[preflight] WARNING: hostname "kube-minion-3" lookup kube-minion-3 on 192.168.122.1:53: no such host
[preflight] Some fatal errors occurred:
/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
[root@kube-minion-3 centos]#
[root@kube-minion-3 centos]#
[root@kube-minion-3 centos]# echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@kube-minion-3 centos]#
[root@kube-minion-3 centos]#
[root@kube-minion-3 centos]# kubeadm join --token 0a60cc.af9035c8f46a0912 192.168.122.14:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: hostname "kube-minion-3" could not be reached
[preflight] WARNING: hostname "kube-minion-3" lookup kube-minion-3 on 192.168.122.1:53: no such host
[discovery] Trying to connect to API Server "192.168.122.14:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.122.14:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://192.168.122.14:6443"
[discovery] Successfully established connection with API Server "192.168.122.14:6443"
[bootstrap] Detected server version: v1.6.0
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
[root@kube-minion-3 centos]# mkdir /etc/.kubeadm-joined
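The manual fix above could be enforced by the playbooks before kubeadm join. A hedged sketch of an Ansible task (the task name and placement are mine; the sysctl module is part of core Ansible):

```yaml
# Sketch: ensure bridged traffic traverses iptables before kubeadm join.
- name: Ensure bridge-nf-call-iptables is set to 1
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: 1
    sysctl_set: yes
    state: present
```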
TASK [vm-spinup : Run spinup for each host that doesn't exist] *****************
failed: [kubehost] (item=kube-master) => {"changed": true, "cmd": "/root/spinup.sh kube-master", "delta": "0:00:00.015187", "end": "2017-05-17 15:51:51.900749", "failed": true, "item": "kube-master", "rc": 1, "start": "2017-05-17 15:51:51.885562", "stderr": "", "stdout": "[WARNING] kube-master already exists. \nNot overwriting kube-master. Exiting...", "stdout_lines": ["[WARNING] kube-master already exists. ", "Not overwriting kube-master. Exiting..."], "warnings": []}
(identical "already exists. Exiting..." failures follow for kube-minion-1, kube-minion-2, and kube-minion-3)
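One way to make that failure non-fatal is for spinup.sh to treat an already-existing domain as success rather than exiting 1. A hedged sketch (the vm_exists helper and the inline domain list are mine; the real script would consult `virsh list --all --name`):

```shell
# Returns success when the named VM appears in the given domain list.
vm_exists() {
  echo "$2" | grep -qx "$1"
}

# Stand-in for `virsh list --all --name` output, so this is runnable
# without libvirt.
existing_domains="kube-master
kube-minion-1"

spinup() {
  if vm_exists "$1" "$existing_domains"; then
    # Warn and succeed (the current script exits 1 here, which is what
    # turns the Ansible loop red).
    echo "[WARNING] $1 already exists. Skipping."
  else
    echo "Creating $1..."
  fi
}
```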
There's some cruft left behind after upgrading to gluster-kubernetes: a few templates in the role, and a few variables in all.yml.
I've set this up previously, but it usually involves looking through a bunch of different blogs and documentation locations. I found this today, which documents how to set it all up in a single place.
http://ixday.github.io/post/unprivileged_libvirt/
We should avoid running as root if we can. As part of the future migration to kvm-inventory-builder, we could adjust the plays so that they run as non-root.
Currently, you can use the increase_root_size_gigs
variable to set the size of the image used for deploying virtual machines.
However, this applies across all instantiated virtual machines. It would be ideal to make this another configuration option in the virtual_machines list, so you can specify a disk size for a single virtual machine.
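For example, the list could grow an optional per-VM key; a sketch (the node_disk_gigs key name is an assumption, not the current schema):

```yaml
virtual_machines:
  - name: kube-master
    node_disk_gigs: 40   # hypothetical per-VM override
  - name: kube-node-1    # falls back to increase_root_size_gigs
```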
We have a few playbooks that are no longer necessary as we've iterated on deploying GlusterFS, and have a better method. This issue will be used as part of some deprecated playbook removal prior to documentation updates.
It would be useful to have some debug
output that shows the captured values of currently installed binary versions of various pieces of software.
Unsure if this is an enhancement or a bug, but I have fixes pending for the IPv6 deployment process which are outstanding on top of the base lab merged earlier in #106.
Currently CRI-O with Kube 1.7.x is seemingly busted.
Had trouble coming up with that. So... it's a todo.
Firstly, thank you very much for your awesome work. I recently started working with Kubernetes, came across your repository, and it has been really helpful in creating my own Kubernetes cluster.
I picked the latest tag, v0.1.6, and started installing the cluster. Then I saw the error below:
fatal: [kube-master]: FAILED! => {
"changed": true,
"cmd": "kubeadm init --pod-network-cidr 10.244.0.0/16 > /var/log/kubeadm.init.log",
"delta": "0:00:00.825112",
"end": "2017-12-06 11:35:59.254000",
"failed": true,
"invocation": {
"module_args": {
"_raw_params": "kubeadm init --pod-network-cidr 10.244.0.0/16 > /var/log/kubeadm.init.log",
"_uses_shell": true,
"chdir": null,
"creates": "/etc/.kubeadm-complete",
"executable": null,
"removes": null,
"warn": true
}
},
"rc": 2,
"start": "2017-12-06 11:35:58.428888",
"stderr": "[preflight] WARNING: Connection to \"https://172.29.123.19:6443\" uses proxy \"http://proxy.esl.cisco.com:80\". If that is not intended, adjust your proxy settings\n[preflight] Some fatal errors occurred:\n\trunning with swap on is not supported. Please disable swap\n[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`",
"stderr_lines": [
"[preflight] WARNING: Connection to \"https://172.29.123.19:6443\" uses proxy \"http://proxy.esl.cisco.com:80\". If that is not intended, adjust your proxy settings",
"[preflight] Some fatal errors occurred:",
"\trunning with swap on is not supported. Please disable swap",
"[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`"
],
"stdout": "",
"stdout_lines": []
}
PLAY RECAP ***********************************************************************************************************************************************************
kube-master : ok=20 changed=3 unreachable=0 failed=1
kube-node-1 : ok=15 changed=3 unreachable=0 failed=0
Can anyone help me get rid of this and get my Kubernetes cluster up?
Thanks,
Praveen.
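The fatal preflight error above is about swap: kubeadm refuses to run while swap is enabled. The playbooks could disable swap before kubeadm init; a minimal sketch, assuming tasks like these were added to the install role (task names are mine):

```yaml
# Sketch: disable swap so the kubeadm preflight check passes.
- name: Disable swap immediately
  command: swapoff -a

- name: Keep swap disabled across reboots
  replace:
    path: /etc/fstab
    regexp: '^([^#].*\sswap\s.*)$'
    replace: '# \1'
```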
I tried to deploy a k8s cluster with pod_network_type set to "weave". When this is deployed, the nodes do not transition to the Ready state.
The status of the nodes is shown below:
[centos@k8s-master-1 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master-1 NotReady master 1m v1.8.1
k8s-minion-1 NotReady 32s v1.8.1
k8s-minion-2 NotReady 32s v1.8.1
I'm not sure where to start looking. Any pointers will be greatly appreciated. Thank you so much!
Blocked by #118
Create a VM that could potentially run a registry, but for now, the scope of this work is to simply create a small VM that can hold the artifacts and allow a synchronize from the build VM to the artifact VM. This adds the functionality we'll need in the next steps.
Currently we somewhat assume you're deploying into a baremetal / virtual host environment.
These assumptions break the ability to deploy with AWX, and into cloud provider environments (like an OpenStack public cloud).
Work on refactoring the playbooks here so that it's easier to run in multiple environments, and fire this off from AWX.
https://github.com/heketi/heketi/releases has a new release, 5.0.0, from about 45 days ago, which lines up with about when I started putting in work-arounds.
Found out with kind help from the heketi folks @ heketi/heketi#880 (comment)
Likely offending lines: https://github.com/redhat-nfvpe/kube-centos-ansible/blob/master/roles/glusterfs-kube-config/tasks/main.yml#L59-L62
...worth a test and update.
Likely a good idea, as Leif mentioned (paraphrased), "with the PV setup we have this might start to look more reasonable"
Assign it to me, I'll know what it means, -Leif
Find out the things that differentiate docker-install in the kube-centos-ansible repo, migrate those changes into ansible-role-install-docker, then consume it.
Need to spin up some Fedora VMs because I want to have buildah working for a whole non-docker workflow. However, buildah complains that there isn't a new enough kernel, so I'm hoping that with a Fedora kernel we can get the features that buildah is looking for.
The file vm.local.j2 has statically defined names for master and nodes sections:
[master]
kube-master
[nodes]
kube-node-1
kube-node-2
kube-node-3
This causes the generated inventory to be invalid when the user specifies VM names other than the defaults defined in all.yml. The groups and host names here should be generated from variables; it should also allow other groups, since not all VMs are k8s masters or nodes -- there could be other helper VMs being created.
I think we should split the virtual_machines variable into multiple variables so we can easily use those names. One possible way:
master_prefix: kube-master
master_count: 1
minion_prefix: kube-node
minion_count: 3
other_vms:
- my_custom_vm
- my_other_vm
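A parameterized vm.local.j2 built from those variables might look like this sketch (group names are kept from the current template; the loop structure and the [other] group are assumptions):

```jinja
[master]
{% for i in range(master_count) %}
{{ master_prefix }}{% if master_count > 1 %}-{{ i + 1 }}{% endif %}
{% endfor %}

[nodes]
{% for i in range(minion_count) %}
{{ minion_prefix }}-{{ i + 1 }}
{% endfor %}

[other]
{% for vm in other_vms %}
{{ vm }}
{% endfor %}
```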
[centos@kube-master ~]$ kubectl get svc --namespace=gluster | grep 8080 | awk '{print $2}'
ClusterIP
[centos@kube-master ~]$ kubectl get svc --namespace=gluster | grep 8080 | awk '{print $3}'
10.97.143.109
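The column-index approach above is fragile: whether the ClusterIP lands in $2 or $3 depends on which columns that kubectl version prints, which is exactly the discrepancy the two commands show. A jsonpath query pins the field directly, e.g. `kubectl get svc heketi --namespace=gluster -o jsonpath='{.spec.clusterIP}'` (the service name heketi is an assumption). The sketch below reproduces the column pitfall against canned output:

```shell
# Canned `kubectl get svc` line; real output layout varies by version.
sample='heketi   ClusterIP   10.97.143.109   <none>   8080/TCP   1d'

# Column 3 happens to be the ClusterIP in this layout...
echo "$sample" | awk '{print $3}'
# ...but in a layout without the TYPE column the IP would be column 2,
# and the awk index silently grabs the wrong field.
```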
Add option to do an install from source
We've been doing enough changes and tags that we're probably due to start adding an AUTHORS.md file and a CHANGELOG to note changes from version to version. Something we should try to flesh out soon, as we're likely due for another tag.
The README is becoming pretty huge right now, so move some of the extra scenarios and usage section out into its own separate documentation file.
- name: Copy admin.conf to kubectl user's home
shell: >
- cp -f /etc/kubernetes/admin.conf {{ kubectl_home }}/admin.conf
+ cp -f /etc/kubernetes/admin.conf {{ kubectl_home }}/.kube/admin.conf
Leif notes:
You can replace this with synchronize or copy with remote_src: true
Also, you don't need a full shell; just command would accomplish this.
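Following Leif's note, the task could become something like this sketch (file mode and ownership handling omitted):

```yaml
# Sketch: copy on the remote host itself via remote_src, no shell needed.
- name: Copy admin.conf to kubectl user's home
  copy:
    src: /etc/kubernetes/admin.conf
    dest: "{{ kubectl_home }}/.kube/admin.conf"
    remote_src: true
```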
Hi @dougbtv,
Firstly, thank you for your awesome work here. Using your playbooks I'm able to deploy a kubernetes cluster with pod network set to flannel. However, I'm unable to get the multus network plugin to work.
The minion nodes come up, but the master does not. Any help would be greatly appreciated. I have a similar problem with the "weave" pod_network_type, for which I'll open another issue.
Thanks.
Some roles make use of the yum module, but they should use the package module where possible to make this more friendly across CentOS/Fedora.
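A hedged before/after sketch (the package name is illustrative):

```yaml
# Before: tied to yum-based distros.
- name: Install docker
  yum:
    name: docker
    state: present

# After: package resolves to yum or dnf as appropriate for the host.
- name: Install docker
  package:
    name: docker
    state: present
```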
Ignore me
CRI-O and Buildah seem to be broken as of the #113 PR. I believe there are likely upstream changes to CRI-O and Buildah that impacted the functionality of these roles, rather than the change in method for installing golang.
That said, it needs a run-through to make sure it's functional.
I used to have a play where I installed packages that not everyone needs, but that I need every single time I spin up a cluster -- especially an editor and network tracing tools (e.g. tcpdump).
I'm going to create a playbook that lets you set those packages if you need them.
It's broken and doesn't search cluster.local -- so you often have to configure that by hand. At any rate...
use scratch.sh from https://gist.github.com/dougbtv/67589a7b3e443d1b4e2cdf05698f58ca
as a reference.
Doug has a set of raw notes for the IPv6 deployment. He needs to clean them up generally to use as documentation stubs for the IPv6 deployment process.
And also move those roles, and this project, under the nfvpe GitHub repos.
Per fpan, we could install binaries from, say, this release from a fork. Could be useful for the IPv6 additions.
I'm putting in a work-around of modifying the kubeadm config @ /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Work-around mentioned as:
we worked around it by removing KUBELET_NETWORK_ARGS from kubelet command line. after that kubeadm init worked fine and we were able to install canal cni plugin.
In issue @ kubernetes/kubernetes#43815
I'm adding a variable to enable this work-around, but I want to return to it.
[root@kube-master centos]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_EXTRA_ARGS
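A sketch of applying that work-around non-interactively. It runs against a throwaway copy of the drop-in so it is demonstrable standalone; on a real node you would edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf in place and then run `systemctl daemon-reload && systemctl restart kubelet` (the two-sed approach is my interpretation of "removing KUBELET_NETWORK_ARGS from the kubelet command line"):

```shell
# Work on a temporary copy of the drop-in (abbreviated to two lines).
conf="$(mktemp)"
cat > "$conf" <<'EOF'
[Service]
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_NETWORK_ARGS
EOF

# Comment out the KUBELET_NETWORK_ARGS environment line...
sed -i 's/^Environment="KUBELET_NETWORK_ARGS/# &/' "$conf"
# ...and drop the variable from the ExecStart line.
sed -i 's/ \$KUBELET_NETWORK_ARGS//' "$conf"

cat "$conf"
```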
I'm compiling notes on how I create my own development environment. Once I've got that documentation complete and tested, I'll convert it to playbooks.
When building a virtual_machines
list, we should be able to parameterize the list so that you can specify requirements for the VMs, like RAM, CPU, etc independently. This will be required for the build environment work.
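One possible shape for that parameterized list, as a sketch (all key names are assumptions, not the current schema):

```yaml
virtual_machines:
  - name: kube-master
    memory_mb: 4096
    vcpus: 4
  - name: kube-node-1
    memory_mb: 2048
    vcpus: 2
  - name: kube-builder
    memory_mb: 8192   # the build VM will want more headroom
    vcpus: 4
```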
Right now, when you go and run kube-centos-ansible (more on that later), it starts to get a little wordy. For example:
ansible-playbook -i inventory/virthost/ virthost-setup.yml
It would be a lot nicer to do something like...
kean setup
kean would be shorthand for kube-ansible, which is the proposed new name for this project, since we're no longer strictly limited to CentOS.
kean
So basically I just want to build a simple (lolz...) bash script which mostly just wraps some commands with simplified versions. The initial version should support the following:
virthost-setup.yml
kube-install.yml
vm-teardown.yml
builder.yml
Some environment variables could be loaded from ~/.kean.cfg, which would contain things like the inventory path to use. We could start expanding on things like adding other Ansible variables to pass in with -e, etc. I don't think that should be strictly necessary for the initial version.
The purpose is to make iterating a bit easier and to make the barrier of entry a bit lower for people coming to this repository. We don't necessarily want to replace everything and make something incredibly complicated, but rather just build out some shorthand from the larger commands we have now.
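A minimal sketch of what such a wrapper might look like (the subcommand names and the ~/.kean.cfg key are assumptions; echo stands in for actually executing ansible-playbook so the mapping is visible):

```shell
# Load optional config; INVENTORY is the only key sketched here.
KEAN_CFG="${KEAN_CFG:-$HOME/.kean.cfg}"
[ -f "$KEAN_CFG" ] && . "$KEAN_CFG"
INVENTORY="${INVENTORY:-inventory/virthost/}"

kean() {
  case "$1" in
    setup)    echo ansible-playbook -i "$INVENTORY" virthost-setup.yml ;;
    install)  echo ansible-playbook -i "$INVENTORY" kube-install.yml ;;
    teardown) echo ansible-playbook -i "$INVENTORY" vm-teardown.yml ;;
    build)    echo ansible-playbook -i "$INVENTORY" builder.yml ;;
    *)        echo "usage: kean {setup|install|teardown|build}" >&2
              return 1 ;;
  esac
}

kean setup
```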
...needs a few missing things. In my case, one of them is the location of the SSH key, but there may be others.
Currently when you run kube-install.yml you get all these ugly ignored iptables plays. We should fix these so that they're either handled more cleanly, or actually set up the firewall rules correctly.
TASK [kube-install : Stop iptables :(] ********************************************************************************
fatal: [kube-master]: FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: '__firewall_service' is undefined\n\nThe error appears to have been in '/home/leif/kube-centos-ansible/roles/kube-install/tasks/main.yml': line 9, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: \"Stop iptables :(\"\n ^ here\nThis one looks easy to fix. It seems that there is a value started\nwith a quote, and the YAML parser is expecting to see the line ended\nwith the same kind of quote. For instance:\n\n when: \"ok\" in result.stdout\n\nCould be written as:\n\n when: '\"ok\" in result.stdout'\n\nOr equivalently:\n\n when: \"'ok' in result.stdout\"\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: '__firewall_service' is undefined"}
...ignoring
(the same "'__firewall_service' is undefined" failure repeats, and is ignored, for kube-node-1 through kube-node-3)
TASK [kube-install : Disable iptables :(] *****************************************************************************
fatal: [kube-master]: FAILED! => {"failed": true, "msg": "The task includes an option with an undefined variable. The error was: '__firewall_service' is undefined\n\nThe error appears to have been in '/home/leif/kube-centos-ansible/roles/kube-install/tasks/main.yml': line 15, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: \"Disable iptables :(\"\n ^ here\nThis one looks easy to fix. It seems that there is a value started\nwith a quote, and the YAML parser is expecting to see the line ended\nwith the same kind of quote. For instance:\n\n when: \"ok\" in result.stdout\n\nCould be written as:\n\n when: '\"ok\" in result.stdout'\n\nOr equivalently:\n\n when: \"'ok' in result.stdout\"\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: '__firewall_service' is undefined"}
...ignoring
(the same failure repeats, and is ignored, for kube-node-1 through kube-node-3)
We shouldn't really need to have an all_vms group, because Ansible already has a built-in construct for that called all.
Update the project to use the all group instead, which will make deploying with AWX a bit simpler. Right now you need to manually add an all_vms group that includes the master and nodes groups, but none of that should really be necessary.
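That is, plays would target the built-in group directly, with no inventory changes required; a sketch:

```yaml
# Sketch: "all" is implicit in every Ansible inventory, so no
# hand-maintained all_vms group is needed.
- hosts: all
  tasks:
    - name: Reach every VM in the inventory
      ping:
```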
Need to document usage in the readme, and Doug also plans to create a blog post showing a full out scenario and description of usage.
Our documentation and blog posts could really use a refresh. Go through the documentation and create a new set of docs / clean up the README so that people can easily approach configuring this environment locally.
Assume we're going to be doing this with a virtual machine using KVM.