
kubevirt-ssp-operator's Introduction

This project has been deprecated, and replaced by the new SSP operator.

kubevirt-ssp-operator

Operator that manages Scheduling, Scale and Performance addons for KubeVirt

Prerequisites

  • A Golang environment with GOPATH correctly set
  • Docker (used for creating container images, etc.) with access for the current user
  • A Kubernetes 1.13 / OpenShift 4 instance
  • Operator SDK

Installation instructions

The kubevirt-ssp-operator requires an OpenShift cluster to run properly. Installation on vanilla Kubernetes is technically possible, but many features will not work, so this option is unsupported.

Using HCO

The Hyperconverged Cluster Operator automatically installs the SSP operator when it deploys, so you can simply install the HCO on your OpenShift cluster.

Manual installation steps

We assume you install the kubevirt-ssp-operator AFTER KubeVirt is successfully deployed on the same cluster.

You can install the kubevirt-ssp-operator using the provided manifests.

Assuming you work from the operator source tree root:

cd kubevirt-ssp-operator

Select the namespace you want to install the operator into. If unsure, the kubevirt namespace is a safe choice:

export NAMESPACE=kubevirt

To avoid GitHub API throttling, if you have a GitHub personal access token, you should set it now, like so:

export GITHUB_TOKEN=...

Now, run in your repo:

hack/install-operator.sh $NAMESPACE
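
To confirm the installation, check that the operator pod is up in the chosen namespace. A quick, label-agnostic check (assuming the oc client; kubectl works the same way):

oc get pods -n $NAMESPACE | grep kubevirt-ssp-operator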

Generate the YAML manifests

The generation process requires the operator SDK binary. If it is present in your PATH, the process will use it; otherwise it will be downloaded from the release channel. To regenerate the manifests, run in your repo:

make manifests

Find the manifests in the _out directory once done.

Manifests, CSV generator and HCO integration

The kubevirt-ssp-operator provides three ways to consume its manifests:

  1. Individual manifest files under deploy and deploy/crds. Please note that deploy/olm-catalog is autogenerated. These are the authoritative manifests that the developers maintain and enhance, for example when they add features. End users should not, however, consume them directly. If you want to install the kubevirt-ssp-operator without HCO, please use the hack/install-operator.sh helper.
  2. CSVs and manifests to be used with HCO. HCO is the preferred way to deploy the kubevirt-ssp-operator. We provide the CSV file and package file (and everything else), autogenerated at release time using make manifests. These manifests are available for download on the release page. This step creates both the unversioned CSV (see below) and the versioned, per-release CSV; the reason for the unversioned one is explained in the next item.
  3. Recent HCO versions prefer to consume the CSV from the container images, using the org.kubevirt.hco.csv-generator.v1 LABEL entry point (see the example after this list). The HCO build process invokes the script on each container, expecting an up-to-date, dynamically generated CSV as output, and takes care of merging. In the kubevirt-ssp-operator case, this CSV is created from the unversioned CSV generated in step #2 above.
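
For reference, the CSV generator entry point advertised by an image can be read back with docker inspect; the image below is just an example taken from the issues further down this page:

docker inspect --format '{{ index .Config.Labels "org.kubevirt.hco.csv-generator.v1" }}' quay.io/fromani/kubevirt-ssp-operator-container:v1.2.1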

Functional tests

We use traviskube to integrate the functional tests with Travis. Make sure you initialize the submodules. In your repo:

git submodule init
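
Depending on your git version and how the repo was cloned, you may also need to fetch the submodule contents:

git submodule update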

To run the functional tests, you will need access to an OKD cluster. The Travis script sets up a Minishift environment from scratch to run the tests in.

Once the environment is set up, you can run the tests from your repo with:

make functests

kubevirt-ssp-operator's People

Contributors

akrejcir, davidvossel, ffromani, irosenzw, ksimon1, marsik, omeryahud, petrkotas, phoracek, rmohr, shwetaap, tiraboschi, yanirq


kubevirt-ssp-operator's Issues

Random failures on "Wait for the node-labeller to start" task on KubevirtNodeLabeller role

Not sure how reproducible this is, but on the HCO CI we got an error on the "Wait for the node-labeller to start" task in the KubevirtNodeLabeller role.

Here are the logs:

{"level":"info","ts":1575298868.440825,"logger":"runner","msg":"Ansible-runner exited successfully","job":"6458151258009196224","name":"template-validator-hyperconverged-cluster","namespace":"kubevirt-hyperconverged"}
{"level":"info","ts":1575298871.0519183,"logger":"proxy","msg":"Cache miss: template.openshift.io/v1, Kind=Template, openshift/centos8-desktop-small-v0.7.0"}
{"level":"info","ts":1575298871.773912,"logger":"proxy","msg":"Injecting owner reference"}
{"level":"info","ts":1575298873.1728268,"logger":"logging_event_handler","msg":"[playbook task]","name":"template-validator-hyperconverged-cluster","namespace":"kubevirt-hyperconverged","gvk":"kubevirt.io/v1, Kind=KubevirtTemplateValidator","event_type":"playbook_on_task_start","job":"7708864119878773697","EventData.Name":"Gathering Facts"}
{"level":"info","ts":1575298873.3979034,"logger":"proxy","msg":"Cache miss: apps/v1, Kind=DaemonSet, kubevirt-hyperconverged/kubevirt-node-labeller"}
{"level":"info","ts":1575298873.663067,"logger":"proxy","msg":"Cache miss: template.openshift.io/v1, Kind=Template, openshift/centos8-desktop-tiny-v0.7.0"}
{"level":"info","ts":1575298876.2454655,"logger":"proxy","msg":"Injecting owner reference"}
{"level":"error","ts":1575298876.3249695,"logger":"logging_event_handler","msg":"","name":"node-labeller-hyperconverged-cluster","namespace":"kubevirt-hyperconverged","gvk":"kubevirt.io/v1, Kind=KubevirtNodeLabellerBundle","event_type":"runner_on_failed","job":"5793712081029260939","EventData.Task":"Wait for the node-labeller to start","EventData.TaskArgs":"","EventData.FailedTaskPath":"/opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:44","error":"[playbook task failed]","stacktrace":"github.com/operator-framework/operator-sdk/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\tsrc/github.com/operator-framework/operator-sdk/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/operator-framework/operator-sdk/pkg/ansible/events.loggingEventHandler.Handle\n\tsrc/github.com/operator-framework/operator-sdk/pkg/ansible/events/log_events.go:84"}
{"level":"error","ts":1575298876.5155296,"logger":"runner","msg":"\u001b[0;34mansible-playbook 2.7.10\u001b[0m\r\n\u001b[0;34m  config file = /etc/ansible/ansible.cfg\u001b[0m\r\n\u001b[0;34m  configured module search path = [u'/usr/share/ansible/openshift']\u001b[0m\r\n\u001b[0;34m  ansible python module location = /usr/lib/python2.7/site-packages/ansible\u001b[0m\r\n\u001b[0;34m  executable location = /usr/bin/ansible-playbook\u001b[0m\r\n\u001b[0;34m  python version = 2.7.5 (default, Oct 30 2018, 23:45:53) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]\u001b[0m\r\n\n\u001b[0;34mUsing /etc/ansible/ansible.cfg as config file\u001b[0m\r\n\n\u001b[0;34m/tmp/ansible-operator/runner/kubevirt.io/v1/KubevirtNodeLabellerBundle/kubevirt-hyperconverged/node-labeller-hyperconverged-cluster/inventory/hosts did not meet host_list requirements, check plugin documentation if this is unexpected\u001b[0m\r\n\n\u001b[0;34m/tmp/ansible-operator/runner/kubevirt.io/v1/KubevirtNodeLabellerBundle/kubevirt-hyperconverged/node-labeller-hyperconverged-cluster/inventory/hosts did not meet script requirements, check plugin documentation if this is unexpected\u001b[0m\r\n\n\u001b[0;34m/tmp/ansible-operator/runner/kubevirt.io/v1/KubevirtNodeLabellerBundle/kubevirt-hyperconverged/node-labeller-hyperconverged-cluster/inventory/hosts did not meet script requirements, check plugin documentation if this is unexpected\u001b[0m\n\r\nPLAYBOOK: kubevirtnodelabeller.yaml ********************************************\n\u001b[0;34m1 plays in /opt/ansible/kubevirtnodelabeller.yaml\u001b[0m\n\r\nPLAY [localhost] ***************************************************************\n\r\nTASK [Gathering Facts] *********************************************************\r\n\u001b[1;30mtask path: /opt/ansible/kubevirtnodelabeller.yaml:1\u001b[0m\n\u001b[0;32mok: [localhost]\u001b[0m\n\u001b[0;34mMETA: ran handlers\u001b[0m\n\r\nTASK [KubevirtCircuitBreaker : Extract the CR info] ****************************\r\n\u001b[1;30mtask path: /opt/ansible/roles/KubevirtCircuitBreaker/tasks/main.yml:3\u001b[0m\n\u001b[0;32mok: [localhost] => {\"ansible_facts\": {\"cr_info\": {\"apiVersion\": \"kubevirt.io/v1\", \"kind\": \"KubevirtNodeLabellerBundle\", \"metadata\": {\"creationTimestamp\": \"2019-12-02T14:55:27Z\", \"generation\": 1, \"labels\": {\"app\": \"hyperconverged-cluster\"}, \"name\": \"node-labeller-hyperconverged-cluster\", \"namespace\": \"kubevirt-hyperconverged\", \"ownerReferences\": [{\"apiVersion\": \"hco.kubevirt.io/v1alpha1\", \"blockOwnerDeletion\": true, \"controller\": true, \"kind\": \"HyperConverged\", \"name\": \"hyperconverged-cluster\", \"uid\": \"c68526b6-1513-11ea-845f-664f163f5f0f\"}], \"resourceVersion\": \"34288\", \"selfLink\": \"/apis/kubevirt.io/v1/namespaces/kubevirt-hyperconverged/kubevirtnodelabellerbundles/node-labeller-hyperconverged-cluster\", \"uid\": \"c6bc8118-1513-11ea-845f-664f163f5f0f\"}, \"spec\": {\"useKVM\": true}, \"status\": {\"conditions\": [{\"ansibleResult\": {\"changed\": 2, \"completion\": \"2019-12-02T14:55:53.248099\", \"failures\": 1, \"ok\": 5, \"skipped\": 0}, \"lastTransitionTime\": \"2019-12-02T14:55:53Z\", \"message\": \"Failed to patch object: {\\\"kind\\\":\\\"Status\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"metadata\\\":{},\\\"status\\\":\\\"Failure\\\",\\\"message\\\":\\\"Operation cannot be fulfilled on daemonsets.apps \\\\\\\"kubevirt-node-labeller\\\\\\\": the object has been modified; please apply your changes to the latest version and try 
again\\\",\\\"reason\\\":\\\"Conflict\\\",\\\"details\\\":{\\\"name\\\":\\\"kubevirt-node-labeller\\\",\\\"group\\\":\\\"apps\\\",\\\"kind\\\":\\\"daemonsets\\\"},\\\"code\\\":409}\\n\", \"reason\": \"Failed\", \"status\": \"False\", \"type\": \"Failure\"}, {\"lastTransitionTime\": \"2019-12-02T14:55:54Z\", \"message\": \"Running reconciliation\", \"reason\": \"Running\", \"status\": \"True\", \"type\": \"Running\"}]}}}, \"changed\": false}\u001b[0m\n\r\nTASK [KubevirtCircuitBreaker : Extract the disable info] ***********************\r\n\u001b[1;30mtask path: /opt/ansible/roles/KubevirtCircuitBreaker/tasks/main.yml:6\u001b[0m\n\u001b[0;32mok: [localhost] => {\"ansible_facts\": {\"is_paused\": false}, \"changed\": false}\u001b[0m\n\u001b[0;34mMETA: \u001b[0m\n\r\nTASK [KubevirtNodeLabeller : Create the node labeller roles] *******************\r\n\u001b[1;30mtask path: /opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:2\u001b[0m\n\u001b[0;32mok: [localhost] => (item=apiVersion: v1\u001b[0m\r\n\u001b[0;32mkind: ServiceAccount\u001b[0m\r\n\u001b[0;32mmetadata:\u001b[0m\r\n\u001b[0;32m  name: kubevirt-node-labeller\u001b[0m\r\n\u001b[0;32m  namespace: kubevirt-hyperconverged) => {\"changed\": false, \"item\": \"apiVersion: v1\\nkind: ServiceAccount\\nmetadata:\\n  name: kubevirt-node-labeller\\n  namespace: kubevirt-hyperconverged\", \"method\": \"patch\", \"result\": {\"apiVersion\": \"v1\", \"imagePullSecrets\": [{\"name\": \"kubevirt-node-labeller-dockercfg-sdxjq\"}], \"kind\": \"ServiceAccount\", \"metadata\": {\"creationTimestamp\": \"2019-12-02T14:55:36Z\", \"name\": \"kubevirt-node-labeller\", \"namespace\": \"kubevirt-hyperconverged\", \"ownerReferences\": [{\"apiVersion\": \"kubevirt.io/v1\", \"kind\": \"KubevirtNodeLabellerBundle\", \"name\": \"node-labeller-hyperconverged-cluster\", \"uid\": \"c6bc8118-1513-11ea-845f-664f163f5f0f\"}], \"resourceVersion\": \"34030\", \"selfLink\": \"/api/v1/namespaces/kubevirt-hyperconverged/serviceaccounts/kubevirt-node-labeller\", \"uid\": \"cc8e24b6-1513-11ea-845f-664f163f5f0f\"}, \"secrets\": [{\"name\": \"kubevirt-node-labeller-dockercfg-sdxjq\"}, {\"name\": \"kubevirt-node-labeller-token-dzc5c\"}]}}\u001b[0m\n\u001b[0;32mok: [localhost] => (item=apiVersion: rbac.authorization.k8s.io/v1\u001b[0m\r\n\u001b[0;32mkind: ClusterRole\u001b[0m\r\n\u001b[0;32mmetadata:\u001b[0m\r\n\u001b[0;32m  name: kubevirt-node-labeller\u001b[0m\r\n\u001b[0;32mrules:\u001b[0m\r\n\u001b[0;32m- apiGroups:\u001b[0m\r\n\u001b[0;32m  - \"\"\u001b[0m\r\n\u001b[0;32m  resources:\u001b[0m\r\n\u001b[0;32m  - nodes\u001b[0m\r\n\u001b[0;32m  verbs:\u001b[0m\r\n\u001b[0;32m  - get\u001b[0m\r\n\u001b[0;32m  - patch\u001b[0m\r\n\u001b[0;32m  - update\u001b[0m\r\n\u001b[0;32m- apiGroups:\u001b[0m\r\n\u001b[0;32m  - security.openshift.io\u001b[0m\r\n\u001b[0;32m  resources:\u001b[0m\r\n\u001b[0;32m  - securitycontextconstraints\u001b[0m\r\n\u001b[0;32m  verbs:\u001b[0m\r\n\u001b[0;32m  - use\u001b[0m\r\n\u001b[0;32m  resourceNames:\u001b[0m\r\n\u001b[0;32m  - privileged) => {\"changed\": false, \"item\": \"apiVersion: rbac.authorization.k8s.io/v1\\nkind: ClusterRole\\nmetadata:\\n  name: kubevirt-node-labeller\\nrules:\\n- apiGroups:\\n  - \\\"\\\"\\n  resources:\\n  - nodes\\n  verbs:\\n  - get\\n  - patch\\n  - update\\n- apiGroups:\\n  - security.openshift.io\\n  resources:\\n  - securitycontextconstraints\\n  verbs:\\n  - use\\n  resourceNames:\\n  - privileged\", \"method\": \"patch\", \"result\": {\"apiVersion\": \"rbac.authorization.k8s.io/v1\", \"kind\": \"ClusterRole\", 
\"metadata\": {\"annotations\": {\"operator-sdk/primary-resource\": \"kubevirt-hyperconverged/node-labeller-hyperconverged-cluster\", \"operator-sdk/primary-resource-type\": \"KubevirtNodeLabellerBundle.kubevirt.io\"}, \"creationTimestamp\": \"2019-12-02T14:55:38Z\", \"name\": \"kubevirt-node-labeller\", \"resourceVersion\": \"34047\", \"selfLink\": \"/apis/rbac.authorization.k8s.io/v1/clusterroles/kubevirt-node-labeller\", \"uid\": \"cd6d3dfd-1513-11ea-845f-664f163f5f0f\"}, \"rules\": [{\"apiGroups\": [\"\"], \"resources\": [\"nodes\"], \"verbs\": [\"get\", \"patch\", \"update\"]}, {\"apiGroups\": [\"security.openshift.io\"], \"resourceNames\": [\"privileged\"], \"resources\": [\"securitycontextconstraints\"], \"verbs\": [\"use\"]}]}}\u001b[0m\n\u001b[0;32mok: [localhost] => (item=apiVersion: rbac.authorization.k8s.io/v1\u001b[0m\r\n\u001b[0;32mkind: ClusterRoleBinding\u001b[0m\r\n\u001b[0;32mmetadata:\u001b[0m\r\n\u001b[0;32m  name: kubevirt-node-labeller\u001b[0m\r\n\u001b[0;32mroleRef:\u001b[0m\r\n\u001b[0;32m  apiGroup: rbac.authorization.k8s.io\u001b[0m\r\n\u001b[0;32m  kind: ClusterRole\u001b[0m\r\n\u001b[0;32m  name: kubevirt-node-labeller\u001b[0m\r\n\u001b[0;32msubjects:\u001b[0m\r\n\u001b[0;32m- kind: ServiceAccount\u001b[0m\r\n\u001b[0;32m  name: kubevirt-node-labeller\u001b[0m\r\n\u001b[0;32m  namespace: kubevirt-hyperconverged) => {\"changed\": false, \"item\": \"apiVersion: rbac.authorization.k8s.io/v1\\nkind: ClusterRoleBinding\\nmetadata:\\n  name: kubevirt-node-labeller\\nroleRef:\\n  apiGroup: rbac.authorization.k8s.io\\n  kind: ClusterRole\\n  name: kubevirt-node-labeller\\nsubjects:\\n- kind: ServiceAccount\\n  name: kubevirt-node-labeller\\n  namespace: kubevirt-hyperconverged\", \"method\": \"patch\", \"result\": {\"apiVersion\": \"rbac.authorization.k8s.io/v1\", \"kind\": \"ClusterRoleBinding\", \"metadata\": {\"annotations\": {\"operator-sdk/primary-resource\": \"kubevirt-hyperconverged/node-labeller-hyperconverged-cluster\", \"operator-sdk/primary-resource-type\": \"KubevirtNodeLabellerBundle.kubevirt.io\"}, \"creationTimestamp\": \"2019-12-02T14:55:39Z\", \"name\": \"kubevirt-node-labeller\", \"resourceVersion\": \"34095\", \"selfLink\": \"/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubevirt-node-labeller\", \"uid\": \"ce35740f-1513-11ea-845f-664f163f5f0f\"}, \"roleRef\": {\"apiGroup\": \"rbac.authorization.k8s.io\", \"kind\": \"ClusterRole\", \"name\": \"kubevirt-node-labeller\"}, \"subjects\": [{\"kind\": \"ServiceAccount\", \"name\": \"kubevirt-node-labeller\", \"namespace\": \"kubevirt-hyperconverged\"}]}}\u001b[0m\n\u001b[0;32mok: [localhost] => (item=apiVersion: v1\u001b[0m\r\n\u001b[0;32mkind: ConfigMap\u001b[0m\r\n\u001b[0;32mmetadata:\u001b[0m\r\n\u001b[0;32m  name: kubevirt-cpu-plugin-configmap\u001b[0m\r\n\u001b[0;32m  namespace: kubevirt-hyperconverged\u001b[0m\r\n\u001b[0;32mdata:\u001b[0m\r\n\u001b[0;32m  cpu-plugin-configmap.yaml: |-\u001b[0m\r\n\u001b[0;32m    obsoleteCPUs:\u001b[0m\r\n\u001b[0;32m      - \"486\"\u001b[0m\r\n\u001b[0;32m      - \"pentium\"\u001b[0m\r\n\u001b[0;32m      - \"pentium2\"\u001b[0m\r\n\u001b[0;32m      - \"pentium3\"\u001b[0m\r\n\u001b[0;32m      - \"pentiumpro\"\u001b[0m\r\n\u001b[0;32m      - \"coreduo\"\u001b[0m\r\n\u001b[0;32m      - \"n270\"\u001b[0m\r\n\u001b[0;32m      - \"core2duo\"\u001b[0m\r\n\u001b[0;32m      - \"Conroe\"\u001b[0m\r\n\u001b[0;32m      - \"athlon\"\u001b[0m\r\n\u001b[0;32m      - \"phenom\"\u001b[0m\r\n\u001b[0;32m    minCPU: 
\"Penryn\"\u001b[0m\r\n\u001b[0;32m\u001b[0m\r\n\u001b[0;32m) => {\"changed\": false, \"item\": \"apiVersion: v1\\nkind: ConfigMap\\nmetadata:\\n  name: kubevirt-cpu-plugin-configmap\\n  namespace: kubevirt-hyperconverged\\ndata:\\n  cpu-plugin-configmap.yaml: |-\\n    obsoleteCPUs:\\n      - \\\"486\\\"\\n      - \\\"pentium\\\"\\n      - \\\"pentium2\\\"\\n      - \\\"pentium3\\\"\\n      - \\\"pentiumpro\\\"\\n      - \\\"coreduo\\\"\\n      - \\\"n270\\\"\\n      - \\\"core2duo\\\"\\n      - \\\"Conroe\\\"\\n      - \\\"athlon\\\"\\n      - \\\"phenom\\\"\\n    minCPU: \\\"Penryn\\\"\\n\\n\", \"method\": \"patch\", \"result\": {\"apiVersion\": \"v1\", \"data\": {\"cpu-plugin-configmap.yaml\": \"obsoleteCPUs:\\n  - \\\"486\\\"\\n  - \\\"pentium\\\"\\n  - \\\"pentium2\\\"\\n  - \\\"pentium3\\\"\\n  - \\\"pentiumpro\\\"\\n  - \\\"coreduo\\\"\\n  - \\\"n270\\\"\\n  - \\\"core2duo\\\"\\n  - \\\"Conroe\\\"\\n  - \\\"athlon\\\"\\n  - \\\"phenom\\\"\\nminCPU: \\\"Penryn\\\"\"}, \"kind\": \"ConfigMap\", \"metadata\": {\"creationTimestamp\": \"2019-12-02T14:55:41Z\", \"name\": \"kubevirt-cpu-plugin-configmap\", \"namespace\": \"kubevirt-hyperconverged\", \"ownerReferences\": [{\"apiVersion\": \"kubevirt.io/v1\", \"kind\": \"KubevirtNodeLabellerBundle\", \"name\": \"node-labeller-hyperconverged-cluster\", \"uid\": \"c6bc8118-1513-11ea-845f-664f163f5f0f\"}], \"resourceVersion\": \"34152\", \"selfLink\": \"/api/v1/namespaces/kubevirt-hyperconverged/configmaps/kubevirt-cpu-plugin-configmap\", \"uid\": \"cf0d7fab-1513-11ea-845f-664f163f5f0f\"}}}\u001b[0m\n\r\nTASK [KubevirtNodeLabeller : Create the node labeller daemon set] **************\r\n\u001b[1;30mtask path: /opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:8\u001b[0m\nd/cpu-plugin-configmap.yaml;\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"quay.io/kubevirt/cpu-nfd-plugin:v0.1.1\", \"imagePullPolicy\": \"Always\", \"name\": \"kubevirt-cpu-nfd-plugin\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}, {\"mountPath\": \"/config\", \"name\": \"cpu-config\"}]}, {\"args\": [\"libvirtd -d; chmod o+rw /dev/kvm; virsh domcapabilities --machine q35 --arch x86_64 --virttype kvm > /etc/kubernetes/node-feature-discovery/source.d/virsh_domcapabilities.xml; cp -r /usr/share/libvirt/cpu_map /etc/kubernetes/node-feature-discovery/source.d/\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"kubevirt/virt-launcher:v0.23.0\", \"imagePullPolicy\": \"Always\", \"name\": \"libvirt\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}, {\"env\": [{\"name\": \"NODE_NAME\", \"valueFrom\": {\"fieldRef\": {\"apiVersion\": \"v1\", \"fieldPath\": \"spec.nodeName\"}}}], \"image\": \"quay.io/kubevirt/node-labeller:v0.1.1\", \"imagePullPolicy\": \"IfNotPresent\", \"name\": \"kubevirt-node-labeller\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", 
\"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}], \"restartPolicy\": \"Always\", \"schedulerName\": \"default-scheduler\", \"securityContext\": {}, \"serviceAccount\": \"kubevirt-node-labeller\", \"serviceAccountName\": \"kubevirt-node-labeller\", \"terminationGracePeriodSeconds\": 30, \"volumes\": [{\"emptyDir\": {}, \"name\": \"nfd-\n\u001b[0;32mok: [localhost] => {\"changed\": false, \"method\": \"patch\", \"result\": {\"apiVersion\": \"apps/v1\", \"kind\": \"DaemonSet\", \"metadata\": {\"annotations\": {\"deprecated.daemonset.template.generation\": \"1\"}, \"creationTimestamp\": \"2019-12-02T14:55:49Z\", \"generation\": 1, \"labels\": {\"app\": \"kubevirt-node-labeller\"}, \"name\": \"kubevirt-node-labeller\", \"namespace\": \"kubevirt-hyperconverged\", \"ownerReferences\": [{\"apiVersion\": \"kubevirt.io/v1\", \"kind\": \"KubevirtNodeLabellerBundle\", \"name\": \"node-labeller-hyperconverged-cluster\", \"uid\": \"c6bc8118-1513-11ea-845f-664f163f5f0f\"}], \"resourceVersion\": \"34228\", \"selfLink\": \"/apis/apps/v1/namespaces/kubevirt-hyperconverged/daemonsets/kubevirt-node-labeller\", \"uid\": \"d4277cef-1513-11ea-845f-664f163f5f0f\"}, \"spec\": {\"revisionHistoryLimit\": 10, \"selector\": {\"matchLabels\": {\"app\": \"kubevirt-node-labeller\"}}, \"template\": {\"metadata\": {\"creationTimestamp\": null, \"labels\": {\"app\": \"kubevirt-node-labeller\"}}, \"spec\": {\"containers\": [{\"args\": [\"infinity\"], \"command\": [\"sleep\"], \"image\": \"quay.io/kubevirt/node-labeller:v0.1.1\", \"imagePullPolicy\": \"IfNotPresent\", \"name\": \"kubevirt-node-labeller-sleeper\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\"}], \"dnsPolicy\": \"ClusterFirst\", \"initContainers\": [{\"args\": [\"cp /usr/bin/kvm-caps-info-nfd-plugin /etc/kubernetes/node-feature-discovery/source.d/;\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"quay.io/kubevirt/kvm-info-nfd-plugin:v0.5.8\", \"imagePullPolicy\": \"Always\", \"name\": \"kvm-info-nfd-plugin\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}, {\"args\": [\"cp /plugin/dest/cpu-nfd-plugin /etc/kubernetes/node-feature-discovery/source.d/; cp /config/cpu-plugin-configmap.yaml /etc/kubernetes/node-feature-discovery/source.d/cpu-plugin-configmap.yaml;\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"quay.io/kubevirt/cpu-nfd-plugin:v0.1.1\", \"imagePullPolicy\": \"Always\", \"name\": \"kubevirt-cpu-nfd-plugin\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}, {\"mountPath\": \"/config\", \"name\": \"cpu-config\"}]}, {\"args\": [\"libvirtd -d; chmod o+rw /dev/kvm; virsh domcapabilities --machine q35 --arch x86_64 --virttype kvm > /etc/kubernetes/node-feature-discovery/source.d/virsh_domcapabilities.xml; cp -r /usr/share/libvirt/cpu_map /etc/kubernetes/node-feature-discovery/source.d/\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"kubevirt/virt-launcher:v0.23.0\", \"imagePullPolicy\": \"Always\", \"name\": \"libvirt\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, 
\"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}, {\"env\": [{\"name\": \"NODE_NAME\", \"valueFrom\": {\"fieldRef\": {\"apiVersion\": \"v1\", \"fieldPath\": \"spec.nodeName\"}}}], \"image\": \"quay.io/kubevirt/node-labeller:v0.1.1\", \"imagePullPolicy\": \"IfNotPresent\", \"name\": \"kubevirt-node-labeller\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}], \"restartPolicy\": \"Always\", \"schedulerName\": \"default-scheduler\", \"securityContext\": {}, \"serviceAccount\": \"kubevirt-node-labeller\", \"serviceAccountName\": \"kubevirt-node-labeller\", \"terminationGracePeriodSeconds\": 30, \"volumes\": [{\"emptyDir\": {}, \"name\": \"nfd-source\"}, {\"configMap\": {\"defaultMode\": 420, \"name\": \"kubevirt-cpu-plugin-configmap\"}, \"name\": \"cpu-config\"}]}}, \"updateStrategy\": {\"rollingUpdate\": {\"maxUnavailable\": 1}, \"type\": \"RollingUpdate\"}}, \"status\": {\"currentNumberScheduled\": 2, \"desiredNumberScheduled\": 2, \"numberMisscheduled\": 0, \"numberReady\": 0, \"numberUnavailable\": 2, \"observedGeneration\": 1, \"updatedNumberScheduled\": 2}}}\u001b[0m\n\r\nTASK [KubevirtNodeLabeller : Refresh node-labeller var] ************************\r\n\u001b[1;30mtask path: /opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:14\u001b[0m\nd/cpu-plugin-configmap.yaml;\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"quay.io/kubevirt/cpu-nfd-plugin:v0.1.1\", \"imagePullPolicy\": \"Always\", \"name\": \"kubevirt-cpu-nfd-plugin\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}, {\"mountPath\": \"/config\", \"name\": \"cpu-config\"}]}, {\"args\": [\"libvirtd -d; chmod o+rw /dev/kvm; virsh domcapabilities --machine q35 --arch x86_64 --virttype kvm > /etc/kubernetes/node-feature-discovery/source.d/virsh_domcapabilities.xml; cp -r /usr/share/libvirt/cpu_map /etc/kubernetes/node-feature-discovery/source.d/\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"kubevirt/virt-launcher:v0.23.0\", \"imagePullPolicy\": \"Always\", \"name\": \"libvirt\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}, {\"env\": [{\"name\": \"NODE_NAME\", \"valueFrom\": {\"fieldRef\": {\"apiVersion\": \"v1\", \"fieldPath\": \"spec.nodeName\"}}}], \"image\": \"quay.io/kubevirt/node-labeller:v0.1.1\", \"imagePullPolicy\": \"IfNotPresent\", \"name\": \"kubevirt-node-labeller\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", 
\"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}], \"restartPolicy\": \"Always\", \"schedulerName\": \"default-scheduler\", \"securityContext\": {}, \"serviceAccount\": \"kubevirt-node-labeller\", \"serviceAccountName\": \"kubevirt-node-labeller\", \"terminationGracePeriodSeconds\": 30, \"volumes\": [{\"emptyDir\": {}, \"name\": \"nfd-\nsource\"}, {\"configMap\": {\"defaultMode\": 420, \"name\": \"kubevirt-cpu-plugin-configmap\"}, \"name\": \n\u001b[0;32mok: [localhost] => {\"changed\": false, \"method\": \"patch\", \"result\": {\"apiVersion\": \"apps/v1\", \"kind\": \"DaemonSet\", \"metadata\": {\"annotations\": {\"deprecated.daemonset.template.generation\": \"1\"}, \"creationTimestamp\": \"2019-12-02T14:55:49Z\", \"generation\": 1, \"labels\": {\"app\": \"kubevirt-node-labeller\"}, \"name\": \"kubevirt-node-labeller\", \"namespace\": \"kubevirt-hyperconverged\", \"ownerReferences\": [{\"apiVersion\": \"kubevirt.io/v1\", \"kind\": \"KubevirtNodeLabellerBundle\", \"name\": \"node-labeller-hyperconverged-cluster\", \"uid\": \"c6bc8118-1513-11ea-845f-664f163f5f0f\"}], \"resourceVersion\": \"34228\", \"selfLink\": \"/apis/apps/v1/namespaces/kubevirt-hyperconverged/daemonsets/kubevirt-node-labeller\", \"uid\": \"d4277cef-1513-11ea-845f-664f163f5f0f\"}, \"spec\": {\"revisionHistoryLimit\": 10, \"selector\": {\"matchLabels\": {\"app\": \"kubevirt-node-labeller\"}}, \"template\": {\"metadata\": {\"creationTimestamp\": null, \"labels\": {\"app\": \"kubevirt-node-labeller\"}}, \"spec\": {\"containers\": [{\"args\": [\"infinity\"], \"command\": [\"sleep\"], \"image\": \"quay.io/kubevirt/node-labeller:v0.1.1\", \"imagePullPolicy\": \"IfNotPresent\", \"name\": \"kubevirt-node-labeller-sleeper\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\"}], \"dnsPolicy\": \"ClusterFirst\", \"initContainers\": [{\"args\": [\"cp /usr/bin/kvm-caps-info-nfd-plugin /etc/kubernetes/node-feature-discovery/source.d/;\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"quay.io/kubevirt/kvm-info-nfd-plugin:v0.5.8\", \"imagePullPolicy\": \"Always\", \"name\": \"kvm-info-nfd-plugin\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}, {\"args\": [\"cp /plugin/dest/cpu-nfd-plugin /etc/kubernetes/node-feature-discovery/source.d/; cp /config/cpu-plugin-configmap.yaml /etc/kubernetes/node-feature-discovery/source.d/cpu-plugin-configmap.yaml;\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"quay.io/kubevirt/cpu-nfd-plugin:v0.1.1\", \"imagePullPolicy\": \"Always\", \"name\": \"kubevirt-cpu-nfd-plugin\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}, {\"mountPath\": \"/config\", \"name\": \"cpu-config\"}]}, {\"args\": [\"libvirtd -d; chmod o+rw /dev/kvm; virsh domcapabilities --machine q35 --arch x86_64 --virttype kvm > /etc/kubernetes/node-feature-discovery/source.d/virsh_domcapabilities.xml; cp -r /usr/share/libvirt/cpu_map /etc/kubernetes/node-feature-discovery/source.d/\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"kubevirt/virt-launcher:v0.23.0\", \"imagePullPolicy\": \"Always\", \"name\": 
\"libvirt\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}, {\"env\": [{\"name\": \"NODE_NAME\", \"valueFrom\": {\"fieldRef\": {\"apiVersion\": \"v1\", \"fieldPath\": \"spec.nodeName\"}}}], \"image\": \"quay.io/kubevirt/node-labeller:v0.1.1\", \"imagePullPolicy\": \"IfNotPresent\", \"name\": \"kubevirt-node-labeller\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}], \"restartPolicy\": \"Always\", \"schedulerName\": \"default-scheduler\", \"securityContext\": {}, \"serviceAccount\": \"kubevirt-node-labeller\", \"serviceAccountName\": \"kubevirt-node-labeller\", \"terminationGracePeriodSeconds\": 30, \"volumes\": [{\"emptyDir\": {}, \"name\": \"nfd-source\"}, {\"configMap\": {\"defaultMode\": 420, \"name\": \"kubevirt-cpu-plugin-configmap\"}, \"name\": \"cpu-config\"}]}}, \"updateStrategy\": {\"rollingUpdate\": {\"maxUnavailable\": 1}, \"type\": \"RollingUpdate\"}}, \"status\": {\"currentNumberScheduled\": 2, \"desiredNumberScheduled\": 2, \"numberMisscheduled\": 0, \"numberReady\": 0, \"numberUnavailable\": 2, \"observedGeneration\": 1, \"updatedNumberScheduled\": 2}}}\u001b[0m\n\r\nTASK [KubevirtNodeLabeller : Set UseKVM condition] *****************************\r\n\u001b[1;30mtask path: /opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:20\u001b[0m\n\u001b[0;33mchanged: [localhost] => {\"changed\": true, \"result\": {\"apiVersion\": \"kubevirt.io/v1\", \"kind\": \"KubevirtNodeLabellerBundle\", \"metadata\": {\"creationTimestamp\": \"2019-12-02T14:55:27Z\", \"generation\": 1, \"labels\": {\"app\": \"hyperconverged-cluster\"}, \"name\": \"node-labeller-hyperconverged-cluster\", \"namespace\": \"kubevirt-hyperconverged\", \"ownerReferences\": [{\"apiVersion\": \"hco.kubevirt.io/v1alpha1\", \"blockOwnerDeletion\": true, \"controller\": true, \"kind\": \"HyperConverged\", \"name\": \"hyperconverged-cluster\", \"uid\": \"c68526b6-1513-11ea-845f-664f163f5f0f\"}], \"resourceVersion\": \"34464\", \"selfLink\": \"/apis/kubevirt.io/v1/namespaces/kubevirt-hyperconverged/kubevirtnodelabellerbundles/node-labeller-hyperconverged-cluster/status\", \"uid\": \"c6bc8118-1513-11ea-845f-664f163f5f0f\"}, \"spec\": {\"useKVM\": true}, \"status\": {\"conditions\": [{\"ansibleResult\": {\"changed\": 2, \"completion\": \"2019-12-02T14:55:53.248099\", \"failures\": 1, \"ok\": 5, \"skipped\": 0}, \"lastTransitionTime\": \"2019-12-02T14:55:53Z\", \"message\": \"Failed to patch object: {\\\"kind\\\":\\\"Status\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"metadata\\\":{},\\\"status\\\":\\\"Failure\\\",\\\"message\\\":\\\"Operation cannot be fulfilled on daemonsets.apps \\\\\\\"kubevirt-node-labeller\\\\\\\": the object has been modified; please apply your changes to the latest version and try again\\\",\\\"reason\\\":\\\"Conflict\\\",\\\"details\\\":{\\\"name\\\":\\\"kubevirt-node-labeller\\\",\\\"group\\\":\\\"apps\\\",\\\"kind\\\":\\\"daemonsets\\\"},\\\"code\\\":409}\\n\", 
\"reason\": \"Failed\", \"status\": \"False\", \"type\": \"Failure\"}, {\"lastTransitionTime\": \"2019-12-02T14:55:54Z\", \"message\": \"Running reconciliation\", \"reason\": \"Running\", \"status\": \"True\", \"type\": \"Running\"}, {\"message\": \"KVM support is enabled.\", \"reason\": \"enabled\", \"status\": \"True\", \"type\": \"KVMSupport\"}]}}}\u001b[0m\n\r\nTASK [KubevirtNodeLabeller : Set progressing condition] ************************\r\n\u001b[1;30mtask path: /opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:32\u001b[0m\n\u001b[0;33mchanged: [localhost] => {\"changed\": true, \"result\": {\"apiVersion\": \"kubevirt.io/v1\", \"kind\": \"KubevirtNodeLabellerBundle\", \"metadata\": {\"creationTimestamp\": \"2019-12-02T14:55:27Z\", \"generation\": 1, \"labels\": {\"app\": \"hyperconverged-cluster\"}, \"name\": \"node-labeller-hyperconverged-cluster\", \"namespace\": \"kubevirt-hyperconverged\", \"ownerReferences\": [{\"apiVersion\": \"hco.kubevirt.io/v1alpha1\", \"blockOwnerDeletion\": true, \"controller\": true, \"kind\": \"HyperConverged\", \"name\": \"hyperconverged-cluster\", \"uid\": \"c68526b6-1513-11ea-845f-664f163f5f0f\"}], \"resourceVersion\": \"34470\", \"selfLink\": \"/apis/kubevirt.io/v1/namespaces/kubevirt-hyperconverged/kubevirtnodelabellerbundles/node-labeller-hyperconverged-cluster/status\", \"uid\": \"c6bc8118-1513-11ea-845f-664f163f5f0f\"}, \"spec\": {\"useKVM\": true}, \"status\": {\"conditions\": [{\"ansibleResult\": {\"changed\": 2, \"completion\": \"2019-12-02T14:55:53.248099\", \"failures\": 1, \"ok\": 5, \"skipped\": 0}, \"lastTransitionTime\": \"2019-12-02T14:55:53Z\", \"message\": \"Failed to patch object: {\\\"kind\\\":\\\"Status\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"metadata\\\":{},\\\"status\\\":\\\"Failure\\\",\\\"message\\\":\\\"Operation cannot be fulfilled on daemonsets.apps \\\\\\\"kubevirt-node-labeller\\\\\\\": the object has been modified; please apply your changes to the latest version and try again\\\",\\\"reason\\\":\\\"Conflict\\\",\\\"details\\\":{\\\"name\\\":\\\"kubevirt-node-labeller\\\",\\\"group\\\":\\\"apps\\\",\\\"kind\\\":\\\"daemonsets\\\"},\\\"code\\\":409}\\n\", \"reason\": \"Failed\", \"status\": \"False\", \"type\": \"Failure\"}, {\"lastTransitionTime\": \"2019-12-02T14:55:54Z\", \"message\": \"Running reconciliation\", \"reason\": \"Running\", \"status\": \"True\", \"type\": \"Running\"}, {\"message\": \"KVM support is enabled.\", \"reason\": \"enabled\", \"status\": \"True\", \"type\": \"KVMSupport\"}, {\"message\": \"Node-labeller is progressing.\", \"reason\": \"progressing\", \"status\": \"True\", \"type\": \"Progressing\"}]}}}\u001b[0m\n\r\nTASK [KubevirtNodeLabeller : Wait for the node-labeller to start] **************\r\n\u001b[1;30mtask path: /opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:44\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (300 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (299 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (298 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (297 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (296 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (295 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (294 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for 
the node-labeller to start (293 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (292 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (291 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (290 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (289 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (288 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (287 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (286 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (285 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (284 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (283 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (282 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (281 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (280 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (279 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (278 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (277 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (276 retries left).\u001b[0m\n\u001b[1;30mFAILED - RETRYING: Wait for the node-labeller to start (275 retries left).\u001b[0m\n\u001b[0;31mfatal: [localhost]: FAILED! => {\"msg\": \"The conditional check 'nl_status.resources[0].status.currentNumberScheduled == nl_status.resources[0].status.numberReady | default(false)' failed. The error was: error while evaluating conditional (nl_status.resources[0].status.currentNumberScheduled == nl_status.resources[0].status.numberReady | default(false)): list object has no element 0\"}\u001b[0m\n\r\nPLAY RECAP *********************************************************************\r\n\u001b[0;31mlocalhost\u001b[0m                  : \u001b[0;32mok=8   \u001b[0m \u001b[0;33mchanged=2   \u001b[0m unreachable=0    \u001b[0;31mfailed=1   \u001b[0m\r\n\n","job":"5793712081029260939","name":"node-labeller-hyperconverged-cluster","namespace":"kubevirt-hyperconverged","error":"exit status 2","stacktrace":"github.com/operator-framework/operator-sdk/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\tsrc/github.com/operator-framework/operator-sdk/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/operator-framework/operator-sdk/pkg/ansible/runner.(*runner).Run.func1\n\tsrc/github.com/operator-framework/operator-sdk/pkg/ansible/runner/runner.go:289"}
E1202 15:01:17.634517       1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
src/github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
src/github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
src/github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51

And here is a link to the failed job:
https://kubevirt-master-stdci-production.apps.ovirt.org/job/kubevirt_hyperconverged-cluster-operator_standard-check-pr/966//artifact/check-patch.okd-4.1.el7.x86_64/mock_logs/script/stdout_stderr.log

expose and document a way to override container versions

The operator should install the versions of the containers we (the developers) believe are best for any given operator version.
However, as an escape hatch, we should offer the cluster administrator a way to override the operator's choices.
This fits nicely with the spec.version field of the operator CRs. We should

  • expose
  • support (= make sure this does not break anything)
  • document

the fact that the value set in spec.version (if any) takes precedence over the operator defaults.

Example (versions made up): the operator v1.0.2 wants to install validator v0.6.1, but if the cluster admin sets version v0.5.8 for the validator, this should be honored.
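
A hedged sketch of what such an override could look like, assuming the CR keeps the spec.version field mentioned above (resource kind and apiVersion taken from the operator logs elsewhere on this page; the metadata values are illustrative):

cat <<EOF | oc apply -f -
apiVersion: kubevirt.io/v1
kind: KubevirtTemplateValidator
metadata:
  name: template-validator   # illustrative name
  namespace: kubevirt
spec:
  version: v0.5.8   # cluster-admin override, should take precedence over the operator default
EOF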

SSP seems to report ready before node-labeller & template validator are ready

NAME                                               READY   STATUS              RESTARTS   AGE
cdi-apiserver-6fc5dc58c7-bn6jv                     1/1     Running             0          88s
cdi-deployment-d959c59db-x9bhk                     1/1     Running             0          87s
cdi-operator-6fbd7df95-vcw7v                       1/1     Running             0          2m13s
cdi-uploadproxy-5594d48c97-w754f                   1/1     Running             0          87s
cluster-network-addons-operator-69d7484bb4-6hd2m   1/1     Running             0          2m13s
hyperconverged-cluster-operator-57dc7f587b-jbzzs   1/1     Running             0          2m13s
kubevirt-node-labeller-jbq8b                       0/1     Init:1/4            0          71s
kubevirt-node-labeller-xz58q                       0/1     Init:0/4            0          71s
kubevirt-ssp-operator-6d47b894dc-xmghh             1/1     Running             0          2m13s
machine-remediation-operator-bc4bc6c7b-mrbpq       1/1     Running             0          2m13s
node-maintenance-operator-78d8b664-m5zmw           1/1     Running             0          2m13s
virt-api-8476fb6656-ksl59                          1/1     Running             0          84s
virt-api-8476fb6656-rls5q                          1/1     Running             0          84s
virt-controller-74594d876b-hdlbt                   1/1     Running             0          40s
virt-controller-74594d876b-vqdht                   1/1     Running             0          40s
virt-handler-b22bk                                 1/1     Running             0          40s
virt-handler-pdkvk                                 1/1     Running             0          40s
virt-operator-6dff79c4db-chbr9                     1/1     Running             0          2m13s
virt-operator-6dff79c4db-hvkcl                     1/1     Running             0          2m13s
virt-template-validator-f89b6fb6b-8jpkz            0/1     ContainerCreating   0          71s
ReconcileComplete	True	Reconcile completed successfully
Available	True	Reconcile completed successfully
Progressing	False	Reconcile completed successfully
Degraded	False	Reconcile completed successfully
Upgradeable	True	Reconcile completed successfully
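
A hedged sketch of the kind of readiness check that would catch this: before reporting Available, wait for the node-labeller daemon set and the template validator deployment to finish rolling out (the daemon set name comes from the logs above; the deployment name is inferred from the virt-template-validator pod name):

oc rollout status daemonset/kubevirt-node-labeller -n kubevirt-hyperconverged
oc rollout status deployment/virt-template-validator -n kubevirt-hyperconverged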

Improve test suite

We should add automated tests to perform basic validation.
Ideas:

  • e2e tests
  • operator-sdk scorecard

The main challenge seems to be having an OCP/OKD cluster available to run the tests.
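
As a starting point, the scorecard can be run directly from the SDK; its flags vary between operator-sdk releases, so check the help of the version in use:

operator-sdk scorecard --help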

SSP CSV Generator is broken - creates corrupted CRDs

Description of problem:

The SSP image quay.io/fromani/kubevirt-ssp-operator-container:v1.2.1 was changed from quay.io/fromani/kubevirt-ssp-operator-container@sha256:a8f8bd3c02db4c76a80b00288e4f9bf8e0f9885db1f303202771e44051d9127a to quay.io/fromani/kubevirt-ssp-operator-container@sha256:e7b8222457fe126c746b3602410fa913ffb9ff0b5bef67f07645eae03ec847d2

It seems that, as a result, the CSV generator creates corrupted CRD files, with duplicated keys, e.g.:

  preserveUnknownFields: false
  preserveUnknownFields: false
  preserveUnknownFields: false
  preserveUnknownFields: false

/kind bug

What happened:
CRDs are corrupted.

HCO tests fail.

What you expected to happen:
The generator keeps producing valid CRDs.

How to reproduce it (as minimally and precisely as possible):
docker run --rm --entrypoint=/usr/bin/csv-generator quay.io/fromani/kubevirt-ssp-operator-container:v1.2.1 --namespace=kubevirt-hyperconverged --csv-version=1.3.0 --operator-image=quay.io/fromani/kubevirt-ssp-operator-container@sha256:e7b8222457fe126c746b3602410fa913ffb9ff0b5bef67f07645eae03ec847d2 --operator-version=v1.2.1 --dumpcdrs

Check the resulting CRDs.
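
A quick way to spot the duplication, assuming the generator dumps the CRDs to stdout as in the command above:

docker run --rm --entrypoint=/usr/bin/csv-generator \
  quay.io/fromani/kubevirt-ssp-operator-container:v1.2.1 \
  --namespace=kubevirt-hyperconverged --csv-version=1.3.0 \
  --operator-image=quay.io/fromani/kubevirt-ssp-operator-container@sha256:e7b8222457fe126c746b3602410fa913ffb9ff0b5bef67f07645eae03ec847d2 \
  --operator-version=v1.2.1 --dumpcdrs | grep -c 'preserveUnknownFields'

A count noticeably higher than the number of CRDs points at the duplicated keys.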

Anything else we need to know?:

Environment:

  • SSP Operator version: v1.2.1

CRs lack "status" field

We need to add "status" fields to express installation status of CRs.

Additionally: in its current form, the operator is likely trying to (re)deploy the CRs on every iteration. We need to check this; if so, the "status" field can prevent it.
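
One hedged way to verify that suspicion: watch one of the managed objects and see whether its resourceVersion keeps changing while nothing else is touching the cluster (adjust the namespace to wherever the operator runs):

oc get daemonset kubevirt-node-labeller -n kubevirt -w \
  -o custom-columns=NAME:.metadata.name,RESOURCEVERSION:.metadata.resourceVersion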

Verify templates are correctly upgraded

We need a functest to make sure the templates are correctly upgraded TO v0.6.2.
This originated from the verification of the HCO 0.0.1 -> 0.0.2 upgrade flow.
The functest should document (using the test itself) the intended flow and verify it works in our CI.
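
A hedged starting point for such a check: the common templates carry the version in their names (see e.g. centos8-desktop-small-v0.7.0 in the logs above), so after the upgrade the expected generation should be present in the openshift namespace:

oc get templates -n openshift | grep v0.6.2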

allow to update deps exactly once

We should have a single, unified place in which to declare dependencies.
Automation should be added to update the versions across the project:

  1. CRs
  2. _defaults.yml
  3. pkg/versions

It doesn't matter what this place is (any existing one, or an entirely new one) as long as there is exactly one, and we have automation to update everything else from it.
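
A minimal sketch of the starting point for such automation, assuming the source of truth is a plain version string: list every place a given component version appears, so the tooling can rewrite them together (paths taken from the list above; exact locations may differ):

VERSION=v0.6.1   # example value
grep -rn "$VERSION" deploy/crds _defaults.yml pkg/versions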

operator-courier should run on the unit test flow

Currently, the manifests are validated using operator-courier only in the release flow (see the Makefile + Travis config). This makes failures harder to debug, and errors are detected very late in the cycle.
Thus, the manifest check, or at least the courier validation, should also be done in the unit-test flow, on a per-PR basis.
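
A hedged example of what the per-PR job could run, reusing the manifests target described earlier (the exact directory layout under _out may differ):

make manifests
operator-courier verify _out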

defaults: unify the settings

So far we have:

kubevirt-ssp-operator $ find . -iname "*defaults*"
# expunging non-relevant information
./_defaults.yml
./playbooks/_defaults.yml
./roles/KubevirtRepoInfo/defaults
./roles/KubevirtTemplateValidator/defaults
./roles/KubevirtCommonTemplatesBundle/defaults
./roles/KubevirtNodeLabeller/defaults

All of these must be kept in sync; this rarely happens, and it is time consuming anyway.
We need one central place to supply the defaults, or at least a documented, automated flow to update them.
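
Until such a central place exists, a quick way to spot drift between the two top-level copies (paths from the find output above):

diff -u _defaults.yml playbooks/_defaults.yml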

pluggable defaults.yml

It feels very awkward to be forced to rebuild the container image for each dependency bump.
I'd like to have a mechanism (e.g. a ConfigMap?) to update the dependency list at runtime; a sketch follows after the expected flow below.

Expected flow:

  • urgent fixes could be delivered using either a new image or an updated dependencies map
  • at a consistent but slower pace, the base dependency map is updated, and the new container image always contains the freshest versions at build time
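
A hedged sketch of the ConfigMap idea, with made-up key names, only to illustrate the shape of such a runtime dependency map:

cat <<EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubevirt-ssp-operator-defaults   # hypothetical name
  namespace: kubevirt
data:
  defaults.yml: |
    # the same kind of content _defaults.yml ships today, e.g. component versions
    template_validator_version: v0.6.1
    node_labeller_version: v0.1.1
EOF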
