I'm not sure how reproducible this is, but on HCO CI we hit a failure in the "Wait for the node-labeller to start" task of the KubevirtNodeLabeller role (/opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:44). From the log below, an earlier reconcile failed to patch the kubevirt-node-labeller DaemonSet with a 409 Conflict ("the object has been modified; please apply your changes to the latest version and try again"), and the wait task then kept retrying while the DaemonSet pods never became ready (numberReady: 0, numberUnavailable: 2) until it ran out of retries. Operator log excerpt (truncated):
{"level":"info","ts":1575298868.440825,"logger":"runner","msg":"Ansible-runner exited successfully","job":"6458151258009196224","name":"template-validator-hyperconverged-cluster","namespace":"kubevirt-hyperconverged"}
{"level":"info","ts":1575298871.0519183,"logger":"proxy","msg":"Cache miss: template.openshift.io/v1, Kind=Template, openshift/centos8-desktop-small-v0.7.0"}
{"level":"info","ts":1575298871.773912,"logger":"proxy","msg":"Injecting owner reference"}
{"level":"info","ts":1575298873.1728268,"logger":"logging_event_handler","msg":"[playbook task]","name":"template-validator-hyperconverged-cluster","namespace":"kubevirt-hyperconverged","gvk":"kubevirt.io/v1, Kind=KubevirtTemplateValidator","event_type":"playbook_on_task_start","job":"7708864119878773697","EventData.Name":"Gathering Facts"}
{"level":"info","ts":1575298873.3979034,"logger":"proxy","msg":"Cache miss: apps/v1, Kind=DaemonSet, kubevirt-hyperconverged/kubevirt-node-labeller"}
{"level":"info","ts":1575298873.663067,"logger":"proxy","msg":"Cache miss: template.openshift.io/v1, Kind=Template, openshift/centos8-desktop-tiny-v0.7.0"}
{"level":"info","ts":1575298876.2454655,"logger":"proxy","msg":"Injecting owner reference"}
{"level":"error","ts":1575298876.3249695,"logger":"logging_event_handler","msg":"","name":"node-labeller-hyperconverged-cluster","namespace":"kubevirt-hyperconverged","gvk":"kubevirt.io/v1, Kind=KubevirtNodeLabellerBundle","event_type":"runner_on_failed","job":"5793712081029260939","EventData.Task":"Wait for the node-labeller to start","EventData.TaskArgs":"","EventData.FailedTaskPath":"/opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:44","error":"[playbook task failed]","stacktrace":"github.com/operator-framework/operator-sdk/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\tsrc/github.com/operator-framework/operator-sdk/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/operator-framework/operator-sdk/pkg/ansible/events.loggingEventHandler.Handle\n\tsrc/github.com/operator-framework/operator-sdk/pkg/ansible/events/log_events.go:84"}
{"level":"error","ts":1575298876.5155296,"logger":"runner","msg":"\u001b[0;34mansible-playbook 2.7.10\u001b[0m\r\n\u001b[0;34m config file = /etc/ansible/ansible.cfg\u001b[0m\r\n\u001b[0;34m configured module search path = [u'/usr/share/ansible/openshift']\u001b[0m\r\n\u001b[0;34m ansible python module location = /usr/lib/python2.7/site-packages/ansible\u001b[0m\r\n\u001b[0;34m executable location = /usr/bin/ansible-playbook\u001b[0m\r\n\u001b[0;34m python version = 2.7.5 (default, Oct 30 2018, 23:45:53) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]\u001b[0m\r\n\n\u001b[0;34mUsing /etc/ansible/ansible.cfg as config file\u001b[0m\r\n\n\u001b[0;34m/tmp/ansible-operator/runner/kubevirt.io/v1/KubevirtNodeLabellerBundle/kubevirt-hyperconverged/node-labeller-hyperconverged-cluster/inventory/hosts did not meet host_list requirements, check plugin documentation if this is unexpected\u001b[0m\r\n\n\u001b[0;34m/tmp/ansible-operator/runner/kubevirt.io/v1/KubevirtNodeLabellerBundle/kubevirt-hyperconverged/node-labeller-hyperconverged-cluster/inventory/hosts did not meet script requirements, check plugin documentation if this is unexpected\u001b[0m\r\n\n\u001b[0;34m/tmp/ansible-operator/runner/kubevirt.io/v1/KubevirtNodeLabellerBundle/kubevirt-hyperconverged/node-labeller-hyperconverged-cluster/inventory/hosts did not meet script requirements, check plugin documentation if this is unexpected\u001b[0m\n\r\nPLAYBOOK: kubevirtnodelabeller.yaml ********************************************\n\u001b[0;34m1 plays in /opt/ansible/kubevirtnodelabeller.yaml\u001b[0m\n\r\nPLAY [localhost] ***************************************************************\n\r\nTASK [Gathering Facts] *********************************************************\r\n\u001b[1;30mtask path: /opt/ansible/kubevirtnodelabeller.yaml:1\u001b[0m\n\u001b[0;32mok: [localhost]\u001b[0m\n\u001b[0;34mMETA: ran handlers\u001b[0m\n\r\nTASK [KubevirtCircuitBreaker : Extract the CR info] ****************************\r\n\u001b[1;30mtask 
path: /opt/ansible/roles/KubevirtCircuitBreaker/tasks/main.yml:3\u001b[0m\n\u001b[0;32mok: [localhost] => {\"ansible_facts\": {\"cr_info\": {\"apiVersion\": \"kubevirt.io/v1\", \"kind\": \"KubevirtNodeLabellerBundle\", \"metadata\": {\"creationTimestamp\": \"2019-12-02T14:55:27Z\", \"generation\": 1, \"labels\": {\"app\": \"hyperconverged-cluster\"}, \"name\": \"node-labeller-hyperconverged-cluster\", \"namespace\": \"kubevirt-hyperconverged\", \"ownerReferences\": [{\"apiVersion\": \"hco.kubevirt.io/v1alpha1\", \"blockOwnerDeletion\": true, \"controller\": true, \"kind\": \"HyperConverged\", \"name\": \"hyperconverged-cluster\", \"uid\": \"c68526b6-1513-11ea-845f-664f163f5f0f\"}], \"resourceVersion\": \"34288\", \"selfLink\": \"/apis/kubevirt.io/v1/namespaces/kubevirt-hyperconverged/kubevirtnodelabellerbundles/node-labeller-hyperconverged-cluster\", \"uid\": \"c6bc8118-1513-11ea-845f-664f163f5f0f\"}, \"spec\": {\"useKVM\": true}, \"status\": {\"conditions\": [{\"ansibleResult\": {\"changed\": 2, \"completion\": \"2019-12-02T14:55:53.248099\", \"failures\": 1, \"ok\": 5, \"skipped\": 0}, \"lastTransitionTime\": \"2019-12-02T14:55:53Z\", \"message\": \"Failed to patch object: {\\\"kind\\\":\\\"Status\\\",\\\"apiVersion\\\":\\\"v1\\\",\\\"metadata\\\":{},\\\"status\\\":\\\"Failure\\\",\\\"message\\\":\\\"Operation cannot be fulfilled on daemonsets.apps \\\\\\\"kubevirt-node-labeller\\\\\\\": the object has been modified; please apply your changes to the latest version and try again\\\",\\\"reason\\\":\\\"Conflict\\\",\\\"details\\\":{\\\"name\\\":\\\"kubevirt-node-labeller\\\",\\\"group\\\":\\\"apps\\\",\\\"kind\\\":\\\"daemonsets\\\"},\\\"code\\\":409}\\n\", \"reason\": \"Failed\", \"status\": \"False\", \"type\": \"Failure\"}, {\"lastTransitionTime\": \"2019-12-02T14:55:54Z\", \"message\": \"Running reconciliation\", \"reason\": \"Running\", \"status\": \"True\", \"type\": \"Running\"}]}}}, \"changed\": false}\u001b[0m\n\r\nTASK [KubevirtCircuitBreaker : Extract 
the disable info] ***********************\r\n\u001b[1;30mtask path: /opt/ansible/roles/KubevirtCircuitBreaker/tasks/main.yml:6\u001b[0m\n\u001b[0;32mok: [localhost] => {\"ansible_facts\": {\"is_paused\": false}, \"changed\": false}\u001b[0m\n\u001b[0;34mMETA: \u001b[0m\n\r\nTASK [KubevirtNodeLabeller : Create the node labeller roles] *******************\r\n\u001b[1;30mtask path: /opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:2\u001b[0m\n\u001b[0;32mok: [localhost] => (item=apiVersion: v1\u001b[0m\r\n\u001b[0;32mkind: ServiceAccount\u001b[0m\r\n\u001b[0;32mmetadata:\u001b[0m\r\n\u001b[0;32m name: kubevirt-node-labeller\u001b[0m\r\n\u001b[0;32m namespace: kubevirt-hyperconverged) => {\"changed\": false, \"item\": \"apiVersion: v1\\nkind: ServiceAccount\\nmetadata:\\n name: kubevirt-node-labeller\\n namespace: kubevirt-hyperconverged\", \"method\": \"patch\", \"result\": {\"apiVersion\": \"v1\", \"imagePullSecrets\": [{\"name\": \"kubevirt-node-labeller-dockercfg-sdxjq\"}], \"kind\": \"ServiceAccount\", \"metadata\": {\"creationTimestamp\": \"2019-12-02T14:55:36Z\", \"name\": \"kubevirt-node-labeller\", \"namespace\": \"kubevirt-hyperconverged\", \"ownerReferences\": [{\"apiVersion\": \"kubevirt.io/v1\", \"kind\": \"KubevirtNodeLabellerBundle\", \"name\": \"node-labeller-hyperconverged-cluster\", \"uid\": \"c6bc8118-1513-11ea-845f-664f163f5f0f\"}], \"resourceVersion\": \"34030\", \"selfLink\": \"/api/v1/namespaces/kubevirt-hyperconverged/serviceaccounts/kubevirt-node-labeller\", \"uid\": \"cc8e24b6-1513-11ea-845f-664f163f5f0f\"}, \"secrets\": [{\"name\": \"kubevirt-node-labeller-dockercfg-sdxjq\"}, {\"name\": \"kubevirt-node-labeller-token-dzc5c\"}]}}\u001b[0m\n\u001b[0;32mok: [localhost] => (item=apiVersion: rbac.authorization.k8s.io/v1\u001b[0m\r\n\u001b[0;32mkind: ClusterRole\u001b[0m\r\n\u001b[0;32mmetadata:\u001b[0m\r\n\u001b[0;32m name: kubevirt-node-labeller\u001b[0m\r\n\u001b[0;32mrules:\u001b[0m\r\n\u001b[0;32m- apiGroups:\u001b[0m\r\n\u001b[0;32m - 
\"\"\u001b[0m\r\n\u001b[0;32m resources:\u001b[0m\r\n\u001b[0;32m - nodes\u001b[0m\r\n\u001b[0;32m verbs:\u001b[0m\r\n\u001b[0;32m - get\u001b[0m\r\n\u001b[0;32m - patch\u001b[0m\r\n\u001b[0;32m - update\u001b[0m\r\n\u001b[0;32m- apiGroups:\u001b[0m\r\n\u001b[0;32m - security.openshift.io\u001b[0m\r\n\u001b[0;32m resources:\u001b[0m\r\n\u001b[0;32m - securitycontextconstraints\u001b[0m\r\n\u001b[0;32m verbs:\u001b[0m\r\n\u001b[0;32m - use\u001b[0m\r\n\u001b[0;32m resourceNames:\u001b[0m\r\n\u001b[0;32m - privileged) => {\"changed\": false, \"item\": \"apiVersion: rbac.authorization.k8s.io/v1\\nkind: ClusterRole\\nmetadata:\\n name: kubevirt-node-labeller\\nrules:\\n- apiGroups:\\n - \\\"\\\"\\n resources:\\n - nodes\\n verbs:\\n - get\\n - patch\\n - update\\n- apiGroups:\\n - security.openshift.io\\n resources:\\n - securitycontextconstraints\\n verbs:\\n - use\\n resourceNames:\\n - privileged\", \"method\": \"patch\", \"result\": {\"apiVersion\": \"rbac.authorization.k8s.io/v1\", \"kind\": \"ClusterRole\", \"metadata\": {\"annotations\": {\"operator-sdk/primary-resource\": \"kubevirt-hyperconverged/node-labeller-hyperconverged-cluster\", \"operator-sdk/primary-resource-type\": \"KubevirtNodeLabellerBundle.kubevirt.io\"}, \"creationTimestamp\": \"2019-12-02T14:55:38Z\", \"name\": \"kubevirt-node-labeller\", \"resourceVersion\": \"34047\", \"selfLink\": \"/apis/rbac.authorization.k8s.io/v1/clusterroles/kubevirt-node-labeller\", \"uid\": \"cd6d3dfd-1513-11ea-845f-664f163f5f0f\"}, \"rules\": [{\"apiGroups\": [\"\"], \"resources\": [\"nodes\"], \"verbs\": [\"get\", \"patch\", \"update\"]}, {\"apiGroups\": [\"security.openshift.io\"], \"resourceNames\": [\"privileged\"], \"resources\": [\"securitycontextconstraints\"], \"verbs\": [\"use\"]}]}}\u001b[0m\n\u001b[0;32mok: [localhost] => (item=apiVersion: rbac.authorization.k8s.io/v1\u001b[0m\r\n\u001b[0;32mkind: ClusterRoleBinding\u001b[0m\r\n\u001b[0;32mmetadata:\u001b[0m\r\n\u001b[0;32m name: 
kubevirt-node-labeller\u001b[0m\r\n\u001b[0;32mroleRef:\u001b[0m\r\n\u001b[0;32m apiGroup: rbac.authorization.k8s.io\u001b[0m\r\n\u001b[0;32m kind: ClusterRole\u001b[0m\r\n\u001b[0;32m name: kubevirt-node-labeller\u001b[0m\r\n\u001b[0;32msubjects:\u001b[0m\r\n\u001b[0;32m- kind: ServiceAccount\u001b[0m\r\n\u001b[0;32m name: kubevirt-node-labeller\u001b[0m\r\n\u001b[0;32m namespace: kubevirt-hyperconverged) => {\"changed\": false, \"item\": \"apiVersion: rbac.authorization.k8s.io/v1\\nkind: ClusterRoleBinding\\nmetadata:\\n name: kubevirt-node-labeller\\nroleRef:\\n apiGroup: rbac.authorization.k8s.io\\n kind: ClusterRole\\n name: kubevirt-node-labeller\\nsubjects:\\n- kind: ServiceAccount\\n name: kubevirt-node-labeller\\n namespace: kubevirt-hyperconverged\", \"method\": \"patch\", \"result\": {\"apiVersion\": \"rbac.authorization.k8s.io/v1\", \"kind\": \"ClusterRoleBinding\", \"metadata\": {\"annotations\": {\"operator-sdk/primary-resource\": \"kubevirt-hyperconverged/node-labeller-hyperconverged-cluster\", \"operator-sdk/primary-resource-type\": \"KubevirtNodeLabellerBundle.kubevirt.io\"}, \"creationTimestamp\": \"2019-12-02T14:55:39Z\", \"name\": \"kubevirt-node-labeller\", \"resourceVersion\": \"34095\", \"selfLink\": \"/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubevirt-node-labeller\", \"uid\": \"ce35740f-1513-11ea-845f-664f163f5f0f\"}, \"roleRef\": {\"apiGroup\": \"rbac.authorization.k8s.io\", \"kind\": \"ClusterRole\", \"name\": \"kubevirt-node-labeller\"}, \"subjects\": [{\"kind\": \"ServiceAccount\", \"name\": \"kubevirt-node-labeller\", \"namespace\": \"kubevirt-hyperconverged\"}]}}\u001b[0m\n\u001b[0;32mok: [localhost] => (item=apiVersion: v1\u001b[0m\r\n\u001b[0;32mkind: ConfigMap\u001b[0m\r\n\u001b[0;32mmetadata:\u001b[0m\r\n\u001b[0;32m name: kubevirt-cpu-plugin-configmap\u001b[0m\r\n\u001b[0;32m namespace: kubevirt-hyperconverged\u001b[0m\r\n\u001b[0;32mdata:\u001b[0m\r\n\u001b[0;32m cpu-plugin-configmap.yaml: 
|-\u001b[0m\r\n\u001b[0;32m obsoleteCPUs:\u001b[0m\r\n\u001b[0;32m - \"486\"\u001b[0m\r\n\u001b[0;32m - \"pentium\"\u001b[0m\r\n\u001b[0;32m - \"pentium2\"\u001b[0m\r\n\u001b[0;32m - \"pentium3\"\u001b[0m\r\n\u001b[0;32m - \"pentiumpro\"\u001b[0m\r\n\u001b[0;32m - \"coreduo\"\u001b[0m\r\n\u001b[0;32m - \"n270\"\u001b[0m\r\n\u001b[0;32m - \"core2duo\"\u001b[0m\r\n\u001b[0;32m - \"Conroe\"\u001b[0m\r\n\u001b[0;32m - \"athlon\"\u001b[0m\r\n\u001b[0;32m - \"phenom\"\u001b[0m\r\n\u001b[0;32m minCPU: \"Penryn\"\u001b[0m\r\n\u001b[0;32m\u001b[0m\r\n\u001b[0;32m) => {\"changed\": false, \"item\": \"apiVersion: v1\\nkind: ConfigMap\\nmetadata:\\n name: kubevirt-cpu-plugin-configmap\\n namespace: kubevirt-hyperconverged\\ndata:\\n cpu-plugin-configmap.yaml: |-\\n obsoleteCPUs:\\n - \\\"486\\\"\\n - \\\"pentium\\\"\\n - \\\"pentium2\\\"\\n - \\\"pentium3\\\"\\n - \\\"pentiumpro\\\"\\n - \\\"coreduo\\\"\\n - \\\"n270\\\"\\n - \\\"core2duo\\\"\\n - \\\"Conroe\\\"\\n - \\\"athlon\\\"\\n - \\\"phenom\\\"\\n minCPU: \\\"Penryn\\\"\\n\\n\", \"method\": \"patch\", \"result\": {\"apiVersion\": \"v1\", \"data\": {\"cpu-plugin-configmap.yaml\": \"obsoleteCPUs:\\n - \\\"486\\\"\\n - \\\"pentium\\\"\\n - \\\"pentium2\\\"\\n - \\\"pentium3\\\"\\n - \\\"pentiumpro\\\"\\n - \\\"coreduo\\\"\\n - \\\"n270\\\"\\n - \\\"core2duo\\\"\\n - \\\"Conroe\\\"\\n - \\\"athlon\\\"\\n - \\\"phenom\\\"\\nminCPU: \\\"Penryn\\\"\"}, \"kind\": \"ConfigMap\", \"metadata\": {\"creationTimestamp\": \"2019-12-02T14:55:41Z\", \"name\": \"kubevirt-cpu-plugin-configmap\", \"namespace\": \"kubevirt-hyperconverged\", \"ownerReferences\": [{\"apiVersion\": \"kubevirt.io/v1\", \"kind\": \"KubevirtNodeLabellerBundle\", \"name\": \"node-labeller-hyperconverged-cluster\", \"uid\": \"c6bc8118-1513-11ea-845f-664f163f5f0f\"}], \"resourceVersion\": \"34152\", \"selfLink\": \"/api/v1/namespaces/kubevirt-hyperconverged/configmaps/kubevirt-cpu-plugin-configmap\", \"uid\": 
\"cf0d7fab-1513-11ea-845f-664f163f5f0f\"}}}\u001b[0m\n\r\nTASK [KubevirtNodeLabeller : Create the node labeller daemon set] **************\r\n\u001b[1;30mtask path: /opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:8\u001b[0m\nd/cpu-plugin-configmap.yaml;\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"quay.io/kubevirt/cpu-nfd-plugin:v0.1.1\", \"imagePullPolicy\": \"Always\", \"name\": \"kubevirt-cpu-nfd-plugin\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}, {\"mountPath\": \"/config\", \"name\": \"cpu-config\"}]}, {\"args\": [\"libvirtd -d; chmod o+rw /dev/kvm; virsh domcapabilities --machine q35 --arch x86_64 --virttype kvm > /etc/kubernetes/node-feature-discovery/source.d/virsh_domcapabilities.xml; cp -r /usr/share/libvirt/cpu_map /etc/kubernetes/node-feature-discovery/source.d/\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"kubevirt/virt-launcher:v0.23.0\", \"imagePullPolicy\": \"Always\", \"name\": \"libvirt\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}, {\"env\": [{\"name\": \"NODE_NAME\", \"valueFrom\": {\"fieldRef\": {\"apiVersion\": \"v1\", \"fieldPath\": \"spec.nodeName\"}}}], \"image\": \"quay.io/kubevirt/node-labeller:v0.1.1\", \"imagePullPolicy\": \"IfNotPresent\", \"name\": \"kubevirt-node-labeller\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", 
\"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}], \"restartPolicy\": \"Always\", \"schedulerName\": \"default-scheduler\", \"securityContext\": {}, \"serviceAccount\": \"kubevirt-node-labeller\", \"serviceAccountName\": \"kubevirt-node-labeller\", \"terminationGracePeriodSeconds\": 30, \"volumes\": [{\"emptyDir\": {}, \"name\": \"nfd-\n\u001b[0;32mok: [localhost] => {\"changed\": false, \"method\": \"patch\", \"result\": {\"apiVersion\": \"apps/v1\", \"kind\": \"DaemonSet\", \"metadata\": {\"annotations\": {\"deprecated.daemonset.template.generation\": \"1\"}, \"creationTimestamp\": \"2019-12-02T14:55:49Z\", \"generation\": 1, \"labels\": {\"app\": \"kubevirt-node-labeller\"}, \"name\": \"kubevirt-node-labeller\", \"namespace\": \"kubevirt-hyperconverged\", \"ownerReferences\": [{\"apiVersion\": \"kubevirt.io/v1\", \"kind\": \"KubevirtNodeLabellerBundle\", \"name\": \"node-labeller-hyperconverged-cluster\", \"uid\": \"c6bc8118-1513-11ea-845f-664f163f5f0f\"}], \"resourceVersion\": \"34228\", \"selfLink\": \"/apis/apps/v1/namespaces/kubevirt-hyperconverged/daemonsets/kubevirt-node-labeller\", \"uid\": \"d4277cef-1513-11ea-845f-664f163f5f0f\"}, \"spec\": {\"revisionHistoryLimit\": 10, \"selector\": {\"matchLabels\": {\"app\": \"kubevirt-node-labeller\"}}, \"template\": {\"metadata\": {\"creationTimestamp\": null, \"labels\": {\"app\": \"kubevirt-node-labeller\"}}, \"spec\": {\"containers\": [{\"args\": [\"infinity\"], \"command\": [\"sleep\"], \"image\": \"quay.io/kubevirt/node-labeller:v0.1.1\", \"imagePullPolicy\": \"IfNotPresent\", \"name\": \"kubevirt-node-labeller-sleeper\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\"}], \"dnsPolicy\": \"ClusterFirst\", \"initContainers\": [{\"args\": [\"cp /usr/bin/kvm-caps-info-nfd-plugin /etc/kubernetes/node-feature-discovery/source.d/;\"], \"command\": 
[\"/bin/sh\", \"-c\"], \"image\": \"quay.io/kubevirt/kvm-info-nfd-plugin:v0.5.8\", \"imagePullPolicy\": \"Always\", \"name\": \"kvm-info-nfd-plugin\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}, {\"args\": [\"cp /plugin/dest/cpu-nfd-plugin /etc/kubernetes/node-feature-discovery/source.d/; cp /config/cpu-plugin-configmap.yaml /etc/kubernetes/node-feature-discovery/source.d/cpu-plugin-configmap.yaml;\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"quay.io/kubevirt/cpu-nfd-plugin:v0.1.1\", \"imagePullPolicy\": \"Always\", \"name\": \"kubevirt-cpu-nfd-plugin\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}, {\"mountPath\": \"/config\", \"name\": \"cpu-config\"}]}, {\"args\": [\"libvirtd -d; chmod o+rw /dev/kvm; virsh domcapabilities --machine q35 --arch x86_64 --virttype kvm > /etc/kubernetes/node-feature-discovery/source.d/virsh_domcapabilities.xml; cp -r /usr/share/libvirt/cpu_map /etc/kubernetes/node-feature-discovery/source.d/\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"kubevirt/virt-launcher:v0.23.0\", \"imagePullPolicy\": \"Always\", \"name\": \"libvirt\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}, {\"env\": [{\"name\": \"NODE_NAME\", \"valueFrom\": {\"fieldRef\": {\"apiVersion\": \"v1\", \"fieldPath\": \"spec.nodeName\"}}}], \"image\": 
\"quay.io/kubevirt/node-labeller:v0.1.1\", \"imagePullPolicy\": \"IfNotPresent\", \"name\": \"kubevirt-node-labeller\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}], \"restartPolicy\": \"Always\", \"schedulerName\": \"default-scheduler\", \"securityContext\": {}, \"serviceAccount\": \"kubevirt-node-labeller\", \"serviceAccountName\": \"kubevirt-node-labeller\", \"terminationGracePeriodSeconds\": 30, \"volumes\": [{\"emptyDir\": {}, \"name\": \"nfd-source\"}, {\"configMap\": {\"defaultMode\": 420, \"name\": \"kubevirt-cpu-plugin-configmap\"}, \"name\": \"cpu-config\"}]}}, \"updateStrategy\": {\"rollingUpdate\": {\"maxUnavailable\": 1}, \"type\": \"RollingUpdate\"}}, \"status\": {\"currentNumberScheduled\": 2, \"desiredNumberScheduled\": 2, \"numberMisscheduled\": 0, \"numberReady\": 0, \"numberUnavailable\": 2, \"observedGeneration\": 1, \"updatedNumberScheduled\": 2}}}\u001b[0m\n\r\nTASK [KubevirtNodeLabeller : Refresh node-labeller var] ************************\r\n\u001b[1;30mtask path: /opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:14\u001b[0m\nd/cpu-plugin-configmap.yaml;\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"quay.io/kubevirt/cpu-nfd-plugin:v0.1.1\", \"imagePullPolicy\": \"Always\", \"name\": \"kubevirt-cpu-nfd-plugin\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}, {\"mountPath\": \"/config\", \"name\": \"cpu-config\"}]}, {\"args\": [\"libvirtd -d; chmod o+rw /dev/kvm; virsh domcapabilities --machine q35 --arch x86_64 --virttype kvm > 
/etc/kubernetes/node-feature-discovery/source.d/virsh_domcapabilities.xml; cp -r /usr/share/libvirt/cpu_map /etc/kubernetes/node-feature-discovery/source.d/\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"kubevirt/virt-launcher:v0.23.0\", \"imagePullPolicy\": \"Always\", \"name\": \"libvirt\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}, {\"env\": [{\"name\": \"NODE_NAME\", \"valueFrom\": {\"fieldRef\": {\"apiVersion\": \"v1\", \"fieldPath\": \"spec.nodeName\"}}}], \"image\": \"quay.io/kubevirt/node-labeller:v0.1.1\", \"imagePullPolicy\": \"IfNotPresent\", \"name\": \"kubevirt-node-labeller\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}], \"restartPolicy\": \"Always\", \"schedulerName\": \"default-scheduler\", \"securityContext\": {}, \"serviceAccount\": \"kubevirt-node-labeller\", \"serviceAccountName\": \"kubevirt-node-labeller\", \"terminationGracePeriodSeconds\": 30, \"volumes\": [{\"emptyDir\": {}, \"name\": \"nfd-\nsource\"}, {\"configMap\": {\"defaultMode\": 420, \"name\": \"kubevirt-cpu-plugin-configmap\"}, \"name\": \n\u001b[0;32mok: [localhost] => {\"changed\": false, \"method\": \"patch\", \"result\": {\"apiVersion\": \"apps/v1\", \"kind\": \"DaemonSet\", \"metadata\": {\"annotations\": {\"deprecated.daemonset.template.generation\": \"1\"}, \"creationTimestamp\": \"2019-12-02T14:55:49Z\", 
\"generation\": 1, \"labels\": {\"app\": \"kubevirt-node-labeller\"}, \"name\": \"kubevirt-node-labeller\", \"namespace\": \"kubevirt-hyperconverged\", \"ownerReferences\": [{\"apiVersion\": \"kubevirt.io/v1\", \"kind\": \"KubevirtNodeLabellerBundle\", \"name\": \"node-labeller-hyperconverged-cluster\", \"uid\": \"c6bc8118-1513-11ea-845f-664f163f5f0f\"}], \"resourceVersion\": \"34228\", \"selfLink\": \"/apis/apps/v1/namespaces/kubevirt-hyperconverged/daemonsets/kubevirt-node-labeller\", \"uid\": \"d4277cef-1513-11ea-845f-664f163f5f0f\"}, \"spec\": {\"revisionHistoryLimit\": 10, \"selector\": {\"matchLabels\": {\"app\": \"kubevirt-node-labeller\"}}, \"template\": {\"metadata\": {\"creationTimestamp\": null, \"labels\": {\"app\": \"kubevirt-node-labeller\"}}, \"spec\": {\"containers\": [{\"args\": [\"infinity\"], \"command\": [\"sleep\"], \"image\": \"quay.io/kubevirt/node-labeller:v0.1.1\", \"imagePullPolicy\": \"IfNotPresent\", \"name\": \"kubevirt-node-labeller-sleeper\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\"}], \"dnsPolicy\": \"ClusterFirst\", \"initContainers\": [{\"args\": [\"cp /usr/bin/kvm-caps-info-nfd-plugin /etc/kubernetes/node-feature-discovery/source.d/;\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"quay.io/kubevirt/kvm-info-nfd-plugin:v0.5.8\", \"imagePullPolicy\": \"Always\", \"name\": \"kvm-info-nfd-plugin\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}, {\"args\": [\"cp /plugin/dest/cpu-nfd-plugin /etc/kubernetes/node-feature-discovery/source.d/; cp /config/cpu-plugin-configmap.yaml /etc/kubernetes/node-feature-discovery/source.d/cpu-plugin-configmap.yaml;\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"quay.io/kubevirt/cpu-nfd-plugin:v0.1.1\", \"imagePullPolicy\": \"Always\", 
\"name\": \"kubevirt-cpu-nfd-plugin\", \"resources\": {}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}, {\"mountPath\": \"/config\", \"name\": \"cpu-config\"}]}, {\"args\": [\"libvirtd -d; chmod o+rw /dev/kvm; virsh domcapabilities --machine q35 --arch x86_64 --virttype kvm > /etc/kubernetes/node-feature-discovery/source.d/virsh_domcapabilities.xml; cp -r /usr/share/libvirt/cpu_map /etc/kubernetes/node-feature-discovery/source.d/\"], \"command\": [\"/bin/sh\", \"-c\"], \"image\": \"kubevirt/virt-launcher:v0.23.0\", \"imagePullPolicy\": \"Always\", \"name\": \"libvirt\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}, {\"env\": [{\"name\": \"NODE_NAME\", \"valueFrom\": {\"fieldRef\": {\"apiVersion\": \"v1\", \"fieldPath\": \"spec.nodeName\"}}}], \"image\": \"quay.io/kubevirt/node-labeller:v0.1.1\", \"imagePullPolicy\": \"IfNotPresent\", \"name\": \"kubevirt-node-labeller\", \"resources\": {\"limits\": {\"devices.kubevirt.io/kvm\": \"1\"}, \"requests\": {\"devices.kubevirt.io/kvm\": \"1\"}}, \"securityContext\": {\"privileged\": true}, \"terminationMessagePath\": \"/dev/termination-log\", \"terminationMessagePolicy\": \"File\", \"volumeMounts\": [{\"mountPath\": \"/etc/kubernetes/node-feature-discovery/source.d/\", \"name\": \"nfd-source\"}]}], \"restartPolicy\": \"Always\", \"schedulerName\": \"default-scheduler\", \"securityContext\": {}, \"serviceAccount\": \"kubevirt-node-labeller\", \"serviceAccountName\": \"kubevirt-node-labeller\", \"terminationGracePeriodSeconds\": 30, 
"volumes": [{"emptyDir": {}, "name": "nfd-source"}, {"configMap": {"defaultMode": 420, "name": "kubevirt-cpu-plugin-configmap"}, "name": "cpu-config"}]}}, "updateStrategy": {"rollingUpdate": {"maxUnavailable": 1}, "type": "RollingUpdate"}}, "status": {"currentNumberScheduled": 2, "desiredNumberScheduled": 2, "numberMisscheduled": 0, "numberReady": 0, "numberUnavailable": 2, "observedGeneration": 1, "updatedNumberScheduled": 2}}}

TASK [KubevirtNodeLabeller : Set UseKVM condition] *****************************
task path: /opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:20
changed: [localhost] => {"changed": true, "result": {"apiVersion": "kubevirt.io/v1", "kind": "KubevirtNodeLabellerBundle", "metadata": {"creationTimestamp": "2019-12-02T14:55:27Z", "generation": 1, "labels": {"app": "hyperconverged-cluster"}, "name": "node-labeller-hyperconverged-cluster", "namespace": "kubevirt-hyperconverged", "ownerReferences": [{"apiVersion": "hco.kubevirt.io/v1alpha1", "blockOwnerDeletion": true, "controller": true, "kind": "HyperConverged", "name": "hyperconverged-cluster", "uid": "c68526b6-1513-11ea-845f-664f163f5f0f"}], "resourceVersion": "34464", "selfLink": "/apis/kubevirt.io/v1/namespaces/kubevirt-hyperconverged/kubevirtnodelabellerbundles/node-labeller-hyperconverged-cluster/status", "uid": "c6bc8118-1513-11ea-845f-664f163f5f0f"}, "spec": {"useKVM": true}, "status": {"conditions": [{"ansibleResult": {"changed": 2, "completion": "2019-12-02T14:55:53.248099", "failures": 1, "ok": 5, "skipped": 0}, "lastTransitionTime": "2019-12-02T14:55:53Z", "message": "Failed to patch object: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"Operation cannot be fulfilled on daemonsets.apps \\\"kubevirt-node-labeller\\\": the object has been modified; please apply your changes to the latest version and try again\",\"reason\":\"Conflict\",\"details\":{\"name\":\"kubevirt-node-labeller\",\"group\":\"apps\",\"kind\":\"daemonsets\"},\"code\":409}", "reason": "Failed", "status": "False", "type": "Failure"}, {"lastTransitionTime": "2019-12-02T14:55:54Z", "message": "Running reconciliation", "reason": "Running", "status": "True", "type": "Running"}, {"message": "KVM support is enabled.", "reason": "enabled", "status": "True", "type": "KVMSupport"}]}}}

TASK [KubevirtNodeLabeller : Set progressing condition] ************************
task path: /opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:32
changed: [localhost] => {"changed": true, "result": {"apiVersion": "kubevirt.io/v1", "kind": "KubevirtNodeLabellerBundle", "metadata": {"creationTimestamp": "2019-12-02T14:55:27Z", "generation": 1, "labels": {"app": "hyperconverged-cluster"}, "name": "node-labeller-hyperconverged-cluster", "namespace": "kubevirt-hyperconverged", "ownerReferences": [{"apiVersion": "hco.kubevirt.io/v1alpha1", "blockOwnerDeletion": true, "controller": true, "kind": "HyperConverged", "name": "hyperconverged-cluster", "uid": "c68526b6-1513-11ea-845f-664f163f5f0f"}], "resourceVersion": "34470", "selfLink": "/apis/kubevirt.io/v1/namespaces/kubevirt-hyperconverged/kubevirtnodelabellerbundles/node-labeller-hyperconverged-cluster/status", "uid": "c6bc8118-1513-11ea-845f-664f163f5f0f"}, "spec": {"useKVM": true}, "status": {"conditions": [{"ansibleResult": {"changed": 2, "completion": "2019-12-02T14:55:53.248099", "failures": 1, "ok": 5, "skipped": 0}, "lastTransitionTime": "2019-12-02T14:55:53Z", "message": "Failed to patch object: {\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"Operation cannot be fulfilled on daemonsets.apps \\\"kubevirt-node-labeller\\\": the object has been modified; please apply your changes to the latest version and try again\",\"reason\":\"Conflict\",\"details\":{\"name\":\"kubevirt-node-labeller\",\"group\":\"apps\",\"kind\":\"daemonsets\"},\"code\":409}", "reason": "Failed", "status": "False", "type": "Failure"}, {"lastTransitionTime": "2019-12-02T14:55:54Z", "message": "Running reconciliation", "reason": "Running", "status": "True", "type": "Running"}, {"message": "KVM support is enabled.", "reason": "enabled", "status": "True", "type": "KVMSupport"}, {"message": "Node-labeller is progressing.", "reason": "progressing", "status": "True", "type": "Progressing"}]}}}

TASK [KubevirtNodeLabeller : Wait for the node-labeller to start] **************
task path: /opt/ansible/roles/KubevirtNodeLabeller/tasks/main.yml:44
FAILED - RETRYING: Wait for the node-labeller to start (300 retries left).
FAILED - RETRYING: Wait for the node-labeller to start (299 retries left).
[... 23 identical retry lines elided ...]
FAILED - RETRYING: Wait for the node-labeller to start (275 retries left).
fatal: [localhost]: FAILED! => {"msg": "The conditional check 'nl_status.resources[0].status.currentNumberScheduled == nl_status.resources[0].status.numberReady | default(false)' failed. The error was: error while evaluating conditional (nl_status.resources[0].status.currentNumberScheduled == nl_status.resources[0].status.numberReady | default(false)): list object has no element 0"}

PLAY RECAP *********************************************************************
localhost                  : ok=8    changed=2    unreachable=0    failed=1

","job":"5793712081029260939","name":"node-labeller-hyperconverged-cluster","namespace":"kubevirt-hyperconverged","error":"exit status 2","stacktrace":"github.com/operator-framework/operator-sdk/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\tsrc/github.com/operator-framework/operator-sdk/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/operator-framework/operator-sdk/pkg/ansible/runner.(*runner).Run.func1\n\tsrc/github.com/operator-framework/operator-sdk/pkg/ansible/runner/runner.go:289"}
E1202 15:01:17.634517 1 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
src/github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
src/github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
src/github.com/operator-framework/operator-sdk/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
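From the fatal message, the immediate cause looks like the wait task's `until` condition indexing `nl_status.resources[0]` while the query can legitimately return an empty list (e.g. while the DaemonSet is being replaced during the 409-conflict patches above). Note also that `default(false)` binds only to `numberReady` because of Jinja filter precedence, not to the whole comparison. A defensive rewrite of the task could look roughly like this — a sketch only; the module name `k8s_info` and the `retries`/`delay` values are assumptions, while the resource names and the register variable are taken from the log:

```yaml
- name: Wait for the node-labeller to start
  k8s_info:
    api_version: apps/v1
    kind: DaemonSet
    namespace: kubevirt-hyperconverged
    name: kubevirt-node-labeller
  register: nl_status
  retries: 300
  delay: 5
  until:
    # Guard against an empty result list before indexing element 0
    - nl_status.resources | length > 0
    # Parenthesize so default() applies to each operand, not just numberReady
    - (nl_status.resources[0].status.numberReady | default(0)) == (nl_status.resources[0].status.currentNumberScheduled | default(-1))
```

With the length guard as the first `until` condition, an empty query result just counts as another retry instead of aborting the whole reconciliation run.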