
kubemacpool's People

Contributors

0xfelix, alonsadan, dankenigsberg, dcbw, fossedihelm, ormergi, oshoval, phoracek, qinqon, ramlavi, rhrazdil, s1061123, schseba


kubemacpool's Issues

tests sometimes fail with error "connect: connection refused"

occurs on the following tests:

  • should return an error because no MAC address is available [It] (prow link)
  • should return an error because no MAC address is available [It] (prow link)
  • should reject a vm creation with an already allocated MAC address [It] (prow link)
  • should be able to create a new virtual machine [It] (prow link - in this case it is internal error)
  • should successfully release the MAC and the new VM should be created with no errors [It] (prow link)

How to recreate:

  • Run make functest or run the prow job. This happens in 1 out of 3-4 runs. When focusing the test specifically, the issue is not reproduced.

The test logs and error:

Pod Name: kubemacpool-mac-controller-manager-0 
{{ } {kubemacpool-mac-controller-manager-0 kubemacpool-mac-controller-manager- kubemacpool-system /api/v1/namespaces/kubemacpool-system/pods/kubemacpool-mac-controller-manager-0 020a0831-f923-4151-9f5c-74fcd4ac7554 1223034 0 2020-04-16 11:20:29 +0300 IDT <nil> <nil> map[app:kubemacpool control-plane:mac-controller-manager controller-revision-hash:kubemacpool-mac-controller-manager-798656b7c7 controller-tools.k8s.io:1.0 kubemacpool-leader:true statefulset.kubernetes.io/pod-name:kubemacpool-mac-controller-manager-0] map[] [{apps/v1 StatefulSet kubemacpool-mac-controller-manager 8a5b7678-2532-479e-b831-2bf0d307b105 0xc00094c05b 0xc00094c05c}] []  []} {[{default-token-ctzmh {nil nil nil nil nil &SecretVolumeSource{SecretName:default-token-ctzmh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}] [] [{manager registry:5000/kubevirt/kubemacpool:latest [/manager] [--v=debug --wait-time=10]  [{webhook-server 0 8000 TCP } {healthz 0 9440 TCP }] [] [{POD_NAMESPACE  &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_NAME  &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {RANGE_START  &EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:kubemacpool-mac-range-config,},Key:RANGE_START,Optional:nil,},SecretKeyRef:nil,}} {RANGE_END  &EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:kubemacpool-mac-range-config,},Key:RANGE_END,Optional:nil,},SecretKeyRef:nil,}} {HEALTH_PROBE_HOST 0.0.0.0 nil} {HEALTH_PROBE_PORT 9440 nil}] {map[cpu:{{300 -3} {<nil>} 300m DecimalSI} memory:{{629145600 0} {<nil>}  BinarySI}] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{314572800 0} {<nil>} 300Mi BinarySI}]} [{default-token-ctzmh true /var/run/secrets/kubernetes.io/serviceaccount  <nil> }] [] nil &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 healthz},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil nil /dev/termination-log File Always nil false false false}] [] Always 0xc00094c290 <nil> ClusterFirst map[] default default <nil> node01 false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] kubemacpool-mac-controller-manager-0 kubemacpool-service &Affinity{NodeAffinity:nil,PodAffinity:nil,PodAntiAffinity:&PodAntiAffinity{RequiredDuringSchedulingIgnoredDuringExecution:[]PodAffinityTerm{},PreferredDuringSchedulingIgnoredDuringExecution:[]WeightedPodAffinityTerm{WeightedPodAffinityTerm{Weight:1,PodAffinityTerm:PodAffinityTerm{LabelSelector:&v1.LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[]LabelSelectorRequirement{LabelSelectorRequirement{Key:control-plane,Operator:In,Values:[mac-controller-manager],},},},Namespaces:[],TopologyKey:kubernetes.io/hostname,},},},},} default-scheduler [{node.kubernetes.io/not-ready Exists  NoExecute 0xc00094c390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00094c3b0}] []  0xc00094c3c0 nil [] <nil> 0xc00094c3c4 <nil> map[] []} {Running [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-04-16 11:20:29 +0300 IDT  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 11:20:49 +0300 IDT  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 11:20:49 +0300 IDT  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-16 11:20:29 +0300 IDT  }]    192.168.66.101 10.244.0.84 [{10.244.0.84}] 2020-04-16 11:20:29 +0300 IDT [] [{manager {nil &ContainerStateRunning{StartedAt:2020-04-16 11:20:30 +0300 IDT,} nil} {nil nil nil} true 0 registry:5000/kubevirt/kubemacpool:latest docker-pullable://registry:5000/kubevirt/kubemacpool@sha256:d1ef5456b62b7d94eb1f74320be352b7eb9793d544a8379d97c3eb5ff59ca2e2 docker://acec8554569ca74d35bb74410b519520e2d9156707fae01a0b4fa796904f3975 0xc00094c4d7}] Burstable []}} 
2020-04-16T08:20:30.832Z	INFO	manager	Setting up client for manager
2020-04-16T08:20:30.834Z	INFO	manager	Setting up leader electionManager
2020-04-16T08:20:31.324Z	INFO	controller-runtime.metrics	metrics server is starting to listen	{"addr": ":8080"}
2020-04-16T08:20:31.326Z	INFO	manager	waiting for manager to become leader
I0416 08:20:31.326820       1 leaderelection.go:241] attempting to acquire leader lease  kubemacpool-system/kubemacpool-election...
I0416 08:20:48.733361       1 leaderelection.go:251] successfully acquired lease kubemacpool-system/kubemacpool-election
2020-04-16T08:20:48.733Z	DEBUG	manager.events	Normal	{"object": {"kind":"ConfigMap","namespace":"kubemacpool-system","name":"kubemacpool-election","uid":"c763aff0-5a12-4d33-90c0-d32795f5c953","apiVersion":"v1","resourceVersion":"1223030"}, "reason": "LeaderElection", "message": "kubemacpool-mac-controller-manager-0_f2bf72d5-64e0-4a0c-a33f-f03ddda22d72 became leader"}
2020-04-16T08:20:48.825Z	INFO	manager	marked this manager as leader for webhook	{"podName": "kubemacpool-mac-controller-manager-0"}
2020-04-16T08:20:48.825Z	INFO	manager	Setting up Manager
2020-04-16T08:20:49.290Z	INFO	controller-runtime.metrics	metrics server is starting to listen	{"addr": ":8080"}
2020-04-16T08:20:49.338Z	DEBUG	PoolManager	start InitMaps to reserve existing mac addresses before allocation new ones
2020-04-16T08:20:49.417Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "cluster-network-addons-operator-597b8ff899-lrmzs", "podNamespace": "cluster-network-addons-operator"}
2020-04-16T08:20:49.417Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "local-volume-provisioner-kvwf2", "podNamespace": "default"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "coredns-6955765f44-kbst8", "podNamespace": "kube-system"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "coredns-6955765f44-w8hh6", "podNamespace": "kube-system"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "etcd-node01", "podNamespace": "kube-system"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "kube-apiserver-node01", "podNamespace": "kube-system"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "kube-controller-manager-node01", "podNamespace": "kube-system"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "kube-flannel-ds-amd64-rxs7b", "podNamespace": "kube-system"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "kube-proxy-h7hxc", "podNamespace": "kube-system"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "kube-scheduler-node01", "podNamespace": "kube-system"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "kubemacpool-mac-controller-manager-0", "podNamespace": "kubemacpool-system"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "virt-api-57bc76d84d-mcvtt", "podNamespace": "kubevirt"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "virt-api-57bc76d84d-swkh2", "podNamespace": "kubevirt"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "virt-controller-775cb4757d-7mw4f", "podNamespace": "kubevirt"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "virt-controller-775cb4757d-pdp9c", "podNamespace": "kubevirt"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "virt-handler-t8br8", "podNamespace": "kubevirt"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "virt-operator-59d8fdd55b-7gxnp", "podNamespace": "kubevirt"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "virt-operator-59d8fdd55b-t2wxv", "podNamespace": "kubevirt"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "bridge-marker-d6fqf", "podNamespace": "linux-bridge"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "kube-cni-linux-bridge-plugin-w2s2r", "podNamespace": "linux-bridge"}
2020-04-16T08:20:49.418Z	DEBUG	PoolManager	InitMaps for pod	{"podName": "kube-multus-ds-amd64-ff7sd", "podNamespace": "multus"}
2020-04-16T08:20:49.432Z	INFO	manager	Setting up controller
2020-04-16T08:20:49.432Z	INFO	controller-runtime.webhook	registering webhook	{"path": "/mutate-pods"}
2020-04-16T08:20:49.432Z	INFO	controller-runtime.webhook	registering webhook	{"path": "/mutate-virtualmachines"}
2020-04-16T08:20:49.432Z	INFO	PoolManager	starting cleanup loop for waiting mac addresses
2020-04-16T08:20:49.432Z	INFO	controller-runtime.manager	starting metrics server	{"path": "/metrics"}
2020-04-16T08:20:49.532Z	INFO	webhook/server	Starting nodenetworkconfigurationpolicy webhook server
2020-04-16T08:20:49.532Z	INFO	webhook/server/certificate/manager	Updating CA bundle for webhook
2020-04-16T08:20:49.533Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "pod-controller", "source": "kind source: /, Kind="}
2020-04-16T08:20:49.534Z	INFO	controller-runtime.controller	Starting EventSource	{"controller": "virtualmachine-controller", "source": "kind source: /, Kind="}
2020-04-16T08:20:49.633Z	INFO	controller-runtime.controller	Starting Controller	{"controller": "pod-controller"}
2020-04-16T08:20:49.634Z	INFO	controller-runtime.controller	Starting Controller	{"controller": "virtualmachine-controller"}
2020-04-16T08:20:49.734Z	INFO	controller-runtime.controller	Starting workers	{"controller": "pod-controller", "worker count": 1}
2020-04-16T08:20:49.734Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.735Z	DEBUG	PoolManager	AllocatePodMac: Data	{"macmap": {}, "podmap": {}, "currentMac": "02:00:00:00:00:00"}
2020-04-16T08:20:49.735Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kube-system/kube-apiserver-node01"}
2020-04-16T08:20:49.735Z	INFO	controller-runtime.controller	Starting workers	{"controller": "virtualmachine-controller", "worker count": 1}
2020-04-16T08:20:49.735Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.735Z	DEBUG	PoolManager	AllocatePodMac: Data	{"macmap": {}, "podmap": {}, "currentMac": "02:00:00:00:00:00"}
2020-04-16T08:20:49.735Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kube-system/etcd-node01"}
2020-04-16T08:20:49.735Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.735Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kubemacpool-system/kubemacpool-mac-controller-manager-1"}
2020-04-16T08:20:49.735Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.735Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "linux-bridge/bridge-marker-d6fqf"}
2020-04-16T08:20:49.735Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.735Z	DEBUG	PoolManager	AllocatePodMac: Data	{"macmap": {}, "podmap": {}, "currentMac": "02:00:00:00:00:00"}
2020-04-16T08:20:49.735Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kubevirt/virt-api-57bc76d84d-mcvtt"}
2020-04-16T08:20:49.735Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.735Z	DEBUG	PoolManager	AllocatePodMac: Data	{"macmap": {}, "podmap": {}, "currentMac": "02:00:00:00:00:00"}
2020-04-16T08:20:49.735Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kube-system/kube-scheduler-node01"}
2020-04-16T08:20:49.735Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.736Z	DEBUG	PoolManager	AllocatePodMac: Data	{"macmap": {}, "podmap": {}, "currentMac": "02:00:00:00:00:00"}
2020-04-16T08:20:49.736Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kubevirt/virt-controller-775cb4757d-7mw4f"}
2020-04-16T08:20:49.736Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.736Z	DEBUG	PoolManager	AllocatePodMac: Data	{"macmap": {}, "podmap": {}, "currentMac": "02:00:00:00:00:00"}
2020-04-16T08:20:49.736Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kubevirt/virt-controller-775cb4757d-pdp9c"}
2020-04-16T08:20:49.736Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.736Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "cluster-network-addons-operator/cluster-network-addons-operator-597b8ff899-lrmzs"}
2020-04-16T08:20:49.736Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.736Z	DEBUG	PoolManager	AllocatePodMac: Data	{"macmap": {}, "podmap": {}, "currentMac": "02:00:00:00:00:00"}
2020-04-16T08:20:49.736Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kubevirt/virt-api-57bc76d84d-swkh2"}
2020-04-16T08:20:49.736Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.736Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "multus/kube-multus-ds-amd64-ff7sd"}
2020-04-16T08:20:49.736Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.736Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kube-system/kube-flannel-ds-amd64-rxs7b"}
2020-04-16T08:20:49.736Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.736Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kube-system/coredns-6955765f44-kbst8"}
2020-04-16T08:20:49.736Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.736Z	DEBUG	PoolManager	AllocatePodMac: Data	{"macmap": {}, "podmap": {}, "currentMac": "02:00:00:00:00:00"}
2020-04-16T08:20:49.736Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kube-system/kube-controller-manager-node01"}
2020-04-16T08:20:49.736Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.736Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kube-system/kube-proxy-h7hxc"}
2020-04-16T08:20:49.736Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.736Z	DEBUG	PoolManager	AllocatePodMac: Data	{"macmap": {}, "podmap": {}, "currentMac": "02:00:00:00:00:00"}
2020-04-16T08:20:49.736Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kubevirt/virt-operator-59d8fdd55b-t2wxv"}
2020-04-16T08:20:49.736Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.737Z	DEBUG	PoolManager	AllocatePodMac: Data	{"macmap": {}, "podmap": {}, "currentMac": "02:00:00:00:00:00"}
2020-04-16T08:20:49.737Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kubevirt/virt-operator-59d8fdd55b-7gxnp"}
2020-04-16T08:20:49.737Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.737Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "linux-bridge/kube-cni-linux-bridge-plugin-w2s2r"}
2020-04-16T08:20:49.737Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.737Z	DEBUG	PoolManager	AllocatePodMac: Data	{"macmap": {}, "podmap": {}, "currentMac": "02:00:00:00:00:00"}
2020-04-16T08:20:49.737Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kubevirt/virt-handler-t8br8"}
2020-04-16T08:20:49.737Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.737Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kube-system/coredns-6955765f44-w8hh6"}
2020-04-16T08:20:49.737Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.737Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "kubemacpool-system/kubemacpool-mac-controller-manager-0"}
2020-04-16T08:20:49.737Z	DEBUG	Pod Controller	got a pod event in the controller
2020-04-16T08:20:49.737Z	DEBUG	controller-runtime.controller	Successfully Reconciled	{"controller": "pod-controller", "request": "default/local-volume-provisioner-kvwf2"}
2020-04-16T08:20:49.742Z	INFO	webhook/server/certificate/manager	Starting cert manager
2020-04-16T08:20:49.742Z	INFO	webhook/server/certificate/manager	Wait for cert/key to be created

the server rejected our request for an unknown reason (get pods kubemacpool-mac-controller-manager-1)
• Failure [27.280 seconds]
Virtual Machines
/root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:28
  Check the client
  /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:38
    When trying to create a VM after all MAC addresses in range have been occupied
    /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:219
      should return an error because no MAC address is available [It]
      /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:220

      Unexpected error:
          <*errors.StatusError | 0xc0005cb0e0>: {
              ErrStatus: {
                  TypeMeta: {Kind: "", APIVersion: ""},
                  ListMeta: {
                      SelfLink: "",
                      ResourceVersion: "",
                      Continue: "",
                      RemainingItemCount: nil,
                  },
                  Status: "Failure",
                  Message: "Internal error occurred: failed calling webhook \"mutatevirtualmachines.kubemacpool.io\": Post https://kubemacpool-service.kubemacpool-system.svc:443/mutate-virtualmachines?timeout=30s: dial tcp 10.96.42.251:443: connect: connection refused",
                  Reason: "InternalError",
                  Details: {
                      Name: "",
                      Group: "",
                      Kind: "",
                      UID: "",
                      Causes: [
                          {
                              Type: "",
                              Message: "failed calling webhook \"mutatevirtualmachines.kubemacpool.io\": Post https://kubemacpool-service.kubemacpool-system.svc:443/mutate-virtualmachines?timeout=30s: dial tcp 10.96.42.251:443: connect: connection refused",
                              Field: "",
                          },
                      ],
                      RetryAfterSeconds: 0,
                  },
                  Code: 500,
              },
          }
          Internal error occurred: failed calling webhook "mutatevirtualmachines.kubemacpool.io": Post https://kubemacpool-service.kubemacpool-system.svc:443/mutate-virtualmachines?timeout=30s: dial tcp 10.96.42.251:443: connect: connection refused
      occurred

      /root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:43

Also attaching the relevant kube-apiserver pod logs:

W0416 11:44:48.656771       1 dispatcher.go:180] Failed calling webhook, failing closed mutatevirtualmachines.kubemacpool.io: failed calling webhook "mutatevirtualmachines.kubemacpool.io": Post https://kubemacpool-service.kubemacpool-system.svc:443/mutate-virtualmachines?timeout=30s: dial tcp 10.96.248.132:443: connect: connection refused
W0416 11:44:48.775357       1 dispatcher.go:180] Failed calling webhook, failing closed mutatevirtualmachines.kubemacpool.io: failed calling webhook "mutatevirtualmachines.kubemacpool.io": Post https://kubemacpool-service.kubemacpool-system.svc:443/mutate-virtualmachines?timeout=30s: dial tcp 10.96.248.132:443: connect: connection refused
W0416 11:44:48.777780       1 dispatcher.go:180] Failed calling webhook, failing closed mutatevirtualmachines.kubemacpool.io: failed calling webhook "mutatevirtualmachines.kubemacpool.io": Post https://kubemacpool-service.kubemacpool-system.svc:443/mutate-virtualmachines?timeout=30s: dial tcp 10.96.248.132:443: connect: connection refused

virtual machine e2e test #2162 sometimes fails

Following the exclusion of virtual machine e2e test #2162 in the following PR, this issue has been opened to track down why the test sometimes fails.

Test error log summary:

services "kubemacpool-service" not found
• Failure [0.026 seconds]
Virtual Machines
/root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:24
  Check the client
  /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:34
    When trying to create a VM after all MAC addresses in range have been occupied
    /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:180
      should return an error because no MAC address is available [It]
      /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:181

      Unexpected error:
          <*errors.StatusError | 0xc00012ed20>: {
              ErrStatus: {
                  TypeMeta: {Kind: "", APIVersion: ""},
                  ListMeta: {
                      SelfLink: "",
                      ResourceVersion: "",
                      Continue: "",
                      RemainingItemCount: nil,
                  },
                  Status: "Failure",
                  Message: "configmaps \"kubemacpool-mac-range-config\" not found",
                  Reason: "NotFound",
                  Details: {
                      Name: "kubemacpool-mac-range-config",
                      Group: "",
                      Kind: "configmaps",
                      UID: "",
                      Causes: nil,
                      RetryAfterSeconds: 0,
                  },
                  Code: 404,
              },
          }
          configmaps "kubemacpool-mac-range-config" not found
      occurred

      /root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:43

virtual machine e2e test #2165 sometimes fails

Following the exclusion of virtual machine e2e test #2165 in the following PR, this issue has been opened to track down why the test sometimes fails.

Test error log summary:

• Failure [0.030 seconds]
Virtual Machines
/root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:24
  Check the client
  /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:34
    when trying to create a VM after a MAC address has just been released duo to a VM deletion
    /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:203
      should re-use the released MAC address for the creation of the new VM and not return an error [It]
      /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:204

      Unexpected error:
          <*errors.StatusError | 0xc0005a4be0>: {
              ErrStatus: {
                  TypeMeta: {Kind: "", APIVersion: ""},
                  ListMeta: {
                      SelfLink: "",
                      ResourceVersion: "",
                      Continue: "",
                      RemainingItemCount: nil,
                  },
                  Status: "Failure",
                  Message: "configmaps \"kubemacpool-mac-range-config\" not found",
                  Reason: "NotFound",
                  Details: {
                      Name: "kubemacpool-mac-range-config",
                      Group: "",
                      Kind: "configmaps",
                      UID: "",
                      Causes: nil,
                      RetryAfterSeconds: 0,
                  },
                  Code: 404,
              },
          }
          configmaps "kubemacpool-mac-range-config" not found
      occurred

      /root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:43

Run controller in master nodes

It's important to run the kubemacpool pods on the master nodes.

Kubemacpool is a critical webhook; if it is not working (for example due to node issues), we are not able to create new pods and VMs.
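
A minimal sketch of what this could look like in the manager's pod spec, assuming the standard master-node label and taint (exact names may differ per cluster and Kubernetes version):

      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule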

kubemacpool pod "Running" status doesn't mean that pod is operational

The issue is that the "Running" state of the kubemacpool pod doesn't mean the pod is operational. The code inside the pod still needs a few more seconds to come up and start serving requests. As a result of this behaviour, we cannot create a VM for a few seconds (1-3 seconds) after a kubemacpool restart.
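
For reference, the usual way to express "operational" rather than just "Running" is a readiness probe on the webhook's health endpoint, roughly as below; the /readyz path and healthz port match the pod spec quoted in the first issue above, the other values are assumptions:

        readinessProbe:
          httpGet:
            path: /readyz
            port: healthz
          initialDelaySeconds: 10
          periodSeconds: 10
          failureThreshold: 3

Consumers (including the e2e tests) would then wait for the pod's Ready condition instead of the Running phase before creating VMs.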

Add kubevirt.io/v1 apiVersion to the webhook rules

What happened:
kubemacpool does not provide a MAC to kubevirt machines deployed with apiVersion kubevirt.io/v1

What you expected to happen:

mac address assigned.

How to reproduce it (as minimally and precisely as possible):

Create a new machine with the apiVersion kubevirt.io/v1 (kubevirt v0.37.0).

Anything else we need to know?:

The webhook rules need to be updated to include v1. The current rules are:

rules:
  - apiGroups:
    - kubevirt.io
    apiVersions:
    - v1alpha3
    operations:
    - CREATE
    - UPDATE
    resources:
    - virtualmachines
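
A sketch of the updated rules, simply adding v1 alongside v1alpha3; the rest of the webhook configuration is assumed to stay unchanged:

rules:
  - apiGroups:
    - kubevirt.io
    apiVersions:
    - v1alpha3
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - virtualmachines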

Getting "the object has been modified; please apply your changes to the latest version and try again" errors in log

What happened:
During regular operation of the kubemacpool pod, we get conflict errors when trying to update objects.

2020-05-24T14:53:39.472Z	DEBUG	controller-runtime.webhook.webhooks	wrote response	{"webhook": "/mutate-virtualmachines", "UID": "a8293e24-3338-4816-a09b-645b5eeb4572", "allowed": true, "result": {}, "resultError": "got runtime.Object without object metadata: &Status{ListMeta:ListMeta{SelfLink:,ResourceVersion:,Continue:,RemainingItemCount:nil,},Status:,Message:,Reason:,Details:nil,Code:200,}"}
2020-05-24T14:53:39.474Z	ERROR	VirtualMachine Controller.addFinalizerAndUpdate	failed to update the VM with the new finalizer	{"virtualMachineName": "testvmrvgsflf2q6l84cphpgkh7lr9ksdsknkl", "virtualMachineNamespace": "kubemacpool-test", "error": "Operation cannot be fulfilled on virtualmachines.kubevirt.io \"testvmrvgsflf2q6l84cphpgkh7lr9ksdsknkl\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/go-logr/zapr.(*zapLogger).Error
	/root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/go-logr/zapr/zapr.go:128

github.com/k8snetworkplumbingwg/kubemacpool/pkg/controller/virtualmachine.(*ReconcilePolicy).addFinalizerAndUpdate
	/root/github.com/k8snetworkplumbingwg/kubemacpool/pkg/controller/virtualmachine/virtualmachine_controller.go:130
github.com/k8snetworkplumbingwg/kubemacpool/pkg/controller/virtualmachine.(*ReconcilePolicy).Reconcile
	/root/github.com/k8snetworkplumbingwg/kubemacpool/pkg/controller/virtualmachine/virtualmachine_controller.go:109
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
	/root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
	/root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
2020-05-24T14:53:39.474Z	ERROR	controller-runtime.controller	Reconciler error	{"controller": "virtualmachine-controller", "request": "kubemacpool-test/testvmrvgsflf2q6l84cphpgkh7lr9ksdsknkl", "error": "Operation cannot be fulfilled on virtualmachines.kubevirt.io \"testvmrvgsflf2q6l84cphpgkh7lr9ksdsknkl\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/go-logr/zapr.(*zapLogger).Error
	/root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/go-logr/zapr/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:258
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
	/root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
	/root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88

What you expected to happen:
This should be properly handled by a RetryOnConflict method.

How to reproduce it (as minimally and precisely as possible):
simply run e2e tests and observe the logs.

Anything else we need to know?:
This is not a low-priority issue, but nonetheless we should clean the logs of any false-negative errors.
Environment: k8s-1.17 provider

Test that KubeMacPool doesn't block SDN recovery

KubeMacPool is connected to the default Kubernetes network. When the default network pods are being upgraded, we may end up in a deadlock where pods are waiting for the KubeMacPool webhook, but it is inaccessible. We should create a test covering this scenario and fix it accordingly.

#69

virtual machine e2e test: testing finalizers sometimes fails

Following the exclusion of the virtual machine e2e test "testing finalizers" in the following PR, this issue has been opened to track down why the test sometimes fails.

Test error log summary (full logs attached):

2020-03-01T13:15:31.660Z	ERROR	VirtualMachine Controller	failed to update the VM with the new finalizer	{"virtualMachineName": "testvmsk5jgjwksrv7lrb5wnmdtmtrw9qz2w4s", "virtualMachineNamespace": "kubemacpool-test", "error": "Operation cannot be fulfilled on virtualmachines.kubevirt.io \"testvmsk5jgjwksrv7lrb5wnmdtmtrw9qz2w4s\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/go-logr/zapr.(*zapLogger).Error
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/go-logr/zapr/zapr.go:128
github.com/k8snetworkplumbingwg/kubemacpool/pkg/controller/virtualmachine.(*ReconcilePolicy).addFinalizerAndUpdate
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/pkg/controller/virtualmachine/virtualmachine_controller.go:124
github.com/k8snetworkplumbingwg/kubemacpool/pkg/controller/virtualmachine.(*ReconcilePolicy).Reconcile
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/pkg/controller/virtualmachine/virtualmachine_controller.go:101
github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:256
github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232
github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211
github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152
github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153
github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait.Until
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
2020-03-01T13:15:31.660Z	ERROR	controller-runtime.controller	Reconciler error	{"controller": "virtualmachine-controller", "request": "kubemacpool-test/testvmsk5jgjwksrv7lrb5wnmdtmtrw9qz2w4s", "error": "Operation cannot be fulfilled on virtualmachines.kubevirt.io \"testvmsk5jgjwksrv7lrb5wnmdtmtrw9qz2w4s\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/go-logr/zapr.(*zapLogger).Error
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/go-logr/zapr/zapr.go:128
github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:258
github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:232
github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:211
github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152
github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153
github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait.Until
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88

Service: &Service{ObjectMeta:{kubemacpool-service  kubemacpool-system /api/v1/namespaces/kubemacpool-system/services/kubemacpool-service 304f234e-0050-416f-8c1a-fa6717c49d0e 24801 0 2020-03-01 14:53:35 +0200 IST <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"kubemacpool-service","namespace":"kubemacpool-system"},"spec":{"ports":[{"port":443,"targetPort":8000}],"publishNotReadyAddresses":true,"selector":{"kubemacpool-leader":"true"}}}
] [{apps/v1 Deployment kubemacpool-mac-controller-manager b5a8d87d-8575-4564-a371-53b95fc267b4 <nil> <nil>}] []  []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:443,TargetPort:{0 8000 },NodePort:0,},},Selector:map[string]string{kubemacpool-leader: true,},ClusterIP:10.96.76.246,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:true,SessionAffinityConfig:nil,IPFamily:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},},}Endpoint: &Endpoints{ObjectMeta:{kubemacpool-service  kubemacpool-system /api/v1/namespaces/kubemacpool-system/endpoints/kubemacpool-service f4003912-b32c-4136-974d-49be16947818 30345 0 2020-03-01 14:53:35 +0200 IST <nil> <nil> map[] map[endpoints.kubernetes.io/last-change-trigger-time:2020-03-01T13:15:31Z] [] []  []},Subsets:[]EndpointSubset{EndpointSubset{Addresses:[]EndpointAddress{EndpointAddress{IP:10.244.0.40,TargetRef:&ObjectReference{Kind:Pod,Namespace:kubemacpool-system,Name:kubemacpool-mac-controller-manager-6dd6599854-wk7g7,UID:01c1a72f-ba88-4d82-b6de-7e58b258a58b,APIVersion:,ResourceVersion:30344,FieldPath:,},Hostname:,NodeName:*node01,},},NotReadyAddresses:[]EndpointAddress{},Ports:[]EndpointPort{EndpointPort{Name:,Port:8000,Protocol:TCP,},},},},}• Failure [21.320 seconds]
Virtual Machines
/root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:24
  Check the client
  /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:34
    testing finalizers
    /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:359
      When the VM is not being deleted
      /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:360
        should have a finalizer and deletion timestamp should be zero  [It]
        /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:361

        Expected
            <bool>: false
        to equal
            <bool>: true

        /root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:38

Mac re-usage doesn't work.

Scenario:
Step 1: Create a MAC pool with 3 available addresses.
Step 2: Add VM_A with 3 interfaces.
Step 3: Check that 2 MAC addresses were allocated to the new VM's interfaces.
Step 4: Try to add VM_B with 3 interfaces. Make sure this step fails due to MAC starvation.
Step 5: Delete VM_A and make sure that 2 MAC addresses were released.
Step 6: Try to recreate VM_B. This step fails due to the bug.
Issue: MAC re-use doesn't work. A released MAC is not available for re-assignment.

requested changes in https://github.com/k8snetworkplumbingwg/kubemacpool/pull/100#discussion_r388329464

#100
In order to keep track of those notes, I'm adding them here:

Expect(err).ToNot(HaveOccurred())

  • Maybe we can keep the failure message here and in the rest of the Expects, something like:
    Expect(err).ToNot(HaveOccurred(), "failed to apply the new vm object")

Expect(strings.Contains(err.Error(), "failed to allocate requested mac address")).To(Equal(true))

  • Maybe check that err is not nil before checking that the error is as expected:
Expect(err).To(HaveOccurred())
Expect(strings.Contains(err.Error(), "failed to allocate requested mac address")).To(Equal(true))

After applying invalid yaml mac addresses are not released

Once you apply an invalid yaml, MAC addresses are not released.
Scenario:
Step 1: Apply invalid yaml:
oc apply -f temp_a
The "" is invalid: spec.template.spec.networks: every network must be mapped to an interface

Step 2: Check logs
oc logs kubemacpool-mac-controller-manager-844cd8fbf8-rw6vk -n kubemacpool-system
{"level":"info","ts":1561535723.8990824,"logger":"PoolManager","msg":"mac from pool was allocated for virtual machine","allocatedMac":"02:aa:bf:00:00:01","virtualMachineName":"vma","virtualMachineNamespace":"myproject"}
{"level":"info","ts":1561535723.8990977,"logger":"PoolManager","msg":"mac from pool was allocated for virtual machine","allocatedMac":"02:aa:bf:00:00:02","virtualMachineName":"vma","virtualMachineNamespace":"myproject"}
{"level":"info","ts":1561535723.8991125,"logger":"PoolManager","msg":"mac from pool was allocated for virtual machine","allocatedMac":"02:aa:bf:00:00:03","virtualMachineName":"vma","virtualMachineNamespace":"myproject"}

Step 3: Reapply invalid yaml
oc apply -f temp_a

Step 4: Check logs:
oc logs kubemacpool-mac-controller-manager-844cd8fbf8-rw6vk -n kubemacpool-system
{"level":"info","ts":1561535798.2916842,"logger":"PoolManager","msg":"mac from pool was allocated for virtual machine","allocatedMac":"02:aa:bf:00:00:04","virtualMachineName":"vma","virtualMachineNamespace":"myproject"}
{"level":"info","ts":1561535798.2917054,"logger":"PoolManager","msg":"mac from pool was allocated for virtual machine","allocatedMac":"02:aa:bf:00:00:05","virtualMachineName":"vma","virtualMachineNamespace":"myproject"}
{"level":"info","ts":1561535798.2917206,"logger":"PoolManager","msg":"mac from pool was allocated for virtual machine","allocatedMac":"02:aa:bf:00:00:06","virtualMachineName":"vma","virtualMachineNamespace":"myproject"}
{"level":"info","ts":1561535798.2917743,"logger":"PoolManager","msg":"mac from pool was allocated for virtual machine","allocatedMac":"02:aa:bf:00:00:07","virtualMachineName":"vma","virtualMachineNamespace":"myproject"}

Bug: once an invalid yaml is applied, MAC addresses are not released.

kubemacpool doesn't release mac addresses of removed vm's interfaces

Issue: kubemacpool doesn't release the MAC addresses of a VM's removed interfaces.
Scenario:
Create VM-A with 2 interfaces + a static MAC on one of them:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vma    
  namespace: myproject
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
            - name: br1
              bridge: {}
            - name: br2
              macAddress: 06:bb:bb:00:00:01
              bridge: {}
        resources:
          requests:
            memory: 2G      
        cpu:
          cores: 1
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
              runcmd:
                - sudo nmcli con mod 'Wired connection 1' ipv4.address 10.201.0.1/24
                - sudo nmcli con mod 'Wired connection 1' ipv4.method manual
                - sudo nmcli con up 'Wired connection 1'
                - sudo nmcli con mod 'Wired connection 2' ipv4.address 10.202.0.1/24
                - sudo nmcli con mod 'Wired connection 2' ipv4.method manual
                - sudo nmcli con up 'Wired connection 2'
      networks:
        - name: default
          pod: {}
        - multus:
            networkName: br1
          name: br1
        - multus:
            networkName: br1
          name: br2

Remove the interface with the static MAC from VM-A by deleting the following sections (from the interfaces and networks lists respectively):

            - name: br2
              macAddress: 06:bb:bb:00:00:01
              bridge: {}

        - multus:
            networkName: br1
          name: br2

Reapply the yaml:

oc apply -f yaml

Create a new VM-B with the same static MAC as was used by VM-A:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vmb    
  namespace: myproject
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
            - name: br1
              bridge: {}
            - name: br2
              macAddress: 06:bb:bb:00:00:01
              bridge: {}
        resources:
          requests:
            memory: 2G      
        cpu:
          cores: 1
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
              runcmd:
                - sudo nmcli con mod 'Wired connection 1' ipv4.address 10.201.0.1/24
                - sudo nmcli con mod 'Wired connection 1' ipv4.method manual
                - sudo nmcli con up 'Wired connection 1'
                - sudo nmcli con mod 'Wired connection 2' ipv4.address 10.202.0.1/24
                - sudo nmcli con mod 'Wired connection 2' ipv4.method manual
      networks:
        - name: default
          pod: {}
        - multus:
            networkName: br1
          name: br1
        - multus:
            networkName: br1
          name: br2

Error:
{"level":"error","ts":1562074389.970346,"logger":"PoolManager","msg":"mac address already allocated","error":"failed to allocate requested mac address","stacktrace":"github.com/K8sNetworkPlumbingWG/kubemacpool/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/K8sNetworkPlumbingWG/kubemacpool/pkg/pool-manager.(*PoolManager).allocateRequestedVirtualMachineInterfaceMac\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/pkg/pool-manager/virtualmachine_pool.go:145\ngithub.com/K8sNetworkPlumbingWG/kubemacpool/pkg/pool-manager.(*PoolManager).AllocateVirtualMachineMac\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/pkg/pool-manager/virtualmachine_pool.go:68\ngithub.com/K8sNetworkPlumbingWG/kubemacpool/pkg/webhook/virtualmachine.(*virtualMachineAnnotator).mutateVirtualMachinesFn\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/pkg/webhook/virtualmachine/virtualmachine.go:98\ngithub.com/K8sNetworkPlumbingWG/kubemacpool/pkg/webhook/virtualmachine.(*virtualMachineAnnotator).Handle\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/pkg/webhook/virtualmachine/virtualmachine.go:85\ngithub.com/K8sNetworkPlumbingWG/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Webhook).handleMutating\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/webhook/admission/webhook.go:133\ngithub.com/K8sNetworkPlumbingWG/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Webhook).Handle\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/webhook/admission/webhook.go:120\ngithub.com/K8sNetworkPlumbingWG/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Webhook).ServeHTTP\n\t/go/src/github.com/K8sNetworkPlumbingWG/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/webhook/admission/http.go:93\nnet/http.(*ServeMux).ServeHTTP\n\t/usr/lib/golang/src/net/http/server.go:2361\nnet/http.serverHandler.ServeHTTP\n\t/usr/lib/golang/src/net/http/server.go:2741\nnet/http.initNPNRequest.ServeHTTP\n\t/usr/lib/golang/src/net/http/server.go:3291\nnet/http.Handler.ServeHTTP-fm\n\t/usr/lib/golang/src/net/http/h2_bundle.go:5592\nnet/http.(*http2serverConn).runHandler\n\t/usr/lib/golang/src/net/http/h2_bundle.go:5877"}

Same MAC on two VMs

You can end up with the same MAC address assigned to 2 different VMs (for example VM_A and VM_B).
Scenario:
Step 1: Find out which MAC address is going to be used as the next available one.
oc logs kubemacpool-mac-controller-manager-6674dd57cf-94xvj -n kubemacpool-system | grep current:
2019-04-15T09:17:48.160Z DEBUG PoolManager AllocateVirtualMachineMac: data {"macmap": {"02:50:b6:00:00:00":"Allocated","02:50:b6:00:00:01":"Allocated"}, "podmap": {}, "vmmap": {"myproject/vmtest":["02:50:b6:00:00:00","02:50:b6:00:00:01"]}, "currentMac": "02:50:b6:00:00:01"}
Step 2: Assign this MAC manually to VM_A. Please note: do not run this VM.
Step 3: Add VM_B with automatic MAC assignment. This VM will use currentMac, which is already allocated to VM_A.
Step 4: Run both VMs.
Issue: You have 2 VMs with the same MAC address.

All MAC addresses are lost when a VM yaml is applied more than once

Scenario:
1) Create VM-A:

oc apply -f vm_a.yaml

2) Check the MAC addresses in the spec:

oc describe vm vma | grep 'Mac '
Mac Address: 02:83:e8:00:00:37

3) Apply the same yaml again:

oc apply -f vm_a.yaml

4) Check the MAC in the VM spec:

oc describe vm vma | grep 'Mac '
BUG: there is no MAC address.

vm yaml:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vma        
  namespace: myproject
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              bridge: {}
            - name: br1
              bridge: {}
        resources:
          requests:
            memory: 1G      
        cpu:
          cores: 2
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
              runcmd:
                - sudo nmcli con mod 'Wired connection 1' ipv4.address 10.201.0.1/24
                - sudo nmcli con mod 'Wired connection 1' ipv4.method manual
                - sudo nmcli con up 'Wired connection 1'
      networks:
        - name: default
          pod: {}
        - multus:
            networkName: br1
          name: br1

documentation regarding multus installation is not up to date

What happened:
The documentation instructions regarding multus are not up to date, resulting in the following error:

hades01:kubemacpool $ kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/kubemacpool/master/hack/multus/multus.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
customresourcedefinition.apiextensions.k8s.io/network-attachment-definitions.k8s.cni.cncf.io configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/multus configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io/multus configured
serviceaccount/multus created
error: unable to recognize "https://raw.githubusercontent.com/k8snetworkplumbingwg/kubemacpool/master/hack/multus/multus.yaml": no matches for kind "DaemonSet" in version "extensions/v1beta1"

What you expected to happen:
to run the yaml with no errors.
How to reproduce it (as minimally and precisely as possible):
Follow the documentation and run:

kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/kubemacpool/master/hack/multus/multus.yaml

Anything else we need to know?:

Environment:

Test handling of VM attached to the Pod network

What happened:

We don't have any tests making sure our logic for VM interfaces attached to Pod networks works all right.

What you expected to happen:

We test what happens when a VM has an interface connected to the Pod network through bridge and masquerade bindings. It should also be clearly documented in the README.

move multus manifest daemonsets to apps/v1

What happened:
In kubemacpool, some manifests contain a DaemonSet with apiVersion extensions/v1beta1, while it should be apps/v1.
Files: Multus.yaml, cni-plugin.yaml
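
A minimal sketch of the change for the multus manifest; note that apps/v1 DaemonSets require an explicit spec.selector. The name and labels below are assumptions for illustration:

apiVersion: apps/v1            # was: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-multus-ds-amd64   # name assumed from the pod list above
spec:
  selector:                    # required in apps/v1
    matchLabels:
      app: multus
  template:
    metadata:
      labels:
        app: multus
    # the existing pod template from the current manifest continues here unchanged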
What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

Pod mac is added into the podMap instead of VMMap

Scenario:
Step 1: Create a VM with 2 or 3 interfaces bridged to a Linux bridge.
Step 2: Turn on this VM.
Step 3: Check that the MACs were allocated from the default MAC pool.
Step 4: Shrink the MAC pool and restart the kubemacpool-mac-controller-manager pod.
Step 5: Check the logs.
The issue is below:

Data {"macmap": {"02:00:00:00:00:05":"Allocated","02:00:00:00:00:06":"Allocated"}, "podmap": {"myproject/virt-launcher-vmtest-m9hxg":{"br1":"02:00:00:00:00:05","br2":"02:00:00:00:00:06"}}, "vmmap": {}, "currentMac": "02:00:00:00:00:00"}

It adds the pod into the podMap and not under the vmMap.

virtual machine e2e test #2995 sometimes fails

Following the exclusion of virtual machine e2e test #2995 in the following commit, this issue has been opened to track down why the test sometimes fails.

Test error log summary (full logs in link):

• Failure [82.615 seconds]
Virtual Machines
/tmp/kubemacpool/kubemacpool/tests/virtual_machines_test.go:24
  Check the client
  /tmp/kubemacpool/kubemacpool/tests/virtual_machines_test.go:34
    When a VM's NIC is removed and a new VM is created with the same MAC
    /tmp/kubemacpool/kubemacpool/tests/virtual_machines_test.go:492
      should successfully release the MAC and the new VM should be created with no errors [It]
      /tmp/kubemacpool/kubemacpool/tests/virtual_machines_test.go:493
      Timed out after 50.001s.
      
      failed to create the new VM
      Unexpected error:
          <*errors.StatusError | 0xc000b85a40>: {
              ErrStatus: {
                  TypeMeta: {Kind: "", APIVersion: ""},
                  ListMeta: {
                      SelfLink: "",
                      ResourceVersion: "",
                      Continue: "",
                      RemainingItemCount: nil,
                  },
                  Status: "Failure",
                  Message: "admission webhook \"mutatevirtualmachines.kubemacpool.io\" denied the request: Failed to create virtual machine allocation error: the range is full",
                  Reason: "",
                  Details: nil,
                  Code: 500,
              },
          }
          admission webhook "mutatevirtualmachines.kubemacpool.io" denied the request: Failed to create virtual machine allocation error: the range is full
      occurred
      /tmp/kubemacpool/kubemacpool/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:56

Don't use latest image version in manifests

What happened:
We redeployed an older version of the manifest for our test cluster and it kept failing due to RBAC not being correct. The software expected to be able to list namespaces, but the manifest we used didn't allow that.

This is because you're using the latest tag in the deployment pod (which is a really bad idea anyways). Please consider locking this to the current tag of a release.

What you expected to happen:
Manifests should specify a compatible version, not latest.

How to reproduce it (as minimally and precisely as possible):
Deploy any manifest before v0.35.0.
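
A sketch of a pinned image reference in the manager container spec; the registry path and tag below are placeholders for illustration, not the project's actual release coordinates:

      containers:
        - name: manager
          # pin to a released tag instead of :latest
          image: quay.io/kubevirt/kubemacpool:v0.35.0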

prevent dangling goroutines if controller-manager returns with error

What happened:
the controller-manager docs say that Start() returns in two cases:

  • an error occurred
  • context was canceled

If Start returns an error, the k.waitForSignal() goroutine is still running. If Start returns an error on every iteration for some reason, we may be spawning a new go k.waitForSignal() goroutine in each iteration.
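
A minimal sketch of one way to avoid this, assuming the current context-based controller-runtime manager API; the signalWaiter type and function names here are hypothetical stand-ins, not the actual kubemacpool code:

package example

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/manager"
)

// signalWaiter stands in for the component whose waitForSignal goroutine
// could otherwise leak; the type and method names are hypothetical.
type signalWaiter struct{}

func (s *signalWaiter) waitForSignal(ctx context.Context) {
	// A real implementation would also watch its signal source here.
	<-ctx.Done()
}

// runManager derives a per-iteration context and cancels it when Start
// returns, so the companion goroutine exits on error as well as on
// cancellation of the parent context.
func runManager(parent context.Context, mgr manager.Manager, s *signalWaiter) error {
	ctx, cancel := context.WithCancel(parent)
	defer cancel()

	go s.waitForSignal(ctx)

	return mgr.Start(ctx)
}

With this shape, a failing Start cannot leave goroutines behind, and repeated retries do not accumulate them.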

What you expected to happen:
no dangling goroutines

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

add pool allocation to statefulsets and deployments

Add the ability to allocate and preserve MAC addresses for StatefulSets and Deployments.

Add a new annotation kubemacpool/mac with a list of requested or allocated MAC addresses related to the Deployment or StatefulSet.

When scaling up the deployment, more addresses are added to the list; when scaling down, the already allocated ones stay there.

The addresses will be freed only after the Deployment/StatefulSet is removed.

virtual machine e2e test #2243 sometimes fails

Following the exclusion of virtual machine e2e test #2433 in the following PR, this issue has been opened to resolve why the test sometimes fails.

The update should switch to using RetryOnConflict; a sketch follows the log below.
A summary of the test error logs (full logs attached):

] [{apps/v1 Deployment kubemacpool-mac-controller-manager b5a8d87d-8575-4564-a371-53b95fc267b4 <nil> <nil>}] []  []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:443,TargetPort:{0 8000 },NodePort:0,},},Selector:map[string]string{kubemacpool-leader: true,},ClusterIP:10.96.76.246,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:true,SessionAffinityConfig:nil,IPFamily:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},},}Endpoint: &Endpoints{ObjectMeta:{kubemacpool-service  kubemacpool-system /api/v1/namespaces/kubemacpool-system/endpoints/kubemacpool-service f4003912-b32c-4136-974d-49be16947818 29733 0 2020-03-01 14:53:35 +0200 IST <nil> <nil> map[] map[endpoints.kubernetes.io/last-change-trigger-time:2020-03-01T13:13:03Z] [] []  []},Subsets:[]EndpointSubset{EndpointSubset{Addresses:[]EndpointAddress{EndpointAddress{IP:10.244.0.39,TargetRef:&ObjectReference{Kind:Pod,Namespace:kubemacpool-system,Name:kubemacpool-mac-controller-manager-6dd6599854-2dk82,UID:54df5f0c-c956-464c-bfcc-74525e470b8c,APIVersion:,ResourceVersion:29732,FieldPath:,},Hostname:,NodeName:*node01,},},NotReadyAddresses:[]EndpointAddress{},Ports:[]EndpointPort{EndpointPort{Name:,Port:8000,Protocol:TCP,},},},},}• Failure [149.307 seconds]
Virtual Machines
/root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:24
  Check the client
  /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:34
    When we re-apply a VM yaml
    /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:280
      should assign to the VM the same MAC addresses as before the re-apply, and not return an error [It]
      /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:281

      Timed out after 120.001s.
      failed to update VM
      Unexpected error:
          <*errors.StatusError | 0xc000844500>: {
              ErrStatus: {
                  TypeMeta: {Kind: "", APIVersion: ""},
                  ListMeta: {
                      SelfLink: "",
                      ResourceVersion: "",
                      Continue: "",
                      RemainingItemCount: nil,
                  },
                  Status: "Failure",
                  Message: "Operation cannot be fulfilled on virtualmachines.kubevirt.io \"testvmb8twpj27rrlwsrgh7k869ccpjxdcgjsc\": the object has been modified; please apply your changes to the latest version and try again",
                  Reason: "Conflict",
                  Details: {
                      Name: "testvmb8twpj27rrlwsrgh7k869ccpjxdcgjsc",
                      Group: "kubevirt.io",
                      Kind: "virtualmachines",
                      UID: "",
                      Causes: nil,
                      RetryAfterSeconds: 0,
                  },
                  Code: 409,
              },
          }
          Operation cannot be fulfilled on virtualmachines.kubevirt.io "testvmb8twpj27rrlwsrgh7k869ccpjxdcgjsc": the object has been modified; please apply your changes to the latest version and try again
      occurred

      /root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/onsi/gomega/internal/asyncassertion/async_assertion.go:56
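
A minimal sketch of the suggested fix, using RetryOnConflict from k8s.io/client-go/util/retry together with a controller-runtime client; the kubevirt import path and the helper name are assumptions and may differ between versions:

package example

import (
	"context"

	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/util/retry"
	kubevirtv1 "kubevirt.io/client-go/api/v1" // import path may differ by KubeVirt version
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// updateVMWithRetry re-reads the VM and reapplies the mutation on every
// Conflict (409) instead of failing on a single stale resourceVersion.
func updateVMWithRetry(ctx context.Context, c client.Client, key types.NamespacedName, mutate func(*kubevirtv1.VirtualMachine)) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		vm := &kubevirtv1.VirtualMachine{}
		if err := c.Get(ctx, key, vm); err != nil {
			return err
		}
		mutate(vm)
		return c.Update(ctx, vm)
	})
}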

virtual machine e2e test #2633 sometimes fails

Following the exclusion of virtual machine e2e test #2633 in the following PR, this issue has been opened to resolve why the test sometimes fails.

A summary of the test error logs (full logs attached):

2020-03-04T06:15:56.949Z	ERROR	PoolManager	mac address already allocated	{"error": "failed to allocate requested mac address"}
github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/go-logr/zapr.(*zapLogger).Error
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/go-logr/zapr/zapr.go:128
github.com/k8snetworkplumbingwg/kubemacpool/pkg/pool-manager.(*PoolManager).allocateRequestedVirtualMachineInterfaceMac
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/pkg/pool-manager/virtualmachine_pool.go:226
github.com/k8snetworkplumbingwg/kubemacpool/pkg/pool-manager.(*PoolManager).AllocateVirtualMachineMac
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/pkg/pool-manager/virtualmachine_pool.go:71
github.com/k8snetworkplumbingwg/kubemacpool/pkg/webhook/virtualmachine.(*virtualMachineAnnotator).mutateCreateVirtualMachinesFn
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/pkg/webhook/virtualmachine/virtualmachine.go:104
github.com/k8snetworkplumbingwg/kubemacpool/pkg/webhook/virtualmachine.(*virtualMachineAnnotator).Handle
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/pkg/webhook/virtualmachine/virtualmachine.go:71
github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Webhook).Handle
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/webhook/admission/webhook.go:135
github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/webhook/admission.(*Webhook).ServeHTTP
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/webhook/admission/http.go:87
github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/webhook.instrumentedHook.func1
	/go/src/github.com/k8snetworkplumbingwg/kubemacpool/vendor/sigs.k8s.io/controller-runtime/pkg/webhook/server.go:117
net/http.HandlerFunc.ServeHTTP
	/usr/local/go/src/net/http/server.go:1995
net/http.(*ServeMux).ServeHTTP
	/usr/local/go/src/net/http/server.go:2375
net/http.serverHandler.ServeHTTP
	/usr/local/go/src/net/http/server.go:2774
net/http.(*conn).serve
	/usr/local/go/src/net/http/server.go:1878
2020-03-04T06:15:56.949Z	DEBUG	PoolManager	Revert vm allocation	{"vmName": "kubemacpool-test/testvmgx562t9gvltmgm7cnr7qnzxxrjt49fjq", "allocations": {}}
2020-03-04T06:15:56.949Z	DEBUG	controller-runtime.webhook.webhooks	wrote response	{"webhook": "/mutate-virtualmachines", "UID": "6120386d-a04f-47e3-a964-1840c1e00c8f", "allowed": false, "result": {}, "resultError": "got runtime.Object without object metadata: &Status{ListMeta:ListMeta{SelfLink:,ResourceVersion:,Continue:,RemainingItemCount:nil,},Status:,Message:Failed to create virtual machine allocation error: failed to allocate requested mac address,Reason:,Details:nil,Code:500,}"}

Service: &Service{ObjectMeta:{kubemacpool-service  kubemacpool-system /api/v1/namespaces/kubemacpool-system/services/kubemacpool-service d9c05903-663c-421f-9b6f-59b1ca9b6f42 1989 0 2020-03-03 14:49:16 +0200 IST <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"kubemacpool-service","namespace":"kubemacpool-system"},"spec":{"ports":[{"port":443,"targetPort":8000}],"publishNotReadyAddresses":true,"selector":{"kubemacpool-leader":"true"}}}
] [{apps/v1 Deployment kubemacpool-mac-controller-manager cf48b8d2-18c3-4d01-ad33-e95b72db7e77 <nil> <nil>}] []  []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:443,TargetPort:{0 8000 },NodePort:0,},},Selector:map[string]string{kubemacpool-leader: true,},ClusterIP:10.96.30.206,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:true,SessionAffinityConfig:nil,IPFamily:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},},}Endpoint: &Endpoints{ObjectMeta:{kubemacpool-service  kubemacpool-system /api/v1/namespaces/kubemacpool-system/endpoints/kubemacpool-service 8a81e5ca-a6a0-4b91-a49b-67c892b67f82 246005 0 2020-03-03 14:49:16 +0200 IST <nil> <nil> map[] map[endpoints.kubernetes.io/last-change-trigger-time:2020-03-04T06:15:55Z] [] []  []},Subsets:[]EndpointSubset{EndpointSubset{Addresses:[]EndpointAddress{EndpointAddress{IP:10.244.0.183,TargetRef:&ObjectReference{Kind:Pod,Namespace:kubemacpool-system,Name:kubemacpool-mac-controller-manager-6dd6599854-t42nx,UID:0eb634dc-475e-484f-ae59-5263edc751db,APIVersion:,ResourceVersion:246003,FieldPath:,},Hostname:,NodeName:*node01,},},NotReadyAddresses:[]EndpointAddress{},Ports:[]EndpointPort{EndpointPort{Name:,Port:8000,Protocol:TCP,},},},},}• Failure [21.311 seconds]
Virtual Machines
/root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:24
  Check the client
  /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:34
    When we re-apply a failed VM yaml
    /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:329
      should allow to assign to the VM the same MAC addresses, different name as requested before and do not return an error [It]
      /root/github.com/k8snetworkplumbingwg/kubemacpool/tests/virtual_machines_test.go:356

      Expected
          <bool>: false
      to equal
          <bool>: true

      /root/github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:38

Prevent duplicating a host MAC

Is there a protection to prevent a MAC address of an (nmstate-)managed interface of a physical host from being taken by a VM?

vmi start fails when both the pods and vms labels are opted in

What happened:
When trying to run a VM, the following error is returned:

failed to create virtual machine pod: admission webhook "mutatepods.kubemacpool.io" denied the request: failed to allocate requested mac address

What you expected to happen:
The mutatepods webhook should ignore this pod since it is controlled by KubeVirt (i.e. the MAC is already allocated to the VM instance). Running the VM should succeed.
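
A minimal sketch of how the pods webhook could detect and skip KubeVirt-owned launcher pods; the label key and ownerReference kind checked here are assumptions for illustration, not the actual kubemacpool logic:

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// isKubevirtLauncherPod reports whether the pod is a virt-launcher pod
// created by KubeVirt, in which case the pods webhook could skip MAC
// allocation and leave the VM-level allocation in place.
func isKubevirtLauncherPod(pod *corev1.Pod) bool {
	if pod.Labels["kubevirt.io"] == "virt-launcher" {
		return true
	}
	for _, owner := range pod.OwnerReferences {
		if owner.Kind == "VirtualMachineInstance" {
			return true
		}
	}
	return false
}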

How to reproduce it (as minimally and precisely as possible):

  1. apply a namespace with opt-in labels for vms and pods:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    mutatepods.kubemacpool.io: allocateForAll
    mutatevirtualmachines.kubemacpool.io: allocateForAll
  name: kmp-opt-in-ns

  2. apply this NAD:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: "kmp-opt-br"
  namespace: "kmp-opt-in-ns"
  annotations:
    k8s.v1.cni.cncf.io/resourceName: "bridge.network.kubevirt.io/kmp-opt-br"
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "kmp-opt-br",
    "plugins": [{
      "type": "cnv-bridge",
      "bridge": "kmp-opt-br"
    }]
}'
  3. apply this vm:
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  labels:
    special: kmp-opt-vm
  name: kmp-opt-vm
  namespace: kmp-opt-in-ns
spec:
  running: false
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-fedora
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
            - name: default
              masquerade: {}
            - name: kmp-opt-br
              bridge: {}
          rng: {}
        machine:
          type: ""
        resources:
          requests:
            memory: 1024M
      terminationGracePeriodSeconds: 0
      networks:
        - name: default
          pod: {}
        - multus:
            networkName: kmp-opt-br
          name: kmp-opt-br
      volumes:
      - containerDisk:
          image: quay.io/redhat/cnv-tests-fedora-staging:31
        name: containerdisk
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
        name: cloudinitdisk
  4. start the vm:
virtctl start kmp-opt-vm

Anything else we need to know?:
Note that this is not related to the opt-in functionality. On the contrary, the opt-in functionality gives us a workaround: by using only the vm label we can avoid this.

Environment:
KUBEVIRT_PROVIDER=k8s-1.17

[flaky CI] tier1 test failed on CNAO: context deadline exceeded

What happened:
The KMP tier1 lane test failed on CNAO: prow link
The error returned is:

          Unexpected error:
              <*errors.StatusError | 0xc0009e10e0>: {
                  ErrStatus: {
                      TypeMeta: {Kind: "", APIVersion: ""},
                      ListMeta: {
                          SelfLink: "",
                          ResourceVersion: "",
                          Continue: "",
                          RemainingItemCount: nil,
                      },
                      Status: "Failure",
                      Message: "Internal error occurred: failed calling webhook \"mutatevirtualmachines.kubemacpool.io\": Post \"https://kubemacpool-service.cluster-network-addons.svc:443/mutate-virtualmachines?timeout=10s\": context deadline exceeded",
                      Reason: "InternalError",
                      Details: {
                          Name: "",
                          Group: "",
                          Kind: "",
                          UID: "",
                          Causes: [
                              {
                                  Type: "",
                                  Message: "failed calling webhook \"mutatevirtualmachines.kubemacpool.io\": Post \"https://kubemacpool-service.cluster-network-addons.svc:443/mutate-virtualmachines?timeout=10s\": context deadline exceeded",
                                  Field: "",
                              },
                          ],
                          RetryAfterSeconds: 0,
                      },
                      Code: 500,
                  },
              }
              Internal error occurred: failed calling webhook "mutatevirtualmachines.kubemacpool.io": Post "https://kubemacpool-service.cluster-network-addons.svc:443/mutate-virtualmachines?timeout=10s": context deadline exceeded
          occurred
          /tmp/deploy.kubemacpool.pxq2/github.com/k8snetworkplumbingwg/kubemacpool/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:43

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

creating a pod when Kubemacpool is serving the namespace causes an error: "Error from server (InternalError): error when creating "STDIN": Internal error occurred"

What happened:
creating a pod fails when Kubemacpool is enabled on the namespace:

Error from server (InternalError): error when creating "STDIN": Internal error occurred: v1.Pod.ObjectMeta: v1.ObjectMeta.Annotations: ReadMapCB: expect { or n, but found ", error found in #10 byte of ...|tations":"[{\"name\"|..., bigger context ...|ion":"v1","kind":"Pod","metadata":{"annotations":"[{\"name\":\"ovs-conf\",\"namespace\":\"default\",|...

What you expected to happen:
pod successfully created
How to reproduce it (as minimally and precisely as possible):

make cluster-down cluster-up cluster-sync
./cluster/cli.sh ssh ${node} -- sudo yum install -y http://cbs.centos.org/kojifiles/packages/openvswitch/2.9.2/1.el7/x86_64/openvswitch-2.9.2-1.el7.x86_64.rpm http://cbs.centos.org/kojifiles/packages/openvswitch/2.9.2/1.el7/x86_64/openvswitch-devel-2.9.2-1.el7.x86_64.rpm http://cbs.centos.org/kojifiles/packages/dpdk/17.11/3.el7/x86_64/dpdk-17.11-3.el7.x86_64.rpm
./cluster/cli.sh ssh ${node} -- sudo systemctl daemon-reload
./cluster/cli.sh ssh ${node} -- sudo systemctl restart openvswitch
./cluster/cli.sh ssh node01 -- sudo ovs-vsctl add-br br1

./cluster/kubectl.sh label namespace default mutatepods.kubemacpool.io=allocate
cat <<EOF | ./cluster/kubectl.sh apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovs-conf
  annotations:
    k8s.v1.cni.cncf.io/resourceName: ovs-cni.network.kubevirt.io/br1
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "ovs-conf",
      "plugins" : [
        {
          "type": "ovs",
          "bridge": "br1",
          "vlan": 100
        },
        {
          "type": "tuning"
        }
      ]
    }'
EOF
cat <<EOF | ./cluster/kubectl.sh apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: '[{ "name": "ovs-conf"}]'
spec:
  containers:
  - name: samplepod
    image: quay.io/schseba/kubemacpool-test:latest
    imagePullPolicy: "IfNotPresent"
EOF

Anything else we need to know?:

Environment:

test-id:2164 fails in production configuration

What happened:
During the test "should not return an error because the MAC addresses of the old VMs should have been released", the controller reconciler sometimes doesn't add the finalizer (due to retryOnConflict) before the VM is deleted, causing the VM to be stuck in the cache and the test to fail.

logs simulating the issue: http://pastebin.test.redhat.com/870327
What you expected to happen:
For the finalizer to be updated before the VM is deleted.

How to reproduce it (as minimally and precisely as possible):

  1. change the wait-time arg to 600 in the KMP deployment
  2. run the test "should not return an error because the MAC addresses of the old VMs should have been released"

Anything else we need to know?:

Environment:

managed-by label of kubemacpool-mutator-ca secret is wrong

What happened:
The managed-by label value of self-deployed objects should not be inherited from the CNAO/HCO CR.

$ kubectl get secret kubemacpool-mutator-ca -n kubevirt-hyperconverged -o custom-columns="":.metadata.labels

map[
app.kubernetes.io/component:network
app.kubernetes.io/managed-by:hco-operator 
app.kubernetes.io/part-of:hyperconverged-cluster 
app.kubernetes.io/version:1.6.0
]

What you expected to happen:
It must be app.kubernetes.io/managed-by: kubemacpool

How to reproduce it (as minimally and precisely as possible):
deploy HCO/CNAO and take a look at the secret label.

Anything else we need to know?:
similar to issue opened on CNAO kubevirt/cluster-network-addons-operator#1051
Environment:

kubemacpool-mac-controller-manager going into CrashLoopBackOff when creating a new HCO deployment

What happened:
I was trying out the latest HCO build kubevirt/hyperconverged-cluster-index:1.5.0-unstable, but something seems to be wrong with the kubemacpool-mac-controller-manager. The pod doesn't come up and stays in CrashLoopBackOff.
What you expected to happen:
kubemacpool-mac-controller-manager should have come up and the KubeVirt HCO should be in the Succeeded state.
How to reproduce it (as minimally and precisely as possible):
Have OpenShift 4.8.0-fc.0 installed on 3 bare-metal nodes.
Follow these instructions to install unreleased builds (1.5.0-unstable): https://github.com/kubevirt/hyperconverged-cluster-operator#installing-unreleased-bundle-using-a-custom-catalog-source
Create the following:
  • CatalogSource
  • Namespace
  • OperatorGroup
  • Subscription
  • HCO CR: https://raw.githubusercontent.com/kubevirt/hyperconverged-cluster-operator/main/deploy/hco.cr.yaml
Anything else we need to know?:
The following logs from kubemacpool-mac-controller-manager are attached:

kubemacpool-cert-manager-6557fb8648-wj5cc-manager.log
kubemacpool-mac-controller-manager-7df76694f-tg9bb-manager.log

Environment:
Kubernetes version (use kubectl version):
Client Version: 4.7.7
Server Version: 4.8.0-fc.0
Kubernetes Version: v1.21.0-rc.0+fde4aa9

Hardware configuration:
3 baremetals
Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz
16/16 cores; 32 threads 256 GB memory
OS (e.g. from /etc/os-release):
cat /etc/os-release
NAME="Red Hat Enterprise Linux CoreOS"
VERSION="48.84.202104151145-0"
VERSION_ID="4.8"
OPENSHIFT_VERSION="4.8"
RHEL_VERSION="8.4"
PRETTY_NAME="Red Hat Enterprise Linux CoreOS 48.84.202104151145-0 (Ootpa)"
ID="rhcos"
ID_LIKE="rhel fedora"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::coreos"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="OpenShift Container Platform"
REDHAT_BUGZILLA_PRODUCT_VERSION="4.8"
REDHAT_SUPPORT_PRODUCT="OpenShift Container Platform"
REDHAT_SUPPORT_PRODUCT_VERSION="4.8"
OSTREE_VERSION='48.84.202104151145-0'
Kernel (e.g. uname -a):
Linux ********* 4.18.0-293.el8.x86_64 #1 SMP Mon Mar 1 10:04:09 EST 2021 x86_64 x86_64 x86_64 GNU/Linux

Remember there is no perfect failure detector

@MikeSpreitzer I moved the issue here for better visibility.

Leader election does not guarantee that only one instance thinks it is leader at a time. While normally at most one is active at a time, there are corner cases in which there can be multiple active at once. This is not a bug in leader election that we can expect to be fixed; it is a consequence of a fundamental problem in distributed systems.

The kubernetes API machinery supports a very limited repertoire of ACID transactions. One is to create an object if and only if there is not already an object of the same kind, namespace, and name. That means we can use an object as a lock on any sort of thing that can be named.

In https://github.com/MikeSpreitzer/kube-examples/tree/add-kos/staging/kos#the-ipam-controller you will see an example of using this to allocate IP addresses.
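
A minimal sketch of the create-as-lock pattern described above, using a ConfigMap as the lock object and a recent client-go; the names and the choice of ConfigMap are illustrative only:

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// tryLock claims ownership of a named resource by creating a ConfigMap of
// the same name. Create is atomic on the API server, so at most one caller
// succeeds; AlreadyExists means another instance holds the lock.
func tryLock(ctx context.Context, cs kubernetes.Interface, namespace, name, owner string) (bool, error) {
	lock := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Data:       map[string]string{"owner": owner},
	}
	_, err := cs.CoreV1().ConfigMaps(namespace).Create(ctx, lock, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}

Unlike leader election, this does not rely on timing: ownership only changes when the lock object is explicitly deleted.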

Error message too long and confusing

Reproduction:
Create VM-A based on the following yaml:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vma        
  namespace: myproject
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
            - name: br1
              macAddress: 02:54:00:6a:9a:15 
              bridge: {}
        resources:
          requests:
            memory: 1G      
        cpu:
          cores: 2
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
              runcmd:
                - sudo nmcli con mod 'Wired connection 1' ipv4.address 10.201.0.1/24
                - sudo nmcli con mod 'Wired connection 1' ipv4.method manual
                - sudo nmcli con up 'Wired connection 1'
      networks:
        - name: default
          pod: {}
        - multus:
            networkName: br1
          name: br1
      nodeSelector: 
         kubernetes.io/hostname: working-8v662-worker-0-rdjsp

Add a new interface with the same MAC address, using the yaml below:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vma        
  namespace: myproject
spec:
  running: false
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
            - name: br1
              macAddress: 02:54:00:6a:9a:15 
              bridge: {}
            - name: br2
              macAddress: 02:54:00:6a:9a:15
              bridge: {}
        resources:
          requests:
            memory: 1G      
        cpu:
          cores: 2
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo:latest
        - name: cloudinitdisk
          cloudInitNoCloud:
            userData: |-
              #cloud-config
              password: fedora
              chpasswd: { expire: False }
              runcmd:
                - sudo nmcli con mod 'Wired connection 1' ipv4.address 10.201.0.1/24
                - sudo nmcli con mod 'Wired connection 1' ipv4.method manual
                - sudo nmcli con up 'Wired connection 1'
      networks:
        - name: default
          pod: {}
        - multus:
            networkName: br1
          name: br1
        - multus:
            networkName: br2
          name: br1
      nodeSelector: 
         kubernetes.io/hostname: working-8v662-worker-0-rdjsp

Try to reapply this yaml
oc apply -f yaml
BUG: you will receive the following error:
Error from server (InternalError): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{"apiVersion":"kubevirt.io/v1alpha3","kind":"VirtualMachine","metadata":{"annotations":{},"name":"vma","namespace":"myproject"},"spec":{"running":false,"template":{"spec":{"domain":{"cpu":{"cores":2},"devices":{"disks":[{"disk":{"bus":"virtio"},"name":"containerdisk"},{"disk":{"bus":"virtio"},"name":"cloudinitdisk"}],"interfaces":[{"masquerade":{},"name":"default"},{"bridge":{},"macAddress":"02:54:00:6a:9a:15","name":"br1"},{"bridge":{},"macAddress":"02:54:00:6a:9a:15","name":"br2"}]},"resources":{"requests":{"memory":"1G"}}},"networks":[{"name":"default","pod":{}},{"multus":{"networkName":"br1"},"name":"br1"},{"multus":{"networkName":"br2"},"name":"br1"}],"nodeSelector":{"kubernetes.io/hostname":"working-8v662-worker-0-rdjsp"},"volumes":[{"containerDisk":{"image":"kubevirt/fedora-cloud-container-disk-demo:latest"},"name":"containerdisk"},{"cloudInitNoCloud":{"userData":"#cloud-config\npassword: fedora\nchpasswd: { expire: False }\nruncmd:\n - sudo nmcli con mod 'Wired connection 1' ipv4.address 10.201.0.1/24\n - sudo nmcli con mod 'Wired connection 1' ipv4.method manual\n - sudo nmcli con up 'Wired connection 1'"},"name":"cloudinitdisk"}]}}}}\n"}},"spec":{"template":{"spec":{"domain":{"devices":{"interfaces":[{"masquerade":{},"name":"default"},{"bridge":{},"macAddress":"02:54:00:6a:9a:15","name":"br1"},{"bridge":{},"macAddress":"02:54:00:6a:9a:15","name":"br2"}]}}}}}}
to:
Resource: "kubevirt.io/v1alpha3, Resource=virtualmachines", GroupVersionKind: "kubevirt.io/v1alpha3, Kind=VirtualMachine"
Name: "vma", Namespace: "myproject"
Object: &{map["apiVersion":"kubevirt.io/v1alpha3" "kind":"VirtualMachine" "metadata":map["resourceVersion":"5431264" "selfLink":"/apis/kubevirt.io/v1alpha3/namespaces/myproject/virtualmachines/vma" "uid":"116f95c0-9bdd-11e9-a14f-52fdfc072182" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":"{"apiVersion":"kubevirt.io/v1alpha3","kind":"VirtualMachine","metadata":{"annotations":{},"name":"vma","namespace":"myproject"},"spec":{"running":false,"template":{"spec":{"domain":{"cpu":{"cores":2},"devices":{"disks":[{"disk":{"bus":"virtio"},"name":"containerdisk"},{"disk":{"bus":"virtio"},"name":"cloudinitdisk"}],"interfaces":[{"masquerade":{},"name":"default"},{"bridge":{},"macAddress":"02:54:00:6a:9a:15","name":"br1"}]},"resources":{"requests":{"memory":"1G"}}},"networks":[{"name":"default","pod":{}},{"multus":{"networkName":"br1"},"name":"br1"},{"multus":{"networkName":"br2"},"name":"br1"}],"nodeSelector":{"kubernetes.io/hostname":"working-8v662-worker-0-rdjsp"},"volumes":[{"containerDisk":{"image":"kubevirt/fedora-cloud-container-disk-demo:latest"},"name":"containerdisk"},{"cloudInitNoCloud":{"userData":"#cloud-config\npassword: fedora\nchpasswd: { expire: False }\nruncmd:\n - sudo nmcli con mod 'Wired connection 1' ipv4.address 10.201.0.1/24\n - sudo nmcli con mod 'Wired connection 1' ipv4.method manual\n - sudo nmcli con up 'Wired connection 1'"},"name":"cloudinitdisk"}]}}}}\n"] "creationTimestamp":"2019-07-01T08:48:59Z" "generation":'\x01' "name":"vma" "namespace":"myproject"] "spec":map["running":%!q(bool=false) "template":map["spec":map["networks":[map["name":"default" "pod":map[]] map["multus":map["networkName":"br1"] "name":"br1"] map["multus":map["networkName":"br2"] "name":"br1"]] "nodeSelector":map["kubernetes.io/hostname":"working-8v662-worker-0-rdjsp"] "volumes":[map["containerDisk":map["image":"kubevirt/fedora-cloud-container-disk-demo:latest"] "name":"containerdisk"] map["cloudInitNoCloud":map["userData":"#cloud-config\npassword: fedora\nchpasswd: { expire: False }\nruncmd:\n - sudo nmcli con mod 'Wired connection 1' ipv4.address 10.201.0.1/24\n - sudo nmcli con mod 'Wired connection 1' ipv4.method manual\n - sudo nmcli con up 'Wired connection 1'"] "name":"cloudinitdisk"]] "domain":map["cpu":map["cores":'\x02'] "devices":map["disks":[map["disk":map["bus":"virtio"] "name":"containerdisk"] map["name":"cloudinitdisk" "disk":map["bus":"virtio"]]] "interfaces":[map["macAddress":"02:ff:fb:00:00:07" "masquerade":map[] "name":"default"] map["bridge":map[] "macAddress":"02:54:00:6a:9a:15" "name":"br1"]]] "resources":map["requests":map["memory":"1G"]]]]]]]}
for: "vm_a_temp.yaml": Internal error occurred: admission webhook "mutatevirtualmachines.example.com" denied the request: Failed to update virtual machine allocation error: failed to allocate requested mac address

Race between VM creation and vmWaitCleanup goroutine

What happened:
TL;DR: If we drop the Eventually from test 2633, it causes a race between the VM creation request and the vmWaitCleanup goroutine.

Test 2633 first creates a VM by applying a bad yaml (not a valid VM create request to kube-api), expecting it to fail, then fixes the yaml and applies it again.

If we do not use Eventually on the second Create request, the goroutine that runs vmWaitCleanup in the background (every 3 seconds) won't clear the requested MAC from the vmWaitConfigMap and the internal cache in time, causing the allocation to fail with a "MAC already allocated" error, and with it the test.

Logs for test 2633 https://paste.centos.org/view/0050a9a5
Focused on first test It https://paste.centos.org/view/abfaa692
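
For reference, a minimal sketch of the retry the test relies on; the client type, the gomega usage and the timings are assumptions for illustration:

package tests

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	kubevirtv1 "kubevirt.io/client-go/api/v1" // import path may differ by KubeVirt version
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// createVMEventually keeps retrying the Create until the background
// vmWaitCleanup goroutine has released the previously requested MAC.
func createVMEventually(c client.Client, vm *kubevirtv1.VirtualMachine) {
	Eventually(func() error {
		return c.Create(context.TODO(), vm)
	}, 40*time.Second, 5*time.Second).Should(Succeed())
}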

What you expected to happen:
Successfully create a VM by applying a bad yaml, then applying another yaml with the same MAC address.

How to reproduce it (as minimally and precisely as possible):
Run test 2633 without the two Eventually(func() error { ... }) wrappers around the Create calls.

OR
Apply this yaml (bad VM yaml) and then this yaml (fixed yaml) right away.
Example:
https://imgur.com/xK7ZIfw
Anything else we need to know?:

Environment:

Support for Unique dynamic mac address range generation per cluster - hashing with k8s cluster name or any UUID unique to k8s cluster

What happened:
Kubemacpool allows users to define a static MAC pool range. There is no capability to generate the MAC address range by hashing a unique ID such as the k8s cluster ID/name/UUID. Vendor-provided solutions like VMware hash the vSphere cluster ID to generate the MAC address range for VMs.

What you expected to happen:
The Kubemacpool operator should dynamically generate the MAC address pool range by hashing a unique ID/UUID/cluster name of the k8s cluster.
How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

It would be great to have an option to generate unique MAC address ranges for a k8s cluster by hashing a unique ID/name/UUID/domain name of the cluster.
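
A minimal sketch of how such a derivation could look, hashing a unique cluster identifier (for example the kube-system namespace UID) into a locally administered MAC prefix; this is an illustration, not an implemented kubemacpool feature:

package example

import (
	"crypto/sha256"
	"fmt"
)

// macRangeFromClusterID derives a per-cluster MAC range from a unique
// cluster identifier. The first octet is forced to be locally administered
// and unicast so the generated addresses never collide with vendor OUIs.
func macRangeFromClusterID(clusterID string) (start, end string) {
	sum := sha256.Sum256([]byte(clusterID))
	first := (sum[0] | 0x02) &^ 0x01 // set the local bit, clear the multicast bit
	prefix := fmt.Sprintf("%02x:%02x:%02x", first, sum[1], sum[2])
	return prefix + ":00:00:00", prefix + ":ff:ff:ff"
}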

Environment:
