
csi-driver's Introduction

HPE CSI Driver for Kubernetes

A Container Storage Interface (CSI) Driver for Kubernetes. The HPE CSI Driver for Kubernetes allows you to use a Container Storage Provider (CSP) to perform data management operations on storage resources.

Releases

The CSI driver is released with a set of container images published on Quay. These are in turn referenced from deployment manifests, Helm charts and Operators (see next section). Releases are rolled up on the HPE Storage Container Orchestrator Documentation (SCOD) portal.

Deploying and using the CSI driver on Kubernetes

All documentation for installing and using the CSI driver with an HPE storage backend (CSP) is available on SCOD.

Release vehicles include deployment manifests, Helm charts and Operators.

Source deployment manifests are available in hpe-storage/co-deployments.
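
As a quick orientation before heading to SCOD, consuming the driver on a cluster comes down to a StorageClass that references the csi.hpe.com provisioner and a Secret describing the storage backend. The following is only an illustrative sketch: the Secret name hpe-backend and the hpe-storage namespace are assumptions, and the authoritative parameter reference is on SCOD.

# Illustrative only; the Secret name and namespace are assumptions, see SCOD for supported parameters.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
provisioner: csi.hpe.com
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  description: "Volume provisioned by the HPE CSI Driver"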

Building

Instructions on how to build the HPE CSI Driver can be found in BUILDING.md.

Testing

Example Kubernetes object definitions used to build test cases for the CSI driver are available in examples.
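
For instance, a basic claim against a StorageClass served by the driver looks like the following sketch (the class name hpe-standard matches the illustrative example in the deployment section above and is not prescriptive):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-first-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hpe-standard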

Support

The HPE CSI Driver for Kubernetes 1.0.0 and onwards is fully supported by HPE and is Generally Available. Certain CSI features may be subject to alpha or beta status. Refer to the official table of feature gates in the Kubernetes documentation to check the availability of beta features.

Formal support statements for each HPE-supported CSP are available on SCOD. Use this facility for formal support of your HPE storage products, including the CSI driver.

Community

Please file any issues, questions or feature requests you may have here (do not use this facility for support inquiries of your HPE storage product; see SCOD for support). You may also join our Slack community to chat with HPE folks close to this project. We hang out in #NimbleStorage, #3par-primera, #hpe-cloud-volumes and #Kubernetes. Sign up at slack.hpedev.io and log in at hpedev.slack.com.

Contributing

We value all feedback and contributions. If you find any issues or want to contribute, please feel free to open an issue or file a PR. More details are in CONTRIBUTING.md.

License

This is open source software licensed under the Apache License 2.0. Please see LICENSE for details.

csi-driver's People

Contributors

adarshpratikhpe, anushay1916, ayush1123, bhagyashree-sarawate, c-snell, ckallianpur, datamattsson, dileepds, e4jet, imran-ansari, jlarriba, maddiegos, maxirus, megakid, pavansshanbhag, raunakkumar, rkumpf, shivamerla, sijeesh, sneharai4, suneeth51, suneethkumar-byadarahalli, vidhutsingh, wdurairaj

csi-driver's Issues

Unexpected argument 'multi_initiator'

I'm trying to get HPE CSI provisioning working but end up with the error Unexpected argument 'multi_initiator'.

Nimble: 5.0.7.300-644174-opt

csi-provisioner:

I0927 09:34:47.698020       1 controller.go:481] CreateVolumeRequest {Name:pvc-40a9a047-13e3-487f-981b-7af701f11cf1 CapacityRange:required_bytes:10737418240  VolumeCapabilities:[mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[accessProtocol:fc csi.storage.k8s.io/controller-publish-secret-name:nimble-secret csi.storage.k8s.io/controller-publish-secret-namespace:kube-system csi.storage.k8s.io/fstype:xfs csi.storage.k8s.io/node-publish-secret-name:nimble-secret csi.storage.k8s.io/node-publish-secret-namespace:kube-system csi.storage.k8s.io/node-stage-secret-name:nimble-secret csi.storage.k8s.io/node-stage-secret-namespace:kube-system csi.storage.k8s.io/provisioner-secret-name:nimble-secret csi.storage.k8s.io/provisioner-secret-namespace:kube-system description:Volume provisioned by the HPE CSI Driver limitIops:76800] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:<nil> XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0927 09:34:47.698214       1 event.go:255] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"testpvc1", UID:"40a9a047-13e3-487f-981b-7af701f11cf1", APIVersion:"v1", ResourceVersion:"26527162", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/testpvc1"
I0927 09:34:47.704997       1 connection.go:180] GRPC call: /csi.v1.Controller/CreateVolume
I0927 09:34:47.705033       1 connection.go:181] GRPC request: {"capacity_range":{"required_bytes":10737418240},"name":"pvc-40a9a047-13e3-487f-981b-7af701f11cf1","parameters":{"accessProtocol":"fc","description":"Volume provisioned by the HPE CSI Driver","limitIops":"76800"},"secrets":"***stripped***","volume_capabilities":[{"AccessType":{"Mount":{"fs_type":"xfs"}},"access_mode":{"mode":1}}]}
I0927 09:34:49.175415       1 connection.go:183] GRPC response: {}
I0927 09:34:49.176087       1 connection.go:184] GRPC error: rpc error: code = Internal desc = Failed to create volume pvc-40a9a047-13e3-487f-981b-7af701f11cf1 due to error: status code was 400 Bad Request for request: action=POST path=http://nimble-csp-svc:8080/containers/v1/volumes
I0927 09:34:49.176167       1 controller.go:979] Final error received, removing PVC 40a9a047-13e3-487f-981b-7af701f11cf1 from claims in progress
W0927 09:34:49.176186       1 controller.go:886] Retrying syncing claim "40a9a047-13e3-487f-981b-7af701f11cf1", failure 8

csp-service:

09:31:33 DEBUG [co.ni.hi.ut.ExecutionManager]] (executor-thread-1) executeTask-1569576693091: Waiting for result from command CreateVolume.
09:31:33 DEBUG [co.ni.hi.ut.ExecutionManager]] (executor-thread-1) executeTask-1569576693091: Processing result from command CreateVolume.
09:31:33 DEBUG [co.ni.hi.ut.ExecutionManager]] (executor-thread-1) executeTask-1569576693091: Added result from command CreateVolume to chain.results. Id: CreateVolume, Result: null.
09:31:33 ERROR [co.ni.hi.ut.ExecutionManager]] (executor-thread-1) executeTask-1569576693091: Command [id=CreateVolume, name=CreateVolume] has reported an exception.
09:31:33 DEBUG [co.ni.hi.ut.ExecutionManager]] (executor-thread-1) rollBack-1569576693091: called on 70317b2c-dd1b-42f5-94ac-bc7bf596c8c5
09:31:33 DEBUG [co.ni.hi.ut.ExecutionManager]] (executor-thread-1) rollBack-1569576693091: Looking for any commands that needed to stop executing.
09:31:34 ERROR [co.ni.hi.cs.se.er.GroupMgmtServerExceptionMapper]] (executor-thread-1) Bad request: com.nimblestorage.nos.groupmgmt.GroupMgmtServerException: HTTP 400 OK ([{ code: SM_http_bad_request,
 text: The request could not be understood by the server.,
 arguments: null}
, { code: SM_unexpected_arg,
 text: Unexpected argument 'multi_initiator'.,
 arguments: {arg=multi_initiator}}
])
	at com.nimblestorage.nos.groupmgmt.GroupMgmtClient.invoke(GroupMgmtClient.java:776)
	at com.nimblestorage.nos.groupmgmt.GroupMgmtClient.invoke(GroupMgmtClient.java:737)
	at com.nimblestorage.nos.groupmgmt.GroupMgmtClient.post(GroupMgmtClient.java:733)
	at com.nimblestorage.nos.groupmgmt.v1.objectset.VolumeObjectSet.createObject(VolumeObjectSet.java:261)
	at com.nimblestorage.hi.nos.group.VolumeService.createVolume(VolumeService.java:606)
	at com.nimblestorage.hi.nos.group.VolumeService.createVolume(VolumeService.java:596)
	at com.nimblestorage.hi.csp.v1.commands.CreateVolume.execute(CreateVolume.java:144)
	at com.nimblestorage.hi.utils.Command.call(Command.java:40)
	at com.nimblestorage.hi.utils.Command.call(Command.java:21)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
	at com.oracle.svm.core.thread.JavaThreads.threadStartRoutine(JavaThreads.java:460)
	at com.oracle.svm.core.posix.thread.PosixJavaThreads.pthreadStartRoutine(PosixJavaThreads.java:193)
Caused by: javax.ws.rs.BadRequestException: HTTP 400 OK
	at org.glassfish.jersey.client.JerseyInvocation.convertToException(JerseyInvocation.java:1053)
	at org.glassfish.jersey.client.JerseyInvocation.translate(JerseyInvocation.java:924)
	at org.glassfish.jersey.client.JerseyInvocation.lambda$invoke$2(JerseyInvocation.java:763)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
	at org.glassfish.jersey.internal.Errors.process(Errors.java:205)
	at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:390)
	at org.glassfish.jersey.client.JerseyInvocation.invoke(JerseyInvocation.java:761)
	at com.nimblestorage.nos.groupmgmt.GroupMgmtClient.invoke(GroupMgmtClient.java:744)
	... 16 more

storage class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"creationTimestamp":"2019-09-18T10:26:12Z","name":"generic-nimble-csi","resourceVersion":"24853915","selfLink":"/apis/storage.k8s.io/v1/storageclasses/generic-nimble-csi","uid":"edddbb4f-f1ca-4b18-ad4b-cdaa6f1fa52d"},"parameters":{"accessProtocol":"fc","csi.storage.k8s.io/controller-publish-secret-name":"nimble-secret","csi.storage.k8s.io/controller-publish-secret-namespace":"kube-system","csi.storage.k8s.io/fstype":"xfs","csi.storage.k8s.io/node-publish-secret-name":"nimble-secret","csi.storage.k8s.io/node-publish-secret-namespace":"kube-system","csi.storage.k8s.io/node-stage-secret-name":"nimble-secret","csi.storage.k8s.io/node-stage-secret-namespace":"kube-system","csi.storage.k8s.io/provisioner-secret-name":"nimble-secret","csi.storage.k8s.io/provisioner-secret-namespace":"kube-system","description":"Volume provisioned by the HPE CSI Driver","limitIops":"76800"},"provisioner":"csi.hpe.com","reclaimPolicy":"Delete","volumeBindingMode":"Immediate"}
  creationTimestamp: "2019-09-27T09:29:50Z"
  name: generic-nimble-csi
  resourceVersion: "26527081"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/generic-nimble-csi
  uid: 3c846177-6b9d-4642-8cb8-d22dd89c861c
parameters:
  accessProtocol: fc
  csi.storage.k8s.io/controller-publish-secret-name: nimble-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: kube-system
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/node-publish-secret-name: nimble-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: nimble-secret
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/provisioner-secret-name: nimble-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  description: Volume provisioned by the HPE CSI Driver
  limitIops: "76800"
provisioner: csi.hpe.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

CSI Primera "is already mounted" error

Hi,

We are having issues running the hpe-csi-driver for Primera with FC and multipath enabled.
We are running a Kubernetes cluster v1.20.2 on bare-metal infrastructure and we are trying to create volumes on a Primera FC array running 4.3.x.
We have followed the CSI driver 2.0.0 deployment guide https://scod.hpedev.io/csi_driver/deployment.html and chose Helm as the installation method. Currently the driver is able to create pvc- virtual volumes and export the volumes to hosts. The PVC attachment to pods fails with the following error:
Warning FailedMount 72s kubelet MountVolume.MountDevice failed for volume "pvc-6877504c-4a3b-4c40-ab22-3ffc0751c29d" : rpc error: code = Internal desc = Failed to stage volume pvc-6877504c-4a3b-4c40-ab22-3ffc0751c29d, err: failed to mount device /dev/mapper/mpathh, Error mounting device /dev/mapper/mpathh on mountpoint /var/lib/kubelet/plugins/hpe.com/mounts/pvc-6877504c-4a3b-4c40-ab22-3ffc0751c29d with options [], unable to mount the device at mountPoint : /var/lib/kubelet/plugins/hpe.com/mounts/pvc-6877504c-4a3b-4c40-ab22-3ffc0751c29d. Error: device /dev/mapper/mpathh is not mounted at /var/lib/kubelet/plugins/hpe.com/mounts/pvc-6877504c-4a3b-4c40-ab22-3ffc0751c29d
Warning FailedMount 33s (x6 over 70s) kubelet MountVolume.MountDevice failed for volume "pvc-6877504c-4a3b-4c40-ab22-3ffc0751c29d" : rpc error: code = Internal desc = Failed to stage volume pvc-6877504c-4a3b-4c40-ab22-3ffc0751c29d, err: failed to mount device /dev/mapper/mpathh, Error mounting device /dev/mapper/mpathh on mountpoint /var/lib/kubelet/plugins/hpe.com/mounts/pvc-6877504c-4a3b-4c40-ab22-3ffc0751c29d with options [], unable to mount the device at mountPoint : /var/lib/kubelet/plugins/hpe.com/mounts/pvc-6877504c-4a3b-4c40-ab22-3ffc0751c29d. Error: command mount failed with rc=32 err=mount: /dev/mapper/mpathh is already mounted or /var/lib/docker/kubelet/plugins/hpe.com/mounts/pvc-6877504c-4a3b-4c40-ab22-3ffc0751c29d busy /dev/mapper/mpathh is already mounted on /var/lib/docker/kubelet/plugins/hpe.com/mounts/pvc-6877504c-4a3b-4c40-ab22-3ffc0751c29d
I can see the volume mounted correctly behind multipath config:

[root@hpecp12 ~]# lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                         8:0    0  500G  0 disk
├─sda1                      8:1    0    1G  0 part  /boot
└─sda2                      8:2    0  499G  0 part
  ├─centos-root           253:0    0   50G  0 lvm   /
  ├─centos-swap           253:1    0    4G  0 lvm   [SWAP]
  ├─centos-srv            253:2    0  100G  0 lvm   /srv
  ├─centos-opt            253:3    0  100G  0 lvm   /opt
  └─centos-var            253:4    0  150G  0 lvm   /var
sdb                         8:16   0    2G  0 disk
└─mpathh                  253:6    0    2G  0 mpath /var/lib/docker/kubelet/plugins/hpe.com/mounts/pvc-6877504c-4a3b-4c40-ab22-3f
sdc                         8:32   0  1.2T  0 disk
sdd                         8:48   0    2G  0 disk
└─mpathh                  253:6    0    2G  0 mpath /var/lib/docker/kubelet/plugins/hpe.com/mounts/pvc-6877504c-4a3b-4c40-ab22-3f
sde                         8:64   0  1.2T  0 disk
└─sde1                      8:65   0  1.2T  0 part
  └─VolBDSCStore-thinpool 253:5    0  1.2T  0 lvm   /var/lib/docker
sdf                         8:80   0    2G  0 disk
└─mpathh                  253:6    0    2G  0 mpath /var/lib/docker/kubelet/plugins/hpe.com/mounts/pvc-6877504c-4a3b-4c40-ab22-3f
sdg                         8:96   0    2G  0 disk
└─mpathh                  253:6    0    2G  0 mpath /var/lib/docker/kubelet/plugins/hpe.com/mounts/pvc-6877504c-4a3b-4c40-ab22-3f
loop0                       7:0    0  1.2T  0 loop
loop1                       7:1    0  1.2T  0 loop
loop3                       7:3    0  1.2T  0 loop

Any thoughts?

KR

HPECSIDriver Conditions: Initialized, ReleaseFailed - failed to install release: clusterroles.rbac.authorization.k8s.io ...

Operator v1.4
OCP 4.6.9

Pre-created the SCC per the install documentation before installing the CSI driver:

cat hpe-csi-scc.yaml

kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: hpe-csi-scc
allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities:
- '*'
defaultAddCapabilities: []
fsGroup:
  type: RunAsAny
groups: []
priority:
readOnlyRootFilesystem: false
requiredDropCapabilities: []
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:hpe-csi-driver:hpe-csi-controller-sa
- system:serviceaccount:hpe-csi-driver:hpe-csi-node-sa
- system:serviceaccount:hpe-csi-driver:hpe-csp-sa
- system:serviceaccount:hpe-csi-driver:hpe-csi-operator-sa
- system:serviceaccount:hpe-nfs:hpe-csi-nfs-sa
volumes:
- '*'

oc create -f hpe-csi-scc.yaml
securitycontextconstraints.security.openshift.io/hpe-csi-scc created

Errors when deploying/creating the HPECSIDriver instance:

status:
  conditions:
  - lastTransitionTime: '2021-01-28T14:48:31Z'
    status: 'True'
    type: Initialized
  - lastTransitionTime: '2021-01-28T14:48:35Z'
    message: >-
      failed to install release: clusterroles.rbac.authorization.k8s.io
      "hpe-csi-volumegroup-role" is forbidden: user
      "system:serviceaccount:hpe-csi-driver:hpe-csi-operator-sa"
      (groups=["system:serviceaccounts"
      "system:serviceaccounts:hpe-csi-driver" "system:authenticated"]) is
      attempting to grant RBAC permissions not currently held:

      {APIGroups:["storage.hpe.com"], Resources:["volumegroupclasses"],
      Verbs:["patch"]}

      {APIGroups:["storage.hpe.com"], Resources:["volumegroupcontents"],
      Verbs:["patch"]}

      {APIGroups:["storage.hpe.com"],
      Resources:["volumegroupcontents/status"], Verbs:["patch"]}

      {APIGroups:["storage.hpe.com"], Resources:["volumegroups"],
      Verbs:["patch"]}

      {APIGroups:["storage.hpe.com"], Resources:["volumegroups/status"],
      Verbs:["patch"]}
    reason: InstallError
    status: 'True'
    type: ReleaseFailed

Is this a known issue? I don't recall running into it with OCP v4.6 and HPE CSI Driver v1.3; however, trying to set this up with v1.4 I'm running into the above.

Thanks
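
For reference, the permissions the error reports as "not currently held" map onto a ClusterRole along these lines. This is only the message transcribed into manifest form, reusing the role name from the error; whether granting these rules to the operator service account is the supported fix is not established here.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hpe-csi-volumegroup-role   # role name taken from the error message
rules:
- apiGroups: ["storage.hpe.com"]
  resources:
  - volumegroupclasses
  - volumegroupcontents
  - volumegroupcontents/status
  - volumegroups
  - volumegroups/status
  verbs: ["patch"]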

Create clone from snapshot or clone from PVC fails with "Body length 0" error

Hi all,

I am using the HPE CSI driver 2.0.0 with Primera FC and Kubernetes 1.20.2.
After enabling CSI snapshots

git clone https://github.com/kubernetes-csi/external-snapshotter
cd external-snapshotter

# Kubernetes 1.20 and newer
git checkout tags/v4.0.0 -b release-4.0

and creating a VolumeSnapshotClass

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: hpe-snapshot
  annotations:
    snapshot.storage.kubernetes.io/is-default-class: "true"
driver: csi.hpe.com
deletionPolicy: Delete
parameters:
  description: "Snapshot created by the HPE CSI Driver"
  csi.storage.k8s.io/snapshotter-secret-name: hpe-backend
  csi.storage.k8s.io/snapshotter-secret-namespace: hpe-storage
  csi.storage.k8s.io/snapshotter-list-secret-name: hpe-backend
  csi.storage.k8s.io/snapshotter-list-secret-namespace: hpe-storage

I am able to create a volume snapshot correctly:

# k get pvc my-first-pvc1
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-first-pvc1   Bound    pvc-760bdebc-a625-4dd6-86f4-de7bce1af1c4   2Gi        RWO            hpe-primera    42h

# k get volumesnapshot
NAME           READYTOUSE   SOURCEPVC       SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                    CREATIONTIME   AGE
my-snapshot2   true         my-first-pvc1                           2Gi           hpe-snapshot    snapcontent-70a455b2-eba8-4400-8a95-de201ddecc3c   11s            12s
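
(For reference, the snapshot listed above corresponds to a VolumeSnapshot object along the following lines; this is a sketch reusing the names shown, not necessarily the exact manifest that was applied.)

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: my-snapshot2
spec:
  volumeSnapshotClassName: hpe-snapshot
  source:
    persistentVolumeClaimName: my-first-pvc1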

but when creating a PVC from a snapshot or a PVC from another PVC

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-from-snapshot
spec:
  dataSource:
    name: my-snapshot2
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: hpe-primera

I get this error

# k describe pvc my-pvc-from-snapshot
Name:          my-pvc-from-snapshot
Namespace:     default
StorageClass:  hpe-primera
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: csi.hpe.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
DataSource:
  APIGroup:  snapshot.storage.k8s.io
  Kind:      VolumeSnapshot
  Name:      my-snapshot2
Used By:     <none>
Events:
  Type     Reason                Age                From                                                            Message
  ----     ------                ----               ----                                                            -------
  Normal   Provisioning          24s (x6 over 86s)  csi.hpe.com_hpecp14.local_ae153ec4-f172-48bf-904d-e606c15ca7aa  External provisioner is provisioning volume for claim "default/my-pvc-from-snapshot"
  Warning  ProvisioningFailed    18s (x6 over 80s)  csi.hpe.com_hpecp14.local_ae153ec4-f172-48bf-904d-e606c15ca7aa  failed to provision volume with StorageClass "hpe-primera": rpc error: code = Internal desc = Failed to clone-create volume pvc-b894308c-aa0d-4656-b659-ea80809a8833, Post http://primera3par-csp-svc:8080/containers/v1/volumes: http: ContentLength=448 with Body length 0
  Normal   ExternalProvisioning  3s (x7 over 86s)   persistentvolume-controller                                     waiting for a volume to be created, either by external provisioner "csi.hpe.com" or manually created by system administrator

Sounds like the driver's waiting for somebody else to create an empty destination volume?
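
For the "clone from PVC" case mentioned in the title, the request uses a PersistentVolumeClaim dataSource instead of a VolumeSnapshot; a sketch, with my-pvc-clone as a made-up name and the source claim taken from above:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-clone   # illustrative name
spec:
  dataSource:
    name: my-first-pvc1
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: hpe-primera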

iSCSI session remains on old node when pod migrates during outage

We have run a scenario where we do the following

  1. Reboot node 1 where pod that uses a volume runs

  2. While node 1 is down delete the pod

  3. Pod gets rescheduled to another node but won't come up due to "Multi-Attach error for volume".
    This is because the volumeattachment resource still exists

  4. Once the node comes back up, the volumeattachment gets cleaned up, but the iSCSI session still stays on the old node so the pod can't come up

Expected behavior:

  1. On node failure, a pod should be able to move quickly
  2. On node reboot, the iSCSI session should be cleaned up if a volumeattachment no longer exists for the node

NFS Server Provisioner deployment fails

I have created a new NFS StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-nfs-mtc
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/controller-expand-secret-name: mtc-3par-02-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: mtc-3par-02-secret
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: mtc-3par-02-secret
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: mtc-3par-02-secret
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: mtc-3par-02-secret
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  description: "NFS volume created by the HPE CSI Driver for Kubernetes"
  accessProtocol: iscsi
  csi.storage.k8s.io/fstype: xfs
  nfsResources: "true"
  allowOverrides: nfsNamespace
  cpg: K8S_LAB
  iscsiPortalIps: x.x.x.x, y.y.y.y
reclaimPolicy: Delete
allowVolumeExpansion: true

After that I created a basic PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-rwx-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: hpe-nfs-mtc
  resources:
    requests:
      storage: 2Gi

This PVC gets stuck in status "Pending":

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-rwx-pvc Pending hpe-nfs-mtc 14m

Lots of errors on this PVC:

kubectl describe pvc my-rwx-pvc
Name: my-rwx-pvc
Namespace: default
StorageClass: hpe-nfs-mtc
Status: Pending
Volume:
Labels:
Annotations: volume.beta.kubernetes.io/storage-provisioner: csi.hpe.com
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By:
Events:
Type Reason Age From Message


Normal Provisioning 69s (x11 over 15m) csi.hpe.com_tsrv-dockeree-6.int.comhem.com_d6a2aa9d-fb4a-45e6-b16f-67971e2c6d34 External provisioner is provisioning volume for claim "default/my-rwx-pvc"
Warning ProvisionStorage 64s (x11 over 15m) csi.hpe.com failed to create nfs deployment hpe-nfs-91862b16-67db-49d3-9123-116eb4b01792, err deployments.apps "hpe-nfs-91862b16-67db-49d3-9123-116eb4b01792" is forbidden: non-admin user "hpe-storage:hpe-csi-controller-sa" [service account "hpe-nfs:hpe-csi-nfs-sa"]. The configured privileged attributes access for non-admin users ("[]")("[]") and for service accounts ("[]")("[]") lack required permissions to use attributes [kernelcapabilities privileged] for resource hpe-nfs-91862b16-67db-49d3-9123-116eb4b01792
Warning ProvisioningFailed 64s (x11 over 15m) csi.hpe.com_tsrv-dockeree-6.int.comhem.com_d6a2aa9d-fb4a-45e6-b16f-67971e2c6d34 failed to provision volume with StorageClass "hpe-nfs-mtc": rpc error: code = Internal desc = Failed to create NFS provisioned volume pvc-91862b16-67db-49d3-9123-116eb4b01792, err failed to create nfs deployment hpe-nfs-91862b16-67db-49d3-9123-116eb4b01792, err deployments.apps "hpe-nfs-91862b16-67db-49d3-9123-116eb4b01792" is forbidden: non-admin user "hpe-storage:hpe-csi-controller-sa" [service account "hpe-nfs:hpe-csi-nfs-sa"]. The configured privileged attributes access for non-admin users ("[]")("[]") and for service accounts ("[]")("[]") lack required permissions to use attributes [kernelcapabilities privileged] for resource hpe-nfs-91862b16-67db-49d3-9123-116eb4b01792, rollback status: success
Normal ExternalProvisioning 26s (x63 over 15m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "csi.hpe.com" or manually created by system administrator

I have tried with both 2.0.0 and 1.4.0; same error on both.

Expanding a Peer Persistence PVC fails with HPE CSI Driver 2.2.0

As mentioned in the title, I found that PVCs cannot be expanded directly with the HPE CSI Driver when volumes are created using Remote Copy. We have HPE CSI Driver 2.2.0 installed in Kubernetes 1.21. Our storage consists of two Alletra 9060 arrays configured for Remote Copy in 2DC Peer Persistence mode. When using "kubectl patch" or "kubectl edit" to expand a PVC, errors like the following are reported from volume_expand and the external-resizer for csi.hpe.com:
"Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC."
"resize volume "pvc-6eeedcac-4b0c-4ec9-9848-1d6fbe498eeb" by resizer "csi.hpe.com" failed: rpc error: code = Internal desc = Failed to expand volume to requested size, Request failed with status code 400 and errors Error code (Bad Request) and message (HTTP error response from backend {"code":29,"desc":"Error: Cannot grow local volume\n VV pvc-6eeedcac-4b0c-4ec9-9848-1d6 is currently in a started remote copy group. The group must be stopped before the volume's user space can be grown."}])"
According to these errors, I need to stop the remote copy group first; then I can expand the PVC successfully, and afterwards I need to start the remote copy group again manually, which is really inconvenient.
So, will HPE provide a way to solve the Peer Persistence volume expansion problem in the future? Is there a method to expand a Peer Persistence PVC gracefully?
After all, not all customers can use the storage command line to finish these tasks.
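
(For reference, "expanding a PVC" here simply means raising spec.resources.requests.storage on the claim, for example with kubectl edit or kubectl patch; a minimal sketch with assumed names:)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pp-pvc                          # assumed name, for illustration only
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: hpe-peer-persistence   # assumed StorageClass name
  resources:
    requests:
      storage: 20Gi                        # raised from the original request to trigger expansion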

Shorter names for VVs on 3PAR

Hi all!

On 3PAR there is a limit on the number of characters in a VV name: 31 characters.

[Image: excerpt from the "HP 3PAR Command Line Interface Reference" showing the 31-character VV name limit]

When the driver uses all 31 characters for the VV name, it becomes impossible to create snapshots, because they need a different name. The most popular naming pattern for snapshots is @vvname@ + timestamp, but that breaks the limit.

Is it somehow possible to decrease the number of characters used in the VV name?

MountVolume.MountDevice failed for volume

I followed the steps from this link:

https://scod.hpedev.io/learn/persistent_storage/index.html#installing_the_helm_chart

All was fine: the PVC got created in our namespace and it is bound, and I can also see the PVC created on Nimble Storage.
However, whenever I install any pod like nginx, it gets stuck and I get the below error when launching the pod:

Successfully assigned hpe-storage/nginx to alssworker1.areeba.com
AttachVolume.Attach succeeded for volume "pvc-5118ca53-c33c-499a-a439-e480bc4815aa"
unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[config kube-api-access-85dnm]: timed out waiting for the condition

MountVolume.MountDevice failed for volume "pvc-5118ca53-c33c-499a-a439-e480bc4815aa" : rpc error: code = Internal desc = Failed to stage volume 0656c0fe884312fd06000000000000000000000025, err: rpc error: code = Internal desc = Error creating device for volume 0656c0fe884312fd06000000000000000000000025, err: device not found with serial 6c8379f43c4dcc6a6c9ce9008e03d215 or target

extremely slow provision rate

Hi,

My CSI driver pods are deployed successfully:

$ kubectl -n kube-system get pods -A | grep -e 'cs[ip]'
kube-system   primera3par-csp-6c65cbfcc-mt5p6             1/1     Running    0          4h45m
kube-system   hpe-csi-controller-64d868c595-sf7k4         5/5     Running    0          4h45m
kube-system   hpe-csi-node-bm4kp                          2/2     Running    0          4h45m

storageclass

$ kubectl get storageclass
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
3par-sc                csi.hpe.com             Delete          Immediate              false                  5d21h

When I create one PVC using this storage class, then mount the PVC onto a pod, it works.
But when I create two or more PVCs in the same way and mount them, it doesn't work.

pvc & pv bound successfully

here are some logs:

pod events :

Events:
  Type     Reason              Age                    From                     Message
  ----     ------              ----                   ----                     -------
  Warning  FailedMount         19m (x5 over 85m)      kubelet, fcmaster        Unable to attach or mount volumes: unmounted volumes=[pvc-3par-sc-multi-pv2-0-d1 pvc-3par-sc-multi-pv2-0-d0], unattached volumes=[pvc-3par-sc-multi-pv2-0-d1 pvc-3par-sc-multi-pv2-0-d0 host-dev vm-images default-token-w8q9x]: timed out waiting for the condition
  Warning  FailedMount         15m (x8 over 97m)      kubelet, fcmaster        Unable to attach or mount volumes: unmounted volumes=[pvc-3par-sc-multi-pv2-0-d1 pvc-3par-sc-multi-pv2-0-d0], unattached volumes=[host-dev vm-images default-token-w8q9x pvc-3par-sc-multi-pv2-0-d1 pvc-3par-sc-multi-pv2-0-d0]: timed out waiting for the condition
  Warning  FailedAttachVolume  9m30s (x52 over 101m)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-6950d4e3-0eaf-4b14-905e-2129f7266371" : rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-6950d4e3-0eaf-4b14-905e-2129f7266371:288023a9-6ad0-6663-6d61-737465720000
  Warning  FailedMount         8m44s (x18 over 99m)   kubelet, fcmaster        Unable to attach or mount volumes: unmounted volumes=[pvc-3par-sc-multi-pv2-0-d1 pvc-3par-sc-multi-pv2-0-d0], unattached volumes=[vm-images default-token-w8q9x pvc-3par-sc-multi-pv2-0-d1 pvc-3par-sc-multi-pv2-0-d0 host-dev]: timed out waiting for the condition
  Warning  FailedAttachVolume  5m26s (x54 over 101m)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-5781d798-06c2-4b91-baf6-bd10785f37d7" : rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-5781d798-06c2-4b91-baf6-bd10785f37d7:288023a9-6ad0-6663-6d61-737465720000
  Warning  FailedMount         4m10s (x4 over 53m)    kubelet, fcmaster        Unable to attach or mount volumes: unmounted volumes=[pvc-3par-sc-multi-pv2-0-d0 pvc-3par-sc-multi-pv2-0-d1], unattached volumes=[pvc-3par-sc-multi-pv2-0-d0 host-dev vm-images default-token-w8q9x pvc-3par-sc-multi-pv2-0-d1]: timed out waiting for the condition
  Warning  FailedMount         112s (x9 over 94m)     kubelet, fcmaster        Unable to attach or mount volumes: unmounted volumes=[pvc-3par-sc-multi-pv2-0-d1 pvc-3par-sc-multi-pv2-0-d0], unattached volumes=[default-token-w8q9x pvc-3par-sc-multi-pv2-0-d1 pvc-3par-sc-multi-pv2-0-d0 host-dev vm-images]: timed out waiting for the condition

hpe-csi-driver logs:

time="2020-09-03T06:46:41Z" level=error msg="Failed to publish volume pvc-5781d798-06c2-4b91-baf6-bd10785f37d7, err: Put http://primera3par-csp-svc:8080/containers/v1/volumes/pvc-5781d798-06c2-4b91-baf6-bd10785f37d7/actions/publish: http: ContentLength=79 with Body length 0" file="controller_server.go:850"
time="2020-09-03T06:46:41Z" level=error msg="Error controller publishing volume pvc-5781d798-06c2-4b91-baf6-bd10785f37d7, err: rpc error: code = Internal desc = Failed to add ACL to volume pvc-5781d798-06c2-4b91-baf6-bd10785f37d7 for node &{ fcmaster 288023a9-6ad0-6663-6d61-737465720000 [0xc0006cc360] [0xc000232d00 0xc000232d10 0xc000232d20 0xc000232d30 0xc000232d40] [0xc00082b640 0xc00082b650]  } via CSP, err: Put http://primera3par-csp-svc:8080/containers/v1/volumes/pvc-5781d798-06c2-4b91-baf6-bd10785f37d7/actions/publish: http: ContentLength=79 with Body length 0" file="controller_server.go:709"
time="2020-09-03T06:46:41Z" level=error msg="GRPC error: rpc error: code = Internal desc = Failed to add ACL to volume pvc-5781d798-06c2-4b91-baf6-bd10785f37d7 for node &{ fcmaster 288023a9-6ad0-6663-6d61-737465720000 [0xc0006cc360] [0xc000232d00 0xc000232d10 0xc000232d20 0xc000232d30 0xc000232d40] [0xc00082b640 0xc00082b650]  } via CSP, err: Put http://primera3par-csp-svc:8080/containers/v1/volumes/pvc-5781d798-06c2-4b91-baf6-bd10785f37d7/actions/publish: http: ContentLength=79 with Body length 0" file="utils.go:73"
time="2020-09-03T06:46:42Z" level=error msg="Failed to publish volume pvc-6950d4e3-0eaf-4b14-905e-2129f7266371, err: Put http://primera3par-csp-svc:8080/containers/v1/volumes/pvc-6950d4e3-0eaf-4b14-905e-2129f7266371/actions/publish: http: ContentLength=79 with Body length 0" file="controller_server.go:850"
time="2020-09-03T06:46:42Z" level=error msg="Error controller publishing volume pvc-6950d4e3-0eaf-4b14-905e-2129f7266371, err: rpc error: code = Internal desc = Failed to add ACL to volume pvc-6950d4e3-0eaf-4b14-905e-2129f7266371 for node &{ fcmaster 288023a9-6ad0-6663-6d61-737465720000 [0xc0008743b0] [0xc0003a7b80 0xc0003a7b90 0xc0003a7ba0 0xc0003a7bb0 0xc0003a7bc0] [0xc0004348a0 0xc0004348b0]  } via CSP, err: Put http://primera3par-csp-svc:8080/containers/v1/volumes/pvc-6950d4e3-0eaf-4b14-905e-2129f7266371/actions/publish: http: ContentLength=79 with Body length 0" file="controller_server.go:709"
time="2020-09-03T06:46:42Z" level=error msg="GRPC error: rpc error: code = Internal desc = Failed to add ACL to volume pvc-6950d4e3-0eaf-4b14-905e-2129f7266371 for node &{ fcmaster 288023a9-6ad0-6663-6d61-737465720000 [0xc0008743b0] [0xc0003a7b80 0xc0003a7b90 0xc0003a7ba0 0xc0003a7bb0 0xc0003a7bc0] [0xc0004348a0 0xc0004348b0]  } via CSP, err: Put http://primera3par-csp-svc:8080/containers/v1/volumes/pvc-6950d4e3-0eaf-4b14-905e-2129f7266371/actions/publish: http: ContentLength=79 with Body length 0" file="utils.go:73"
time="2020-09-03T06:47:13Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:13Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:13Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:13Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:13Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-a33c358b-fbc5-4297-9871-406dd8d9ba32\"}" file="utils.go:70"
time="2020-09-03T06:47:13Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-5d24afc3-c284-415b-89fa-fed1bdfabfb7\"}" file="utils.go:70"
time="2020-09-03T06:47:13Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-e7f3cb9d-e1e7-482c-bf7b-0d6dd898f86d\"}" file="utils.go:70"
time="2020-09-03T06:47:13Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-c53ea3a2-2df4-41ff-97f5-f00ff27b2ebf\"}" file="utils.go:70"
time="2020-09-03T06:47:13Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:13Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-5781d798-06c2-4b91-baf6-bd10785f37d7\"}" file="utils.go:70"
time="2020-09-03T06:47:13Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:13Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:28Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:28Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:28Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-a33c358b-fbc5-4297-9871-406dd8d9ba32\"}" file="utils.go:70"
time="2020-09-03T06:47:28Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-a33c358b-fbc5-4297-9871-406dd8d9ba32:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:28Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-5d24afc3-c284-415b-89fa-fed1bdfabfb7\"}" file="utils.go:70"
time="2020-09-03T06:47:28Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-5d24afc3-c284-415b-89fa-fed1bdfabfb7:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:28Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:28Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:28Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-c53ea3a2-2df4-41ff-97f5-f00ff27b2ebf\"}" file="utils.go:70"
time="2020-09-03T06:47:28Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-c53ea3a2-2df4-41ff-97f5-f00ff27b2ebf:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:28Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-e7f3cb9d-e1e7-482c-bf7b-0d6dd898f86d\"}" file="utils.go:70"
time="2020-09-03T06:47:28Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-e7f3cb9d-e1e7-482c-bf7b-0d6dd898f86d:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:28Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:28Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-5781d798-06c2-4b91-baf6-bd10785f37d7\"}" file="utils.go:70"
time="2020-09-03T06:47:28Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-5781d798-06c2-4b91-baf6-bd10785f37d7:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:28Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:28Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:28Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-22c7cf40-4338-4fe9-aec1-9ebcf8d13746\"}" file="utils.go:70"
time="2020-09-03T06:47:28Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-22c7cf40-4338-4fe9-aec1-9ebcf8d13746:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:28Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:28Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-6a1b99c7-e0c3-44a5-bbbe-f50959638183\"}" file="utils.go:70"
time="2020-09-03T06:47:28Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-6a1b99c7-e0c3-44a5-bbbe-f50959638183:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:28Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-f02ca42f-1aac-4f0f-b33f-11d36e3f5bfd\"}" file="utils.go:70"
time="2020-09-03T06:47:28Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-f02ca42f-1aac-4f0f-b33f-11d36e3f5bfd:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:28Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:28Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:28Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-6950d4e3-0eaf-4b14-905e-2129f7266371\"}" file="utils.go:70"
time="2020-09-03T06:47:28Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-6950d4e3-0eaf-4b14-905e-2129f7266371:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:28Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-0be8d11a-19a9-405f-99a7-80313ee38173\"}" file="utils.go:70"
time="2020-09-03T06:47:28Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-0be8d11a-19a9-405f-99a7-80313ee38173:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:29Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:29Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-e7f3cb9d-e1e7-482c-bf7b-0d6dd898f86d\"}" file="utils.go:70"
time="2020-09-03T06:47:29Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-e7f3cb9d-e1e7-482c-bf7b-0d6dd898f86d:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:29Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:29Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-5781d798-06c2-4b91-baf6-bd10785f37d7\"}" file="utils.go:70"
time="2020-09-03T06:47:29Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-5781d798-06c2-4b91-baf6-bd10785f37d7:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:29Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:29Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-22c7cf40-4338-4fe9-aec1-9ebcf8d13746\"}" file="utils.go:70"
time="2020-09-03T06:47:29Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-22c7cf40-4338-4fe9-aec1-9ebcf8d13746:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:29Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:29Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-6a1b99c7-e0c3-44a5-bbbe-f50959638183\"}" file="utils.go:70"
time="2020-09-03T06:47:29Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-6a1b99c7-e0c3-44a5-bbbe-f50959638183:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:29Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:29Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-f02ca42f-1aac-4f0f-b33f-11d36e3f5bfd\"}" file="utils.go:70"
time="2020-09-03T06:47:29Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-f02ca42f-1aac-4f0f-b33f-11d36e3f5bfd:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:30Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:30Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-6950d4e3-0eaf-4b14-905e-2129f7266371\"}" file="utils.go:70"
time="2020-09-03T06:47:30Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-6950d4e3-0eaf-4b14-905e-2129f7266371:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:30Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:30Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-0be8d11a-19a9-405f-99a7-80313ee38173\"}" file="utils.go:70"
time="2020-09-03T06:47:30Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-0be8d11a-19a9-405f-99a7-80313ee38173:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:30Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:30Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-e7f3cb9d-e1e7-482c-bf7b-0d6dd898f86d\"}" file="utils.go:70"
time="2020-09-03T06:47:30Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-e7f3cb9d-e1e7-482c-bf7b-0d6dd898f86d:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:30Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:30Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-5781d798-06c2-4b91-baf6-bd10785f37d7\"}" file="utils.go:70"
time="2020-09-03T06:47:30Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-5781d798-06c2-4b91-baf6-bd10785f37d7:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:30Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:30Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-22c7cf40-4338-4fe9-aec1-9ebcf8d13746\"}" file="utils.go:70"
time="2020-09-03T06:47:30Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-22c7cf40-4338-4fe9-aec1-9ebcf8d13746:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:31Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:31Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-6a1b99c7-e0c3-44a5-bbbe-f50959638183\"}" file="utils.go:70"
time="2020-09-03T06:47:31Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-6a1b99c7-e0c3-44a5-bbbe-f50959638183:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:31Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:31Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-f02ca42f-1aac-4f0f-b33f-11d36e3f5bfd\"}" file="utils.go:70"
time="2020-09-03T06:47:31Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-f02ca42f-1aac-4f0f-b33f-11d36e3f5bfd:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:31Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:31Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-6950d4e3-0eaf-4b14-905e-2129f7266371\"}" file="utils.go:70"
time="2020-09-03T06:47:31Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-6950d4e3-0eaf-4b14-905e-2129f7266371:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:31Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:31Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-0be8d11a-19a9-405f-99a7-80313ee38173\"}" file="utils.go:70"
time="2020-09-03T06:47:31Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-0be8d11a-19a9-405f-99a7-80313ee38173:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:31Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:31Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-e7f3cb9d-e1e7-482c-bf7b-0d6dd898f86d\"}" file="utils.go:70"
time="2020-09-03T06:47:31Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-e7f3cb9d-e1e7-482c-bf7b-0d6dd898f86d:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:32Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:32Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-5781d798-06c2-4b91-baf6-bd10785f37d7\"}" file="utils.go:70"
time="2020-09-03T06:47:32Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-5781d798-06c2-4b91-baf6-bd10785f37d7:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:32Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:32Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-22c7cf40-4338-4fe9-aec1-9ebcf8d13746\"}" file="utils.go:70"
time="2020-09-03T06:47:32Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-22c7cf40-4338-4fe9-aec1-9ebcf8d13746:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:32Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:32Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-6a1b99c7-e0c3-44a5-bbbe-f50959638183\"}" file="utils.go:70"
time="2020-09-03T06:47:32Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-6a1b99c7-e0c3-44a5-bbbe-f50959638183:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:32Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:32Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-f02ca42f-1aac-4f0f-b33f-11d36e3f5bfd\"}" file="utils.go:70"
time="2020-09-03T06:47:32Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-f02ca42f-1aac-4f0f-b33f-11d36e3f5bfd:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:32Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:32Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-6950d4e3-0eaf-4b14-905e-2129f7266371\"}" file="utils.go:70"
time="2020-09-03T06:47:32Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-6950d4e3-0eaf-4b14-905e-2129f7266371:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:33Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:33Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-0be8d11a-19a9-405f-99a7-80313ee38173\"}" file="utils.go:70"
time="2020-09-03T06:47:33Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-0be8d11a-19a9-405f-99a7-80313ee38173:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:33Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:33Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-e7f3cb9d-e1e7-482c-bf7b-0d6dd898f86d\"}" file="utils.go:70"
time="2020-09-03T06:47:33Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-e7f3cb9d-e1e7-482c-bf7b-0d6dd898f86d:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:34Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"
time="2020-09-03T06:47:34Z" level=info msg="GRPC request: {\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\",\"secrets\":\"***stripped***\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1599097633503-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-f02ca42f-1aac-4f0f-b33f-11d36e3f5bfd\"}" file="utils.go:70"
time="2020-09-03T06:47:34Z" level=error msg="GRPC error: rpc error: code = Aborted desc = There is already an operation pending for the specified id ControllerPublishVolume:pvc-f02ca42f-1aac-4f0f-b33f-11d36e3f5bfd:288023a9-6ad0-6663-6d61-737465720000" file="utils.go:73"
time="2020-09-03T06:47:34Z" level=info msg="GRPC call: /csi.v1.Controller/ControllerPublishVolume" file="utils.go:69"

Extract "Published" field from HPEVolumeInfo

In our environment, we are using the Retain reclaim policy to keep PVs for a while in case a direct restart is necessary.

We have a Go script that removes Released PVs and the corresponding storage from the 3PAR in a controlled way. We would like to add a double check that the PV is Released (information in the Kubernetes PV object) and is not published by this controller, information that I could recover from hpevolumeinfos.

Would it be possible to retrieve this information from hpevolumeinfos directly in Go, rather than using the unstructured dynamic client or querying the 3PAR directly?
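
For reference, this is roughly what we use today with the dynamic client (a minimal sketch; the GroupVersionResource and the path to the Published field below are assumptions that need to be verified against the CRD installed on the cluster):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Assumption: GVR of the HPEVolumeInfo CRD; verify with `kubectl api-resources | grep -i hpe`.
	gvr := schema.GroupVersionResource{Group: "storage.hpe.com", Version: "v1", Resource: "hpevolumeinfos"}

	// List every HPEVolumeInfo object known to the cluster.
	list, err := dyn.Resource(gvr).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		// Assumption: the field path to Published; check `kubectl get hpevolumeinfos -o yaml` for the real layout.
		published, found, err := unstructured.NestedFieldNoCopy(item.Object, "spec", "record", "Published")
		if err != nil || !found {
			continue
		}
		fmt.Printf("%s published=%v\n", item.GetName(), published)
	}
}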

Thanks

CSI driver fails to provision volume

Hi,

I created a vanilla Kubernetes cluster and deployed the CSI driver using the advanced procedure described at https://scod.hpedev.io/csi_driver/deployment.html#advanced_install.

CSI Pods were successfully deployed

kubectl -n kube-system get pods -o wide | grep -e 'cs[ip]'
csp-service-5dbf66584c-l7hnv                      1/1     Running   0          23m     10.244.5.6      kubernetes-4              <none>           <none>
hpe-csi-controller-789d94d7d9-88kwx               5/5     Running   0          21m     192.168.3.112   kubernetes-4              <none>           <none>
hpe-csi-node-qhz74                                2/2     Running   0          21m     192.168.3.112   kubernetes-4              <none>           <none>
hpe-csi-node-swhgj                                2/2     Running   0          21m     192.168.3.114   kubernetes-3              <none>           <none>

Storage Class

kubectl get storageclass
NAME         PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
hpe-nimble   csi.hpe.com   Delete          Immediate           true                   19m

I created a Persistent Volume Claim

kubectl get pvc data-postgres-postgresql-0 -n gitlab
NAME                         STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-postgres-postgresql-0   Pending                                      hpe-nimble     20m

But the CSI driver fails to create the volume

time="2020-06-16T16:24:05Z" level=info msg="GRPC call: /csi.v1.Controller/CreateVolume" file="utils.go:63"
time="2020-06-16T16:24:05Z" level=info msg="GRPC request: {\"capacity_range\":{\"required_bytes\":68719476736},\"name\":\"pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d\",\"parameters\":{\"description\":\"Volume created by the HPE CSI Driver for Kubernetes\"},\"secrets\":\"***stripped***\",\"volume_capabilities\":[{\"AccessType\":{\"Mount\":{\"fs_type\":\"xfs\"}},\"access_mode\":{\"mode\":1}}]}" file="utils.go:64"
time="2020-06-16T16:24:05Z" level=trace msg=">>>>> CreateVolume" file="controller_server.go:184"
time="2020-06-16T16:24:05Z" level=info msg="CreateVolume requested volume 'pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d' with the following Capabilities: '[mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > ]' and Parameters: 'map[description:Volume created by the HPE CSI Driver for Kubernetes]'" file="controller_server.go:187"
time="2020-06-16T16:24:05Z" level=trace msg=">>>>> HandleDuplicateRequest, key: CreateVolume:pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d" file="driver.go:426"
time="2020-06-16T16:24:05Z" level=trace msg=">>>>> GetRequest, key: CreateVolume:pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d" file="driver.go:483"
time="2020-06-16T16:24:05Z" level=trace msg="Acquiring mutex lock for CreateVolume:pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d" file="concurrent.go:44"
time="2020-06-16T16:24:05Z" level=trace msg="Releasing mutex lock for CreateVolume:pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d" file="concurrent.go:67"
time="2020-06-16T16:24:05Z" level=trace msg="<<<<< GetRequest" file="driver.go:490"
time="2020-06-16T16:24:05Z" level=trace msg=">>>>> AddRequest, key: CreateVolume:pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d, value: %!s(bool=true)" file="driver.go:495"
time="2020-06-16T16:24:05Z" level=trace msg="Acquiring mutex lock for CreateVolume:pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d" file="concurrent.go:44"
time="2020-06-16T16:24:05Z" level=trace msg="Print RequestCache: map[CreateVolume:pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d:true]" file="driver.go:503"
time="2020-06-16T16:24:05Z" level=trace msg="Successfully inserted an entry with key CreateVolume:pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d to the cache map" file="driver.go:504"
time="2020-06-16T16:24:05Z" level=trace msg="Releasing mutex lock for CreateVolume:pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d" file="concurrent.go:67"
time="2020-06-16T16:24:05Z" level=trace msg="<<<<< AddRequest" file="driver.go:505"
time="2020-06-16T16:24:05Z" level=trace msg="<<<<< HandleDuplicateRequest" file="driver.go:443"
time="2020-06-16T16:24:05Z" level=trace msg="DB service disabled" file="driver.go:550"
time="2020-06-16T16:24:05Z" level=info msg="Requested for capacity bytes: 68719476736" file="controller_server.go:218"
time="2020-06-16T16:24:05Z" level=info msg=">>>>> ConfigureAnnotations called with PVC Name pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d" file="flavor.go:109"
time="2020-06-16T16:24:05Z" level=info msg=">>>>> getClaimFromClaimName called with pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d" file="flavor.go:346"
time="2020-06-16T16:24:05Z" level=info msg="Looking up PVC with uid 127a1291-32ec-47dc-8c58-1a10c9d22c2d" file="flavor.go:354"
time="2020-06-16T16:24:05Z" level=info msg="Found the following claims: [&PersistentVolumeClaim{ObjectMeta:{data-postgres-postgresql-0  gitlab /api/v1/namespaces/gitlab/persistentvolumeclaims/data-postgres-postgresql-0 127a1291-32ec-47dc-8c58-1a10c9d22c2d 7494517 0 2020-06-16 16:05:34 +0000 UTC <nil> <nil> map[app:postgresql release:postgres role:master] map[volume.beta.kubernetes.io/storage-provisioner:csi.hpe.com] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2020-06-16 16:05:34 +0000 UTC ProtoFields{Map:map[string]Fields{},Raw:nil,}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{68719476736 0} {<nil>}  BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*hpe-nimble,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}]" file="flavor.go:363"
time="2020-06-16T16:24:05Z" level=info msg="<<<<< getClaimFromClaimName" file="flavor.go:365"
time="2020-06-16T16:24:05Z" level=info msg="Configuring annotations on PVC &PersistentVolumeClaim{ObjectMeta:{data-postgres-postgresql-0  gitlab /api/v1/namespaces/gitlab/persistentvolumeclaims/data-postgres-postgresql-0 127a1291-32ec-47dc-8c58-1a10c9d22c2d 7494517 0 2020-06-16 16:05:34 +0000 UTC <nil> <nil> map[app:postgresql release:postgres role:master] map[volume.beta.kubernetes.io/storage-provisioner:csi.hpe.com] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2020-06-16 16:05:34 +0000 UTC ProtoFields{Map:map[string]Fields{},Raw:nil,}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{68719476736 0} {<nil>}  BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*hpe-nimble,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}" file="flavor.go:116"
time="2020-06-16T16:24:05Z" level=info msg=">>>> getClassOverrideOptions" file="flavor.go:369"
time="2020-06-16T16:24:05Z" level=info msg="resulting override keys :[]string{\"nfsPVC\"}" file="flavor.go:388"
time="2020-06-16T16:24:05Z" level=info msg="<<<<< getClassOverrideOptions" file="flavor.go:389"
time="2020-06-16T16:24:05Z" level=info msg=">>>>> getClaimOverrideOptions for csi.hpe.com" file="flavor.go:393"
time="2020-06-16T16:24:05Z" level=info msg=">>>>> getPVFromPVCAnnotation for csi.hpe.com" file="flavor.go:433"
time="2020-06-16T16:24:05Z" level=info msg="<<<<< getPVFromPVCAnnotation" file="flavor.go:439"
time="2020-06-16T16:24:05Z" level=info msg="<<<<< getClaimOverrideOptions" file="flavor.go:429"
time="2020-06-16T16:24:05Z" level=info msg="<<<<< ConfigureAnnotations" file="flavor.go:127"
time="2020-06-16T16:24:05Z" level=trace msg=">>>>> createVolume, name: pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d, size: 68719476736, volumeCapabilities: [mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > ], createParameters: map[description:Volume created by the HPE CSI Driver for Kubernetes]" file="controller_server.go:261"
time="2020-06-16T16:24:05Z" level=trace msg=">>>>> AreVolumeCapabilitiesSupported, volCapabilities : [mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > ]" file="controller_server.go:154"
time="2020-06-16T16:24:05Z" level=trace msg="Validating volCapability: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > " file="controller_server.go:158"
time="2020-06-16T16:24:05Z" level=trace msg=">>>>> IsValidVolumeCapability, volCapability: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > " file="controller_server.go:84"
time="2020-06-16T16:24:05Z" level=trace msg="Found access_mode: SINGLE_NODE_WRITER" file="controller_server.go:93"
time="2020-06-16T16:24:05Z" level=trace msg="Found Mount access_type fs_type:\"xfs\" " file="controller_server.go:117"
time="2020-06-16T16:24:05Z" level=trace msg="Found Mount access_type, FileSystem: xfs" file="controller_server.go:120"
time="2020-06-16T16:24:05Z" level=trace msg="Total length of mount flags: 0" file="controller_server.go:134"
time="2020-06-16T16:24:05Z" level=trace msg="<<<<< IsValidVolumeCapability" file="controller_server.go:148"
time="2020-06-16T16:24:05Z" level=trace msg="<<<<< AreVolumeCapabilitiesSupported" file="controller_server.go:165"
time="2020-06-16T16:24:05Z" level=trace msg=">>>>> ValidateAndGetVolumeAccessType, volCaps: [mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > ]" file="controller_server.go:48"
time="2020-06-16T16:24:05Z" level=trace msg=">>>>> getVolumeAccessType, volCap: mount:<fs_type:\"xfs\" > access_mode:<mode:SINGLE_NODE_WRITER > " file="controller_server.go:24"
time="2020-06-16T16:24:05Z" level=trace msg="Found mount access type fs_type:\"xfs\" " file="controller_server.go:37"
time="2020-06-16T16:24:05Z" level=trace msg="<<<<< getVolumeAccessType" file="controller_server.go:38"
time="2020-06-16T16:24:05Z" level=trace msg="Retrieved volume access type: mount" file="controller_server.go:77"
time="2020-06-16T16:24:05Z" level=trace msg="<<<<< ValidateAndGetVolumeAccessType" file="controller_server.go:78"
time="2020-06-16T16:24:05Z" level=trace msg="Volume create options: map[description:Volume created by the HPE CSI Driver for Kubernetes multi_initiator:false]" file="controller_server.go:343"
time="2020-06-16T16:24:05Z" level=trace msg="Adding filesystem to the volume context, Filesystem: xfs" file="controller_server.go:362"
time="2020-06-16T16:24:05Z" level=trace msg="Volume context in response to CO: map[description:Volume created by the HPE CSI Driver for Kubernetes fsType:xfs volumeAccessMode:mount]" file="controller_server.go:366"
time="2020-06-16T16:24:05Z" level=trace msg=">>>>> GetStorageProvider" file="driver.go:271"
time="2020-06-16T16:24:05Z" level=trace msg=">>>>> CreateCredentials" file="storage_provider.go:58"
time="2020-06-16T16:24:05Z" level=trace msg="<<<<< CreateCredentials" file="storage_provider.go:114"
time="2020-06-16T16:24:05Z" level=trace msg=">>>>> AddStorageProvider" file="driver.go:242"
time="2020-06-16T16:24:05Z" level=info msg="Adding connection to CSP at IP 192.168.3.165, port 8080, context path , with username admin and serviceName nimble-csp-svc" file="driver.go:245"
time="2020-06-16T16:24:05Z" level=trace msg=">>>>> NewContainerStorageProvider" file="container_storage_provider.go:70"
time="2020-06-16T16:24:05Z" level=trace msg=">>>>> getCspClient (service) using URI http://nimble-csp-svc:8080 and username admin" file="container_storage_provider.go:810"
time="2020-06-16T16:24:05Z" level=trace msg="<<<<< getCspClient" file="container_storage_provider.go:816"
time="2020-06-16T16:24:05Z" level=trace msg="Attempting initial login to CSP" file="container_storage_provider.go:85"
time="2020-06-16T16:24:05Z" level=info msg="About to attempt login to CSP for backend 192.168.3.165" file="container_storage_provider.go:114"
time="2020-06-16T16:24:05Z" level=trace msg="Acquiring mutex lock for 192.168.3.165" file="concurrent.go:44"
time="2020-06-16T16:24:05Z" level=trace msg="Request: action=POST path=http://nimble-csp-svc:8080/containers/v1/tokens" file="client.go:173"
time="2020-06-16T16:24:35Z" level=trace msg="Releasing mutex lock for 192.168.3.165" file="concurrent.go:67"
time="2020-06-16T16:24:35Z" level=error msg="Failed to login to CSP.  Status code: 0.  Error: Post http://nimble-csp-svc:8080/containers/v1/tokens: dial tcp: i/o timeout" file="container_storage_provider.go:88"
time="2020-06-16T16:24:35Z" level=trace msg="<<<<< NewContainerStorageProvider" file="container_storage_provider.go:89"
time="2020-06-16T16:24:35Z" level=error msg="Failed to create new CSP connection from given parameters" file="driver.go:251"
time="2020-06-16T16:24:35Z" level=trace msg="<<<<< AddStorageProvider" file="driver.go:252"
time="2020-06-16T16:24:35Z" level=trace msg="<<<<< GetStorageProvider" file="driver.go:290"
time="2020-06-16T16:24:35Z" level=error msg="err: Post http://nimble-csp-svc:8080/containers/v1/tokens: dial tcp: i/o timeout" file="controller_server.go:371"
time="2020-06-16T16:24:35Z" level=trace msg="<<<<< createVolume" file="controller_server.go:372"
time="2020-06-16T16:24:35Z" level=error msg="Volume creation failed, err: rpc error: code = Unavailable desc = Failed to get storage provider from secrets, Post http://nimble-csp-svc:8080/containers/v1/tokens: dial tcp: i/o timeout" file="controller_server.go:238"
time="2020-06-16T16:24:35Z" level=trace msg="DB service disabled" file="driver.go:606"
time="2020-06-16T16:24:35Z" level=trace msg=">>>>> ClearRequest, key: CreateVolume:pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d" file="driver.go:509"
time="2020-06-16T16:24:35Z" level=trace msg="Acquiring mutex lock for CreateVolume:pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d" file="concurrent.go:44"
time="2020-06-16T16:24:35Z" level=trace msg="Print RequestCache: map[]" file="driver.go:529"
time="2020-06-16T16:24:35Z" level=trace msg="Successfully removed an entry with key CreateVolume:pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d from the cache map" file="driver.go:530"
time="2020-06-16T16:24:35Z" level=trace msg="Releasing mutex lock for CreateVolume:pvc-127a1291-32ec-47dc-8c58-1a10c9d22c2d" file="concurrent.go:67"
time="2020-06-16T16:24:35Z" level=trace msg="<<<<< ClearRequest" file="driver.go:531"
time="2020-06-16T16:24:35Z" level=trace msg="<<<<< CreateVolume" file="controller_server.go:239"
time="2020-06-16T16:24:35Z" level=error msg="GRPC error: rpc error: code = Unavailable desc = Failed to get storage provider from secrets, Post http://nimble-csp-svc:8080/containers/v1/tokens: dial tcp: i/o timeout" file="utils.go:67"

hpe-storage-logs-kubernetes-3-20200616_161556.tar.gz
hpe-storage-logs-kubernetes-4-20200616_161553.tar.gz
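
A quick way to check whether the CSP service is reachable from inside the cluster at all (the i/o timeout above suggests it is not) is a throwaway pod, for example:

kubectl -n kube-system run csp-check --rm -it --restart=Never --image=busybox:1.32 -- nslookup nimble-csp-svc
kubectl -n kube-system run csp-check --rm -it --restart=Never --image=busybox:1.32 -- nc -zv -w 5 nimble-csp-svc 8080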

Unable to mount PV into pod with iscsi access

I have an 8200 setup with an iSCSI card. Deployment of the CSI driver goes OK, but when I try to provision a volume as a simple test, I get this error:

time="2020-09-01T01:01:25Z" level=info msg="Host name reported as fcmaster" file="node_server.go:1830"
time="2020-09-01T01:01:25Z" level=info msg="Processing network named eth0 with IpV4 CIDR 172.29.1.170/24" file="node_server.go:1870"
time="2020-09-01T01:01:25Z" level=info msg="Processing network named eth4 with IpV4 CIDR 169.254.187.49/16" file="node_server.go:1870"
time="2020-09-01T01:01:25Z" level=info msg="Processing network named eth5 with IpV4 CIDR 169.254.90.15/16" file="node_server.go:1870"
time="2020-09-01T01:01:25Z" level=info msg="Processing network named flannel.1 with IpV4 CIDR 10.42.0.0/32" file="node_server.go:1870"
time="2020-09-01T01:01:25Z" level=info msg="Processing network named cni0 with IpV4 CIDR 10.42.0.1/24" file="node_server.go:1870"
time="2020-09-01T01:01:26Z" level=info msg="Node info fcmaster already known to cluster\n" file="flavor.go:166"
time="2020-09-01T01:01:26Z" level=info msg="updating Node fcmaster with iqns [iqn.2016-04.com.open-iscsi:dd779986294] wwpns [10000000c9d03876 10000000c9adc715] networks [172.29.1.170/24 169.254.187.49/16 169.254.90.15/16 10.42.0.0/32 10.42.0.1/24]" file="flavor.go:198"
time="2020-09-01T01:01:26Z" level=warning msg="Failed to add [/etc/networks] file to watch list, err no such file or directory :" file="watcher.go:82"
time="2020-09-01T01:01:26Z" level=info msg="GRPC response: {\"max_volumes_per_node\":100,\"node_id\":\"288023a9-6ad0-6663-6d61-737465720000\"}" file="utils.go:75"
time="2020-09-01T01:10:06Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2020-09-01T01:10:06Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2020-09-01T01:10:06Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}}]}" file="utils.go:75"
time="2020-09-01T01:10:06Z" level=info msg="GRPC call: /csi.v1.Node/NodeStageVolume" file="utils.go:69"
time="2020-09-01T01:10:06Z" level=info msg="GRPC request: {\"publish_context\":{\"accessProtocol\":\"iscsi\",\"chapPassword\":\"\",\"chapUsername\":\"\",\"discoveryIps\":\"172.29.1.121,172.29.1.122\",\"fsCreateOptions\":\"\",\"fsMode\":\"\",\"fsOwner\":\"\",\"fsType\":\"ext4\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"60002ac000000000000000130001dc17\",\"targetNames\":\"iqn.2000-05.com.3pardata:21210002ac01dc17,iqn.2000-05.com.3pardata:21220002ac01dc17\",\"targetScope\":\"group\",\"volumeAccessMode\":\"mount\"},\"secrets\":\"***stripped***\",\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-eb3537f2-1c5e-4ed0-9da9-fd7059424d9c/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1598871748459-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-eb3537f2-1c5e-4ed0-9da9-fd7059424d9c\"}" file="utils.go:70"
time="2020-09-01T01:10:06Z" level=info msg="NodeStageVolume requested volume pvc-eb3537f2-1c5e-4ed0-9da9-fd7059424d9c with access type mount, targetPath /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-eb3537f2-1c5e-4ed0-9da9-fd7059424d9c/globalmount, capability mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > , publishContext map[accessProtocol:iscsi chapPassword: chapUsername: discoveryIps:172.29.1.121,172.29.1.122 fsCreateOptions: fsMode: fsOwner: fsType:ext4 lunId:0 readOnly:false serialNumber:60002ac000000000000000130001dc17 targetNames:iqn.2000-05.com.3pardata:21210002ac01dc17,iqn.2000-05.com.3pardata:21220002ac01dc17 targetScope:group volumeAccessMode:mount] and volumeContext map[accessProtocol:iscsi cpg:CPG_Raid6 fsType:ext4 provisioningType:full provisioning_type:full storage.kubernetes.io/csiProvisionerIdentity:1598871748459-8081-csi.hpe.com volumeAccessMode:mount]" file="node_server.go:220"
time="2020-09-01T01:10:06Z" level=info msg="Adding connection to CSP at IP 172.29.1.10, port 8080, context path , with username 3paradm and serviceName primera3par-csp-svc" file="driver.go:307"
time="2020-09-01T01:10:06Z" level=info msg="About to attempt login to CSP for backend 172.29.1.10" file="container_storage_provider.go:108"
time="2020-09-01T01:10:14Z" level=info msg="\n SCANNING iscsi lun id 0" file="iscsi.go:1003"
time="2020-09-01T01:10:14Z" level=info msg="\n SCANNING iscsi lun id 0" file="iscsi.go:1003"
time="2020-09-01T01:10:14Z" level=error msg="\n Error in GetSecondaryBackends unexpected end of JSON input" file="volume.go:87"
time="2020-09-01T01:10:14Z" level=error msg="\n Passed details " file="volume.go:88"
time="2020-09-01T01:10:14Z" level=error msg="\n Error in GetSecondaryBackends unexpected end of JSON input" file="volume.go:87"
time="2020-09-01T01:10:14Z" level=error msg="\n Passed details " file="volume.go:88"
time="2020-09-01T01:10:15Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:15Z" level=error msg="process with pid : 71 finished with error = exit status 1" file="cmd.go:60"
time="2020-09-01T01:10:15Z" level=warning msg="multipathdShowCmd: error command multipathd failed with rc=1 err= with args [show paths format %w %d %t %i %o %T %c %s %m]" file="multipath.go:86"
time="2020-09-01T01:10:15Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:20Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:20Z" level=error msg="process with pid : 77 finished with error = exit status 1" file="cmd.go:60"
time="2020-09-01T01:10:20Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:20Z" level=warning msg="multipathdShowCmd: error command multipathd failed with rc=1 err= with args [show paths format %w %d %t %i %o %T %c %s %m]" file="multipath.go:86"
time="2020-09-01T01:10:25Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:25Z" level=error msg="process with pid : 83 finished with error = exit status 1" file="cmd.go:60"
time="2020-09-01T01:10:25Z" level=warning msg="multipathdShowCmd: error command multipathd failed with rc=1 err= with args [show paths format %w %d %t %i %o %T %c %s %m]" file="multipath.go:86"
time="2020-09-01T01:10:25Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:30Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:30Z" level=error msg="process with pid : 89 finished with error = exit status 1" file="cmd.go:60"
time="2020-09-01T01:10:30Z" level=warning msg="multipathdShowCmd: error command multipathd failed with rc=1 err= with args [show paths format %w %d %t %i %o %T %c %s %m]" file="multipath.go:86"
time="2020-09-01T01:10:30Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:35Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:35Z" level=error msg="process with pid : 95 finished with error = exit status 1" file="cmd.go:60"
time="2020-09-01T01:10:35Z" level=warning msg="multipathdShowCmd: error command multipathd failed with rc=1 err= with args [show paths format %w %d %t %i %o %T %c %s %m]" file="multipath.go:86"
time="2020-09-01T01:10:35Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:40Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:40Z" level=error msg="process with pid : 101 finished with error = exit status 1" file="cmd.go:60"
time="2020-09-01T01:10:40Z" level=warning msg="multipathdShowCmd: error command multipathd failed with rc=1 err= with args [show paths format %w %d %t %i %o %T %c %s %m]" file="multipath.go:86"
time="2020-09-01T01:10:40Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:45Z" level=error msg="unable to create device for volume  with IQN " file="device.go:991"
time="2020-09-01T01:10:45Z" level=error msg="Failed to create device from publish info. Error: device not found with serial 60002ac000000000000000130001dc17 or target " file="node_server.go:472"
time="2020-09-01T01:10:45Z" level=error msg="GRPC error: rpc error: code = Internal desc = Failed to stage volume pvc-eb3537f2-1c5e-4ed0-9da9-fd7059424d9c, err: rpc error: code = Internal desc = Error creating device for volume pvc-eb3537f2-1c5e-4ed0-9da9-fd7059424d9c, err: device not found with serial 60002ac000000000000000130001dc17 or target " file="utils.go:73"
time="2020-09-01T01:10:46Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetCapabilities" file="utils.go:69"
time="2020-09-01T01:10:46Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2020-09-01T01:10:46Z" level=info msg="GRPC response: {\"capabilities\":[{\"Type\":{\"Rpc\":{\"type\":1}}},{\"Type\":{\"Rpc\":{\"type\":3}}}]}" file="utils.go:75"
time="2020-09-01T01:10:46Z" level=info msg="GRPC call: /csi.v1.Node/NodeStageVolume" file="utils.go:69"
time="2020-09-01T01:10:46Z" level=info msg="GRPC request: {\"publish_context\":{\"accessProtocol\":\"iscsi\",\"chapPassword\":\"\",\"chapUsername\":\"\",\"discoveryIps\":\"172.29.1.121,172.29.1.122\",\"fsCreateOptions\":\"\",\"fsMode\":\"\",\"fsOwner\":\"\",\"fsType\":\"ext4\",\"lunId\":\"0\",\"readOnly\":\"false\",\"serialNumber\":\"60002ac000000000000000130001dc17\",\"targetNames\":\"iqn.2000-05.com.3pardata:21210002ac01dc17,iqn.2000-05.com.3pardata:21220002ac01dc17\",\"targetScope\":\"group\",\"volumeAccessMode\":\"mount\"},\"secrets\":\"***stripped***\",\"staging_target_path\":\"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-eb3537f2-1c5e-4ed0-9da9-fd7059424d9c/globalmount\",\"volume_capability\":{\"AccessType\":{\"Mount\":{\"fs_type\":\"ext4\"}},\"access_mode\":{\"mode\":1}},\"volume_context\":{\"accessProtocol\":\"iscsi\",\"cpg\":\"CPG_Raid6\",\"fsType\":\"ext4\",\"provisioningType\":\"full\",\"provisioning_type\":\"full\",\"storage.kubernetes.io/csiProvisionerIdentity\":\"1598871748459-8081-csi.hpe.com\",\"volumeAccessMode\":\"mount\"},\"volume_id\":\"pvc-eb3537f2-1c5e-4ed0-9da9-fd7059424d9c\"}" file="utils.go:70"
time="2020-09-01T01:10:46Z" level=info msg="NodeStageVolume requested volume pvc-eb3537f2-1c5e-4ed0-9da9-fd7059424d9c with access type mount, targetPath /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-eb3537f2-1c5e-4ed0-9da9-fd7059424d9c/globalmount, capability mount:<fs_type:\"ext4\" > access_mode:<mode:SINGLE_NODE_WRITER > , publishContext map[accessProtocol:iscsi chapPassword: chapUsername: discoveryIps:172.29.1.121,172.29.1.122 fsCreateOptions: fsMode: fsOwner: fsType:ext4 lunId:0 readOnly:false serialNumber:60002ac000000000000000130001dc17 targetNames:iqn.2000-05.com.3pardata:21210002ac01dc17,iqn.2000-05.com.3pardata:21220002ac01dc17 targetScope:group volumeAccessMode:mount] and volumeContext map[accessProtocol:iscsi cpg:CPG_Raid6 fsType:ext4 provisioningType:full provisioning_type:full storage.kubernetes.io/csiProvisionerIdentity:1598871748459-8081-csi.hpe.com volumeAccessMode:mount]" file="node_server.go:220"
time="2020-09-01T01:10:46Z" level=info msg="\n SCANNING iscsi lun id 0" file="iscsi.go:1003"
time="2020-09-01T01:10:46Z" level=info msg="\n SCANNING iscsi lun id 0" file="iscsi.go:1003"
time="2020-09-01T01:10:46Z" level=error msg="\n Error in GetSecondaryBackends unexpected end of JSON input" file="volume.go:87"
time="2020-09-01T01:10:46Z" level=error msg="\n Passed details " file="volume.go:88"
time="2020-09-01T01:10:46Z" level=error msg="\n Error in GetSecondaryBackends unexpected end of JSON input" file="volume.go:87"
time="2020-09-01T01:10:46Z" level=error msg="\n Passed details " file="volume.go:88"
time="2020-09-01T01:10:47Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:47Z" level=error msg="process with pid : 122 finished with error = exit status 1" file="cmd.go:60"
time="2020-09-01T01:10:47Z" level=warning msg="multipathdShowCmd: error command multipathd failed with rc=1 err= with args [show paths format %w %d %t %i %o %T %c %s %m]" file="multipath.go:86"
time="2020-09-01T01:10:47Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:52Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:52Z" level=error msg="process with pid : 130 finished with error = exit status 1" file="cmd.go:60"
time="2020-09-01T01:10:52Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:52Z" level=warning msg="multipathdShowCmd: error command multipathd failed with rc=1 err= with args [show paths format %w %d %t %i %o %T %c %s %m]" file="multipath.go:86"
time="2020-09-01T01:10:57Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:10:57Z" level=warning msg="multipathdShowCmd: error command multipathd failed with rc=1 err= with args [show paths format %w %d %t %i %o %T %c %s %m]" file="multipath.go:86"
time="2020-09-01T01:10:57Z" level=error msg="process with pid : 136 finished with error = exit status 1" file="cmd.go:60"
time="2020-09-01T01:10:57Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:11:02Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:11:02Z" level=error msg="process with pid : 142 finished with error = exit status 1" file="cmd.go:60"
time="2020-09-01T01:11:02Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:11:02Z" level=warning msg="multipathdShowCmd: error command multipathd failed with rc=1 err= with args [show paths format %w %d %t %i %o %T %c %s %m]" file="multipath.go:86"
time="2020-09-01T01:11:07Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:11:07Z" level=warning msg="multipathdShowCmd: error command multipathd failed with rc=1 err= with args [show paths format %w %d %t %i %o %T %c %s %m]" file="multipath.go:86"
time="2020-09-01T01:11:07Z" level=error msg="process with pid : 148 finished with error = exit status 1" file="cmd.go:60"
time="2020-09-01T01:11:07Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:11:12Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:11:12Z" level=error msg="process with pid : 154 finished with error = exit status 1" file="cmd.go:60"
time="2020-09-01T01:11:12Z" level=warning msg="multipathdShowCmd: error command multipathd failed with rc=1 err= with args [show paths format %w %d %t %i %o %T %c %s %m]" file="multipath.go:86"
time="2020-09-01T01:11:12Z" level=error msg="\n Error in GetSecondaryArrayLUNIds unexpected end of JSON input" file="volume.go:29"
time="2020-09-01T01:11:17Z" level=error msg="unable to create device for volume  with IQN " file="device.go:991"
time="2020-09-01T01:11:17Z" level=error msg="Failed to create device from publish info. Error: device not found with serial 60002ac000000000000000130001dc17 or target " file="node_server.go:472"

Driver doesn't work in microk8s

The following events exist for one of the pods of DaemonSet/hpe-csi-node:

Events:
  Type     Reason       Age                  From     Message
  ----     ------       ----                 ----     -------
  Warning  FailedMount  31m (x56 over 137m)  kubelet  MountVolume.SetUp failed for volume "registration-dir" : hostPath type check failed: /var/lib/kubelet/plugins_registry is not a directory
  Warning  FailedMount  21m (x49 over 117m)  kubelet  (combined from similar events): Unable to attach or mount volumes: unmounted volumes=[registration-dir], unattached volumes=[root-dir etcsystemd linux-config-file hpe-csi-node-sa-token-ltwgz etc-hpe-storage-dir etc-kubernetes plugin-dir registration-dir log-dir runsystemd device-dir sys pods-mount-dir]: timed out waiting for the condition
  Warning  FailedMount  15m                  kubelet  Unable to attach or mount volumes: unmounted volumes=[registration-dir], unattached volumes=[etcsystemd pods-mount-dir etc-hpe-storage-dir linux-config-file 
plugin-dir log-dir runsystemd registration-dir hpe-csi-node-sa-token-ltwgz etc-kubernetes root-dir device-dir sys]: timed out waiting for the condition
  Warning  FailedMount  13m                  kubelet  Unable to attach or mount volumes: unmounted volumes=[registration-dir], unattached volumes=[pods-mount-dir log-dir runsystemd etcsystemd root-dir etc-kubernetes etc-hpe-storage-dir registration-dir hpe-csi-node-sa-token-ltwgz device-dir sys linux-config-file plugin-dir]: timed out waiting for the condition
  Warning  FailedMount  11m                  kubelet  Unable to attach or mount volumes: unmounted volumes=[registration-dir], unattached volumes=[runsystemd linux-config-file registration-dir root-dir sys hpe-csi-node-sa-token-ltwgz etc-hpe-storage-dir etc-kubernetes etcsystemd plugin-dir pods-mount-dir device-dir log-dir]: timed out waiting for the condition
  Warning  FailedMount  9m1s                 kubelet  Unable to attach or mount volumes: unmounted volumes=[registration-dir], unattached volumes=[runsystemd registration-dir hpe-csi-node-sa-token-ltwgz plugin-dir etcsystemd device-dir etc-hpe-storage-dir log-dir etc-kubernetes sys pods-mount-dir root-dir linux-config-file]: timed out waiting for the condition
  Warning  FailedMount  6m47s                kubelet  Unable to attach or mount volumes: unmounted volumes=[registration-dir], unattached volumes=[runsystemd plugin-dir sys etcsystemd device-dir etc-kubernetes pods-mount-dir root-dir etc-hpe-storage-dir registration-dir linux-config-file hpe-csi-node-sa-token-ltwgz log-dir]: timed out waiting for the condition
  Warning  FailedMount  4m31s                kubelet  Unable to attach or mount volumes: unmounted volumes=[registration-dir], unattached volumes=[runsystemd plugin-dir linux-config-file root-dir device-dir etc-kubernetes log-dir sys registration-dir pods-mount-dir etc-hpe-storage-dir hpe-csi-node-sa-token-ltwgz etcsystemd]: timed out waiting for the condition
  Warning  FailedMount  2m14s                kubelet  Unable to attach or mount volumes: unmounted volumes=[registration-dir], unattached volumes=[root-dir etc-kubernetes sys linux-config-file device-dir etc-hpe-storage-dir hpe-csi-node-sa-token-ltwgz log-dir plugin-dir runsystemd etcsystemd registration-dir pods-mount-dir]: timed out waiting for the condition
  Warning  FailedMount  91s (x16 over 17m)   kubelet  MountVolume.SetUp failed for volume "registration-dir" : hostPath type check failed: /var/lib/kubelet/plugins_registry is not a directory

Any pods that use a PVC get stuck on ContainerCreating.
This is an event that prevents pods from running:

AttachVolume.Attach failed for volume "pvc-f42f9ef2-1cd3-4bdf-880d-4b1cbdcba50a" : node "kube-dev2" has no NodeID annotation

I am pretty new to Kubernetes, but I know that microk8s uses /var/snap/microk8s/var/lib/kubelet/plugins_registry instead.
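
If the Helm chart can be pointed at that directory, I would expect something like the following to work (assuming kubeletRootDir is still the right chart value to override; please correct me if it is not):

helm repo add hpe-storage https://hpe-storage.github.io/co-deployments/
helm install my-hpe-csi-driver hpe-storage/hpe-csi-driver -n kube-system \
  --set kubeletRootDir=/var/snap/microk8s/var/lib/kubelet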

Failed to create NFS provisioned volume in OpenShift Virtualization

Environment: OCP 4.12 deployed on bare metal with the OpenShift Virtualization operator installed. The cluster is connected to an HPE Primera storage array. HPE CSI Driver for Kubernetes 2.2.0 and HPE NFS Provisioner 3.0.0 were installed.

Issue:

When creating the following Data Volume:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: dv1
spec:
  source:
    blank: {}
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: nfs-ssd

It failed with the following error messages:

failed to provision volume with StorageClass "nfs-ssd": rpc error: code = Internal desc = Failed to create NFS provisioned volume pvc-c2a552e7-5d05-4fe0-b32f-bc79ba6fb1e3, err persistentvolumeclaims "hpe-nfs-c2a552e7-5d05-4fe0-b32f-bc79ba6fb1e3" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , , rollback status: success

nfs-ssd is a storage class backed by the HPE NFS Provisioner.

Failed to get initiators for host, restarting registration container

OpenShift 4.6
Worker Nodes: CoreOS
Storage: Nimble (iSCSI)

Installed via the OpenShift Operator Hub

Secret yaml:

apiVersion: v1
kind: Secret
metadata:
  name: hpe-backend
  namespace: hpe-csi-driver
stringData:
  serviceName: nimble-csp-svc
  servicePort: "8080"
  backend: 10.x.x.30
  username: ocpcsi
  password: passwordhere

StorageClass yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: hpe-nimble
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/controller-expand-secret-name: hpe-backend
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-csi-driver
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-csi-driver
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-csi-driver
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-csi-driver
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-csi-driver
  description: Volume created by the HPE CSI Driver for Kubernetes
  accessProtocol: iscsi
  csi.storage.k8s.io/fstype: xfs
reclaimPolicy: Delete
allowVolumeExpansion: true

hpe-csi-scc.yaml:

kind: SecurityContextConstraints
apiVersion: security.openshift.io/v1
metadata:
  name: hpe-csi-scc
allowHostDirVolumePlugin: true
allowHostIPC: true
allowHostNetwork: true
allowHostPID: true
allowHostPorts: true
allowPrivilegeEscalation: true
allowPrivilegedContainer: true
allowedCapabilities:
- '*'
defaultAddCapabilities: []
fsGroup:
  type: RunAsAny
groups: []
priority:
readOnlyRootFilesystem: false
requiredDropCapabilities: []
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
- system:serviceaccount:hpe-csi-driver:hpe-csi-controller-sa
- system:serviceaccount:hpe-csi-driver:hpe-csi-node-sa
- system:serviceaccount:hpe-csi-driver:hpe-csp-sa
- system:serviceaccount:hpe-csi-driver:hpe-csi-operator-sa
- system:serviceaccount:hpe-nfs:hpe-csi-nfs-sa
volumes:
- '*'

The hpe-csi-node pods are crash looping:

% oc get all
NAME                                           READY   STATUS             RESTARTS   AGE
pod/hpe-csi-controller-65778d9bd6-r2znj        7/7     Running            0          54m
pod/hpe-csi-driver-operator-7786f7f695-2nztk   1/1     Running            0          111m
pod/hpe-csi-node-7hsmf                         1/2     CrashLoopBackOff   13         46m
pod/hpe-csi-node-7ht64                         1/2     CrashLoopBackOff   13         46m
pod/hpe-csi-node-djjzj                         1/2     CrashLoopBackOff   13         46m
pod/hpe-csi-node-kqs9w                         1/2     CrashLoopBackOff   13         46m
pod/nimble-csp-84845b579b-glt6c                1/1     Running            0          54m
pod/primera3par-csp-6d87785fcb-dxhwp           1/1     Running            0          54m

NAME                          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/nimble-csp-svc        ClusterIP   172.16.25.52    <none>        8080/TCP   54m
service/primera3par-csp-svc   ClusterIP   172.16.25.116   <none>        8080/TCP   54m

NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/hpe-csi-node   4         4         0       4            0           <none>          54m

NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hpe-csi-controller        1/1     1            1           54m
deployment.apps/hpe-csi-driver-operator   1/1     1            1           111m
deployment.apps/nimble-csp                1/1     1            1           54m
deployment.apps/primera3par-csp           1/1     1            1           54m

NAME                                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/hpe-csi-controller-65778d9bd6        1         1         1       54m
replicaset.apps/hpe-csi-driver-operator-7786f7f695   1         1         1       111m
replicaset.apps/nimble-csp-84845b579b                1         1         1       54m
replicaset.apps/primera3par-csp-6d87785fcb           1         1         1       54m

Logs from those pods:

% oc logs pod/hpe-csi-node-7hsmf csi-node-driver-registrar
I0114 15:31:37.939708       1 main.go:110] Version: v1.1.0-0-g80a94421
I0114 15:31:37.939764       1 main.go:120] Attempting to open a gRPC connection with: "/csi/csi.sock"
I0114 15:31:37.939782       1 connection.go:151] Connecting to unix:///csi/csi.sock
I0114 15:31:37.940409       1 main.go:127] Calling CSI driver to discover driver name
I0114 15:31:37.940447       1 connection.go:180] GRPC call: /csi.v1.Identity/GetPluginInfo
I0114 15:31:37.940458       1 connection.go:181] GRPC request: {}
I0114 15:31:37.943526       1 connection.go:183] GRPC response: {"name":"csi.hpe.com","vendor_version":"0.1"}
I0114 15:31:37.944022       1 connection.go:184] GRPC error: <nil>
I0114 15:31:37.944035       1 main.go:137] CSI driver name: "csi.hpe.com"
I0114 15:31:37.944139       1 node_register.go:54] Starting Registration Server at: /registration/csi.hpe.com-reg.sock
I0114 15:31:37.944349       1 node_register.go:61] Registration Server started at: /registration/csi.hpe.com-reg.sock
I0114 15:31:39.225204       1 main.go:77] Received GetInfo call: &InfoRequest{}
I0114 15:31:39.248235       1 main.go:87] Received NotifyRegistrationStatus call: &RegistrationStatus{PluginRegistered:false,Error:RegisterPlugin error -- plugin registration failed with err: rpc error: code = Internal desc = Failed to get initiators for host,}
E0114 15:31:39.248347       1 main.go:89] Registration process failed with error: RegisterPlugin error -- plugin registration failed with err: rpc error: code = Internal desc = Failed to get initiators for host, restarting registration container.

and

% oc logs pod/hpe-csi-node-7hsmf hpe-csi-driver
time="2021-01-14T15:36:51Z" level=info msg="GRPC call: /csi.v1.Identity/GetPluginInfo" file="utils.go:69"
time="2021-01-14T15:36:51Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2021-01-14T15:36:51Z" level=info msg=">>>>> GetPluginInfo" file="identity_server.go:16"
time="2021-01-14T15:36:51Z" level=info msg="<<<<< GetPluginInfo" file="identity_server.go:19"
time="2021-01-14T15:36:51Z" level=info msg="GRPC response: {\"name\":\"csi.hpe.com\",\"vendor_version\":\"0.1\"}" file="utils.go:75"
time="2021-01-14T15:36:53Z" level=info msg="GRPC call: /csi.v1.Node/NodeGetInfo" file="utils.go:69"
time="2021-01-14T15:36:53Z" level=info msg="GRPC request: {}" file="utils.go:70"
time="2021-01-14T15:36:53Z" level=info msg="Host name reported as worker2.openshift4c.x.x.com" file="node_server.go:1830"
time="2021-01-14T15:36:53Z" level=warning msg="no fc adapters found on the host" file="fc.go:49"
time="2021-01-14T15:36:53Z" level=error msg="Failed to get initiators for host worker2.openshift4c.x.x.com.  Error: iscsi and fc initiators not found" file="node_server.go:1834"
time="2021-01-14T15:36:53Z" level=error msg="GRPC error: rpc error: code = Internal desc = Failed to get initiators for host" file="utils.go:73"

It's not clear what the issue is, other than that it seems related to the iSCSI initiator and/or the plugin registration.

I can't find any other steps that I've missed.

I do see that a LUN is created on the Nimble when I create a PVC, so that part is working. Perhaps it is some OpenShift/worker node issue, but I'm unsure how to root cause it further. My understanding is that CoreOS should not require any changes and should already have what is needed from a worker node perspective.

HPE Nimble not usable with OpenShift Virtualization

The HPE CSI driver does not work as expected in ReadWriteMany mode using a raw block device.

During a VM live migration, the following happens.

  1. VM is running in pod PA on node A
  2. Live migration starts
  3. pod PB is started on node B
  4. VM memory/state is copied to PB on node B
  5. PA shuts down, PB resumes the VM

The VM disk is a volume that is attached to 2 pods (source migration and destination) from steps 2 to 4. Once the migration finishes the source pod (PA) is deleted and the VM runs on the destination (PB). The problem is that once PA shuts down the volume is set to offline by the CSI driver, and the destination (PB) loses access to the volume, pausing the VM with EIO as its storage is gone, so instead of running on the destination node, the VM is now paused/hung because it lost access to its disk/volume.

The problem is reproducible by starting 2 pods on 2 different nodes, sharing a raw block RWX volume. No need for live migration or virtualisation.

Once the first pod shuts down the remaining pod loses access to the volume, because the nimble side makes the backing volume go offline and the iscsi connection is abruptly closed while one of the pods is still using it.

  1. Create test-pvc with Block and RWX, on HPE CSI StorageClass (see the example PVC after this list).
  2. Create 2 pods, running on 2 different nodes, using the shared PVC
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    app: httpd
  namespace: default
spec:
  containers:
  - name: httpd
    image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest'
    ports:
    - containerPort: 8080
  volumes:
  - name: raw
    persistentVolumeClaim:
      claimName: test-pvc
  1. Shutdown one pod

  2. Other pod loses LUN access, LUN is offline on HPE Nimble dashboard
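
For reproduction, a minimal sketch of the test-pvc from step 1 (raw block, RWX) could look like this; the StorageClass name and the 10Gi size are placeholders, not values from the original report:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteMany
  volumeMode: Block      # raw block device, shared by both pods
  resources:
    requests:
      storage: 10Gi
  storageClassName: hpe-standard-block   # placeholder for the HPE CSI StorageClass in use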

Logs for the HPE CSI driver shows the request to offline the LUN that is still used by the other pod:

2022-08-22T21:32:44.661195588Z 21:32:44 DEBUG [co.ni.no.gr.GroupMgmtClient]] (executor-thread-88) 7 * Sending client request on thread executor-thread-88
2022-08-22T21:32:44.661195588Z 7 > PUT https://10.24.8.59:5392/v1/volumes/066b77f2277cc0b66d00000000000000000000098e
2022-08-22T21:32:44.661195588Z 7 > Content-Type: application/json
2022-08-22T21:32:44.661195588Z 7 > X-Auth-Token: ****
2022-08-22T21:32:44.661195588Z {"data":{"online":false,"force":true}}

It should not offline the volume, so PB does not lose access to it. The volume should only be offlined once all pods sharing it have stopped using it. Perhaps a refcount is missing.

Can't mount volume in a Pod

Hello,

I finally found time to test the CSI driver 👍 The explanations are clear and the examples too, thanks!

However I'm struggling with a problem 😢

I applied all config files for k8s v1.15 (README). ✔️
A volume is well created on the nimble's side after a PVC is applied ✔️

image

But the volume can't be mounted on the pod ⛔️

image

The error message isn't that clear. What does "Please sanity check host OS and array IP configuration, network, netmask and gateway" mean? Where do I have to look? Pod? Host? Driver? Config?

Looking at the network packets, I can see that my K8s worker sends a RST packet to the Nimble.
Is there a timeout parameter that could explain that?
Do you have any other idea where this problem comes from?

Regards

Error response from daemon: unauthorized: access to the requested resource is not authorized

It seems the nfs-provisioner image on Quay is not publicly accessible?

$ docker pull quay.io/hpestorage/nfs-provisioner:v1.0.0
Error response from daemon: unauthorized: access to the requested resource is not authorized

Just to verify my docker is setup ok:

$ docker pull quay.io/coreos/etcd
Using default tag: latest
latest: Pulling from coreos/etcd
ff3a5c916c92: Downloading [===>                                               ]  164.7kB/2.066MB
...

Reboot maps volume to /var/lib

When a node comes back up from a reboot or a hard reset, our volumes get mapped to the /var/lib logical volume instead of the proper multipath device.

cat /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-7b735c41-37c7-4505-9140-22ca514c546a/globalmount/deviceInfo.json 
{"volume_id":"0671136b496958f3f500000000000000000000009a","volume_access_mode":2,"device":{"path_name":"dm-11","serial_number":"2069cb61307eb56ea6c9ce900bc984b18","major":"253","minor":"11","alt_full_path_name":"/dev/mapper/mpathm","mpath_device_name":"mpathm","size":102400,"slaves":["sdh"],"iscsi_target":{"Name":"iqn.2007-11.com.nimblestorage:pvc-7b735c41-37c7-4505-9140-22ca514c546a-v71136b496958f3f5.0000009a.184b98bc","Address":"10.32.8.7","Port":"3260","Tag":"2460","Scope":""},"target_scope":"volume","state":"active"},"mount_info":{"mount_point":"/var/lib/kubelet/plugins/hpe.com/mounts/0671136b496958f3f500000000000000000000009a","filesystem_options":{"Type":"xfs","Mode":"","Owner":""}}}
mount |grep 7b73
/dev/mapper/centos00-var_lib on /var/lib/kubelet/pods/1787c1da-969e-4f09-b158-ab683014c788/volumes/kubernetes.io~csi/pvc-7b735c41-37c7-4505-9140-22ca514c546a/mount type xfs (rw,relatime,seclabel,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota)
ls -l /dev/mapper/
total 0
lrwxrwxrwx. 1 root root       7 Nov  3 04:49 centos00-home -> ../dm-5
lrwxrwxrwx. 1 root root       7 Nov  3 04:49 centos00-root -> ../dm-0
lrwxrwxrwx. 1 root root       7 Nov  3 04:49 centos00-swap -> ../dm-1
lrwxrwxrwx. 1 root root       7 Nov  3 04:49 centos00-var -> ../dm-2
lrwxrwxrwx. 1 root root       7 Nov  3 04:49 centos00-var_lib -> ../dm-4
lrwxrwxrwx. 1 root root       7 Nov  3 04:49 centos00-var_log -> ../dm-3
crw-------. 1 root root 10, 236 Nov  3 04:49 control
lrwxrwxrwx. 1 root root       7 Nov  3 04:51 mpathh -> ../dm-7
lrwxrwxrwx. 1 root root       7 Nov  3 04:51 mpathi -> ../dm-8
lrwxrwxrwx. 1 root root       7 Nov  3 04:51 mpathj -> ../dm-6

Looking at the iscsiadm sessions shows there is no session for this volume, and it seems like the driver then uses the /var/lib device.

csi driver version

v1.0.0-beta: Pulling from hpestorage/csi-driver
Digest: sha256:89e9d0fd81b09eba0db567114962bd9eb957cac62f134e07712240605b75e9a6
Status: Image is up to date for hpestorage/csi-driver:v1.0.0-beta
docker.io/hpestorage/csi-driver:v1.0.0-beta

CSI driver needs port 443 and 5392 in order to allow refreshing the token/relogin - documentation/consolidation needed

Setup

  • Nimble HF40 with 6.0.0.300-956221-opt
  • Rancher RKE cluster with K8s v1.20.15
  • HPE CSI driver 2.1.0/2.1.1 deployed with Helm

Description of the Problem

After the initial deployment of the CSI driver, everything works as expected: new volumes can be created and used. After a time which exceeds either 70 minutes (this seems to be the expiration duration of the token, as shown below) or the inactivity timeout (if set to a value lower than 70 min in the Nimble UI under Administration -> Security -> Inactivity timeout), it is no longer possible to create new PVs, while the existing PVs continue to work. This behavior is easily reproducible by setting the timeout value to something like 5 min and redeploying all CSI driver related pods (maybe not all are necessary, but this worked for us), or by waiting 70 min and attempting to create a new PV/PVC. After longer investigation it seems that

  • the token is being refreshed via port 443 if the CSI driver is creating new volumes while the token is still valid and the remaining valid time of the token is below an unknown threshold
  • the token is being refreshed via port 5392 if no new volumes are being created before the token expires

What we observe

The creation of new PVs fails after either the inactivity timeout is triggered or the token expires. It stays in this state until the CSI-related pods are redeployed. Triggering the creation and subsequent deletion of a PV/PVC every ~10 min seems to trigger the request for a new token, so that the system keeps working for longer than 70 min.

What we expect

The CSI driver should keep the session running or trigger a reauthentication after the session has terminated (for whatever reason). This must cover refreshing the auth token as well as getting a new token in a new session. For this it should be clear which port is used, and preferably only one port should be used for API access. At the very least, the port requirements must be described in SCOD.

Additional information

Logs:
Logs of the failing state - works-not.txt
Logs of the working state - works.txt

HPE Support case number 04941823

Authentication Token - expiration duration of 4200sec/70min deduced from expiry_time and creation_time

{
   "data":[
      {
         "id":"19532e249f862567d3000000000000000000007ad4",
         "session_token":"****",
         "username":"general1",
         "app_name":"",
         "source_ip":"127.0.0.1",
         "creation_time":1646892911,
         "last_modified":1646895311,
         "expiry_time":1646897111
      }
   ],
   "start_row":0,
   "end_row":0,
   "total_rows":0
}

Openshift 4.12 certification

We have a production Openshift cluster that we would like to upgrade from 4.10 to 4.12. Can you tell us when is Openshift 4.12 certification planned?

fsGroup sometimes works sometimes breaks

Hi all

I am using this CSI driver to access HPE nimble storage over fiber channel.
Lately I noticed that sometimes the fsGroup is not applied to the storage.

Currently, there are three applications on the cluster running on the same node.

  1. Gitlab with working fsGroup
  2. Gitlab without working fsGroup
  3. mfw where the fsGroup works in ~30% of deployments.

The relevant yaml:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-file-writer
  namespace: snapshot-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: multi-file-writer
  minReadySeconds: 5
  strategy:
    type: Recreate
  template:
    spec:
      securityContext:
        runAsUser: 65534
        fsGroup: 65534

Debugging

The logs did not contain any hints regarding these applications:

grep -ri fsGroup /var/log/nimble* /var/log/syslog

Other containers emitted logs containing FSGroup:nil. Since they did not request an fsGroup, that appears to be okay.

Cheers,
Stefan

OpenShift 4.10 certification

Hi, when can we expect OpenShift 4.10 certification?

CSI driver version 2.1.1 supports Kubernetes 1.23 which is included in OpenShift 4.10 but the OpenShift SCOD page still doesn't include OCP 4.10 although it was released in March.

initiator group wasn't created correctly

Environment

  • Kubernetes Distribution: OpenShift
  • Storage: Nimble connected with Fiber channell (version: 5.2.1.400-796142-opt)
  • HPE CSI Driver: Managed by Operator (version: hpe-csi-operator.v1.3.0)

Summary
I found that the WWPNs in the Initiator Group did not match the WWPNs of the worker node on which the Pod was scheduled.
(I also found that the Initiator Group name was different from the worker node name.)

Procedure

  • The PVC is deployed successfully (the PV and the volume on the Nimble are created automatically).
$ oc get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS         REASON   AGE
persistentvolume/pvc-1c0fa0d1-39fa-4351-890c-6cf2c6cddb3c   1Gi        RWO            Delete           Bound    storage-test/rwo-pvc   hpe-standard-block            5m56s

NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS         AGE
persistentvolumeclaim/rwo-pvc   Bound    pvc-1c0fa0d1-39fa-4351-890c-6cf2c6cddb3c   1Gi        RWO            hpe-standard-block   5m57s
Nimble OS $ vol --list
----------------+---------+------+-------+---------+---------+-------+---------
Name             Size      Online Offline Usage     Reserve % Limit % Path
                 (MiB)            Reason  (MiB)
----------------+---------+------+-------+---------+---------+-------+---------
pvc-1c0fa0d1-39fa-4351-890c-6cf2c6cddb3c      1024 Yes    N/A             0         0     100 default:/
  • Then deployed a Deployment with the PVC and replicas=1, but the Pod stayed in ContainerCreating status. I could see the below messages in the Kubernetes events.
$ oc get events
LAST SEEN   TYPE      REASON                   OBJECT                          MESSAGE
71s         Normal    Scheduled                pod/app-rwo-59b5d69d88-sfjxq    Successfully assigned storage-test/app-rwo-59b5d69d88-sfjxq to openshift-worker-0
70s         Normal    SuccessfulAttachVolume   pod/app-rwo-59b5d69d88-sfjxq    AttachVolume.Attach succeeded for volume "pvc-1c0fa0d1-39fa-4351-890c-6cf2c6cddb3c"
32s         Warning   FailedMount              pod/app-rwo-59b5d69d88-sfjxq    MountVolume.MountDevice failed for volume "pvc-1c0fa0d1-39fa-4351-890c-6cf2c6cddb3c" : rpc error: code = Internal desc = Failed to stage volume 06063ae4eb3ef8ece4000000000000000000000009, err: rpc error: code = Internal desc = Error creating device for volume 06063ae4eb3ef8ece4000000000000000000000009, err: device not found with serial 2fe012418a542a736c9ce9000f08c238 or target
<omit>
  • The Pod was scheduled on openshift-worker-0.
$ oc get pods -o wide
NAME                       READY   STATUS              RESTARTS   AGE     IP       NODE                 NOMINATED NODE   READINESS GATES
app-rwo-59b5d69d88-sfjxq   0/1     ContainerCreating   0          9m25s   <none>   openshift-worker-0   <none>           <none>
  • openshift-worker-0's WWPNs were 0x10008e5407d00024 and 0x10008e5407d00026.
[root@openshift-worker-0 ~]# cat /sys/class/fc_host/host*/port_name
0x10008e5407d00024
0x10008e5407d00026
  • The Initiator Group on the Nimble was created automatically. The configuration is below. The WWPNs in the Initiator Group weren't openshift-worker-0's (and neither was the name).
Nimble OS $ initiatorgrp --list
------------------------------+------------------------------+------------------
Initiator Group Name           Number of Initiators           Number of Subnets
------------------------------+------------------------------+------------------
container-node-openshift-worker-2-fc 2

Nimble OS $ initiatorgrp --info container-node-openshift-worker-1-fc
Name: container-node-openshift-worker-2-fc
Description: Created by com.nimblestorage.hi.nos.group.InitiatorGroupService
Host Type: auto
Access Protocol: fc
Application identifier: 0a58a9fe-0001-6f70-656e-73686966742dfc
Created: Jan  7 2021 17:54:49
Last configuration change: Jan  7 2021 17:54:50
Number of TDZ Ports: 4
        Interface Name: fc1c.1  Array: AF-222768
        Interface Name: fc1a.1  Array: AF-222768
        Interface Name: fc1d.1  Array: AF-222768
        Interface Name: fc1b.1  Array: AF-222768
Number of Initiators: 2
        Initiator: bay11_p1 (10:00:8e:54:07:d0:00:28)
        Initiator: bay11_p2 (10:00:8e:54:07:d0:00:2a)
Number of Volumes: 1
        Volume Name: pvc-1c0fa0d1-39fa-4351-890c-6cf2c6cddb3c
                LUN: 0

Logs

  • container logs of hpe-csi-driver and nimble-csp.
$ oc get pods -n hpe-csi-driver -o wide
NAME                                      READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
hpe-csi-controller-597578dd9b-srs2h       7/7     Running   0          23m   10.10.2.19    openshift-worker-2   <none>           <none>
hpe-csi-driver-operator-dc56c6d9f-cmv5s   1/1     Running   0          23m   10.130.6.18   openshift-worker-0   <none>           <none>
hpe-csi-node-697gb                        2/2     Running   0          23m   10.10.2.18    openshift-worker-1   <none>           <none>
hpe-csi-node-tk9m4                        2/2     Running   0          23m   10.10.2.19    openshift-worker-2   <none>           <none>
hpe-csi-node-wcq9t                        2/2     Running   0          23m   10.10.2.17    openshift-worker-0   <none>           <none>
nimble-csp-84845b579b-9lt5j               1/1     Running   0          23m   10.129.6.7    openshift-worker-2   <none>           <none>
primera3par-csp-6d87785fcb-bsmfd          1/1     Running   0          23m   10.130.6.19   openshift-worker-0   <none>           <none>

hpe-csi-controller-597578dd9b-srs2h_hpe-csi-driver.log
hpe-csi-node-wcq9t_hpe-csi-driver.log
nimble-csp-84845b579b-9lt5j .log
hpe-csi-controller-597578dd9b-srs2h_csi-attacher.log
hpe-csi-controller-597578dd9b-srs2h_csi-provisioner.log

Configurations

  • HPECSIDriver and StorageClass manifest

storageclass.yaml.txt
HPECSIDriver.yaml.txt

  • PVC and Deployment manifest

pvc.yaml.txt
deployment.yaml.txt

Using G as a storage unit provisions too small a volume

We created a PVC with size 20G, and once provisioned we got the following error:

failed to provision volume with StorageClass "standard-nim01": created volume capacity 19999490048 less than requested capacity 20000000000

Once we changed the unit to 20Gi it worked as expected
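
For what it's worth, G is a decimal (SI) unit while Gi is binary: 20G is 20,000,000,000 bytes whereas 20Gi is 21,474,836,480 bytes. The created capacity in the error above (19,999,490,048 bytes, i.e. 19,073 MiB) sits just below the decimal request, which the provisioner then rejects. A PVC sketch using binary units (the PVC name is a placeholder; the StorageClass name is taken from the error above):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi   # binary units; 20G (decimal) produced the capacity mismatch above
  storageClassName: standard-nim01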

It would be good to display the driver / container versions in the logs when they are initialized.

As of now the csi-node, csi-controller and CSP pods' log files do not show the version they are running.
It would be really helpful to display their respective versions in the logs when they are initialized (started).
Example of the current logging:
time="2021-03-19T19:50:48Z" level=info msg="Initialized logging." alsoLogToStderr=true logFileLocation=/var/log/hpe-csi-node.log logLevel=trace
time="2021-03-19T19:50:48Z" level=info msg="" file="csi-driver.go:54"
time="2021-03-19T19:50:48Z" level=info msg="
HPE CSI DRIVER " file="csi-driver.go:55"
time="2021-03-19T19:50:48Z" level=info msg="
" file="csi-driver.go:56"

NFS Server Provisioner - Volume Expansion

Hi

I wonder how to extend an RWX/ROX PVC.
I have no problem extending a standard RWO PVC.

I changed the storage request from 10GB to 12GB and applied the PVC YAML.

I got the following errors in csi-resizer_hpe-csi-controller:

I1105 14:12:14.576894 1 controller.go:224] Started PVC processing "default/my-rwx-pvc"
I1105 14:12:14.589511 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"my-rwx-pvc", UID:"8891b8c1-9220-458a-90d6-1016f226eb23", APIVersion:"v1", ResourceVersion:"520547845", FieldPath:""}): type: 'Normal' reason: 'Resizing' External resizer is resizing volume pvc-8891b8c1-9220-458a-90d6-1016f226eb23
I1105 14:12:14.591935 1 connection.go:182] GRPC call: /csi.v1.Controller/ControllerExpandVolume
I1105 14:12:14.591947 1 connection.go:183] GRPC request: {"capacity_range":{"required_bytes":4294967296},"secrets":"stripped","volume_id":"8891b8c1-9220-458a-90d6-1016f226eb23"}
I1105 14:12:14.612596 1 connection.go:185] GRPC response: {}
I1105 14:12:14.613006 1 connection.go:186] GRPC error: rpc error: code = NotFound desc = Volume with ID 8891b8c1-9220-458a-90d6-1016f226eb23 not found
E1105 14:12:14.613048 1 controller.go:360] Resize volume "pvc-8891b8c1-9220-458a-90d6-1016f226eb23" by resizer "csi.hpe.com" failed: rpc error: code = NotFound desc = Volume with ID 8891b8c1-9220-458a-90d6-1016f226eb23 not found
I1105 14:12:14.613155 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"my-rwx-pvc", UID:"8891b8c1-9220-458a-90d6-1016f226eb23", APIVersion:"v1", ResourceVersion:"520547845", FieldPath:""}): type: 'Warning' reason: 'VolumeResizeFailed' resize volume pvc-8891b8c1-9220-458a-90d6-1016f226eb23 failed: rpc error: code = NotFound desc = Volume with ID 8891b8c1-9220-458a-90d6-1016f226eb23 not found
I1105 14:12:14.618283 1 controller.go:224] Started PVC processing "default/my-rwx-pvc"
I1105 14:12:14.630654 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"my-rwx-pvc", UID:"8891b8c1-9220-458a-90d6-1016f226eb23", APIVersion:"v1", ResourceVersion:"520547845", FieldPath:""}): type: 'Normal' reason: 'Resizing' External resizer is resizing volume pvc-8891b8c1-9220-458a-90d6-1016f226eb23
I1105 14:12:14.633395 1 connection.go:182] GRPC call: /csi.v1.Controller/ControllerExpandVolume
I1105 14:12:14.633411 1 connection.go:183] GRPC request: {"capacity_range":{"required_bytes":4294967296},"secrets":"stripped","volume_id":"8891b8c1-9220-458a-90d6-1016f226eb23"}
I1105 14:12:14.644676 1 connection.go:185] GRPC response: {}
I1105 14:12:14.645060 1 connection.go:186] GRPC error: rpc error: code = NotFound desc = Volume with ID 8891b8c1-9220-458a-90d6-1016f226eb23 not found
E1105 14:12:14.645072 1 controller.go:360] Resize volume "pvc-8891b8c1-9220-458a-90d6-1016f226eb23" by resizer "csi.hpe.com" failed: rpc error: code = NotFound desc = Volume with ID 8891b8c1-9220-458a-90d6-1016f226eb23 not found
I1105 14:12:14.645437 1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"my-rwx-pvc", UID:"8891b8c1-9220-458a-90d6-1016f226eb23", APIVersion:"v1", ResourceVersion:"520547845", FieldPath:""}): type: 'Warning' reason: 'VolumeResizeFailed' resize volume pvc-8891b8c1-9220-458a-90d6-1016f226eb23 failed: rpc error: code = NotFound desc = Volume with ID 8891b8c1-9220-458a-90d6-1016f226eb23 not found

/Nicola

Specifying volume parameters at the PVC

I would like to specify volume parameters when creating a PVC, rather than having an array of different StorageClasses. It's not clear from the docs whether this is possible.
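
For reference, SCOD describes an allowOverrides StorageClass parameter together with csi.hpe.com/<parameter> annotations on the PVC, which appears to be aimed at exactly this. Assuming that feature applies here, a sketch could look like the following (parameter names and values are only examples, and the usual secret parameters are omitted for brevity):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
provisioner: csi.hpe.com
parameters:
  description: Volume created by the HPE CSI Driver for Kubernetes
  allowOverrides: description,limitIops   # parameters a PVC is allowed to override (assumption)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  annotations:
    csi.hpe.com/description: "per-claim description"   # overrides the StorageClass default
    csi.hpe.com/limitIops: "8000"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hpe-standard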

OpenShift - error validating existing CRs against new CRD's schema

hello,

We had HPE CSI Driver 2.1.0 installed and an update to 2.2.0 was showing in the Installed Operators page. After clicking update, I am seeing this error:

"error validating existing CRs against new CRD's schema for "hpecsidrivers.storage.hpe.com": error validating custom resource against new schema for HPECSIDriver hpe-csi-driver/csi-driver: [[].spec.csp: Required value, [].spec.controller: Required value, [].spec.node: Required value]"

Screen Shot 2022-08-01 at 4 24 08 PM

Screen Shot 2022-08-01 at 4 24 15 PM

Unbound PVCs on OpenShift v4.6.58 / v2.1.3

After upgrading the driver to version v2.1.3 on OpenShift v4.6.58 it is no longer possible to bind PVCs created from VolumeSnapshots:

E0624 08:53:57.887054 1 controller.go:957] error syncing claim "1038e006-977b-44f5-9be9-7930bd1ce984": failed to provision volume with StorageClass "block": error getting handle for DataSource Type VolumeSnapshot by Name backup.202206240651: error getting snapshot backup.202206240651 from api server: the server could not find the requested resource (get volumesnapshots.snapshot.storage.k8s.io backup.202206240651)
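
For context, a PVC restoring from a VolumeSnapshot (the pattern that now fails) typically looks like the sketch below; the StorageClass and snapshot names are taken from the error above, while the PVC name and size are placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-pvc
spec:
  storageClassName: block
  dataSource:
    name: backup.202206240651          # the VolumeSnapshot referenced in the error
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi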

failed to mount a PVC on Nimble

In a Rancher 2.5.2 environment I created a new cluster and installed and configured the CSI driver per the instructions, using version 1.3.0 of the app (version 1.3.1 failed with a YAML error).

Nimble OS 5.2.1.400

This is my StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: hpe-standard
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/controller-expand-secret-name: hpe-nimble-wec-prod
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-csi-driver
  csi.storage.k8s.io/controller-publish-secret-name: hpe-nimble-wec-prod
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-csi-driver
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/node-publish-secret-name: hpe-nimble-wec-prod
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-csi-driver
  csi.storage.k8s.io/node-stage-secret-name: hpe-nimble-wec-prod
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-csi-driver
  csi.storage.k8s.io/provisioner-secret-name: hpe-nimble-wec-prod
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-csi-driver
  description: Volume created by the HPE CSI Driver for Kubernetes
  destroyOnDelete: "true"
  folder: rancher-prod
  performancePolicy: "DockerDefault"
reclaimPolicy: Delete
allowVolumeExpansion: true

To test I created a PVC using the following:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-test-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: hpe-standard

and a Pod using:

kind: Pod
apiVersion: v1
metadata:
  name: my-pod1
  namespace: default
spec:
  containers:
    - name: pod-datelog-2
      image: debian
      command: ["bin/sh"]
      args: ["-c", "while true; do date >> /data/mydata.txt; sleep 1; done"]
      volumeMounts:
        - name: export1
          mountPath: /data
  volumes:
    - name: export1
      persistentVolumeClaim:
        claimName: my-test-pvc

all basically taken from the SCOD install instructions.

The POD initially reported the following as an event:

MountVolume.MountDevice failed for volume "pvc-2f0b2279-72a9-4c6a-bfd8-fd5fc38ca230" : rpc error: code = Unavailable desc = Failed to get storage provider from secrets, Post http://nimble-csp-svc:8080/containers/v1/tokens: dial tcp: i/o timeout

which I overcame by using the ClusterIP in the secret as per #185 instead of serviceName

Now I get:

MountVolume.MountDevice failed for volume "pvc-2f0b2279-72a9-4c6a-bfd8-fd5fc38ca230" : rpc error: code = Internal desc = Failed to stage volume 0636cf84f4aa9b1ea5000000000000000000000051, err: rpc error: code = Internal desc = Error creating device for volume 0636cf84f4aa9b1ea5000000000000000000000051, err: device not found with serial edc309352b5720cc6c9ce900519a549c or target

even though the volume does exist on the Nimble.

Logs from the nimble-csp are attached, as are the details from "my-pod1" and the PVC:

nimble-csp.txt
pod1.txt
volumes.txt

hpe-csi-controller cannot resolve nimble-csp-svc

Hello, I just deployed the Nimble CSI Driver 1.1.0 from the Helm chart to my Kubernetes cluster and I am getting this error in the hpe-csi-driver containers:

time="2020-04-02T18:15:51Z" level=info msg="Adding connection to CSP at IP 192.168.x.x, port 8080, context path , with username admin and serviceName nimble-csp-svc" file="driver.go:245"
time="2020-04-02T18:15:51Z" level=info msg="About to attempt login to CSP for backend 192.168.x.x" file="container_storage_provider.go:114"
time="2020-04-02T18:16:21Z" level=error msg="Failed to login to CSP.  Status code: 0.  Error: Post http://nimble-csp-svc:8080/containers/v1/tokens: dial tcp: i/o timeout" file="container_storage_provider.go:88"
time="2020-04-02T18:16:21Z" level=error msg="Failed to create new CSP connection from given parameters" file="driver.go:251"
time="2020-04-02T18:16:21Z" level=error msg="err: Post http://nimble-csp-svc:8080/containers/v1/tokens: dial tcp: i/o timeout" file="controller_server.go:371"

I have censored the IP in the log. I am using a valid IP.

When I try to ping any service in my cluster, I have a name resolution problem:

chroot: failed to run command '/host/bin/ping': No such file or directory
bash-4.2# chroot /host /usr/bin/env -i PATH="/sbin:/bin:/usr/bin:/usr/sbin" ping nimble-csp-svc
ping: nimble-csp-svc: Temporary failure in name resolution
bash-4.2# chroot /host /usr/bin/env -i PATH="/sbin:/bin:/usr/bin:/usr/sbin" ping kube-dns      
ping: kube-dns: Temporary failure in name resolution
bash-4.2# chroot /host /usr/bin/env -i PATH="/sbin:/bin:/usr/bin:/usr/sbin" ping nginx   
ping: nginx: Temporary failure in name resolution

When I try it from another pod in another deployment, it resolves without a problem.

I tried to edit the deployment with this change:

      dnsConfig:
        options:
        - name: ndots
          value: "5" ----> was 1

because this was the config on my other deployments, but I still have the error.

Volumeattachment stays on crashed node

If node1 crashes, the volumeattachment will stay there even though the pods from node1 try to start on node2.
The pods will then get a "Multi-Attach error for volume" when coming up.
The only way to fix this is to delete the volumeattachment by hand.

Expected behavior is to be in line with the in-tree iSCSI provider, which leaves multi-attach issues to be handled by the Nimble. That way, when it comes up it will be allowed to log in to the volume once the old session has timed out.

Please add instruction to change the default StorageClass optionally

Especially for those who are not familiar with system-side YAML configs, like app developers or infra people who prefer managed k8s clusters, this CSI driver is one of the components they do not really care about even though it is necessary.
So the cluster operators may have to execute kubectl patch storageclass default -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' when they don't want developers to use the existing default StorageClass and instead want them to use the HPE storage for PVs.

I think it would be very important to mention this in the documentation.

Ref. https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/
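
As a sketch of the other half of that change (assuming the HPE StorageClass should become the new default), the annotation can be set directly in the StorageClass manifest, the same way the SCOD examples do; secrets and other parameters are omitted for brevity:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-standard
  annotations:
    # mark this StorageClass as the cluster default, after un-marking the previous one as above
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.hpe.com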

Update building instructions

Hi!

Please, update building instructions:

make: [Makefile:70: clean] Error 1 (ignored)
Running lint
go version go1.15.15 linux/arm64
export PATH=$PATH:/root/go/bin && golangci-lint run --disable-all --enable=vet --enable=vetshadow --enable=golint --enable=ineffassign --enable=goconst --enable=deadcode --enable=dupl --enable=varcheck --enable=gocyclo --enable=misspell --deadline=240s --exclude vendor
WARN [runner] The linter 'varcheck' is deprecated (since v1.49.0) due to: The owner seems to have abandoned the linter. Replaced by unused.
WARN [runner] The linter 'golint' is deprecated (since v1.41.0) due to: The repository of the linter has been archived by the owner. Replaced by revive.
WARN [runner] The linter 'deadcode' is deprecated (since v1.49.0) due to: The owner seems to have abandoned the linter. Replaced by unused.
WARN [runner] Can't run linter goanalysis_metalinter: inspect: failed to load package csi: could not load export data: no export data for "github.com/container-storage-interface/spec/lib/go/csi"
ERRO Running error: 1 error occurred:
	* can't run linter goanalysis_metalinter: inspect: failed to load package csi: could not load export data: no export data for "github.com/container-storage-interface/spec/lib/go/csi"

make: *** [Makefile:63: lint] Error 3

Tried to build driver on

root@arm1:~/csi-driver# go version
go version go1.15.15 linux/arm64
root@arm1:~/csi-driver# $GOPATH/bin/golangci-lint --version
golangci-lint has version 1.52.2 built with go1.20.2 from da04413a on 2023-03-25T18:11:28Z

Also, can you please explain why you predefined the architecture here: https://github.com/hpe-storage/csi-driver/blob/master/Makefile#L8

Are there any problems building for ARM arch?

Failed to pull a container image on quay.io

OpenShift 4.6
HPE CSI Driver Operator v1.4.0 (via OperatorHub)

Pulling the hpestorage container images specified by HPE CSI Driver Operator v1.4.0 failed.

  • Operator version
$ oc get csv
NAME                                           DISPLAY                           VERSION                 REPLACES   PHASE
hpe-csi-operator.v1.4.0                        HPE CSI Operator for Kubernetes   1.4.0                              Succeeded
  • HPECSIDriver.yaml
$ cat HPECSIDriver.yaml
apiVersion: storage.hpe.com/v1
kind: HPECSIDriver
metadata:
  name: csi-driver
  namespace: hpe-csi-driver
spec:
  disableNodeConformance: false
  imagePullPolicy: IfNotPresent
  iscsi:
    chapPassword: ""
    chapUser: ""
  logLevel: info
  registry: quey.io
  • Pods status that was created by operator
$ oc get pods
NAME                                       READY   STATUS             RESTARTS   AGE
hpe-csi-controller-7c954d99c4-p2qs5        4/9     ImagePullBackOff   0          109m
hpe-csi-driver-operator-84cb7b8995-sqwf4   1/1     Running            0          124m
hpe-csi-node-6rhmz                         1/2     ImagePullBackOff   0          109m
hpe-csi-node-grrp8                         1/2     ImagePullBackOff   0          109m
hpe-csi-node-k468c                         1/2     ImagePullBackOff   0          109m
nimble-csp-5d56c9d4b6-b9rx5                0/1     ErrImagePull       0          109m
primera3par-csp-9ccf675b6-56gcs            0/1     ErrImagePull       0          109m
  • kubernetes Events on the project for hpe csi driver
$ oc get event
LAST SEEN   TYPE      REASON                OBJECT                                          MESSAGE
114m        Normal    Scheduled             pod/hpe-csi-controller-7c954d99c4-p2qs5         Successfully assigned hpe-csi-driver/hpe-csi-controller-7c954d99c4-p2qs5 to openshift-worker-0
114m        Normal    Pulling               pod/hpe-csi-controller-7c954d99c4-p2qs5         Pulling image "quay.io/k8scsi/csi-provisioner:v1.5.0"
114m        Normal    Pulled                pod/hpe-csi-controller-7c954d99c4-p2qs5         Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.5.0" in 6.307556687s
114m        Normal    Created               pod/hpe-csi-controller-7c954d99c4-p2qs5         Created container csi-provisioner
114m        Normal    Started               pod/hpe-csi-controller-7c954d99c4-p2qs5         Started container csi-provisioner
114m        Normal    Pulling               pod/hpe-csi-controller-7c954d99c4-p2qs5         Pulling image "quay.io/k8scsi/csi-attacher:v2.1.1"
114m        Normal    Pulled                pod/hpe-csi-controller-7c954d99c4-p2qs5         Successfully pulled image "quay.io/k8scsi/csi-attacher:v2.1.1" in 8.435693571s
114m        Normal    Created               pod/hpe-csi-controller-7c954d99c4-p2qs5         Created container csi-attacher
114m        Normal    Started               pod/hpe-csi-controller-7c954d99c4-p2qs5         Started container csi-attacher
114m        Normal    Pulling               pod/hpe-csi-controller-7c954d99c4-p2qs5         Pulling image "quay.io/k8scsi/csi-snapshotter:v3.0.2"
114m        Normal    Pulled                pod/hpe-csi-controller-7c954d99c4-p2qs5         Successfully pulled image "quay.io/k8scsi/csi-snapshotter:v3.0.2" in 10.22304632s
114m        Normal    Created               pod/hpe-csi-controller-7c954d99c4-p2qs5         Created container csi-snapshotter
114m        Normal    Started               pod/hpe-csi-controller-7c954d99c4-p2qs5         Started container csi-snapshotter
114m        Normal    Pulling               pod/hpe-csi-controller-7c954d99c4-p2qs5         Pulling image "quay.io/k8scsi/csi-resizer:v0.4.0"
114m        Normal    Pulled                pod/hpe-csi-controller-7c954d99c4-p2qs5         Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.4.0" in 8.341872818s
114m        Normal    Created               pod/hpe-csi-controller-7c954d99c4-p2qs5         Created container csi-resizer
114m        Normal    Started               pod/hpe-csi-controller-7c954d99c4-p2qs5         Started container csi-resizer
114m        Normal    Pulling               pod/hpe-csi-controller-7c954d99c4-p2qs5         Pulling image "quey.io/hpestorage/csi-driver:v1.4.0"
114m        Warning   Failed                pod/hpe-csi-controller-7c954d99c4-p2qs5         Failed to pull image "quey.io/hpestorage/csi-driver:v1.4.0": rpc error: code = Unknown desc = invalid character '<' looking for beginning of value
114m        Warning   Failed                pod/hpe-csi-controller-7c954d99c4-p2qs5         Error: ErrImagePull
114m        Normal    Pulling               pod/hpe-csi-controller-7c954d99c4-p2qs5         Pulling image "quey.io/hpestorage/volume-mutator:v1.2.0"
114m        Warning   Failed                pod/hpe-csi-controller-7c954d99c4-p2qs5         Failed to pull image "quey.io/hpestorage/volume-mutator:v1.2.0": rpc error: code = Unknown desc = invalid character '<' looking for beginning of value
114m        Warning   Failed                pod/hpe-csi-controller-7c954d99c4-p2qs5         Error: ErrImagePull
114m        Normal    Pulling               pod/hpe-csi-controller-7c954d99c4-p2qs5         Pulling image "quey.io/hpestorage/volume-group-snapshotter:v1.0.0"
114m        Warning   Failed                pod/hpe-csi-controller-7c954d99c4-p2qs5         Failed to pull image "quey.io/hpestorage/volume-group-snapshotter:v1.0.0": rpc error: code = Unknown desc = invalid character '<' looking for beginning of value
114m        Warning   Failed                pod/hpe-csi-controller-7c954d99c4-p2qs5         Error: ErrImagePull
94m         Normal    BackOff               pod/hpe-csi-controller-7c954d99c4-p2qs5         Back-off pulling image "quey.io/hpestorage/csi-driver:v1.4.0"
8m30s       Normal    Pulled                pod/hpe-csi-controller-7c954d99c4-p2qs5         Container image "quay.io/k8scsi/csi-provisioner:v1.5.0" already present on machine
8m30s       Warning   Failed                pod/hpe-csi-controller-7c954d99c4-p2qs5         Error: services have not yet been read at least once, cannot construct envvars
<ommit>

All events is bellow.
oc_get_event.txt

Also, the HPE CSI driver operator didn't have the correct authority and couldn't deploy some ClusterRoles.
hpe-csi-operator-sa needs the patch verb for the storage.hpe.com API.
I added it manually to the ClusterRole attached to the SA.

  • I could find the below messages in the operator container log
$ oc logs hpe-csi-driver-operator-84cb7b8995-sqwf4
<ommit>
I0117 13:35:40.958479       1 request.go:645] Throttling request took 1.048280091s, request: GET:https://172.30.0.1:443/apis/batch/v1beta1?timeout=32s
{"level":"error","ts":1610890547.0710552,"logger":"helm.controller","msg":"Release failed","namespace":"hpe-csi-driver","name":"csi-driver","apiVersion":"storage.hpe.com/v1","kind":"HPECSIDriver","release":"csi-driver","error":"failed to install release: clusterroles.rbac.authorization.k8s.io \"hpe-csi-driver-role\" is forbidden: user \"system:serviceaccount:hpe-csi-driver:hpe-csi-operator-sa\" (groups=[\"system:serviceaccounts\" \"system:serviceaccounts:hpe-csi-driver\" \"system:authenticated\"]) is attempting to grant RBAC permissions not currently held:\n{APIGroups:[\"storage.hpe.com\"], Resources:[\"hpesnapshotgroupinfos\"], Verbs:[\"patch\"]}\n{APIGroups:[\"storage.hpe.com\"], Resources:[\"hpevolumegroupinfos\"], Verbs:[\"patch\"]}","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/travis/gopath/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\ngithub.com/operator-framework/operator-sdk/internal/helm/controller.HelmOperatorReconciler.Reconcile\n\toperator-sdk/internal/helm/controller/reconcile.go:180\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/travis/gopath/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:263\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/travis/gopath/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1\n\t/home/travis/gopath/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:198\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99"}
{"level":"error","ts":1610890547.0751362,"logger":"controller-runtime.manager.controller.hpecsidriver-controller","msg":"Reconciler error","name":"csi-driver","namespace":"hpe-csi-driver","error":"failed to install release: clusterroles.rbac.authorization.k8s.io \"hpe-csi-driver-role\" is forbidden: user \"system:serviceaccount:hpe-csi-driver:hpe-csi-operator-sa\" (groups=[\"system:serviceaccounts\" \"system:serviceaccounts:hpe-csi-driver\" \"system:authenticated\"]) is attempting to grant RBAC permissions not currently held:\n{APIGroups:[\"storage.hpe.com\"], Resources:[\"hpesnapshotgroupinfos\"], Verbs:[\"patch\"]}\n{APIGroups:[\"storage.hpe.com\"], Resources:[\"hpevolumegroupinfos\"], Verbs:[\"patch\"]}","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/home/travis/gopath/pkg/mod/github.com/go-logr/[email protected]/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/home/travis/gopath/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:267\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/home/travis/gopath/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:235\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.1\n\t/home/travis/gopath/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:198\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\t/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\t/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\t/home/travis/gopath/pkg/mod/k8s.io/[email protected]/pkg/util/wait/wait.go:99"}
<ommit>
  • I updated clusterrole manually
$ oc get clusterrole hpe-csi-operator.v1.4.0-6d789d6f5c -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hpe-csi-operator.v1.4.0-6d789d6f5c
rules:
- apiGroups:
  - storage.hpe.com
  resources:
  - '*'
  verbs:
  - get
  - watch
  - list
  - delete
  - update
  - create
  - patch               <---------- I added this
- apiGroups:
  - ""
  resources:
  - namespaces
  - pods
  - services
  - endpoints
<ommit>

UUID conflict on BL460 system

Hello there,

While integrating the CSI driver with C7000 and BL460 systems, the CSI driver literally generates the same UUID for all the blades. See the following logs:

time="2021-05-12T12:42:43Z" level=trace msg=">>>>> LoadNodeInfo called with node &{ nlhen06st1rm039 0a58a9fe-0001-6e6c-6865-6e3036737431 [0xc0005bc3f0] [0xc00015c340 0xc00015c550 0xc00015c750 0xc00015c900] [0xc00000c780 0xc00000c790]  }" file="flavor.go:173"
time="2021-05-12T12:42:43Z" level=trace msg=">>>>> getNodeInfoByUUID with uuid 0a58a9fe-0001-6e6c-6865-6e3036737431" file="flavor.go:552"
time="2021-05-12T12:42:43Z" level=trace msg="<<<<< getNodeInfoByUUID" file="flavor.go:562"
time="2021-05-12T12:42:43Z" level=info msg="Node info nlhen06st1rm040 already known to cluster\n" file="flavor.go:199"
time="2021-05-12T12:42:43Z" level=info msg="updating Node nlhen06st1rm040 with iqns [iqn.1994-05.com.redhat:24534a2be9e4] wwpns [10000a5fd750001c 10000a5fd750001e] networks [10.128.0.2/23 169.254.0.1/20 10.59.65.234/26 10.59.65.204/32]" file="flavor.go:231"
time="2021-05-12T12:42:40Z" level=trace msg=">>>>> LoadNodeInfo called with node &{ nlhen06st1rm040 0a58a9fe-0001-6e6c-6865-6e3036737431 [0xc0003c8140] [0xc00083e600 0xc00083e7b0 0xc00083e970 0xc00083eb20] [0xc0001e5b60 0xc0001e5b70]  }" file="flavor.go:173"
time="2021-05-12T12:42:40Z" level=trace msg=">>>>> getNodeInfoByUUID with uuid 0a58a9fe-0001-6e6c-6865-6e3036737431" file="flavor.go:552"
time="2021-05-12T12:42:40Z" level=trace msg=">>>>> getNodeInfoByName with name nlhen06st1rm040" file="flavor.go:569"
time="2021-05-12T12:42:40Z" level=info msg="Adding node with name nlhen06st1rm040" file="flavor.go:254"
time="2021-05-12T12:42:40Z" level=info msg="Successfully added node info for node <RAW SNIPPED> [iqn.1994-05.com.redhat:ccd53eae81e] [10.130.0.2/23 169.254.0.1/20 10.59.65.235/26 10.59.65.205/32] [10000a5fd7500020 10000a5fd7500022]  }}" file="flavor.go:262"
time="2021-05-12T12:42:54Z" level=trace msg=">>>>> LoadNodeInfo called with node &{ nlhen06st1rm047 0a58a9fe-0001-6e6c-6865-6e3036737431 [0xc0006c8310] [0xc000518640 0xc0005187f0 0xc0005189b0] [0xc0005c3620 0xc0005c3630]  }" file="flavor.go:173"
time="2021-05-12T12:42:54Z" level=trace msg=">>>>> getNodeInfoByUUID with uuid 0a58a9fe-0001-6e6c-6865-6e3036737431" file="flavor.go:552"
time="2021-05-12T12:42:54Z" level=trace msg="<<<<< getNodeInfoByUUID" file="flavor.go:562"
time="2021-05-12T12:42:54Z" level=info msg="Node info nlhen06st1rm040 already known to cluster\n" file="flavor.go:199"
time="2021-05-12T12:42:54Z" level=info msg="updating Node nlhen06st1rm040 with iqns [iqn.1994-05.com.redhat:60bf19f2dd9] wwpns [10000a5fd7500024 10000a5fd7500026] networks [10.129.0.2/23 169.254.0.1/20 10.59.65.236/26]" file="flavor.go:231"

Because of this UUID conflict, the hpenodeinfo CRD is populated as follows:

  • only a single node reported
  • this node has the name of the first registered node
  • this node has the WWPNs of the last node that has overwritten the CRD
$ oc get hpenodeinfo
NAME              AGE
nlhen06st1rm039   69s
$ oc get hpenodeinfo nlhen06st1rm039 -o yaml
apiVersion: storage.hpe.com/v1
kind: HPENodeInfo
metadata:
  creationTimestamp: "2021-05-12T12:43:19Z"
  generation: 3
  managedFields:
  - apiVersion: storage.hpe.com/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        .: {}
        f:iqns: {}
        f:networks: {}
        f:uuid: {}
        f:wwpns: {}
    manager: csi-driver
    operation: Update
    time: "2021-05-12T12:43:19Z"
  name: nlhen06st1rm039
  resourceVersion: "1835075"
  selfLink: /apis/storage.hpe.com/v1/hpenodeinfos/nlhen06st1rm039
  uid: 5b312dbd-2932-4e6c-aaf8-01a338924eaf
spec:
  iqns:
  - iqn.1994-05.com.redhat:60bf19f2dd9
  networks:
  - 10.129.0.2/23
  - 169.254.0.1/20
  - 10.59.65.236/26
  uuid: 0a58a9fe-0001-6e6c-6865-6e3036737431
  wwpns:
  - 10000a5fd7500024
  - 10000a5fd7500026

Testing in a different environment with rackmounts (albeit not from HPE), the issue is not present.

Platform details

  • Both OCP 4.6 and 4.7 (we tested multiple 4.6 and 4.7 released like 4.6.20, 4.6.27, 4.7.5, 4.7.9, and many more and all have the same behavior)
  • HPE CSI Driver 1.4.1 (installed as Operator)
  • BL460c Gen10 with FlexFabric 650FLB effectively FCoE
  • Both C7000 and blades have the SPP 2020.09.0 with the RHEL 8.3 supplement

A couple of questions:

  • Who is ultimately in charge of generating the node's UUID? From reading nodeGetInfo, I couldn't figure out where host.UUID comes from
  • Do you happen to have a clue what is going wrong here?

Thanks!

disconnecting iscsi volume can break other volumes connections

If there is more than one volume on a node, there is quite a good chance that when one of them is disconnected (e.g. its pod is removed), it will disconnect the iSCSI sessions used by the other volumes. The result is disconnected / read-only / broken filesystems for the other pods running on the same node.

Here is the normal state - 4 iSCSI connections, used by all volumes:

root@m6c20:/home/dc145# iscsiadm -m session
tcp: [191] 10.37.120.5:3260,1024 iqn.2000-05.com.3pardata:20210002ac023192 (non-flash)
tcp: [192] 10.37.120.6:3260,1025 iqn.2000-05.com.3pardata:20220002ac023192 (non-flash)
tcp: [193] 10.37.120.7:3260,1026 iqn.2000-05.com.3pardata:21210002ac023192 (non-flash)
tcp: [194] 10.37.120.8:3260,1027 iqn.2000-05.com.3pardata:21220002ac023192 (non-flash)

root@m6c20:/home/dc145# multipath -l
mpathai (360002ac000000000000002a600023192) dm-17 3PARdata,VV
size=150G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 2:0:0:4 sde     8:64  active undef running
  |- 3:0:0:4 sdj     8:144 active undef running
  |- 4:0:0:4 sdo     8:224 active undef running
  `- 5:0:0:4 sdt     65:48 active undef running
mpathb (360002ac000000000000002a700023192) dm-14 3PARdata,VV
size=150G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 2:0:0:1 sdb     8:16  active undef running
  |- 3:0:0:1 sdg     8:96  active undef running
  |- 4:0:0:1 sdl     8:176 active undef running
  `- 5:0:0:1 sdq     65:0  active undef running
mpathab (360002ac000000000000002a300023192) dm-15 3PARdata,VV
size=150G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 2:0:0:2 sdc     8:32  active undef running
  |- 3:0:0:2 sdh     8:112 active undef running
  |- 4:0:0:2 sdm     8:192 active undef running
  `- 5:0:0:2 sdr     65:16 active undef running
mpathu (360002ac000000000000002d300023192) dm-13 3PARdata,VV
size=150G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 2:0:0:0 sda     8:0   active undef running
  |- 3:0:0:0 sdf     8:80  active undef running
  |- 4:0:0:0 sdk     8:160 active undef running
  `- 5:0:0:0 sdp     8:240 active undef running
mpaths (360002ac000000000000002a400023192) dm-16 3PARdata,VV
size=150G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=0 status=active
  |- 2:0:0:3 sdd     8:48  active undef running
  |- 3:0:0:3 sdi     8:128 active undef running
  |- 4:0:0:3 sdn     8:208 active undef running
  `- 5:0:0:3 sds     65:32 active undef running

And here is the state after one of the pods is killed. All iSCSI connections were closed and all volumes are read-only, possibly with broken filesystems and data loss:

root@m6c20:/home/dc145# iscsiadm -m session
iscsiadm: No active sessions.
root@m6c20:/home/dc145# multipath -l
mpathai (360002ac000000000000002a600023192) dm-17 ##,##
size=150G features='0' hwhandler='1 alua' wp=rw
mpathb (360002ac000000000000002a700023192) dm-14 ##,##
size=150G features='0' hwhandler='1 alua' wp=rw
mpathab (360002ac000000000000002a300023192) dm-15 ##,##
size=150G features='0' hwhandler='1 alua' wp=rw
mpaths (360002ac000000000000002a400023192) dm-16 ##,##
size=150G features='0' hwhandler='1 alua' wp=rw

root@m6c20:/home/dc145# mount | grep mpath | grep plugins/hpe
/dev/mapper/mpaths on /var/lib/kubelet/plugins/hpe.com/mounts/pvc-3a300bde-7f12-4141-a9c7-258d94dd4cf8 type ext4 (ro,relatime,stripe=4096)
/dev/mapper/mpathb on /var/lib/kubelet/plugins/hpe.com/mounts/pvc-879270e3-7554-4c78-9b0c-7ce462922078 type ext4 (ro,relatime,stripe=4096)
/dev/mapper/mpathai on /var/lib/kubelet/plugins/hpe.com/mounts/pvc-36f4b7a8-fc9b-455b-8476-441d287859ae type ext4 (ro,relatime,stripe=4096)
/dev/mapper/mpathab on /var/lib/kubelet/plugins/hpe.com/mounts/pvc-4ed3550f-dadc-4f9e-9660-4b9480e6570e type ext4 (ro,relatime,stripe=4096)

root@m6c20:/home/dc145# ls /var/lib/kubelet/plugins/hpe.com/mounts/pvc-3a300bde-7f12-4141-a9c7-258d94dd4cf8
ls: reading directory '/var/lib/kubelet/plugins/hpe.com/mounts/pvc-3a300bde-7f12-4141-a9c7-258d94dd4cf8': Input/output error

NFS provisioning fails when PVC was created by an operator

When a PVC is created by an operator (and thus has a metadata.ownerReferences field) with a StorageClass that has nfsResources set to "true", provisioning of the NFS volume fails. This results in the original PVC staying in the 'Pending' state.

This only happens when nfsResources is set to "true".

StorageClass:

---
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    environment: int
  name: hpe-standard
parameters:
  csi.storage.k8s.io/controller-expand-secret-name: nimble-storage
  csi.storage.k8s.io/controller-expand-secret-namespace: nimble-storage
  csi.storage.k8s.io/controller-publish-secret-name: nimble-storage
  csi.storage.k8s.io/controller-publish-secret-namespace: nimble-storage
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/node-publish-secret-name: nimble-storage
  csi.storage.k8s.io/node-publish-secret-namespace: nimble-storage
  csi.storage.k8s.io/node-stage-secret-name: nimble-storage
  csi.storage.k8s.io/node-stage-secret-namespace: nimble-storage
  csi.storage.k8s.io/provisioner-secret-name: nimble-storage
  csi.storage.k8s.io/provisioner-secret-namespace: nimble-storage
  description: Volume created by the HPE CSI Driver for Kubernetes
  destroyOnDelete: "true"
  nfsResources: "true"
provisioner: csi.hpe.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

PersistentVolumeClaim:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: hpe-standard
    volume.beta.kubernetes.io/storage-provisioner: csi.hpe.com
  creationTimestamp: "2021-12-20T13:07:16Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: my-claim-2
  namespace: default
  ownerReferences:
  - apiVersion: zalando.org/v1
    blockOwnerDeletion: true
    controller: true
    kind: EphemeralVolumeClaim
    name: my-claim-2
    uid: 1272133c-546b-4620-8baf-f1113002ff18
  resourceVersion: "11046667"
  uid: 9d20d560-50ef-45c6-bc21-0f915a22c4df
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1G
  volumeMode: Filesystem
status:
  phase: Pending

csi-provisioner.log
csi-driver.log

Handle unpublish on a volume that has already been unpublished

If unpublish is called on a volume that has already been unpublished, the volumeattachment finalizer hangs instead of completing. This case can happen during a hard reboot of a node.

Error message

    message: 'rpc error: code = Aborted desc = Failed to remove ACL via CSP, err:
      status code was 404 Not Found for request: action=PUT path=http://nimble-csp-svc:8080/containers/v1/volumes/0671136b496958f3f500000000000000000000009b/actions/unpublish'

Logs from nimble csp

14:33:50 INFO [co.ni.hi.cs.v1.ob.VolumeObjectSet]] (executor-thread-168) Executing VolumeObjectSet::actions/unpublish with id 0671136b496958f3f5000000000000000000000092 and options UnpublishOptions [hostUuid=02428603-fe12-7477-2d61-70702d737276]
{"data":{"forceAdvancedCriteria":false,"_constructor":"AdvancedCriteria","operator":"and","criteria":[{"forceAdvancedCriteria":false,"fieldName":"app_uuid","operator":"contains","value":"02428603-fe12-7477-2d61-70702d737276"}]},"operationType":"fetch"}
14:33:50 WARN [co.ni.hi.cs.se.er.WebApplicationExceptionMapper]] (executor-thread-168) NotFound: Could not find iGroup for host with id 02428603-fe12-7477-2d61-70702d737276


The right thing to do is probably to just return success if it is already done, or to continue if it is only partially done (say, the Nimble CSP crashes halfway through an unpublish).

Handle empty login token

Currently, if the csi-driver fails a login it will send an empty token to the CSP, which in turn errors out. The csi-driver should probably try to log in again until it has a valid token, and if it can't log in, show an appropriate error message.
