
ibm-powervs-block-csi-driver's Introduction

IBM PowerVS Block CSI Driver

CSI Driver for IBM® Power Systems™ Virtual Servers

Overview

The IBM Power Systems Virtual Server (PowerVS) Container Storage Interface (CSI) Driver provides a CSI interface used by container orchestrators to manage the lifecycle of PowerVS volumes.

CSI Specification Compatibility Matrix

PowerVS CSI Driver   Kubernetes   CSI     Golang
main                 1.29         1.9.0   1.21
0.6.0                1.29         1.9.0   1.21
0.5.0                1.28         1.8.0   1.20
0.4.0                1.26         1.7.0   1.19
0.3.0                1.25         1.6.0   1.19
0.2.0                1.24         1.5.0   1.18
0.1.0                1.23         1.5.0   1.17

Features

The following CSI gRPC calls are implemented:

  • Controller Service: CreateVolume, DeleteVolume, ControllerPublishVolume, ControllerUnpublishVolume, ControllerGetCapabilities, ValidateVolumeCapabilities
  • Node Service: NodeStageVolume, NodeUnstageVolume, NodePublishVolume, NodeUnpublishVolume, NodeGetCapabilities, NodeGetInfo
  • Identity Service: GetPluginInfo, GetPluginCapabilities

CreateVolume Parameters

Several optional parameters can be passed in the CreateVolumeRequest.parameters map. These parameters can be configured in a StorageClass; see the example after the table:

Parameter                     Values                  Default   Description
"type"                        tier1, tier3            tier1     PowerVS disk type that will be created during volume creation
"csi.storage.k8s.io/fstype"   xfs, ext2, ext3, ext4   ext4      File system type that will be formatted during volume creation. This parameter is case sensitive!

Driver Options

A couple of driver options can be passed as arguments when starting the driver container.

Option                Sample value             Default                                            Description
endpoint              tcp://127.0.0.1:10000/   unix:///var/lib/csi/sockets/pluginproxy/csi.sock   The socket on which the driver listens for CSI gRPC requests.
volume-attach-limit   1, 2, 3 ...              -1                                                 Maximum number of volumes attachable per node. If specified, the limit applies to all nodes. If not specified, the value is approximated from the instance type.
debug                 true                     false                                              If true, the driver enables the debug log level.
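
As an illustration, the driver container's command line might look like the following; this is a sketch only, the binary name is an assumption, and the flags come from the table above:

ibm-powervs-block-csi-driver \
  --endpoint=unix:///var/lib/csi/sockets/pluginproxy/csi.sock \
  --volume-attach-limit=10 \
  --debug=true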

IBM PowerVS Block CSI Driver on Kubernetes

The following sections are Kubernetes specific. If you are a Kubernetes user, use the following for driver features, installation steps, and examples.

Features

  • Static Provisioning - create a new PowerVS volume or migrate an existing one, then create a persistent volume (PV) from the PowerVS volume and consume the PV from a container using a persistent volume claim (PVC). See the sketch after this list.
  • Dynamic Provisioning - uses a persistent volume claim (PVC) to request that Kubernetes create the PowerVS volume on behalf of the user; the volume is consumed from inside the container.
  • Mount Option - mount options can be specified in the persistent volume (PV) to define how the volume should be mounted.
  • Volume Resizing - expand the volume size. The corresponding CSI feature (ExpandCSIVolumes) is beta since Kubernetes 1.16.
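
For static provisioning, a pre-created PowerVS volume can be wrapped in a PV along these lines; this is a minimal sketch that assumes the PowerVS volume ID is used as the volumeHandle (fill in the placeholder):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: powervs-static-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: powervs.csi.ibm.com
    volumeHandle: <existing-powervs-volume-id>
    fsType: ext4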

Prerequisites

  • If you are managing PowerVS volumes using static provisioning, familiarize yourself with Power Systems Virtual Servers.
  • Familiarize yourself with how to set up Kubernetes on IBM Cloud and have a working Kubernetes cluster:
    • Enable the flag --allow-privileged=true for kubelet and kube-apiserver
    • Enable kube-apiserver feature gates --feature-gates=CSINodeInfo=true,CSIDriverRegistry=true,CSIBlockVolume=true
    • Enable kubelet feature gates --feature-gates=CSINodeInfo=true,CSIDriverRegistry=true,CSIBlockVolume=true

Installation

Set up driver permission

  • Using a secret object - Generate an IBMCLOUD_API_KEY from the IBM Cloud UI, put the credentials in the secret manifest, then deploy the secret:
curl https://raw.githubusercontent.com/kubernetes-sigs/ibm-powervs-block-csi-driver/main/deploy/kubernetes/secret.yaml > secret.yaml
# Edit the IBMCLOUD_API_KEY
# Edit the secret with user credentials
kubectl apply -f secret.yaml

Deploy driver

Please see the compatibility matrix above before you deploy the driver.

To deploy the CSI driver:

# Replace v0.3.0 with the release that matches your cluster (see the compatibility matrix above)
kubectl apply -k "https://github.com/kubernetes-sigs/ibm-powervs-block-csi-driver/deploy/kubernetes/overlays/stable/?ref=v0.3.0"

Verify the driver is running:

kubectl get pods -n kube-system

Deploy driver with debug mode

To view driver debug logs, run the CSI driver with the -v=5 command line option.

To enable PowerVS debug logs, run the CSI driver with the --debug=true command line option.

Examples

Make sure you follow the Prerequisites above before running the examples. A minimal dynamic provisioning sketch follows:
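
The sketch below requests a dynamically provisioned volume through a PVC and consumes it from a pod; it assumes a StorageClass named powervs-sc like the one shown in the CreateVolume Parameters section (all names are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: powervs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: powervs-sc
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: powervs-app
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: powervs-claim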

Development

Please go through the CSI spec and general CSI driver development guidelines to gain a basic understanding of CSI drivers before you start.

Requirements

  • Golang 1.17+
  • Ginkgo in your PATH for integration testing and end-to-end testing
  • Docker 20.10+ for releasing

Testing

  • To build the image, run: make image
  • To push the image, run: make push

Running on PowerVS Staging

To test the driver against the staging IBM Cloud PowerVS environment, set the following environment variables:

export IBMCLOUD_IAM_API_ENDPOINT=https://iam.test.cloud.ibm.com
export IBMCLOUD_RESOURCE_CONTROLLER_ENDPOINT=https://resource-controller.test.cloud.ibm.com
export IBMCLOUD_POWER_API_ENDPOINT=https://dal.power-iaas.test.cloud.ibm.com # Replace 'dal' with specific region of your test workspace.

ibm-powervs-block-csi-driver's People

Contributors

cpanato, dependabot[bot], dharaneeshvrd, k8s-ci-robot, kishen-v, mkumatag, mrbobbytables, ppc64le-cloud-bot, prajyot-parab, prb112, rajalakshmi-girish, rcmadhankumar, xmudrii, yussufsh


ibm-powervs-block-csi-driver's Issues

NodeStageVolume: request.PublishContext doesn't contain the wwn key, or its value is empty

Error snippet:

func (d *nodeService) NodeStageVolume(ctx context.Context, req *csi.NodeStageVolumeRequest) (*csi.NodeStageVolumeResponse, error) {
	....
	....

	wwn, ok := req.PublishContext[WWNKey]
	if !ok || wwn == "" {
		return nil, status.Error(codes.InvalidArgument, "WWN ID is not provided or empty")
	}
	...
	...
}

Error logs:

GRPC error: rpc error: code = InvalidArgument desc = WWN ID is not provided or empty

Get device path fails with error `no fc disk found`

Error snippet:

source, err := d.mounter.GetDevicePath(wwn)
if err != nil {
	return nil, status.Errorf(codes.Internal, "Failed to find device path %s. %v", wwn, err)
}

Error:

GRPC error: rpc error: code = Internal desc = Failed to find device path 6005076810830198a00000000000127b. no fc disk found
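
For context, GetDevicePath-style helpers typically resolve the device through udev by-id symlinks keyed by the WWN; the sketch below shows that lookup under common symlink naming conventions, which may differ from this driver's exact code.

package driver

import (
	"fmt"
	"path/filepath"
)

// findDeviceByWWN resolves a block device from its WWN via udev by-id
// symlinks; an error like "no fc disk found" means neither link resolved.
func findDeviceByWWN(wwn string) (string, error) {
	candidates := []string{
		"/dev/disk/by-id/dm-uuid-mpath-3" + wwn, // multipath device
		"/dev/disk/by-id/wwn-0x" + wwn,          // single-path device
	}
	for _, link := range candidates {
		if dev, err := filepath.EvalSymlinks(link); err == nil {
			return dev, nil // e.g. /dev/dm-11
		}
	}
	return "", fmt.Errorf("no fc disk found for wwn %s", wwn)
}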

Add Unit tests

Unit tests need to be added for every module in the PowerVS CSI driver; a sketch of the usual table-driven style follows.
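
As a starting point, Go's table-driven style fits these modules; here is a minimal sketch against a hypothetical helper (parseVolumeType is illustrative, not an existing function in the driver).

package driver

import "testing"

// parseVolumeType is a stand-in for a small function under test.
func parseVolumeType(s string) (string, bool) {
	switch s {
	case "tier1", "tier3":
		return s, true
	}
	return "", false
}

func TestParseVolumeType(t *testing.T) {
	cases := []struct {
		in   string
		want string
		ok   bool
	}{
		{"tier1", "tier1", true},
		{"tier3", "tier3", true},
		{"bogus", "", false},
	}
	for _, c := range cases {
		if got, ok := parseVolumeType(c.in); got != c.want || ok != c.ok {
			t.Errorf("parseVolumeType(%q) = (%q, %v), want (%q, %v)", c.in, got, ok, c.want, c.ok)
		}
	}
}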

Add wait loop to verify the disk creation

Currently no wait loop exists in the cloud package to verify the disk state after the disk creation request is issued.
CreateDisk should return a positive result only once the disk is available to the user.

Current implementation (the CreateVolume path that calls into pkg/cloud/powervs.go):

func (d *controllerService) CreateVolume(ctx context.Context, req *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
	....
        ....

	opts := &cloud.DiskOptions{
		Shareable:     false,
		CapacityBytes: volSizeBytes,
		VolumeType:    volumeType,
	}

	disk, err := d.cloud.CreateDisk(volName, opts)
	if err != nil {
		return nil, status.Errorf(codes.Internal, "Could not create volume %q: %v", volName, err)
	}
	return newCreateVolumeResponse(disk), nil
}

proposed approach reference: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/pkg/cloud/cloud.go#L378
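
Below is a minimal sketch of such a wait loop in the style of that reference, assuming a lookup call and a disk state field; getDiskByID and the "available" state string are illustrative, not the driver's actual API.

package cloud

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// disk and diskGetter are hypothetical stand-ins for the cloud layer.
type disk struct{ State string }

type diskGetter interface {
	getDiskByID(volumeID string) (*disk, error)
}

// waitForDiskAvailable polls until the newly created disk is usable, so
// CreateDisk can return only after the volume reaches a stable state.
func waitForDiskAvailable(c diskGetter, volumeID string) error {
	backoff := wait.Backoff{
		Duration: 3 * time.Second, // initial poll interval
		Factor:   1.5,             // grow the interval on each retry
		Steps:    10,              // give up after 10 polls
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		d, err := c.getDiskByID(volumeID)
		if err != nil {
			return false, err // abort on lookup errors
		}
		return d.State == "available", nil // keep polling otherwise
	})
}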

Govet verification fails with possible nil pointer issue

/kind bug

What happened?
Govet verification fails with possible nil pointer issue

madhankumar@MADHANKUMARs-MacBook-Pro ibm-powervs-block-csi-driver % make verify
echo "Installing golangci-lint..."
Installing golangci-lint...
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s v1.43.0
golangci/golangci-lint info checking GitHub for tag 'v1.43.0'
golangci/golangci-lint info found version: 1.43.0 for v1.43.0/darwin/amd64
golangci/golangci-lint info installed ./bin/golangci-lint

### verify-vendor:
Repo uses 'go mod'.
+ env GO111MODULE=on go mod tidy
Go dependencies up-to-date.
echo "verifying and linting files ..."
verifying and linting files ...
./hack/verify-all
Verifying gofmt
No issue found
Verifying govet
Done
pkg/driver/node.go:386:62: SA5011: possible nil pointer dereference (staticcheck)
	klog.V(4).Infof("NodeGetVolumeStats: called with args %+v", *req)
	                                                            ^
pkg/driver/node.go:389:5: SA5011(related information): this check suggests that the pointer can be nil (staticcheck)
	if req == nil || req.VolumeId == "" {
	   ^
make: *** [verify] Error 1
madhankumar@MADHANKUMARs-MacBook-Pro ibm-powervs-block-csi-driver % 

What you expected to happen?
We shouldn't have to check whether req is nil, as it is passed from the provisioner and will not be nil.

How to reproduce it (as minimally and precisely as possible)?
Run make verify on the repo.

Add scale tests

Research scalability testing for CSI and add scale tests for the PowerVS CSI driver.

Attach volume fails while validating that the volume is attached to the node

Attaching a volume using volClient fails with the error:
Failed to validate that the volume [c7077cfb-6343-4300-9b27-6b086b9d1e2f] is attached to the pvminstance [9552c51d-5916-4ce5-a061-1e8bd7315ca8]

Error snippet:

func (p *powerVSCloud) AttachDisk(volumeID string, nodeID string) (err error) {
	_, err = p.volClient.Attach(nodeID, volumeID, p.cloudInstanceID, TIMEOUT)
	if err != nil {
		return err
	}

	err = p.WaitForAttachmentState(volumeID, VolumeInUseState)
	if err != nil {
		return err
	}
	return nil
}

Track run time metrics

Measure and track runtime metrics for various operations, such as:

  • batch volume creation
  • large volume formatting
  • large volume mount
  • etc.

IBM Cloud should not allow VM names with capital letters and _

IBM Cloud does a case-insensitive comparison and fails a VM creation request when the same name already exists with different casing.
If a VM called "VMFirst" already exists in IBM Cloud, then a VM named "VMFIRst" cannot be created.

IBM Cloud should not allow capital letters and _ in VM names.
Kubernetes also doesn't allow capital letters in object names (see https://github.com/kubernetes/community/blob/master/contributors/design-proposals/architecture/identifiers.md#definitions); machines are not created via a Kubernetes spec at all if the name contains capitals, underscores, etc. The sketch below shows the constraint Kubernetes enforces.
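
For reference, a sketch of the DNS-1123 label check Kubernetes applies to names (so "VMFirst" and names with _ are rejected); this mirrors the rule in the linked identifiers document.

package main

import "regexp"

// dns1123Label matches Kubernetes object-name rules: lowercase
// alphanumerics and '-', starting and ending with an alphanumeric.
var dns1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

// isValidName also enforces the 63-character label length limit.
func isValidName(name string) bool {
	return len(name) <= 63 && dns1123Label.MatchString(name)
}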

Undefined errors due to k8s.io/kubernetes

The k8s.io/kubernetes package throws undefined errors while verifying govet and running e2e tests.

madhankumar@MADHANKUMARs-MacBook-Pro ibm-powervs-block-csi-driver % make verify
echo "Installing golangci-lint..."
Installing golangci-lint...
curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s v1.43.0
golangci/golangci-lint info checking GitHub for tag 'v1.43.0'
golangci/golangci-lint info found version: 1.43.0 for v1.43.0/darwin/amd64
golangci/golangci-lint info installed ./bin/golangci-lint

### verify-vendor:
Repo uses 'go mod'.
+ env GO111MODULE=on go mod tidy
Go dependencies up-to-date.
echo "verifying and linting files ..."
verifying and linting files ...
./hack/verify-all
Verifying gofmt
No issue found
Verifying govet
# k8s.io/kubernetes/pkg/features
../../../../go/pkg/mod/k8s.io/kubernetes@<version>/pkg/features/kube_features.go:995:2: undefined: features.OpenAPIEnums
../../../../go/pkg/mod/k8s.io/kubernetes@<version>/pkg/features/kube_features.go:996:2: undefined: features.CustomResourceValidationExpressions
../../../../go/pkg/mod/k8s.io/kubernetes@<version>/pkg/features/kube_features.go:997:2: undefined: features.OpenAPIV3
../../../../go/pkg/mod/k8s.io/kubernetes@<version>/pkg/features/kube_features.go:998:2: undefined: features.ServerSideFieldValidation
make: *** [verify] Error 2

NodeStageVolume is called again even after the successful response

NodeStageVolume is called, the respective disk is attached to the node, and it is mounted successfully for the pod.
But NodeStageVolume is called again, and the mount fails because the disk is already mounted.

ERROR: logging before flag.Parse: I1214 10:23:58.538572       1 fibrechannel.go:246] Attaching fibre channel volume
ERROR: logging before flag.Parse: I1214 10:23:58.540199       1 fibrechannel.go:231]  = = = = = = = = = =  = = = = Disk : /dev/dm-11 dm: 
I1214 10:24:00.366208       1 mount_linux.go:376] Disk successfully formatted (mkfs): xfs - /dev/dm-11 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6d954fa8-d913-410e-ba8e-a23accdd1b53/globalmount
I1214 10:24:00.366228       1 mount_linux.go:394] Attempting to mount disk /dev/dm-11 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6d954fa8-d913-410e-ba8e-a23accdd1b53/globalmount
I1214 10:24:00.366247       1 mount_linux.go:146] Mounting cmd (mount) with arguments (-t xfs -o defaults /dev/dm-11 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6d954fa8-d913-410e-ba8e-a23accdd1b53/globalmount)
I1214 10:24:01.026631       1 node.go:190] Node stage volume finished ++++++++++++++++++++ source /dev/dm-11 target /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6d954fa8-d913-410e-ba8e-a23accdd1b53/globalmount wwn 6005076810830198a0000000000015d1


......
......

I1214 10:24:32.977847       1 node.go:148] NodeStageVolume: find device path for wwn 6005076810830198a0000000000015d1 -> /dev/dm-11
I1214 10:24:32.978728       1 node.go:184] NodeStageVolume: formatting /dev/dm-11 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6d954fa8-d913-410e-ba8e-a23accdd1b53/globalmount with fstype xfs
I1214 10:24:32.978751       1 mount_linux.go:405] Attempting to determine if disk "/dev/dm-11" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/dm-11])
I1214 10:24:32.980885       1 mount_linux.go:408] Output: "DEVNAME=/dev/dm-11\nTYPE=xfs\n", err: <nil>
I1214 10:24:32.980902       1 mount_linux.go:298] Checking for issues with fsck on disk: /dev/dm-11
I1214 10:24:32.985722       1 mount_linux.go:394] Attempting to mount disk /dev/dm-11 in xfs format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6d954fa8-d913-410e-ba8e-a23accdd1b53/globalmount
I1214 10:24:32.985745       1 mount_linux.go:146] Mounting cmd (mount) with arguments (-t xfs -o defaults /dev/dm-11 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6d954fa8-d913-410e-ba8e-a23accdd1b53/globalmount)
E1214 10:24:32.988904       1 mount_linux.go:150] Mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t xfs -o defaults /dev/dm-11 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6d954fa8-d913-410e-ba8e-a23accdd1b53/globalmount
Output: mount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6d954fa8-d913-410e-ba8e-a23accdd1b53/globalmount: /dev/mapper/mpathu already mounted on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6d954fa8-d913-410e-ba8e-a23accdd1b53/globalmount.

E1214 10:24:32.989378       1 driver.go:116] GRPC error: rpc error: code = Internal desc = could not format "/dev/dm-11" and mnt it at "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-6d954fa8-d913-410e-ba8e-a23accdd1b53/globalmount"
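
A hedged sketch of the usual fix: make NodeStageVolume idempotent by checking whether the staging target is already a mount point before formatting and mounting. mount.Interface comes from k8s.io/mount-utils; where exactly this check slots into the driver is an assumption.

package driver

import (
	"os"

	mount "k8s.io/mount-utils"
)

// alreadyStaged reports whether target is already mounted, letting a repeated
// NodeStageVolume return success instead of re-running FormatAndMount.
func alreadyStaged(mounter mount.Interface, target string) (bool, error) {
	notMnt, err := mounter.IsLikelyNotMountPoint(target)
	if os.IsNotExist(err) {
		return false, nil // staging dir not created yet, so not staged
	}
	if err != nil {
		return false, err
	}
	return !notMnt, nil
}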

Wait loops are failing

Wait loops from k8s.io/kubernetes/test are used throughout the e2e test automation.

The wait loops fail abruptly in the middle of a run.

A sample log where a wait loop fails with a panic during pod deletion:

STEP: Saw pod success
Nov 16 04:55:30.916: INFO: Pod "powervs-volume-tester-lrzb8" satisfied condition "Succeeded or Failed"
Nov 16 04:55:30.916: INFO: deleting Pod "powervs-8087"/"powervs-volume-tester-lrzb8"
Nov 16 04:55:30.946: INFO: Pod powervs-volume-tester-lrzb8 has the following logs: hello world

STEP: Deleting pod powervs-volume-tester-lrzb8 in namespace powervs-8087
Nov 16 04:55:30.953: INFO: deleting PVC "powervs-8087"/"pvc-ptkfh"
Nov 16 04:55:30.953: INFO: Deleting PersistentVolumeClaim "pvc-ptkfh"
STEP: waiting for claim's PV "pvc-dbba1694-77d6-48c4-8b22-0de03d0bac8a" to be deleted
Nov 16 04:55:30.959: INFO: Waiting up to 10m0s for PersistentVolume pvc-dbba1694-77d6-48c4-8b22-0de03d0bac8a to get deleted
Nov 16 04:55:30.965: INFO: PersistentVolume pvc-dbba1694-77d6-48c4-8b22-0de03d0bac8a found and phase=Bound (5.471448ms)
Nov 16 04:55:35.967: INFO: PersistentVolume pvc-dbba1694-77d6-48c4-8b22-0de03d0bac8a found and phase=Released (5.007921884s)
Nov 16 04:55:40.970: INFO: PersistentVolume pvc-dbba1694-77d6-48c4-8b22-0de03d0bac8a found and phase=Released (10.010273399s)
Nov 16 04:55:45.972: INFO: PersistentVolume pvc-dbba1694-77d6-48c4-8b22-0de03d0bac8a found and phase=Released (15.012465718s)
Nov 16 04:55:50.974: INFO: PersistentVolume pvc-dbba1694-77d6-48c4-8b22-0de03d0bac8a found and phase=Released (20.014503456s)
Nov 16 04:55:55.976: INFO: PersistentVolume pvc-dbba1694-77d6-48c4-8b22-0de03d0bac8a found and phase=Released (25.017004809s)
Nov 16 04:56:00.978: INFO: PersistentVolume pvc-dbba1694-77d6-48c4-8b22-0de03d0bac8a found and phase=Released (30.019080219s)
panic: test timed out after 10m0s

goroutine 438 [running]:
testing.(*M).startAlarm.func1()
	/usr/lib/golang/src/testing/testing.go:1618 +0xcc
created by time.goFunc
	/usr/lib/golang/src/time/sleep.go:167 +0x44

goroutine 1 [chan receive, 10 minutes]:
testing.(*T).Run(0xc000636900, 0x1163c9c0, 0x7, 0x11716118, 0x402)
	/usr/lib/golang/src/testing/testing.go:1169 +0x280
testing.runTests.func1(0xc000636780)
	/usr/lib/golang/src/testing/testing.go:1439 +0x78
testing.tRunner(0xc000636780, 0xc0008afd58)
	/usr/lib/golang/src/testing/testing.go:1123 +0xd8
testing.runTests(0xc000584100, 0x12454de0, 0x1, 0x1, 0xc05cec5977367afe, 0x8bb9fce3f2, 0x1258cec0, 0x1258fb38)
	/usr/lib/golang/src/testing/testing.go:1437 +0x2b4
testing.(*M).Run(0xc00000cd80, 0x0)
	/usr/lib/golang/src/testing/testing.go:1345 +0x1a0
main.main()
	_testmain.go:43 +0x130

goroutine 19 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x1258d400)
	/root/go/pkg/mod/k8s.io/klog/v2@<version>/klog.go:1169 +0x78
created by k8s.io/klog/v2.init.0
	/root/go/pkg/mod/k8s.io/klog/v2@<version>/klog.go:417 +0xe4

goroutine 24 [chan receive]:
github.com/golang/glog.(*loggingT).flushDaemon(0x1258d180)
	/root/go/pkg/mod/github.com/golang/glog@<version>/glog.go:882 +0x78
created by github.com/golang/glog.init.0
	/root/go/pkg/mod/github.com/golang/glog@<version>/glog.go:410 +0x2ac

goroutine 32 [IO wait]:
internal/poll.runtime_pollWait(0x7fff635e8e88, 0x72, 0xc0003b4005)
	/usr/lib/golang/src/runtime/netpoll.go:222 +0x50
internal/poll.(*pollDesc).wait(0xc000976d18, 0x72, 0x11868400, 0x124843c0, 0x0)
	/usr/lib/golang/src/internal/poll/fd_poll_runtime.go:87 +0x40
internal/poll.(*pollDesc).waitRead(...)
	/usr/lib/golang/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000976d00, 0xc0003b4000, 0x382c, 0x382c, 0x0, 0x0, 0x0)
	/usr/lib/golang/src/internal/poll/fd_unix.go:159 +0x184
net.(*netFD).Read(0xc000976d00, 0xc0003b4000, 0x382c, 0x382c, 0x0, 0x0, 0x1032d57c)
	/usr/lib/golang/src/net/fd_posix.go:55 +0x48
net.(*conn).Read(0xc000632780, 0xc0003b4000, 0x382c, 0x382c, 0x0, 0x0, 0x0)
	/usr/lib/golang/src/net/net.go:182 +0x84
crypto/tls.(*atLeastReader).Read(0xc0003acba0, 0xc0003b4000, 0x382c, 0x382c, 0x1032fa1c, 0x54c, 0xc00003c000)
	/usr/lib/golang/src/crypto/tls/conn.go:779 +0x60
bytes.(*Buffer).ReadFrom(0xc00027e980, 0x1185fa00, 0xc0003acba0, 0xc0003b4005, 0x113f6c80, 0x115bfc40)
	/usr/lib/golang/src/bytes/buffer.go:204 +0xac
crypto/tls.(*Conn).readFromUntil(0xc00027e700, 0x118669e0, 0xc000632780, 0x5, 0xc000632780, 0x54c)
	/usr/lib/golang/src/crypto/tls/conn.go:801 +0xd4
crypto/tls.(*Conn).readRecordOrCCS(0xc00027e700, 0x0, 0x0, 0x18)
	/usr/lib/golang/src/crypto/tls/conn.go:608 +0xf4
crypto/tls.(*Conn).readRecord(...)
	/usr/lib/golang/src/crypto/tls/conn.go:576
crypto/tls.(*Conn).Read(0xc00027e700, 0xc0003a0000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/golang/src/crypto/tls/conn.go:1252 +0x150
bufio.(*Reader).Read(0xc00035b860, 0xc0008c8118, 0x9, 0x9, 0x11719d00, 0x1040c560, 0x11719d01)
	/usr/lib/golang/src/bufio/bufio.go:227 +0x238
io.ReadAtLeast(0x1185f820, 0xc00035b860, 0xc0008c8118, 0x9, 0x9, 0x9, 0x0, 0x1185fc00, 0xc000118030)
	/usr/lib/golang/src/io/io.go:314 +0x84
io.ReadFull(...)
	/usr/lib/golang/src/io/io.go:333
golang.org/x/net/http2.readFrameHeader(0xc0008c8118, 0x9, 0x9, 0x1185f820, 0xc00035b860, 0x0, 0x0, 0xc000419770, 0x0)
	/root/go/pkg/mod/golang.org/x/net@<version>/http2/frame.go:237 +0x60
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0008c80e0, 0xc000419770, 0x0, 0x0, 0x0)
	/root/go/pkg/mod/golang.org/x/net@<version>/http2/frame.go:492 +0x84
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00093ff98, 0x0, 0x0)
	/root/go/pkg/mod/golang.org/x/net@<version>/http2/transport.go:1794 +0xc0
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000448c00)
	/root/go/pkg/mod/golang.org/x/net@<version>/http2/transport.go:1716 +0x68
created by golang.org/x/net/http2.(*Transport).newClientConn
	/root/go/pkg/mod/golang.org/x/net@<version>/http2/transport.go:695 +0x554

goroutine 10 [sleep]:
time.Sleep(0x12a05f200)
	/usr/lib/golang/src/runtime/time.go:188 +0xc4
k8s.io/kubernetes/test/e2e/framework/pv.WaitForPersistentVolumeDeleted(0x118cd6c0, 0xc000ac89a0, 0xc000ac63c0, 0x28, 0x12a05f200, 0x8bb2c97000, 0x4f, 0x0)
	/root/go/pkg/mod/k8s.io/kubernetes@<version>/test/e2e/framework/pv/pv.go:832 +0xf8
github.com/ppc64le-cloud/powervs-csi-driver/tests/e2e/testsuites.(*TestPersistentVolumeClaim).Cleanup(0xc0003746c0)
	/root/e2etest/powervs-csi-driver/tests/e2e/testsuites/testsuites.go:299 +0x2d0
github.com/ppc64le-cloud/powervs-csi-driver/tests/e2e/testsuites.(*DynamicallyProvisionedCmdVolumeTest).Run(0xc0009dd0d8, 0x118cd6c0, 0xc000ac89a0, 0xc000956b00)
	/root/e2etest/powervs-csi-driver/tests/e2e/testsuites/dynamically_provisioned_cmd_volume_tester.go:29 +0x1f4
github.com/ppc64le-cloud/powervs-csi-driver/tests/e2e.glob..func1.2()
	/root/e2etest/powervs-csi-driver/tests/e2e/dynamic_provisioning.go:60 +0x198
github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0000a4ae0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/root/go/pkg/mod/github.com/onsi/ginkgo@<version>/internal/leafnodes/runner.go:113 +0xa0
github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0000a4ae0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/root/go/pkg/mod/github.com/onsi/ginkgo@<version>/internal/leafnodes/runner.go:64 +0x170
github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc0003cd6e0, 0x11865220, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/root/go/pkg/mod/github.com/onsi/ginkgo@<version>/internal/leafnodes/it_node.go:26 +0x88
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00093c2d0, 0x0, 0x11865220, 0xc00005cac0)
	/root/go/pkg/mod/github.com/onsi/ginkgo@<version>/internal/spec/spec.go:215 +0x21c
github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00093c2d0, 0x11865220, 0xc00005cac0)
	/root/go/pkg/mod/github.com/onsi/ginkgo@<version>/internal/spec/spec.go:138 +0xf8
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0006177c0, 0xc00093c2d0, 0x2)
	/root/go/pkg/mod/github.com/onsi/ginkgo@<version>/internal/specrunner/spec_runner.go:200 +0xec
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0006177c0, 0x1)
	/root/go/pkg/mod/github.com/onsi/ginkgo@<version>/internal/specrunner/spec_runner.go:170 +0x124
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0006177c0, 0xc000872970)
	/root/go/pkg/mod/github.com/onsi/ginkgo@<version>/internal/specrunner/spec_runner.go:66 +0x104
github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0003c0320, 0x7fff606afdc0, 0xc000636900, 0x1166c5d1, 0x24, 0xc000602060, 0x1, 0x1, 0x118ae9c0, 0xc00005cac0, ...)
	/root/go/pkg/mod/github.com/onsi/ginkgo@<version>/internal/suite/suite.go:62 +0x364
github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x11867020, 0xc000636900, 0x1166c5d1, 0x24, 0xc000602050, 0x1, 0x1, 0x1)
	/root/go/pkg/mod/github.com/onsi/ginkgo@<version>/ginkgo_dsl.go:226 +0x1d0
github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x11867020, 0xc000636900, 0x1166c5d1, 0x24, 0x0, 0x0, 0x0, 0x1006e84c)
	/root/go/pkg/mod/github.com/onsi/ginkgo@<version>/ginkgo_dsl.go:214 +0x98
github.com/ppc64le-cloud/powervs-csi-driver/tests/e2e.TestE2E(0xc000636900)
	/root/e2etest/powervs-csi-driver/tests/e2e/suite_test.go:59 +0x184
testing.tRunner(0xc000636900, 0x11716118)
	/usr/lib/golang/src/testing/testing.go:1123 +0xd8
created by testing.(*T).Run
	/usr/lib/golang/src/testing/testing.go:1168 +0x264

goroutine 11 [chan receive, 10 minutes]:
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).registerForInterrupts(0xc0006177c0, 0xc000912fc0)
	/root/go/pkg/mod/github.com/onsi/ginkgo@<version>/internal/specrunner/spec_runner.go:223 +0xbc
created by github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run
	/root/go/pkg/mod/github.com/onsi/ginkgo@<version>/internal/specrunner/spec_runner.go:60 +0x78

goroutine 13 [syscall, 10 minutes]:
os/signal.signal_recv(0x0)
	/usr/lib/golang/src/runtime/sigqueue.go:147 +0xf8
os/signal.loop()
	/usr/lib/golang/src/os/signal/signal_unix.go:23 +0x24
created by os/signal.Notify.func1.1
	/usr/lib/golang/src/os/signal/signal.go:150 +0x4c
exit status 2
FAIL	github.com/ppc64le-cloud/powervs-csi-driver/tests/e2e	600.127s
[root@madhan-1-kube-1-22-2 e2e]# 

Format and Mount fails

In the e2e tests, we create a pod and attach a volume through automation.
But the pod is always in the Pending state.

The pod description shows the following events:

Events:
  Type     Reason                  Age                   From                     Message
  ----     ------                  ----                  ----                     -------
  Normal   Scheduled               57m                   default-scheduler        Successfully assigned powervs-4827/powervs-volume-tester-9jnwq to madhan-multinode-kube-worker-2
  Normal   SuccessfulAttachVolume  57m                   attachdetach-controller  AttachVolume.Attach succeeded for volume "powervs-4827-powervs.csi.ibm.com-preprovsioned-pv-llsq9"
  Warning  FailedMount             28m                   kubelet                  Unable to attach or mount volumes: unmounted volumes=[test-volume-1], unattached volumes=[default-token-292cb test-volume-1]: timed out waiting for the condition
  Warning  FailedMount             5m46s (x22 over 55m)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[test-volume-1], unattached volumes=[test-volume-1 default-token-292cb]: timed out waiting for the condition
  Warning  FailedMount             20s (x23 over 55m)    kubelet                  MountVolume.MountDevice failed for volume "powervs-4827-powervs.csi.ibm.com-preprovsioned-pv-llsq9" : rpc error: code = Internal desc = could not format "/dev/dm-15" and mnt it at "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/powervs-4827-powervs.csi.ibm.com-preprovsioned-pv-llsq9/globalmount"

The stage operation in the CSI plugin on the node is failing.
Node plugin logs:

ERROR: logging before flag.Parse: I1115 05:56:31.330343       1 fibrechannel.go:243] Attaching fibre channel volume
I1115 05:56:37.117604       1 node.go:348] NodeGetCapabilities: called with args {XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I1115 05:57:18.113694       1 node.go:140] NodeStageVolume: find device path for wwn 6005076810830198a000000000000c48 -> /dev/dm-15
I1115 05:57:18.114196       1 node.go:175] NodeStageVolume: formatting /dev/dm-15 and mounting at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/powervs-4827-powervs.csi.ibm.com-preprovsioned-pv-llsq9/globalmount with fstype ext4
I1115 05:57:18.114210       1 mount_linux.go:405] Attempting to determine if disk "/dev/dm-15" is formatted using blkid with args: ([-p -s TYPE -s PTTYPE -o export /dev/dm-15])
I1115 05:57:18.115941       1 mount_linux.go:408] Output: "", err: exit status 2
I1115 05:57:18.115993       1 mount_linux.go:366] Disk "/dev/dm-15" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/dm-15]
E1115 05:57:39.039401       1 mount_linux.go:372] format of disk "/dev/dm-15" failed: type:("ext4") target:("/var/lib/kubelet/plugins/kubernetes.io/csi/pv/powervs-4827-powervs.csi.ibm.com-preprovsioned-pv-llsq9/globalmount") options:("defaults") errcode:(exit status 1) output:(mke2fs 1.44.5 (15-Dec-2018)
Discarding device blocks: failed - Remote I/O error
Creating filesystem with 2621440 4k blocks and 655360 inodes
Filesystem UUID: 35dad7ea-e90d-490d-891e-fed7c2524ebd
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: mkfs.ext4: Input/output error while writing out and closing file system
) 
E1115 05:57:39.039442       1 driver.go:114] GRPC error: rpc error: code = Internal desc = could not format "/dev/dm-15" and mnt it at "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/powervs-4827-powervs.csi.ibm.com-preprovsioned-pv-llsq9/globalmount"

Add examples

Add examples (YAML) for:

  • Dynamic Provisioning
  • Static Provisioning
  • Block Volume
  • Configure StorageClass
  • Volume Resizing

Need to add mock stats in sanity test

/kind bug

What happened?
Currently the sanity tests don't have a mock for the stats object in the node service.
The stats object was recently added to the node service when getVolumeStats was implemented (commit 4ba7312).
But the necessary changes were not made in the sanity tests.
This causes failures in the sanity tests; mock methods need to be added to avoid them.

Reference: https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/blob/9c4865260a0d3b816fa2c3cb63f250a5870c4adc/tests/sanity/sanity_test.go#L170

What you expected to happen?
Adding the mocks should make the sanity run successful.

How to reproduce it (as minimally and precisely as possible)?
go test -v pkg/driver/sanity_test.go
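
A hedged sketch of the kind of stub needed, assuming the node service depends on a stats interface roughly like the one below; the interface and method names are illustrative, not the driver's actual API.

package sanity

// volumeStatser is a hypothetical stand-in for the node service's stats dependency.
type volumeStatser interface {
	FSInfo(path string) (available, capacity, used, inodesFree, inodes, inodesUsed int64, err error)
}

// fakeStatser returns fixed values so NodeGetVolumeStats can be exercised
// in the sanity run without touching a real filesystem.
type fakeStatser struct{}

func (fakeStatser) FSInfo(path string) (int64, int64, int64, int64, int64, int64, error) {
	return 1 << 20, 1 << 30, 1 << 29, 100, 1000, 900, nil
}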

Volume Creation fails with Duplicate name error

Creating 60 workloads per iteration using kube-burner on the cluster,
which issues 60 CreateVolume requests to IBM Cloud via the PowerVS CSI driver controller plugin.

CreateVolume is called at 07:39:10:

I1217 07:39:10.106005       1 controller.go:80] CreateVolume: called with args {Name:pvc-dbb635a1-b73f-4cd1-9ce9-9f97286da3d8 CapacityRange:required_bytes:1000000000  VolumeCapabilities:[mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[type:tier3] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:requisite:<segments:<key:"topology.powervs.csi.ibm.com/disk-type" value:"tier3" > > preferred:<segments:<key:"topology.powervs.csi.ibm.com/disk-type" value:"tier3" > >  XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
POST /pcloud/v1/cloud-instances/7845d372-d4e1-46b8-91fc-41051c984601/volumes HTTP/1.1^M
Host: lon.power-iaas.cloud.ibm.com^M
User-Agent: Go-http-client/1.1^M
Content-Length: 157^M
Accept: application/json^M
Authorization: Bearer 
Content-Type: application/json^M
Crn: crn:v1:bluemix:public:power-iaas:lon04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:7845d372-d4e1-46b8-91fc-41051c984601::^M
Accept-Encoding: gzip^M
^M
{"antiAffinityPVMInstances":null,"antiAffinityVolumes":null,"diskType":"tier3","name":"pvc-dbb635a1-b73f-4cd1-9ce9-9f97286da3d8","shareable":false,"size":1}

After 30 seconds, the CreateVolume request fails due to a TCP error:

E1217 07:39:40.452742       1 driver.go:116] GRPC error: rpc error: code = Internal desc = Could not create volume "pvc-dbb635a1-b73f-4cd1-9ce9-9f97286da3d8": failed to perform the Create volume Operation for volume pvc-dbb635a1-b73f-4cd1-9ce9-9f97286da3d8 with error Post "https://lon.power-iaas.cloud.ibm.com/pcloud/v1/cloud-instances/7845d372-d4e1-46b8-91fc-41051c984601/volumes": read tcp 172.20.199.198:56236->104.19.187.123:443: read: connection reset by peer

Since the CreateVolume call failed, the controller retries creating the volume.
But in the back end the volume was already created, so the retry fails with a duplicate-name error:

E1217 07:40:24.727505       1 driver.go:116] GRPC error: rpc error: code = Internal desc = Could not create volume "pvc-dbb635a1-b73f-4cd1-9ce9-9f97286da3d8": failed to perform the Create volume Operation for volume pvc-dbb635a1-b73f-4cd1-9ce9-9f97286da3d8 with error [POST /pcloud/v1/cloud-instances/{cloud_instance_id}/volumes][400] pcloudCloudinstancesVolumesPostBadRequest  &{Code:0 Description:bad request: pvc-dbb635a1-b73f-4cd1-9ce9-9f97286da3d8 volume name already exists for cloud instance 7031b049297e4588a3eafb21335d6a2b; duplicate names are not allowed Error:bad request Message:}
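
A hedged sketch of making CreateVolume idempotent: look up the volume by name before creating it, so a retry after a timed-out request can adopt the already-created disk. GetDiskByName is an assumed cloud-layer lookup; the other names come from the driver snippet shown earlier.

// Inside CreateVolume, before calling d.cloud.CreateDisk:
if disk, err := d.cloud.GetDiskByName(volName); err == nil && disk != nil {
	// A previous (possibly timed-out) attempt already created this volume.
	if disk.CapacityBytes != volSizeBytes {
		return nil, status.Errorf(codes.AlreadyExists,
			"volume %q already exists with a different size", volName)
	}
	return newCreateVolumeResponse(disk), nil
}
disk, err := d.cloud.CreateDisk(volName, opts)
if err != nil {
	return nil, status.Errorf(codes.Internal, "Could not create volume %q: %v", volName, err)
}
return newCreateVolumeResponse(disk), nil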

Improve docs

Add necessary documentation for the PowerVS CSI driver.

  • Add a README for the PowerVS CSI driver

Need to add PVM instance id as part of node service

The metadata service should be called only when constructing newNodeService.
But in the current implementation, the metadata service is called in the NodeGetInfo method in order to get the pvmInstanceId.

func (d *nodeService) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	klog.V(4).Infof("NodeGetInfo: called with args %+v", *req)
	metadata, err := cloud.NewMetadataService(cloud.DefaultKubernetesAPIClient)
	if err != nil {
		panic(err)
	}
	pvmInstanceId := metadata.GetPvmInstanceId()

	in, err := d.cloud.GetPVMInstanceByID(pvmInstanceId)
       ....
       ....
}

While unit testing the NodeGetInfo method, the test case fails when it is not running inside a pod.

This can be optimized by storing pvmInstanceId in nodeService when it is initialized, as below.

func newNodeService(driverOptions *Options) nodeService {
	klog.V(4).Infof("retrieving node info from metadata service")
	....
	....
	return nodeService{
		cloud:         pvsCloud,
		mounter:       newNodeMounter(),
		driverOptions: driverOptions,
		pvmInstanceId: metadata.GetPvmInstanceId(),
	}
}

This also allows the unit tests to run anywhere (outside the cluster or on a cluster node).
If the metadata service is called anywhere other than at service initialization, the unit tests can run only inside a pod, because the metadata service uses the in-cluster configuration.

The EFS CSI driver code was used as a reference for this approach.

Optimize the CSI code for rescanning the device

Rescanning a newly attached volume on the node causes the plugin to wait a long time.
Sometimes too many rescans run in the background, and new volume creations fail because the system is busy with the rescans; a sketch of bounding the concurrency follows.
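
A hedged sketch of one way to bound the rescans with a counting semaphore; rescanDevice stands in for the driver's actual rescan logic, and the limit of 2 is arbitrary.

package driver

// rescanSem bounds how many device rescans run at once so background
// rescans cannot starve new volume operations.
var rescanSem = make(chan struct{}, 2)

// rescanDevice is a stand-in for the driver's real SCSI/FC rescan code.
func rescanDevice(wwn string) error { return nil }

// rescanWithLimit wraps a rescan in the semaphore.
func rescanWithLimit(wwn string) error {
	rescanSem <- struct{}{}        // acquire a slot
	defer func() { <-rescanSem }() // release it when done
	return rescanDevice(wwn)
}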

Resize PVC is not working even if the storage class allows volume expansion

Created a storage class with volume expansion support using the YAML below.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: powervs-sc
provisioner: powervs.csi.ibm.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: tier3
  csi.storage.k8s.io/fstype: xfs
allowVolumeExpansion: true

Created a PVC using the above storage class:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: powervs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: powervs-sc
  resources:
    requests:
      storage: 1Gi

PVC details

[root@madhan-multinode-kube-master powervs-csi-driver]# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
powervs-claim   Bound    pvc-a91b920f-fac0-43ba-adeb-ce85a7530c8a   1Gi        RWO            powervs-sc     11d

Edited the PVC as below:

[root@madhan-multinode-kube-master powervs-csi-driver]# kubectl edit pvc powervs-claim

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"powervs-claim","namespace":"kube-system"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"powervs-sc"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: powervs.csi.ibm.com
    volume.kubernetes.io/selected-node: madhan-multinode-kube-worker-1
  creationTimestamp: "2021-10-07T09:17:27Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: powervs-claim
  namespace: kube-system
  resourceVersion: "2904572"
  selfLink: /api/v1/namespaces/kube-system/persistentvolumeclaims/powervs-claim
  uid: a91b920f-fac0-43ba-adeb-ce85a7530c8a
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: powervs-sc
  volumeMode: Filesystem
  volumeName: pvc-a91b920f-fac0-43ba-adeb-ce85a7530c8a
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound

Changed spec.resources.requests.storage to "2Gi".

[root@madhan-multinode-kube-master powervs-csi-driver]# kubectl edit pvc powervs-claim
persistentvolumeclaim/powervs-claim edited
[root@madhan-multinode-kube-master powervs-csi-driver]# kubectl get pvc powervs-claim
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
powervs-claim   Bound    pvc-a91b920f-fac0-43ba-adeb-ce85a7530c8a   1Gi        RWO            powervs-sc     11d

But the size is not changed.
The ControllerExpandVolume method is never called on the PowerVS CSI driver; a sketch of the missing handler follows.
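
ControllerExpandVolume is invoked by the external-resizer sidecar, and only when the driver advertises the EXPAND_VOLUME controller capability. A hedged sketch of the missing handler, assuming a cloud-layer ResizeDisk call (an illustrative name, not an existing method):

func (d *controllerService) ControllerExpandVolume(ctx context.Context, req *csi.ControllerExpandVolumeRequest) (*csi.ControllerExpandVolumeResponse, error) {
	volumeID := req.GetVolumeId()
	if volumeID == "" {
		return nil, status.Error(codes.InvalidArgument, "Volume ID not provided")
	}
	newSize := req.GetCapacityRange().GetRequiredBytes()
	// ResizeDisk is a hypothetical cloud-layer call that grows the PowerVS volume.
	if err := d.cloud.ResizeDisk(volumeID, newSize); err != nil {
		return nil, status.Errorf(codes.Internal, "Could not resize volume %q: %v", volumeID, err)
	}
	return &csi.ControllerExpandVolumeResponse{
		CapacityBytes:         newSize,
		NodeExpansionRequired: true, // the filesystem must also be grown on the node
	}, nil
}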

ControllerCreateVolume fails as the volume already exists

While running multiple workloads from kube-burner,
the ControllerCreateVolume method is called and fails for a few volumes of the batch with the error:

{"description":"bad request: pvc-d5d0d6f7-1c1e-4c13-8a2a-ebabd31fa9b0 volume name already exists for cloud instance 7031b049297e4588a3eafb21335d6a2b; duplicate names are not allowed","error":"bad request"}

Log:


I1206 11:10:37.257617       1 controller.go:78] CreateVolume: called with args {Name:pvc-e79fa4ea-e1bb-484d-9a7a-6539f23635f5 CapacityRange:required_bytes:1073741824  VolumeCapabilities:[mount:<fs_type:"xfs" > access_mode:<mode:SINGLE_NODE_WRITER > ] Parameters:map[type:tier3] Secrets:map[] VolumeContentSource:<nil> AccessibilityRequirements:requisite:<segments:<key:"topology.powervs.csi.ibm.com/disk-type" value:"tier1" > > preferred:<segments:<key:"topology.powervs.csi.ibm.com/disk-type" value:"tier1" > >  XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
2021/12/06 11:10:37 calling the PowerVolume Create Method
2021/12/06 11:10:37 Calling the New Auth Method in the IBMPower Session Code
2021/12/06 11:10:37 Calling the crn constructor that is to be passed back to the caller  65b64c1f1c29460e8c2e4bbfbd893c2c
2021/12/06 11:10:37 the region is lon and the zone is  lon04
2021/12/06 11:10:37 the crndata is ... crn:v1:bluemix:public:power-iaas:lon04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:7845d372-d4e1-46b8-91fc-41051c984601:: 
POST /pcloud/v1/cloud-instances/7845d372-d4e1-46b8-91fc-41051c984601/volumes HTTP/1.1
Host: lon.power-iaas.cloud.ibm.com
User-Agent: Go-http-client/1.1
Content-Length: 98
Accept: application/json
Authorization: Bearer
Content-Type: application/json
Crn: crn:v1:bluemix:public:power-iaas:lon04:a/65b64c1f1c29460e8c2e4bbfbd893c2c:7845d372-d4e1-46b8-91fc-41051c984601::
Accept-Encoding: gzip

{"diskType":"tier3","name":"pvc-e79fa4ea-e1bb-484d-9a7a-6539f23635f5","shareable":false,"size":1}

HTTP/1.1 400 Bad Request
Content-Length: 206
Cf-Cache-Status: DYNAMIC
Cf-Ray: 6b95118b3e204e97-FRA
Connection: keep-alive
Content-Type: application/json
Date: Mon, 06 Dec 2021 11:10:39 GMT
Expect-Ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
Server: cloudflare
Strict-Transport-Security: max-age=15724800; includeSubDomains

{"description":"bad request: pvc-e00e231d-f97b-44ab-8a9b-23cd677ad612 volume name already exists for cloud instance 7031b049297e4588a3eafb21335d6a2b; duplicate names are not allowed","error":"bad request"}

E1206 11:10:39.924560       1 driver.go:116] GRPC error: rpc error: code = Internal desc = Could not create volume "pvc-e00e231d-f97b-44ab-8a9b-23cd677ad612": {"description":"bad request: pvc-e00e231d-f97b-44ab-8a9b-23cd677ad612 volume name already exists for cloud instance 7031b049297e4588a3eafb21335d6a2b; duplicate names are not allowed","error":"bad request"}

Mount timed out

In the e2e tests, we create a pod and attach a volume through automation.
But the pod is always in the Pending state.

The pod description shows the following events:

Events:
  Type     Reason                  Age                  From                     Message
  ----     ------                  ----                 ----                     -------
  Normal   Scheduled               7m6s                 default-scheduler        Successfully assigned powervs-1560/powervs-volume-tester-tng55 to madhan-1-kube-1-22-2
  Normal   SuccessfulAttachVolume  6m36s                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-41aacd4d-9d53-4168-bbc0-77cf56596e26"
  Warning  FailedMount             32s (x3 over 4m34s)  kubelet                  MountVolume.MountDevice failed for volume "pvc-41aacd4d-9d53-4168-bbc0-77cf56596e26" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Warning  FailedMount             29s (x3 over 5m3s)   kubelet                  Unable to attach or mount volumes: unmounted volumes=[test-volume-1], unattached volumes=[test-volume-1 kube-api-access-j9rbb]: timed out waiting for the condition
  

Error snippet:

Unable to attach or mount volumes: unmounted volumes=[test-volume-1], unattached volumes=[test-volume-1 kube-api-access-j9rbb]: timed out waiting for the condition

The mount fails because the wait times out.
The NodeStageVolume call on the node plugin is timing out.
Node plugin logs:

I1118 03:38:37.265973       1 node.go:92] NodeStageVolume: called with args {VolumeId:21a0ff66-bf01-45a3-add1-b4f4982854f9 PublishContext:map[wwn:6005076810830198a000000000000d09] StagingTargetPath:/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-41aacd4d-9d53-4168-bbc0-77cf56596e26/globalmount VolumeCapability:mount:<fs_type:"ext4" mount_flags:"rw" > access_mode:<mode:SINGLE_NODE_WRITER >  Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1637056010413-8081-powervs.csi.ibm.com] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}

The NodeStageVolume method is called with these args.
But after that, the method never finishes executing, hence the mount times out.
