
Dynamic sub-dir volume provisioner on a remote NFS server.

License: Apache License 2.0


nfs-subdir-external-provisioner's Introduction

Kubernetes NFS Subdir External Provisioner

NFS subdir external provisioner is an automatic provisioner that uses your existing, already configured NFS server to support dynamic provisioning of Kubernetes Persistent Volumes via Persistent Volume Claims. Persistent volumes are provisioned as ${namespace}-${pvcName}-${pvName}; for example, a claim named test-claim in the default namespace results in a directory such as default-test-claim-pvc-<uid> on the share.

Note: This repository is migrated from https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client. As part of the migration:

  • The container image repository and name have changed to registry.k8s.io/sig-storage and nfs-subdir-external-provisioner, respectively.
  • To maintain backward compatibility with earlier deployment files, the naming of NFS Client Provisioner is retained as nfs-client-provisioner in the deployment YAMLs.
  • One pending area of development for this repository is adding automated e2e tests. If you would like to contribute, please raise an issue or reach out on the Kubernetes Slack #sig-storage channel.

How to deploy NFS Subdir External Provisioner to your cluster

To note again: you must already have an NFS server.

With Helm

Follow the instructions from the helm chart README.

The tl;dr is

$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=x.x.x.x \
    --set nfs.path=/exported/path
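
To confirm the release came up, a quick sanity check (the label and default storage class name below assume the chart defaults; adjust them if you overrode the values):

$ kubectl get pods -l app=nfs-subdir-external-provisioner
$ kubectl get storageclass nfs-client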

With Kustomize

Step 1: Get connection information for your NFS server

Make sure your NFS server is accessible from your Kubernetes cluster and get the information you need to connect to it. At a minimum you will need its hostname and exported share path.
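
Optionally, verify the export is visible before deploying. A minimal check from a machine that can reach the server, assuming the NFS client utilities (nfs-common / nfs-utils) are installed and using placeholder values:

$ showmount -e <YOUR_NFS_SERVER_IP>
$ sudo mount -t nfs <YOUR_NFS_SERVER_IP>:<YOUR_NFS_SERVER_SHARE> /mnt && sudo umount /mnt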

Step 2: Add the base resource

Create a kustomization.yaml file in a directory of your choice, and add the deploy directory as a base. This will use the kustomization file within that directory as the base.

namespace: nfs-provisioner
bases:
  - github.com/kubernetes-sigs/nfs-subdir-external-provisioner//deploy

Step 3: Create namespace resource

Create a file with your namespace resource. The name can be anything as it will get overwritten by the namespace in your kustomization file.

# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nfs-provisioner

Step 4: Configure deployment

To configure the deployment, you will need to patch its container environment variables with the connection information for your NFS server.

# patch_nfs_details.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nfs-client-provisioner
  name: nfs-client-provisioner
spec:
  template:
    spec:
      containers:
        - name: nfs-client-provisioner
          env:
            - name: NFS_SERVER
              value: <YOUR_NFS_SERVER_IP>
            - name: NFS_PATH
              value: <YOUR_NFS_SERVER_SHARE>
      volumes:
        - name: nfs-client-root
          nfs:
            server: <YOUR_NFS_SERVER_IP>
            path: <YOUR_NFS_SERVER_SHARE>

Replace occurrences of <YOUR_NFS_SERVER_IP> and <YOUR_NFS_SERVER_SHARE> with your connection information.

Step 5: Add resources and deploy

Add the namespace resource and the patch you created in the earlier steps to your kustomization.yaml.

namespace: nfs-provisioner
bases:
  - github.com/kubernetes-sigs/nfs-subdir-external-provisioner//deploy
resources:
  - namespace.yaml
patchesStrategicMerge:
  - patch_nfs_details.yaml

Deploy (run inside directory with your kustomization file):

kubectl apply -k .
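
To verify the rollout (assuming the nfs-provisioner namespace used in these examples):

$ kubectl -n nfs-provisioner get deployment,pods
$ kubectl get storageclass nfs-client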

Step 6: Finally, test your environment!

Now we'll test your NFS subdir external provisioner by creating a persistent volume claim and a pod that writes a test file to the volume. This will make sure that the provisioner is provisioning and that the NFS server is reachable and writable.

Deploy the test resources:

$ kubectl create -f https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/test-claim.yaml -f https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/test-pod.yaml

Now check your NFS server for the file SUCCESS inside the PVC's directory.

Delete the test resources:

$ kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/test-claim.yaml -f https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/test-pod.yaml

Now check the PVC's directory has been deleted.

Step 7: Deploying your own PersistentVolumeClaims

To deploy your own PVC, make sure that you have the correct storageClassName (by default nfs-client). You can also patch the StorageClass resource to change it, like so:

# kustomization.yaml
namespace: nfs-provisioner
resources:
  - github.com/kubernetes-sigs/nfs-subdir-external-provisioner//deploy
  - namespace.yaml
patches:
- target:
    kind: StorageClass
    name: nfs-client
  patch: |-
    - op: replace
      path: /metadata/name
      value: <YOUR-STORAGECLASS-NAME>

Manually

Step 1: Get connection information for your NFS server

Make sure your NFS server is accessible from your Kubernetes cluster and get the information you need to connect to it. At a minimum you will need its hostname.

Step 2: Get the NFS Subdir External Provisioner files

To set up the provisioner you will download a set of YAML files, edit them to add your NFS server's connection information, and then apply each with the kubectl / oc command.

Get all of the files in the deploy directory of this repository. These instructions assume that you have cloned the kubernetes-sigs/nfs-subdir-external-provisioner repository and have a bash-shell open in the root directory.

Step 3: Setup authorization

If your cluster has RBAC enabled or you are running OpenShift you must authorize the provisioner. If you are in a namespace/project other than "default" edit deploy/rbac.yaml.

Kubernetes:

# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NS=$(kubectl config get-contexts|grep -e "^\*" |awk '{print $5}')
$ NAMESPACE=${NS:-default}
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
$ kubectl create -f deploy/rbac.yaml

OpenShift:

On some installations of OpenShift the default admin user does not have cluster-admin permissions. If these commands fail, refer to the OpenShift documentation for User and Role Management or contact your OpenShift provider to help you grant the right permissions to your admin user. On OpenShift the service account used to bind volumes does not have the necessary permissions to use the hostmount-anyuid SCC; see also Role based access to SCC for more information.

# Set the subject of the RBAC objects to the current namespace where the provisioner is being deployed
$ NAMESPACE=`oc project -q`
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
$ oc create -f deploy/rbac.yaml
$ oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:$NAMESPACE:nfs-client-provisioner

Step 4: Configure the NFS subdir external provisioner

If you would like to use a custom built nfs-subdir-external-provisioner image, you must edit the provisioner's deployment file to specify the correct location of your nfs-client-provisioner container image.

Next you must edit the provisioner's deployment file to add connection information for your NFS server. Edit deploy/deployment.yaml and replace the two occurrences of <YOUR NFS SERVER HOSTNAME> with your server's hostname.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: <YOUR NFS SERVER HOSTNAME>
            - name: NFS_PATH
              value: /var/nfs
      volumes:
        - name: nfs-client-root
          nfs:
            server: <YOUR NFS SERVER HOSTNAME>
            path: /var/nfs

Note: If you want to change the PROVISIONER_NAME above from k8s-sigs.io/nfs-subdir-external-provisioner to something else like myorg/nfs-storage, remember to also change the PROVISIONER_NAME in the storage class definition below.

To disable leader election, define an env variable named ENABLE_LEADER_ELECTION and set its value to false.
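
For example, a sketch of the extra variable in the container's env list of deploy/deployment.yaml (the PROVISIONER_NAME entry is unchanged from the deployment above):

          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: ENABLE_LEADER_ELECTION
              value: "false"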

Step 5: Deploying your storage class

Parameters:

  • onDelete: If it exists and has a delete value, delete the directory; if it exists and has a retain value, save the directory. Default: the directory is archived on the share with the name archived-<volume.Name>.
  • archiveOnDelete: If it exists and has a false value, delete the directory. If onDelete exists, archiveOnDelete is ignored. Default: the directory is archived on the share with the name archived-<volume.Name>.
  • pathPattern: Specifies a template for creating a directory path via PVC metadata such as labels, annotations, name or namespace. To specify metadata use ${.PVC.<metadata>}. Example: if the folder should be named like <pvc-namespace>-<pvc-name>, use ${.PVC.namespace}-${.PVC.name} as the pathPattern. Default: n/a.

This is deploy/class.yaml which defines the NFS subdir external provisioner's Kubernetes Storage Class:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match the deployment's env PROVISIONER_NAME
parameters:
  pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}" # uses the nfs.io/storage-path annotation; if it is not specified, it is treated as an empty string.
  onDelete: delete
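
To make the pathPattern concrete, an illustration that mirrors the example PVC in Step 7 below (actual paths depend on your own namespace and annotation values):

# A PVC named "test-claim" in namespace "default" carrying the annotation
#   nfs.io/storage-path: "test-path"
# is provisioned under the directory default/test-path on the exported share.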

Step 6: Finally, test your environment!

Now we'll test your NFS subdir external provisioner.

Deploy:

$ kubectl create -f deploy/test-claim.yaml -f deploy/test-pod.yaml

Now check your NFS Server for the file SUCCESS.

kubectl delete -f deploy/test-pod.yaml -f deploy/test-claim.yaml

Now check the folder has been deleted.

Step 7: Deploying your own PersistentVolumeClaims

To deploy your own PVC, make sure that you have the correct storageClassName as indicated by your deploy/class.yaml file.

For example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    nfs.io/storage-path: "test-path" # optional; only needed when the storage class's pathPattern references this annotation (as in the class.yaml above)
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
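
Assuming that manifest is saved as pvc.yaml (a hypothetical file name), apply it and watch the claim bind:

$ kubectl apply -f pvc.yaml
$ kubectl get pvc test-claim   # STATUS should become Bound once the provisioner creates the PV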

Build and publish your own container image

To build your own custom container image from this repository, you will have to build and push the nfs-subdir-external-provisioner image using the following instructions.

make build
make container
# `nfs-subdir-external-provisioner:latest` will be created.
# Note: This will build a single-arch image that matches the machine on which the container is built.
# To upload this to your custom registry, say `quay.io/myorg` and arch as amd64, you can use
# docker tag nfs-subdir-external-provisioner:latest quay.io/myorg/nfs-subdir-external-provisioner-amd64:latest
# docker push quay.io/myorg/nfs-subdir-external-provisioner-amd64:latest

Build and publish with GitHub Actions

In a forked repository you can use the GitHub Actions pipeline defined in .github/workflows/release.yml. The pipeline builds Docker images for the linux/amd64, linux/arm64, and linux/arm/v7 platforms and publishes them using a multi-arch manifest. The pipeline is triggered when you add a tag like gh-v{major}.{minor}.{patch} to your commit and push it to GitHub. The tag is used to generate the Docker image tags: latest, {major}, {major}.{minor}, {major}.{minor}.{patch}.
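
For example, triggering the pipeline for a hypothetical version 4.0.3 would look roughly like this:

$ git tag gh-v4.0.3
$ git push origin gh-v4.0.3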

The pipeline adds several labels:

  • org.opencontainers.image.title=${{ github.event.repository.name }}
  • org.opencontainers.image.description=${{ github.event.repository.description }}
  • org.opencontainers.image.url=${{ github.event.repository.html_url }}
  • org.opencontainers.image.source=${{ github.event.repository.clone_url }}
  • org.opencontainers.image.created=${{ steps.prep.outputs.created }}
  • org.opencontainers.image.revision=${{ github.sha }}
  • org.opencontainers.image.licenses=${{ github.event.repository.license.spdx_id }}

Important:

  • The pipeline performs the docker login command using REGISTRY_USERNAME and REGISTRY_TOKEN secrets, which have to be provided.
  • You also need to provide the DOCKER_IMAGE secret specifying your Docker image name, e.g., quay.io/[username]/nfs-subdir-external-provisioner.
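
One way to provide these secrets is the GitHub CLI, assuming you have gh installed and authenticated against your fork (you can equally set them in the repository's Settings > Secrets page):

$ gh secret set REGISTRY_USERNAME --body "<username>"
$ gh secret set REGISTRY_TOKEN --body "<token>"
$ gh secret set DOCKER_IMAGE --body "quay.io/<username>/nfs-subdir-external-provisioner"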

NFS provisioner limitations/pitfalls

  • The provisioned storage is not guaranteed. You may allocate more than the NFS share's total size. The share may also not have enough storage space left to actually accommodate the request.
  • The provisioned storage limit is not enforced. The application can expand to use all the available storage regardless of the provisioned size.
  • Storage resize/expansion operations are not presently supported in any form. You will end up in an error state: Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.


nfs-subdir-external-provisioner's Issues

Permission denied when creating /persistentvolumes subdirectory

It seems that using /persistentvolumes to mount the NFS share is hardcoded and fails since the node doesn't have permission to create that directory (or subdirectories):

I0111 17:40:04.382995       1 controller.go:987] provision "default/test-claim" class "nfs-client": started
I0111 17:40:04.393352       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-claim", UID:"37b7b714-39c1-43fb-82f2-8cc01b037288", APIVersion:"v1", ResourceVersion:"11830", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/test-claim"
W0111 17:40:04.395679       1 controller.go:746] Retrying syncing claim "default/test-claim" because failures 4 < threshold 15
E0111 17:40:04.396629       1 controller.go:761] error syncing claim "default/test-claim": failed to provision volume with StorageClass "nfs-client": unable to create directory to provision new pv: mkdir /persistentvolumes/default-test-claim-pvc-37b7b714-39c1-43fb-82f2-8cc01b037288: permission denied
I0111 17:40:04.396020       1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"test-claim", UID:"37b7b714-39c1-43fb-82f2-8cc01b037288", APIVersion:"v1", ResourceVersion:"11830", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "nfs-client": unable to create directory to provision new pv: mkdir /persistentvolumes/default-test-claim-pvc-37b7b714-39c1-43fb-82f2-8cc01b037288: permission denied

I installed the ARM version using helm, like so:

helm install nfs-subdir-external-provisioner . --set nfs.server=192.168.0.194 --set nfs.path=/nfs-storage

My NFS server exports are set up as follows:

$ cat /etc/exports 
/export       192.168.0.0/24(rw,fsid=0,insecure,no_subtree_check,async)
/export/nfs-storage 192.168.0.0/24(rw,nohide,insecure,no_subtree_check,async)

[Enhancement] Make use of CSI Volume Health Check coming to beta with K8s 1.21

Hi all,

it would be awesome if the provisioner could make use of this new feature as described here:
https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/1432-volume-health-monitor#motivation

Motivation

Currently there is no way to monitor Persistent Volumes after they are provisioned in Kubernetes. This makes it very hard to debug and detect root causes when something happens to the volumes. An application may detect that it can no longer write to volumes. However, the root cause happened at the underlying storage system level. This requires application and infrastructure team jointly debug and find out what has triggered the problem. It could be that the volume runs out of capacity and needs an expansion. It could be that the volume was deleted by accident outside of Kubernetes. In the case of local storage, the local PVs may not be accessed any more due to the nodes break down. In either case, it will take lots of effort to find out what has happened at the infrastructure layer. It will also take complicated process to recover. Admin usually needs to intervene and fix the problem.

With health monitoring, unhealthy volumes can be detected and fixed early and therefore could prevent more serious problems to occur. While some problems may be corrected automatically by a Kubernetes controller, most problems may involve manual admin intervention or need to be fixed with specific application knowledge. In any case, volume health monitoring is a very important tool that would be beneficial to Kubernetes users.

There is no way to set a relative path and save the data after deletion PV

We would like to see a more flexible solution, the idea is to be able to create path templates through the parameters of the storage classes, as well as using annotations for PVC

There are also cases in which, after deleting a PV, we would like to save data on NFS share.
Examle:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: nfs.lgi.io/dynamic-provisioner
parameters:
  onDelete: "retain"
  pathPattern: "{pvc.namespace}/{pvc.annotations.nfs.lgi.io/storage-path}"
mountOptions:
  - lookupcache=pos
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hollow-producer
  namespace: lab5a
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs"
    nfs.lgi.io/storage-path: "hollow-export" # path which will created: /lab5a/hollow-export
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 500Gi
---

Can run 2+ replicas?

default replicas: 1
wanted replicas: 2+
problem: If the k8s node (with nfs-subdir-external-provisioner on it) is down, the nfs-provisioner StorageClass can't be used.

So, can we change 1 replica to 2+ replicas to avoid the nfs-provisioner being temporarily unavailable?

Don't use staging registry as stable images

Container images in the staging registry are only for dev/CI purposes and automatically get deleted after 30 days.

Official images should be promoted from staging to the release repository when new releases are cut.

/assign @kmova

No Releases, No Artifacts, No Helm Chart Support, No updated CHANGELOG

  • There are no tags or releases for this project.
  • The CHANGELOG is stale with last update for v2.0.1 on Aug 8, 2017. There are no corresponding tags, sha code, etc.
  • The original archived repo's last release was nfs-client-provisioner-v3.1.0-k8s1.11
  • The official Docker image quay.io/external_storage/nfs-client-provisioner has a release from 2 years ago for v3.1.0-k8s1.11
  • The official Helm chart lives in the soon-to-be-archived repo: https://github.com/helm/charts/tree/master/stable/nfs-client-provisioner

Avoid crashing when API is not reachable

I'm running a Kubernetes cluster with a single controller node, and from time to time this controller node might not be reachable (e.g. during upgrades or reboots).

Currently, when that happens, the provisioner "crashes" (exits with a non-zero status) and its pod goes into a crash loop as Kubernetes tries to restart it.

It would be great if the provisioner could handle this error in some way and not exit immediately (maybe retry for a certain time before exiting).

Make default directory permissions configurable

https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/blob/master/cmd/nfs-subdir-external-provisioner/provisioner.go#L112

The directory permissions on created volumes are hardcoded to 777.

Some applications (Nextcloud in my case) will fail if their data directories have too open permissions, and rightfully so. I would suggest an environment variable that can override the default directory permissions to, at least, 770.

I'll gladly open up a PR, but wanted to check in first if there would be downsides to this change that I'm unaware of.

nfs-subdir-external-provisioner NFS user and password

I need to provide a user and password for the NFS server.
How can I do that via the nfs-subdir-external-provisioner Helm chart?
I do not see such an option in this chart:
values.yaml

nfs:
server: xxxxxx
path: /xxx/xx/xx
mountOptions:

Thanks,
Alex

Cannot create volumes on OKD4.7 + FCOS 33

I'm trying to get this stood up and working on a newly installed OKD/OpenShift 4.7 cluster using an external NFS server. The storage class is there, the replicaset fires up a pod, but when I fire up a test claim, I get:

E0326 17:06:38.369380 1 controller.go:966] error syncing claim "3845c38b-be19-49a1-b941-b32d978edd82": failed to provision volume with StorageClass "nfs-client": unable to create directory to provision new pv: mkdir /persistentvolumes/default-test-claim-pvc-3845c38b-be19-49a1-b941-b32d978edd82: input/output error

I've been able to mount the NFS share on another machine, and create test text files on it, so I believe I've exported it correctly and have the right permissions. When I track down the worker node the provisioner is running on, I see what appears to be a mount:

sh-4.4# mount | grep tank [redacted]:/mnt/tank/k8s/nfs on /host/var/lib/kubelet/pods/32bbbf74-ad42-4d29-80ee-10b7a2adc18b/volumes/kubernetes.io~nfs/nfs-subdir-external-provisioner-root type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=[redacted],local_lock=none,addr=[redacted])

But when I actually try to interact w/ that mount point, I get input/output errors:

sh-4.4# ls /host/var/lib/kubelet/pods/32bbbf74-ad42-4d29-80ee-10b7a2adc18b/volumes/kubernetes.io~nfs/nfs-subdir-external-provisioner-root ls: reading directory '/host/var/lib/kubelet/pods/32bbbf74-ad42-4d29-80ee-10b7a2adc18b/volumes/kubernetes.io~nfs/nfs-subdir-external-provisioner-root': Input/output error

I'm not quite sure what to poke at next...

arm-image not available

I see from some closed issues and PRs that ARM (read: Raspberry Pis) should work fine.

I did the helm install on a k3s cluster; when trying to pull the image, k8s logs the following error:

Failed to pull image "gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0": rpc error: code = NotFound desc = failed to pull and unpack image "gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0": no match for platform in manifest: not found

uname -a
Linux kube1 5.4.83-v7l+ #1379 SMP Mon Dec 14 13:11:54 GMT 2020 armv7l GNU/Linux

Using Kubernetes v1.20.0, getting "unexpected error getting claim reference: selfLink was empty, can't make reference"

Using Kubernetes v1.20.0

When attempting to create PVCs, they all remain in "Pending" status

Doing a kubectl logs nfs-client-provisioner gives this:

I1210 14:42:01.396466 1 leaderelection.go:194] successfully acquired lease default/fuseim.pri-ifs
I1210 14:42:01.396534 1 controller.go:631] Starting provisioner controller fuseim.pri/ifs_nfs-client-provisioner-64b7476494-p4fcm_dcfca333-3af5-11eb-8248-5aed4ceb7af7!
I1210 14:42:01.496922 1 controller.go:680] Started provisioner controller fuseim.pri/ifs_nfs-client-provisioner-64b7476494-p4fcm_dcfca333-3af5-11eb-8248-5aed4ceb7af7!
I1210 14:42:01.497152 1 controller.go:987] provision "default/pvc1" class "managed-nfs-storage": started
I1210 14:42:01.497157 1 controller.go:987] provision "default/test-claim" class "managed-nfs-storage": started
E1210 14:42:01.500487 1 controller.go:1004] provision "default/pvc1" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
E1210 14:42:01.500502 1 controller.go:1004] provision "default/test-claim" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference

selfLink has been disabled in v1.20.0
kubernetes/enhancements#1164

nfs version 4 not working

NFS version 4 is not working on any Kubernetes installation. The client mounts only version 3, and Helm does not pass some mount options such as the NFS version. If I configure the server to serve only NFSv4, it also doesn't work.

nfs-subdir-external-provisioner as DaemonSet instead of Deployment

Hi!

First of all, thank you very much for this amazing project.

We are deploying this controller and we are seeing that, although replicaCount is set to a number greater than 1, all the replicas can be scheduled on the same node. The problem comes when that node goes down: all the applications that depend on the controller will fail until new controller pods are recreated.

Would it be possible to deploy the provisioner as a DaemonSet instead of a Deployment? This way, at least one pod would be present on every node and the issue would be less problematic. The remaining pods would only need to acquire leadership, which I guess takes less time than recreating the pods on another node.

Thanks again :)

Migrating from efs-provisioner

Deploying the now deprecated efs-provisioner leads you here as a replacement, but there is no migration path.

What I see as the only issue right now is that there is a required NFS_SERVER address, while EFS has one IP per Availability Zone. I've been working with kubernetes for years, but I'm new to AWS.

I tried using file-system-id.efs.aws-region.amazonaws.com as NFS_SERVER, but I get a mount failed: exit status 32 during scheduling.

MountVolume.SetUp failed for volume "nfs-subdir-external-provisioner-root" : mount failed: exit status 32

  • This seems to be similar to this issue in the old repository
  • I installed nfs-subdir-external-provisioner using Helm, following these instructions
  • The exact command I used:
$ helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner --set nfs.server=192.168.10.140 --set nfs.path=/volume1/hieudemo
  • The volume is always unmounted from any connected machines before running helm install

With Minikube

  • Minikube IP: 192.168.99.101
  • Network mode: NAT
  • The pod gets stuck in the ContainerCreating state
$ kubectl describe pod nfs-subdir-external-provisioner-6658f75474-tksrl
Name:           nfs-subdir-external-provisioner-6658f75474-tksrl
Namespace:      default
Priority:       0
Node:           minikube/192.168.99.101
Start Time:     Mon, 15 Mar 2021 11:19:33 +0700
Labels:         app=nfs-subdir-external-provisioner
                pod-template-hash=6658f75474
                release=nfs-subdir-external-provisioner
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/nfs-subdir-external-provisioner-6658f75474
Containers:
  nfs-subdir-external-provisioner:
    Container ID:   
    Image:          gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  cluster.local/nfs-subdir-external-provisioner
      NFS_SERVER:        192.168.10.140
      NFS_PATH:          /volume1/hieudemo
    Mounts:
      /persistentvolumes from nfs-subdir-external-provisioner-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nfs-subdir-external-provisioner-token-fdgvl (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  nfs-subdir-external-provisioner-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.10.140
    Path:      /volume1/hieudemo
    ReadOnly:  false
  nfs-subdir-external-provisioner-token-fdgvl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nfs-subdir-external-provisioner-token-fdgvl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    22m                  default-scheduler  Successfully assigned default/nfs-subdir-external-provisioner-6658f75474-tksrl to minikube
  Warning  FailedMount  9m4s (x3 over 20m)   kubelet            Unable to attach or mount volumes: unmounted volumes=[nfs-subdir-external-provisioner-root], unattached volumes=[nfs-subdir-external-provisioner-token-fdgvl nfs-subdir-external-provisioner-root]: timed out waiting for the condition
  Warning  FailedMount  114s (x18 over 22m)  kubelet            MountVolume.SetUp failed for volume "nfs-subdir-external-provisioner-root" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 192.168.10.140:/volume1/hieudemo /var/lib/kubelet/pods/1ae86e9a-a602-4f35-ae28-1b8ab6ac6439/volumes/kubernetes.io~nfs/nfs-subdir-external-provisioner-root
Output: mount.nfs: Operation not permitted
  Warning  FailedMount  1s (x7 over 18m)  kubelet  Unable to attach or mount volumes: unmounted volumes=[nfs-subdir-external-provisioner-root], unattached volumes=[nfs-subdir-external-provisioner-root nfs-subdir-external-provisioner-token-fdgvl]: timed out waiting for the condition

With kubeadm

  • 3 node on VM (1 master)
  • IP 192.168.156.2(master) + 192.168.156.3 + 192.168.156.4 (Network mode: NAT)
  • Install helm on master node
  • Install nfs-common on 3 nodes
  • The pod gets stuck in the ContainerCreating state
$ kubectl describe pod nfs-subdir-external-provisioner-6658f75474-j4h88
Name:           nfs-subdir-external-provisioner-6658f75474-j4h88
Namespace:      default
Priority:       0
Node:           kubenode01/192.168.56.3
Start Time:     Mon, 15 Mar 2021 05:06:06 +0000
Labels:         app=nfs-subdir-external-provisioner
                pod-template-hash=6658f75474
                release=nfs-subdir-external-provisioner
Annotations:    <none>
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/nfs-subdir-external-provisioner-6658f75474
Containers:
  nfs-subdir-external-provisioner:
    Container ID:   
    Image:          gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.0
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  cluster.local/nfs-subdir-external-provisioner
      NFS_SERVER:        192.168.10.140
      NFS_PATH:          /volume1/hieudemo
    Mounts:
      /persistentvolumes from nfs-subdir-external-provisioner-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nfs-subdir-external-provisioner-token-lspr6 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  nfs-subdir-external-provisioner-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.10.140
    Path:      /volume1/hieudemo
    ReadOnly:  false
  nfs-subdir-external-provisioner-token-lspr6:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nfs-subdir-external-provisioner-token-lspr6
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                   From               Message
  ----     ------       ----                  ----               -------
  Normal   Scheduled    4m49s                 default-scheduler  Successfully assigned default/nfs-subdir-external-provisioner-6658f75474-j4h88 to kubenode01
  Warning  FailedMount  2m46s                 kubelet            Unable to attach or mount volumes: unmounted volumes=[nfs-subdir-external-provisioner-root], unattached volumes=[nfs-subdir-external-provisioner-root nfs-subdir-external-provisioner-token-lspr6]: timed out waiting for the condition
  Warning  FailedMount  37s (x10 over 4m49s)  kubelet            MountVolume.SetUp failed for volume "nfs-subdir-external-provisioner-root" : mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 192.168.10.140:/volume1/hieudemo /var/lib/kubelet/pods/4285e0f8-91ba-4a9f-8a68-f43bbbd0846d/volumes/kubernetes.io~nfs/nfs-subdir-external-provisioner-root
Output: mount.nfs: access denied by server while mounting 192.168.10.140:/volume1/hieudemo
  Warning  FailedMount  29s  kubelet  Unable to attach or mount volumes: unmounted volumes=[nfs-subdir-external-provisioner-root], unattached volumes=[nfs-subdir-external-provisioner-token-lspr6 nfs-subdir-external-provisioner-root]: timed out waiting for the condition
  • The server is a Synology NAS, configured using this guide

    • The NAS only allows my machine's IP to access the folder
  • The mount command ran successfully on the local machine

$ sudo mount -t nfs 192.168.10.140:/volume1/hieudemo mount-vol/
  • On the local machine (Ubuntu 20.04), without the nfs-common package installed, the same error appears:
mount.nfs: Operation not permitted
  • How do I investigate further than Operation not permitted?

Deploy with Helm on OKD v4.6.0-0.okd-2021-01-17-185703 fails because of kubeVersion restrictions

Helm install fails in OKD v4.6 because of the k8s version restrictions defined in Chart.yaml

helm install nfs-client-provisioner . -n nfs-client-provisioner --set nfs.server=xxx.xxx.xxx.xxx,nfs.path=/xxxxx,storageClass.defaultClass=true,replicaCount=2
Error: chart requires kubeVersion: >=1.9.0 <1.20.0 which is incompatible with Kubernetes v1.19.2-1019+3b012051f912a6-dirty

Removing the line kubeVersion: ">=1.9.0 <1.20.0" in Chart.yaml makes it deploy and work (after giving the right permissions to the serviceaccount nfs-client-provisioner-nfs-subdir-external-provisioner, of course)

oc version
Client Version: 4.6.9
Server Version: 4.6.0-0.okd-2021-01-17-185703
Kubernetes Version: v1.19.2-1019+3b012051f912a6-dirty

helm version
version.BuildInfo{Version:"v3.4.1", GitCommit:"c4e74854886b2efe3321e185578e6db9be0a6e29", GitTreeState:"clean", GoVersion:"go1.14.11"}

PV provisioning fails for storageclass with volumeBindingMode as 'WaitForFirstConsumer'

When using any storage class with volumeBindingMode as 'WaitForFirstConsumer', the nfs-subdir-external-provisioner fails to provision PV for the given PVC. The PVC remains in the 'Pending' state and gives the error: 'failed to get target node: nodes "worker07-dev-cluster" is forbidden: User "system:serviceaccount:nfs-test:nfs-subdir-external-provisioner" cannot get resource "nodes" in API group "" at the cluster scope'.

Notes:

  1. Helm chart is installed in namespace: 'nfs-test'.
  2. Helm chart is named as: 'nfs-subdir-external-provisioner'.

K8S Resources to reproduce the issue:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: non-immediate-sc
provisioner: cluster.local/nfs-subdir-external-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: non-immediate-sc-claim
spec:
  storageClassName: non-immediate-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi
---

apiVersion: v1
kind: Pod
metadata:
  name: test-non-immediate-sc
spec:
  containers:
    - name: test
      image: busybox:latest
      imagePullPolicy: Always
      command: ["sh"]
      args: ["-c", "tail -f /dev/null"]
      volumeMounts:
        - name: data
          mountPath: /app/data/
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: non-immediate-sc-claim
---

Kubernetes events:

$ kubectl get events --sort-by '.metadata.creationTimestamp'
LAST SEEN   TYPE      REASON                  OBJECT                                                      MESSAGE
25m         Warning   ProvisioningFailed      persistentvolumeclaim/non-immediate-sc-claim                failed to get target node: nodes "worker07-dev-cluster" is forbidden: User "system:serviceaccount:nfs-test:nfs-subdir-external-provisioner" cannot get resource "nodes" in API group "" at the cluster scope
25m         Normal    ExternalProvisioning    persistentvolumeclaim/non-immediate-sc-claim                waiting for a volume to be created, either by external provisioner "cluster.local/nfs-subdir-external-provisioner" or manually created by system administrator
29m         Normal    WaitForFirstConsumer    persistentvolumeclaim/non-immediate-sc-claim                waiting for first consumer to be created before binding
25m         Warning   FailedScheduling        pod/test-non-immediate-sc                                   error while running "VolumeBinding" prebind plugin for pod "test-non-immediate-sc": Failed to bind volumes: pod "default/test-non-immediate-sc" does not exist any more
25m         Warning   FailedScheduling        pod/test-non-immediate-sc                                   skip schedule deleting pod: default/test-non-immediate-sc
19m         Normal    WaitForFirstConsumer    persistentvolumeclaim/non-immediate-sc-claim                waiting for first consumer to be created before binding
4m23s       Normal    ExternalProvisioning    persistentvolumeclaim/non-immediate-sc-claim                waiting for a volume to be created, either by external provisioner "cluster.local/nfs-subdir-external-provisioner" or manually created by system administrator
3m40s       Warning   ProvisioningFailed      persistentvolumeclaim/non-immediate-sc-claim                failed to get target node: nodes "worker08-dev-cluster" is forbidden: User "system:serviceaccount:nfs-test:nfs-subdir-external-provisioner" cannot get resource "nodes" in API group "" at the cluster scope
9m24s       Warning   FailedScheduling        pod/test-non-immediate-sc                                   error while running "VolumeBinding" prebind plugin for pod "test-non-immediate-sc": Failed to bind volumes: timed out waiting for the condition

Fix: This is caused by the service account not having adequate RBAC permissions, specifically to get nodes in the cluster. For storage classes with volumeBindingMode set to 'WaitForFirstConsumer', PVs are provisioned only once a consumer appears, so the service account needs permission to get the node on which the respective pod is scheduled.
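
A sketch of the ClusterRole rule that fix implies (the same addition appears in the clusterrole.yaml issue below; apply it to the ClusterRole bound to the provisioner's service account in your release):

- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch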

Unable to supply `nodeSelector` on command line

Presumably I'm doing something wrong but I tried to supply a value for nodeSelector in the following manner:

helm install foo stable/nfs-client-provisioner --set nodeSelector='{"node-role.kubernetes.io/master": ""}' ...

which results in the following error:

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.nodeSelector): invalid type for io.k8s.api.core.v1.PodSpec.nodeSelector: got "array", expected "map"

Could you please provide a concrete example of the correct way to do this?

I was able to work around the problem by doing helm pull, editing values.yaml and then installing the chart from the local directory.

clusterrole.yaml not complete, access to nodes are missing

I needed to add the following to the ClusterRole to get PVCs bound.
Header of the ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    meta.helm.sh/release-name: nfs-subdir-external-provisioner
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2021-03-31T17:23:03Z"
  labels:
    app: nfs-subdir-external-provisioner
    app.kubernetes.io/managed-by: Helm
    chart: nfs-subdir-external-provisioner-4.0.6
    heritage: Helm
    release: nfs-subdir-external-provisioner

I added the following:

- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch

Before adding the nodes resources the following error was received:

andreas@lubuntu2:~/helm/nfs-client-provisioner$ kubectl describe pvc test-claim
Name:          test-claim
Namespace:     default
StorageClass:  managed-nfs-storage
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: cluster.local/nfs-subdir-external-provisioner
               volume.kubernetes.io/selected-node: vaio2
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       test-pod
Events:
  Type     Reason                Age                   From                                                                                                                                 Message
  ----     ------                ----                  ----                                                                                                                                 -------
  Normal   WaitForFirstConsumer  18m                   persistentvolume-controller                                                                                                          waiting for first consumer to be created before binding
  Normal   ExternalProvisioning  3m43s (x63 over 18m)  persistentvolume-controller                                                                                                          waiting for a volume to be created, either by external provisioner "cluster.local/nfs-subdir-external-provisioner" or manually created by system administrator
  Warning  ProvisioningFailed    2m59s (x7 over 18m)   cluster.local/nfs-subdir-external-provisioner_nfs-subdir-external-provisioner-74577d4bcc-f4c26_726b8f21-71cb-4fc0-ac5c-34d7f6509ae4  failed to get target node: nodes "vaio2" is forbidden: User "system:serviceaccount:default:nfs-subdir-external-provisioner" cannot get resource "nodes" in API group "" at the cluster scope

Helm chart does not support pathPattern

Hi all,

From what I can see there is currently no way of setting a pathPattern with the Helm chart. This would be quite handy for us, so if there is no reason against it, I could create a PR for this.

Cheers

Unable to deploy on OpenShift 4.6

Has anyone tried this on OpenShift (OCP) 4.6 lately? I tried installing via Helm and also deploying this manually but am seeing the same results.

# oc get events
LAST SEEN   TYPE      REASON              OBJECT                                        MESSAGE
9m          Warning   FailedCreate        replicaset/nfs-client-provisioner-bc57fff48   Error creating: pods "nfs-client-provisioner-bc57fff48-" is forbidden: unable to validate against any security context constraint: [spec.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used]
14m         Normal    ScalingReplicaSet   deployment/nfs-client-provisioner             Scaled up replica set nfs-client-provisioner-bc57fff48 to 1

# oc get all
NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nfs-client-provisioner   0/1     0            0           20m

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/nfs-client-provisioner-bc57fff48   1         0         0       20m

Service account info

# oc describe sa nfs-client-provisioner 
Name:                nfs-client-provisioner
Namespace:           devnfs
Labels:              <none>
Annotations:         <none>
Image pull secrets:  nfs-client-provisioner-dockercfg-zksx4
Mountable secrets:   nfs-client-provisioner-dockercfg-zksx4
                     nfs-client-provisioner-token-5wpzh
Tokens:              nfs-client-provisioner-token-5wpzh
                     nfs-client-provisioner-token-9pxq8
Events:              <none>

Role bindings seem to be assigned

# kubectl get rolebindings,clusterrolebindings --all-namespaces  -o custom-columns='KIND:kind,NAMESPACE:metadata.namespace,NAME:metadata.name,SERVICE_ACCOUNTS:subjects[?(@.kind=="ServiceAccount")].name' |grep devnfs
RoleBinding          devnfs                                             admin                                                                            <none>
RoleBinding          devnfs                                             leader-locking-nfs-client-provisioner                                            nfs-client-provisioner
RoleBinding          devnfs                                             system:deployers                                                                 deployer
RoleBinding          devnfs                                             system:image-builders                                                            builder
RoleBinding          devnfs                                             system:image-pullers                                                             <none>
RoleBinding          devnfs                                             use-scc-hostmount-anyuid                                                         nfs-client-provisioner

Roles and SCC

# oc get roles
NAME                                    CREATED AT
leader-locking-nfs-client-provisioner   2021-01-03T13:25:34Z
use-scc-hostmount-anyuid                2021-01-03T13:25:54Z
[root@tatooine deploy]# oc describe role leader-locking-nfs-client-provisioner
Name:         leader-locking-nfs-client-provisioner
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  endpoints  []                 []              [get list watch create update patch]

# oc describe role use-scc-hostmount-anyuid
Name:         use-scc-hostmount-anyuid
Labels:       <none>
Annotations:  <none>
PolicyRule:
  Resources                                         Non-Resource URLs  Resource Names      Verbs
  ---------                                         -----------------  --------------      -----
  securitycontextconstraints.security.openshift.io  []                 [hostmount-anyuid]  [use]

# oc get scc
NAME               PRIV    CAPS         SELINUX     RUNASUSER          FSGROUP     SUPGROUP    PRIORITY     READONLYROOTFS   VOLUMES
anyuid             false   <no value>   MustRunAs   RunAsAny           RunAsAny    RunAsAny    10           false            ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"]
hostaccess         false   <no value>   MustRunAs   MustRunAsRange     MustRunAs   RunAsAny    <no value>   false            ["configMap","downwardAPI","emptyDir","hostPath","persistentVolumeClaim","projected","secret"]
hostmount-anyuid   false   <no value>   MustRunAs   RunAsAny           RunAsAny    RunAsAny    <no value>   false            ["configMap","downwardAPI","emptyDir","hostPath","nfs","persistentVolumeClaim","projected","secret"]
hostnetwork        false   <no value>   MustRunAs   MustRunAsRange     MustRunAs   MustRunAs   <no value>   false            ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"]
node-exporter      true    <no value>   RunAsAny    RunAsAny           RunAsAny    RunAsAny    <no value>   false            ["*"]
nonroot            false   <no value>   MustRunAs   MustRunAsNonRoot   RunAsAny    RunAsAny    <no value>   false            ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"]
privileged         true    ["*"]        RunAsAny    RunAsAny           RunAsAny    RunAsAny    <no value>   false            ["*"]
restricted         false   <no value>   MustRunAs   MustRunAsRange     MustRunAs   RunAsAny    <no value>   false            ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"]

The Helm install worked fine on OCP 4.5. However, with a fresh installation of OCP 4.6, I'm not able to deploy this either via Helm or manually. Any input would be greatly appreciated! Thanks.

ROX(readonlymany) does not work

I created a PVC with access modes = ROX and mounted this PVC to a pod, I found I can write data to this volume, so it's not read-only.

I'm using nfs-client-provisioner:v3.1.0-k8s1.11.

  • Is it possible to support ROX?
  • If ROX is not supported, can nfs-client-provisioner reject/fail the PVC request with ROX?

Test info for reference:

root@i-os8hfxab:~/stone
$ kubectl get pvc nfs-rom -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"nfs-rom","namespace":"default"},"spec":{"accessModes":["ReadOnlyMany"],"resources":{"requests":{"storage":"1Gi"}},"storageClassName":"nfs-client"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: cluster.local/nfs-client-provisioner
  creationTimestamp: "2020-11-09T01:35:11Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: nfs-rom
  namespace: default
  resourceVersion: "27851685"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/nfs-rom
  uid: 3e774225-3088-4390-849c-7f621ecb8ae0
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs-client
  volumeMode: Filesystem
  volumeName: pvc-3e774225-3088-4390-849c-7f621ecb8ae0
status:
  accessModes:
  - ReadOnlyMany
  capacity:
    storage: 1Gi
  phase: Bound
root@i-os8hfxab:~/stone
$ kubectl get sc nfs-client -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    meta.helm.sh/release-name: nfs-client-provisioner
    meta.helm.sh/release-namespace: kube-system
    storageclass.kubesphere.io/support-snapshot: "false"
  creationTimestamp: "2020-09-22T07:47:46Z"
  labels:
    app: nfs-client-provisioner
    app.kubernetes.io/managed-by: Helm
    chart: nfs-client-provisioner-1.2.8
    heritage: Helm
    release: nfs-client-provisioner
  name: nfs-client
  resourceVersion: "70982"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/nfs-client
  uid: a1a0b886-bc0d-45e8-931d-89f3c536942f
parameters:
  archiveOnDelete: "true"
provisioner: cluster.local/nfs-client-provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
root@i-os8hfxab:~/stone
$ nfsstat -m
/var/lib/kubelet/pods/3333727d-24f8-4b78-b1a7-1fc44125afb2/volumes/kubernetes.io~nfs/nfs-client-root from 192.168.0.27:/mnt/csi
 Flags:	rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.40,local_lock=none,addr=192.168.0.27

/var/lib/kubelet/pods/e6774bb2-87e9-4a69-9717-6c599a8978bb/volumes/kubernetes.io~nfs/pvc-3e774225-3088-4390-849c-7f621ecb8ae0 from 192.168.0.27:/mnt/csi/default-nfs-rom-pvc-3e774225-3088-4390-849c-7f621ecb8ae0
 Flags:	rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.40,local_lock=none,addr=192.168.0.27

Kubectl delete pv and pvc problem

Hello,

I installed the provisioner and deployed the test PVC and test pod, but afterwards, when I want to delete the PV and PVC, they are not deleted. They remain in the Terminating state:

NAME                               STATUS        VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/test-claim   Terminating   pvc-8b4c4dc6-67f1-4d3a-9cf1-7a7f0b51ba37   1Mi        RWX            nfs-storage    28m

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                STORAGECLASS       REASON   AGE
persistentvolume/pvc-8b4c4dc6-67f1-4d3a-9cf1-7a7f0b51ba37   1Mi        RWX            Delete           Terminating   default/test-claim   nfs-storage                 28m

I have a debian 10 nfs server for nfs-storage.

Where is the problem ?

Regards

What is the purpose of the leader election mechanism

Hello,
I have tried this NFS provisioner and it seems to meet a lot of my needs, especially the possibility of having a single folder per PV(C), which matches the multi-tenant architecture we have very well.
I have tried to install the Helm chart with 2 replicas, and from what I understand the second replica waits for the current leader to fail before answering requests.
What is the gain of having a second pod waiting for the first one to fail, compared to a solution with a single pod that restarts after a possible failure?
Is it a matter of availability? Because from what I have seen, the restart seems to be even faster than the time the leader election takes.

OCP3.11 error: can not perform 'use' on 'securitycontextconstraints' in group '' (with fix)

When deploying on OpenShift, the README explains that the "hostmount-anyuid" SCC needs to be granted to the service account. When I run the provided command in my OCP 3.11 cluster, I get this error message:

$ oc create role use-scc-hostmount-anyuid --verb=use --resource=scc --resource-name=hostmount-anyuid -n $NAMESPACE
error: can not perform 'use' on 'securitycontextconstraints' in group ''

The securitycontextconstraints resource is in the api group security.openshift.io. I appended the following to rbac.yaml to get the role added correctly.

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: use-scc-hostmount-anyuid
  namespace: default
rules:
  - apiGroups: ["security.openshift.io"]
    resources: ["securitycontextconstraints"]
    resourceNames: ["hostmount-anyuid"]
    verbs: ["use"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: use-scc-hostmount-anyuid
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: use-scc-hostmount-anyuid
  apiGroup: rbac.authorization.k8s.io

support multiple (many) namespaces

This is a continuation, more or less, of issue 1210 in the old repo.
I even had a PR that implements what is described here:
kubernetes-retired/external-storage#1210 (comment)

Basically, if you need to export the same NFS mount to a large number of namespaces, now you would do this:

  1. create a PVC for the mount, using whatever NFS provisioner you want
  2. run a second NFS provisioner in the new alias mode. It also needs to mount the PVC from the previous step.
  3. In every namespace that needs it, create a new PVC with the new provisioner. The namespace users don't need to know or care about the details of where the original mount lives, the NFS provisioner forwards those when needed.

Should I revive PR 1318?

Using multiple NFS server

I want to use 2 NFS servers. With Helm, I launch the first nfs-client:

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=172.20.20.93 \
--set nfs.path=/nfs

and another

helm install nfsbis-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
--set nfs.server=172.20.20.39 \
--set nfs.path=/nfs \
--set storageClass.name=nfs-clientbis \
--set storageClass.provisionerName=nfsbis-subdir-external-provisioner
When I create a volume using the second class (nfs-clientbis), I see the volume in the pod logs, but it is created on the first nfs-client.

I don't know if it's possible to do that
thanks
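
For what it is worth, a quick way to check which provisioner picks up a claim is to create a PVC that explicitly names the second storage class and then look at the resulting PV (the claim name below is made up):

cat <<'EOF' | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim-bis
spec:
  storageClassName: nfs-clientbis
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
# The CLASS column shows which storage class (and therefore which provisioner) served each PV.
kubectl get pv -o custom-columns=NAME:.metadata.name,CLASS:.spec.storageClassName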

Disabling or delaying leader elections

I am running the nfs-client-provisioner in a simple self-hosted cluster. I am trying to narrow down excessive control plane disk writes, and traced lots of them back to etcd writing to /registry/services/endpoints/FOO/cluster.local-nfs-client-nfs-client-provisioner.

I got 896 writes to that key in 30 minutes, so it looks like roughly one every two seconds, which is quite a lot (actually half of the current writes, after already disabling leader election for kube-scheduler and kube-controller-manager).

I tried to find a way of disabling or tweaking the leader elections timeout but did not find a way to do so. It would be great if this could be offered for simple deployments that don't need the redundancy of multiple replicas.

Plex can't write /config on NFS

Hello,

I am trying to enable persistent config storage on NFS, but Plex does not have the rights to write to /config :/

I have created the NFS storage class with nfs-subdir-external-provisioner:

nfs:
  server: 192.168.1.10
  path: /volume1/k3s

storageClass:
  name: nfs-k3s
  accessModes: ReadWriteMany
  reclaimPolicy: Retain

I have created the volume + claim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: plex-nfs
spec:
  capacity:
    storage: 30Gi
  storageClassName: nfs-k3s
  accessModes:
  - ReadWriteMany
  nfs:
    server: 192.168.1.10
    path: /volume1/k3s/plex
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: plex-data
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: nfs-k3s
  resources:
    requests:
      storage: 30Gi

And changed the values.yaml of Plex:

...
persistence:
  config:
    enabled: true
    emptyDir:
      enabled: true
    mountPath: /config 
    accessMode: ReadWriteMany
    storageClass: "nfs-k3s"
    size: 30Gi
    existingClaim: "plex-data"
...

And I always get this error:
PMS: failure detected. Read/write access is required for path: /config/Library/Application Support/Plex Media Server

Any idea?
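
A hedged note, not a confirmed fix: errors like this are usually about UNIX ownership of the exported subdirectory rather than the provisioner itself, since the directory is created as root while Plex runs as a non-root UID. A quick check on the NFS server (paths from the values above; the UID/GID are assumptions, use your Plex PUID/PGID):

ls -ldn /volume1/k3s/plex                 # check which numeric UID/GID owns the directory
chown -R 1000:1000 /volume1/k3s/plex      # assumed Plex UID:GID; or chmod -R 0777 for a quick test only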

How to lock down access?

Is it possible to lock down access so that only the privileged NFS container (the provisioner?) can mount and access the NFS share directly?

I don't want other containers to be able to mount the share themselves.

Perhaps this is possible with a NetworkPolicy?

Would love to see an example of this in the readme.
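
For reference, a minimal sketch of the NetworkPolicy idea (not from the README): deny pod egress to the NFS server except for the provisioner's pods, which carry the app=nfs-client-provisioner label used in the deployment YAMLs. Note that kubelet performs NFS volume mounts from the host network namespace, so this restricts in-container NFS clients but does not by itself stop a pod spec from declaring an nfs: volume; admission or RBAC controls are needed for that.

cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-direct-nfs
spec:
  # Select every pod in the namespace except the provisioner itself.
  podSelector:
    matchExpressions:
      - key: app
        operator: NotIn
        values: ["nfs-client-provisioner"]
  policyTypes:
    - Egress
  egress:
    # Allow egress anywhere except to the NFS server.
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - <YOUR_NFS_SERVER_IP>/32
EOF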

Support for ARM64 architecture

I see there is support for ARM v7 in the Makefile. I am currently trying to get this working on my Rock64 Armbian k3s cluster. I can help test and work on the pipeline; I just need to be pointed in a direction. Currently I am thinking about building and deploying from a fork to another quay repo just to have something working.

Thank you,
Cody Moss
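
In case it helps anyone experimenting in the meantime, one possible way to cross-build an arm64 image for testing (a sketch, not the project's official pipeline; the registry name is a placeholder and buildx/QEMU emulation must already be set up):

docker buildx build --platform linux/arm64 \
  -t <YOUR_REGISTRY>/nfs-subdir-external-provisioner:arm64-test \
  --push .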

Helm does not support archiveOnDelete when setting pathPattern

I override the values like this:

nfs:
  server: xxx.xxx.xxx.xxx
  path: /mnt/share

storageClass:
  pathPattern: "${.PVC.namespace}/${.PVC.name}"
  archiveOnDelete: "true"
  onDelete: ~

But because of pathPattern (maybe because of the subdirectories?), it does not archive anything. When I create the claim and pod, the directory for the claim is created. When I remove the claim (and pod), the directory is still there. I would have expected something like archive-default/claim-name, but nothing happened.

Example pod:

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  volumes:
    - name: local-storage
      persistentVolumeClaim:
        claimName: test-claim
  containers:
    - name: hello-container
      image: busybox
      command:
        - sh
        - -c
        - 'while true; do echo "`date` [`hostname`] Hello from OpenEBS Local PV." >> /mnt/store/greet.txt; sleep $(($RANDOM % 5 + 300)); done'
      volumeMounts:
        - mountPath: /mnt/store
          name: local-storage

Example claim:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
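
For anyone reproducing this, a way to see what actually happened on the server after deleting the claim (paths follow the values above; archived directories are normally prefixed with archived-, but the provisioner logs are the authoritative source):

# On the NFS server:
ls -la /mnt/share/default/              # the pathPattern directory for this namespace/claim
ls -la /mnt/share/ | grep -i archive    # any archived copies created on delete
# On the cluster (release name is a placeholder):
kubectl logs deploy/<release-name>      # the logs state whether the directory was archived, deleted, or retained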

unable to successfully deploy PVC on raspberry pi

Hi,

I am trying to deploy Nextcloud on k3s and store my data on an NFS share.

I already set up k3s on a (for now) single Raspberry Pi via k3s-ansible, which worked perfectly fine. I tried to set up a PV+PVC for an NFS share and eventually found that I should use this NFS provisioner.

I have

  • setup a NFS mount (verified via showmount -e IP) and was able to mount it manually via mount.nfs and mount -t nfs
  • cloned this repository
  • modified deploy/helm/values.yaml (nfs.server, nfs.path, repository: quay.io/external_storage/nfs-client-provisioner-arm)
  • also changed kubeVersion to ">=1.9.0 <1.21.0" in deploy/helm/Chart.yaml because k3s version is "v1.20.0+k3s2"
  • deployed the helm chart in a namespace "nfs"

The deployment seems to be working fine.
When I try to deploy deploy/test-claim.yml, all I get is:

Normal ExternalProvisioning 4m33s (x263 over 69m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "fuseim.pri/ifs" or manually created by system administrator

$ kubectl describe deployment nfs-subdir-external-provisioner -n nfs
Name:               nfs-subdir-external-provisioner
Namespace:          nfs
CreationTimestamp:  Wed, 13 Jan 2021 19:31:07 +0100
Labels:             app=nfs-subdir-external-provisioner
                    app.kubernetes.io/managed-by=Helm
                    chart=nfs-subdir-external-provisioner-3.0.0
                    heritage=Helm
                    release=nfs-subdir-external-provisioner
Annotations:        deployment.kubernetes.io/revision: 1
                    meta.helm.sh/release-name: nfs-subdir-external-provisioner
                    meta.helm.sh/release-namespace: nfs
Selector:           app=nfs-subdir-external-provisioner,release=nfs-subdir-external-provisioner
Replicas:           1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:       Recreate
MinReadySeconds:    0
Pod Template:
  Labels:           app=nfs-subdir-external-provisioner
                    release=nfs-subdir-external-provisioner
  Service Account:  nfs-subdir-external-provisioner
  Containers:
   nfs-subdir-external-provisioner:
    Image:      quay.io/external_storage/nfs-client-provisioner-arm:v3.1.0-k8s1.11
    Port:       <none>
    Host Port:  <none>
    Environment:
      PROVISIONER_NAME:  cluster.local/nfs-subdir-external-provisioner
      NFS_SERVER:        192.168.0.227
      NFS_PATH:          /mnt/data
    Mounts:
      /persistentvolumes from nfs-subdir-external-provisioner-root (rw)
  Volumes:
   nfs-subdir-external-provisioner-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.0.227
    Path:      /mnt/data
    ReadOnly:  false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nfs-subdir-external-provisioner-598475ff9b (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  65s   deployment-controller  Scaled up replica set nfs-subdir-external-provisioner-598475ff9b to 1
$ kubectl describe pvc test-claim -n nfs
Name:          test-claim
Namespace:     nfs
StorageClass:  nfs-client
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: cluster.local/nfs-subdir-external-provisioner
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                Age               From                         Message
  ----    ------                ----              ----                         -------
  Normal  ExternalProvisioning  7s (x3 over 12s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "cluster.local/nfs-subdir-external-provisioner" or manually created by system administrator

I tried to deploy without Helm, but I still hit the same issue.

Thank you for your help and sorry if I am missing any information. Quite new to the k8s universe ;)
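
One thing worth checking here (a sketch based on the output above): the events mention two different provisioner names at different points ("fuseim.pri/ifs" and "cluster.local/nfs-subdir-external-provisioner"), and a claim is only served when its StorageClass names exactly the provisioner the pod runs with:

# Compare the StorageClass's provisioner with the PROVISIONER_NAME the pod runs with; they must match exactly.
kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner
kubectl -n nfs get deploy nfs-subdir-external-provisioner \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="PROVISIONER_NAME")].value}{"\n"}'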

pod has unbound immediate PersistentVolumeClaims

Can't get it to work. The test-pod fails with:

Warning FailedScheduling 15s (x2 over 15s) default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.

I have done the following:

1. I have exported the folder in /etc/exports as:

/export         192.168.176.0/24(rw,sync,fsid=0,crossmnt,no_subtree_check,no_root_squash,sec=sys)
/export/example 192.168.176.0/24(rw,sync,no_subtree_check,no_root_squash,sec=sys)

2. I have verified that this NFS export works by connecting to it from another machine and I am able to create files in it.

3. I have used Helm to install it and provided the following parameters:

--set nfs.server=192.168.176.131 --set nfs.path=/export/example

The nfs pod is running. describe pod output:

Name:         nfs-subdir-external-provisioner-797d858c5c-zvrvb
Namespace:    default
Priority:     0
Node:         ubuntu/192.168.176.131
Start Time:   Thu, 29 Apr 2021 21:10:01 +0000
Labels:       app=nfs-subdir-external-provisioner
              pod-template-hash=797d858c5c
              release=nfs-subdir-external-provisioner
Annotations:  cni.projectcalico.org/podIP: 10.1.243.199/32
              cni.projectcalico.org/podIPs: 10.1.243.199/32
Status:       Running
IP:           10.1.243.199
IPs:
  IP:           10.1.243.199
Controlled By:  ReplicaSet/nfs-subdir-external-provisioner-797d858c5c
Containers:
  nfs-subdir-external-provisioner:
    Container ID:   containerd://6588820a7513a3815c65711fe6e3310e18d83a9f817d701de3bf13e9f8dfb9dc
    Image:          k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
    Image ID:       k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner@sha256:63d5e04551ec8b5aae83b6f35938ca5ddc50a88d85492d9731810c31591fa4c9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 29 Apr 2021 21:10:01 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      PROVISIONER_NAME:  cluster.local/nfs-subdir-external-provisioner
      NFS_SERVER:        192.168.176.131
      NFS_PATH:          /export/example
    Mounts:
      /persistentvolumes from nfs-subdir-external-provisioner-root (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from nfs-subdir-external-provisioner-token-j559x (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  nfs-subdir-external-provisioner-root:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    192.168.176.131
    Path:      /export/example
    ReadOnly:  false
  nfs-subdir-external-provisioner-token-j559x:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nfs-subdir-external-provisioner-token-j559x
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  7m43s  default-scheduler  Successfully assigned default/nfs-subdir-external-provisioner-797d858c5c-zvrvb to ubuntu
  Normal  Pulled     7m43s  kubelet            Container image "k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2" already present on machine
  Normal  Created    7m43s  kubelet            Created container nfs-subdir-external-provisioner
  Normal  Started    7m43s  kubelet            Started container nfs-subdir-external-provisioner

4. I have deployed the example:

kubectl create -f deploy/test-claim.yaml -f deploy/test-pod.yaml

But it fails with "pod has unbound immediate PersistentVolumeClaims". describe pod output:

Name:         test-pod
Namespace:    default
Priority:     0
Node:         <none>
Labels:       <none>
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  test-pod:
    Image:      gcr.io/google_containers/busybox:1.24
    Port:       <none>
    Host Port:  <none>
    Command:
      /bin/sh
    Args:
      -c
      touch /mnt/SUCCESS && exit 0 || exit 1
    Environment:  <none>
    Mounts:
      /mnt from nfs-pvc (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-d9njl (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  nfs-pvc:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  test-claim
    ReadOnly:   false
  default-token-d9njl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-d9njl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  1s (x12 over 10m)  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.

I have also tried setting folder permissions for the exported NFS share as 777 but that did not help.

The nfsstat -m reports the following:

$ nfsstat -m
/var/snap/microk8s/common/var/lib/kubelet/pods/98a24993-b7b9-4ebd-9d98-bd11aaa87aaa/volumes/kubernetes.io~nfs/nfs-subdir-external-provisioner-root from 192.168.176.131:/export/example
 Flags: rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,port=2049,timeo=600,retrans=2,sec=sys,mountaddr=192.168.176.131,mountvers=3,mountport=35741,mountproto=tcp,local_lock=none,addr=192.168.176.131

What am I doing wrong?
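
A hedged next step for narrowing this down: the scheduler event only says the claim is unbound, so the more useful signals are the claim's own events and the provisioner's logs (names follow the output above; nfs-client is the chart's default class name):

kubectl describe pvc test-claim                       # look for Provisioning/ProvisioningFailed events
kubectl logs deploy/nfs-subdir-external-provisioner   # errors here usually point at NFS mount or permission problems
kubectl get storageclass nfs-client -o yaml           # confirm the class exists and names the right provisioner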

[NOOB Question] Cannot see PV

This is brand new to me. From reading the docs, I am deducing that when I deploy this as a Helm chart, a PV should be created. (Perhaps this is where I'm going wrong.) I see that if I don't use Helm, in step 4 I need to create my own provisioner. I used a variation of the test-claim.yaml as follows:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

But when I do a kubectl get pvc I'm getting the following:
NAME   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
nfs    Pending                                      managed-nfs-storage   7m53s

And it never actually binds. What am I doing wrong please? TIA!
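
A small diagnostic sketch that may help here: a PV is only created dynamically after a PVC requests a StorageClass the provisioner actually owns, so compare the class name in the claim (managed-nfs-storage) with what the chart installed (nfs-client by default):

kubectl get storageclass    # list the classes that actually exist and their provisioners
kubectl describe pvc nfs    # the events explain why the claim is still Pending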

Repository certificate problem?

Hello,

I get a certificate error when I try to deploy:

  Warning  Failed     <invalid> (x2 over 0s)  kubelet            Failed to pull image "gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.1": rpc error: code = Unknown desc = failed to pull and unpack image "gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.1": failed to resolve reference "gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.1": failed to do request: Head https://gcr.io/v2/k8s-staging-sig-storage/nfs-subdir-external-provisioner/manifests/v4.0.1: x509: certificate has expired or is not yet valid

NB: I have the same problem with git clone; there I can force it, but how do I force it with Kubernetes?

Regards
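
A hedged aside: "certificate has expired or is not yet valid" during image pulls is very often a wrong clock on the node rather than a bad registry certificate. A quick check from the affected node:

date -u                                                   # compare with the real UTC time
echo | openssl s_client -connect gcr.io:443 -servername gcr.io 2>/dev/null \
  | openssl x509 -noout -dates                            # prints the certificate's notBefore/notAfter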

Does the helm chart contain the updated image tag?

Hi!

Is it possible that the helm chart contains an image tag that does not correspond to the repo tag?

I see 4.0.2 in values.yaml, but this feature does not work for me:

b8e2036

But when I build and push it from a fork to my own repository, it works fine.
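
To see which image tag the chart actually deployed on your cluster versus what you overrode at install time (deployment and release names as in the earlier reports; adjust to yours):

kubectl get deploy nfs-subdir-external-provisioner \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'   # the image the chart rendered
helm get values nfs-subdir-external-provisioner                   # any image overrides applied at install time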

An option to reuse provisioned subdirs

The current identity for a provisioned subdir includes the persistent volume name, like this:

pvName := strings.Join([]string{pvcNamespace, pvcName, options.PVName}, "-")

Would it be possible to include an option to omit options.PVName so that when I re-provision my cluster I could re-use the already existing subdirs as PVs?
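
One existing knob that may already cover this (see the pathPattern values in the archiveOnDelete issue above): a StorageClass whose pathPattern omits the PV name, so a re-provisioned claim with the same namespace and name lands in the same subdirectory. A sketch, with a made-up class name and the provisioner name used in the deployments above:

cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client-stable-path
provisioner: cluster.local/nfs-subdir-external-provisioner
parameters:
  pathPattern: "${.PVC.namespace}/${.PVC.name}"   # no PV name, so the subdir is stable across re-provisioning
  onDelete: retain                                # keep the directory when the PV is deleted
EOF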
