
Comments (14)

gman0 commented on August 14, 2024

Hi @johnstile, indeed the chart doesn't have that anymore (the previous version did deploy it, though). How exactly are you deploying it? Is it possible that you have some automation in the cluster that applies a PodSecurityPolicy by default?


johnstile commented on August 14, 2024

Hello @gman0, I have cvmfs-csi commit 18097c8 with helm chart v3.2.1.

I installed it via a shim tool that runs helm install for me, but my cluster does list several PodSecurityPolicies, and they aren't limited to a namespace, so I'm not sure how to tell which one would affect me.

My Kubernetes server version is:

Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.13", GitCommit:"2444b3347a2c45eb965b182fb836e1f51dc61b70", GitTreeState:"clean", BuildDate:"2021-11-17T13:00:29Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}

Is there an older release of this chart that supports my k8s Server Version?


johnstile commented on August 14, 2024

It looks like the cvmfs-csi chart v2.0.0 release removed the PodSecurityPolicies.

Is there a way to use cvmfs-csi chart v2.0.0 on Kubernetes v1.20.13?


gman0 commented on August 14, 2024

@johnstile the cvmfs-csi 2.0.0 helm chart is compatible with Kubernetes >=v1.20.0.

Just to clarify, are you deploying the driver anew, or upgrading from an older version? I just created a fresh v1.20.4 cluster, installed the driver with helm, and it all works fine.

$ helm repo add cern https://registry.cern.ch/chartrepo/cern
$ helm repo update
$ helm install cvmfs cern/cvmfs-csi --version 2.0.0
NAME: cvmfs
LAST DEPLOYED: Fri Nov  4 11:11:55 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

$ kubectl get all
NAME                                                    READY   STATUS    RESTARTS   AGE
pod/cvmfs-cvmfs-csi-controllerplugin-7d8b4cfdfb-szthp   2/2     Running   0          12m
pod/cvmfs-cvmfs-csi-nodeplugin-2fcnv                    2/2     Running   0          12m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.254.0.1   <none>        443/TCP   33m

NAME                                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/cvmfs-cvmfs-csi-nodeplugin   1         1         1       1            1           <none>          12m

NAME                                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cvmfs-cvmfs-csi-controllerplugin   1/1     1            1           12m

NAME                                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/cvmfs-cvmfs-csi-controllerplugin-7d8b4cfdfb   1         1         1       12m

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.0", GitCommit:"ab69524f795c42094a6630298ff53f3c3ebab7f4", GitTreeState:"clean", BuildDate:"2021-12-07T18:16:20Z", GoVersion:"go1.17.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:03:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.23) and server (1.20) exceeds the supported minor version skew of +/-1

(I ran the demos from the example dir and they all work too, btw.)

The driver's helm chart doesn't touch any PodSecurityPolicy-related resources, so I don't think the error you are seeing is coming from the driver deployment itself. I would check whether the cluster is running some 3rd-party component (e.g. OPA Gatekeeper, or perhaps the tool you're using instead of running helm install directly) that is applying the policies globally to all Pods and causing the installation to fail. If so, it needs to be configured to allow the driver's Pods to run as privileged.
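If it helps, one way to check which PodSecurityPolicy (if any) was used to admit the driver's Pods is the kubernetes.io/psp annotation that the PSP admission controller sets on admitted Pods. A minimal sketch, using the nodeplugin Pod name from my output above (substitute your own Pod name):

$ kubectl get psp
$ kubectl get pod cvmfs-cvmfs-csi-nodeplugin-2fcnv -o jsonpath='{.metadata.annotations}'

The second command dumps all annotations of the Pod; look for a kubernetes.io/psp entry to see which policy was applied.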


jstile-lbl commented on August 14, 2024

There is a default PSP applied across the system, so I had to create a PSP for the nodeplugin, but now everything starts.

kubectl get all

NAME                                              READY   STATUS                       RESTARTS   AGE
pod/cvmfs-atlas-nightlies                         0/1     ContainerCreating            0          100m
pod/cvmfs-csi-controllerplugin-7f7d9dbfd8-jlb7k   2/2     Running                      0          3h53m
pod/cvmfs-csi-nodeplugin-dktbj                    2/2     Running                      0          3h8m

NAME                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR         AGE
daemonset.apps/cvmfs-csi-nodeplugin   5         5         5       5            5           metallb_nfs_lb=true   4h48m

NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cvmfs-csi-controllerplugin   1/1     1            1           4h48m

NAME                                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/cvmfs-csi-controllerplugin-7f7d9dbfd8   1         1         1       4h48m

Although this allowed everything to start, I get a mount error (exit status 32) trying to run this example:
https://github.com/cvmfs-contrib/cvmfs-csi/blob/master/docs/how-to-use.md#example-mounting-single-cvmfs-repository-using-repository-parameter

Error

I1108 22:54:59.471951  144212 mountutil.go:70] Exec-ID 53: Running command env=[] prog=/usr/bin/mount args=[mount --bind /cvmfs/atlas-nightlies.cern.ch /var/lib/kubelet/pods/d151b454-a855-4f6c-985e-1d769a1a4460/volumes/kubernetes.io~csi/pvc-6bf5574a-dd7a-4a2b-897d-792c06f7e120/mount]
I1108 22:55:02.387852  144212 mountutil.go:70] Exec-ID 53: Process exited: exit status 32
E1108 22:55:02.387887  144212 mountutil.go:70] Exec-ID 53: Error: exit status 32; Output: mount: special device /cvmfs/atlas-nightlies.cern.ch does not exist
E1108 22:55:02.387930  144212 grpcserver.go:141] Call-ID 156: Error: rpc error: code = Internal desc = failed to bind mount: exit status 32

My changes:

----
#nodeplugin-rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "cvmfs-csi.nodeplugin.fullname" . }}
  labels:
    {{- include "cvmfs-csi.nodeplugin.labels" .  | nindent 4 }}
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
    resourceNames:
      - {{ include "cvmfs-csi.nodeplugin.fullname" . }}
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "cvmfs-csi.nodeplugin.fullname" . }}
  labels:
    {{- include "cvmfs-csi.nodeplugin.labels" .  | nindent 4 }}
subjects:
  - kind: ServiceAccount
    name: {{ include "cvmfs-csi.serviceAccountName.nodeplugin" . }}
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: {{ include "cvmfs-csi.nodeplugin.fullname" . }}
  apiGroup: rbac.authorization.k8s.io
----
#nodeplugin-psp.yaml
{{- if .Values.nodeplugin.podSecurityPolicy.enabled -}}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: {{ include "cvmfs-csi.nodeplugin.fullname" . }}
  labels:
    app: {{ template "cvmfs-csi.name" . }}
    release: {{ .Release.Name }}
    chart: {{ .Chart.Name }}
    component: {{ .Values.nodeplugin.name }}
    heritage: {{ .Release.Service }}
spec:
  allowPrivilegeEscalation: true
  allowedCapabilities:
    - 'SYS_ADMIN'
  fsGroup:
    rule: RunAsAny
  privileged: true
  hostNetwork: true
  hostPID: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - 'configMap'
    - 'emptyDir'
    - 'projected'
    - 'secret'
    - 'downwardAPI'
    - 'hostPath'
  allowedHostPaths:
    - pathPrefix: '/dev'
      readOnly: false
    - pathPrefix: '/run/mount'
      readOnly: false
    - pathPrefix: '/sys'
      readOnly: false
    - pathPrefix: '/lib/modules'
      readOnly: true
    - pathPrefix: '/var/lib/kubelet/pods'
      readOnly: false
    - pathPrefix: '{{ .Values.kubeletDirectory }}/plugins/{{ .Values.csiDriverName }}/{{ .Values.cvmfsCSIPluginSocketFile }}'
      readOnly: false
    - pathPrefix: '{{ .Values.registrationDir }}'
      readOnly: false
    - pathPrefix: '{{ .Values.kubeletDirectory }}/plugins'
      readOnly: false
{{- end }}


johnstile commented on August 14, 2024

@gman0 Can I ask how this is supposed to work? Where should I find the mount on the worker?

kubectl describe pod/cvmfs-atlas-nightlies  |tail -6
  Type     Reason       Age                   From               Message
  ----     ------       ----                  ----               -------
  Normal   Scheduled    52m                   default-scheduler  Successfully assigned cvmfs-atlas-nightlies to ncn-w003
  Warning  FailedMount  29m                   kubelet, ncn-w003  Unable to attach or mount volumes: unmounted volumes=[my-cvmfs-atlas-nightlies], unattached volumes=[kube-api-access-nfcl7 my-cvmfs-atlas-nightlies]: timed out waiting for the condition
  Warning  FailedMount  6m55s (x16 over 50m)  kubelet, ncn-w003  Unable to attach or mount volumes: unmounted volumes=[my-cvmfs-atlas-nightlies], unattached volumes=[my-cvmfs-atlas-nightlies kube-api-access-nfcl7]: timed out waiting for the condition
  Warning  FailedMount  38s (x32 over 52m)    kubelet, ncn-w003  MountVolume.SetUp failed for volume "pvc-f4b1356c-3063-4de7-9faf-99bd4be71555" : rpc error: code = Internal desc = failed to bind mount: exit status 32

I see the pvc, and the details show storageClassName: cvmfs-atlas-nightlies

kubectl -n  get pvc cvmfs-atlas-nightlies
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS            AGE
cvmfs-atlas-nightlies   Bound    pvc-f4b1356c-3063-4de7-9faf-99bd4be71555   1          ROX            cvmfs-atlas-nightlies   54m

The storageClassName exists, and specifies the repository for the test

kubectl get storageclass cvmfs-atlas-nightlies 
NAME                    PROVISIONER         RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
cvmfs-atlas-nightlies   cvmfs.csi.cern.ch   Delete          Immediate           false                  57m

kubectl get storageclass cvmfs-atlas-nightlies -o json |jq -rc '.parameters'
{"repository":"atlas-nightlies.cern.ch"}

Are there debug options I can enable?


gman0 commented on August 14, 2024

Can I ask how this is supposed to work? Where should I find the mount on the worker?

If you followed the atlas-nightlies.cern.ch demo, the volume is mounted under /atlas-nightlies in the Pod. Once it starts, the volume can be accessed like so:

kubectl exec cvmfs-atlas-nightlies -- ls -l /atlas-nightlies

The mount is failing though, so the Pod won't start up. Can you please send full logs of the node plugin?

kubectl logs cvmfs-csi-nodeplugin-dktbj -c nodeplugin


gman0 commented on August 14, 2024

Could you also send the output of uname -a from the host system on the node, if you have access to it?


johnstile commented on August 14, 2024

In my testing the pod name changed, but I have the logs:

Current state

kubectl get all  |egrep 'NAME|cvmfs'
NAME                                              READY   STATUS              RESTARTS   AGE
pod/cvmfs-atlas-nightlies                         0/1     ContainerCreating   0          9m8s
pod/cvmfs-csi-controllerplugin-7f7d9dbfd8-vhj2n   2/2     Running             0          12h
pod/cvmfs-csi-nodeplugin-cwc2f                    2/2     Running             0          12h
NAME                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR         AGE
daemonset.apps/cvmfs-csi-nodeplugin   1         1         1       1            1           metallb_nfs_lb=true   12h
NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cvmfs-csi-controllerplugin   1/1     1            1           12h
NAME                                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/cvmfs-csi-controllerplugin-7f7d9dbfd8   1         1         1       12h

nodeplugin Log

kubectl logs -l "app=cvmfs-csi,component=nodeplugin" -c nodeplugin
E1110 15:02:44.032571 2525277 grpcserver.go:141] Call-ID 33: Error: rpc error: code = Internal desc = failed to bind mount: exit status 32
I1110 15:03:48.102510 2525277 grpcserver.go:136] Call-ID 34: Call: /csi.v1.Node/NodeGetCapabilities
I1110 15:03:48.102588 2525277 grpcserver.go:137] Call-ID 34: Request: {}
I1110 15:03:48.102612 2525277 grpcserver.go:143] Call-ID 34: Response: {}
I1110 15:03:48.103527 2525277 grpcserver.go:136] Call-ID 35: Call: /csi.v1.Node/NodePublishVolume
I1110 15:03:48.103661 2525277 grpcserver.go:137] Call-ID 35: Request: {"target_path":"/var/lib/kubelet/pods/9a7185e1-34f4-42ce-96a4-8f7ef6d7400a/volumes/kubernetes.io~csi/pvc-00b8f1cf-aee9-4e29-8a33-85bd95745b79/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":3}},"volume_context":{"repository":"atlas-nightlies.cern.ch","storage.kubernetes.io/csiProvisionerIdentity":"1668047069197-8081-cvmfs.csi.cern.ch"},"volume_id":"pvc-00b8f1cf-aee9-4e29-8a33-85bd95745b79"}
I1110 15:03:48.106660 2525277 mountutil.go:70] Exec-ID 19: Running command env=[] prog=/usr/bin/mount args=[mount --bind /cvmfs/atlas-nightlies.cern.ch /var/lib/kubelet/pods/9a7185e1-34f4-42ce-96a4-8f7ef6d7400a/volumes/kubernetes.io~csi/pvc-00b8f1cf-aee9-4e29-8a33-85bd95745b79/mount]
I1110 15:03:54.341194 2525277 mountutil.go:70] Exec-ID 19: Process exited: exit status 32
E1110 15:03:54.341224 2525277 mountutil.go:70] Exec-ID 19: Error: exit status 32; Output: mount: special device /cvmfs/atlas-nightlies.cern.ch does not exist
E1110 15:03:54.341257 2525277 grpcserver.go:141] Call-ID 35: Error: rpc error: code = Internal desc = failed to bind mount: exit status 32

worker uname -a

pdsh -l root -w ncn-w003 "uname -a"
ncn-w003: Linux ncn-w003 5.3.18-150300.59.87-default #1 SMP Thu Jul 21 14:31:28 UTC 2022 (cc90276) x86_64 x86_64 x86_64 GNU/Linux


johnstile commented on August 14, 2024

On the worker, I see an empty directory

ncn-w003:~ # ls -tlr /var/lib/kubelet/pods/9a7185e1-34f4-42ce-96a4-8f7ef6d7400a/volumes/kubernetes.io~csi/
total 0


johnstile commented on August 14, 2024

I tried rolling out again, noted the PVC name, and then looked for that as well:

kubectl logs pod/cvmfs-csi-nodeplugin-zbmgp -c nodeplugin
I1110 15:46:09.778705 1308366 main.go:92] Running CVMFS CSI plugin with [/csi-cvmfsplugin -v=5 --nodeid=ncn-w003 --endpoint=unix:///var/lib/kubelet/plugins/cvmfs.csi.cern.ch/csi.sock --drivername=cvmfs.csi.cern.ch --start-automount-daemon=true --role=identity,node]
I1110 15:46:09.778811 1308366 driver.go:139] Driver: cvmfs.csi.cern.ch
I1110 15:46:09.778819 1308366 driver.go:141] Version: v2.0.0 (commit: 1e30ea639263f74179f971aea1cfad170a6b5c89; build time: 2022-11-10 15:46:09.77627803 +0000 UTC m=+0.003283452; metadata: )
I1110 15:46:09.778865 1308366 driver.go:155] Registering Identity server
I1110 15:46:09.778920 1308366 driver.go:210] Exec-ID 1: Running command env=[] prog=/usr/bin/cvmfs2 args=[cvmfs2 --version]
I1110 15:46:09.785272 1308366 driver.go:210] Exec-ID 1: Process exited: exit status 0
I1110 15:46:09.785299 1308366 driver.go:170] CernVM-FS version 2.9.4
I1110 15:46:09.785356 1308366 driver.go:227] Exec-ID 2: Running command env=[] prog=/usr/bin/cvmfs_config args=[cvmfs_config setup]
I1110 15:46:10.553807 1308366 driver.go:227] Exec-ID 2: Process exited: exit status 0
I1110 15:46:10.553931 1308366 driver.go:233] Exec-ID 3: Running command env=[] prog=/usr/sbin/automount args=[automount]
I1110 15:46:20.725551 1308366 driver.go:233] Exec-ID 3: Process exited: exit status 0
I1110 15:46:20.725657 1308366 driver.go:240] Exec-ID 4: Running command env=[] prog=/usr/bin/mount args=[mount --make-shared /cvmfs]
I1110 15:46:20.727381 1308366 driver.go:240] Exec-ID 4: Process exited: exit status 0
I1110 15:46:20.727397 1308366 driver.go:187] Registering Node server with capabilities []
I1110 15:46:20.727650 1308366 grpcserver.go:106] Listening for connections on /var/lib/kubelet/plugins/cvmfs.csi.cern.ch/csi.sock
I1110 15:46:20.747695 1308366 grpcserver.go:136] Call-ID 1: Call: /csi.v1.Identity/GetPluginInfo
I1110 15:46:20.749532 1308366 grpcserver.go:137] Call-ID 1: Request: {}
I1110 15:46:20.749623 1308366 grpcserver.go:143] Call-ID 1: Response: {"name":"cvmfs.csi.cern.ch","vendor_version":"v2.0.0"}
I1110 15:46:21.428120 1308366 grpcserver.go:136] Call-ID 2: Call: /csi.v1.Node/NodeGetInfo
I1110 15:46:21.428186 1308366 grpcserver.go:137] Call-ID 2: Request: {}
I1110 15:46:21.428306 1308366 grpcserver.go:143] Call-ID 2: Response: {"node_id":"ncn-w003"}
I1110 15:46:57.509264 1308366 grpcserver.go:136] Call-ID 3: Call: /csi.v1.Node/NodeGetCapabilities
I1110 15:46:57.509356 1308366 grpcserver.go:137] Call-ID 3: Request: {}
I1110 15:46:57.509508 1308366 grpcserver.go:143] Call-ID 3: Response: {}
I1110 15:46:57.511277 1308366 grpcserver.go:136] Call-ID 4: Call: /csi.v1.Node/NodePublishVolume
I1110 15:46:57.511591 1308366 grpcserver.go:137] Call-ID 4: Request: {"target_path":"/var/lib/kubelet/pods/9071eea2-ca12-40d7-94a1-ac3d7e6d8945/volumes/kubernetes.io~csi/pvc-3f2a1797-091d-443f-86f7-602b1aae4e10/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":3}},"volume_context":{"repository":"atlas-nightlies.cern.ch","storage.kubernetes.io/csiProvisionerIdentity":"1668047069197-8081-cvmfs.csi.cern.ch"},"volume_id":"pvc-3f2a1797-091d-443f-86f7-602b1aae4e10"}
I1110 15:46:57.514736 1308366 mountutil.go:70] Exec-ID 5: Running command env=[] prog=/usr/bin/mount args=[mount --bind /cvmfs/atlas-nightlies.cern.ch /var/lib/kubelet/pods/9071eea2-ca12-40d7-94a1-ac3d7e6d8945/volumes/kubernetes.io~csi/pvc-3f2a1797-091d-443f-86f7-602b1aae4e10/mount]
I1110 15:47:03.373869 1308366 mountutil.go:70] Exec-ID 5: Process exited: exit status 32
E1110 15:47:03.373900 1308366 mountutil.go:70] Exec-ID 5: Error: exit status 32; Output: mount: special device /cvmfs/atlas-nightlies.cern.ch does not exist
E1110 15:47:03.373948 1308366 grpcserver.go:141] Call-ID 4: Error: rpc error: code = Internal desc = failed to bind mount: exit status 32
ncn-w003:~ # ls -tlr /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3f2a1797-091d-443f-86f7-602b1aae4e10/
total 4
-rw-r--r-- 1 root root 93 Nov 10 15:36 vol_data.json
drwxr-x--- 2 root root  6 Nov 10 15:36 globalmount

ncn-w003:~ # cat  /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3f2a1797-091d-443f-86f7-602b1aae4e10/vol_data.json
{"driverName":"cvmfs.csi.cern.ch","volumeHandle":"pvc-3f2a1797-091d-443f-86f7-602b1aae4e10"}

ls -tlr /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-3f2a1797-091d-443f-86f7-602b1aae4e10/globalmount/
total 0

ncn-w003:~ # ls -tlr /var/lib/kubelet/pods/9071eea2-ca12-40d7-94a1-ac3d7e6d8945/volumes/kubernetes.io~csi/
total 0


johnstile commented on August 14, 2024

On the worker I see:

ncn-w003:~ # ps -elf |grep cvmfs

4 S root     1307803 1307646  0  80   0 - 180095 -     15:46 ?        00:00:02 /csi-node-driver-registrar -v=5 --csi-address=/csi/csi.sock --kubelet-registration-path=/var/lib/kubelet/plugins/cvmfs.csi.cern.ch/csi.sock
4 S root     1308221 1307367  0  80   0 - 454511 -     15:46 ?        00:00:01 /csi-cvmfsplugin -v=5 --nodeid=$(NODE_ID) --endpoint=unix:///csi/csi.sock --drivername=cvmfs.csi.cern.ch --role=identity,controller
4 S root     1308366 1307646  0  80   0 - 405423 -     15:46 ?        00:00:01 /csi-cvmfsplugin -v=5 --nodeid=ncn-w003 --endpoint=unix:///var/lib/kubelet/plugins/cvmfs.csi.cern.ch/csi.sock --drivername=cvmfs.csi.cern.ch --start-automount-daemon=true --role=identity,node

ncn-w003:~ # mount -l |grep cvmf
/dev/sdc3 on /var/lib/kubelet/pods/85fd1925-eb7c-4d65-a905-071360ddf1ea/volume-subpaths/cvmfs-config-default-local/nodeplugin/6 type xfs (rw,noatime,swalloc,attr2,largeio,inode64,allocsize=131072k,logbufs=8,logbsize=32k,noquota) [K8SLET]

Is there a manual mount I could run to simulate what cvmfs-csi is doing, to help debug?
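From the node plugin logs above, the driver relies on autofs-managed repository mounts under /cvmfs and then bind-mounts the repository into the pod's volume directory. A rough manual simulation from a shell in the nodeplugin container would be something like the following; the repository and Pod names are taken from the outputs above, the explicit cvmfs mount is my assumption of what autofs does on first access, and the bind-mount target placeholders need to be filled in from the NodePublishVolume request in the logs:

kubectl exec -it pod/cvmfs-csi-nodeplugin-cwc2f -c nodeplugin -- /bin/bash
# accessing the path should trigger the autofs mount of the repository
ls /cvmfs/atlas-nightlies.cern.ch
# or mount the repository by hand into a scratch directory
mkdir -p /mnt/test && mount -t cvmfs atlas-nightlies.cern.ch /mnt/test
# then reproduce the driver's bind-mount step (fill in the target path from the logs)
mount --bind /cvmfs/atlas-nightlies.cern.ch /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pvc-name>/mount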


johnstile commented on August 14, 2024

The issue is resolved.

The problem was that the config file default.local had a very large CVMFS_NFILES=8000000.

This became apparent when trying to mount manually from a shell in the nodeplugin container.

    kubectl exec -it pod/cvmfs-csi-nodeplugin-r9llf -c nodeplugin -- /bin/bash
    [root@cvmfs-csi-nodeplugin-r9llf /]# ls -alF /cvmfs
      cvmfs/            cvmfs-localcache/
    [root@cvmfs-csi-nodeplugin-r9llf /]# ls -alF /cvmfs
      total 4
      drwxr-xr-x  2 root root    0 Nov 10 23:59 ./
      drwxr-xr-x 20 root root 4096 Nov 10 23:25 ../
    [root@cvmfs-csi-nodeplugin-r9llf /]# mount -v -r -t cvmfs alice.cern.ch /cvmfs
      Failed to set maximum number of open files, insufficient permissions

After commenting out CVMFS_NFILES and redeploying the helm chart, everything worked.
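For illustration, the relevant part of default.local ended up roughly like this; a minimal sketch, where the proxy line is just a placeholder for whatever the site normally uses and only the commented-out CVMFS_NFILES is the actual change:

    # default.local (CVMFS client configuration used by the nodeplugin)
    CVMFS_HTTP_PROXY="DIRECT"    # placeholder; keep your site's real proxy setting
    #CVMFS_NFILES=8000000        # commented out: raising the open-file limit needs privileges the mount didn't have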

The test pod worked fine as well.

Thank you for your help!


gman0 commented on August 14, 2024

@johnstile thanks for debugging this! I've pushed #50 so that at least some CVMFS client errors are logged.

