ehough / docker-nfs-server

A lightweight, robust, flexible, and containerized NFS server.

Home Page: https://hub.docker.com/r/erichough/nfs-server/

License: GNU General Public License v3.0


docker-nfs-server's Introduction

erichough/nfs-server

A lightweight, robust, flexible, and containerized NFS server.

Why?

This is the only containerized NFS server that offers all of the following features:

  • small (~15MB) Alpine Linux image
  • NFS versions 3, 4, or both simultaneously
  • clean teardown of services upon termination (no lingering nfsd processes on Docker host)
  • flexible construction of /etc/exports
  • extensive server configuration via environment variables
  • human-readable logging (with a helpful debug mode)
  • optional bonus features


Requirements

  1. The Docker host kernel will need the following kernel modules

    • nfs
    • nfsd
    • rpcsec_gss_krb5 (only if Kerberos is used)

    You can manually enable these modules on the Docker host with:

    modprobe {nfs,nfsd,rpcsec_gss_krb5}

    or you can just allow the container to load them automatically.
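
    If you let the container load them automatically, it will need read access to the host's module tree plus the SYS_MODULE capability. A minimal sketch, based on the flags that appear in the issue reports further down (your paths may vary):

    docker run                         \
      -v /lib/modules:/lib/modules:ro  \
      --cap-add SYS_MODULE             \
      ...                              \
      erichough/nfs-server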

  2. The container will need to run with CAP_SYS_ADMIN (or --privileged). This is necessary as the server needs to mount several filesystems inside the container to support its operation, and performing mounts from inside a container is impossible without these capabilities.

  3. The container will need local access to the files you'd like to serve via NFS. You can use Docker volumes, bind mounts, files baked into a custom image, or virtually any other means of supplying files to a Docker container.

Usage

Starting the server

Starting the erichough/nfs-server image will launch an NFS server. You'll need to supply some information upon container startup, which we'll cover below, but briefly speaking your docker run command might look something like this:

docker run                                            \
  -v /host/path/to/shared/files:/some/container/path  \
  -v /host/path/to/exports.txt:/etc/exports:ro        \
  --cap-add SYS_ADMIN                                 \
  -p 2049:2049                                        \
  erichough/nfs-server

Let's break that command down into its individual pieces to see what's required for a successful server startup.

  1. Provide the files to be shared over NFS

    As noted in the requirements, the container will need local access to the files you'd like to share over NFS. Some ideas for supplying these files:

    • bind mounts (-v /host/path/to/shared/files:/some/container/path)
    • volumes (-v some_volume:/some/container/path)
    • files baked into custom image (e.g. in a Dockerfile: COPY /host/files /some/container/path)

    You may use any combination of the above, or any other means to supply files to the container.

  2. Provide your desired NFS exports (/etc/exports)

    You'll need to tell the server which container directories to share. You have three options for this; choose whichever one you prefer:

    1. bind mount /etc/exports into the container

      docker run                                      \
        -v /host/path/to/exports.txt:/etc/exports:ro  \
        ...                                           \
        erichough/nfs-server
      
    2. provide each line of /etc/exports as an environment variable

      The container will look for environment variables that start with NFS_EXPORT_ and end with an integer. e.g. NFS_EXPORT_0, NFS_EXPORT_1, etc.

      docker run                                                                       \
        -e NFS_EXPORT_0='/container/path/foo                  *(ro,no_subtree_check)'  \
        -e NFS_EXPORT_1='/container/path/bar 123.123.123.123/32(rw,no_subtree_check)'  \
        ...                                                                            \
        erichough/nfs-server
      
    3. bake /etc/exports into a custom image

      e.g. in a Dockerfile:

      FROM erichough/nfs-server
      ADD /host/path/to/exports.txt /etc/exports
  3. Use --cap-add SYS_ADMIN or --privileged

    As noted in the requirements, the container will need additional privileges. So your run command will need either:

    docker run --cap-add SYS_ADMIN ... erichough/nfs-server
    

    or

    docker run --privileged ... erichough/nfs-server
    

    Not sure which to use? Go for --cap-add SYS_ADMIN as it's the lesser of two evils.

  4. Expose the server ports

    You'll need to open up at least one server port for your client connections. The ports listed in the examples below are the defaults used by this image and most can be customized.

    • If your clients connect via NFSv4 only, you can get by with just TCP port 2049:

      docker run -p 2049:2049 ... erichough/nfs-server
      
    • If you'd like to support NFSv3, you'll need to expose a lot more ports:

      docker run                          \
        -p 2049:2049   -p 2049:2049/udp   \
        -p 111:111     -p 111:111/udp     \
        -p 32765:32765 -p 32765:32765/udp \
        -p 32767:32767 -p 32767:32767/udp \
        ...                               \
        erichough/nfs-server
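
    Most of these ports can be customized via environment variables. For example, NFS_PORT (used in one of the issue reports below) relocates the main NFS port; a sketch:

      docker run -e NFS_PORT=3001 -p 3001:3001 ... erichough/nfs-server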
      

If you pay close attention to each of the items in this section, the server should start quickly and be ready to accept your NFS clients.

Mounting filesystems from a client

# mount <container-IP>:/some/export /some/local/path
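
For example (a sketch; substitute your server address and export paths), you can force a protocol version from the client:

# NFSv4 needs only TCP port 2049
mount -t nfs4 <container-IP>:/some/export /some/local/path

# NFSv3 additionally needs the rpcbind/statd/mountd ports listed above
mount -t nfs -o vers=3 <container-IP>:/some/export /some/local/path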

Optional Features

Advanced

Help!

Please open an issue if you have any questions, constructive criticism, or can't get something to work.

Remaining tasks

Acknowledgements

This work was based on prior projects:

docker-nfs-server's People

Contributors: dalazx, ehough, phlax

docker-nfs-server's Issues

unable to connect via rpcbind behind NAT / docker port forwarding

This is expected behavior, but it renders rpcbind useless behind port forwarding.

Background: with newer versions of rpcbind, clients will use a newer protocol version. These versions use the GETADDR call, which returns the server's IP at the application level. Since typical Docker port forwarding performs source NAT, the IP in the GETADDR response will not equal the external one and, even worse, is typically an unreachable address. So listing the exports is basically not possible with newer clients (depending on the rpcbind version).

I do not have any idea how to fix that properly. But a note in the docs would be nice.

This image is pretty useful in some use cases, but e.g. with newer versions of Proxmox it just doesn't work, for non-obvious reasons.
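
One possible workaround, assuming the clients can speak NFSv4: mount with an explicit version so the client connects directly to TCP port 2049 and never consults rpcbind (NFSv4 does not use the portmapper):

mount -t nfs -o nfsvers=4 <external-IP>:/some/export /some/local/path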

Exposing nfs service on host

Hello, thanks for your container.

I cannot mount an NFS share using the host IP address:

$ sudo mount -v -t nfs -o proto=tcp,port=2049 10.2.4.110:/var/nfs-export /tmp/toto
mount.nfs: timeout set for Mon May  7 13:17:49 2018
mount.nfs: trying text-based options 'proto=tcp,port=2049,vers=4.2,addr=10.2.4.110,clientaddr=10.2.4.111'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'proto=tcp,port=2049,addr=10.2.4.110'
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: requested NFS version or transport protocol is not supported

But using the container IP, it works:

sudo mount -v -t nfs -o proto=tcp,port=2049 172.17.0.2:/var/nfs-export /tmp/toto
mount.nfs: timeout set for Mon May  7 13:19:49 2018
mount.nfs: trying text-based options 'proto=tcp,port=2049,vers=4.2,addr=172.17.0.2,clientaddr=172.17.0.1'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'proto=tcp,port=2049,addr=172.17.0.2'
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: trying 172.17.0.2 prog 100005 vers 3 prot TCP port 32767

We are publishing port 2049:

docker run -v /etc/exports:/etc/exports -v /var/nfs-export:/var/nfs-export --cap-add SYS_ADMIN -p 2049:2049 -e NFS_VERSION=3 erichough/nfs-server:latest

Have you ever succeeded in mounting the share using the host IP? What is the goal of mounting the share using the container IP address?
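
One likely explanation, judging from the README's port list above: with NFS_VERSION=3 the server also needs rpcbind and the statd/mountd ports published, not just 2049. A sketch of the expanded run command:

docker run -v /etc/exports:/etc/exports -v /var/nfs-export:/var/nfs-export \
  --cap-add SYS_ADMIN -e NFS_VERSION=3                                     \
  -p 2049:2049 -p 2049:2049/udp -p 111:111 -p 111:111/udp                  \
  -p 32765:32765 -p 32765:32765/udp -p 32767:32767 -p 32767:32767/udp      \
  erichough/nfs-server:latest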

Does this support ubuntu?

I googled around and tried other solutions; it seems none of the Ubuntu container solutions can work, and someone told me that "nfs-kernel-server" is a kernel module, so an NFS server cannot work in a container.

But your solution seems to make the NFS server start in a container, so my question is:
is it possible for your solution to work in an Ubuntu container?

access denied reaching docker from pod within minikube

Hello,
I'm trying to reproduce locally the environment I have in production with a remote NFS server: my cluster and its pods run inside minikube, and they should access the NFS Docker container on my host machine.

So for instance I have the following pod:

apiVersion: apps/v1  
kind: StatefulSet  
metadata:  
  name: db-statefulset
spec:
  replicas: 1
  serviceName: db-service
  selector:
    matchLabels:
      app: db-pod
  template:
    metadata:
      labels:
        app: db-pod
    spec:
      containers:
      - name: db-container
        securityContext:
          privileged: true
        image: postgres:10-alpine
        imagePullPolicy: Always
        ports:
        - containerPort: 5432
        volumeMounts:
          - mountPath: /var/lib/postgresql/data
            name: nfs-db
            subPath: db
      restartPolicy: Always
      volumes:  
        - name: nfs-db   
          persistentVolumeClaim:  
            claimName: nfs  

Yes, I know NFS is not the best choice for a database, but this is not the final usage, and this is not the question.

my volumes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  nfs:
    server: 10.0.2.2
    path: '/volume'
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

And I have this error on my pod:

.scope mount.nfs: access denied by server while mounting 10.0.2.2:/volume

My docker compose for the nfs server:

version: "3"
services:
  nfs:
    image: erichough/nfs-server
    environment:
      - NFS_EXPORT_0=/volume   *(rw,no_root_squash,no_subtree_check,fsid=0)
    volumes:
      - /tmp/toast:/volume
      - /lib/modules:/lib/modules:ro
    cap_add:
      - SYS_ADMIN
    ports:
      - 111:111 
      - 111:111/udp
      - 2049:2049
      - 2049:2049/udp
      - 32765:32765
      - 32765:32765/udp
      - 32767:32767
      - 32767:32767/udp

When I exec into my pod and try showmount, I get this:

showmount -e 10.0.2.2                                                       
Export list for 10.0.2.2:                                                                                
/volume *             

Thanks for any help; I hope I didn't forget something obvious.
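
One thing worth trying (a guess, not a confirmed fix): since the export already carries fsid=0, force an NFSv4 mount from the PersistentVolume so the NFSv3 mountd path, a frequent source of "access denied" behind NAT, is bypassed. Note that with fsid=0 the NFSv4 pseudo-root is the export itself, so the path may need to be / rather than /volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nfsvers=4   # force v4 so rpcbind/mountd are not needed
  nfs:
    server: 10.0.2.2
    path: /       # with fsid=0, the v4 root is the exported /volume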

NFS4 initial command performance

The first command run on the mounted directory on the client takes around a minute and a half to execute (touch hello.txt). All commands run after the first one execute normally. Do you happen to know what is causing this?

Docker run command:

docker run  --volume /data:/home --volume /tempData/exports.txt:/etc/exports:ro --privileged --env NFS_PORT=3001 -p 3001:3001 erichough/nfs-server

exports.txt

/home  *(rw,all_squash,anonuid=0,anongid=0,fsid=0)

Mount command

mount -o port=3001 -t nfs4 10.128.0.100:/ ./mnt

Interestingly enough, the first command runs normally when mounted as NFSv3 (if I run the server as NFSv3 too).
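
A plausible explanation, offered as a guess rather than a confirmed diagnosis: ninety seconds matches the default NFSv4 grace period, during which the server delays operations that establish new state (such as the open performed by touch) so that clients of a previous server instance can reclaim their locks; NFSv3 handles locking through statd/lockd instead, which would explain why the first command is fast there. If that's the cause, the grace and lease times can be shortened through the nfsd filesystem before rpc.nfsd starts (these are standard kernel knobs, not features of this image):

echo 10 > /proc/fs/nfsd/nfsv4gracetime   # seconds; default is 90
echo 10 > /proc/fs/nfsd/nfsv4leasetime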

"No such file or directory"

The following docker-compose file spins up the container without complaint:

versions: "2"
  nfs-server:
    container_name: nfs-server
    image: erichough/nfs-server
    ports: 
      - '2049:2049'
    volumes:
      - '/MediaDownloads:/nfs'
      - '/home/adriano/Code/nfs-server/exports.txt:/etc/exports:ro'
    cap_add:
      - SYS_ADMIN

But the server inside fails and the container exits. My exports file is the following:

/MediaDownloads 192.168.1.0/24(rw,all_squash,anonuid=1000,anongid=974)

And here is the docker log:

==================================================================
      SETTING UP
==================================================================
----> /etc/exports is bind-mounted
----> checking for presence of kernel module: nfs
----> checking for presence of kernel module: nfsd
----> setup complete

==================================================================
      STARTING SERVICES
==================================================================
----> mounting rpc_pipefs onto /var/lib/nfs/rpc_pipefs
mount: rpc_pipefs mounted on /var/lib/nfs/rpc_pipefs.
----> mounting nfsd onto /proc/fs/nfsd
mount: nfsd mounted on /proc/fs/nfsd.
----> starting rpcbind
----> exporting filesystems
exportfs: /etc/exports [1]: Neither 'subtree_check' or 'no_subtree_check' specified for export "192.168.1.0/24:/MediaDownloads".
  Assuming default behaviour ('no_subtree_check').
  NOTE: this default has changed since nfs-utils version 1.0.x

exportfs: Failed to stat /MediaDownloads: No such file or directory
exporting 192.168.1.0/24:/MediaDownloads
----> exportfs failed

==================================================================
      TERMINATING ...
==================================================================
----> rpc.svcgssd was not running
----> stopping nfsd
----> rpc.idmapd was not running
----> rpc.statd was not running
----> rpc.mountd was not running
----> un-exporting filesystems
----> killing rpcbind
----> un-mounting nfsd from /proc/fs/nfsd
umount: /proc/fs/nfsd (nfsd) unmounted
----> un-mounting rpc_pipefs from /var/lib/nfs/rpc_pipefs
umount: /var/lib/nfs/rpc_pipefs (rpc_pipefs) unmounted

==================================================================
      TERMINATED
==================================================================

If I ls -l my root directory, you'll see I definitely have a MediaDownloads dir in there...

adriano in / at adriano-server
➜ ls -l
total 60
lrwxrwxrwx   1 root    root       7 Aug 22 04:44 bin -> usr/bin
drwxr-xr-x   1 root    root     422 Sep  9 19:39 boot
-rw-r--r--   1 root    root   18462 Aug 21 12:19 desktopfs-pkgs.txt
drwxr-xr-x  20 root    root    4320 Sep 10 18:34 dev
drwxr-xr-x   1 root    root    3844 Sep 10 18:16 etc
drwxr-xr-x   1 root    root      14 Sep  7 13:06 home
lrwxrwxrwx   1 root    root       7 Aug 22 04:44 lib -> usr/lib
lrwxrwxrwx   1 root    root       7 Aug 22 04:44 lib64 -> usr/lib
drwxr-xr-x   1 root    root       0 Sep  8 23:13 media
drwxrwx---   5 adriano docker  4096 Sep 10 14:58 MediaDownloads
drwxr-xr-x   1 root    root      18 Sep  7 13:16 mnt
drwxr-xr-x   1 root    root      70 Sep 10 17:18 opt
dr-xr-xr-x 251 root    root       0 Sep 10 10:21 proc
drwxr-x---   1 root    root     318 Sep 10 12:21 root
-rw-r--r--   1 root    root    3973 Aug 21 12:13 rootfs-pkgs.txt
drwxr-xr-x  26 root    root     740 Sep 10 15:11 run
lrwxrwxrwx   1 root    root       7 Aug 22 04:44 sbin -> usr/bin
drwxr-xr-x   1 root    root      26 Sep 10 02:34 srv
dr-xr-xr-x  13 root    root       0 Sep 10 10:21 sys
drwxr-xr-x   1 adriano docker     0 Sep 11 00:32 test
drwxrwxrwt  21 root    root     740 Sep 11 00:28 tmp
drwxr-xr-x   1 root    root     128 Sep 10 18:16 usr
drwxr-xr-x   1 root    root     124 Sep 10 14:21 var

If it matters: this is Arch Linux, the MediaDownloads dir is a mount of an entire HDD, and the filesystem is ext4.
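
For what it's worth, the key line in the log is exportfs: Failed to stat /MediaDownloads: No such file or directory. The compose file mounts the host's /MediaDownloads at /nfs inside the container, but the exports file references the host path, which doesn't exist inside the container. Assuming that's the problem, the export should name the container path instead (no_subtree_check added here to silence the warning in the log):

/nfs 192.168.1.0/24(rw,all_squash,anonuid=1000,anongid=974,no_subtree_check)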

"mount(2): Operation not permitted" in plain docker installation

I'm getting mount(2): Operation not permitted when I try to mount the nfs-share.

I've adapted AppArmor and added cap_sys_admin for my current user (which you mentioned in the linked issue).

Since I only have a very limited idea of what this whole capability thing is, I've followed some Stack Overflow questions and added cap_sys_admin benke to /etc/security/capability.conf, as well as putting auth optional pam_cap.so in /etc/pam.d/su (although, while it seems to have worked, this is probably not the right place, as I don't understand how su comes into this). In any case, after adding these changes, capsh --print for the user running the Docker container contains cap_sys_admin+i in Current:

Current: = cap_sys_admin+i                                                                                                                                                                                     
Bounding set =cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read
Securebits: 00/0x0/1'b0
 secure-noroot: no (unlocked)
 secure-no-suid-fixup: no (unlocked)
 secure-keep-caps: no (unlocked)
uid=1000(benke)
gid=1000(benke)
groups=4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),113(lpadmin),128(sambashare),133(docker),1000(benke),1001(fuse)

However, this didn't fix the issue; nothing has changed. I hope you can help me out here, as I'm in the dark about how this is supposed to work.

Here is the full debug output when trying to mount:

sudo mount -v workbench.local:/ /media/nfs/ -v
mount.nfs: timeout set for Wed Apr  1 13:46:24 2020
mount.nfs: trying text-based options 'vers=4.2,addr=127.0.11.20,clientaddr=127.0.0.1'
mount.nfs: mount(2): Operation not permitted
mount.nfs: trying text-based options 'addr=127.0.11.20'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 100003, trying vers=3, prot=17
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: requested NFS version or transport protocol is not supported

Here's the server output:

benke@id92 ~workbench/nfs $ docker-compose up nfs-server
Starting workbench_nfs-server_1 ... done
Attaching to workbench_nfs-server_1                                                                                                                                                                            
nfs-server_1                     |
nfs-server_1                     | ==================================================================
nfs-server_1                     |       SETTING UP ...
nfs-server_1                     | ==================================================================
nfs-server_1                     | ----> kernel module nfs is missing
nfs-server_1                     | ----> attempting to load kernel module nfs
nfs-server_1                     | ----> kernel module nfsd is missing
nfs-server_1                     | ----> attempting to load kernel module nfsd
nfs-server_1                     | ----> setup complete
nfs-server_1                     |
nfs-server_1                     | ==================================================================
nfs-server_1                     |       STARTING SERVICES ...
nfs-server_1                     | ==================================================================
nfs-server_1                     | ----> starting rpcbind
nfs-server_1                     | ----> starting exportfs
nfs-server_1                     | ----> starting rpc.mountd on port 32767
nfs-server_1                     | ----> starting rpc.nfsd on port 2049 with 4 server thread(s)
nfs-server_1                     | ----> terminating rpcbind
nfs-server_1                     | ----> all services started normally
nfs-server_1                     |
nfs-server_1                     | ==================================================================
nfs-server_1                     |       SERVER STARTUP COMPLETE
nfs-server_1                     | ==================================================================
nfs-server_1                     | ----> list of enabled NFS protocol versions: 4.2, 4.1, 4
nfs-server_1                     | ----> list of container exports:
nfs-server_1                     | ---->   /export         *(rw,fsid=0,no_subtree_check,sync)
nfs-server_1                     | ---->   /export/debian  *(rw,nohide,insecure,no_subtree_check,sync)
nfs-server_1                     | ----> list of container ports that should be exposed: 2049 (TCP)
nfs-server_1                     |
nfs-server_1                     | ==================================================================
nfs-server_1                     |       READY AND WAITING FOR NFS CLIENT CONNECTIONS
nfs-server_1                     | ==================================================================

And this is my docker-compose.yml

nfs-server:
  image: erichough/nfs-server
  ports:
    - 127.0.11.20:2049:2049
  volumes:
    - ./nfs/exports.txt:/etc/exports:ro
    - ./data/nfs-export:/export
    - /lib/modules:/lib/modules:ro
  cap_add:
    - SYS_ADMIN
    - SYS_MODULE
  environment:
    NFS_VERSION: 4.2
    NFS_DISABLE_VERSION_3: 1
  security_opt:
    - apparmor=erichough-nfs

idmapping new files always nobody

Hello, I have set up krb5-kerberised NFSv4 with ID mapping. It appears to work as expected, except that files created on the client are owned by nobody.

I have a directory on the server; from the export point downwards it's owned by a user, say foo, with uid and gid 2000. I have an attached client that has the export mounted, and all the files within appear owned by foo (but the local uid/gid is 1000).

The directory appears on the client

drwxr-xr-x 2 foo foo 9 Dec  8 12:54 /home/foo/nfs

and

drwxr-xr-x 2 1000 1000 9 Dec  8 12:54 /home/foo/nfs

I can read and write existing files in the directory without problems.

So far, so good. Now for the problem...

If I try to create new files or directories in the directory (as user foo), I get permission denied. If I chmod 777 the directory on the server, then I can write to it, but the files written are owned by nobody.

If I try as root, the files are also owned by nobody, but I think that's due to root squash. The directory is exported like this:

entering poll
---->   /foo	*(rw,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=krb5p,rw,secure,root_squash,no_all_squash)
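
One classic cause of everything mapping to nobody, offered here as a guess rather than a diagnosis: rpc.idmapd on the client and the server disagree on the NFSv4 domain, so owner names fail to translate and fall back to the anonymous user. The domain is configured in /etc/idmapd.conf and must match on both ends; a sketch (example.com is a placeholder):

[General]
Domain = example.com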

feature: ARM support

Currently the image is only built for x86-64. We should build an ARM-based image that can run on, for example, Raspberry Pi.

Getting "Stale file handle" in kubernetes deployment

Hi,

I've been using this image for a while in a Kubernetes deployment and it's working fine. I can create my NFS POD and connect it in other PODs.

However, there is one specific situation where it fails. When the Kubernetes node that hosts the NFS server POD starts to run out of space, i.e. reaches 85% or more usage, Kubernetes starts to put PODs into the evicted state. This is normal Kubernetes behaviour: it evicts the PODs and keeps trying to reschedule new ones. Once I clean up files on the node to free space, all the PODs return to a stable running state. However, after that, the NFS share starts to report Stale file handle in the shared folders that the PODs use to connect to the NFS POD.

Any idea why it is failing? What I understood by searching for this issue on the internet is that a stale NFS handle indicates that the client has a file open but the server no longer recognizes the file handle (https://serverfault.com/questions/617610/stale-nfs-file-handle-after-reboot). Shouldn't the NFS server container itself recover from this situation? It's important to note here too that the IPs in this case don't change even after the pod eviction, because they are backed by Kubernetes services.
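
A hedged suggestion rather than an answer: NFS file handles embed an identifier for the exported filesystem, and if the evicted server pod comes back on top of different backing storage that identifier can change, invalidating every handle the clients still hold. Pinning a fixed fsid in the export keeps the filesystem identifier stable across server restarts (though it cannot help if the underlying files themselves were lost during the eviction), e.g.:

/export *(rw,no_subtree_check,fsid=0)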

NFS client unable to connect

This might be a noob question, but I have two VMs running in the same bridged network. I configured VM1 as the NFS server:

$ sudo docker run \
-v /nfsroot:/nfsroot \
-v /etc/exports:/etc/exports:ro \
--cap-add SYS_ADMIN \
-p 2049:2049 \
--security-opt apparmor=erichough-nfs \
erichough/nfs-server

==================================================================
      SETTING UP ...
==================================================================
----> setup complete

==================================================================
      STARTING SERVICES ...
==================================================================
----> starting rpcbind
----> starting exportfs
----> starting rpc.mountd on port 32767
----> starting rpc.statd on port 32765 (outgoing from port 32766)
----> starting rpc.nfsd on port 2049 with 4 server thread(s)
----> all services started normally

==================================================================
      SERVER STARTUP COMPLETE
==================================================================
----> list of enabled NFS protocol versions: 4.2, 4.1, 4, 3
----> list of container exports:
---->   /nfsroot 10.0.0.0/24(ro,no_root_squash,no_subtree_check)
----> list of container ports that should be exposed:
---->   111 (TCP and UDP)
---->   2049 (TCP and UDP)
---->   32765 (TCP and UDP)
---->   32767 (TCP and UDP)

==================================================================
      READY AND WAITING FOR NFS CLIENT CONNECTIONS
==================================================================

Here's VM1's network status:

ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:76:8b:0e brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.94/24 brd 10.0.0.255 scope global dynamic noprefixroute ens33
       valid_lft 594618sec preferred_lft 594618sec
...

VM2 will reach VM1 through the interface ens33 at 10.0.0.94.

When I connect to the server using VM2, I get:

sudo mount 10.0.0.94:/nfsroot /mnt/shared_client
mount.nfs: requested NFS version or transport protocol is not supported

Some more verbose output:

$ sudo mount -v -t nfs -o vers=3,nfsvers=3 10.0.0.94:/nfsroot /mnt/shared_client
mount.nfs: timeout set for Sun Mar  8 17:34:18 2020
mount.nfs: trying text-based options 'vers=3,nfsvers=3,addr=10.0.0.94'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 10.0.0.94 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: requested NFS version or transport protocol is not supported
$ sudo mount -v -t nfs -o vers=4,nfsvers=4 10.0.0.94:/nfsroot /mnt/shared_client
mount.nfs: timeout set for Sun Mar  8 17:34:53 2020
mount.nfs: trying text-based options 'vers=4,addr=10.0.0.94,clientaddr=192.168.117.129'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'addr=10.0.0.94'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 10.0.0.94 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 100005, trying vers=3, prot=6
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: requested NFS version or transport protocol is not supported

Strangely, when I set up VM1's server by systemctl start nfs-kernel-server in the same network setup, VM2 CAN connect to the server without issue.

Am I missing anything?
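
Two hedged observations. First, the verbose v4 attempt shows clientaddr=192.168.117.129, i.e. the mount request does not originate from the exported 10.0.0.0/24 range, so the export may simply not match VM2's source address. Second, since only port 2049 is published, the NFSv3 fallback can never succeed, because v3 also needs rpcbind and mountd (111, 32765, 32767). Temporarily widening the export and forcing v4 would isolate the problem:

/nfsroot *(ro,no_root_squash,no_subtree_check)

sudo mount -t nfs4 10.0.0.94:/nfsroot /mnt/shared_client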

Squashing of everybody with Kerberos

Thank you for this nice and useful project!

Unfortunately, I'm stuck and desperate because of Kerberos.
I've followed the instructions to configure the container (created a keytab for the service, etc.). I can mount my share, and I can browse it if (and only if) my user has a ticket. So I guess my Kerberos configuration is fine.

The diagram illustrates my lab. All the Kerberos heavy lifting is done by the IPA server. The nodes are enrolled in the IPA, which in return provides the keytabs and the correct krb5.conf (I had to modify the one for nfs-server, because of some includes in the file that were not available in the container).

[diagram omitted]

BUT, all access from any user is "squashed": "root" or a non-privileged user (an IPA domain user with a very high UID like 20000), same effect; the server sees me as "nfsnobody".
I've collected some debug output from rpc.gssd on the client side, and nothing seems suspicious: principals and UIDs are correct. On the server side, even with the container in DEBUG, it's difficult for me to understand how rpc.svcgssd works. I see some "uid=-1" but I don't know if it's relevant. I tried to reproduce the container outside of Docker, but in my CentOS environment rpc.svcgssd seems to no longer exist (replaced by gssproxy).

If I don't use Kerberos (without the flag), my users are mapped to their legitimate UIDs (which is why I suspect Kerberos).

Important fact: all of this is NFSv3, because... legacy.

NFS & RPC are kind of alien technology to me; my problem could be an obvious mistake, so thank you for any advice, clue, or explanation.

Method 2 to provide exports fails

Even if I try to expose "/", it says "is not a container directory".
Mounting the same folders via -v .exports:/etc/exports does work.

And btw the Alpine Bug seems to be fixed ;-).

User permissions

I have a question about user specific permissions, specifically with this container build. Normally on NFS, the nfs-server directory/file permissions would map 1-1 with the nfs-client permissions, so if uid=1000 has write access on the server then uid=1000 on the client would also have write access.
At which levels do I need the users to exist? Just inside the container, just on the Docker host, or both? Also, do I need to run the container as a particular user? All of my other containers are built to run as a docker user that only has access to the areas of the host it needs to function. Any help on how NFS user mapping works with regard to this container would be great. Thank you!

standard_init_linux.go:211: exec user process caused "exec format error"

Getting this error on a Pi 4 running hypriotOS

standard_init_linux.go:211: exec user process caused "exec format error"

the command I run

sudo docker run -v /mnt/SSD:/ssd -v /etc/exports:/etc/exports:ro \
    --privileged -p 2049:2049 \
    --name nfs \
    erichough/nfs-server

environment

$ uname -a
Linux black-pearl 4.19.75-v7l+ #1270 SMP Tue Sep 24 18:51:41 BST 2019 armv7l GNU/Linux

$ docker version
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:37:22 2019
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:31:17 2019
  OS/Arch:          linux/arm
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
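
This is consistent with the "feature: ARM support" issue above: standard_init_linux.go's "exec format error" generally means the binaries in the image were built for a different CPU architecture, and the image is currently published for x86-64 only, so it cannot run on an armv7l Raspberry Pi.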

Cannot bind to port 111 in Docker for Windows

I am trying to set this up on Windows with Docker. When I start the container I get Error starting userland proxy: listen tcp 0.0.0.0:111: bind: address already in use.

It seems strange because there are no processes listening on port 111 on my Windows host OS. There are no other containers running either (I am checking with netstat -ano in PowerShell, and I cannot see port 111 anywhere).

I am stuck here because I do not know how to identify why the bind is not possible; to me it seems that port 111 should be free.

can't mount rpc_pipefs

I tried unsuccessfully to run the nfs-server image on an up-to-date Ubuntu LTS 18.04 host. The setup is pretty similar to the example given in the Usage section of the docs.

There appears to be a problem with the mounting of a file system.

The docker run command is:
$ docker run -v /srv/nfs:/nfs -v /srv/exports.nfs:/etc/exports:ro --cap-add SYS_ADMIN -p 2049:2049 erichough/nfs-server

Its output:
...
mount: rpc_pipefs is write-protected, mounting read-only
mount: cannot mount rpc_pipefs read-only
...

Discussion: necessary ports

Hi
I'd like to open a discussion about the ports necessary for the application to work. I used your package in a Kubernetes cluster to share data from one node (a physical host) to multiple workloads across several nodes.
For this, I registered a service so the container is accessible via DNS. Then I needed to forward the necessary ports. I tried several variants, but I ended up with these ports (NFSv4) to get it working:
TCP: 111, 2049, 32765, 32767
UDP: 111, 632, 646, 2049, 32765, 32767
You only forward 2049 to the bridged network. So I'm curious: any idea why this behavior happens? If it's normal behavior, maybe it should be added to the readme, as it costs some time to find out.
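
For reference, the README's NFSv3 example publishes 111, 2049, 32765, and 32767 over both TCP and UDP; the extra UDP ports observed here (632 and 646) are presumably outgoing ports chosen by statd/mountd (an assumption, not verified), which would matter only when traffic is filtered in both directions.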

Does not work on ubuntu server 20.04

While the container starts up properly without any errors, I'm unable to connect from any client.
While using the host's nfs-server.service everything works as expected.

I sadly have no idea how to debug any of this but will gladly help any way I can.
Not sure if this is connected to #41 maybe?

Infos

AppArmor: I disabled it and ultimately outright purged it (I don't need it).

Edit: A friend just told me that it's part of the kernel. I did test with apparmor=0, though, which didn't work either.

docker-compose

version: '3'

services:
  nfs:
    container_name: nfs
    image: erichough/nfs-server:latest
    network_mode: 'host'
    privileged: true
    volumes:
      # Config
      - '/docker/data/nfs/exports:/etc/exports:ro'
      # Shares
      - '/mnt/Backups:/Backups'
      - '/mnt/Documents:/Documents'
      - '/mnt/Multimedia:/Multimedia'
    restart: unless-stopped

Startup-log

==================================================================
      SETTING UP ...
==================================================================
----> log level set to DEBUG
----> will use 4 rpc.nfsd server thread(s) (1 thread per CPU)
----> /etc/exports is bind-mounted
----> kernel module nfs is loaded
----> kernel module nfsd is loaded
----> setup complete

==================================================================
      STARTING SERVICES ...
==================================================================
----> mounting rpc_pipefs filesystem onto /var/lib/nfs/rpc_pipefs
mount: mount('rpc_pipefs','/var/lib/nfs/rpc_pipefs','rpc_pipefs',0x00008000,'(null)'):0
----> mounting nfsd filesystem onto /proc/fs/nfsd
mount: mount('nfsd','/proc/fs/nfsd','nfsd',0x00008000,'(null)'):0
----> starting rpcbind
----> starting exportfs
exporting *:/Multimedia
exporting *:/Documents
exporting *:/Backups
----> starting rpc.mountd on port 32767
----> starting rpc.statd on port 32765 (outgoing from port 32766)
----> starting rpc.nfsd on port 2049 with 4 server thread(s)
rpc.nfsd: knfsd is currently down
rpc.nfsd: Writing version string to kernel: -2 +3 +4 +4.1 +4.2
rpc.nfsd: Created AF_INET TCP socket.
rpc.nfsd: Created AF_INET UDP socket.
rpc.nfsd: Created AF_INET6 TCP socket.
rpc.nfsd: Created AF_INET6 UDP socket.
rpc.statd: Version 2.3.4 starting
rpc.statd: Flags: No-Daemon Log-STDERR TI-RPC 
rpc.statd: Local NSM state number: 3
rpc.statd: Running as root.  chown /var/lib/nfs to choose different user
rpc.statd: Waiting for client connections
----> all services started normally

==================================================================
      SERVER STARTUP COMPLETE
==================================================================
----> list of enabled NFS protocol versions: 4.2, 4.1, 4, 3
----> list of container exports:
---->   /Multimedia	*(rw,sync,wdelay,hide,crossmnt,insecure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,insecure,root_squash,no_all_squash)
---->   /Documents	*(rw,sync,wdelay,hide,crossmnt,insecure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,insecure,root_squash,no_all_squash)
---->   /Backups	*(rw,sync,wdelay,hide,crossmnt,insecure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,insecure,root_squash,no_all_squash)
----> list of container ports that should be exposed:
---->   111 (TCP and UDP)
---->   2049 (TCP and UDP)
---->   32765 (TCP and UDP)
---->   32767 (TCP and UDP)

==================================================================
      READY AND WAITING FOR NFS CLIENT CONNECTIONS
==================================================================

feature: Docker for Mac support

Hello,
I try to run this docker conainer with the following docker-compose.yml file:

version: '3'

services:
  nfs-server:
    image: erichough/nfs-server
    volumes:
      - nfs:/filesystem
      - ./nfs/exports.txt:/etc/exports:ro
    ports:
      - 20490:2049
    #    privileged: true
    cap_add:
      - CAP_SYS_ADMIN

volumes:
  nfs:

My OS: macOS Mojave (10.14.5 (18F132))
docker-compose version 1.24.0, build 0aa59064
docker-py version: 3.7.2
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.1.0j 20 Nov 2018

Client: Docker Engine - Community
Version: 19.03.0-rc2
API version: 1.40
Go version: go1.12.5

I receive the following error:

Attaching to dev-setup_nfs-server_1
nfs-server_1  | 
nfs-server_1  | ==================================================================
nfs-server_1  |       SETTING UP ...
nfs-server_1  | ==================================================================
nfs-server_1  | ----> kernel module nfs is missing
nfs-server_1  | ----> 
nfs-server_1  | ----> ERROR: nfs module is not loaded in the Docker host's kernel (try: modprobe nfs)
nfs-server_1  | ----> 
dev-setup_nfs-server_1 exited with code 1

How can I get this to work? :)
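
For context, an educated guess rather than an official answer: Docker Desktop for Mac runs containers inside a small LinuxKit virtual machine, and that VM's kernel does not ship the nfs/nfsd modules, so there is nothing for modprobe to load. The error is therefore expected on macOS until the VM kernel gains NFS server support.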

Problem running on ext4+overlay2 docker data directory

I've run into a strange problem with this image, and I've yet to figure out what is going on. For some strange reason I can no longer successfully use this image (though I could several kernels ago; I'm still using the same docker-compose I had about a year and a half ago).

I'm out of ideas; I cannot figure out how the same image (the SHAs match) can work with one Docker data directory and not the other. What am I missing to make this container work?

Observations

Not working

When I run it in my normal docker data folder I get:

Using /var/lib/docker:

# docker-compose down
Removing nfs_nfs_1 ... done
Removing network nfs_default
# docker-compose up
... (everything starts up the same)
# ls /mnt/data

# mount -vvv XservernameX:/nfs /mnt/data
mount.nfs: timeout set for Mon Feb 10 12:30:58 2020
mount.nfs: trying text-based options 'vers=4.2,addr=10.XX.XX.59,clientaddr=10.XX.XX.59'
mount.nfs: mount(2): No such file or directory
mount.nfs: trying text-based options 'addr=10.XX.XX.59'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: portmap query retrying: RPC: Program not registered
mount.nfs: prog 100003, trying vers=3, prot=17
mount.nfs: portmap query failed: RPC: Program not registered
mount.nfs: mounting XservernameX:/nfs failed, reason given by server: No such file or directory

Clean docker dir Working

However, I accidentally discovered that if I change the docker data dir, and restart the daemon, it works all of a sudden 😮

Using /var/lib/docker2

# docker-compose down
Removing nfs_nfs_1 ... done
Removing network nfs_default
# docker-compose up
... (everything starts up the same)
# ls /mnt/data

# mount -vvv XservernameX:/nfs /mnt/data
mount.nfs: timeout set for Mon Feb 10 12:24:51 2020
mount.nfs: trying text-based options 'vers=4.2,addr=10.XX.XX.59,clientaddr=10.XX.XX.59'
# ls /mnt/data
foobar
# umount -vvv /mnt/data
/mnt/data: nfs4 mount point detected
/mnt/data: umounted

Here is the docker-compose file I'm using

version: '2.3'
services:
  nfs:
    image: erichough/nfs-server
    ports:
      - '2049:2049'
    cap_add:
      - SYS_ADMIN
      - SYS_MODULE
    volumes:
      - type: bind
        source: /opt/nfs_test
        target: /nfs
        read_only: false
      - type: bind
        source: /lib/modules
        target: /lib/modules
        read_only: true
    environment:
      - NFS_EXPORT_0=/nfs *(rw,insecure,no_subtree_check,fsid=1,no_root_squash,async)
      - NFS_DISABLE_VERSION_3=1

Other things I tried that did not work

  • docker run -p 2049:2049 --cap-add SYS_ADMIN --cap-add SYS_MODULE -v /opt/nfs_test:/nfs:rw -v /lib/modules:/lib/modules:ro -e NFS_EXPORT_0='/nfs *(rw,insecure,no_subtree_check,fsid=1,no_root_squash,async)' -e NFS_DISABLE_VERSION_3=1 erichough/nfs-server
  • --privileged
  • Pulled latest erichough/nfs-server image
  • Removed docker "network", which the docker version of the command didn't even use.
  • network=host and 127.0.0.1
  • Tried not setting NFS_DISABLE_VERSION_3, but that doesn't really work because systemd uses port 111, so that messes up rpc-statd

A workaround

Now this is surprising, but I discovered that I could:

  1. Start the nfs container with /var/lib/docker2
  2. Mount /mnt/data. Success
  3. Leave the dir mounted, and stop the docker
  4. Switch the docker daemon back to /var/lib/docker
  5. Start the nfs container with /var/lib/docker
  6. ls /mnt/data works using the /var/lib/docker now.

This is not a great workaround, and it seems to me to raise even more questions than it answers.

Other notes

  • Running kernel: 5.4.13-201.fc31.x86_64
  • I noticed I can no longer use the auto-load nfsd trick in the container; if I do, it hangs forever, and I eventually have to reboot my machine and modprobe on the host before running the container. I suspect that this is due to me having a newer kernel, but I have not tested this yet. I'm beginning to wonder whether the kernel the image is built with is too different from my kernel to run the modprobe command.

example of docker-compose.yml

Can someone please provide a sample docker-compose with a couple of services where docker-nfs-server is configured and used?

problem on mounting shared volume

I'm trying to mount a network-shared directory for sharing HTML files and certificates. I'm using the setup below:

version: '3.8'
services:
    nfs:
        image: erichough/nfs-server
        restart: unless-stopped
        environment:
            - NFS_EXPORT_0='/share/data0                  *(rw,no_subtree_check)'
            - NFS_EXPORT_1='/share/data1                  *(rw,no_subtree_check)'
        volumes:
            - ./html1:/share/data1
            - /certs/etc/letsencrypt/live:/share/data0
    proxy:
        image: nginx
        restart: unless-stopped
        depends_on:
            - nfs
        ports:
          - '80:80'
          - '443:443'
        volumes:
           - ./conf_data/proxy_template:/etc/nginx_templates
           - share1:/www/data1
           - share0:/etc/letsencrypt/live
    web_1:
        image: httpd
        restart: unless-stopped
        depends_on:
            - proxy
        volumes:
            - ./conf_data/mqtt_conf/mosquitto.conf:/mosquitto/config/mosquitto.conf
            - ./conf_data/mqtt_conf/more/:/mosquitto/config/more/
volumes:
    share0:
        type: "nfs"
        o: "addr=nfs,nolock,soft,rw"
        device: ":/share/data0"
    share1:
        type: "nfs"
        o: "addr=nfs,nolock,soft,rw"
        device: ":/share/data1"

but when I run docker-compose up, it gives me this error:

ERROR: The Compose file './docker-compose.yml' is invalid because:
volumes.share0 value 'device', 'o', 'type' do not match any of the regexes: '^x-'
volumes.share1 value 'device', 'o', 'type' do not match any of the regexes: '^x-'

What volume setup is needed to mount the NFS share?

Please provide an example of mounting the volume in a Compose setup.
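
The error itself is a Compose schema problem: type, o, and device are options of the local volume driver and must be nested under driver_opts. A sketch of corrected volume definitions; note that addr= is resolved by the Docker host at mount time, so a host-reachable IP (192.168.1.10 here is a hypothetical placeholder) is safer than the nfs service name:

volumes:
    share0:
        driver_opts:
            type: "nfs"
            o: "addr=192.168.1.10,nolock,soft,rw"   # hypothetical host IP
            device: ":/share/data0"
    share1:
        driver_opts:
            type: "nfs"
            o: "addr=192.168.1.10,nolock,soft,rw"   # hypothetical host IP
            device: ":/share/data1"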

rpc.nfsd hangs on container start up

I'll preface this by saying that this is probably a bug caused by me not knowing NFS things. I'm using the following compose file to spin up the container.

version: "3.7"
services:
  nfs-server:
    container_name: nfs-server
    image: erichough/nfs-server:latest
    ports:
      - "2049:2049"
    volumes:
      - type: bind
        source: /mnt/nfstest
        target: /shares/nfstest

      - type: bind
        source: /home/cubeadmin/nfs/nfstest-exports-docker
        target: /etc/exports
        read_only: true
    environment:
      - NFS_DISABLE_VERSION_3=YES
      - NFS_LOG_LEVEL=DEBUG
    cap_add:
      - SYS_ADMIN
    security_opt:
      - apparmor=erichough-nfs

Exports file looks like this:

/shares/nfstest 192.168.1.0/24(fsid=0,rw,no_root_squash,no_subtree_check)

I am able to docker-compose down/up just fine until I mount the share from another system. After I do that, regardless of whether or not I unmount from the other system, the next time I try to restart the container it hangs on starting rpc.nfsd on port 2049 with 4 server thread(s). I'm unable to stop the container or even kill -9 the rpc.nfsd process at that point. It must be some sort of low-level deadlock, but my brain has been too small to find the root cause thus far. 💯

nfsv4 not working

I don't seem to be able to mount my share if I disable nfsv3 support in the container.

sudo mount -t nfs4 dev.ryan-ewen.com:/test /home/rewen/test

If I leave NFSv3 support enabled, I still cannot mount my share using that command unless the NFS server has all 4 ports forwarded. It may be using NFSv3 even though I specify NFSv4? Not sure.

Unable to run on ubuntu 18.10

First off, I think this project is pretty cool, and I hope it helps me with my current problem.

Anyway, I'm following the guide at https://github.com/ehough/docker-nfs-server/blob/develop/doc/feature/apparmor.md and when I run step 3 (apparmor_parser) I get the following error:

AppArmor parser error for /opt/nfs/apparmor_nfs.txt in /opt/nfs/apparmor_nfs.txt at line 3: Could not open 'abstractions/lxc/container-base'

I have apparmor-utils installed and have also tried installing both apparmor-profiles and apparmor-profiles-extra. No joy.

Any help would be greatly appreciated

automatic module loading not working for nfsd

OS: Ubuntu 18.04 / 20.04
Kernel: 5.3.0-53-generic
Container command (example): docker run --rm -it -v /lib/modules:/lib/modules:ro --security-opt apparmor=erichough-nfs ubuntu:bionic modprobe -v nfsd

Logs:

Container:

insmod /lib/modules/5.3.0-53-generic/kernel/net/sunrpc/auth_gss/auth_rpcgss.ko 
insmod /lib/modules/5.3.0-53-generic/kernel/fs/nfsd/nfsd.ko 
Killed

Host:

Mai 27 08:52:15 host kernel: Installing knfsd (copyright (C) 1996 [email protected]).
Mai 27 08:52:15 host kernel: BUG: kernel NULL pointer dereference, address: 0000000000000058
Mai 27 08:52:15 host kernel: #PF: supervisor write access in kernel mode
Mai 27 08:52:15 host kernel: #PF: error_code(0x0002) - not-present page
Mai 27 08:52:15 host kernel: PGD 0 P4D 0 
Mai 27 08:52:15 host kernel: Oops: 0002 [#1] SMP NOPTI
Mai 27 08:52:15 host kernel: CPU: 3 PID: 5964 Comm: modprobe Not tainted 5.3.0-53-generic #47~18.04.1-Ubuntu
Mai 27 08:52:15 host kernel: Hardware name: LENOVO 20N8005UGE/20N8005UGE, BIOS R0YET41W (1.24 ) 12/18/2019
Mai 27 08:52:15 host kernel: RIP: 0010:nfsd_fill_super+0x71/0x90 [nfsd]
Mai 27 08:52:15 host kernel: Code: 85 c0 89 c3 74 09 89 d8 5b 41 5c 41 5d 5d c3 49 8b 7c 24 68 31 f6 48 c7 c2 70 b4 31 c1 e8 97 fe ff ff 48 3d 00 f0 ff ff 77 0d <49> 89 45 58 89 d8 5b 41 5c 41 5d 5d c3 89 c3 eb cb 0f 1f 40 00 66
Mai 27 08:52:15 host kernel: RSP: 0018:ffffb5f881c97aa8 EFLAGS: 00010287
Mai 27 08:52:15 host kernel: RAX: ffff9c714033a900 RBX: 0000000000000000 RCX: 0000000000000002
Mai 27 08:52:15 host kernel: RDX: 0000000000000000 RSI: 0000000000000100 RDI: ffff9c71d31eed28
Mai 27 08:52:15 host kernel: RBP: ffffb5f881c97ac0 R08: ffff9c714033a920 R09: 0000000000000000
Mai 27 08:52:15 host kernel: R10: 0000000000000000 R11: fefefefefefefeff R12: ffff9c71e8bd8800
Mai 27 08:52:15 host kernel: R13: 0000000000000000 R14: ffffffffc12e24d0 R15: ffff9c71734a9c80
Mai 27 08:52:15 host kernel: FS:  00007fd98be92540(0000) GS:ffff9c71f12c0000(0000) knlGS:0000000000000000
Mai 27 08:52:15 host kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mai 27 08:52:15 host kernel: CR2: 0000000000000058 CR3: 00000003c60fa001 CR4: 00000000003606e0
Mai 27 08:52:15 host kernel: Call Trace:
Mai 27 08:52:15 host kernel:  vfs_get_super+0x5b/0xe0
Mai 27 08:52:15 host kernel:  ? vfs_parse_fs_param+0xdc/0x1c0
Mai 27 08:52:15 host kernel:  nfsd_fs_get_tree+0x2c/0x30 [nfsd]
Mai 27 08:52:15 host kernel:  vfs_get_tree+0x2a/0x100
Mai 27 08:52:15 host kernel:  fc_mount+0x12/0x40
Mai 27 08:52:15 host kernel:  vfs_kern_mount.part.31+0x76/0x90
Mai 27 08:52:15 host kernel:  vfs_kern_mount+0x13/0x20
Mai 27 08:52:15 host kernel:  nfsd_init_net+0x101/0x140 [nfsd]
Mai 27 08:52:15 host kernel:  ops_init+0x44/0x120
Mai 27 08:52:15 host kernel:  register_pernet_operations+0xed/0x200
Mai 27 08:52:15 host kernel:  ? trace_event_define_fields_nfsd_stateid_class+0xb3/0xb3 [nfsd]
Mai 27 08:52:15 host kernel:  register_pernet_subsys+0x28/0x40
Mai 27 08:52:15 host kernel:  init_nfsd+0x22/0xcbc [nfsd]
Mai 27 08:52:15 host kernel:  do_one_initcall+0x4a/0x1fa
Mai 27 08:52:15 host kernel:  ? _cond_resched+0x19/0x40
Mai 27 08:52:15 host kernel:  ? kmem_cache_alloc_trace+0x165/0x220
Mai 27 08:52:15 host kernel:  do_init_module+0x5f/0x227
Mai 27 08:52:15 host kernel:  load_module+0x1aa4/0x2140
Mai 27 08:52:15 host kernel:  __do_sys_finit_module+0xfc/0x120
Mai 27 08:52:15 host kernel:  ? __do_sys_finit_module+0xfc/0x120
Mai 27 08:52:15 host kernel:  __x64_sys_finit_module+0x1a/0x20
Mai 27 08:52:15 host kernel:  do_syscall_64+0x5a/0x130
Mai 27 08:52:15 host kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Mai 27 08:52:15 host kernel: RIP: 0033:0x7fd98b9ba839
Mai 27 08:52:15 host kernel: Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 1f f6 2c 00 f7 d8 64 89 01 48
Mai 27 08:52:15 host kernel: RSP: 002b:00007fffb653b848 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
Mai 27 08:52:15 host kernel: RAX: ffffffffffffffda RBX: 000055e6c1078d80 RCX: 00007fd98b9ba839
Mai 27 08:52:15 host kernel: RDX: 0000000000000000 RSI: 000055e6bf45fcee RDI: 0000000000000004
Mai 27 08:52:15 host kernel: RBP: 000055e6bf45fcee R08: 0000000000000000 R09: 0000000000000000
Mai 27 08:52:15 host kernel: R10: 0000000000000004 R11: 0000000000000246 R12: 0000000000000000
Mai 27 08:52:15 host kernel: R13: 000055e6c1078c80 R14: 0000000000040000 R15: 000055e6c1078d80
Mai 27 08:52:15 host kernel: Modules linked in: nfsd(+) auth_rpcgss nfsv3 nfs_acl nfsv4 nfs lockd grace fscache hid_generic snd_usb_audio usbhid hid snd_usbmidi_lib cdc_ether usbnet r8152 mii rfcomm xt_nat veth vxlan ip6_udp_tunnel udp_tunnel xt_mark nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype br_netfilter xt_CHECKSUM iptable_mangle xt_MASQUERADE iptable_nat nf_nat bridge stp llc ccm cmac aufs overlay bnep zram binfmt_misc nls_iso8859_1 mei_hdcp intel_rapl_msr sof_pci_dev snd_sof_intel_hda_common snd_soc_hdac_hda snd_sof_intel_hda snd_sof_intel_byt snd_sof_intel_ipc snd_sof snd_sof_xtensa_dsp snd_hda_ext_core snd_soc_acpi_intel_match snd_soc_acpi snd_soc_core x86_pkg_temp_thermal intel_powerclamp iwlmvm snd_compress coretemp ac97_bus snd_hda_codec_hdmi mac80211 snd_pcm_dmaengine kvm_intel snd_hda_codec_conexant snd_hda_codec_generic libarc4 kvm snd_hda_intel irqbypass joydev snd_intel_dspcfg intel_cstate snd_hda_codec input_leds intel_rapl_perf iwlwifi snd_hda_core uvcvideo btusb
Mai 27 08:52:15 host kernel:  serio_raw snd_hwdep thinkpad_acpi btrtl snd_seq_midi intel_wmi_thunderbolt btbcm v4l2_common snd_pcm videobuf2_vmalloc btintel wmi_bmof videobuf2_memops snd_seq_midi_event nvram rtsx_pci_ms videobuf2_v4l2 snd_rawmidi cfg80211 ledtrig_audio videobuf2_common bluetooth memstick mei_me videodev snd_seq mc ucsi_acpi processor_thermal_device typec_ucsi ecdh_generic intel_rapl_common snd_seq_device mei ecc typec intel_soc_dts_iosf intel_pch_thermal snd_timer snd soundcore int3403_thermal int340x_thermal_zone int3400_thermal acpi_thermal_rel acpi_pad mac_hid ip6table_filter ip6_tables xt_tcpudp xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 libcrc32c iptable_filter bpfilter sch_fq_codel parport_pc sunrpc ppdev lp parport ip_tables x_tables autofs4 algif_skcipher af_alg dm_crypt crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel i915 rtsx_pci_sdmmc i2c_algo_bit drm_kms_helper aes_x86_64 crypto_simd syscopyarea nvme sysfillrect cryptd sysimgblt glue_helper
Mai 27 08:52:15 host kernel:  fb_sys_fops psmouse drm r8169 nvme_core realtek rtsx_pci wmi pinctrl_cannonlake video pinctrl_intel
Mai 27 08:52:15 host kernel: CR2: 0000000000000058
Mai 27 08:52:15 host kernel: ---[ end trace 737920d87c3aa490 ]---
Mai 27 08:52:15 host kernel: RIP: 0010:nfsd_fill_super+0x71/0x90 [nfsd]
Mai 27 08:52:15 host kernel: Code: 85 c0 89 c3 74 09 89 d8 5b 41 5c 41 5d 5d c3 49 8b 7c 24 68 31 f6 48 c7 c2 70 b4 31 c1 e8 97 fe ff ff 48 3d 00 f0 ff ff 77 0d <49> 89 45 58 89 d8 5b 41 5c 41 5d 5d c3 89 c3 eb cb 0f 1f 40 00 66
Mai 27 08:52:15 host kernel: RSP: 0018:ffffb5f881c97aa8 EFLAGS: 00010287
Mai 27 08:52:15 host kernel: RAX: ffff9c714033a900 RBX: 0000000000000000 RCX: 0000000000000002
Mai 27 08:52:15 host kernel: RDX: 0000000000000000 RSI: 0000000000000100 RDI: ffff9c71d31eed28
Mai 27 08:52:15 host kernel: RBP: ffffb5f881c97ac0 R08: ffff9c714033a920 R09: 0000000000000000
Mai 27 08:52:15 host kernel: R10: 0000000000000000 R11: fefefefefefefeff R12: ffff9c71e8bd8800
Mai 27 08:52:15 host kernel: R13: 0000000000000000 R14: ffffffffc12e24d0 R15: ffff9c71734a9c80
Mai 27 08:52:15 host kernel: FS:  00007fd98be92540(0000) GS:ffff9c71f12c0000(0000) knlGS:0000000000000000
Mai 27 08:52:15 host kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mai 27 08:52:15 host kernel: CR2: 0000000000000058 CR3: 00000003c60fa001 CR4: 00000000003606e0

Looks like this issue: docker/for-linux#996

Afterwards, my machine hangs and needs a cold reset.

Modprobing the nfsd module directly on the host works without issues.

Unable to connect to nfs share from docker host

I'm very new to nfs so I may be doing this wrong, but here is my run command:

docker run                                                         \
  -e NFS_EXPORT_0='/nfs/share 192.168.1.111(rw,no_subtree_check)'  \
  -v /home/shivang/share:/nfs/share                                \
  --cap-add SYS_ADMIN                                              \
  -p 2049:2049                                                     \
  erichough/nfs-server

Essentially, I created a 'share' folder that I want to use as the nfs share. 192.168.1.111 is my local IP. The docker container seems to start up correctly ("READY AND WAITING FOR CONNECTIONS ON PORT 2049")

But attempting to mount the nfs share from my host gives me the error:

mount.nfs: access denied by server while mounting 172.17.0.2:/nfs/share

I tried exec'ing into the container and pinging my host's IP (192.168.1.111), and it works, so I'm sure the container can see my host; it just denies the mount for some reason. Am I configuring this wrong? Any help would be appreciated!
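
A hedged guess at the cause: the export only allows 192.168.1.111, but when you mount the container IP (172.17.0.2) the request arrives from the Docker bridge gateway (typically 172.17.0.1), which the export does not match, hence the denial. Widening the export to cover the bridge network, or mounting via the host IP so the source address is 192.168.1.111, should confirm it:

docker run \
  -e NFS_EXPORT_0='/nfs/share 192.168.1.111(rw,no_subtree_check) 172.17.0.0/16(rw,no_subtree_check)' \
  -v /home/shivang/share:/nfs/share \
  --cap-add SYS_ADMIN \
  -p 2049:2049 \
  erichough/nfs-server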

'statd failed'

==================================================================
      SETTING UP
==================================================================
----> building /etc/exports
----> will export /nfs 10.90.100.0/24(rw,no_subtree_check)
----> will export 1 filesystem(s)
----> checking for presence of kernel module: nfs
----> checking for presence of kernel module: nfsd
----> setup complete

==================================================================
      STARTING SERVICES
==================================================================
----> mounting rpc_pipefs onto /var/lib/nfs/rpc_pipefs
mount: rpc_pipefs mounted on /var/lib/nfs/rpc_pipefs.
----> mounting nfsd onto /proc/fs/nfsd
mount: nfsd mounted on /proc/fs/nfsd.
----> starting rpcbind
----> exporting filesystems
exporting 10.99.113.0/24:/nfs
----> starting rpc.mountd for NFS version 4.2 on port 32767
----> starting statd on port 32765 (outgoing connections on port 32766)
----> statd failed

==================================================================
      TERMINATING ...
==================================================================
----> rpc.svcgssd was not running
----> stopping nfsd
----> rpc.idmapd was not running
----> rpc.statd was not running
----> killing rpc.mountd
----> un-exporting filesystems
----> rpcbind was not running
----> un-mounting nfsd from /proc/fs/nfsd
umount: /proc/fs/nfsd (nfsd) unmounted
----> un-mounting rpc_pipefs from /var/lib/nfs/rpc_pipefs
umount: /var/lib/nfs/rpc_pipefs (rpc_pipefs) unmounted

==================================================================
      TERMINATED
==================================================================

Client error 'mounting ... failed, reason given by server: No such file or directory'

Apologies in advance for the amateurish question. I know that there must be something that I have done wrong but I am unsure what. If someone can help it would be appreciated.

On my docker server (192.168.1.39) I am running the following.

docker run \
-v /home/andrew/docker.uat/nfs:/container/path/foo \
-e NFS_EXPORT_0='/container/path/foo *(ro,no_subtree_check)'  \
-e NFS_DISABLE_VERSION_3=1 \
-e NFS_LOG_LEVEL=DEBUG \
--cap-add SYS_ADMIN \
--cap-add SYS_MODULE \
-p 2049:2049 \
erichough/nfs-server

The output when I run the above is the following.

==================================================================
      SETTING UP ...
==================================================================
----> log level set to DEBUG
----> will use 8 rpc.nfsd server thread(s) (1 thread per CPU)
----> building /etc/exports from environment variables
----> collected 1 valid export(s) from NFS_EXPORT_* environment variables
----> kernel module nfs is loaded
----> kernel module nfsd is loaded
----> setup complete

==================================================================
      STARTING SERVICES ...
==================================================================
----> mounting rpc_pipefs filesystem onto /var/lib/nfs/rpc_pipefs
mount: mount('rpc_pipefs','/var/lib/nfs/rpc_pipefs','rpc_pipefs',0x00008000,'(null)'):0
----> mounting nfsd filesystem onto /proc/fs/nfsd
mount: mount('nfsd','/proc/fs/nfsd','nfsd',0x00008000,'(null)'):0
----> starting rpcbind
----> starting exportfs
exporting *:/container/path/foo
----> starting rpc.mountd on port 32767
----> starting rpc.nfsd on port 2049 with 8 server thread(s)
rpc.nfsd: knfsd is currently down
rpc.nfsd: Writing version string to kernel: -2 -3 +4 +4.1 +4.2
rpc.nfsd: Created AF_INET TCP socket.
rpc.nfsd: Created AF_INET UDP socket.
rpc.nfsd: Created AF_INET6 TCP socket.
rpc.nfsd: Created AF_INET6 UDP socket.
----> terminating rpcbind
----> all services started normally

==================================================================
      SERVER STARTUP COMPLETE
==================================================================
----> list of enabled NFS protocol versions: 4.2, 4.1, 4
----> list of container exports:
---->   /container/path/foo     *(ro,sync,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash)
----> list of container ports that should be exposed: 2049 (TCP)

==================================================================
      READY AND WAITING FOR NFS CLIENT CONNECTIONS
==================================================================

But when I try to mount the folder from the client machine (192.168.1.189) I get the following.

showmount -e 192.168.1.39
clnt_create: RPC: Program not registered

mount -vvv -t nfs -o vers=4,port=2049 192.168.1.39:/container/path/foo /mnt/test/
mount.nfs: timeout set for Sun Nov 8 09:22:36 2020
mount.nfs: trying text-based options 'port=2049,vers=4.2,addr=192.168.1.39,clientaddr=192.168.1.189'
mount.nfs: mount(2): No such file or directory
mount.nfs: mounting 192.168.1.39:/container/path/foo failed, reason given by server: No such file or directory

I feel like it has something to do with the RPC (but I only have a basic understanding of it). However when I set up a native installation (not docker) of the nfs-server package on my server then my client has no problem connecting to it.

I did log into the container and confirmed that the path /container/path/foo exists. While in the container I also confirmed that the /etc/exports was there and correct.

So I am unsure what the problem is or how to follow up any more (I have spent a day on Google with this problem and have not gotten anywhere). I have tried several variations of the above.

This is only for a home server setup but if anyone can suggest how I would appreciate it.

Thank you
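
Two hedged client-side observations, neither guaranteed to explain the "No such file or directory": showmount talks to rpcbind/rpc.mountd (port 111), which the docker run above never publishes, so "RPC: Program not registered" is expected for a v4-only server; and the mount options appear to have dropped an "=" in port=2049. A sketch of a v4-only mount that sidesteps both (assuming the server is reachable at 192.168.1.39):

mount -t nfs4 -o port=2049 192.168.1.39:/container/path/foo /mnt/test/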

apparmor_parser fails on Ubuntu 18.04

Hi, I was trying to create an NFS server container following your instructions, but it failed here.
$ sudo apparmor_parser -r -W /home/lshi/extra/docker/nfs-server-alpine/erichough-nfs

AppArmor parser error for /home/lshi/extra/docker/nfs-server-alpine/erichough-nfs in /home/lshi/extra/docker/nfs-server-alpine/erichough-nfs at line 3: Could not open 'abstractions/lxc/container-base'

----------------file erichough-nfs-------------------
#include <tunables/global>

profile erichough-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  mount fstype=nfs*,
  mount fstype=rpc_pipefs,
}
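
The parser error just means the included abstraction file doesn't exist on that host. A hedged workaround, assuming an Ubuntu host where the LXC packages ship /etc/apparmor.d/abstractions/lxc/container-base (the exact package name varies by release):

# install the LXC AppArmor abstractions, then reload the profile
sudo apt-get install -y lxc
sudo apparmor_parser -r -W /home/lshi/extra/docker/nfs-server-alpine/erichough-nfs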

Single quotes not removed from export environment variables

Hi,

I had the docker-compose block below, and the leading single quote prevented the directory from being read, which resulted in "WARNING: skipping NFS_EXPORT_0 environment variable since '/downloads is not a container directory"

    environment:
      - NFS_EXPORT_0='/downloads *(ro,all_squash)'

The culprit code is below:

local dir="${line_as_array[0]}"

if [[ ! -d "$dir" ]]; then

We can strip the single quotes easily (see below), lemme know, and I can put up a PR if you want. Might want to consider also stripping double quotes. Or we could make the change against the $line variable before we try to read it into an array.

local dir="${line_as_array[0]//\'/}"

Or lemme know if I'm missing something obvious here.
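
A slightly more general sketch of the same idea, stripping one matched pair of single or double quotes from the whole line before it is split (variable names follow the snippet above; untested against the real entrypoint):

# strip one layer of matching surrounding quotes, then split as before
if [[ "$line" =~ ^\'(.*)\'$ ]] || [[ "$line" =~ ^\"(.*)\"$ ]]; then
  line="${BASH_REMATCH[1]}"
fi
read -r -a line_as_array <<< "$line"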

Thanks

[Suggestion/PR] Support readiness flag for exports volume

Hello, thanks for this project first,

I plan to use it on a kubernetes cluster to share a volume (obviously) across pods, and ran into a small issue that might be addressable.

Situation

Here's the layout of the containers I want to run on a given host

host
    containers:
        - exports-filler
            - /data ----------------
        - nfs-server                |
            - /etc/exports ---------
    volumes:                        |
        - data  --------------------

The problem

I want the exports-filler container to start and finish initializing itself (successfully) before the nfs-server starts (or at least boots up the whole exports setup)

Turns out k8s doesn't have a feature to order containers within a pod (I know, I also assumed it would support this).

So I can't say "start exports-filler, wait for it to be up, then start nfs-server"

(N.B.: initContainers don't cut it in this case because exports-filler won't/mustn't complete, as it syncs with an upstream provider all the time)

Workaround

I made a quick workaround here for now, which involves:

  • loop-checking (with a small sleep) for a marker file in the exports volume
  • having the app container drop that file in the exports volume when it's done doing what it must
  • when the file is present, my overridden entrypoint triggers the normal one

Sample (naive) implementation here: https://github.com/ehough/docker-nfs-server/compare/develop...Tristan971:support-awaiting-for-exports-readiness?expand=1
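
In rough outline, the workaround looks like the sketch below (the marker path and variable name are placeholders; the entrypoint path matches the one seen elsewhere in these issues):

#!/usr/bin/env bash
# wait for a readiness marker dropped by the exports-filler container,
# then hand off to the image's normal entrypoint
ready_file="${NFS_READY_FILE:-/etc/exports.d/.ready}"   # hypothetical path/variable
until [[ -f "$ready_file" ]]; do
  echo "----> waiting for readiness marker at $ready_file"
  sleep 2
done
exec /usr/local/bin/entrypoint.sh "$@"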

First, is it something you might want to support in this? And if so, any idea of nicer ways to go about it? (I'm totally ok with doing the implementation, for what it's worth)

Thanks

Directory Level Security

I have a bunch of Swarm nodes running hostile code (think JupyterHub); each node will access its own directory on the NFS server. What's the best way to go with this? Export-file + node-IP isolation? A Docker stack (a single notebook-server container on each hostile node plus an NFS server on the NFS node)? I'm not sure which configuration is both simple to use and optimal performance-wise.
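
If export-level isolation turns out to be enough, one hedged sketch is one export line per node, each restricted to that node's address (the paths and IPs below are placeholders):

  -e NFS_EXPORT_0='/exports/node-a 10.0.0.11(rw,no_subtree_check)' \
  -e NFS_EXPORT_1='/exports/node-b 10.0.0.12(rw,no_subtree_check)' \

Note that IP-based export rules are only as strong as the surrounding network: a hostile node that can spoof another node's address crosses the boundary, which is the argument for the per-stack layout (one NFS server per node) despite its extra moving parts.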

mount.nfs: access denied by server while mounting 172.17.0.2:/mnt/nfstest/

On my Docker server I run the following, and it starts properly:

sudo docker run \
  -v /mnt/nfstest/:/mnt/nfstest \
  -v /etc/exports:/etc/exports:ro \
  --privileged \
  -p 2049:2049 \
  --security-opt apparmor=erichough-nfs \
  erichough/nfs-server

I am running an Ubuntu 20.04 virtual machine on my Windows 10 host, and I run this container inside the VM.
The problem is that my container's IP is 172.17.0.2, while my home network is 192.168.0.0/24.
What bothers me is that I don't understand why the container has to be on the same network, or whether it has to be at all, given that it works at another OSI layer and should only require open ports.

I get this error from my VM

sudo mount 172.17.0.2:/ /mnt/nfstest
mount.nfs: access denied by server while mounting 172.17.0.2: /mnt/nfstest

and from Windows I can't see the container or the shared volume.
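
Since -p 2049:2049 publishes the port on the VM, a hedged suggestion is to mount via the VM's own address on the 192.168.0.0/24 network rather than the container's bridge IP (the VM address below is a placeholder, and the export path is assumed to match /etc/exports):

sudo mount -t nfs4 192.168.0.10:/mnt/nfstest /mnt/nfstest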

Cannot mount nfs to client | mount.nfs: access denied by server while mounting 10.148.0.3:/shared

Hi, first of all thanks for the project. It looks very clean and on point.

I believe I deployed the project correctly. Here is my config.

# exports.txt
/shared *(rw,sync,no_subtree_check,fsid=0,insecure,no_root_squash)

I've applied the AppArmor config and modified the docker run command accordingly. The server starts correctly.

# docker run command
$: docker run -d --restart=always \
  -p 10.148.0.3:2049:2049 \
  -v /lib/modules:/lib/modules:ro \
  -v `pwd`/exports.txt:/etc/exports:ro \
  -v `pwd`/shared:/shared \
  --cap-add SYS_ADMIN \
  --cap-add SYS_MODULE \
  --security-opt apparmor=erichough-nfs \
  --name nfs \
  erichough/nfs-server:2.2.1
# server startup log
==================================================================
      SETTING UP ...
==================================================================
----> setup complete

==================================================================
      STARTING SERVICES ...
==================================================================
----> starting rpcbind
----> starting exportfs
----> starting rpc.mountd on port 32767
----> starting rpc.statd on port 32765 (outgoing from port 32766)
----> starting rpc.nfsd on port 2049 with 16 server thread(s)
----> all services started normally

==================================================================
      SERVER STARTUP COMPLETE
==================================================================
----> list of enabled NFS protocol versions: 4.2, 4.1, 4, 3
----> list of container exports:
---->   /shared *(rw,sync,no_subtree_check,fsid=0,insecure,no_root_squash)
----> list of container ports that should be exposed:
---->   111 (TCP and UDP)
---->   2049 (TCP and UDP)
---->   32765 (TCP and UDP)
---->   32767 (TCP and UDP)

==================================================================
      READY AND WAITING FOR NFS CLIENT CONNECTIONS
==================================================================

Now when I try to mount the NFS share in a test Docker container like this, I get an error.

# docker run command of test container
$: docker run -dit \
  --cap-add SYS_ADMIN \
  --name test \
  ubuntu:16.04
$: docker exec -it test bash
$: mkdir /testing
$: mount -o rw 10.148.0.3:/shared /testing
mount.nfs: access denied by server while mounting 10.148.0.3:/shared

$: mount -vvv 10.148.0.3:/shared /testing
mount.nfs: timeout set for Wed Aug  7 12:01:15 2019
mount.nfs: trying text-based options 'vers=4,addr=10.148.0.3,clientaddr=172.17.0.19'
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 10.148.0.3:/shared

What am I missing? Looks like #20 is pretty much the same issue.

Any how to use on client side docs would be super helpful 🚀
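
One concrete, hedged guess: the export carries fsid=0, which makes /shared the NFSv4 pseudo-filesystem root, so a v4 client mounts paths relative to that root rather than the container path:

# with fsid=0, mount the pseudo-root "/", not "/shared"
mount -t nfs4 10.148.0.3:/ /testing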

[Permission denied] files not writable

export options:

# NFS_EXPORT_0
/data *(rw,sync,fsid=0,crossmnt,no_subtree_check)

mount options:

172.21.2.249:/ on /data type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.31.209.181,local_lock=none,addr=172.21.2.249)

The mount was successful, but why are the files read-only?

whoami
output: root

cat testfile
output: testcontent

rm testfile 
output: rm: cannot remove ‘testfile’: Permission denied

touch testfile 
output: touch: cannot touch ‘testfile’: Permission denied
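
A hedged reading: the export never sets no_root_squash, so root on the client is squashed to the anonymous user (nobody), which has no write permission on root-owned files. Two possible fixes, sketched (the chown target is distro-dependent):

# option 1: let client root through (only on trusted networks)
/data *(rw,sync,fsid=0,crossmnt,no_subtree_check,no_root_squash)

# option 2: keep root_squash, but make the tree writable by the squashed uid
chown -R nobody:nogroup /data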

Modprobe idea

I found a trick you could optionally include in your (maybe more advanced?) documentation: there is a way to make a Docker container run modprobe for you.

Dockerfile (image with modprobe in it)

FROM fedora:29

SHELL ["/usr/bin/env", "bash", "-euxvc"]

RUN dnf install -y kmod; \
    rm -rf /var/cache/yum/*

docker-compose.yml

version: '2.3'
services:
  modprobe:
    image: modprobe
    build:
      context: .
    volumes:
      - type: bind
        source: /lib/modules
        target: /lib/modules
        read_only: true
    restart: "no"
    command: bash -euc 'modprobe nfsd; modprobe nfs'
    cap_add:
      - SYS_MODULE

Now you just make your NFS server container depend on that modprobe container (maybe you'll have to add a wait routine instead of the check-and-fail on the lsmod check for nfs and nfsd?), and you have a setup that keeps running through reboots; see the compose sketch below.
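
For completeness, a sketch of that wiring (service layout assumed; note that depends_on only orders startup and does not wait for the modprobe commands to finish, hence the wait-routine caveat above):

  nfs:
    image: erichough/nfs-server
    cap_add:
      - SYS_ADMIN
    depends_on:
      - modprobe   # ordering only, not completion
    ports:
      - "2049:2049"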

You can of course combine these two into one container instead of two, but I don't know how universal that is (for example, using a Debian image on a Fedora host, though I suspect that will work).

Note that for the modprobe container you only need the SYS_MODULE capability.

Based on https://dummdida.tumblr.com/post/117157045170/modprobe-in-a-docker-container


Thanks for the awesome image!

Bug with baked exports file

Hello! A bug for you, seen on Docker for Windows/OSX.

/etc/exports was baked into the Dockerfile

root@73347d4a00c3:/# cat /etc/exports
"/share" *(rw,async,no_subtree_check,insecure,all_squash,anonuid=65534,anongid=0)

root@73347d4a00c3:/# /usr/local/bin/entrypoint.sh

==================================================================
      SETTING UP
==================================================================
----> building /etc/exports
----> no valid exports
root@73347d4a00c3:/#
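
A hedged guess at the trigger, echoing the quote-stripping issue above: exports(5) permits quoting a path, but the container's exports parser seems to test the first token as a directory, and "/share" (with the quotes) isn't one. The same line unquoted may parse (untested):

/share *(rw,async,no_subtree_check,insecure,all_squash,anonuid=65534,anongid=0)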

Question - nfs module is not loaded on the Docker host's kernel (try: modprobe nfs)

I'm trying to run this image to create an NFS server on RancherOS. RancherOS is an entirely Docker-based operating system. There is a Linux kernel, but I do not know whether the limitations of a Docker-based system cause a problem here.

I'm running your image on the system itself, in the OS's built-in Docker (not Rancher), for use inside the Rancher platform (an orchestrator).

I'm doing this only for testing and learning.

I am having problems with data persistence on this operating system. It is possible to persist data manually, but not from the Rancher platform to RancherOS. So, since Rancher supports NFS, I thought of that solution (an NFS server) and found your project.

Given this context, I would like to know if you can tell me where my error is or if it is a support problem for the operating system itself.

Cheers

docker run \
  -e NFS_EXPORT_0='/nfs/dgraph 192.168.25.0/24(rw,no_subtree_check)' \
  -v /host/Dgraph:/nfs/dgraph \
  --cap-add SYS_ADMIN \
  --privileged \
  -p 2049:2049 \
  erichough/nfs-server:latest

==================================================================
      SETTING UP
==================================================================
----> building /etc/exports
----> will export /nfs/dgraph 192.168.25.0/24(rw,no_subtree_check)
----> will export 1 filesystem(s)
----> checking for presence of kernel module: nfs
----> nfs module is not loaded on the Docker host's kernel (try: modprobe nfs)
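
Echoing the setup used in the "Cannot mount nfs" report above, a hedged sketch that lets the container load the modules itself is to hand it the host's module tree plus the SYS_MODULE capability (assuming RancherOS exposes /lib/modules like a regular host):

docker run \
  -e NFS_EXPORT_0='/nfs/dgraph 192.168.25.0/24(rw,no_subtree_check)' \
  -v /host/Dgraph:/nfs/dgraph \
  -v /lib/modules:/lib/modules:ro \
  --cap-add SYS_ADMIN \
  --cap-add SYS_MODULE \
  -p 2049:2049 \
  erichough/nfs-server:latest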

docker-nfs-server as a "data/system snapshot engine": feasible?

Is the following use-case feasible? (More-detailed questions at the end of this msg.)

Summary

I am investigating docker-nfs-server primarily to provide a convenient "snapshot" mechanism for my team's NFS servers, with the NFS filesystem purposely inside the NFS server container, as part of the Docker image and not externally mounted from the Docker host or anywhere else.

Purpose, desired features and integrations

  • Convenient and powerful snapshots. We're attempting to enable a means to fully snapshot NFS server content (filesystem files plus complete Docker container state). This seems potentially easier, more powerful, and/or more convenient than snapshotting with ZFS or some other alternative, if only because Docker (maybe?) enables easier-to-manage "full system snapshots".
  • Easy docker-image export. Capture all the above state in a single file as a single export of an image for easier archiving and mobility between docker hosts.
  • Run on a LUKS mount within the docker-nfs-server image. Run the nfs-server process, within the docker-nfs-server image/container, on top of a LUKS-based "filesystem in a file" for data security (keeping the LUKS decryption key separate from the docker-nfs-server image). We realize this may require some custom modifications to docker-nfs-server.

Our current environment's scale is SMALL

Please note: our target network environment and NFS client load are small enough, and the Docker host beefy enough (CPU, memory, etc.), that I am not yet concerned about system performance.

Questions

  1. Is something like this use case part of the original intent for this (docker-nfs-server) project?
  2. Is the LUKS integration feasible (with the NFS-served files and the LUKS mount inside the image/container)?
  3. Does anyone see any additional problem with this (above) application/use case?
  4. Are there any specific constraints or limitations (in this context) for which we should be aware?

NFSD: attempt to initialize ... client tracking ... in a container

I see logs like

Jun  1 08:13:58 localhost kernel: [   84.417179] NFSD: attempt to initialize umh client tracking in a container ignored.
Jun  1 08:13:58 localhost kernel: [   84.417196] NFSD: attempt to initialize legacy client tracking in a container ignored.
Jun  1 08:13:58 localhost kernel: [   84.417197] NFSD: Unable to initialize client recovery tracking! (-22)
Jun  1 08:13:58 localhost kernel: [   84.417198] NFSD: starting 90-second grace period (net f0000233)

You can see these logs on travis here

I'm thinking it's the failure to initialize NFS client tracking that causes the grace period to come into effect.

kick #44

the logs are in the host syslog, not in the container

for ref there is some discussion of similar logs here - https://bugzilla.redhat.com/show_bug.cgi?id=1700098

once the grace period expires, the container comes up and the NFS exports work as expected
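
If the 90-second wait is the only pain point, a hedged mitigation is to shorten the v4 grace and lease windows before rpc.nfsd starts; the proc files below exist on recent kernels, but this is untested in this image:

# after the container has mounted /proc/fs/nfsd, before rpc.nfsd starts
echo 10 > /proc/fs/nfsd/nfsv4gracetime
echo 10 > /proc/fs/nfsd/nfsv4leasetime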

rpc faults && permission denied issues

Hi,

First of all thanks for the image, it is indeed a very good idea containerizing this service.

However, I'm struggling to make it run on my VPS.

Below is my console log, when trying to make it come alive.

root@ourkid /opt/nfs # pwd && ls -lh && cat exports.txt && cat startContainer.sh && ./startContainer.sh
/opt/nfs
total 12K
drw-rw---- 6 root root 4.0K Jun 15 17:11 codeDevelopment
-rw-rw---- 1 root root   48 Jun 16 13:32 exports.txt
-rwxr-xr-x 1 root root  190 Jun 16 13:34 startContainer.sh
/opt/nfs/codeDevelopment *(rw,no_subtree_check)
#!/bin/bash

docker run                                      \
  -v /opt/nfs/exports.txt:/etc/exports:ro       \
  -v /opt/nfs/codeDevelopment:/nfs              \
  --cap-add SYS_ADMIN                           \
  -p 2049:2049                                  \
  erichough/nfs-server:latest

==================================================================
      SETTING UP
==================================================================
----> /etc/exports already exists in the container
----> checking for presence of kernel module: nfs
----> checking for presence of kernel module: nfsd
----> setup complete

==================================================================
      STARTING SERVICES
==================================================================
----> mounting rpc_pipefs onto /var/lib/nfs/rpc_pipefs
mount: rpc_pipefs is write-protected, mounting read-only
mount: cannot mount rpc_pipefs read-only
----> unable to mount rpc_pipefs onto /var/lib/nfs/rpc_pipefs

==================================================================
      TERMINATING ...
==================================================================
----> rpc.svcgssd was not running
----> stopping nfsd
----> WARNING: unable to stop nfsd. if it had started already, check Docker host for lingering [nfsd] processes
----> rpc.idmapd was not running
----> rpc.statd was not running
----> rpc.mountd was not running
----> un-exporting filesystems
exportfs: could not open /proc/fs/nfs/exports for locking: errno 13 (Permission denied)
----> rpcbind was not running
----> nfsd was not mounted on /proc/fs/nfsd
----> rpc_pipefs was not mounted on /var/lib/nfs/rpc_pipefs

==================================================================
      TERMINATED
==================================================================
root@ourkid /opt/nfs # dr ps -a
CONTAINER ID        IMAGE                         COMMAND                  CREATED             STATUS                      PORTS               NAMES
db982c254a00        erichough/nfs-server:latest   "/usr/local/bin/entr…"   11 seconds ago      Exited (0) 11 seconds ago                       adoring_jennings
root@ourkid /opt/nfs #

Am I missing something totally obvious?
I had to execute modprobe nfs && modprobe nfsd before getting stuck at this point.

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04 LTS
Release:        18.04
Codename:       bionic

Thanks a lot for your help in advance!
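
Given the Ubuntu 18.04 host, a hedged guess is AppArmor: Docker's default container profile can deny mounts performed inside the container, which would match the rpc_pipefs "write-protected" failure even with CAP_SYS_ADMIN present. As a quick diagnostic only (not a production recommendation), try the same run unconfined, or load the project's erichough-nfs profile as in the other reports above:

docker run \
  -v /opt/nfs/exports.txt:/etc/exports:ro \
  -v /opt/nfs/codeDevelopment:/nfs \
  --cap-add SYS_ADMIN \
  --security-opt apparmor=unconfined \
  -p 2049:2049 \
  erichough/nfs-server:latest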
