
docker-services's Introduction

Container services - most of them based on the Alpine image to be lightweight. All services are built for a graceful shutdown/restart with the S6 overlay or a small init script.

docker-services's People

Contributors

cron410, mumie-hub

docker-services's Issues

Add support for running a command once the rclone drive is mounted

I recently found I was having poor performance, partly because the filesystem was not cached, and caching takes a long time when done folder by folder; running a single rclone command instead can do it quickly. The service in the link below runs "/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5575 _async=true" to sync the file list quickly after it starts, and I would like to be able to do the same with your Docker image.

https://github.com/animosity22/homescripts/blob/master/systemd/rclone-tv.service
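If the image grew support for this, the same refresh could also be triggered from the host once the mount is up. A minimal sketch, assuming the mount was started with --rc --rc-addr=127.0.0.1:5575 added to MountCommands (the flags themselves are standard rclone; wiring them into this image is the assumption):

docker exec rclone-mount rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5575 _async=true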

sabnzbd not using USER

I have been able to get the OpenVPN portion running; however, the error is specific to the sabnzbd user.

Here is the command to start the docker:
sudo docker run -d --name sabnzbdvpn -v /home/plex/.sabnzbd:/config -v /etc/openvpn:/etc/openvpn -v /etc/localtime:/etc/localtime:ro -v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro --device=/dev/net/tun -p 8800:8800 -e "LOCAL_NETWORK=172.16.4.1/23" -e USER_NAME=plex -e PUID=1001 -e PGID=1001 -e DOWNLOAD_DIR="/home/plex/Downloads/complete" -e INCOMPLETE_DIR="/home/plex/Downloads/incomplete" -e OPENVPN_CONFIG="torguardSF_UDP.conf" --cap-add=NET_ADMIN mumiehub/sabnzbdvpn

Here is the log:
`adding route to local network 172.16.4.1/23 via 172.17.0.1 dev eth0
Error: Invalid prefix for given prefix length.
Setting owner for Folder paths to 1001:1001

Sabnzbd will run as

User name: plex
User uid: 1001
User gid: 1001

OPEN VPN WORKING

STARTING SABNZBD with USER
Startup script SABnzbd completed.
Wed Jul 22 09:28:57 2020 Initialization Sequence Completed
sudo: unknown user: abc
sudo: unable to initialize policy plugin`

The 'abc' user is hard-coded; shouldn't it be overridden by the plex user that is set?
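A quick diagnostic sketch (hypothetical, using the container name from the command above): since /etc/passwd is bind-mounted read-only, both user names can be resolved inside the container to see which one actually exists there:

docker exec sabnzbdvpn sh -c 'id plex; id abc'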

Passing UID and GID

Hello,

Just wondering: is it currently possible to enforce or change the UID and GID without them being environment variables? I'm trying to use this container as a mount point to GSuite for a Nextcloud test, and I cannot run rclone lsd remote: or get it to interact with other containers, since it seems to be running as root.

Thank you! Please let me know if there's a simple edit I could do on a copy of the downloaded container if that's easier.
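One approach that avoids environment variables entirely (described in detail in the "Rclone mount with UID and GID" issue further down) is Docker's own --user flag plus a read-only /etc/passwd mount; a minimal sketch, with IDs and paths illustrative:

docker run -d --name rclone-mount -u 1000:1000 \
    -v /etc/passwd:/etc/passwd:ro \
    --cap-add SYS_ADMIN --device /dev/fuse --security-opt apparmor:unconfined \
    -e RemotePath="gsuite:" -v /root/.config/rclone:/config \
    -v /mnt/gsuite:/mnt/mediaefs:shared mumiehub/rclone-mount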

Image not working with rclone union config

Hi,

The image works fine on my synology nas with a config for a gdrive mount.
But if I try to do the same with a union mount, it doesn't work.

Config:
[Series]
type = union
remotes = /volume1/Allseries/dir1 /volume1/Allseries/dir2 /volume1/Allseries/dir3
Image creation:
sudo docker run -d --name rclone-test --restart=unless-stopped --cap-add SYS_ADMIN --device /dev/fuse --security-opt apparmor:unconfined -e RemotePath="Series:" -e MountCommands="--allow-other --allow-non-empty --dir-cache-time 672h --vfs-cache-max-age 675h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 1G --buffer-size 32M" -e ConfigName="rclone.conf" -v /volume1/Docker/rclone:/config -v /volume1/Allseries/dirs:/mnt/mediaefs:shared mumiehub/rclone-mount

I also tried it with none of the MountCommands; testing it on a Raspberry Pi with a simple rclone mount Series: /mnt/dirs/ --read-only works fine.

The mount (dirs) remains empty.

Image creation that works fine (for gdrive)
sudo docker run -d --name rclone-mount --restart=unless-stopped --cap-add SYS_ADMIN --device /dev/fuse --security-opt apparmor:unconfined -e RemotePath="Rcloud:" -e MountCommands="--allow-other --allow-non-empty --dir-cache-time 672h --vfs-cache-max-age 675h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 1G --buffer-size 32M" -e ConfigName="rclone.conf" -v /volume1/Docker/rclone:/config -v /volume1/shares/rclone:/mnt/mediaefs:shared mumiehub/rclone-mount

Edit: When i bash into the image /mnt/mediaefs is empty as well.
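A diagnostic sketch (container name from the command above; assumes the config is visible at /config/rclone.conf): listing the union remote from inside the container separates a config problem from a mount-propagation problem:

docker exec rclone-test rclone lsd Series: --config /config/rclone.conf

Note that the union remote is built from local paths (/volume1/Allseries/dir1 etc.), and those paths must also exist inside the container; the command above only bind-mounts the target directory, so the union's members may simply be missing from the container's filesystem.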

How to create the config

Hey,

It is my first rclone setup. How do I create an rclone config inside the Docker container?

regards
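A minimal sketch of one way to do it (interactive; assumes the config volume is mounted at /config and the container is named rclone-mount):

docker exec -it rclone-mount rclone config --config /config/rclone.conf

Alternatively, run rclone config on any machine and copy the resulting rclone.conf into the directory you bind-mount to /config.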

High memory usage for rclone container

With this definition:

docker run --init -d \
    --name $name \
    --restart=unless-stopped \
    --cap-add SYS_ADMIN \
    --device /dev/fuse \
    --security-opt apparmor:unconfined \
    -e RemotePath="$name:" \
    -e MountCommands="--allow-other --allow-non-empty --dir-cache-time 48h --poll-interval 2m --buffer-size 256M --cache-dir /cache --vfs-cache-max-age 6h --vfs-cache-mode writes --vfs-cache-max-size 1500 --dir-cache-time 12h --vfs-read-chunk-size 256M" \
    -v /rclone-mounts:/config \
    -v /rclone-mounts/$name-cache:/cache \
    -v /mnt/mediaefs:shared \
    --log-opt max-size=1m \
    mumiehub/rclone-mount

Remote is built as:
remote: > cache: > crypt:
The container is mounting the crypt and the cache has the following properties:

info_age = 1h
db_path = /cache/db
chunk_path = /cache/chunk
chunk_size = 10M
chunk_total_size = 3G
workers = 5

I am seeing very high memory usage.
How can I better control the memory usage of the container without limiting it with a resource definition?
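A sketch of mount flags that typically lower rclone's memory footprint (values illustrative, not prescriptive): --buffer-size is allocated roughly per open file, so reducing it and the read chunk size, and adding --use-mmap so buffer memory can be returned to the OS, are the usual levers:

-e MountCommands="--allow-other --buffer-size 32M --vfs-read-chunk-size 32M --use-mmap --cache-dir /cache --vfs-cache-mode writes"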

Sabnzbd not seeing user or openvpn pass.txt

I ran the following command to start sabnzbd and openvpn. The system currently has non-Docker apps for each. I stopped the sabnzbd service and ran this command:

sudo docker run -d --name sabnzbdvpn -v /home/plex/.sabnzbd:/config -v /etc/openvpn:/etc/openvpn/custom -v /etc/localtime:/etc/localtime -p 8800:8800 -e "LOCAL_NETWORK=172.16.4.1/23" -e PUID=1001 -e PGID=1001 -e USER_NAME="plex" -e DOWNLOAD_DIR="/home/plex/Downloads/complete" -e INCOMPLETE_DIR="/home/plex/Downloads/incomplete" -e OPENVPN_CONFIG="torguardSF_UDP.conf" mumiehub/sabnzbdvpn

I plan on adding the following:
-e OPENVPN_OPTS="--inactive 3600 --ping 10 --ping-exit 60" --restart=always

once it is up and running without the following error:

Error:
`adding route to local network 172.16.4.1/23 via 172.17.0.1 dev eth0
RTNETLINK answers: Operation not permitted
id: ‘plex’: no such user
/etc/openvpn/start.sh: 16: [: Illegal number:
id: ‘plex’: no such user
/etc/openvpn/start.sh: 17: [: Illegal number:
Setting owner for Folder paths to 1001:1001
chown: invalid user: ‘plex:plex’
id: ‘plex’: no such user
id: ‘plex’: no such user

Sabnzbd will run as

User name: plex
User uid:
User gid:

Sun Jul 12 15:41:45 2020 WARNING: cannot stat file '/etc/openvpn/pass.txt': No such file or directory (errno=2)
Options error: --auth-user-pass fails with '/etc/openvpn/pass.txt': No such file or directory (errno=2)
Options error: Please correct these errors.
Use --help for more information.`

plex is my mediaserver user, not my username. Does that matter?

The pass.txt does exist in the /etc/openvpn folder.

Note: I copied the start.sh from the GitHub openvpn directory to /etc/openvpn. Was that the right thing to do?
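One difference from the working command in the "sabnzbd not using USER" issue above: /etc/passwd and /etc/group are not bind-mounted here, which would explain the id: ‘plex’: no such user lines. A sketch of the missing pieces:

-v /etc/passwd:/etc/passwd:ro -v /etc/group:/etc/group:ro

The pass.txt error is likely a volume-target mismatch: this command mounts the host's /etc/openvpn to /etc/openvpn/custom inside the container, while OpenVPN looks for /etc/openvpn/pass.txt.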

Auto-update

I just wanted to drop in and say thanks for creating such a stable and reliable container for rclone. It's been the best so far.

I use Watchtower to auto-stop and update a container when the Docker Hub repo creates a new build. It looks like you already have the container build triggered by something else, most likely the rclone GitHub repo you are pulling the package from.

Rcloud mount issue

Hello, I managed to mount my cloud account with the script here, and I have shared it via Samba using the Docker container below. However, when I try to write a file through Samba, it doesn't write. I'd appreciate it if you could help.

docker run -d --name rclone-mount --restart=unless-stopped --cap-add SYS_ADMIN --device /dev/fuse --security-opt apparmor:unconfined -e RemotePath="onedrive:" -e MountCommands="--allow-other --vfs-cache-mode writes --allow-non-empty" -v /root/rclone:/config -v /mnt/onedrive:/mnt/mediaefs:shared mumiehub/rclone-mount

docker run -d -p 139:139 -p 445:445 -e TZ=Europe/Madrid \
    -v /mnt/onedrive:/share/folder elswork/samba \
    -u "1000:1000:pirate:pirate:1" \
    -s "Dosyalar:/share/folder:rw:pirate"

EncFS remote filesystems

I've forked this with the intention of adding EncFS - my existing rclone mount setup has a significant footprint that is encrypted with EncFS. I don't know enough to understand best practice here: should I be running an EncFS mount inside the container, or should I do that at the host level with the mounted volume from this container? Advice would be appreciated. (I'm referring specifically to your rclone-mount Docker container.)

container should fail if rclone fails

Hi :),

first things first: nice work!

I'd suggest the container should fail if rclone exits with status 1 for the mount, e.g. when no correct config file is supplied. I might look into it myself and open a pull request if you wish.
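A minimal sketch of the idea (hypothetical; the variable names follow the start.sh excerpt quoted in the "Show Output of rclone mount command" issue below, and the real script layout may differ):

/usr/sbin/rclone --config "$ConfigPath" mount "$RemotePath" "$MountPoint" $MountCommands &
wait $! || exit 1    # wait returns rclone's exit status; a nonzero status stops the container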

Thanks again for the nice work and a nice weekend.

Empty mount on host

Hi, I have the following problem: no matter how many times or on which system I try, rclone is not mounted on the host, although it works inside the container. Where is the problem? I am slowly despairing.
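A first thing to rule out (based on the other mount-propagation reports in this list): the mount-point volume needs the :shared propagation flag, otherwise the FUSE mount stays invisible outside the container. A sketch, with the host path illustrative:

-v /mnt/mediaefs:/mnt/mediaefs:shared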

Rclone-mount config location

@Mumie-hub For some unknown reason, when I launch the container, rclone doesn't appear to be looking in the correct config location.

$ docker-compose exec rclone-mount sh
$ rclone config file
Configuration file doesn't exist, but rclone will use this path:
/root/.config/rclone/rclone.conf

I can get it to work by mounting twice: once into the location specified in the docs and once into the aforementioned location. If I do this, everything works perfectly.

---
version: "3"
services:
  rclone-mount:
    image: mumiehub/rclone-mount
    container_name: rclone-mount
    cap_add:
      - SYS_ADMIN
    devices:
      - /dev/fuse
    security_opt:
      - apparmor:unconfined
    environment:
      - RemotePath=gcrypt:
      - MountCommands=--allow-other --allow-non-empty
    volumes:
      - /root/.config/rclone:/root/.config/rclone
      - /root/.config/rclone:/config
      # - /root/.config/rclone/rclone.conf:/config/.rclone.conf # ALSO TRIED THIS
      - /mnt/gdrive:/mnt/mediaefs:shared
    restart: unless-stopped

Am I doing something wrong or is this a bug caused by an upstream change?
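For what it's worth, other issues in this list (e.g. "Something with the config location not working correctly" below) pass a ConfigName environment variable, which the start script appears to combine with the /config volume; setting it explicitly may avoid the double mount. This is an inference from those reports, not from the image's docs:

    environment:
      - ConfigName=rclone.conf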

Show Output of rclone mount command

I'm not overly proficient with bash, and I can't find a way to redirect the stdout of the rclone mount command in your start.sh script so users can see errors and other output of the command. I tried redirecting stderr to stdout with 2>&1, which did not work. Below is the kind of error I am trying to surface in docker logs.

I ran these commands inside the container to find my error (I didn't add a colon to the remote).

/ # /usr/sbin/rclone --config $ConfigPath mount $RemotePath $MountPoint $MountCommands & wait ${!}
Error: unknown command "gcache" for "rclone"
Run 'rclone --help' for usage.
2018/07/26 19:35:26 Fatal error: unknown command "gcache" for "rclone"
[1]+  Done(1)                    /usr/sbin/rclone --config ${ConfigPath} mount ${RemotePath} ${MountPoint} ${MountCommands}
/ # echo "/usr/sbin/rclone --config $ConfigPath mount $RemotePath $MountPoint $MountCommands & wait ${!}"
/usr/sbin/rclone --config  mount gcache /data --allow-other --allow-non-empty --dir-cache-time=160h --cache-chunk-size=10M --cache-info-age=168h --cache-workers=5 --buffer-size=500M --attr-timeout=1s & wait 31
/ # exit

I only see this in docker logs rclone:

==================================================
Mounting gcache: to /data at: 2018.07.26-19:37:02
sending SIGTERM to child pid
Unmounting: fusermount -u -z /data at: 2018.07.26-19:52:09
exiting container now
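A low-effort way to surface more detail without touching start.sh (standard rclone flags): raise the log level in MountCommands. rclone writes its messages to stderr, which should end up in docker logs:

-e MountCommands="--allow-other --allow-non-empty --log-level INFO"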

Cannot enter console

Hi,

When trying to enter the console in Portainer I get the following error:

OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"bash\": executable file not found in $PATH": unknown

It would make troubleshooting issues easier if this was possible.
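Since the images are mostly Alpine-based (see the introduction above), sh is available even though bash is not; a workaround sketch, container name illustrative:

docker exec -it rclone-mount sh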

Thanks

official image of rclone

Your image works very well, but I have doubts about why you have not relied on the official image to mount the remote filesystems.

The official image already has a way to mount the remotes you configure in rclone, and it already supports ARM and x64 architectures. Was this image created for some specific reason or improvement? Does it have some benefit over the official image?

docker hub rclone official

help with docker compose

Good day, and thank you for producing/sharing this component. I've been trying to get it to work for a couple of days, unsuccessfully. This is my compose file:

  rclone:
    container_name: rclone
    image: mumiehub/rclone-mount
    restart: unless-stopped
    privileged: true    
    cap_add:
    - SYS_ADMIN    
    devices:
    - /dev/fuse
    security_opt:
    - apparmor:unconfined    
    volumes:
    - /home/juan/Documents/docker/rclone/.rclone.conf:/config/.rclone.conf 
    - /home/juan/Documents/gdrive:/mnt/mediaefs:shared
    environment:
    - PGID=1001
    - PUID=1000
    - TZ=Asia/Dubai
    - RemotePath="gdrive"
    - MountCommands=--allow-other --allow-non-empty --dir-cache-time 48h --poll-interval 5m --buffer-size 128M    
    - MountPoint="/mnt/mediaefs"

The Docker log shows the following:
Mounting "gdrive" to "/mnt/mediaefs" at: 2019.08.16-06:24:52
but gdrive doesn't load.

I welcome any advice. Thank you.
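Two details in the compose file stand out (an observation in line with the colon-less remote mentioned in the "Show Output of rclone mount command" issue above): in compose, quotes inside environment values are passed literally, and the remote name needs a trailing colon. A sketch of corrected lines:

    environment:
    - RemotePath=gdrive:
    - MountPoint=/mnt/mediaefs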

Host name: mount: bad address 'example.local'

Hi, I have an issue when I try to connect to my SMB server. I used the argument -e SERVERPATH="//example.local/Root/Docker" in the docker command.
I defined a host name in my router, but I get the following error:
mount: bad address 'example.local'
mount: mounting //example.local/Root/Docker on /mnt/smb failed: Resource temporarily unavailable

When I replace example.local with the corresponding IP address, it works again.
I can also connect to the Samba server by host name from Windows.
Apart from that, your image works great and is very useful, thanks!
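.local names are usually resolved via mDNS, which a container typically cannot do. A workaround sketch that pins the name without changing the image (the IP is illustrative; add the rest of the usual smb-mount options):

docker run --add-host example.local:192.168.1.10 -e SERVERPATH="//example.local/Root/Docker" mumiehub/smb-mount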

Something with the config location not working correctly

Possibly related to #39
The container does not find the configuration file, although the folder is mounted and it recognizes the correct name.
Exact Config:
The configuration is on the host system under

/home/torben/Docker/NextcloudForJellyfin/config/rclone.conf

The docker-compose.yaml:

version: "3"
services:
  rclone-mount:
    image: mumiehub/rclone-mount
    container_name: NextcloudForJellyfin
    restart: unless-stopped
    cap_add:
      - SYS_ADMIN
    devices:
      - /dev/fuse
    security_opt:
      - apparmor:unconfined
    volumes:
      - /home/torben/Docker/NextcloudForJellyfin/config:/config
      - /home/torben/Docker/Jellyfin/Mediathek:/mediaefs:shared
    environment:
      - ConfigName=rclone.conf
      - RemotePath=NextcloudForJellyfin:Mediathek
      - MountCommands=--allow-other --buffer-size 256M --vfs-cache-mode full --allow-non-empty

I noticed the problem when I ran docker compose up without the -d flag:

2023/03/03 14:34:57 NOTICE: Config file "/config/rclone.conf" not found - using defaults

The full log:

NextcloudForJellyfin  | [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
NextcloudForJellyfin  | [s6-init] ensuring user provided files have correct perms...exited 0.
NextcloudForJellyfin  | [fix-attrs.d] applying ownership & permissions fixes...
NextcloudForJellyfin  | [fix-attrs.d] done.
NextcloudForJellyfin  | [cont-init.d] executing container initialization scripts...
NextcloudForJellyfin  | [cont-init.d] 20-init: executing... 
NextcloudForJellyfin  | [cont-init.d] 20-init: installing system updates
NextcloudForJellyfin  | fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/main/x86_64/APKINDEX.tar.gz
NextcloudForJellyfin  | fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/community/x86_64/APKINDEX.tar.gz
NextcloudForJellyfin  | OK: 8 MiB in 20 packages
NextcloudForJellyfin  | [cont-init.d] 20-init: exited 0.
NextcloudForJellyfin  | [cont-init.d] 30-check: executing... 
NextcloudForJellyfin  | [cont-init.d] 30-check: MountPoint /mnt/mediaefs is ready
NextcloudForJellyfin  | [cont-init.d] 30-check: exited 0.
NextcloudForJellyfin  | [cont-init.d] done.
NextcloudForJellyfin  | [services.d] starting services
NextcloudForJellyfin  | [services.d] [rclone-mount]-run: starting rclone mount 2023.03.03-14:34:57
NextcloudForJellyfin  | [services.d] done.
NextcloudForJellyfin  | 2023/03/03 14:34:57 NOTICE: Config file "/config/rclone.conf" not found - using defaults
NextcloudForJellyfin  | 2023/03/03 14:34:57 Failed to create file system for "NextcloudForJellyfin:Mediathek": didn't find section in config file
NextcloudForJellyfin  | [services.d] [rclone-mount]-finish: rclone process not present, restarting container[ERROR]
NextcloudForJellyfin  | [services.d] [rclone-mount]-finish: waiting for rclone shutdown
NextcloudForJellyfin  | [cont-finish.d] executing container finish scripts...
NextcloudForJellyfin  | [cont-finish.d] 20-unmount: executing... 
NextcloudForJellyfin  | [cont-finish.d] 20-unmount: waiting for shutdown of all services
NextcloudForJellyfin  | umount: can't unmount /mnt/mediaefs: Invalid argument
NextcloudForJellyfin  | [cont-finish.d] 20-unmount: successful unmounted
NextcloudForJellyfin  | [cont-finish.d] 20-unmount: exited 0.
NextcloudForJellyfin  | [cont-finish.d] done.
NextcloudForJellyfin  | [s6-finish] waiting for services.
NextcloudForJellyfin  | [s6-finish] sending all processes the TERM signal.
NextcloudForJellyfin  | [s6-finish] sending all processes the KILL signal and exiting.
NextcloudForJellyfin exited with code 0
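A diagnostic sketch (assumes the container stays up long enough to exec into; names from the compose file above): checking what actually landed in /config shows whether the bind mount or the file name is at fault:

docker exec NextcloudForJellyfin ls -l /config

Note also that the compose file maps the media volume to /mediaefs while the check script reports MountPoint /mnt/mediaefs; that mismatch may be worth ruling out as well.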

Rclone mount with UID and GID

This is actually not an issue but a finding that might be nice to write in the documentation.

I managed to use rclone-mount with a specific user id and group id via docker. It's a bit of a hack but it works.

docker run \
    ... default options like environment variables and volume mounts
    -e MountCommands="--cache-dir /rclone-cache ... a lot more mount commands" \
    -v /etc/passwd:/etc/passwd:ro \
    -v /home/somebody/rclone-cache:/rclone-cache \
    -u $(id -u):$(id -g) \
     mumiehub/rclone-mount

The user id and group id are set via Docker's --user/-u argument. I used $(id -u) and $(id -g) to get the user id and group id of the logged-in user; any other id can also be provided. This user most likely doesn't exist inside the container, which means it has only an id and not a name. Rclone/FUSE needs a username in order to function properly:

static const char *get_user_name(void)
{
	struct passwd *pw = getpwuid(getuid());
	if (pw != NULL && pw->pw_name != NULL)
		return pw->pw_name;
	else {
		fprintf(stderr, "%s: could not determine username\n", progname);
		return NULL;
	}
}

By mounting the local /etc/passwd file into the container (as read only) the user name becomes available.

The second problem is that rclone is no longer able to write cache to the default cache directory, because it has insufficient permissions. By explicitly setting --cache-dir /rclone-cache via the MountCommands environment variable, the directory can be changed. Then a mount for the cache directory can be created so the specified user can write there.

This covers the TODO: launch with specific USER_ID.

rclone-mount containers always growing

Hi,
After a while, especially when transferring big files, the container grows and the disk space is never released.
This caused the container to eat up all the disk space on the system drive (Docker), with several issues following.
It seems unrelated to the rclone cache: even with a cache dir set up, the growth is steady.
Recreating the container wipes everything out (obviously).

Cheers
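A mitigation sketch (an assumption, following the cache pattern in the "High memory usage" issue above): point --cache-dir at a bind-mounted host directory so whatever rclone writes lands outside the container's writable layer, and cap the cache size; paths and the cap are illustrative:

-e MountCommands="--allow-other --cache-dir /cache --vfs-cache-mode writes --vfs-cache-max-size 10G" \
-v /srv/rclone-cache:/cache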

Armhf

Hi, does the image have any support for armhf devices (e.g. the Raspberry Pi)?
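One way to check from any machine (standard Docker CLI; on older versions docker manifest requires the experimental CLI features to be enabled):

docker manifest inspect mumiehub/rclone-mount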

Mounting multiple drives

Hi!

I'm currently using multiple drives (including multiple shared GDrives), and it would be good to be able to mount all of them inside Docker to make them available.
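A workaround sketch with the image as it stands (one container per remote; names and host paths illustrative):

docker run -d --name rclone-drive1 --cap-add SYS_ADMIN --device /dev/fuse \
    --security-opt apparmor:unconfined -e RemotePath="drive1:" \
    -v /root/.config/rclone:/config -v /mnt/drive1:/mnt/mediaefs:shared mumiehub/rclone-mount

docker run -d --name rclone-drive2 --cap-add SYS_ADMIN --device /dev/fuse \
    --security-opt apparmor:unconfined -e RemotePath="drive2:" \
    -v /root/.config/rclone:/config -v /mnt/drive2:/mnt/mediaefs:shared mumiehub/rclone-mount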

Samba Mount Issue

Hello,

I'm trying to share a folder, which I mounted with rclone, over Samba on the network. However, I keep encountering errors and haven't been able to figure out where I'm going wrong. I would appreciate your assistance.

"docker run -d --name smb-mount
--restart=unless-stopped
--cap-add SYS_ADMIN
--cap-add DAC_READ_SEARCH
--security-opt apparmor:unconfined
-e SERVERPATH="//192.168.2.114/onedrive"
-e MOUNTOPTIONS="vers=3.1.1,uid=1000,gid=1000,rw,username=pi,password=pi"
-v /mnt/onedrive:/mnt/smb:shared
mumiehub/smb-mount"


filebot.conf file always replaced at container startup

Hello,

First of all, thanks for this amazing work! This container corresponds exactly to my need.

I find that the filebot.conf file is always replaced at container startup. The start.sh script refers to a wrong path when checking for the conf file (line 8):

if [ ! -f $CONFIG_FILE/filebot.conf ]

It would be better like this:

if [ ! -f $CONFIG_DIR/filebot.conf ]

or:

if [ ! -f $CONFIG_FILE ]

Regards
Flo313

Unable to share mounted folder using NFS

I mounted my Gdrive on my Synology NAS using the following command:

sudo docker run -d --name rclone-mount --restart=unless-stopped --cap-add SYS_ADMIN --device /dev/fuse --security-opt apparmor:unconfined -e RemotePath="gdrive_psu_crypt:" -e MountCommands="--allow-other --allow-non-empty --dir-cache-time 24h --vfs-cache-max-age 40h --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G --buffer-size 32M" -e ConfigName="rclone.conf" -v /volume1/Misc/Downloads:/config -v /volume1/Media/gdrivepsucrypt:/mnt/mediaefs:shared mumiehub/rclone-mount

On my NAS, I can see the files in the gdrivepsucrypt folder. However, I cannot see the contents of the folder from any other device on my network. I have shared /volume1/Media/ and I can see the contents of the other folders inside it; only gdrivepsucrypt is empty.

The only thing I can point out is that the owner of the folder changes to root after mounting, while the other folders in the Media directory have admin as the owner.

Any pointers will be highly appreciated.
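A generic NFS consideration, not specific to this image (an assumption to verify): NFS will not descend into a nested FUSE mount unless the export is allowed to cross mount points, so the export may need the crossmnt option, or the mount point exported separately. On a plain Linux exports file that would look like:

/volume1/Media  192.168.0.0/24(rw,crossmnt)    # illustrative /etc/exports line; Synology's UI may expose this differently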

rclone-mount image contains unnecessary files which take up too much space

In the Dockerfile, the whole rclone folder is copied from the builder to /usr/local/sbin of the final image. This folder contains many unnecessary files, including a 170+ MB .git folder. These unnecessary files occupy over two thirds of the size of the image. I suggest copying only the binary file, as in this pull request: #63

Besides, the compiled binary is much larger than the pre-compiled binary from https://rclone.org/downloads/. Would it be a good idea to download it directly rather than compiling it again?
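A sketch of the suggested Dockerfile change (the stage name and source path are assumptions about the build, not taken from the actual Dockerfile):

COPY --from=builder /go/bin/rclone /usr/local/sbin/rclone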

ERROR: for rclone-mount Cannot start service rclone-mount: path /host_mnt/Users/jandrop/mnt/gdrive is mounted on /host_mnt but it is not a shared mount

I'm getting this error when I start the Docker image. If I remove the :shared from the volume, the service starts but does not share the content with the external volume.

I don't know if this is an issue on my side or something in your Docker image.

ERROR: for rclone-mount  Cannot start service rclone-mount: path /host_mnt/Users/jandrop/mnt/gdrive is mounted on /host_mnt but it is not a shared mount

This is my docker compose:

version: "3"
services: 
  rclone-mount:
    image: mumiehub/rclone-mount
    restart: unless-stopped
    ports:
    - '80:80'
    cap_add:
    - SYS_ADMIN
    security_opt: 
    - 'apparmor:unconfined'
    volumes: 
    - '/var/run/docker.sock:/tmp/docker.sock:ro'
    - '/Users/myUser/developer/docker/rclone/.config/rclone:/config'
    - '/Users/myUser/mnt/gdrive:/mnt/mediaefs:shared'
    logging: 
      options: 
        max-size: 1g
    container_name: rclone-mount
    devices:
    - /dev/fuse
    environment: 
    - 'RemotePath=gdrive:'
    - 'MountCommands=--allow-other --allow-non-empty --buffer-size 256M'
    - 'ConfigName=rclone.conf'

Shared folder is not updated outside of container

I had been using the container perfectly (it works really well), but suddenly, without having changed anything, the content of the mounted drive (GDrive) is no longer updated.

If I access the drive from other containers or from the system (Lubuntu 20.04), I don't see the new files that were added (via the web browser).
But if I enter the rclone container and access the drive from the console, I do see the new files.

I looked into refreshing the cache in case that solved something, but I did not know how to do it.
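Two standard rclone options worth trying (a sketch; both flags already appear in other issues here, values illustrative): a shorter directory-cache lifetime plus change polling makes the mount pick up remote changes sooner:

-e MountCommands="--allow-other --dir-cache-time 5m --poll-interval 1m"

Alternatively, a one-off refresh can be forced through the rc interface if the mount was started with --rc, as described in the first issue in this list.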

Other Docker Container cannot see content of mounted volume

Hi,
I have been trying for two days to get the contents of my Google Drive into a Docker volume that is shared between the google-drive-cloud container and my story-worker.

The sync in the google-drive-cloud container works fine and I can see the files. However, the volume in the story-worker stays empty.

Any idea how this can be solved? Thanks a lot in advance.

services:
  google-drive-cloud:
    image: mumiehub/rclone-mount
    container_name: google-drive-cloud
    restart: always
    privileged: true
    devices:
      - /dev/fuse
    security_opt:
      - apparmor:unconfined
    cap_add:
      - SYS_ADMIN
    volumes:
      - ./docker/google-drive-cloud:/config
      - google-drive:/mnt/gsuite
    environment:
      - "MountPoint=/mnt/gsuite"
      - "RemotePath=gsuite:"
      - "MountCommands=--allow-other --allow-non-empty"
  story-worker:
    container_name: story-worker
    build: ./docker/story-worker
    volumes:
      - google-drive:/data
volumes:
  google-drive:
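One possible cause (an assumption based on the :shared pattern used throughout these issues): a named volume captures the directory as it is when the container starts, and a FUSE mount created afterwards does not propagate through it. A bind mount with shared propagation on the producer and slave propagation on the consumer is the pattern used elsewhere in this list, e.g.:

  google-drive-cloud:
    volumes:
      - /mnt/gsuite:/mnt/gsuite:shared
  story-worker:
    volumes:
      - /mnt/gsuite:/data:rslave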

mkdir: cannot create directory

Hi,

Every time I start the sabnzbd container I get this:

sabnzbd       | mkdir: cannot create directory ‘/home/abc’: File exists
sabnzbd       | mkdir: cannot create directory ‘/config’: File exists
sabnzbd       | mkdir: cannot create directory ‘"/mnt/downloads/sabnzbd"’: No such file or directory
sabnzbd       | mkdir: cannot create directory ‘"/mnt/incomplete/sabnzbd"’: No such file or directory
sabnzbd       | Setting owner for Folder paths to 1000:1000
sabnzbd       | chown: cannot access '"/mnt/downloads/sabnzbd"': No such file or directory
sabnzbd       | chown: cannot access '"/mnt/incomplete/sabnzbd"': No such file or directory

I think the problem would be solved if you used mkdir -p?
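One observation from the log (an assumption, not a confirmed fix): the paths contain literal double quotes (‘"/mnt/downloads/sabnzbd"’), which usually means the -e values were passed with quotes embedded in the value. Passing them unquoted may help regardless of mkdir -p:

-e DOWNLOAD_DIR=/mnt/downloads/sabnzbd -e INCOMPLETE_DIR=/mnt/incomplete/sabnzbd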

Container states that rclone process doesn't exist, keeps restarting

I've used this container for months with no issues. It was still working a few days ago (I believe last week), but I noticed today that it no longer works. When I check the logs, it repeats this error message:

[services.d] [rclone-mount]-finish: rclone process not present, restarting container[ERROR]

The container then exits and restarts, and it does this continually; nothing new happens. I've tried recreating the container, as well as deleting the image and downloading it again. Those steps didn't work, and the container is still broken. Any ideas on how to fix the issue?

Multiple remote mounts with single container.

Hi @Mumie-hub.

Just wondering, is it possible to mount multiple remotes with a single container instance? If not, perhaps redefining RemotePath and MountPoint as arrays, and then going one by one through the mounting procedure, could be an easy fix. Being forced to run multiple containers to serve multiple cloud services feels like a bit of a waste of resources. Great implementation otherwise. Thanks.
