buttervolume's Introduction

What will Buttervolume allow you to do?

  • Quickly recover recent data after an exploit or failure of your web sites or applications
  • Quickly rollback your data to a previous version after a failed upgrade
  • Implement automatic upgrade of your applications without fear
  • Keep a history of your data
  • Make many backups without consuming more disk space than needed
  • Build a resilient hosting cluster with data replication
  • Quickly move your applications between nodes
  • Create preconfigured or templated applications to deploy in seconds

What can Buttervolume do?

  • Snapshot your Docker volumes
  • Restore a snapshot to its original volume or under a new volume
  • List and remove existing snapshots of your volumes
  • Clone your Docker volumes
  • Replicate or Sync your volumes to another host
  • Run periodic snapshots, sync or replication of your volumes
  • Remove your old snapshots periodically
  • Pause or resume the periodic jobs, either individually or globally

How does it work?

Buttervolume is a Docker Volume Plugin that stores each Docker volume as a BTRFS subvolume.

BTRFS is a next-generation copy-on-write filesystem with subvolume and snapshot support. A BTRFS subvolume can be seen as an independent file namespace that can live in a directory and can be mounted as a filesystem and snapshotted individually.

On the other hand, Docker volumes are commonly used to store persistent data of stateful containers, such as a MySQL/PostgreSQL database or an upload directory of a CMS. By default, Docker volumes are just local directories in the host filesystem. A number of Volume plugins already exist for various storage backends, including distributed filesystems, but small clusters often can't afford to deploy a distributed filesystem.

We believe BTRFS subvolumes are a powerful and lightweight storage solution for Docker volumes, allowing fast and easy replication (and backup) across several nodes of a small cluster.

Make sure the directory /var/lib/buttervolume/ lives on a BTRFS filesystem. It can be a BTRFS mountpoint or a BTRFS subvolume or both.

You should also create the directories for the config and ssh on the host:

sudo mkdir /var/lib/buttervolume
sudo mkdir /var/lib/buttervolume/config
sudo mkdir /var/lib/buttervolume/ssh
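You can double-check that the directory actually ends up on BTRFS. A small hedged sketch using stat (the check_btrfs helper is illustrative, not part of buttervolume):

```shell
# Print the filesystem type backing a directory and warn when it is not btrfs.
# (stat -f reports the type of the filesystem containing the path)
check_btrfs() {
  local fstype
  fstype=$(stat -f -c %T "$1") || return 1
  if [ "$fstype" = "btrfs" ]; then
    echo "ok: $1 is on btrfs"
  else
    echo "warning: $1 is on $fstype, not btrfs" >&2
  fi
}

check_btrfs /var/lib/buttervolume || true   # the path must exist first
```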

If you want to be a contributor, read this chapter. Otherwise jump to the next section.

You first need to create a root filesystem for the plugin, using the provided Dockerfile:

git clone https://github.com/anybox/buttervolume
cd buttervolume
./build.sh

By default the plugin is built for the latest commit (HEAD). You can build another version by specifying it like this:

./build.sh 3.7

At this point, you can set the SSH_PORT option for the plugin by running:

docker plugin set anybox/buttervolume SSH_PORT=1122

Note that this option is only relevant if you use the replication feature between two nodes.

Now you can enable the plugin, which should start buttervolume in the plugin container:

docker plugin enable anybox/buttervolume:HEAD

You can check it is responding by running a buttervolume command:

export RUNCROOT=/run/docker/runtime-runc/plugins.moby/ # or /run/docker/plugins/runtime-root/plugins.moby/
alias drunc="sudo runc --root $RUNCROOT"
alias buttervolume="drunc exec -t $(drunc list|tail -n+2|awk '{print $1}') buttervolume"
buttervolume scheduled

Increase the log level by writing a /var/lib/buttervolume/config/config.ini file with:

[DEFAULT]
LOGLEVEL = DEBUG

Then check the logs with:

sudo journalctl -f -u docker.service

You can also locally install and run the plugin in the foreground with:

python3 -m venv venv
./venv/bin/python setup.py develop
sudo ./venv/bin/buttervolume run

Then you can use the buttervolume CLI that was installed in developer mode in the venv:

./venv/bin/buttervolume --version

If the plugin is already pushed to the image repository, you can install it with:

docker plugin install anybox/buttervolume

Check it is running:

docker plugin ls

Find your runc root, then define useful aliases:

export RUNCROOT=/run/docker/runtime-runc/plugins.moby/ # or /run/docker/plugins/runtime-root/plugins.moby/
alias drunc="sudo runc --root $RUNCROOT"
alias buttervolume="drunc exec -t $(drunc list|tail -n+2|awk '{print $1}') buttervolume"

And try a buttervolume command:

buttervolume scheduled

Or create a volume with the driver. Note that the name of the driver is the name of the plugin:

docker volume create -d anybox/buttervolume:latest myvolume

Note that instead of using aliases, you can also define functions that you can put in your .bash_profile or .bash_aliases:

function drunc () {
  RUNCROOT=/run/docker/runtime-runc/plugins.moby/ # or /run/docker/plugins/runtime-root/plugins.moby/
  sudo runc --root $RUNCROOT $@
}
function buttervolume () {
  drunc exec -t $(docker plugin ls --no-trunc  | grep 'anybox/buttervolume:latest' |  awk '{print $1}') buttervolume $@
}

You must force disable it before reinstalling it (as explained in the docker documentation):

docker plugin disable -f anybox/buttervolume
docker plugin rm -f anybox/buttervolume
docker plugin install anybox/buttervolume

You can configure the following variables:

  • DRIVERNAME: the full name of the driver (with the tag)
  • VOLUMES_PATH: the path where the BTRFS volumes are located
  • SNAPSHOTS_PATH: the path where the BTRFS snapshots are located
  • TEST_REMOTE_PATH: the path during unit tests where the remote BTRFS snapshots are located
  • SCHEDULE: the path of the scheduler configuration
  • RUNPATH: the path of the docker run directory (/run/docker)
  • SOCKET: the path of the unix socket where buttervolume listens
  • TIMER: the number of seconds between two runs of the scheduler jobs
  • DTFORMAT: the format of the datetime in the logs
  • LOGLEVEL: the Python log level (INFO, DEBUG, etc.)

The configuration can be done in this order of priority:

  1. from an environment variable prefixed with BUTTERVOLUME_ (ex: BUTTERVOLUME_TIMER=120)
  2. from the [DEFAULT] section of the /etc/buttervolume/config.ini file inside the container or /var/lib/buttervolume/config/config.ini on the host

Example of config.ini file:

[DEFAULT]
TIMER = 120

If none of this is configured, the following default values are used:

  • DRIVERNAME = anybox/buttervolume:latest
  • VOLUMES_PATH = /var/lib/buttervolume/volumes/
  • SNAPSHOTS_PATH = /var/lib/buttervolume/snapshots/
  • TEST_REMOTE_PATH = /var/lib/buttervolume/received/
  • SCHEDULE = /etc/buttervolume/schedule.csv
  • RUNPATH = /run/docker
  • SOCKET = $RUNPATH/plugins/btrfs.sock # only if run manually
  • TIMER = 60
  • DTFORMAT = %Y-%m-%dT%H:%M:%S.%f
  • LOGLEVEL = INFO

The normal way to run it is as a new-style Docker Plugin as described above in the "Install and run" section, which will start it automatically. This creates a /run/docker/plugins/<uuid>/btrfs.sock file to be used by the Docker daemon, where <uuid> is the unique identifier of the runc/OCI container running the plugin. This means you can probably run several versions of the plugin simultaneously, but this is currently not recommended unless you keep in mind that the volumes and snapshots of the different versions live in the same place. Otherwise, configure a different path for the volumes and snapshots of each version using the config.ini file.
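If you do run two versions side by side, the VOLUMES_PATH and SNAPSHOTS_PATH variables described above can isolate their data. A minimal config.ini sketch (the /v2/ paths are purely illustrative):

```ini
[DEFAULT]
VOLUMES_PATH = /var/lib/buttervolume/v2/volumes/
SNAPSHOTS_PATH = /var/lib/buttervolume/v2/snapshots/
```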

Then the name of the volume driver is the name of the plugin:

docker volume create -d anybox/buttervolume:latest myvolume

or:

docker create --volume-driver=anybox/buttervolume:latest

When creating a volume, you can choose to disable copy-on-write on a per-volume basis. Just use the -o or --opt option as defined in the Docker documentation:

docker volume create -d anybox/buttervolume -o copyonwrite=false myvolume

If you installed it locally as a Python distribution, you can also start it manually with:

sudo buttervolume run

In this case it will create a unix socket in /run/docker/plugins/btrfs.sock for use by Docker with the legacy plugin system. Then the name of the volume driver is the name of the socket file:

docker volume create -d btrfs myvolume

or:

docker create --volume-driver=btrfs

When started, the plugin will also start its own scheduler to run periodic jobs (such as a snapshot, replication, purge or synchronization).

Once the plugin is running, whenever you create a container you can specify the volume driver with docker create --volume-driver=anybox/buttervolume --name <name> <image>. You can also manually create a BTRFS volume with docker volume create -d anybox/buttervolume. It also works with docker-compose, by specifying the anybox/buttervolume driver in the volumes section of the compose file.
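As an illustration of the compose usage mentioned above, here is a minimal docker-compose sketch; the service name, image, and volume name are hypothetical:

```yaml
version: "3"
services:
  db:
    image: postgres:15
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
    # the driver name is the name of the plugin
    driver: anybox/buttervolume:latest
```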

When you delete the volume with docker rm -v <container> or docker volume rm <volume>, the BTRFS subvolume is deleted. If you snapshotted the volume elsewhere in the meantime, the snapshots won't be deleted.

When buttervolume is installed, it provides a command line tool buttervolume, with the following subcommands:

run                 Run the plugin in foreground
snapshot            Snapshot a volume
snapshots           List snapshots
schedule            Schedule, unschedule, pause or resume a periodic snapshot, replication, synchronization or purge
scheduled           List, pause or resume all the scheduled actions
restore             Restore a snapshot (optionally to a different volume)
clone               Clone a volume as new volume
send                Send a snapshot to another host
sync                Synchronise a volume from a remote host volume
rm                  Delete a snapshot
purge               Purge old snapshots using a purge pattern

You can create a read-only snapshot of the volume with:

buttervolume snapshot <volume>

The volumes are currently expected to live in /var/lib/buttervolume/volumes and the snapshot will be created in /var/lib/buttervolume/snapshots, by appending the datetime to the name of the volume, separated with @.
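Since a snapshot name is just the volume name and a datetime joined by @, splitting one back apart is straightforward. A plain-shell sketch (the snapshot name is a made-up example):

```shell
# A snapshot name is "<volume>@<datetime>": split it at the first '@'.
snap='myvolume@2024-01-02T03:04:05.123456'   # made-up example name
volume="${snap%%@*}"   # everything before the first '@'
stamp="${snap#*@}"     # everything after the first '@'
echo "volume=$volume"  # volume=myvolume
echo "stamp=$stamp"    # stamp=2024-01-02T03:04:05.123456
```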

You can list all the snapshots:

buttervolume snapshots

or just the snapshots corresponding to a volume with:

buttervolume snapshots <volume>

<volume> is the name of the volume, not the full path. It is expected to live in /var/lib/buttervolume/volumes.

You can restore a snapshot as a volume. The current volume will first be snapshotted, deleted, then replaced with the snapshot. If you provide a volume name instead of a snapshot, the latest snapshot is restored. So no data is lost if you do something wrong. Please take care of stopping the container before restoring a snapshot:

buttervolume restore <snapshot>

<snapshot> is the name of the snapshot, not the full path. It is expected to live in /var/lib/buttervolume/snapshots.

By default, the volume name corresponds to the volume the snapshot was created from. But you can optionally restore the snapshot to a different volume name by adding the target as the second argument:

buttervolume restore <snapshot> <volume>

You can clone a volume as a new volume. The current volume will be cloned to the new volume name given as a parameter. Please take care of stopping the container before cloning a volume:

buttervolume clone <volume> <new_volume>

<volume> is the name of the volume to be cloned, not the full path. It is expected to live in /var/lib/buttervolume/volumes. <new_volume> is the name of the new volume to be created as a clone of the previous one, not the full path. It is expected to be created in /var/lib/buttervolume/volumes.

You can delete a snapshot with:

buttervolume rm <snapshot>

<snapshot> is the name of the snapshot, not the full path. It is expected to live in /var/lib/buttervolume/snapshots.

You can incrementally send snapshots to another host, so that data is replicated to several machines, allowing you to quickly move a stateful docker container to another host. The first snapshot is sent as a whole; each subsequent snapshot only sends the difference from the previous one. This allows replicating snapshots very often without consuming a lot of bandwidth or disk space:

buttervolume send <host> <snapshot>

<snapshot> is the name of the snapshot, not the full path. It is expected to live in /var/lib/buttervolume/snapshots and is replicated to the same path on the remote host.

<host> is the hostname or IP address of the remote host. The snapshot is currently sent using BTRFS send/receive through ssh, with an ssh server directly included in the plugin. This requires that ssh keys be present and already authorized on the target host (under /var/lib/buttervolume/ssh), and that the StrictHostKeyChecking no option be enabled in /var/lib/buttervolume/ssh/config on the local host.
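Putting these requirements together, the ssh client config on the local host could look like this sketch (the Host * pattern is illustrative; 1122 is the plugin's default SSH_PORT mentioned below):

```
# /var/lib/buttervolume/ssh/config
Host *
    StrictHostKeyChecking no
    Port 1122
```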

Please note you have to restart your docker daemon each time you change the ssh configuration.

The default SSH_PORT of the ssh server included in the plugin is 1122. You can change it with docker plugin set anybox/buttervolume SSH_PORT=<PORT> before enabling the plugin.

You can pull data from a remote volume: if a volume with the same name exists on the remote host, its new and more recent data will replace the data in the local volume. Before running the rsync command, a snapshot is taken on the local machine so you can recover if needed:

buttervolume sync <volume> <host1> [<host2>][...]

The intent is to synchronize a volume between multiple hosts while containers are running, so you should schedule that action on each node, pulling from all the remote hosts.

Note

As we are pulling data from multiple hosts, data is never removed; consider removing the scheduled actions before removing data on any host.

Warning

Make sure your application is able to handle such synchronization.

You can purge old snapshots corresponding to the specified volume, using a retention pattern:

buttervolume purge <pattern> <volume>

If you're unsure whether your retention pattern is correct, you can run the purge with the --dryrun option, to inspect which snapshots would be deleted, without deleting them:

buttervolume purge --dryrun <pattern> <volume>

<volume> is the name of the volume, not the full path. It is expected to live in /var/lib/buttervolume/volumes.

<pattern> is the snapshot retention pattern. It is a semicolon-separated list of time length specifiers with a unit. Units can be m for minutes, h for hours, d for days, w for weeks, y for years. The pattern should have at least 2 items.

Here are a few examples of retention patterns:

  • 4h:1d:2w:2y
    Keep all snapshots in the last four hours, then keep only one snapshot every four hours during the first day, then one snapshot per day during the first two weeks, then one snapshot every two weeks during the first two years, then delete everything after two years.
  • 4h:1w
    keep all snapshots during the last four hours, then one snapshot every four hours during the first week, then delete older snapshots.
  • 2h:2h
    keep all snapshots during the last two hours, then delete older snapshots.
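The unit arithmetic behind such patterns can be sketched in plain shell; this is an illustration of the m/h/d/w/y units described above, not buttervolume's actual code:

```shell
# Convert one retention-pattern item like "4h" or "2w" to minutes.
to_minutes() {
  local n="${1%?}"          # numeric prefix: strip the last character
  local u="${1#"${1%?}"}"   # unit suffix: the last character
  case "$u" in
    m) echo "$n" ;;
    h) echo $((n * 60)) ;;
    d) echo $((n * 60 * 24)) ;;
    w) echo $((n * 60 * 24 * 7)) ;;
    y) echo $((n * 60 * 24 * 365)) ;;
    *) echo "unknown unit: $u" >&2; return 1 ;;
  esac
}

to_minutes 4h   # 240
to_minutes 2w   # 20160
```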

You can schedule, pause or resume a periodic job, such as a snapshot, a replication, a synchronization or a purge. The schedule itself is stored in /etc/buttervolume/schedule.csv.

Schedule a snapshot of a volume every 60 minutes:

buttervolume schedule snapshot 60 <volume>

Pause this schedule:

buttervolume schedule snapshot pause <volume>

Resume this schedule:

buttervolume schedule snapshot resume <volume>

Remove this schedule by specifying a timer of 0 min (or delete):

buttervolume schedule snapshot 0 <volume>

Schedule a replication of volume foovolume to remote_host:

buttervolume schedule replicate:remote_host 3600 foovolume

Remove the same schedule:

buttervolume schedule replicate:remote_host 0 foovolume

Schedule a purge every hour of the snapshots of volume foovolume, but keep all the snapshots in the last 4 hours, then only one snapshot every 4 hours during the first week, then one snapshot every week during one year, then delete all snapshots after one year:

buttervolume schedule purge:4h:1w:1y 60 foovolume

Remove the same schedule:

buttervolume schedule purge:4h:1w:1y 0 foovolume

Using the right combination of snapshot schedule timer, purge schedule timer and purge retention pattern, you can create your own backup strategy, from simple to elaborate. A common one is the following:

buttervolume schedule snapshot 1440 <volume>
buttervolume schedule purge:1d:4w:1y 1440 <volume>

It should create a snapshot every day, then purge snapshots every day while keeping all snapshots from the last 24 hours, then one snapshot per day during one month, then one snapshot per month during one year.

Schedule a synchronization of volume foovolume from remote_host1 and remote_host2:

buttervolume schedule synchronize:remote_host1,remote_host2 60 foovolume

Remove the same schedule:

buttervolume schedule synchronize:remote_host1,remote_host2 0 foovolume

You can list all the scheduled jobs with:

buttervolume scheduled

or:

buttervolume scheduled list

It will display the schedule in the same format used for adding the schedule, which is convenient to remove an existing schedule or add a similar one.

Pause all the scheduled jobs:

buttervolume scheduled pause

Resume all the scheduled jobs:

buttervolume scheduled resume

The global job pause/resume feature is implemented separately from the individual job pause/resume. So it will not affect your individual pause/resume settings.

Copy-On-Write is enabled by default. You can disable it if you really want.

Why disable copy-on-write? If your docker volume stores databases such as PostgreSQL or MariaDB, the copy-on-write feature may hurt performance, though the latest kernels have improved a lot. The good news is that disabling copy-on-write does not prevent you from taking snapshots.

If your volumes directory is a BTRFS partition or volume, tests can be run with:

./test.sh

If you have no BTRFS partition or volume, you can set up a virtual partition in a file as follows (tested on Debian 8):

Set up the BTRFS virtual partition:

sudo qemu-img create /var/lib/docker/btrfs.img 10G
sudo mkfs.btrfs /var/lib/docker/btrfs.img

Note

You can ignore the error; the new filesystem is in fact formatted.

Mount the partition somewhere temporarily to create 3 new BTRFS subvolumes:

sudo -s
mkdir /tmp/btrfs_mount_point
mount -o loop /var/lib/docker/btrfs.img /tmp/btrfs_mount_point/
btrfs subvolume create /tmp/btrfs_mount_point/snapshots
btrfs subvolume create /tmp/btrfs_mount_point/volumes
btrfs subvolume create /tmp/btrfs_mount_point/received
umount /tmp/btrfs_mount_point/
rm -r /tmp/btrfs_mount_point/

Stop docker, create required mount point and restart docker:

systemctl stop docker
mkdir -p /var/lib/buttervolume/volumes
mkdir -p /var/lib/buttervolume/snapshots
mkdir -p /var/lib/buttervolume/received
mount -o loop,subvol=volumes /var/lib/docker/btrfs.img /var/lib/buttervolume/volumes
mount -o loop,subvol=snapshots /var/lib/docker/btrfs.img /var/lib/buttervolume/snapshots
mount -o loop,subvol=received /var/lib/docker/btrfs.img /var/lib/buttervolume/received
systemctl start docker

Once you are done with your tests, you can unmount those volumes and you will find your previous docker volumes again:

systemctl stop docker
umount /var/lib/buttervolume/volumes
umount /var/lib/buttervolume/snapshots
umount /var/lib/buttervolume/received
systemctl start docker
rm /var/lib/docker/btrfs.img

If you're currently using Buttervolume 1.x or 2.0 in production, you must carefully follow the guidelines below to migrate to version 3.

First copy the ssh and config files and disable the scheduler:

sudo -s
docker cp buttervolume_plugin_1:/etc/buttervolume /var/lib/buttervolume/config
docker cp buttervolume_plugin_1:/root/.ssh /var/lib/buttervolume/ssh
mv /var/lib/buttervolume/config/schedule.csv /var/lib/buttervolume/config/schedule.csv.disabled

Then stop all your containers, except buttervolume.

Now snapshot and delete all your volumes:

volumes=$(docker volume ls -f driver=anybox/buttervolume:latest --format "{{.Name}}")
# or: # volumes=$(docker volume ls -f driver=anybox/buttervolume:latest|tail -n+2|awk '{print $2}')
echo $volumes
for v in $volumes; do docker exec buttervolume_plugin_1 buttervolume snapshot $v; done
for v in $volumes; do docker volume rm $v; done

Then stop the buttervolume container, remove the old btrfs.sock file, and restart docker:

docker stop buttervolume_plugin_1
docker rm -v buttervolume_plugin_1
rm /run/docker/plugins/btrfs.sock
systemctl stop docker

If you were using Buttervolume 1.x, you must move your snapshots to the new location:

mkdir /var/lib/buttervolume/snapshots
cd /var/lib/docker/snapshots
for i in *; do btrfs subvolume snapshot -r $i /var/lib/buttervolume/snapshots/$i; done

Restore /var/lib/docker/volumes as the original folder:

cd /var/lib/docker
mkdir volumes.new
mv volumes/* volumes.new/
umount volumes  # if this was a mounted btrfs subvolume
mv volumes.new/* volumes/
rmdir volumes.new
systemctl start docker

Change your volume configurations (in your compose files) to use the new anybox/buttervolume:latest driver name instead of btrfs.

Then start the new buttervolume 3.x as a managed plugin and check it is started:

docker plugin install anybox/buttervolume:latest
docker plugin ls

Then recreate all your volumes with the new driver and restore them from the snapshots:

for v in $volumes; do docker volume create -d anybox/buttervolume:latest $v; done
export RUNCROOT=/run/docker/runtime-runc/plugins.moby/ # or /run/docker/plugins/runtime-root/plugins.moby/
alias drunc="sudo runc --root $RUNCROOT"
alias buttervolume="drunc exec -t $(drunc list|tail -n+2|awk '{print $1}') buttervolume"
# WARNING: check that the volumes you will restore are the correct ones
for v in $volumes; do buttervolume restore $v; done

Then restart your containers and check they are ok, with the correct data.

Reenable the schedule:

mv /var/lib/buttervolume/config/schedule.csv.disabled /var/lib/buttervolume/config/schedule.csv

Thanks to:

  • Christophe Combelles
  • Pierre Verkest
  • Marcelo Ochoa
  • Christoph Rist
  • Philip Nagler-Frank
  • Yoann MOUGNIBAS

buttervolume's Issues

use snapshot when mount

hi,
in my use case, I'd like to set up a DB volume container for QA testing purposes, using the btrfs feature so that each container uses a snapshot of the volume: any testing data will be discarded once QA finishes the tests (cleans the containers), and the testing data in each container won't pollute the base data in the volume.
so the workflow is as below:

  1. create the base data volume
  2. create a container using that base volume
  3. create a snapshot of the volume before running the container
  4. [perform routine tests as the QA team does...]
  5. remove the container
  6. remove the snapshot that the container binds

for now, I was hoping buttervolume could create the snapshot on Mount and remove it on Unmount automatically; however, it looks like buttervolume does not support this.

I'm thinking to support this by introducing an env or conf property (e.g. btrfs_driver_snapshot_on_mount=true/false); if it's true, then buttervolume will create a snapshot and track it somewhere so that the correct snapshot is removed on unmount.
what do you think?

see also: https://docs.docker.com/engine/extend/plugins_volume/#volumedrivermount

No Socket /var/run/docker/plugins/btrfs.sock

Hi, many thanks for the great idea so far. I am loving btrfs, and doing scheduled stuff like snapshotting, syncing, etc. directly with docker is a really great addition.

After installing the plugin as described, there is no socket inside /var/run/docker/plugins/. The plugin created a socket at some subpath, e.g. /var/run/docker/plugins/4537bd7d3e863809942031a9f87c54c2f5c38922d4286c1d9ebbcea3d6df4f22/btrfs.sock.
I could get the plugin to work by creating a symbolic link ln -s /4537bd7d3e863809942031a9f87c54c2f5c38922d4286c1d9ebbcea3d6df4f22/btrfs.sock /var/run/docker/plugins/btrfs.sock and manually starting the daemon with buttervolume run.

Running buttervolume run printed the following:

Starting scheduler job every 60s
Listening to requests...
Serving on http://unix:/run/docker/plugins/btrfs.sock
INFO:root:New scheduler job at 2018-06-06 12:22:20.475458

freeze due to old-style front-end mode (unix socket)

After running the plugin in foreground mode with buttervolume run, I end up with some kind of freeze when doing docker volume ls. It's caused by the /run/docker/plugins/btrfs.sock file being created. After deleting it, docker works again, but it should not be a problem.

purge : prevent confusing m with month

In the purge pattern, 'm' is for minutes, not months.
However there is no error when launching a purge or schedule command with the following pattern: 1w:1m:1y

It should fail after checking that 1m is smaller than 1w. Each pattern item should be larger than the preceding one.

kill the schedule Timer when the main process is killed?

The main thread reacts to the SIGTERM signal and propagates it to the Timer by canceling it. It happened once that the buttervolume docker plugin restarted endlessly (for some external reason) and relaunched the schedule endlessly, creating too many snapshots and leading to a btrfs "Unallocated space too low".
More investigation needs to be done, but the plugin is probably killed instead of terminated, so the Timer thread continues to work, blocking the kill, creating other snapshots, etc. It's probably worth considering reacting to other signals such as SIGKILL.

parsing date from snapshots name

At least while purging snapshots, we parse dates that were generated by datetime.isoformat. If we are unlucky, we get a time with 0 microseconds, in which case the format is a bit different:

(Pdb) datetime(2017,4,3,21,24,0,101).isoformat()
'2017-04-03T21:24:00.000101'
(Pdb) datetime(2017,4,3,21,24,0,0).isoformat()
'2017-04-03T21:24:00'

So while parsing that string we can get a ValueError exception:

(Pdb) datetime.strptime("buttervolume-test-eaf7bb4c65ed4f2ea9c30ef6c33f0f00@2017-04-03T21:24:00@123".split('@')[1], "%Y-%m-%dT%H:%M:%S.%f")
*** ValueError: time data '2017-04-03T21:24:00' does not match format '%Y-%m-%dT%H:%M:%S.%f'
(Pdb) datetime.strptime("buttervolume-test-eaf7bb4c65ed4f2ea9c30ef6c33f0f00@2017-04-03T21:24:00.12@123".split('@')[1], "%Y-%m-%dT%H:%M:%S.%f")
datetime.datetime(2017, 4, 3, 21, 24, 0, 120000)

Egg broken (1.3)

The buttervolume egg v. 1.3 (https://pypi.python.org/pypi/buttervolume/1.3) seems to be broken.

# pip3 install buttervolume 
Collecting buttervolume
  Downloading buttervolume-1.3.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-c_alp5nk/buttervolume/setup.py", line 17, in <module>
        + open('CHANGES.rst').read(),
    FileNotFoundError: [Errno 2] No such file or directory: 'CHANGES.rst'
    
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-c_alp5nk/buttervolume/

I guess the CHANGES.rst file is missing in the package.

unwanted queued replications

After having a slave node disconnected during one day, the following snapshots appear when the slave is reconnected. All these snapshots are not necessary. Even if the purge process will erase most, we have to check how they appear.
Maybe avoid to reschedule a replication if another one for the same volume is not finished?

cluster_caddy_ssl@2018-06-14T11:10:25.549293
cluster_caddy_ssl@2018-06-14T11:35:07.725268
cluster_caddy_ssl@2018-06-14T11:39:03.489281
cluster_caddy_ssl@2018-06-14T11:39:25.813912
cluster_caddy_ssl@2018-06-14T11:54:27.434850
cluster_caddy_ssl@2018-06-14T12:04:00.185161
cluster_caddy_ssl@2018-06-14T12:04:37.264123
cluster_caddy_ssl@2018-06-14T12:19:38.738478
cluster_caddy_ssl@2018-06-14T12:34:40.191789
cluster_caddy_ssl@2018-06-14T12:49:41.919209
cluster_consul_docker_cfg@2018-06-14T11:10:24.695190
cluster_consul_docker_cfg@2018-06-14T11:35:10.019119
cluster_consul_docker_cfg@2018-06-14T11:39:05.787531
cluster_consul_docker_cfg@2018-06-14T11:39:25.127532
cluster_consul_docker_cfg@2018-06-14T11:54:26.636645
cluster_consul_docker_cfg@2018-06-14T12:03:59.193377
cluster_consul_docker_cfg@2018-06-14T12:04:36.565095
cluster_consul_docker_cfg@2018-06-14T12:19:38.045933
cluster_consul_docker_cfg@2018-06-14T12:34:39.497284
cluster_consul_docker_cfg@2018-06-14T12:49:41.143325

disable debug log by default

hi,
I ran buttervolume from the pre-baked docker image:

root@ca4c855d6f0d:/# buttervolume snapshots
DEBUG:requests.packages.urllib3.connectionpool:http://localhost:None "POST /VolumeDriver.Snapshot.List HTTP/1.1" 200 28
  1. http://localhost:None, the port "None" looks confusing, though the result is good
  2. can we disable the DEBUG log by default?

thx

userns-remap and buttervolume

I'm not able to get buttervolume to work properly when the docker daemon is in "userns-remap" mode. I've mapped "root" in my containers to userid 10,000. Volume creation happens but then when it tries to map it into the /var/lib/docker/10000.10000/btrfs/snapshots it seems to lose the mapping. Instead, the volume ends up with nobody:nogroup as the owner.

Here is the output:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/docker/10000.10000/plugins/a611740455740ff69f9721be7f755ebf100baf337248815c37faf52405f12c02/propagated-mount/volumes/test2\\\" to rootfs \\\"/var/lib/docker/10000.10000/btrfs/subvolumes/a1a2b78e7e08eeb182408aceaa5791a419b8c491e80de0969d93e85eff0c0515\\\" at \\\"/home\\\" caused \\\"stat /var/lib/docker/10000.10000/plugins/a611740455740ff69f9721be7f755ebf100baf337248815c37faf52405f12c02/propagated-mount/volumes/test2: permission denied\\\"\"": unknown. ERRO[0001] error waiting for container: context canceled

Can I use native btrfs commands instead of the buttervolume cli ?

I plan to replace my existing rollback-after-deployment system with buttervolume and btrfs snapshots; for now I was tar-ing my volumes, but that takes a while.

This is automated in an ansible playbook, and I have two (small) issues here:

  1. Using the runc execution is cumbersome in ansible
  2. I would like to manage the location and name of snapshots created by the playbook (but not by snapshots created outside, so I won't change the path in the config)

So can I just use btrfs subvolume snapshot on a volume directory ? Does buttervolume have any state that could get out of sync if I use btrfs manually on its folders/volumes ?

docker-runc has become runc in docker-ce 18.9.0

Version 18.9.0 of Docker CE no longer includes the docker-runc command. Instead, the containerd.io package is installed as a dependency, which contains the runc command.

This means that the Bash aliases in the readme no longer work. They also have another problem: the command inside the backticks is run immediately when the alias is defined, so if it's put into .bash_aliases, you get a password prompt when starting a new shell.

I have had success putting the following in .bash_aliases:

function drunc () {
  sudo runc --root /run/docker/plugins/runtime-root/plugins.moby/ "$@"
}

function buttervolume () {
  drunc exec -t "$(drunc list | tail -n+2 | awk '{print $1}')" buttervolume "$@"
}
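With those functions sourced, the plugin CLI can be invoked as before (the volume name below is illustrative, and the commands of course require the plugin to be running):

```shell
buttervolume scheduled        # list the scheduled jobs
buttervolume snapshot myvol   # snapshot a volume by name
```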

exec failed: container "xxxxx" does not exist

Hi,
I'm currently trying to install buttervolume on a dedicated server.
I followed the install guide. I can now create volumes using the anybox/buttervolume:latest driver, but I have a problem with the buttervolume command.
I made aliases using:

alias drunc="sudo runc --root /run/docker/plugins/runtime-root/plugins.moby/"
alias buttervolume="drunc exec -t $(drunc list|tail -n+2|awk '{print $1}') buttervolume"

But when I execute, for example:

buttervolume scheduled

it returns:

ERRO[0000] exec failed: container "1ed2837d4ffbddc9994111b57cbfd72a972a64c77a5211099349b8f986af1bcc" does not exist

I noticed that 1ed2837d4ffbddc9994111b57cbfd72a972a64c77a5211099349b8f986af1bcc is the plugin ID, but I don't understand why this error talks about a container.

Thanks for your help!
Have a nice day

Debian : 10.7
Kernel : 4.19.0-12-amd64
Docker : 20.10.0, build 7287ab3
Buttervolume config : default

500 Internal Server Error

Ubuntu 16.04
Docker 17.06.2-ce
btrfs on /var/lib/docker

docker run -d --privileged \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  -v /run/docker/plugins:/run/docker/plugins \
  anybox/buttervolume

docker volume create -d btrfs testbtrfs
Error response from daemon: create testbtrfs: VolumeDriver.Create:
Error: 500 Internal Server Error
Sorry, the requested URL 'http://unix:/run/docker/plugins/btrfs.sock/VolumeDriver.Create' caused an error:
Internal Server Error

The service log shows:

WARNING:root:No config file /etc/buttervolume/schedule.csv
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/buttervolume-1.3.1-py3.5.egg/buttervolume/plugin.py", line 42, in volume_create
    btrfs.Subvolume(volpath).create()
  File "/usr/local/lib/python3.5/dist-packages/buttervolume-1.3.1-py3.5.egg/buttervolume/btrfs.py", line 40, in create
    out = run('btrfs subvolume create "{}"'.format(self.path))
  File "/usr/local/lib/python3.5/dist-packages/buttervolume-1.3.1-py3.5.egg/buttervolume/btrfs.py", line 7, in run
    stderr=stderr).stdout.decode()
  File "/usr/lib/python3.5/subprocess.py", line 398, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 'btrfs subvolume create "/var/lib/docker/volumes/testbtrfs"' returned non-zero exit status 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/bottle-0.12.13-py3.5.egg/EGG-INFO/scripts/bottle.py", line 862, in _handle
    return route.call(**args)
  File "/usr/local/lib/python3.5/dist-packages/bottle-0.12.13-py3.5.egg/EGG-INFO/scripts/bottle.py", line 1740, in wrapper
    rv = callback(*a, **ka)
  File "/usr/local/lib/python3.5/dist-packages/buttervolume-1.3.1-py3.5.egg/buttervolume/plugin.py", line 44, in volume_create
    return json.dumps({'Err': e.strerror})
AttributeError: 'CalledProcessError' object has no attribute 'strerror'
INFO:root:New scheduler job at 2018-01-07 15:54:42.349692
WARNING:root:No config file /etc/buttervolume/schedule.csv
INFO:root:New scheduler job at 2018-01-07 15:55:42.351171
WARNING:root:No config file /etc/buttervolume/schedule.csv

Error 500 when creating volume in version 3.9

docker compose up results in the following when creating a new volume in 3.9:

Oct 11 08:08:06 servilix dockerd[8809]: time="2022-10-11T08:08:06Z" level=error plugin=37f2845ef60d5735e4dba3401fcf9e603960f694ca3f2917aec557e66d72524e
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/dist-packages/bottle-0.12.23-py3.9.egg/EGG-INFO/scripts/bottle.py", line 876, in _handle
    return route.call(**args)
  File "/usr/local/lib/python3.9/dist-packages/bottle-0.12.23-py3.9.egg/EGG-INFO/scripts/bottle.py", line 1756, in wrapper
    rv = callback(*a, **ka)
  File "/usr/local/lib/python3.9/dist-packages/buttervolume-3.8-py3.9.egg/buttervolume/plugin.py", line 62, in new_handler
    resp = json.dumps(handler(req))
  File "/usr/local/lib/python3.9/dist-packages/buttervolume-3.8-py3.9.egg/buttervolume/plugin.py", line 89, in volume_create
    option_copyonwrite = opts["copyonwrite"].lower()
TypeError: 'NoneType' object is not subscriptable

Oct 11 08:08:06 servilix dockerd[8809]: time="2022-10-11T08:08:06.813106506Z" level=error msg="Handler for POST /v1.41/volumes/create returned error: create homeassistant_haconfig: VolumeDriver.Create:
Error: 500 Internal Server Error
Sorry, the requested URL 'http://waitress.invalid:/run/docker/plugins/btrfs.sock/VolumeDriver.Create' caused an error:
Internal Server Error"

Adding driver_opts in the compose file works around this, but it shouldn't be necessary.

volumes:
  haconfig:
    driver: anybox/buttervolume:latest
    driver_opts:
      option_copyonwrite: "true"

Allow using buttervolume without a dedicated BTRFS partition

One thing that can prevent people from using Buttervolume is the impression that a dedicated BTRFS partition is needed. However, it is possible to format a single file and loop-mount it as /var/lib/buttervolume. To ease the initial setup, either add something like a buttervolume init command, or explain how to do it in the README file.
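A minimal sketch of the loopback setup described above (the image path and size are illustrative; requires root and btrfs-progs):

```shell
# Create a sparse 10 GiB file and format it as BTRFS.
truncate -s 10G /var/lib/buttervolume.img
mkfs.btrfs /var/lib/buttervolume.img

# Loop-mount it where the plugin expects its data.
mkdir -p /var/lib/buttervolume
mount -o loop /var/lib/buttervolume.img /var/lib/buttervolume

# Make the mount persistent across reboots.
echo '/var/lib/buttervolume.img /var/lib/buttervolume btrfs loop 0 0' >> /etc/fstab
```

Because the file is sparse, it only consumes disk space as volumes fill up, which makes this approach reasonable for trying the plugin on a host whose root filesystem is not BTRFS.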

volume labels are lost

While restoring a volume (which happens when moving a volume between nodes in mlfmonde/cluster), the volume's labels are lost.

To reproduce:

$ docker volume create -d anybox/buttervolume:latest --label test=True test
gle@rbx-6-any-3 ~ $ docker volume ls
DRIVER                       VOLUME NAME
anybox/buttervolume:latest   test
$ docker volume inspect test
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "anybox/buttervolume:latest",
        "Labels": {
            "test": "True"
        },
        "Mountpoint": "/var/lib/buttervolume/volumes/test",
        "Name": "test",
        "Options": {},
        "Scope": "local"
    }
]
$ buttervolume snapshot test
$ docker volume rm test
$ docker volume inspect test
[]
Error: No such volume: test
$ buttervolume restore test
$ docker volume inspect test
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "anybox/buttervolume:latest",
        "Labels": null,
        "Mountpoint": "/var/lib/buttervolume/volumes/test",
        "Name": "test",
        "Options": null,
        "Scope": "local"
    }
]

Ambiguity in docs about config.ini

The docs say the configuration and schedules reside in /etc/buttervolume/schedule.csv and /etc/buttervolume/config.ini. As far as I can tell, these are just the paths inside the plugin container, not on the host machine. Is this correct?
I think it's better to tell the user to put the files inside /var/lib/buttervolume/config.

cow = opts.get : AttributeError: 'NoneType' object has no attribute 'get'"

Got an exception while creating a volume through docker-compose:

cow = opts.get(\"copyonwrite\", \"true\").lower()" plugin=d6ead4dd089e1fd02dfbe65eb6b68377350528d8ffd967459e222f686ae075eb
AttributeError: 'NoneType' object has no attribute 'get'" plugin=d6ead4dd089e1fd02dfbe65eb6b68377350528d8ffd967459e222f686ae075eb

Compression is not working?

I've tested with /usr/lib/buttervolume mounted with the compress option (and also with compress-force). Nothing written to a plugin-backed volume gets compressed (according to btrfs usage stats and the compsize utility).

At the same time, when I create a native btrfs subvolume on the same device, mount it locally, completely unrelated to Docker, and write large files, compression works as expected (according to btrfs and compsize).

cluster

amazing work!

I think it could be very interesting to add a clustered storage system on top of buttervolume,

syncing volumes every X minutes between the nodes of a cluster.
You could control the source/destination of each sync by checking which host has the volume mounted, and control the replication factor (x2, x3).
There are existing clustered-storage solutions with similar features, but with buttervolume you can have asynchronous replication, which is the only option when replicating over a high-latency link.

sshd zombie process

On each new connection, a process is spawned by sshd even if the user never logs in.

I guess the zombie process is held by the docker entrypoint itself?!

  • We should offer a way for sysadmins to provide a custom sshd config by bind-mounting /etc/ssh/sshd_config, so that sshd can listen only on a secure network interface
  • We should bind sshd logs somewhere so sysadmins can configure fail2ban-like tools
  • We must reap zombie processes properly
  • We should build this image automatically so that the software inside gets upgraded

We may want to split these points into separate tickets!

Make the volume driver plugin compliant with v1.13

We have to make this plugin compliant with v1.13:

https://docs.docker.com/engine/extend/plugins_volume/#command-line-changes

Seen in docker daemon logs:
Mar 23 03:53:38 nepri env[959]: time="2018-03-23T03:53:38.297181357Z" level=warning msg="Volume driver btrfs returned an error while trying to query its capabilities, using default capabilities: VolumeDriver.Capabilities:
Error: 404 Not Found
Sorry, the requested URL 'http://unix:/run/docker/plugins/btrfs.sock/VolumeDriver.Capabilities' caused an error:
Not found: '/VolumeDriver.Capabilities'"

$ docker info
Containers: 17
Running: 13
Paused: 0
Stopped: 4
Images: 428
Server Version: 17.09.1-ce
Storage Driver: overlay
Backing Filesystem: extfs
Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: btrfs local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: v0.13.2 (expected: 949e6facb77383876aeff8a6944dde66b3089574)
Security Options:
seccomp
Profile: default
selinux
Kernel Version: 4.14.19-coreos
Operating System: Container Linux by CoreOS 1632.3.0 (Ladybug)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 62.71GiB
Name: nepri
ID: WHLI:2RVN:SOD5:TUOM:PWUA:WRVT:KHYZ:OQPB:V7SM:GEDG:FN2Q:4YRK
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

exception during an exception

Found an error in the display of an exception.
To reproduce it: start buttervolume with a wrong mount of /var/lib/buttervolume, so that it is not a BTRFS filesystem or subvolume. Then try to create a volume with docker volume create -d btrfs truuc:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/buttervolume-2.0.0-py3.5.egg/buttervolume/plugin.py", line 64, in volume_create
    btrfs.Subvolume(volpath).create()
  File "/usr/local/lib/python3.5/dist-packages/buttervolume-2.0.0-py3.5.egg/buttervolume/btrfs.py", line 40, in create
    out = run('btrfs subvolume create "{}"'.format(self.path))
  File "/usr/local/lib/python3.5/dist-packages/buttervolume-2.0.0-py3.5.egg/buttervolume/btrfs.py", line 7, in run
    stderr=stderr).stdout.decode()
  File "/usr/lib/python3.5/subprocess.py", line 398, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 'btrfs subvolume create "/var/lib/buttervolume/volumes/truuc"' returned non-zero exit status 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/bottle-0.12.13-py3.5.egg/EGG-INFO/scripts/bottle.py", line 862, in _handle
    return route.call(**args)
  File "/usr/local/lib/python3.5/dist-packages/bottle-0.12.13-py3.5.egg/EGG-INFO/scripts/bottle.py", line 1740, in wrapper
    rv = callback(*a, **ka)
  File "/usr/local/lib/python3.5/dist-packages/buttervolume-2.0.0-py3.5.egg/buttervolume/plugin.py", line 66, in volume_create
    return json.dumps({'Err': e.stderr})
  File "/usr/lib/python3.5/json/__init__.py", line 230, in dumps
    return _default_encoder.encode(obj)
  File "/usr/lib/python3.5/json/encoder.py", line 198, in encode
    chunks = self.iterencode(o, _one_shot=True)
  File "/usr/lib/python3.5/json/encoder.py", line 256, in iterencode
    return _iterencode(o, 0)
  File "/usr/lib/python3.5/json/encoder.py", line 179, in default
    raise TypeError(repr(o) + " is not JSON serializable")
TypeError: b'ERROR: not a btrfs filesystem: /var/lib/buttervolume/volumes\n' is not JSON serializable


[Enhancement] Multiple snapshots as new mounts (clone volume)

Is it possible to implement creating a snapshot as a new volume (cloning)?
My use case is to provide a base mount point with an immutable installation of the software, and to create multiple copies of that base image as snapshots for new mount points.
Currently I am doing this:

$ docker run --shm-size=1g --name apex --hostname apex -p 1521:1521 -p 8080:8080 --volume-driver=btrfs -v apex-5.1.4:/u01/app/oracle/oradata oracle/database:11.2.0.2-xe
$ docker stop apex
$ docker exec -ti plugin_btrfs buttervolume snapshot apex-5.1.4
$ docker exec -ti plugin_btrfs buttervolume restore "apex-5.1.4@2017-09-14T20:18:29.012797" apex-5.1.4-app1
$ docker exec -ti plugin_btrfs buttervolume restore "apex-5.1.4@2017-09-14T20:18:29.012797" apex-5.1.4-app2
$ docker exec -ti plugin_btrfs buttervolume restore "apex-5.1.4@2017-09-14T20:18:29.012797" apex-5.1.4-app3
$ docker run --shm-size=1g --name apex-app1 --hostname apex -p 15121:1521 -p 8081:8080 --volume-driver=btrfs -v apex-5.1.4-app1:/u01/app/oracle/oradata oracle/database:11.2.0.2-xe
$ docker run --shm-size=1g --name apex-app2 --hostname apex -p 15221:1521 -p 8082:8080 --volume-driver=btrfs -v apex-5.1.4-app2:/u01/app/oracle/oradata oracle/database:11.2.0.2-xe
$ docker run --shm-size=1g --name apex-app3 --hostname apex -p 15321:1521 -p 8083:8080 --volume-driver=btrfs -v apex-5.1.4-app3:/u01/app/oracle/oradata oracle/database:11.2.0.2-xe

I would like to do something like this:

$ docker run --shm-size=1g --name apex --hostname apex -p 1521:1521 -p 8080:8080 --volume-driver=btrfs -v apex-5.1.4:/u01/app/oracle/oradata oracle/database:11.2.0.2-xe
$ docker stop apex
$ docker exec -ti plugin_btrfs buttervolume clone apex-5.1.4 apex-5.1.4-app1
$ docker exec -ti plugin_btrfs buttervolume clone apex-5.1.4 apex-5.1.4-app2
$ docker exec -ti plugin_btrfs buttervolume clone apex-5.1.4 apex-5.1.4-app3
$ docker run --shm-size=1g --name apex-app1 --hostname apex -p 15121:1521 -p 8081:8080 --volume-driver=btrfs -v apex-5.1.4-app1:/u01/app/oracle/oradata oracle/database:11.2.0.2-xe
$ docker run --shm-size=1g --name apex-app2 --hostname apex -p 15221:1521 -p 8082:8080 --volume-driver=btrfs -v apex-5.1.4-app2:/u01/app/oracle/oradata oracle/database:11.2.0.2-xe
$ docker run --shm-size=1g --name apex-app3 --hostname apex -p 15321:1521 -p 8083:8080 --volume-driver=btrfs -v apex-5.1.4-app3:/u01/app/oracle/oradata oracle/database:11.2.0.2-xe

This enhancement would be great for preparing environments for training purposes.
Many thanks in advance. Marcelo.

Change SSH port

Is it possible to change the port used for SSH from the default 1122?

This file seems to indicate that setting the environment variable SSH_PORT should work, but I tried:

docker plugin set anybox/buttervolume SSH_PORT=3181

which gives me:

Error response from daemon: setting "SSH_PORT" not found in the plugin configuration

Cannot enable plugin: No btrfs.sock created

I'm getting an error when installing or enabling the buttervolume plugin. I've tried to install the plugin through the repository:

> docker plugin install anybox/buttervolume
...
Error response from daemon: dial unix /run/docker/plugins/<uuid>/btrfs.sock: connect: no such file or directory

The plugin was installed but cannot be enabled.

> docker plugin ls
ID          NAME                         DESCRIPTION                      ENABLED
<ID>        anybox/buttervolume:latest   BTRFS Volume Plugin for Docker   false

Trying to enable it returns the same error:

> docker plugin enable anybox/buttervolume:latest 
Error response from daemon: dial unix /run/docker/plugins/<uuid>/btrfs.sock: connect: no such file or directory

Apparently the btrfs.sock file isn't created during the installation.

broken volume

It has happened a few times that we find a subvolume under a volume directory, instead of the volume itself being the subvolume.
We suspect this comes from a restore onto an existing volume, or from something else we still have to reproduce.

Using Buttervolume inside Docker container

I am currently using an older version of Buttervolume inside a CI build agent, using docker exec buttervolume buttervolume. That works perfectly.

Is there any way to use the newer, managed-plugin-based version this way? docker-runc doesn't seem to work, as it doesn't detect that the plugin is running. /var/run/docker.sock and /run/docker/plugins/runtime-root/plugins.moby are mapped from the host.

On the host:

# drunc list
ID                                                                 PID         STATUS      BUNDLE                                                                                                                                       CREATED                          OWNER
abf6245ea65ee121ff48c30f99c283dac49d225221579ee4a140b7d8a843f200   19607       running     /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/plugins.moby/abf6245ea65ee121ff48c30f99c283dac49d225221579ee4a140b7d8a843f200   2018-11-01T16:26:26.605625462Z   root

Inside the CI container:

# drunc list
ID                                                                 PID         STATUS      BUNDLE                                                                                                                                       CREATED                          OWNER
abf6245ea65ee121ff48c30f99c283dac49d225221579ee4a140b7d8a843f200   0           stopped     /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/plugins.moby/abf6245ea65ee121ff48c30f99c283dac49d225221579ee4a140b7d8a843f200   2018-11-01T16:26:26.605625462Z   root

/var/lib/buttervolume/config/config.ini is not read

In the readme it says I can put configuration settings into

/etc/buttervolume/config.ini file inside the container or /var/lib/buttervolume/config/config.ini on the host

However, I tried changing the snapshot and volume paths in /var/lib/buttervolume/config/config.ini, but they still end up in the default place.

Quickly searching through the code also shows no references to that file; it might be an error in the readme.

However, configuring through a file on the host would be useful to me. I could not figure out the other option, setting environment variables for configuration.

Incomplete example in the documentation

The documentation has the following example:

docker run --privileged -v /var/lib/docker/volumes:/var/lib/docker/volumes -v /run/docker/plugins:/run/docker/plugins anybox/buttervolume

However, running docker exec buttervolume buttervolume snapshot some_volume exits with error code 1, as /var/lib/docker/snapshots isn't a btrfs volume.

The following works:

docker run --privileged \
-v /var/lib/docker/volumes:/var/lib/docker/volumes \
-v /var/lib/docker/snapshots:/var/lib/docker/snapshots \
-v /run/docker/plugins:/run/docker/plugins \
anybox/buttervolume

Should /var/lib/docker/received also be added the same way?
