
convoy's Introduction

Convoy

Overview

Convoy is a Docker volume plugin for a variety of storage back-ends. It supports vendor-specific extensions like snapshots, backups, and restores. It's written in Go and can be deployed as a standalone binary.

[Convoy demo animation]

Why use Convoy?

Convoy makes it easy to manage your data in Docker. It provides persistent volumes for Docker containers with support for snapshots, backups, and restores on various back-ends (e.g. device mapper, NFS, EBS).

For example, you can:

  • Migrate volumes between hosts
  • Share the same volumes across hosts
  • Schedule periodic snapshots of volumes
  • Recover a volume from a previous backup

Supported back-ends

  • Device Mapper
  • Virtual File System (VFS) / Network File System (NFS)
  • Amazon Elastic Block Store (EBS)

Quick Start Guide

First, make sure Docker 1.8 or above is running.

docker --version

If not, install the latest Docker daemon as follows:

curl -sSL https://get.docker.com/ | sh

Once the right Docker daemon version is running, install and configure the Convoy volume plugin as follows:

wget https://github.com/rancher/convoy/releases/download/v0.5.2/convoy.tar.gz
tar xvzf convoy.tar.gz
sudo cp convoy/convoy convoy/convoy-pdata_tools /usr/local/bin/
sudo mkdir -p /etc/docker/plugins/
sudo bash -c 'echo "unix:///var/run/convoy/convoy.sock" > /etc/docker/plugins/convoy.spec'

You can use file-backed loopback devices to test and demo the Convoy Device Mapper driver. Loopback devices, however, are known to be unstable and should not be used in production.

truncate -s 100G data.vol
truncate -s 1G metadata.vol
sudo losetup /dev/loop5 data.vol
sudo losetup /dev/loop6 metadata.vol
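
If /dev/loop5 or /dev/loop6 is already taken on your system, losetup can pick the next free loop devices for you. A minimal sketch (the resulting device names may differ from the ones used below):

DATA_DEV=$(sudo losetup -f --show data.vol)       # attach data.vol to the first free loop device
META_DEV=$(sudo losetup -f --show metadata.vol)   # attach metadata.vol to the next free loop device
sudo losetup -a                                   # verify both attachments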

Once the data and metadata devices are set up, you can start the Convoy plugin daemon as follows:

sudo convoy daemon --drivers devicemapper --driver-opts dm.datadev=/dev/loop5 --driver-opts dm.metadatadev=/dev/loop6
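
Once the daemon is up, a quick sanity check is to ask it for the volume list; on a fresh setup it should respond with an empty result rather than a connection error:

sudo convoy list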

You can now create a Docker container with a Convoy volume. As a test, create a file called /vol1/foo in the Convoy volume:

sudo docker run -v vol1:/vol1 --volume-driver=convoy ubuntu touch /vol1/foo

Next, take a snapshot of the Convoy volume and back up the snapshot to a local directory. (You can also back up to an NFS share or an S3 object store.)

sudo convoy snapshot create vol1 --name snap1vol1
sudo mkdir -p /opt/convoy/
sudo convoy backup create snap1vol1 --dest vfs:///opt/convoy/

The convoy backup command returns a URL string representing the backup dataset. You can use this URL to recover the volume on another host:

sudo convoy create res1 --backup <backup_url>

The following command creates a new container and mounts the recovered Convoy volume into that container:

sudo docker run -v res1:/res1 --volume-driver=convoy ubuntu ls /res1/foo

You should see the recovered file in /res1/foo.

Installation

Ensure you have Docker 1.8 or above installed.

Download the latest version of Convoy and unzip it. Put the binaries in a directory in the execution $PATH of sudo and root users (e.g. /usr/local/bin).

wget https://github.com/rancher/convoy/releases/download/v0.5.2/convoy.tar.gz
tar xvzf convoy.tar.gz
sudo cp convoy/convoy convoy/convoy-pdata_tools /usr/local/bin/

Run the following commands to set up the Convoy volume plugin for Docker:

sudo mkdir -p /etc/docker/plugins/
sudo bash -c 'echo "unix:///var/run/convoy/convoy.sock" > /etc/docker/plugins/convoy.spec'

Start Convoy Daemon

You need to pass different arguments to the Convoy daemon depending on your choice of back-end implementation.

Device Mapper

If you're running in a production environment with the Device Mapper driver, we recommend attaching a new, empty block device to the host running Convoy. You can then create two partitions on that device with dm_dev_partition.sh to produce the two block devices the Device Mapper driver needs. See Device Mapper Partition Helper for more details.

Device Mapper requires two block devices to create a storage pool for all volumes and snapshots. Assuming you have created two devices, a data device at /dev/convoy-vg/data and a metadata device at /dev/convoy-vg/metadata, run the following command to start the Convoy daemon (see the LVM sketch after the notes below for one way to create such devices):

sudo convoy daemon --drivers devicemapper --driver-opts dm.datadev=/dev/convoy-vg/data --driver-opts dm.metadatadev=/dev/convoy-vg/metadata
  • The default Device Mapper volume size is 100G. You can override it with the --driver-opts dm.defaultvolumesize option.
  • You can take a look here if you want to know how much storage should be allocated for the metadata device.
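
For reference, here is one way to prepare two such block devices with LVM. This is only a sketch: it assumes a spare disk at /dev/xvdb, the volume group and logical volume names simply match the example paths above, and the sizes are illustrative.

sudo pvcreate /dev/xvdb
sudo vgcreate convoy-vg /dev/xvdb
sudo lvcreate -n metadata -L 2G convoy-vg       # becomes /dev/convoy-vg/metadata
sudo lvcreate -n data -l 100%FREE convoy-vg     # becomes /dev/convoy-vg/data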

NFS

First, mount the NFS share to the root directory used to store volumes. Substitute <vfs_path> with the appropriate directory of your choice:

sudo mkdir <vfs_path>
sudo mount -t nfs <nfs_server>:/path <vfs_path>

The NFS-based Convoy daemon can be started as follows:

sudo convoy daemon --drivers vfs --driver-opts vfs.path=<vfs_path>

EBS

Make sure you're running on an EC2 instance and have already configured AWS credentials correctly.

sudo convoy daemon --drivers ebs

DigitalOcean

Make sure you're running on a DigitalOcean Droplet and that you have the DO_TOKEN environment variable set with your key.

sudo convoy daemon --drivers digitalocean
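
Because sudo usually strips environment variables, you may need to preserve DO_TOKEN explicitly when starting the daemon. A small sketch (the token value is a placeholder):

export DO_TOKEN=<your_digitalocean_api_token>
sudo -E convoy daemon --drivers digitalocean    # -E keeps DO_TOKEN in the daemon's environment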

Volume Commands

Create a Volume

Volumes can be created using the convoy create command:

sudo convoy create volume_name
  • Device Mapper: The default volume size is 100G. The --size option is supported (see the example after this list).
  • EBS: The default volume size is 4G. --size and some other options are supported.
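
For example, to override the default size at creation time (the size value is illustrative):

sudo convoy create vol1 --size 10G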

You can also create a volume using the docker run command. If the volume does not yet exist, a new volume will be created. Otherwise the existing volume will be used.

sudo docker run -it -v test_volume:/test --volume-driver=convoy ubuntu

Delete a Volume

sudo convoy delete <volume_name>

or

sudo docker rm -v <container_name>
  • NFS, EBS, and DigitalOcean: The -r/--reference option instructs the convoy delete command to delete only the reference to the volume from the current host, leaving the underlying files on the NFS server or the EBS volume unchanged. This is useful when the volume needs to be reused later (see the example below).
  • docker rm -v is treated as convoy delete with -r/--reference.
  • If you use --rm with docker run, all Docker volumes associated with the container are deleted on container exit via convoy delete --reference. See the Docker run reference for details.
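
For example, to drop only this host's reference to an NFS-backed volume while keeping the data on the NFS server:

sudo convoy delete vol1 --reference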

List and Inspect a Volume

sudo convoy list
sudo convoy inspect vol1

Take Snapshot of a Volume

sudo convoy snapshot create vol1 --name snap1vol1

Delete a Snapshot

sudo convoy snapshot delete snap1vol1
  • Device Mapper: please make sure you keep the latest backed-up snapshot for the same volume available to enable the incremental backup mechanism. Convoy needs it to calculate the differences between snapshots.

Backup a Snapshot

  • Device Mapper or VFS: You can back up a snapshot to an NFS mount/local directory or an S3 object store:
sudo convoy backup create snap1vol1 --dest vfs:///opt/backup/

or

sudo convoy backup create snap1vol1 --dest s3://backup-bucket@us-west-2/

or, if you want to use a custom S3 endpoint (such as Minio):

sudo convoy backup --s3-endpoint http://s3.example.com:9000/ create snap1vol1 --dest s3://backup-bucket@us-west-2/

The backup operation returns a URL string that uniquely identifies the backup dataset.

s3://backup-bucket@us-west-2/?backup=f98f9ea1-dd6e-4490-8212-6d50df1982ea&volume=e0d386c5-6a24-446c-8111-1077d10356b0

If you're using S3, please make sure you have AWS credentials ready, either in ~/.aws/credentials or as environment variables, as described here. Because the daemon typically runs as root under sudo, you may need to put the credentials in /root/.aws/credentials or pass the environment variables through sudo for the S3 credentials to work.
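
For reference, the credentials file follows the standard AWS format; the values below are placeholders:

# /root/.aws/credentials
[default]
aws_access_key_id = <your_access_key_id>
aws_secret_access_key = <your_secret_access_key>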

  • EBS: --dest is not needed. Just do convoy backup create snap1vol1.

Restore a Volume from Backup

sudo convoy create res1 --backup <url>
  • EBS: The current host must be in the same region as the backup being restored.

Mount a Restored Volume into a Docker Container

You can use the standard docker run command to mount the restored volume into a Docker container:

sudo docker run -it -v res1:/res1 --volume-driver convoy ubuntu

Mount an NFS-Backed Volume on Multiple Servers

You can mount an NFS-backed volume on multiple servers using the standard docker run command. For example, if you have already created an NFS-backed volume vol1 on one host, you can run the following command on another host to mount the existing vol1 volume into a new container:

sudo docker run -it -v vol1:/vol1 --volume-driver=convoy ubuntu

Support and Discussion

If you need any help with Convoy, please join us on our forum or in the #rancher IRC channel.

Feel free to submit any bugs, issues, and feature requests to Convoy Issues.

Contribution

Contributions are welcome! Please take a look at the Development Guide if you want to learn how to build Convoy from source or run the test cases.

We'd love to hear your ideas for new Convoy drivers, and implementations are most welcome! Please take a look at the enhancement ideas if you want to contribute.

And of course, bug fixes are always welcome!

References

Convoy Command Line Reference

Using Convoy with Docker

Driver Specific

Device Mapper

Amazon Elastic Block Store

Virtual File System/Network File System

convoy's People

Contributors

dtx, ibuildthecloud, jaytaylor, jhmartin, jinuxstyle, johnstarich, kapilt, mathieugagne, nickvanw, opb, sak0, sheng-liang, sprohaska, willbryant, yasker


convoy's Issues

vfs driver: cifs also enabled? / suggestion for Readme

To enable multiple VFS mounts in the daemon, you can give a comma-separated list; e.g. for NFS mounts /home and /data, you'd start the daemon as follows:

sudo convoy daemon --drivers vfs --driver-opts vfs.path=/home,/data

Works as expected, great! This could be added to the Readme!

Furthermore, I found it also works for passing CIFS mounts to the client... Is that expected behavior?

sudo convoy daemon --drivers vfs --driver-opts vfs.path=/home,/cifs

docker run --rm -it --volume-driver=convoy -v /home:/home --volume-driver=convoy -v /cifs:/cifs

df -h
...
192.168.200.200:/home 3.8T 963G 2.7T 27% /home
//samba-server/cifs 18T 17T 800G 96% /cifs
...

AWS creds not loading from ENV or ~/.aws/credentials

I've got a fresh AWS instance with Docker 1.9.1 and Convoy v0.4.1 that refuses to load AWS credentials for S3. I started with setting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, then tried setting those in root's environment, then tried creating ~/.aws/credentials for my user and then for root too, all with no luck. No matter what I've tried, Convoy keeps telling me:

ERRO[0000] Error response from server, AWS Error:  NoCredentialProviders no valid providers in chain <nil>

{
    "Error": "Error response from server, AWS Error:  NoCredentialProviders no valid providers in chain \u003cnil\u003e\n\n"
}

For what it's worth, I installed the awscli package to make sure that it wasn't a credential typo or something and it can connect just fine:

$ aws s3 ls
2015-09-22 21:35:55 env-backup
2015-12-14 19:05:07 repo-s3-logs
2015-09-17 04:27:53 repo
2015-12-21 20:59:51 repo-io

Not sure what's different about this environment versus your test environment, have you seen this before?

inconsistent behaviour on container removal

There are two ways a container can be removed:

  • automatic removal on exit when --rm=true is used - leads to unmount & delete of volume
  • manual removal after "docker ps -a" followed by "docker rm {containerId}" of some inactive container - doesn't lead to volume deletion. Volume is already unmounted since container is inactive.

In the first case, deletion also occurs when the volume was created with convoy create on the command line. If a volume is created via the command line, I think it should not be automatically deleted. The behavior should probably be symmetric: if created automatically, it can be deleted automatically; if created manually, it must be deleted manually.

This problem was detected with the VFS driver, but it most likely affects all drivers.

Multiple containers in a rancher service with same convoy volume name is pointing to same volume

In a Rancher service, we are creating multiple containers with EBS Convoy volumes. If more than one container is created on a single AWS instance, only the first container creates the volume, and all subsequent containers are mounted to the same pre-existing volume.

E.g. a ZooKeeper service is created in Rancher with the "ebs" Convoy driver and a "dev-zkdata:/var/lib/zookeeper/data" data volume. The first container launches correctly, i.e. it creates an EBS volume in AWS (say vol-ABC) and mounts vol-ABC into that container. When we scale up the Rancher service and add a second container, if the second container is created on the same AWS instance, it wrongly mounts vol-ABC instead of creating a new volume for itself.

have convoy daemon start at boot

Maybe a follow up to: #21

It would be nice if a wrapper script were included for /etc/init.d, or some other way to start the convoy daemon at boot.
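
Until something ships with Convoy, a minimal systemd unit is one option. This is only a sketch; the driver, its options, and the paths are illustrative and depend on your setup:

# /etc/systemd/system/convoy.service
[Unit]
Description=Convoy volume plugin daemon
After=docker.service

[Service]
ExecStart=/usr/local/bin/convoy daemon --drivers vfs --driver-opts vfs.path=/opt/convoy-data
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable convoy followed by sudo systemctl start convoy.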

Ability to search backups by destination

Is there a way to search backups at a given location? We are currently creating snapshots and backing them up to S3 using a cron job. The script that cron runs titles the snapshot ${SERVICE}_${INSTANCE_ID}_${TIMESTAMP}. When looking to restore from a backup, it's really difficult to recreate this name, because the timestamp is formatted as TIMESTAMP=$(date +%Y%m%d%H%M%S) and the instance ID is an arbitrary ID created by AWS.

Ideally, we would want to point to our S3 backend and be given a list of backups to restore from. Is this a feature that exists today?
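
For what it's worth, the convoy CLI has a backup list subcommand that takes a destination URL; if your version includes it, something like the following should enumerate the backups stored at an S3 destination (the bucket name is a placeholder):

sudo convoy backup list s3://my-backup-bucket@us-west-2/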

Encrypted backup

This is a feature request for encrypted backup.

When you store data at a service like S3, it would be great if you could also encrypt the data before it's sent to that service.

Here is my suggestion of how it should work:
You generate a private/public keypair.

When doing the backup, the only thing you give it is the public key. The host generates an encryption key used to encrypt the data and uploads a key file; the key file is the encryption key itself, encrypted with the public key.

This means that when you want to restore the data, you need to specify the private key (to recover the encryption key).

The private key can be kept offline; the server encrypting the data doesn't need it, and a new encryption key can be generated any time you need one (you could even generate a fresh key for every upload and discard it when the uploading process stops on that server).
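
To illustrate the proposed flow outside of Convoy, here is a rough sketch of the same idea using openssl and gpg. It is purely illustrative, not a Convoy feature; the file names, the backup directory, and the backup@example.com recipient key are assumptions:

# assumes a GPG keypair for backup@example.com already exists; keep its private key offline
openssl rand -out data.key 32                                             # fresh symmetric key for this backup
tar czf - /opt/convoy/backup | openssl enc -aes-256-cbc -pass file:data.key -out backup.enc
gpg --encrypt --recipient backup@example.com -o data.key.gpg data.key     # key file: the symmetric key, encrypted with the public key
shred -u data.key                                                         # discard the plaintext key after the upload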

GlusterFS + Convoy + MySQL performance_schema errors

Hello,

I've installed GlusterFS + Convoy, and I'm trying to install a MySQL container on top of it but I got some errors in the logs and I can't connect to mysql even from the container shell.

When I try to install the containers without using convoy everything works.

Any ideas?

Here are the docker-compose file and the logs:

mysql:
  environment:
    MYSQL_ROOT_PASSWORD: my_password
  image: mysql
  volumes:
  - my_volume:/var/lib/mysql
  volume_driver: convoy-gluster
wordpress:
  environment:
    WORDPRESS_DB_USER: root
    WORDPRESS_DB_PASSWORD: my_password
  image: wordpress
  links:
  - mysql:mysql
  volumes:
  - my_volume:/var/www/html
  volume_driver: convoy-gluster

logs :

19/1/2016 12:14:242016-01-19T11:14:22.406638Z 0 [Note] mysqld (mysqld 5.7.10) starting as process 1 ...
19/1/2016 12:14:242016-01-19T11:14:24.254637Z 0 [Note] InnoDB: PUNCH HOLE support available
19/1/2016 12:14:242016-01-19T11:14:24.254667Z 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
19/1/2016 12:14:242016-01-19T11:14:24.254675Z 0 [Note] InnoDB: Uses event mutexes
19/1/2016 12:14:242016-01-19T11:14:24.254680Z 0 [Note] InnoDB: GCC builtin __atomic_thread_fence() is used for memory barrier
19/1/2016 12:14:242016-01-19T11:14:24.254686Z 0 [Note] InnoDB: Compressed tables use zlib 1.2.8
19/1/2016 12:14:242016-01-19T11:14:24.254690Z 0 [Note] InnoDB: Using Linux native AIO
19/1/2016 12:14:242016-01-19T11:14:24.254979Z 0 [Note] InnoDB: Number of pools: 1
19/1/2016 12:14:242016-01-19T11:14:24.255099Z 0 [Note] InnoDB: Using CPU crc32 instructions
19/1/2016 12:14:242016-01-19T11:14:24.263800Z 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
19/1/2016 12:14:242016-01-19T11:14:24.273229Z 0 [Note] InnoDB: Completed initialization of buffer pool
19/1/2016 12:14:242016-01-19T11:14:24.275315Z 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
19/1/2016 12:14:262016-01-19T11:14:26.127752Z 0 [Note] InnoDB: Highest supported file format is Barracuda.
19/1/2016 12:14:322016-01-19T11:14:32.441219Z 0 [Note] InnoDB: Creating shared tablespace for temporary tables
19/1/2016 12:14:332016-01-19T11:14:33.020198Z 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
19/1/2016 12:14:352016-01-19T11:14:35.637541Z 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
19/1/2016 12:14:352016-01-19T11:14:35.803490Z 0 [Note] InnoDB: 96 redo rollback segment(s) found. 96 redo rollback segment(s) are active.
19/1/2016 12:14:352016-01-19T11:14:35.803519Z 0 [Note] InnoDB: 32 non-redo rollback segment(s) are active.
19/1/2016 12:14:352016-01-19T11:14:35.804182Z 0 [Note] InnoDB: Waiting for purge to start
19/1/2016 12:14:352016-01-19T11:14:35.854337Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 11579ms. The settings might not be optimal. (flushed=0 and evicted=0, during the time.)
19/1/2016 12:14:352016-01-19T11:14:35.855302Z 0 [Note] InnoDB: 5.7.10 started; log sequence number 1312096
19/1/2016 12:14:352016-01-19T11:14:35.855534Z 0 [Note] InnoDB: not started
19/1/2016 12:14:352016-01-19T11:14:35.855753Z 0 [Note] Plugin 'FEDERATED' is disabled.
19/1/2016 12:14:352016-01-19T11:14:35.937980Z 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
19/1/2016 12:14:402016-01-19T11:14:40.718569Z 0 [Warning] Failed to set up SSL because of the following SSL library error: SSL context is not usable without certificate and private key
19/1/2016 12:14:402016-01-19T11:14:40.718602Z 0 [Note] Server hostname (bind-address): '*'; port: 3306
19/1/2016 12:14:402016-01-19T11:14:40.718642Z 0 [Note] IPv6 is available.
19/1/2016 12:14:402016-01-19T11:14:40.718651Z 0 [Note] - '::' resolves to '::';
19/1/2016 12:14:402016-01-19T11:14:40.718658Z 0 [Note] Server socket created on IP: '::'.
19/1/2016 12:14:402016-01-19T11:14:40.982808Z 0 [Note] InnoDB: Buffer pool(s) load completed at 160119 11:14:40
19/1/2016 12:14:422016-01-19T11:14:42.465016Z 0 [ERROR] Missing system table mysql.proxies_priv; please run mysql_upgrade to create it
19/1/2016 12:14:462016-01-19T11:14:46.338774Z 0 [ERROR] Native table 'performance_schema'.'cond_instances' has the wrong structure
19/1/2016 12:14:462016-01-19T11:14:46.668918Z 0 [ERROR] Native table 'performance_schema'.'events_waits_current' has the wrong structure
19/1/2016 12:14:472016-01-19T11:14:47.080092Z 0 [ERROR] Native table 'performance_schema'.'events_waits_history' has the wrong structure
19/1/2016 12:14:472016-01-19T11:14:47.407437Z 0 [ERROR] Native table 'performance_schema'.'events_waits_history_long' has the wrong structure
19/1/2016 12:14:472016-01-19T11:14:47.735771Z 0 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_host_by_event_name' has the wrong structure
19/1/2016 12:14:482016-01-19T11:14:48.064772Z 0 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_instance' has the wrong structure
19/1/2016 12:14:482016-01-19T11:14:48.474355Z 0 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_thread_by_event_name' has the wrong structure
19/1/2016 12:14:482016-01-19T11:14:48.804163Z 0 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_user_by_event_name' has the wrong structure
19/1/2016 12:14:492016-01-19T11:14:49.132609Z 0 [ERROR] Native table 'performance_schema'.'events_waits_summary_by_account_by_event_name' has the wrong structure
19/1/2016 12:14:492016-01-19T11:14:49.543040Z 0 [ERROR] Native table 'performance_schema'.'events_waits_summary_global_by_event_name' has the wrong structure
19/1/2016 12:14:492016-01-19T11:14:49.871927Z 0 [ERROR] Native table 'performance_schema'.'file_instances' has the wrong structure
19/1/2016 12:14:502016-01-19T11:14:50.199755Z 0 [ERROR] Native table 'performance_schema'.'file_summary_by_event_name' has the wrong structure
19/1/2016 12:14:502016-01-19T11:14:50.527040Z 0 [ERROR] Native table 'performance_schema'.'file_summary_by_instance' has the wrong structure
19/1/2016 12:14:502016-01-19T11:14:50.938382Z 0 [ERROR] Native table 'performance_schema'.'host_cache' has the wrong structure
19/1/2016 12:14:512016-01-19T11:14:51.265176Z 0 [ERROR] Native table 'performance_schema'.'mutex_instances' has the wrong structure
19/1/2016 12:14:512016-01-19T11:14:51.591358Z 0 [ERROR] Native table 'performance_schema'.'objects_summary_global_by_type' has the wrong structure
19/1/2016 12:14:522016-01-19T11:14:52.002853Z 0 [ERROR] Native table 'performance_schema'.'performance_timers' has the wrong structure
19/1/2016 12:14:522016-01-19T11:14:52.331642Z 0 [ERROR] Native table 'performance_schema'.'rwlock_instances' has the wrong structure
19/1/2016 12:14:522016-01-19T11:14:52.659171Z 0 [ERROR] Native table 'performance_schema'.'setup_actors' has the wrong structure
19/1/2016 12:14:522016-01-19T11:14:52.987004Z 0 [ERROR] Native table 'performance_schema'.'setup_consumers' has the wrong structure
19/1/2016 12:14:532016-01-19T11:14:53.397501Z 0 [ERROR] Native table 'performance_schema'.'setup_instruments' has the wrong structure
19/1/2016 12:14:532016-01-19T11:14:53.725422Z 0 [ERROR] Native table 'performance_schema'.'setup_objects' has the wrong structure
19/1/2016 12:14:542016-01-19T11:14:54.052277Z 0 [ERROR] Native table 'performance_schema'.'setup_timers' has the wrong structure
19/1/2016 12:14:542016-01-19T11:14:54.461640Z 0 [ERROR] Native table 'performance_schema'.'table_io_waits_summary_by_index_usage' has the wrong structure
19/1/2016 12:14:542016-01-19T11:14:54.790287Z 0 [ERROR] Native table 'performance_schema'.'table_io_waits_summary_by_table' has the wrong structure
19/1/2016 12:14:552016-01-19T11:14:55.118698Z 0 [ERROR] Native table 'performance_schema'.'table_lock_waits_summary_by_table' has the wrong structure
19/1/2016 12:14:552016-01-19T11:14:55.445721Z 0 [ERROR] Native table 'performance_schema'.'threads' has the wrong structure
19/1/2016 12:14:552016-01-19T11:14:55.857888Z 0 [ERROR] Native table 'performance_schema'.'events_stages_current' has the wrong structure
19/1/2016 12:14:562016-01-19T11:14:56.185598Z 0 [ERROR] Native table 'performance_schema'.'events_stages_history' has the wrong structure
19/1/2016 12:14:562016-01-19T11:14:56.512861Z 0 [ERROR] Native table 'performance_schema'.'events_stages_history_long' has the wrong structure
19/1/2016 12:14:562016-01-19T11:14:56.924452Z 0 [ERROR] Native table 'performance_schema'.'events_stages_summary_by_thread_by_event_name' has the wrong structure
19/1/2016 12:14:572016-01-19T11:14:57.250899Z 0 [ERROR] Native table 'performance_schema'.'events_stages_summary_by_account_by_event_name' has the wrong structure
19/1/2016 12:14:572016-01-19T11:14:57.577356Z 0 [ERROR] Native table 'performance_schema'.'events_stages_summary_by_user_by_event_name' has the wrong structure
19/1/2016 12:14:572016-01-19T11:14:57.903596Z 0 [ERROR] Native table 'performance_schema'.'events_stages_summary_by_host_by_event_name' has the wrong structure
19/1/2016 12:14:582016-01-19T11:14:58.313209Z 0 [ERROR] Native table 'performance_schema'.'events_stages_summary_global_by_event_name' has the wrong structure
19/1/2016 12:14:582016-01-19T11:14:58.641648Z 0 [ERROR] Native table 'performance_schema'.'events_statements_current' has the wrong structure
19/1/2016 12:14:582016-01-19T11:14:58.967896Z 0 [ERROR] Native table 'performance_schema'.'events_statements_history' has the wrong structure
19/1/2016 12:14:592016-01-19T11:14:59.376383Z 0 [ERROR] Native table 'performance_schema'.'events_statements_history_long' has the wrong structure
19/1/2016 12:14:592016-01-19T11:14:59.704241Z 0 [ERROR] Native table 'performance_schema'.'events_statements_summary_by_thread_by_event_name' has the wrong structure
19/1/2016 12:15:002016-01-19T11:15:00.032221Z 0 [ERROR] Native table 'performance_schema'.'events_statements_summary_by_account_by_event_name' has the wrong structure
19/1/2016 12:15:002016-01-19T11:15:00.358302Z 0 [ERROR] Native table 'performance_schema'.'events_statements_summary_by_user_by_event_name' has the wrong structure
19/1/2016 12:15:002016-01-19T11:15:00.768115Z 0 [ERROR] Native table 'performance_schema'.'events_statements_summary_by_host_by_event_name' has the wrong structure
19/1/2016 12:15:012016-01-19T11:15:01.096404Z 0 [ERROR] Native table 'performance_schema'.'events_statements_summary_global_by_event_name' has the wrong structure
19/1/2016 12:15:012016-01-19T11:15:01.423266Z 0 [ERROR] Native table 'performance_schema'.'events_statements_summary_by_digest' has the wrong structure
19/1/2016 12:15:012016-01-19T11:15:01.832823Z 0 [ERROR] Native table 'performance_schema'.'events_statements_summary_by_program' has the wrong structure
19/1/2016 12:15:022016-01-19T11:15:02.162738Z 0 [ERROR] Native table 'performance_schema'.'events_transactions_current' has the wrong structure
19/1/2016 12:15:022016-01-19T11:15:02.489817Z 0 [ERROR] Native table 'performance_schema'.'events_transactions_history' has the wrong structure
19/1/2016 12:15:022016-01-19T11:15:02.815912Z 0 [ERROR] Native table 'performance_schema'.'events_transactions_history_long' has the wrong structure
19/1/2016 12:15:032016-01-19T11:15:03.225098Z 0 [ERROR] Native table 'performance_schema'.'events_transactions_summary_by_thread_by_event_name' has the wrong structure
19/1/2016 12:15:032016-01-19T11:15:03.552768Z 0 [ERROR] Native table 'performance_schema'.'events_transactions_summary_by_account_by_event_name' has the wrong structure
19/1/2016 12:15:032016-01-19T11:15:03.880735Z 0 [ERROR] Native table 'performance_schema'.'events_transactions_summary_by_user_by_event_name' has the wrong structure
19/1/2016 12:15:042016-01-19T11:15:04.292154Z 0 [ERROR] Native table 'performance_schema'.'events_transactions_summary_by_host_by_event_name' has the wrong structure
19/1/2016 12:15:042016-01-19T11:15:04.620118Z 0 [ERROR] Native table 'performance_schema'.'events_transactions_summary_global_by_event_name' has the wrong structure
19/1/2016 12:15:042016-01-19T11:15:04.948917Z 0 [ERROR] Native table 'performance_schema'.'users' has the wrong structure
19/1/2016 12:15:052016-01-19T11:15:05.275926Z 0 [ERROR] Native table 'performance_schema'.'accounts' has the wrong structure
19/1/2016 12:15:052016-01-19T11:15:05.686695Z 0 [ERROR] Native table 'performance_schema'.'hosts' has the wrong structure
19/1/2016 12:15:062016-01-19T11:15:06.014531Z 0 [ERROR] Native table 'performance_schema'.'socket_instances' has the wrong structure
19/1/2016 12:15:062016-01-19T11:15:06.342260Z 0 [ERROR] Native table 'performance_schema'.'socket_summary_by_instance' has the wrong structure
19/1/2016 12:15:062016-01-19T11:15:06.751293Z 0 [ERROR] Native table 'performance_schema'.'socket_summary_by_event_name' has the wrong structure
19/1/2016 12:15:072016-01-19T11:15:07.080204Z 0 [ERROR] Native table 'performance_schema'.'session_connect_attrs' has the wrong structure
19/1/2016 12:15:072016-01-19T11:15:07.407479Z 0 [ERROR] Native table 'performance_schema'.'session_account_connect_attrs' has the wrong structure
19/1/2016 12:15:072016-01-19T11:15:07.735518Z 0 [ERROR] Native table 'performance_schema'.'memory_summary_global_by_event_name' has the wrong structure
19/1/2016 12:15:082016-01-19T11:15:08.144139Z 0 [ERROR] Native table 'performance_schema'.'memory_summary_by_account_by_event_name' has the wrong structure
19/1/2016 12:15:082016-01-19T11:15:08.471517Z 0 [ERROR] Native table 'performance_schema'.'memory_summary_by_host_by_event_name' has the wrong structure
19/1/2016 12:15:082016-01-19T11:15:08.798204Z 0 [ERROR] Native table 'performance_schema'.'memory_summary_by_thread_by_event_name' has the wrong structure
19/1/2016 12:15:092016-01-19T11:15:09.206967Z 0 [ERROR] Native table 'performance_schema'.'memory_summary_by_user_by_event_name' has the wrong structure
19/1/2016 12:15:092016-01-19T11:15:09.534485Z 0 [ERROR] Native table 'performance_schema'.'table_handles' has the wrong structure
19/1/2016 12:15:092016-01-19T11:15:09.861332Z 0 [ERROR] Native table 'performance_schema'.'metadata_locks' has the wrong structure
19/1/2016 12:15:102016-01-19T11:15:10.188137Z 0 [ERROR] Native table 'performance_schema'.'replication_connection_configuration' has the wrong structure
19/1/2016 12:15:102016-01-19T11:15:10.599755Z 0 [ERROR] Native table 'performance_schema'.'replication_group_members' has the wrong structure
19/1/2016 12:15:102016-01-19T11:15:10.927585Z 0 [ERROR] Native table 'performance_schema'.'replication_connection_status' has the wrong structure
19/1/2016 12:15:112016-01-19T11:15:11.254681Z 0 [ERROR] Native table 'performance_schema'.'replication_applier_configuration' has the wrong structure
19/1/2016 12:15:112016-01-19T11:15:11.665063Z 0 [ERROR] Native table 'performance_schema'.'replication_applier_status' has the wrong structure
19/1/2016 12:15:112016-01-19T11:15:11.992926Z 0 [ERROR] Native table 'performance_schema'.'replication_applier_status_by_coordinator' has the wrong structure
19/1/2016 12:15:122016-01-19T11:15:12.320128Z 0 [ERROR] Native table 'performance_schema'.'replication_applier_status_by_worker' has the wrong structure
19/1/2016 12:15:122016-01-19T11:15:12.647609Z 0 [ERROR] Native table 'performance_schema'.'replication_group_member_stats' has the wrong structure
19/1/2016 12:15:132016-01-19T11:15:13.058473Z 0 [ERROR] Native table 'performance_schema'.'prepared_statements_instances' has the wrong structure
19/1/2016 12:15:132016-01-19T11:15:13.385508Z 0 [ERROR] Native table 'performance_schema'.'user_variables_by_thread' has the wrong structure
19/1/2016 12:15:132016-01-19T11:15:13.713013Z 0 [ERROR] Native table 'performance_schema'.'status_by_account' has the wrong structure
19/1/2016 12:15:142016-01-19T11:15:14.122603Z 0 [ERROR] Native table 'performance_schema'.'status_by_host' has the wrong structure
19/1/2016 12:15:142016-01-19T11:15:14.450531Z 0 [ERROR] Native table 'performance_schema'.'status_by_thread' has the wrong structure
19/1/2016 12:15:142016-01-19T11:15:14.777746Z 0 [ERROR] Native table 'performance_schema'.'status_by_user' has the wrong structure
19/1/2016 12:15:152016-01-19T11:15:15.105991Z 0 [ERROR] Native table 'performance_schema'.'global_status' has the wrong structure
19/1/2016 12:15:152016-01-19T11:15:15.516550Z 0 [ERROR] Native table 'performance_schema'.'session_status' has the wrong structure
19/1/2016 12:15:152016-01-19T11:15:15.844737Z 0 [ERROR] Native table 'performance_schema'.'variables_by_thread' has the wrong structure
19/1/2016 12:15:162016-01-19T11:15:16.172724Z 0 [ERROR] Native table 'performance_schema'.'global_variables' has the wrong structure
19/1/2016 12:15:162016-01-19T11:15:16.581286Z 0 [ERROR] Native table 'performance_schema'.'session_variables' has the wrong structure
19/1/2016 12:15:172016-01-19T11:15:17.401799Z 0 [Note] Event Scheduler: Loaded 0 events
19/1/2016 12:15:172016-01-19T11:15:17.402069Z 0 [Note] mysqld: ready for connections.
19/1/2016 12:15:17Version: '5.7.10' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)

Couldn't get GlusterFS Convoy with v0.50.2 Rancher to work

Hi,

I couldn't get GlusterFS Convoy to work with Rancher v0.50.2.
I tried this multiple times, and I've even upgraded to the newest Rancher version.

The GlusterFS service itself starts correctly as far as I can see.
But when I try to create a GlusterFS Convoy service from the Rancher Catalog, the whole thing hangs in the initializing state.

Log output from convoy-gluster_convoy-gluster_1 (volume-agent-glusterfs):

21.12.2015 15:27:55Waiting for metadata.time="2015-12-21T14:27:55Z" level=info msg="Execing [/usr/bin/nsenter --mount=/proc/778/ns/mnt -F -- /var/lib/docker/aufs/mnt/e4e77074b8372129f30a0b9ca864e54feddf414e1ae495404897db546359a248/var/lib/rancher/convoy-agent/share-mnt --stage2 /var/lib/rancher/convoy/convoy-gluster-2cf6ae3e-205f-4892-a791-9f526adfda91 -- /launch volume-agent-glusterfs-internal]"
21.12.2015 15:27:55Registering convoy socket at /var/run/conoy-convoy-gluster.sock
21.12.2015 15:27:55time="2015-12-21T14:27:55Z" level=info msg="Listening for health checks on 0.0.0.0:10241/healthcheck"
21.12.2015 15:27:55time="2015-12-21T14:27:55Z" level=info msg="Got: root /var/lib/rancher/convoy/convoy-gluster-2cf6ae3e-205f-4892-a791-9f526adfda91"
21.12.2015 15:27:55time="2015-12-21T14:27:55Z" level=info msg="Got: drivers [glusterfs]"
21.12.2015 15:27:55time="2015-12-21T14:27:55Z" level=info msg="Got: driver-opts [glusterfs.defaultvolumepool=herocloud glusterfs.servers=glusterfs]"
21.12.2015 15:27:55time="2015-12-21T14:27:55Z" level=info msg="Launching convoy with args: [--socket=/host/var/run/conoy-convoy-gluster.sock daemon --root=/var/lib/rancher/convoy/convoy-gluster-2cf6ae3e-205f-4892-a791-9f526adfda91 --drivers=glusterfs --driver-opts=glusterfs.defaultvolumepool=herocloud --driver-opts=glusterfs.servers=glusterfs]"
21.12.2015 15:27:55time="2015-12-21T14:27:55Z" level=debug msg="Creating config at /var/lib/rancher/convoy/convoy-gluster-2cf6ae3e-205f-4892-a791-9f526adfda91" pkg=daemon
21.12.2015 15:27:55time="2015-12-21T14:27:55Z" level=debug msg= driver=glusterfs driver_opts=map[glusterfs.defaultvolumepool:herocloud glusterfs.servers:glusterfs] event=init pkg=daemon reason=prepare root="/var/lib/rancher/convoy/convoy-gluster-2cf6ae3e-205f-4892-a791-9f526adfda91"
21.12.2015 15:27:55time="2015-12-21T14:27:55Z" level=debug msg="Volume herocloud is being mounted it to /var/lib/rancher/convoy/convoy-gluster-2cf6ae3e-205f-4892-a791-9f526adfda91/glusterfs/mounts/herocloud, with option [-t glusterfs]" pkg=util
21.12.2015 15:27:56time="2015-12-21T14:27:56Z" level=error msg="Get http:///host/var/run/conoy-convoy-gluster.sock/v1/volumes/list: dial unix /host/var/run/conoy-convoy-gluster.sock: no such file or directory"
21.12.2015 15:27:57time="2015-12-21T14:27:57Z" level=error msg="Get http:///host/var/run/conoy-convoy-gluster.sock/v1/volumes/list: dial unix /host/var/run/conoy-convoy-gluster.sock: no such file or directory"
21.12.2015 15:27:58time="2015-12-21T14:27:58Z" level=error msg="Get http:///host/var/run/conoy-convoy-gluster.sock/v1/volumes/list: dial unix /host/var/run/conoy-convoy-gluster.sock: no such file or directory"
... repeats very often ...
21.12.2015 15:30:02time="2015-12-21T14:30:02Z" level=error msg="Get http:///host/var/run/conoy-convoy-gluster.sock/v1/volumes/list: dial unix /host/var/run/conoy-convoy-gluster.sock: no such file or directory"
21.12.2015 15:30:02time="2015-12-21T14:30:02Z" level=debug msg="Cleaning up environment..." pkg=daemon
21.12.2015 15:30:02time="2015-12-21T14:30:02Z" level=error msg="Failed to execute: mount [-t glusterfs glusterfs:/herocloud /var/lib/rancher/convoy/convoy-gluster-2cf6ae3e-205f-4892-a791-9f526adfda91/glusterfs/mounts/herocloud], output Mount failed. Please check the log file for more details.\n, error exit status 1"
21.12.2015 15:30:02{
21.12.2015 15:30:02 "Error": "Failed to execute: mount [-t glusterfs glusterfs:/herocloud /var/lib/rancher/convoy/convoy-gluster-2cf6ae3e-205f-4892-a791-9f526adfda91/glusterfs/mounts/herocloud], output Mount failed. Please check the log file for more details.\n, error exit status 1"
21.12.2015 15:30:02}
21.12.2015 15:30:02time="2015-12-21T14:30:02Z" level=info msg="convoy exited with error: exit status 1"
21.12.2015 15:30:02time="2015-12-21T14:30:02Z" level=info msg=Exiting.

Log Output from convoy-gluster_convoy-gluster-storagepool_1 Container:

21.12.2015 15:27:55time="2015-12-21T14:27:55Z" level=info msg="Listening for health checks on 0.0.0.0:10241/healthcheck"
21.12.2015 15:28:00time="2015-12-21T14:28:00Z" level=debug msg="storagepool event [2c5c6de9-67dd-4a88-a436-2a78b5c4b1f2]"
21.12.2015 15:41:10Waiting for metadata.

Any ideas what's going wrong?

backup create with vfs has case-sensitivity issues

When using Convoy v0.3 to create a backup of a snapshot with a VFS destination, it appears that Convoy applies some sort of toLower() function to the path name, causing an invalid path to be attempted.

Example:

~# convoy backup create jenkins-initial-snapshot --dest vfs:///mnt/nfs/mynfsserver/Backup/docker/convoy
ERRO[0000] Error response from server, VFS path /mnt/nfs/mynfsserver/backup/docker/convoy doesn't exist or is not a directory

{
        "Error": "Error response from server, VFS path /mnt/nfs/mynfsserver/backup/docker/convoy doesn't exist or is not a directory\n"
}
~#

Notice that in the VFS path I provide, Backup is uppercase, but when Convoy returns it, it's lowercase. I believe this is the reason it's not finding the VFS target.

WAL Backup

Just curious: is there a sensible way to implement http://www.pgbarman.org/ or similar WAL archiving as a volume driver as well? This would be a nice abstraction layer for this very useful tool, at least for the Postgres world.

Azure blob storage support

It would be nice to have more cloud storage options than just S3 and I was wondering if it'd be hard to adapt the Azure support currently used in Docker Distribution as a volume driver?

If I have some time in the evening in the coming weeks I can try to do this.

Volume quota

Volume quota would be a great feature. Is it possible / planned?

Better log

Currently many actions aren't logged, which could cause problems. We should check and log everything necessary.

Snapshot / Backup maintenance

It would be nice to:

  • have the ability to delete multiple backups / snapshots at once
  • qualify those multiple backups / snapshots by criteria like age, or tag or field

Currently I'm working on a Python script that makes a system call to convoy and parses the JSON output to get the various fields, then works from that. But it would be great if I could either use the convoy CLI directly or have a REST API to make calls to.

I understand that's a large scope, so I can break this out into multiple requests if desired; however, I figured that some of this (the API) is probably on the roadmap or even exists already in some form.
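
For what it's worth, since convoy list already emits JSON, a shell pipeline with jq can approximate this kind of bulk maintenance today. A rough sketch, assuming jq is installed and the ${SERVICE}_${INSTANCE_ID}_${TIMESTAMP} naming scheme above (the JSON layout of convoy list varies by version, so the jq filter and the name pattern are placeholders to adjust):

# delete all snapshots whose names match an old date embedded in the name
sudo convoy list \
  | jq -r '.. | .Name? // empty' \
  | grep '^myservice_.*_2015' \
  | xargs -r -n1 sudo convoy snapshot delete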

Recommended Convoy installation on Docker Machine / VirtualBox

I'm working on a POC for a development environment utilizing Rancher, Docker Machine and Convoy for persistent storage. Ultimately, I'll probably add (more permanent) hosts through some other provider (eg: OpenStack or AWS), but for the time being, I'm stuck with Docker Machine and the VirtualBox driver.

The problem with the current installation and configuration steps is that the recommendations are to place things in ephemeral locations on the Docker host (ex: /usr/local/bin/, /etc/docker/plugins/). I can work around the placement of volumes, binaries and startup scripts, but the one that has me stumped is the /etc/docker/plugins location where it seems Docker expects the Convoy socket definition.

Do you have a recommended setup for running Convoy in a Docker Machine/VirtualBox environment?

Command line options not obeyed

Running 'convoy daemon --drivers devicemapper --driver-opts dm.datadev=/dev/loop5 --driver-opts dm.metadatadev=/dev/loop6' per the initial docs for the first time does not daemonize convoy.

A config file is written to /var/lib/convoy/convoy.cfg. Additional runs using the same command line as above show the following message:

DEBU[0000] Found existing config. Ignoring command line opts, loading config from /var/lib/convoy pkg=server

Convoy daemon surprisingly created 12 additional volumes

I just ran sudo convoy create vol-rancher-cattle and got many EBS volumes created within 10 minutes. I killed the host, started a new one, and the issue occurred again... If the volume name contains dots, an additional volume named . will be created...
Convoy 0.3.0
Daemon in EBS mode
daemon logs:

time="2015-10-11T18:47:54Z" level=debug msg="Handle plugin volume path: POST /VolumeDriver.Path" pkg=daemon 
time="2015-10-11T18:47:54Z" level=debug msg="Request from docker: &{vol-rancher-cattle map[]}" pkg=daemon 
time="2015-10-11T18:47:54Z" level=debug msg= event=mountpoint object=volume pkg=daemon reason=prepare volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:47:54Z" level=debug msg= event=mountpoint mountpoint= object=volume pkg=daemon reason=complete volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:47:54Z" level=debug msg="Volume: 5c22798f-7532-4902-93eb-2e3b3190d340 (name vol-rancher-cattle) is mounted at  for docker" pkg=daemon 
time="2015-10-11T18:47:54Z" level=debug msg="Response:  {}" pkg=daemon 
time="2015-10-11T18:47:54Z" level=debug msg="Handle plugin volume path: POST /VolumeDriver.Path" pkg=daemon 
time="2015-10-11T18:47:54Z" level=debug msg="Request from docker: &{vol-rancher-cattle map[]}" pkg=daemon 
time="2015-10-11T18:47:54Z" level=debug msg= event=mountpoint object=volume pkg=daemon reason=prepare volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:47:54Z" level=debug msg= event=mountpoint mountpoint= object=volume pkg=daemon reason=complete volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:47:54Z" level=debug msg="Volume: 5c22798f-7532-4902-93eb-2e3b3190d340 (name vol-rancher-cattle) is mounted at  for docker" pkg=daemon 
time="2015-10-11T18:47:54Z" level=debug msg="Response:  {}" pkg=daemon 
time="2015-10-11T18:47:54Z" level=debug msg="Handle plugin mount volume: POST /VolumeDriver.Mount" pkg=daemon 
time="2015-10-11T18:47:54Z" level=debug msg="Request from docker: &{vol-rancher-cattle map[]}" pkg=daemon 
time="2015-10-11T18:47:54Z" level=debug msg="Mount volume: 5c22798f-7532-4902-93eb-2e3b3190d340 (name vol-rancher-cattle) for docker" pkg=daemon 
time="2015-10-11T18:47:54Z" level=debug msg= event=mount object=volume opts=map[MountPoint:] pkg=daemon reason=prepare volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:47:54Z" level=debug msg="Volume 5c22798f-7532-4902-93eb-2e3b3190d340 is not mounted, mount it now to /var/lib/convoy/ebs/mounts/5c22798f-7532-4902-93eb-2e3b3190d340" pkg=ebs 
time="2015-10-11T18:47:54Z" level=debug msg= event=list mountpoint="/var/lib/convoy/ebs/mounts/5c22798f-7532-4902-93eb-2e3b3190d340" object=volume pkg=daemon reason=complete volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:47:54Z" level=debug msg="Response:  {\n\t\"Mountpoint\": \"/var/lib/convoy/ebs/mounts/5c22798f-7532-4902-93eb-2e3b3190d340\"\n}" pkg=daemon 
time="2015-10-11T18:48:04Z" level=debug msg="Handle plugin unmount volume: POST /VolumeDriver.Unmount" pkg=daemon 
time="2015-10-11T18:48:04Z" level=debug msg="Request from docker: &{vol-rancher-cattle map[]}" pkg=daemon 
time="2015-10-11T18:48:04Z" level=debug msg="Unmount volume: 5c22798f-7532-4902-93eb-2e3b3190d340 (name vol-rancher-cattle) for docker" pkg=daemon 
time="2015-10-11T18:48:04Z" level=debug msg= event=umount object=volume pkg=daemon reason=prepare volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:48:04Z" level=debug msg= event=umount object=volume pkg=daemon reason=complete volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:48:04Z" level=debug msg="Response:  {}" pkg=daemon 
time="2015-10-11T18:48:17Z" level=debug msg="Handle plugin mount volume: POST /VolumeDriver.Mount" pkg=daemon 
time="2015-10-11T18:48:17Z" level=debug msg="Request from docker: &{vol-rancher-cattle map[]}" pkg=daemon 
time="2015-10-11T18:48:17Z" level=debug msg="Mount volume: 5c22798f-7532-4902-93eb-2e3b3190d340 (name vol-rancher-cattle) for docker" pkg=daemon 
time="2015-10-11T18:48:17Z" level=debug msg= event=mount object=volume opts=map[MountPoint:] pkg=daemon reason=prepare volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:48:18Z" level=debug msg="Volume 5c22798f-7532-4902-93eb-2e3b3190d340 is not mounted, mount it now to /var/lib/convoy/ebs/mounts/5c22798f-7532-4902-93eb-2e3b3190d340" pkg=ebs 
time="2015-10-11T18:48:18Z" level=debug msg= event=list mountpoint="/var/lib/convoy/ebs/mounts/5c22798f-7532-4902-93eb-2e3b3190d340" object=volume pkg=daemon reason=complete volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:48:18Z" level=debug msg="Response:  {\n\t\"Mountpoint\": \"/var/lib/convoy/ebs/mounts/5c22798f-7532-4902-93eb-2e3b3190d340\"\n}" pkg=daemon 
time="2015-10-11T18:53:09Z" level=debug msg="Handle plugin unmount volume: POST /VolumeDriver.Unmount" pkg=daemon 
time="2015-10-11T18:53:09Z" level=debug msg="Request from docker: &{vol-rancher-cattle map[]}" pkg=daemon 
time="2015-10-11T18:53:09Z" level=debug msg="Unmount volume: 5c22798f-7532-4902-93eb-2e3b3190d340 (name vol-rancher-cattle) for docker" pkg=daemon 
time="2015-10-11T18:53:09Z" level=debug msg= event=umount object=volume pkg=daemon reason=prepare volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:53:09Z" level=debug msg= event=umount object=volume pkg=daemon reason=complete volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:53:09Z" level=debug msg="Response:  {}" pkg=daemon 
time="2015-10-11T18:54:27Z" level=debug msg="Handle plugin create volume: POST /VolumeDriver.Create" pkg=daemon 
time="2015-10-11T18:54:27Z" level=debug msg="Request from docker: &{vol-rancher-cattle map[]}" pkg=daemon 
time="2015-10-11T18:54:27Z" level=debug msg="Found volume 5c22798f-7532-4902-93eb-2e3b3190d340 (name vol-rancher-cattle) for docker" pkg=daemon 
time="2015-10-11T18:54:27Z" level=debug msg="Response:  {}" pkg=daemon 
time="2015-10-11T18:54:27Z" level=debug msg="Handle plugin volume path: POST /VolumeDriver.Path" pkg=daemon 
time="2015-10-11T18:54:27Z" level=debug msg="Request from docker: &{vol-rancher-cattle map[]}" pkg=daemon 
time="2015-10-11T18:54:27Z" level=debug msg= event=mountpoint object=volume pkg=daemon reason=prepare volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:54:27Z" level=debug msg= event=mountpoint mountpoint= object=volume pkg=daemon reason=complete volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:54:27Z" level=debug msg="Volume: 5c22798f-7532-4902-93eb-2e3b3190d340 (name vol-rancher-cattle) is mounted at  for docker" pkg=daemon 
time="2015-10-11T18:54:27Z" level=debug msg="Response:  {}" pkg=daemon 
time="2015-10-11T18:54:27Z" level=debug msg="Handle plugin create volume: POST /VolumeDriver.Create" pkg=daemon 
time="2015-10-11T18:54:27Z" level=debug msg="Request from docker: &{b71009af882f0bc126ed4dec3d7fd5d128178934476a74dbd5071c01b96e9c58 map[]}" pkg=daemon 
time="2015-10-11T18:54:27Z" level=debug msg="Create a new volume b71009af882f0bc126ed4dec3d7fd5d128178934476a74dbd5071c01b96e9c58 for docker" pkg=daemon 
time="2015-10-11T18:54:27Z" level=debug msg= event=create object=volume opts=map[BackupURL: VolumeName:b71009af882f0bc126ed4dec3d7fd5d128178934476a74dbd5071c01b96e9c58 VolumeDriverID: VolumeType: VolumeIOPS:0 Size:0] pkg=daemon reason=prepare volume=250668c7-15f4-4e18-82e9-2f4f312ed381 volume_name=b71009af882f0bc126ed4dec3d7fd5d128178934476a74dbd5071c01b96e9c58 
time="2015-10-11T18:54:32Z" level=debug msg="Waiting for volume vol-4de8ecbd state transiting from creating to available" pkg=ebs 
time="2015-10-11T18:54:37Z" level=debug msg="Adding tags for vol-4de8ecbd, as map[Name:b71009af882f0bc126ed4dec3d7fd5d128178934476a74dbd5071c01b96e9c58 ConvoyVolumeUUID:250668c7-15f4-4e18-82e9-2f4f312ed381]" pkg=ebs 
time="2015-10-11T18:54:37Z" level=debug msg="Created volume 250668c7-15f4-4e18-82e9-2f4f312ed381 from EBS volume vol-4de8ecbd" pkg=ebs 
time="2015-10-11T18:54:37Z" level=debug msg="Response:  {\n\t\"Err\": \"Cannot find an available device for instance i-189904a1\"\n}" pkg=daemon 
time="2015-10-11T18:54:49Z" level=debug msg="Handle plugin volume path: POST /VolumeDriver.Path" pkg=daemon 
time="2015-10-11T18:54:49Z" level=debug msg="Request from docker: &{vol-rancher-cattle map[]}" pkg=daemon 
time="2015-10-11T18:54:49Z" level=debug msg= event=mountpoint object=volume pkg=daemon reason=prepare volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:54:49Z" level=debug msg= event=mountpoint mountpoint= object=volume pkg=daemon reason=complete volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:54:49Z" level=debug msg="Volume: 5c22798f-7532-4902-93eb-2e3b3190d340 (name vol-rancher-cattle) is mounted at  for docker" pkg=daemon 
time="2015-10-11T18:54:49Z" level=debug msg="Response:  {}" pkg=daemon 
time="2015-10-11T18:54:49Z" level=debug msg="Handle plugin volume path: POST /VolumeDriver.Path" pkg=daemon 
time="2015-10-11T18:54:49Z" level=debug msg="Request from docker: &{vol-rancher-cattle map[]}" pkg=daemon 
time="2015-10-11T18:54:49Z" level=debug msg= event=mountpoint object=volume pkg=daemon reason=prepare volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:54:49Z" level=debug msg= event=mountpoint mountpoint= object=volume pkg=daemon reason=complete volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:54:49Z" level=debug msg="Volume: 5c22798f-7532-4902-93eb-2e3b3190d340 (name vol-rancher-cattle) is mounted at  for docker" pkg=daemon 
time="2015-10-11T18:54:49Z" level=debug msg="Response:  {}" pkg=daemon 
time="2015-10-11T18:54:49Z" level=debug msg="Handle plugin mount volume: POST /VolumeDriver.Mount" pkg=daemon 
time="2015-10-11T18:54:49Z" level=debug msg="Request from docker: &{vol-rancher-cattle map[]}" pkg=daemon 
time="2015-10-11T18:54:49Z" level=debug msg="Mount volume: 5c22798f-7532-4902-93eb-2e3b3190d340 (name vol-rancher-cattle) for docker" pkg=daemon 
time="2015-10-11T18:54:49Z" level=debug msg= event=mount object=volume opts=map[MountPoint:] pkg=daemon reason=prepare volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:54:49Z" level=debug msg="Volume 5c22798f-7532-4902-93eb-2e3b3190d340 is not mounted, mount it now to /var/lib/convoy/ebs/mounts/5c22798f-7532-4902-93eb-2e3b3190d340" pkg=ebs 
time="2015-10-11T18:54:49Z" level=debug msg= event=list mountpoint="/var/lib/convoy/ebs/mounts/5c22798f-7532-4902-93eb-2e3b3190d340" object=volume pkg=daemon reason=complete volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:54:49Z" level=debug msg="Response:  {\n\t\"Mountpoint\": \"/var/lib/convoy/ebs/mounts/5c22798f-7532-4902-93eb-2e3b3190d340\"\n}" pkg=daemon 
time="2015-10-11T18:56:47Z" level=debug msg="Handle plugin unmount volume: POST /VolumeDriver.Unmount" pkg=daemon 
time="2015-10-11T18:56:47Z" level=debug msg="Request from docker: &{vol-rancher-cattle map[]}" pkg=daemon 
time="2015-10-11T18:56:47Z" level=debug msg="Unmount volume: 5c22798f-7532-4902-93eb-2e3b3190d340 (name vol-rancher-cattle) for docker" pkg=daemon 
time="2015-10-11T18:56:47Z" level=debug msg= event=umount object=volume pkg=daemon reason=prepare volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:56:47Z" level=debug msg= event=umount object=volume pkg=daemon reason=complete volume=5c22798f-7532-4902-93eb-2e3b3190d340 
time="2015-10-11T18:56:47Z" level=debug msg="Response:  {}" pkg=daemon 

fdisk -l

Disk /dev/xvda: 75.2 GB, 75161927680 bytes
255 heads, 63 sectors/track, 9137 cylinders, total 146800640 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

    Device Boot      Start         End      Blocks   Id  System
/dev/xvda1   *       16065   146785904    73384920   83  Linux

Disk /dev/xvdb: 4289 MB, 4289200128 bytes
255 heads, 63 sectors/track, 521 cylinders, total 8377344 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdb doesn't contain a valid partition table

Disk /dev/mapper/docker-202:1-526147-pool: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 65536 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/docker-202:1-526147-pool doesn't contain a valid partition table

Disk /dev/xvdn: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdn doesn't contain a valid partition table

Disk /dev/xvdj: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdj doesn't contain a valid partition table

Disk /dev/xvdi: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdi doesn't contain a valid partition table

Disk /dev/xvdh: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdh doesn't contain a valid partition table

Disk /dev/xvdk: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdk doesn't contain a valid partition table

Disk /dev/xvdl: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdl doesn't contain a valid partition table

Disk /dev/xvdo: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdo doesn't contain a valid partition table

Disk /dev/xvdm: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdm doesn't contain a valid partition table

Disk /dev/xvdp: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdp doesn't contain a valid partition table

Disk /dev/xvdf: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdf doesn't contain a valid partition table

Disk /dev/xvdg: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders, total 8388608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdg doesn't contain a valid partition table

Convoy container fails to start

Rancher 0.47.0, convoy-agent 0.1.0, Docker 1.7.1, CoreOS 766.5.0

A GlusterFS stack was successfully started from the catalogue.

Using the catalogue, I then attempted to start a Convoy stack on top of it. The storagepool service started, but the main convoy service does not; its container exits with the error message:

26/11/2015 2:50:48 pm  Waiting for metadata
time="2015-11-26T01:50:48Z" level=fatal msg="json: cannot unmarshal string into Go value of type []*configs.ThrottleDevice"

Interestingly, the error message about 'cannot unmarshal' is similar to another error we are seeing in Rancher when loading docker-compose.yml. Could this be down to an incompatibility with CoreOS?

Data only container backend

Sometimes it would be simplest to use a data-only container inside a Rancher service/stack, but sidekicks don't support shared data-only containers in the way needed here (each app container gets its own data container, so the volume isn't shared).

Is it possible to build a data-only container back-end in addition to solutions like xfs or dm?
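
For reference, the plain-Docker pattern being asked for looks roughly like this (a sketch; the image and container names are illustrative):

docker create -v /shared-data --name datastore busybox true   # data-only container that just owns the /shared-data volume
docker run -d --volumes-from datastore --name app1 myapp      # both app containers mount the same /shared-data
docker run -d --volumes-from datastore --name app2 myapp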

"AWS Error: UnauthorizedOperation... : <nil>" when trying to use EBS backend

Installed Convoy on CoreOS with the following systemd unit:

[Unit]
Description=Convoy Daemon
Requires=docker.service

[Service]
User=core
WorkingDirectory=/tmp
ExecStartPre=/usr/bin/curl -L -o /tmp/convoy.tar.gz https://github.com/rancher/convoy/releases/download/v0.3/convoy.tar.gz
ExecStartPre=/usr/bin/tar -C /tmp -xvzf /tmp/convoy.tar.gz
ExecStartPre=/usr/bin/sudo mkdir -p /opt/bin
ExecStartPre=/usr/bin/sudo cp /tmp/convoy/convoy /tmp/convoy/convoy-pdata_tools /opt/bin/
ExecStartPre=/usr/bin/sudo mkdir -p /etc/docker/plugins/
ExecStartPre=/usr/bin/sudo bash -c 'echo "unix:///var/run/convoy/convoy.sock" > /etc/docker/plugins/convoy.spec'
ExecStart=/usr/bin/sudo convoy daemon --drivers ebs

[Install]
WantedBy=multi-user.target

Release info:

core@ip-10-0-10-60 ~ $ cat /etc/os-release 
NAME=CoreOS
ID=coreos
VERSION=835.2.0
VERSION_ID=835.2.0
BUILD_ID=
PRETTY_NAME="CoreOS 835.2.0"
ANSI_COLOR="1;32"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://github.com/coreos/bugs/issues"

core@ip-10-0-10-60 ~ $ docker -v
Docker version 1.8.3, build cedd534-dirty

No love trying to use convoy:

core@ip-10-0-10-60 ~ $ ps -aux | grep convoy
root      6455  0.0  0.3 263340 15360 ?        Ssl  Nov06   0:01 convoy daemon --drivers ebs
core@ip-10-0-10-60 ~ $ docker run --rm --volume-driver=convoy --volume convoy-debian-test:/test debian                                                                                                                                         
Error response from daemon: AWS Error:  UnauthorizedOperation You are not authorized to perform this operation. <nil>
403

core@ip-10-0-10-60 ~ $ sudo convoy create test       
ERRO[0000] Error response from server, AWS Error:  UnauthorizedOperation You are not authorized to perform this operation. <nil>
403  {
        "Error": "Error response from server, AWS Error:  UnauthorizedOperation You are not authorized to perform this operation. \u003cnil\u003e\n403 \n\n"
}

Related IAM role policy:

{
    "Statement": [
        {
            "Sid": "Stmt32854384387634",
            "Action": [
                "ec2:CreateSnapshot",
                "ec2:CreateTags",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeSnapshots",
                "ec2:DescribeTags",
                "ec2:DescribeVolumeAttribute",
                "ec2:DescribeVolumeStatus",
                "ec2:DescribeVolumes"
            ],
            "Effect": "Allow",
            "Resource": [
                "*"
            ]
        }
    ]
}

Any ideas?
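
For comparison, the policy above only grants describe, snapshot, and tag actions. A sketch of a broader policy that would also let the driver create, attach, detach, and delete volumes (the added entries are standard EC2 API action names, listed here as an assumption rather than taken from Convoy's documentation):

{
    "Statement": [
        {
            "Sid": "ConvoyEbsSketch",
            "Action": [
                "ec2:CreateSnapshot",
                "ec2:DeleteSnapshot",
                "ec2:CreateTags",
                "ec2:CreateVolume",
                "ec2:DeleteVolume",
                "ec2:AttachVolume",
                "ec2:DetachVolume",
                "ec2:DescribeAvailabilityZones",
                "ec2:DescribeInstances",
                "ec2:DescribeSnapshots",
                "ec2:DescribeTags",
                "ec2:DescribeVolumeAttribute",
                "ec2:DescribeVolumeStatus",
                "ec2:DescribeVolumes"
            ],
            "Effect": "Allow",
            "Resource": [
                "*"
            ]
        }
    ]
}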

Can't get convoy FS to work

I wanted to try Convoy GlusterFS, but I can't get it working properly. I retried multiple times following the tutorial in the documentation.

This is what the logs tell me:

12/17/2015 2:45:40 PM  time="2015-12-17T13:45:40Z" level=error msg="Get http:///host/var/run/conoy-convoy-gluster.sock/v1/volumes/list: dial unix /host/var/run/conoy-convoy-gluster.sock: no such file or directory"

Overview of my hosts:

[screenshot: hosts overview, 2015-12-17 at 14:55:32]

Overview of my containers:

[screenshot: containers overview, 2015-12-17 at 14:50:24]

restart of convoy daemon causes error after restart

I'm new to using convoy so this might be user error but here is what I'm seeing:

after a new install, I can
a) create a volume
b) run the daemon just fine
c) see the created volume
d) attach the volume to a container and work with the volume

ok, works as expected!

but when I Ctrl-C the daemon and restart it, I see the following behaviour:

a) I can list the volume
b) when I try to run a container and map it to the previously created volume, I get the following error:
[root@localhost convoy]# docker run -it -v pgvol1:/pgdata --volume-driver convoy crunchydata/cpm-node:latest bash
Error response from daemon: Error running CreateDevice dm_task_run failed
c) the daemon log shows this error:
DEBU[0073] [devmapper] CreateDevice(poolName=/dev/mapper/convoy-pool, deviceId=9)
DEBU[0073] libdevmapper(7): ioctl/libdm-iface.c:1750 (4) dm version OF 16384
DEBU[0073] libdevmapper(7): ioctl/libdm-iface.c:1750 (4) dm message convoy-pool OF create_thin 9 16384
ERRO[0073] libdevmapper(3): ioctl/libdm-iface.c:1768 (-1) device-mapper: message ioctl on convoy-pool failed: Operation not permitted
DEBU[0073] Response: {
"Err": "Error running CreateDevice dm_task_run failed"
} pkg=daemon

I'm running both the daemon and the docker 'run' as the root user. The platform is CentOS 7 using Docker 1.8.3 with SELinux in enforcing mode. This is Convoy version 0.3.

any ideas?

GS driver support

Maybe Google Cloud Storage is already working? At first glance the S3 handling looks pretty similar to GCS.

Specifying volume names in compose with convoy

I'm not sure if this is more of a compose question or one for convoy, but I'm trying to determine what the directives in docker-compose.yml would be for using convoy. Specifically, how to define the volume name and how to specify multiple volumes where one isn't a convoy volume.

Example:

postgres:
  build: ./docks/psql
  ports:
    - 5432
  volumes:
    - /var/lib/postgresql/data
  volume_driver: convoy




This seems to work, but it creates a generated name as shown below. Is there a directive I'm missing in compose, or an option I can set in convoy, to get more friendly volume names / aliases?

# convoy list
{
        "0320b9b1-1af0-49d4-b77b-44f096820d1d": {
                "UUID": "0320b9b1-1af0-49d4-b77b-44f096820d1d",
                "Name": "349e35980dabe24bc9cac9ba9e3020382e61dc2941563e68449aacb92a46a88d",
                "Driver": "vfs",
                "MountPoint": "/data/convoy/349e35980dabe24bc9cac9ba9e3020382e61dc2941563e68449aacb92a46a88d",
                "CreatedTime": "Tue Sep 15 12:45:51 -0500 2015",
                "DriverInfo": {
                        "Driver": "vfs",
                        "MountPoint": "/data/convoy/349e35980dabe24bc9cac9ba9e3020382e61dc2941563e68449aacb92a46a88d",
                        "Path": "/data/convoy/349e35980dabe24bc9cac9ba9e3020382e61dc2941563e68449aacb92a46a88d"
                },
                "Snapshots": {}
        },
root:/data/convoy# ls -l 349e35980dabe24bc9cac9ba9e3020382e61dc2941563e68449aacb92a46a88d/
total 116
drwx------ 6 999 docker  4096 Sep 15 12:45 base
drwx------ 2 999 docker  4096 Sep 15 13:10 global
drwx------ 2 999 docker  4096 Sep 15 12:45 pg_clog
drwx------ 2 999 docker  4096 Sep 15 12:45 pg_dynshmem
-rw------- 1 999 docker  4600 Sep 15 12:45 pg_hba.conf
-rw------- 1 999 docker  1636 Sep 15 12:45 pg_ident.conf
drwx------ 4 999 docker  4096 Sep 15 12:45 pg_logical
drwx------ 4 999 docker  4096 Sep 15 12:45 pg_multixact
drwx------ 2 999 docker  4096 Sep 15 12:45 pg_notify
drwx------ 2 999 docker  4096 Sep 15 12:45 pg_replslot
drwx------ 2 999 docker  4096 Sep 15 12:45 pg_serial
drwx------ 2 999 docker  4096 Sep 15 12:45 pg_snapshots
drwx------ 2 999 docker  4096 Sep 15 12:45 pg_stat
drwx------ 2 999 docker  4096 Sep 15 13:10 pg_stat_tmp
drwx------ 2 999 docker  4096 Sep 15 12:45 pg_subtrans
drwx------ 2 999 docker  4096 Sep 15 12:45 pg_tblspc
drwx------ 2 999 docker  4096 Sep 15 12:45 pg_twophase
-rw------- 1 999 docker     4 Sep 15 12:45 PG_VERSION
drwx------ 3 999 docker  4096 Sep 15 12:45 pg_xlog
-rw------- 1 999 docker    88 Sep 15 12:45 postgresql.auto.conf
-rw------- 1 999 docker 21263 Sep 15 12:45 postgresql.conf
-rw------- 1 999 docker    37 Sep 15 12:45 postmaster.opts
-rw------- 1 999 docker    85 Sep 15 12:45 postmaster.pid
root:/data/convoy#
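
One approach worth trying (a sketch, not verified against this Convoy version; the volume name and host path are illustrative): put an explicit name on the host side of the volume mapping so Docker passes your chosen name to the volume driver instead of generating one. Under Docker's usual rules an absolute host path is treated as a bind mount and bypasses the volume driver, which also covers the mixed-volume case:

postgres:
  build: ./docks/psql
  ports:
    - 5432
  volumes:
    - pgdata:/var/lib/postgresql/data    # named volume, handled by convoy
    - /srv/psql-conf:/etc/postgresql     # absolute host path: plain bind mount, not sent to convoy
  volume_driver: convoy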

Sync convoy volumes between hosts?

Could the convoy volume also be moved during container migration to another host?
Could convoy volumes be synced between (RancherOS) hosts to be more fail-safe?

Maybe convoy could work together with the Flocker plugin?
But convoy should keep its simple usage (set the driver and use the "-v" option).

Recommended practice for backgrounding convoy daemon process?

I noticed that running convoy daemon does not background by default. What is the recommended way to do this?

  • nohup / background it (when not debugging)
  • write an init script for it
  • dockerize it
  • add a feature to run in background.

Here's my command:

sudo convoy daemon --drivers vfs --driver-opts vfs.path=/mnt/nfs/Backup/docker

Am I missing anything?
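
For the init-script route, a minimal systemd unit sketch for the command above (assuming the binaries live in /usr/local/bin as in the installation instructions and that the NFS mount is already in place; adjust paths to your setup):

[Unit]
Description=Convoy volume plugin daemon
After=network.target

[Service]
# same daemon invocation as above, run in the foreground under systemd
ExecStart=/usr/local/bin/convoy daemon --drivers vfs --driver-opts vfs.path=/mnt/nfs/Backup/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target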

Creating multiple ebs volumes for one docker run command

I installed and started convoy on an EC2 machine (t2.micro, Ubuntu 14.04 LTS) and ran the following command after configuring the AWS credentials:

# docker run -it -v masterdb-vol:/var/lib/mysql --volume-driver=convoy tutum/mysql

Expected:
To create one EBS volume and mount /var/lib/mysql to it.

Result:
The command created two EBS volumes, and then the container exited with the error:

chmod: cannot access '/etc/mysql/conf.d/my.cnf': No such file or directory

The debug logs of convoy:

DEBU[0449] Handle plugin create volume: POST /VolumeDriver.Create  pkg=daemon
DEBU[0449] Request from docker: &{masterdb-vol map[]}    pkg=daemon
DEBU[0449] Create a new volume masterdb-vol for docker   pkg=daemon
DEBU[0449]                                               event=create object=volume opts=map[BackupURL: VolumeName:masterdb-vol VolumeDriverID: VolumeType: VolumeIOPS:0 Size:0] pkg=daemon reason=prepare volume=acf08d0e-c245-4970-b9eb-13a10f33f9a1 volume_name=masterdb-vol
DEBU[0454] Waiting for volume vol-8f4b317b state transiting from creating to available  pkg=ebs
DEBU[0460] Adding tags for vol-8f4b317b, as map[Name:masterdb-vol ConvoyVolumeUUID:acf08d0e-c245-4970-b9eb-13a10f33f9a1]  pkg=ebs
DEBU[0460] Created volume acf08d0e-c245-4970-b9eb-13a10f33f9a1 from EBS volume vol-8f4b317b  pkg=ebs
DEBU[0460] Attaching vol-8f4b317b to i-14b373d0's /dev/sdm  pkg=ebs
DEBU[0465] Waiting for volume vol-8f4b317b attaching     pkg=ebs
DEBU[0470] Attached EBS volume vol-8f4b317b to /dev/xvdm  pkg=ebs
DEBU[0471] Created volume                                event=create object=volume pkg=daemon reason=complete volume=acf08d0e-c245-4970-b9eb-13a10f33f9a1
DEBU[0471] Found volume acf08d0e-c245-4970-b9eb-13a10f33f9a1 (name masterdb-vol) for docker  pkg=daemon
DEBU[0471] Response:  {}                                 pkg=daemon
DEBU[0471] Handle plugin volume path: POST /VolumeDriver.Path  pkg=daemon
DEBU[0471] Request from docker: &{masterdb-vol map[]}    pkg=daemon
DEBU[0471]                                               event=mountpoint object=volume pkg=daemon reason=prepare volume=acf08d0e-c245-4970-b9eb-13a10f33f9a1
DEBU[0471]                                               event=mountpoint mountpoint= object=volume pkg=daemon reason=complete volume=acf08d0e-c245-4970-b9eb-13a10f33f9a1
DEBU[0471] Volume: acf08d0e-c245-4970-b9eb-13a10f33f9a1 (name masterdb-vol) is mounted at  for docker  pkg=daemon
DEBU[0471] Response:  {}                                 pkg=daemon
DEBU[0471] Handle plugin create volume: POST /VolumeDriver.Create  pkg=daemon
DEBU[0471] Request from docker: &{0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965 map[]}  pkg=daemon
DEBU[0471] Create a new volume 0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965 for docker  pkg=daemon
DEBU[0471]                                               event=create object=volume opts=map[VolumeName:0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965 VolumeDriverID: VolumeType: VolumeIOPS:0 Size:0 BackupURL:] pkg=daemon reason=prepare volume=e4087278-b375-4f95-b30e-d3cec1087d52 volume_name=0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965
DEBU[0476] Waiting for volume vol-a848325c state transiting from creating to available  pkg=ebs
DEBU[0482] Adding tags for vol-a848325c, as map[Name:0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965 ConvoyVolumeUUID:e4087278-b375-4f95-b30e-d3cec1087d52]  pkg=ebs
DEBU[0482] Created volume e4087278-b375-4f95-b30e-d3cec1087d52 from EBS volume vol-a848325c  pkg=ebs
DEBU[0482] Attaching vol-a848325c to i-14b373d0's /dev/sdn  pkg=ebs
DEBU[0487] Waiting for volume vol-a848325c attaching     pkg=ebs
DEBU[0493] Attached EBS volume vol-a848325c to /dev/xvdn  pkg=ebs
DEBU[0493] Created volume                                event=create object=volume pkg=daemon reason=complete volume=e4087278-b375-4f95-b30e-d3cec1087d52
DEBU[0493] Found volume e4087278-b375-4f95-b30e-d3cec1087d52 (name 0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965) for docker  pkg=daemon
DEBU[0493] Response:  {}                                 pkg=daemon
DEBU[0493] Handle plugin volume path: POST /VolumeDriver.Path  pkg=daemon
DEBU[0493] Request from docker: &{0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965 map[]}  pkg=daemon
DEBU[0493]                                               event=mountpoint object=volume pkg=daemon reason=prepare volume=e4087278-b375-4f95-b30e-d3cec1087d52
DEBU[0493]                                               event=mountpoint mountpoint= object=volume pkg=daemon reason=complete volume=e4087278-b375-4f95-b30e-d3cec1087d52
DEBU[0493] Volume: e4087278-b375-4f95-b30e-d3cec1087d52 (name 0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965) is mounted at  for docker  pkg=daemon
DEBU[0493] Response:  {}                                 pkg=daemon
DEBU[0493] Handle plugin mount volume: POST /VolumeDriver.Mount  pkg=daemon
DEBU[0493] Request from docker: &{0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965 map[]}  pkg=daemon
DEBU[0493] Mount volume: e4087278-b375-4f95-b30e-d3cec1087d52 (name 0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965) for docker  pkg=daemon
DEBU[0493]                                               event=mount object=volume opts=map[MountPoint:] pkg=daemon reason=prepare volume=e4087278-b375-4f95-b30e-d3cec1087d52
DEBU[0493] Volume e4087278-b375-4f95-b30e-d3cec1087d52 is not mounted, mount it now to /var/lib/convoy/ebs/mounts/e4087278-b375-4f95-b30e-d3cec1087d52  pkg=ebs
DEBU[0494]                                               event=list mountpoint=/var/lib/convoy/ebs/mounts/e4087278-b375-4f95-b30e-d3cec1087d52 object=volume pkg=daemon reason=complete volume=e4087278-b375-4f95-b30e-d3cec1087d52
DEBU[0494] Response:  {
    "Mountpoint": "/var/lib/convoy/ebs/mounts/e4087278-b375-4f95-b30e-d3cec1087d52"
}  pkg=daemon
DEBU[0494] Handle plugin unmount volume: POST /VolumeDriver.Unmount  pkg=daemon
DEBU[0494] Request from docker: &{0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965 map[]}  pkg=daemon
DEBU[0494] Unmount volume: e4087278-b375-4f95-b30e-d3cec1087d52 (name 0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965) for docker  pkg=daemon
DEBU[0494]                                               event=umount object=volume pkg=daemon reason=prepare volume=e4087278-b375-4f95-b30e-d3cec1087d52
DEBU[0494]                                               event=umount object=volume pkg=daemon reason=complete volume=e4087278-b375-4f95-b30e-d3cec1087d52
DEBU[0494] Response:  {}                                 pkg=daemon
DEBU[0494] Handle plugin mount volume: POST /VolumeDriver.Mount  pkg=daemon
DEBU[0494] Request from docker: &{masterdb-vol map[]}    pkg=daemon
DEBU[0494] Mount volume: acf08d0e-c245-4970-b9eb-13a10f33f9a1 (name masterdb-vol) for docker  pkg=daemon
DEBU[0494]                                               event=mount object=volume opts=map[MountPoint:] pkg=daemon reason=prepare volume=acf08d0e-c245-4970-b9eb-13a10f33f9a1
DEBU[0494] Volume acf08d0e-c245-4970-b9eb-13a10f33f9a1 is not mounted, mount it now to /var/lib/convoy/ebs/mounts/acf08d0e-c245-4970-b9eb-13a10f33f9a1  pkg=ebs
DEBU[0494]                                               event=list mountpoint=/var/lib/convoy/ebs/mounts/acf08d0e-c245-4970-b9eb-13a10f33f9a1 object=volume pkg=daemon reason=complete volume=acf08d0e-c245-4970-b9eb-13a10f33f9a1
DEBU[0494] Response:  {
    "Mountpoint": "/var/lib/convoy/ebs/mounts/acf08d0e-c245-4970-b9eb-13a10f33f9a1"
}  pkg=daemon
DEBU[0494] Handle plugin mount volume: POST /VolumeDriver.Mount  pkg=daemon
DEBU[0494] Request from docker: &{0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965 map[]}  pkg=daemon
DEBU[0494] Mount volume: e4087278-b375-4f95-b30e-d3cec1087d52 (name 0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965) for docker  pkg=daemon
DEBU[0494]                                               event=mount object=volume opts=map[MountPoint:] pkg=daemon reason=prepare volume=e4087278-b375-4f95-b30e-d3cec1087d52
DEBU[0494] Volume e4087278-b375-4f95-b30e-d3cec1087d52 is not mounted, mount it now to /var/lib/convoy/ebs/mounts/e4087278-b375-4f95-b30e-d3cec1087d52  pkg=ebs
DEBU[0494]                                               event=list mountpoint=/var/lib/convoy/ebs/mounts/e4087278-b375-4f95-b30e-d3cec1087d52 object=volume pkg=daemon reason=complete volume=e4087278-b375-4f95-b30e-d3cec1087d52
DEBU[0494] Response:  {
    "Mountpoint": "/var/lib/convoy/ebs/mounts/e4087278-b375-4f95-b30e-d3cec1087d52"
}  pkg=daemon
chmod: cannot access '/etc/mysql/conf.d/my.cnf': No such file or directory
DEBU[0494] Handle plugin unmount volume: POST /VolumeDriver.Unmount  pkg=daemon
DEBU[0494] Request from docker: &{masterdb-vol map[]}    pkg=daemon
DEBU[0494] Unmount volume: acf08d0e-c245-4970-b9eb-13a10f33f9a1 (name masterdb-vol) for docker  pkg=daemon
DEBU[0494]                                               event=umount object=volume pkg=daemon reason=prepare volume=acf08d0e-c245-4970-b9eb-13a10f33f9a1
DEBU[0494]                                               event=umount object=volume pkg=daemon reason=complete volume=acf08d0e-c245-4970-b9eb-13a10f33f9a1
DEBU[0494] Response:  {}                                 pkg=daemon
DEBU[0494] Handle plugin unmount volume: POST /VolumeDriver.Unmount  pkg=daemon
DEBU[0494] Request from docker: &{0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965 map[]}  pkg=daemon
DEBU[0494] Unmount volume: e4087278-b375-4f95-b30e-d3cec1087d52 (name 0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965) for docker  pkg=daemon
DEBU[0494]                                               event=umount object=volume pkg=daemon reason=prepare volume=e4087278-b375-4f95-b30e-d3cec1087d52
DEBU[0494]                                               event=umount object=volume pkg=daemon reason=complete volume=e4087278-b375-4f95-b30e-d3cec1087d52
DEBU[0494] Response:  {}                                 pkg=daemon

convoy list:

root@Lychee-master:/var/lib/convoy/ebs/mounts# convoy list
DEBU[1468] Calling: GET, /volumes/list, request: GET, /v1/volumes/list  pkg=daemon
{
    "acf08d0e-c245-4970-b9eb-13a10f33f9a1": {
        "UUID": "acf08d0e-c245-4970-b9eb-13a10f33f9a1",
        "Name": "masterdb-vol",
        "Driver": "ebs",
        "MountPoint": "",
        "CreatedTime": "Fri Oct 16 02:16:01 +0000 2015",
        "DriverInfo": {
            "AvailablityZone": "us-west-2a",
            "CreatedTime": "Fri Oct 16 02:15:39 +0000 2015",
            "Device": "/dev/xvdm",
            "Driver": "ebs",
            "EBSVolumeID": "vol-8f4b317b",
            "IOPS": "12",
            "MountPoint": "",
            "Size": "4294967296",
            "State": "in-use",
            "Type": "gp2",
            "UUID": "acf08d0e-c245-4970-b9eb-13a10f33f9a1"
        },
        "Snapshots": {}
    },
    "e4087278-b375-4f95-b30e-d3cec1087d52": {
        "UUID": "e4087278-b375-4f95-b30e-d3cec1087d52",
        "Name": "0e7ed611f484092d396504a318f4c748150d56165dd03bdc2b3a288be2644965",
        "Driver": "ebs",
        "MountPoint": "",
        "CreatedTime": "Fri Oct 16 02:16:23 +0000 2015",
        "DriverInfo": {
            "AvailablityZone": "us-west-2a",
            "CreatedTime": "Fri Oct 16 02:16:01 +0000 2015",
            "Device": "/dev/xvdn",
            "Driver": "ebs",
            "EBSVolumeID": "vol-a848325c",
            "IOPS": "12",
            "MountPoint": "",
            "Size": "4294967296",
            "State": "in-use",
            "Type": "gp2",
            "UUID": "e4087278-b375-4f95-b30e-d3cec1087d52"
        },
        "Snapshots": {}
    }
}

Use metadata snapshot for device mapper

Now we're checking against the live kernel metadata tree, which is not very safe.

The metadata snapshot supported by device mapper should be used instead, but that would involve updating docker/pkg/devicemapper as well.
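
For context, this is roughly how a thin-pool metadata snapshot is taken and read with the stock device-mapper tooling (a sketch using the standard dmsetup and thin_dump interfaces, not Convoy code; the pool name and metadata device are illustrative and follow the loopback setup used elsewhere in this document):

# reserve a metadata snapshot so the on-disk metadata can be read consistently
sudo dmsetup message convoy-pool 0 reserve_metadata_snap

# dump metadata from the snapshot rather than the live tree
sudo thin_dump --metadata-snap /dev/loop6

# release the snapshot when finished
sudo dmsetup message convoy-pool 0 release_metadata_snap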

Atomic backup with NFS backend?

Hey folks, Convoy looks really cool, nice work!

I see you have an NFS backend and support for snapshots. But are NFS-backed volume snapshots atomic, i.e. safe for taking crash-consistent backups of databases? If so, how does this work?

Separately, I'm interested in discussing how it would make sense to integrate Convoy as a Flocker backend.
