Website: https://camptocamp.github.io/bivac
Bivac lets you back up all your container volumes deployed on Docker Engine, Cattle or Kubernetes using Restic.
🏕 📦 Backup Interface for Volumes Attached to Containers
Home Page: https://bivac.io
License: Apache License 2.0
It would be interesting to have the backup start time available in Prometheus as soon as the backup starts.
To avoid deleting old events, we could POST the backup start time as soon as possible, then PUT the rest of the metrics at the end of the backup, adding the backup start time again (because PUT deletes all the metrics within the grouping key), so that we can have metrics on the backup duration.
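A minimal sketch of this two-phase push, simulating the Pushgateway's grouping-key semantics in plain Python (PUT replaces every metric under a grouping key, POST only the metrics being pushed); the `push` helper and metric names are illustrative, not bivac's actual code:

```python
# Fake Pushgateway state: grouping key -> {metric name: value}.
gateway = {}

def push(key, metrics, replace_all=False):
    """replace_all=True mimics PUT (wipes the group); False mimics POST."""
    if replace_all:
        gateway[key] = dict(metrics)
    else:
        gateway.setdefault(key, {}).update(metrics)

# 1. As soon as the backup starts, POST the start time.
push("volume=db", {"backup_start_time": 1000})

# 2. At the end, PUT all metrics, re-adding the start time,
#    since PUT would otherwise remove it from the grouping key.
push("volume=db", {"backup_start_time": 1000,
                   "backup_exit_code": 0,
                   "backup_end_time": 1042}, replace_all=True)

duration = (gateway["volume=db"]["backup_end_time"]
            - gateway["volume=db"]["backup_start_time"])
print(duration)  # 42
```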
Using Exoscale, we are getting errors when the backups are partial:
11/07/2016 10:51:42{"level":"debug","msg":"Local and Remote metadata are synchronized, no sync needed.\r\nLast full backup date: Sat Jul 9 02:00:01 2016\r\nAttempt 1 failed. S3ResponseError: S3ResponseError: 403 Forbidden\r\n\u003c?xml version=\"1.0\" encoding=\"UTF-8\"?\u003e\u003cError\u003e\u003cCode\u003eAccessDenied\u003c/Code\u003e\u003cMessage\u003eAccess Denied\u003c/Message\u003e\u003cRequestId\u003e0b6d469e-82a3-499d-af22-c5fa1b5bf6cd\u003c/RequestId\u003e\u003cHostId\u003e0b6d469e-82a3-499d-af22-c5fa1b5bf6cd\u003c/HostId\u003e\u003c/Error\u003e\r\nManifests not equal because different volume numbers\r\nFatal Error: Remote manifest does not match local one. Either the remote backup set or the local archive directory has been corrupted.\r\n","time":"2016-07-11T08:51:42Z"}
Two problems here:
Sometimes we have this error message:
Another instance is already running with this archive directory
If you are sure that this is the only instance running you may delete
the following lockfile and run the command again :
/root/.cache/duplicity/postgresql-puppetdb/lockfile.lock
Since migrating to docker/engine-api, image pulling does not work anymore. It seems the pullImage method exits before the thread is done pulling the image. The image does get pulled, and is found on a second run of conplicity.
Otherwise backups are never purged. We should use DUPLICITY_REMOVE_OLDER_THAN to configure the threshold.
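As a sketch, the purge could shell out to duplicity's remove-older-than action, taking the threshold from the proposed variable; the target URL and the 30D default are illustrative:

```python
import os
import shlex

# DUPLICITY_REMOVE_OLDER_THAN is the variable name suggested in the issue.
threshold = os.environ.get("DUPLICITY_REMOVE_OLDER_THAN", "30D")
target = "s3://my-bucket/my-volume"  # illustrative backend URL

# duplicity's remove-older-than action needs --force to actually delete.
cmd = ["duplicity", "remove-older-than", threshold, "--force", target]
print(shlex.join(cmd))
```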
- handler/handler_test.go
- TestMain (flags) for conplicity.go
- providers/
- volume/volume_test.go
When conplicity performs two backups and the second backup requires a prepare command, it fails with:
7/5/2016 2:26:17 PM FATA[0589] Failed to start exec: Error response from daemon: http: Hijack is incompatible with use of CloseNotifier
Parsing duplicity collection-status output
TODO:
Use duplicity verify and create a prometheus file containing the state of the backup
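A sketch of the parsing part, with the "Last full backup date" line format taken from the log excerpts in these issues rather than from a duplicity specification:

```python
import re
from datetime import datetime

# Sample `duplicity collection-status` output, as seen in the logs above.
status_output = """\
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Sat Jul  9 02:00:01 2016
"""

def last_full_backup_date(output):
    """Extract the last full backup date from collection-status output."""
    match = re.search(r"Last full backup date: (.+)", output)
    if match is None:
        raise ValueError("no full backup found in collection-status output")
    return datetime.strptime(match.group(1).strip(), "%a %b %d %H:%M:%S %Y")

print(last_full_backup_date(status_output).isoformat())  # 2016-07-09T02:00:01
```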
It looks like volume metrics are not collected when an error is thrown.
8/2/2016 6:38:21 AM{"level":"error","msg":"Failed to backup volume openvas_cache: failed to backup volume: failed to retrieve last backup info: failed to parse Duplicity output for last full backup date of openvas_cache","time":"2016-08-02T04:38:21Z"}
=> no metrics for the openvas_cache volume (not even backupExitCode)
I create a volume with the label io.conplicity.engine=rclone and conplicity still backs it up with duplicity.
Maybe related to moby/moby#22838
This will allow us to pass any option to duplicity without having to expose a new environment variable.
It should be possible to parallelize backups.
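A minimal sketch of what that could look like with a bounded worker pool; backup_volume here is a stand-in, not conplicity's real per-volume routine:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def backup_volume(name):
    """Stand-in for the real per-volume backup; returns (name, exit code)."""
    return (name, 0)

volumes = ["db", "cache", "uploads"]

# Back up volumes concurrently, capped at two workers at a time.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(backup_volume, v) for v in volumes]
    results = dict(f.result() for f in as_completed(futures))

print(sorted(results))  # ['cache', 'db', 'uploads']
```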
First of all, thank you for this really promising tool.
I am struggling a bit here, as I do not use AWS and I would like to see Conplicity (which I run in a container) send the backup over SSH.
The problem is that there is currently no way to pass authentication keys all the way down to the Duplicity container, hence a permission denied is all I can get so far.
I am really happy to help implement this, but TBH, I do not really know how to do it or where to start.
Currently, if something goes wrong with one backup, all backups are stopped.
We should handle that better, outputting an error for the failed backup and continuing with the next one.
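A sketch of that error handling, with an illustrative backup_volume stand-in that fails for one volume:

```python
def backup_volume(name):
    """Stand-in for the real backup routine; raises on failure."""
    if name == "broken":
        raise RuntimeError("failed to backup volume " + name)

errors = {}
for volume in ["db", "broken", "cache"]:
    try:
        backup_volume(volume)
    except Exception as exc:
        # Record the error and continue with the next volume
        # instead of aborting the whole run.
        errors[volume] = str(exc)

print(errors)  # {'broken': 'failed to backup volume broken'}
```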
Each container should be able to expose backup policy parameters as labels, in the form io.conplicity.<volume>.param.
Examples:
In addition to environment variables, support CLI flags for options.
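One common pattern is to let the CLI flag override the environment variable; a sketch of that, where the flag and variable names are illustrative:

```python
import argparse
import os

def parse_args(argv=None):
    """Flags take precedence; environment variables act as defaults."""
    parser = argparse.ArgumentParser(prog="conplicity")
    parser.add_argument(
        "--full-if-older-than",
        default=os.environ.get("CONPLICITY_FULL_IF_OLDER_THAN", "15D"))
    return parser.parse_args(argv)

print(parse_args(["--full-if-older-than", "7D"]).full_if_older_than)  # 7D
print(parse_args([]).full_if_older_than)  # 15D (default when the variable is unset)
```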
Currently, conplicity uses the container's hostname in the upload path.
For example, on Rancher, it will typically be conplicity_1, conplicity_2, etc. However, these containers are not strictly linked to their Docker hosts.
It would be better to use the name of the Docker host instead. The info cannot be obtained from the Docker API, but it can be obtained through an orchestration API, such as Rancher's.
For example, we could have a --rancher-hostname option which would get the hostname using Rancher's metadata API instead of using the container's hostname.
It looks like 2 unnamed volumes are created per volume backed up. I can't figure out why yet...
It looks like janitor removes the duplicity_cache volume because there is no duplicity container running when it does the cleanup.
It even looks like it does it the wrong way, by deleting /var/lib/docker/volumes/duplicity_cache without calling docker volume rm duplicity_cache, so Docker thinks that the named volume still exists and will not recreate it at the next run of docker run -v duplicity_cache:/root/.cache/duplicity camptocamp/duplicity ..., and thus fails with:
stat /var/lib/docker/volumes/duplicity_cache/_data: no such file or directory
I don't know if this is a good idea or not...