camptocamp / bivac

🏕 📦 Backup Interface for Volumes Attached to Containers

Home Page: https://bivac.io

License: Apache License 2.0

Go 87.66% Makefile 1.88% Dockerfile 0.73% Ruby 1.04% Shell 8.36% Mustache 0.32%
backup restic kubernetes docker containers go devops-tools


bivac's People

Contributors

clayrisser, cryptobioz, felixkrohn, majklovec, mbornoz, mcanevet, raphink, saimonn, vampouille


bivac's Issues

Push backup start time metric to prometheus gateway

It would be useful to have the backup start time available in Prometheus as soon as the backup starts.
To avoid losing earlier events, we could POST the backup start time as soon as the backup begins, then PUT the rest of the metrics at the end of the backup, including the start time again (because PUT replaces all the metrics within the grouping key). That way we can derive metrics on backup duration.

Errors with Exoscale

Using Exoscale, we are getting errors when the backups are partial:

11/07/2016 10:51:42{"level":"debug","msg":"Local and Remote metadata are synchronized, no sync needed.\r\nLast full backup date: Sat Jul  9 02:00:01 2016\r\nAttempt 1 failed. S3ResponseError: S3ResponseError: 403 Forbidden\r\n\u003c?xml version=\"1.0\" encoding=\"UTF-8\"?\u003e\u003cError\u003e\u003cCode\u003eAccessDenied\u003c/Code\u003e\u003cMessage\u003eAccess Denied\u003c/Message\u003e\u003cRequestId\u003e0b6d469e-82a3-499d-af22-c5fa1b5bf6cd\u003c/RequestId\u003e\u003cHostId\u003e0b6d469e-82a3-499d-af22-c5fa1b5bf6cd\u003c/HostId\u003e\u003c/Error\u003e\r\nManifests not equal because different volume numbers\r\nFatal Error: Remote manifest does not match local one.  Either the remote backup set or the local archive directory has been corrupted.\r\n","time":"2016-07-11T08:51:42Z"}

Two problems here:

  • the error goes undetected by Conplicity
  • this should not happen, and is probably linked to the fact that we don't set ACLs

Remove lockfile from duplicity_cache

Sometimes we have this error message:

Another instance is already running with this archive directory
If you are sure that this is the  only instance running you may delete
the following lockfile and run the command again :
     /root/.cache/duplicity/postgresql-puppetdb/lockfile.lock

Image pulling is broken

Since migrating to docker/engine-api, image pulling no longer works. It seems the pullImage method returns before the pull is complete.

The image does get pulled, and is found on a second run of conplicity.

Improve coverage

  • Make coverage pass for handler/handler_test.go (flags)
  • Add tests (TestMain) for conplicity.go
  • Add tests for providers/
  • Test bad stdout in volume/volume_test.go

Fails on second backup

When conplicity performs two backups and the second backup requires a prepare command, it fails with:

7/5/2016 2:26:17 PM FATA[0589] Failed to start exec: Error response from daemon: http: Hijack is incompatible with use of CloseNotifier

Change project name

TODO:

  • Choose new project name
  • Rename project in GitHub
  • Rename project in Code
  • Change logo
  • Change Hugo site

Metrics not collected when something fails

It looks like volume metrics are not collected when an error is thrown.

8/2/2016 6:38:21 AM{"level":"error","msg":"Failed to backup volume openvas_cache: failed to backup volume: failed to retrieve last backup info: failed to parse Duplicity output for last full backup date of openvas_cache","time":"2016-08-02T04:38:21Z"}

=> no metrics for openvas_cache volume (even backupExitCode)

Cannot use scp or sftp with ssh keys

First of all, thank you for this really promising tool.

I am struggling a bit here as I do not use AWS and I would like to see Conplicity (which I run in a container) send the backup over ssh.

The problem is that there is currently no way to pass authentication keys all the way down to the Duplicity container, so all I get so far is a permission denied error.

I am really happy to help implement this, but to be honest I do not really know how to do it or where to start.

Handle exceptions properly

Currently, if something goes wrong with one backup, all backups are stopped.

We should handle that better, outputting an error for the failing backup and continuing with the next one.

Pass backup policy parameters as volume labels

Each container should be able to expose backup policy parameters as labels, in the form io.conplicity.<volume>.<param>.

Examples:

  • io.conplicity.pgdata.driver: postgresql
  • io.conplicity.pgdata.retention: 7d
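A sketch of parsing such labels into per-volume parameter maps. The exact key layout (io.conplicity.<volume>.<param>) is an assumption inferred from the examples above:

```go
package main

import (
	"fmt"
	"strings"
)

// parsePolicyLabels extracts per-volume backup parameters from container
// labels of the form io.conplicity.<volume>.<param>. Unrelated labels
// are skipped. The key layout is an assumption, not a settled format.
func parsePolicyLabels(labels map[string]string) map[string]map[string]string {
	policies := make(map[string]map[string]string)
	for key, value := range labels {
		rest := strings.TrimPrefix(key, "io.conplicity.")
		if rest == key {
			continue // not a conplicity label
		}
		parts := strings.SplitN(rest, ".", 2)
		if len(parts) != 2 {
			continue
		}
		volume, param := parts[0], parts[1]
		if policies[volume] == nil {
			policies[volume] = make(map[string]string)
		}
		policies[volume][param] = value
	}
	return policies
}

func main() {
	labels := map[string]string{
		"io.conplicity.pgdata.driver":    "postgresql",
		"io.conplicity.pgdata.retention": "7d",
		"com.example.unrelated":          "x",
	}
	fmt.Println(parsePolicyLabels(labels)["pgdata"]["retention"]) // prints "7d"
}
```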

Support CLI flags

In addition to environment variables, support CLI flags for options.

Get real hostname instead of container's

Currently, conplicity uses the container's hostname in the upload path.

For example, on Rancher, containers will typically be named conplicity_1, conplicity_2, etc. However, these containers are not strictly tied to their Docker hosts.

It would be better to use the name of the Docker host instead. This information cannot be obtained from the Docker API, but it can be obtained through an orchestration API, such as Rancher's.

For example, we could have a --rancher-hostname option which would get the hostname using Rancher's metadata API instead of using the container's hostname.

Too many volumes created

It looks like two unnamed volumes are created per backed-up volume. I can't figure out why yet...

Workaround for janitor

It looks like janitor removes the duplicity_cache volume because no duplicity container is running when it performs the cleanup.
It also appears to do it the wrong way, deleting /var/lib/docker/volumes/duplicity_cache without calling docker volume rm duplicity_cache. Docker therefore thinks the named volume still exists and will not recreate it on the next run of docker run -v duplicity_cache:/root/.cache/duplicity camptocamp/duplicity ..., which fails with:

stat /var/lib/docker/volumes/duplicity_cache/_data: no such file or directory
