
docker-s3-volume


Creates a Docker container whose data is restored from, and backed up to, a directory on S3. You can use it to run short-lived processes that work with and persist data to and from S3.

Usage

For the simplest usage, you can just start the data container:

docker run -d --name my-data-container \
           elementar/s3-volume /data s3://mybucket/someprefix

This will download the data from the S3 location you specify into the container's /data directory. When the container shuts down, the data will be synced back to S3.

To use the data from another container, you can use the --volumes-from option:

docker run -it --rm --volumes-from=my-data-container busybox ls -l /data

Configuring a sync interval

When the BACKUP_INTERVAL environment variable is set, a watcher process will sync the /data directory to S3 on that interval. The interval can be given in seconds, minutes, hours, or days (using s, m, h, or d as the suffix):

docker run -d --name my-data-container -e BACKUP_INTERVAL=2m \
           elementar/s3-volume /data s3://mybucket/someprefix

Configuring credentials

If you are running on EC2, IAM role credentials should just work. Otherwise, you can supply credential information using environment variables:

docker run -d --name my-data-container \
           -e AWS_ACCESS_KEY_ID=... -e AWS_SECRET_ACCESS_KEY=... \
           elementar/s3-volume /data s3://mybucket/someprefix

Any environment variable available to the aws-cli command can be used. See http://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html for more information.
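
For example, you can pin the CLI to a specific region with AWS_DEFAULT_REGION (us-east-1 below is just a placeholder value):

docker run -d --name my-data-container \
           -e AWS_DEFAULT_REGION=us-east-1 \
           elementar/s3-volume /data s3://mybucket/someprefix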

Configuring an endpoint URL

If you are using an S3-compatible service (such as Oracle OCI Object Storage), you may want to set the service's endpoint URL:

docker run -d --name my-data-container -e ENDPOINT_URL=... \
           elementar/s3-volume /data s3://mybucket/someprefix

Forcing a sync

A final sync will always be performed on container shutdown. A sync can be forced by sending the container the USR1 signal:

docker kill --signal=USR1 my-data-container
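
Because the final sync runs on shutdown, stopping the container gracefully (docker stop sends SIGTERM) should also flush the latest data to S3 before the container exits:

docker stop my-data-container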

Forcing a restoration

The first time the container is run, it fetches the contents of the S3 location to initialize the /data directory. If you want to force a restore from S3 again, run the container with the --force-restore option:

docker run -d --name my-data-container \
           elementar/s3-volume --force-restore /data s3://mybucket/someprefix

Deletion and sync

By default, files deleted from your local file system are also deleted remotely on the next sync. If you wish to turn this off, set the S3_SYNC_FLAGS environment variable to an empty string:

docker run -d -e S3_SYNC_FLAGS="" elementar/s3-volume /data s3://mybucket/someprefix
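
Since an empty S3_SYNC_FLAGS disables deletion, the default value evidently includes the --delete flag of aws s3 sync, and the variable can carry other sync flags as well. For example, to keep deletion enabled while skipping temporary files (the exclude pattern is just an illustration):

docker run -d -e S3_SYNC_FLAGS="--delete --exclude '*.tmp'" \
           elementar/s3-volume /data s3://mybucket/someprefix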

Using Compose and named volumes

Most of the time, you will use this image to sync data for another container. You can use docker-compose for that:

# docker-compose.yaml
version: "2"

volumes:
  s3data:
    driver: local

services:
  s3vol:
    image: elementar/s3-volume
    command: /data s3://mybucket/someprefix
    volumes:
      - s3data:/data
  db:
    image: postgres
    volumes:
      - s3data:/var/lib/postgresql/data
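
Bring the stack up as usual; on first start, the s3vol service restores /data from S3 before syncing changes back:

docker-compose up -d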

Contributing

  1. Fork it!
  2. Create your feature branch: git checkout -b my-new-feature
  3. Commit your changes: git commit -am 'Add some feature'
  4. Push to the branch: git push origin my-new-feature
  5. Submit a pull request :D

Credits

  • Original Developer - Dave Newman (@whatupdave)
  • Current Maintainer - Fábio Batista (@fabiob)

License

This repository is released under the MIT license.

docker-s3-volume's People

Contributors

@chrisns, @fabiob, @jmahowald, @tlex


docker-s3-volume's Issues

Periodically check for changes and sync with S3

It seems this could be far more useful if it synced periodically, so that if the host server dies unexpectedly, you still have most, if not all, of the data.

If you sync fast enough, you could also use many of these on different nodes to sync data across an auto-scaling group.

Health check to indicate volume is ready

I'd like to know when my container is done booting up (i.e., has pulled all the files in my S3 bucket). I'm imagining a health check I could use in my docker-compose file. Is there something I can use for this?

(Thanks for the project! This is really a lifesaver for me.)

SoftLayer IBM S3

Is it possible to add support for it? The only missing environment variable is ENDPOINT.


Handle error case when a backup fails

Currently the process dies:

backup /data => s3://my-bucket/data1
ERROR: [Errno -2] Name or service not known
./run.sh: backup failed

It should handle this and continue running.
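
One possible shape for that guard, assuming run.sh drives the backup loop (the variable names below are hypothetical, not the project's actual script):

# hypothetical watcher loop: log a failed sync and keep going instead of exiting
while true; do
  sleep "$BACKUP_INTERVAL"
  if ! aws s3 sync /data "$S3_PATH"; then
    echo "backup failed; retrying on next interval" >&2
  fi
done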

Periodic restore?

Can you add a flag for periodic restore?

I'm using the data volume as a read-only volume, so I don't mind blowing it away and restoring from the S3 bucket on a schedule, e.g. every 5 minutes if there are changes.

Potentially add a better guide on how to use this?

I'm not entirely sure what to do with this, or what exactly to expect. Does this basically turn AWS S3 into Dropbox? Because if not, then I misunderstood what this was for.
