
duplicacy-util's Introduction

duplicacy-util: Schedule and run duplicacy via CLI


This repository contains utilities to run Duplicacy on any platform supported by Duplicacy.

What is duplicacy-util?

In short, duplicacy-util is a utility to run Duplicacy backups. While there are a number of other tools available to do similar things, duplicacy-util has a number of advantages:

  • It is completely portable. It's trivial to run on Windows, Mac OS/X, Linux, or any other platform supported by the Go language. You schedule it, and duplicacy-util will perform the backups. Note that Duplicacy itself is written in Go, so if you can use Duplicacy, you can use duplicacy-util.
  • It is self-contained. Copy a single executable, and duplicacy-util is fully functional. It is easy to install and easy to upgrade, and you don't need to install packages to make it work properly.
  • It is "set and forget". I use duplicacy-util to send E-Mail upon completion. Then I run scripting on my E-Mail server (I use gmail) to move successful backups to the trash. This means that I can review backups at any time but, if I don't, the mail messages are deleted after 30 days. If any backup fails, it's left in your inbox for you to review. See management of E-Mail messages for details.
  • It is completely configurable with configuration files. You can have one backup that is backed up to a single server while other backups are backed up to multiple servers.
  • It is designed to be easy on resources. For example, a configurable number of complete logs is kept, but older logs are compressed to save space. Very old logs are aged out and deleted.
  • duplicacy-util won't step on itself. You can run multiple backups concurrently, but duplicacy-util will skip a backup if it's already backing up a specific repository. Thus, you can schedule jobs as often as you would like knowing that if a backup of a repository is still running, a second job won't try to back up the same data again.

Note that duplicacy-util is a work in progress. The short term to-do list includes:

  • Create a checkpoint mechanism. If Duplicacy fails for whatever reason, then duplicacy-util should resume the backup where it left off, even if you back up to many different storages.
  • While designed for my usage, I would very much like feedback to see what others would like. If a new feature makes sense, I'm happy to add it.

Build Instructions

Note that binaries for common platforms are provided. See releases on GitHub for the distributions. However, if you wish to build duplicacy-util yourself, follow instructions in this section.

Building duplicacy-util from source is easy. First install Go itself. Once Go is installed and $GOPATH is set up, run the following commands from the command line to get dependencies:

go get github.com/djherbis/times
go get github.com/mitchellh/go-homedir
go get github.com/spf13/viper
go get github.com/gofrs/flock
go get gopkg.in/gomail.v2

Finally, download duplicacy-util itself:

cd $GOPATH/src
git clone https://github.com/jeffaco/duplicacy-util.git

Once Go is installed and dependencies are downloaded, to build, do:

cd $GOPATH/src/duplicacy-util
go build

This will generate a duplicacy-util binary in the current directory with the appropriate file extension for your platform (i.e. duplicacy-util for Mac OS/X or Linux, or duplicacy-util.exe for Windows).

How do you configure duplicacy-util?

duplicacy-util works off of two (or more) configuration files:

  • A global configuration file (that controls common settings), and
  • A repository-specific file to control how the repository should be backed up.

You can have multiple repository-specific configuration files (if you have many repositories to back up).

Configuration file formats are very flexible. Configuration files can be in JSON, TOML, YAML, HCL, or Java properties format (configuration files are managed with Viper). All examples of configuration files will be in YAML, but you are free to use a format of your choosing.

Note that the extension of configuration files can vary based on the format of the file. Sample configuration files are YAML files, and thus have a YAML extension. Change the extension if you wish to use JSON or some other format.

By default, duplicacy-util stores all of its files in its storage directory, $HOME/.duplicacy-util. Note that, in this document, $HOME refers to the user's home directory (~/ on Mac OS/X and Linux, or C:\Users\<username> on Windows).

The storage directory is determined in a variety of ways:

  1. First and foremost, if the -sd parameter is specified, it defines the location of the storage directory, and duplicacy-util files are stored directly in that directory (see the example after this list). In this way, the directory where duplicacy-util stores its files can be named anything.

  2. If -sd is not specified on the command line, then the value of the environment variable "$HOME" is used as the location in which to look for the .duplicacy-util directory.

  3. If environment variable "$HOME" is unmodified (or not normally defined on your system), then the .duplicacy-util directory is expected to exist in the user's home directory.
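
For example, to keep all duplicacy-util files under /opt/duplicacy-util rather than under $HOME, a run might look like this (the directory path is only an illustration):

duplicacy-util -sd /opt/duplicacy-util -f quicken -a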

Global configuration file

The global configuration file is called duplicacy-util.yaml, and is searched for in the storage directory.

The following fields are checked in the global configuration file:

Field Name Purpose Default Value
duplicacypath Path for the Duplicacy binary program "duplicacy" on your default path ($PATH)
lockdirectory Directory where temporary lock files are stored Storage directory, or $HOME/.duplicacy-util
logdirectory Directory where log files are stored Storage directory, or $HOME/.duplicacy-util/log
logfilecount Number of historical log files that should be stored 5
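
As an illustration, a minimal global configuration file might look like the following (the paths shown are only assumptions for a Linux or Mac OS/X system; adjust them for your environment):

duplicacypath: /usr/local/bin/duplicacy
lockdirectory: /home/jeff/.duplicacy-util
logdirectory: /home/jeff/.duplicacy-util/log
logfilecount: 5
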
Notifications

Duplicacy-util supports notifying you when backups start, are skipped (if already running), succeed, and fail. Unless you plan to run duplicacy-util only interactively, it's strongly recommended to configure notifications.

For now only email notifications are supported, but more notification channels will be implemented. The following config snippet shows how to subscribe to specific notifications:

notifications:
  onStart: []
  onSkip: ['email']
  onSuccess: ['email']
  onFailure: ['email']
Email notifications
Field Name Purpose Default
fromAddress From address (i.e. [email protected]) None
toAddress To address (i.e. [email protected]) None
serverHostname SMTP server (i.e. smtp.gmail.com) None
serverPort Port of SMTP server (i.e. 465 or 587) None
authUsername Username for authentication with SMTP server None
authPassword Password for authentication with SMTP server None
acceptInsecureCerts Accept insecure or self-signed server certificates false

Notes on email fields:

  • If you don't wish to store your email authentication password in the global configuration file, you can set environment variable DU_EMAIL_AUTH_PASSWORD to your email server password. If this environment variable is not defined, then we'll check the global configuration file for the password (see the example after these notes).
  • If you are using a local email server, you are likely using a self-signed certificate. If that's the case, you should set acceptInsecureCerts to true so duplicacy-util won't reject the server certificate.
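
For example, assuming a POSIX shell, the E-Mail password can be supplied via the environment rather than stored in the configuration file (the password value here is a placeholder):

export DU_EMAIL_AUTH_PASSWORD='your-app-password'
duplicacy-util -f quicken -a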

Here is an example of how to set up email notifications:

notifications:
  onStart: []
  onSkip: ['email']
  onSuccess: ['email']
  onFailure: ['email']

email:
  fromAddress: "Donald Duck <[email protected]>"
  toAddress: "Donald Duck <[email protected]>"
  serverHostname: smtp.gmail.com
  serverPort: 465
  authUsername: [email protected]
  authPassword: gaozqlwbztypagwt

E-Mail subjects from duplicacy-util will be of the following format:

Notification Subject Line
Start duplicacy-util: Backup started for configuration <config-name>
Skip duplicacy-util: Backup results for configuration <config-name> (skipped)
Success duplicacy-util: Backup results for configuration <config-name> (success)
Failure duplicacy-util: Backup results for configuration <config-name> (FAILURE)

You can filter on the subject line to direct the E-Mail appropriately to a folder of your choice. See Management of E-Mail Messages for E-Mail configuration hints.

Local configuration file

The local configuration file (or repository configuration file) defines how to back up a specific repository. This file must be specified on the command line (discussed later). The repository-specific configuration file may take lists of storages if you back up to multiple cloud providers. In the simple case, a configuration file can be short, such as this:

repository: /Volumes/Quicken

storage:
    -   name: b2

prune:
    -   storage: b2
        keep: "0:365 30:180 7:30 1:7"

check:
    -   storage: b2

This configuration shows that:

  • You have a repository, stored in /Volumes/Quicken,
  • That is backed up to storage named b2,
  • You should prune storage b2 with 0:365 30:180 7:30 1:7. See the prune documentation for more information on how to specify the keep specification.
  • When doing a check operation, you should check revisions in storage b2.

You might wonder why the same storage name is specified multiple times. The reason becomes clear if you back up to multiple cloud providers.

If you back up to multiple cloud providers, the configuration file may be more involved:

repository: /Volumes/Quicken

storage:
    -   name: b2
        threads: 10
    -   name: azure-direct
        threads: 5

copy:
    -   from: b2
        to: azure
        threads: 10

prune:
    -   storage: b2
        keep: "0:365 30:180 7:30 1:7"
    -   storage: azure
        keep: "0:365 30:180 7:30 1:7"

check:
    -   storage: b2
        all: true
    -   storage: azure
        all: true

The new concept here is the copy section. This defines storages that should be copied from one to another, using a pseudo storage name (azure-direct) to avoid downloading a lot of data from b2. In this example, we'll back up to both b2 and azure-direct, but then we'll use a duplicacy copy operation to be sure that the two storages are identical when the backup is complete.

Because there are multiple storages involved, we want to prune each storage and check each storage for consistency.

A repository configuration file consists of a few repository-wide settings and sections that define operations. The repository-wide settings are:

Field Name Purpose Default Value
repository Location of the repository to back up None

The repository field normally points to the root of the repository to back up, and is the location where duplicacy itself stores its configuration directory (.duplicacy).

You may change the location of duplicacy's repository configuration directory, .duplicacy (using the -pref-dir and -repository options when creating the repository with duplicacy). If you do so, then the repository field above should refer to the location of duplicacy's .duplicacy directory.

Sections in the repository configuration files consist of:

Section Name Purpose
storage Storage names to back up for duplicacy backup operations*
copy List of storage from-to pairs for duplicacy copy operations
prune List of storage names to prune for duplicacy prune operations*
check List of storage names to check for duplicacy check operations*

Note that * denotes that this section is mandatory and MUST be specified in the configuration file.

The storage list contains the storages to back up to. Note that the list may be as long as required; duplicacy-util will continue loading storages until no additional storages are found. Each storage entry is introduced with a - character (to signify a new list element). This is consistent with all sections in the repository configuration file.

Fields in the storage section are:

Field Name Purpose Required Default Value
name Storage name to back up Yes None
threads Number of threads to use for backup No 1
vss Enable Volume Shadow Copy service No false
vssTimeout The timeout in seconds to wait for the Volume Shadow Copy operation to complete No None
quote Specify additional duplicacy parameters (for advanced users only) No None
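
For example, a Windows repository that should be backed up with the Volume Shadow Copy service enabled might use a storage section like this (the storage name, thread count, and timeout are only illustrative):

storage:
    -   name: b2
        threads: 10
        vss: true
        vssTimeout: 360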

Fields in the copy section (if one exists), are:

Field Name Purpose Required Default Value
from Storage name to copy from Yes None
to Storage name to copy to Yes None
threads Number of threads to use for copy No 1
quote Specify additional duplicacy parameters (for advanced users only) No None

Fields in the prune section are:

Field Name Purpose Required Default Value
storage Storage name to prune Yes None
keep Retention specification Yes None
threads Number of threads to use (requires duplicacy CLI v2.1.1 or later) No 1
all Should all snapshot IDs be pruned No true
quote Specify additional duplicacy parameters (for advanced users only) No None

Note that by default pruning is done for all snapshot IDs. If you wish to only prune particular snapshots, you should specify all: false and use the quote option to specify the snapshot ID to prune, like the following:

prune:
    -   storage: b2
        keep: "0:365 30:180 7:30 1:7"
        all: false
        quote: "-id mysnapshot"

Finally, fields in the check section are:

Field Name Purpose Required Default Value
storage Storage name to check Yes None
all Should snapshots from all snapshot IDs be checked No false
quote Specify additional duplicacy parameters (for advanced users only) No None

Note that all sections support a "quote" option. This is for advanced usages only, and you should only use this in conjunction with -v -d (verbose debug). This allows you to specify additional parameters to pass to duplicacy commands. For example, if you needed the duplicacy check command to specify the -fossils -resurrect options, you could do so by including something like:

quote: "-fossils -resurrect"

in the check section of the backup configuration file.
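
Putting that together, a check section that passes those options might look like the following sketch (the storage name matches the earlier examples):

check:
    -   storage: b2
        all: true
        quote: "-fossils -resurrect"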

Once you have the configuration files set up, running duplicacy-util is simple. Just use a command like:

duplicacy-util -f quicken -a

This says: back up the repository defined in quicken.yaml, performing all operations (backup/copy, prune, and check).

Output from this command is similar to:

17:58:25 Using global config: /Users/jeff/.duplicacy-util/duplicacy-util.yaml
17:58:25 Using config file:   /Users/jeff/.duplicacy-util/quicken.yaml
17:58:25 duplicacy-util starting, version: <dev>, Git Hash: <unknown>
17:58:25 Rotating log files
17:58:25 Beginning backup on 07-17-2018 17:58:25
17:58:25 Backing up to storage b2 with 10 threads
17:58:32   Files: 345 total, 823,165K bytes; 1 new, 7,964K bytes
17:58:32   All chunks: 150 total, 890,186K bytes; 5 new, 8,086K bytes, 3,092K bytes uploaded
17:58:32   Duration: 7 seconds
17:58:32 Backing up to storage azure-direct with 5 threads
17:58:33   Files: 345 total, 823,165K bytes; 1 new, 7,964K bytes
17:58:33   All chunks: 150 total, 889,922K bytes; 5 new, 8,086K bytes, 3,092K bytes uploaded
17:58:33   Duration: 1 second
17:58:33 Copying from storage b2 to storage azure with 10 threads
17:58:37   Copy complete, 110 total chunks, 3 chunks copied, 107 skipped
17:58:37   Duration: 4 seconds
17:58:37 Pruning storage b2
17:58:44 Pruning storage azure
17:58:45 Checking storage b2
17:58:47 Checking storage azure
17:58:48 Operations completed in 23 seconds

A complete log of the backup is saved in the logdirectory setting in the global configuration file.

Command Line Usage

The best way to get command line usage is to run duplicacy-util with the -h option, as follows:

duplicacy-util -h

This will generate output similar to:

Usage of ./duplicacy-util:
  -a    Perform all duplicacy operations (backup, copy, purge, check)
  -b    Perform duplicacy backup operation (deprecated; use -backup -copy)
  -backup
        Perform duplicacy backup operation
  -c    Perform duplicacy check operation (deprecated; use -check)
  -check
        Perform duplicacy check operation
  -copy
        Perform duplicacy copy operation
  -d    Enable debug output (implies verbose)
  -f string
        Configuration file for storage definitions (must be specified)
  -g string
        Global configuration file name
  -m    (Deprecated) Send E-Mail with results of operations (implies quiet)
  -p    Perform duplicacy prune operation (deprecated; use -prune)
  -prune
        Perform duplicacy prune operation
  -q    Quiet operations (generate output only in case of error)
  -sd string
        Full path to storage directory for configuration/log files
  -tm
        (Deprecated: Use -tn instead) Send a test message via E-Mail
  -tn
        Test notifications
  -v    Enable verbose output
  -version
        Display version number
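
For example, to run only the prune and check operations (rather than -a for everything) against the quicken configuration with verbose output, a command such as the following could be used:

duplicacy-util -f quicken -prune -check -v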

Exit codes from duplicacy-util are as follows:

Exit Code/Range Meaning
0 Success
1-2 Command line errors
500 Operation from duplicacy command failed
200-201 Run skipped due to existing job already running

In the event of an error, a notification will be sent with details of the error. Note that exit codes 200-201 are not considered fatal from a notification perspective; the notification simply indicates that the backup was skipped.
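
If you wrap duplicacy-util in a script of your own, the exit code can be inspected directly. A minimal sketch, assuming a POSIX shell and an install path of /usr/local/bin:

/usr/local/bin/duplicacy-util -f quicken -a -q
status=$?
if [ "$status" -ne 0 ]; then
    echo "duplicacy-util failed with exit code $status" >&2
fi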

Getting started with duplicacy-util

The duplicacy-util program has no knowledge of Duplicacy repository passwords. As a result, if Duplicacy prompts for a password, duplicacy-util won't be able to respond to the prompt, and the backup will fail (with suitable output in the log file).

To set up the backup for initial use, there is documentation that @mattjm worked up that is pretty good. That said, these are the basic steps I followed to initialize backing up Quicken, one of my repositories:

duplicacy init -e -storage-name b2 quicken b2://<bucket-name>
duplicacy add -e -copy b2 azure quicken azure://<bucket-name>      # Copy
duplicacy add -e azure-direct quicken-direct azure:<bucket-name>   # Direct

duplicacy backup -storage b2 -stats -threads 10
duplicacy backup -storage azure-direct -stats -threads 5
duplicacy copy -from b2 -to azure -stats -threads 10

This initialized the repository and set it up for backup to both Backblaze and Azure. It also performed the first backup, taking care of final password prompts. After this, duplicacy-util should function properly, and Duplicacy should not prompt for passwords.

You should study the Duplicacy Wiki carefully; the documentation is quite good. It explains how Duplicacy works and the various commands that Duplicacy supports.

Management of E-Mail Messages

NOTE: This discussion is specific to Gmail, but if you are using a different mail server, you can almost certainly adapt these ideas to your specific scenario.

In order to send E-Mail notifications, you must first have configured a number of fields in the Global configuration file. These fields depend on what E-Mail server you are using. I use Google's gmail service, and will define my usage here.

It is recommended that you use an application specific generated password that can be generated in the Gmail Security Center. This works around two-factor authentication or other issues that may create problems. Note that the password stored in the global configuration file is not encrypted at this time. On a shared system, you should set permissions of this file appropriately, or use environment variable DU_EMAIL_AUTH_PASSWORD to override the value stored in the global configuration file.
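
On Linux or Mac OS/X, restricting permissions on the global configuration file might look like this (assuming the default storage directory):

chmod 600 $HOME/.duplicacy-util/duplicacy-util.yaml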

Once you set up the E-Mail configuration appropriately, you can test it with a command like: ./duplicacy-util -tn. This will trigger a failure notification for all configured notification channels (e.g. E-Mail).

It's recommended that you use Gmail filtering so that failed backups are visible in your inbox while successful backups are set aside for deletion. To do this, first create a folder named Backup Logs. After the folder is created, create a filter rule as follows:

Matches: from:([email protected]) to:([email protected]) subject:(duplicacy-util: Backup results for AND (success))
Do this: Skip Inbox, Mark as read, Apply label "Backup Logs", Never send it to Spam

After this is done, generate a test notification and verify that you have a failed test message in your inbox and a successful test message in Backup Logs.

To catch backups that are not running, and to clean up successful backups from the Backup Logs folder, it is recommended that you create a small Google Apps Script to do these actions. In this way, if you do nothing, successful backup logs are deleted after 30 days automatically, and failures go to your inbox, where you can see them and act upon them.

Here is one such Google Apps Script named duplicacy-util.gs:

function duplicacy_utils() {
  var threads = GmailApp.search('label:"Backup Logs"');
  var foundBackup = 0;

  // Backups from duplicacy-util with no errors get filtered to label "Backup Logs" via Gmail
  // settings. This makes them easy for us to find and iterate over.
  //
  // Backups are scheduled at least as often as this script runs. Thus, if nothing was run when this
  // script runs, then we get active notification that something is wrong with the backup process.
  //
  // Naming conventions with duplicacy-util are formatted like:
  //   "duplicacy-util: Backup results for configuration test (success)" (for successful backups), or
  //   "duplicacy-util: Backup results for configuration test (FAILURE)" (for failed backups)
  // Check to see that it starts with "duplicacy-util..." and ends with " (success)", and if so, count
  // the message.

  for (var i = 0; i < threads.length; i++) {
    var subject = threads[i].getFirstMessageSubject();
    if (subject.indexOf('duplicacy-util: Backup results for configuration') == 0 && subject.indexOf(' (success)') != -1)
    {
      threads[i].moveToTrash();
      foundBackup++;
    }
  }

  if (foundBackup == 0)
  {
    GmailApp.sendEmail('<user>@<domain>.com',
                       'WARNING: No duplicacy-util backup logs files received',
                       'Please investigate backup process!');
  }
}

Be certain to replace <user> with your Gmail username and <domain> with your Gmail domain in the script above.

After the script is set up, you can set up Google to run the script automatically on any schedule you wish.

Scheduling duplicacy-util to Run Automatically

Scheduling duplicacy-util to run backups automatically (and e-mail the results) finishes the job. Backups now run unattended, relieving you of the chore of running them yourself.

Backup scheduling differs by operating system. I provide hints here, although there are lots of different ways to schedule jobs automatically.

Scheduling for Linux

Linux has a built-in rich scheduler, cron. The cron utility can run jobs as a user or as root; the choice is yours. These instructions assume that you will be running jobs as your user since you'll generally be backing up your user files.

There's a lot of help available for cron. Wikipedia help is a good start for the average user. For purposes of example, you can do something like the following:

crontab -l > crontab
echo "0 1 * * * /Users/jeff/Applications/duplicacy-util -f quicken -a -m -q" >> crontab
crontab < crontab

The first command will dump your existing crontab entries to a file named crontab. This file will likely be empty if you haven't used crontab before.

The second command will add an entry to your crontab file: Run duplicacy-util for Quicken, e-mailing results, at 1:00 AM every morning. See Wikipedia for help in understanding the time format.

Since crontab stores entries internally, the final command will reload your saved crontab entries from your private crontab file.

Scheduling for Mac OS/X

On recent versions of Mac OS/X (macOS High Sierra as of the time of this writing), cron ships with Mac OS/X. So that is an option.

However, on Mac OS/X, the preferred way to add a timed job is to use launchd. Each launchd job is described by a separate file. This means that you can manage launchd timed jobs by simply adding or removing a file.

There are two ways to create these files:

  1. By hand; the file format is documented in launchd documentation, or
  2. By using an automated tool. Lingon is one such tool that makes the job of creating launchd files very simple. While Lingon is commercial, it's very inexpensive. I created the quicken job in seconds using Lingon:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
        <key>EnvironmentVariables</key>
        <dict>
                <key>PATH</key>
                <string>/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin:/opt/X11/bin:/usr/local/sbin</string>
        </dict>
        <key>Label</key>
        <string>com.duplicacy-util.quicken</string>
        <key>ProgramArguments</key>
        <array>
                <string>/Users/jeff/local/go/bin/duplicacy-util</string>
                <string>-f</string>
                <string>quicken</string>
                <string>-a</string>
                <string>-m</string>
                <string>-q</string>
        </array>
        <key>RunAtLoad</key>
        <false/>
        <key>StartCalendarInterval</key>
        <array>
                <dict>
                        <key>Hour</key>
                        <integer>3</integer>
                        <key>Minute</key>
                        <integer>0</integer>
                </dict>
                <dict>
                        <key>Hour</key>
                        <integer>15</integer>
                        <key>Minute</key>
                        <integer>0</integer>
                </dict>
        </array>
</dict>
</plist>

This plist file will run job quicken twice a day: at 3:00 AM and at 3:00 PM, mailing the results of the backup job.

Scheduling for Windows

Windows includes a rich built-in scheduler called the Windows Task Scheduler. The Windows Task Scheduler is a GUI (graphical) program designed to make scheduling repetitive tasks easy.

You can find help in numerous forms with a WWW search, including articles and YouTube videos stepping you through the process.
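
If you prefer the command line over the GUI, a scheduled task can also be created with the schtasks utility. The following is only a sketch (the task name, install path, and schedule are assumptions):

schtasks /Create /TN "duplicacy-util quicken" /TR "C:\Tools\duplicacy-util.exe -f quicken -a -q" /SC DAILY /ST 01:00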

duplicacy-util's People

Contributors

boedy, jeffaco, jesseshieh, mbafford, plasticrake, wackazong


duplicacy-util's Issues

Support for on-site storage copy commands

Is it possible to use duplicacy-util to perform the actions for copying backups from onsite storage to a cloud provider? From what I can tell, you can't currently run a copy command without also running a backup to a specified storage. See https://forum.duplicacy.com/t/back-up-to-multiple-storages/1075:

It is also possible to run the copy command directly on your onsite storage server. All you need to do is to create a dummy repository there, and then initialize it with the same default storage (but as a local disk) and add the same additional storage:

log in to your local storage server
mkdir -p \path\to\dummy\repository
cd \path\to\dummy\repository
duplicacy init repository_id onsite_storage_url
duplicacy add -copy default offsite_storage --bit-identical repository_id offsite_storage_url
duplicacy copy -from default -to offsite_storage

This not only frees your work computer from running the copy command, but also speeds up the copy process, since now Duplicacy can read chunks from a local disk

(local_server -> router -> offsite_storage)
instead of over the network
(local server -> router -> your computer -> router -> offsite_storage).

Can not run -copy separately

Now having quote and pruning options (again, thanks a lot for the fast and kind response), I have created appropriate configs with rich options and tried to use them.
My preference is not to use duplicacy-util -a, but separately use backup, copy, check and prune operations - I would like to call prune before copy to decrease data transfer overhead and also to call these operations less frequently than backups.
I have found that -copy command actually does not work for me:

PS D:\Tools> D:\Tools\duplicacy\duplicacy-util.exe -sd D:\Tools\duplicacy\duplicacy-util\ -f de_tools.yaml -copy
21:04:20 Using global config: D:\Tools\duplicacy\duplicacy-util\duplicacy-util.yaml
21:04:20 Using config file:   D:\Tools\duplicacy\duplicacy-util\de_tools.yaml
Error: No operations to perform (specify -b, -p, -c, or -a)

If I call duplicacy-util -a for the same config, it does backup, copy, check and prune normally.

Duplicacy still wants password

Hey, following your guide to get this all set up for automation and everything. Just finished doing the first backup to local and a copy to B2, and set up the config for this repository. But when I go to run the file to test it, I get errors that Duplicacy is asking for the password, and then the backup fails as a result. In your guide you said that the initialization and running the first backups takes care of the final checks, and then the passwords won't be required, which allows the program to work. Am I missing something here, or does this just straight up not work with encrypted backups?

pruning remote repositories

This is probably a long shot, at least in my understanding it does not fit to current model, but might be interesting and helpful for some.
In my environment, multiple clients upload their backups to two sftp storages, but they do not prune, check or copy anything. There is one "control" machine which prunes, checks and copies everything to another cloud storage. This control machine has only one repository, but manages about 20 (some with different retention policies) on the storages.
Right now duplicacy-util requires a local config file in order to do manipulations on a given repository. What would help me is to have a local config file which refers to a repository existing on a given machine but contains pruning instructions for repositories on a remote storage, something like

repository: /local/one

storage:
    1:
        name: b2

prune:
    1:
        storage: b2
        remote_repo_1: "0:365 30:180 7:30 1:7"
        remote_repo_2: "0:180 7:30 1:7"
        remote_repo_3: "0:1440 30:30 1:7"
...

Consider moving from Travis-CI to another CI provider

Travis-CI is slowly dying. After the Travis acquisition followed by the significant layoffs, it's prudent to consider a different CI provider.

Two choices are either GitLab CI or CircleCI:

GitLab CI

GitLab CI: GitLab CI provides 50k build minutes/month. (Note that the page says that the GitHub integration will be moved from the GitLab.com Free plan to the Silver (paid) plan, but I think that open source projects have all Gold features available to them by default; I need to get clarification.) GitLab CI is a serious contender.

CircleCI

CircleCI: CircleCI offers 4 Linux containers with no build limits for open source projects. I've used CircleCI and it was fine, though that was a few years ago.

Given that I have prior experience with CircleCI, this is a strong contender unless someone else wants to do the work and has a stronger preference.

log rotation renaming

A small issue: it seems like currently the log rotator first renames the existing config_name.log to config_name.log.1.gz and then puts it into the archive.
I have 10 archived logs de_tools.log.1.gz, de_tools.log.2.gz, ..., de_tools.log.10.gz. Inside every archive, there is always a file named 'de_tools.log.1.gz' which actually is not an archive but just a plain text log file.

Email Notifications Question/Issue

Hi,

Love this tool by the way!

I'm not sure if this is an issue or just simply a question. I've followed the instructions on the README, and everything seems to work as expected except for email notifications. I noticed that the -m flag is now deprecated, but I'm not sure what this means exactly for email notifications. Should they still be working (automatically, I guess)?

If they should be working, then I'm having an issue with them for some reason. I have configured my email and even successfully tested it using duplicacy-util -tn, but when I execute the command duplicacy-util -f config_file -a -q, I do not receive any emails when the process is finished.

Am I missing something here?

Thanks!

`-tn` always returns exit code 1

I'm writing a script to test if the notifications are set up and working. However, when running duplicacy-util -tn it always returns an exit code of 1. Is this by design? It would be nice if it could return an exit code of 0 if there were no issues. For now I'll write my script to see if stderr is empty, which should do the same thing.

Error: no storage locations defined in configuration

Hi,

This might be related to #19 although I'm not a developer so I'm not sure.

On Windows 10 I run into the following problem when running duplicacy-util:

C:\Users\user>duplicacy-util -f Laptop-MC -a

20:38:53 Using global config: C:\Users\user\.duplicacy-util\duplicacy-util.yaml
Error: no storage locations defined in configuration
Error: no prune locations defined in configuration
Error: no check locations defined in configuration

Here's some extra info:

duplicacy-util -version

Version: 1.4, Git Hash: cad0ac7

C:\Users\user\.duplicacy-util\Laptop-MC.yaml

repository: C:\Users\user

storage:
    -   name: rp3-32

prune:
    -   storage: rp3-32
        keep: "0:365 30:180 7:30 1:7"

check:
    -   storage: rp3-32

C:\Users\user\.duplicacy\preferences

[
    {
        "name": "rp3-32",
        "id": "Laptop-MC",
        "repository": "",
        "storage": "sftp://user@host//mnt/data/backup/Laptop-MC",
        "encrypted": true,
        "no_backup": false,
        "no_restore": false,
        "no_save_password": false,
        "nobackup_file": "",
        "keys": null
    }
]

A similar setup on my Linux laptop using the old scheme with numbers works well.

repository: /home/user

storage:
           1:
                          name: rp3-32

prune:
           1:
                         storage: rp3-32
                         keep: "0:365 30:180 7:30 1:7"

check:
            1:
                       storage: rp3-32

But if I set this up under Windows 10 it doesn't work.
Am I doing something wrong or have I stumbled upon a bug?

Make prune in the repository config file not mandatory

According to the README, prune must be defined in every repository config, but this goes against the idea of both duplicacy and this tool of managing multiple repositories on the same storage location (i.e. different snapshot-ids). The documentation of duplicacy recommends only running prune from one repository with the -all option instead of letting each snapshot on the storage be pruned separately. This is even supported by this tool, as all defaults to true, but running multiple prune --all operations one after the other is superfluous.

I currently want to back up 3 different repositories to the same storage location and thought I would just define the prune -all for one, but I can't do that because it forces me to define a prune section for each of my repository config files.

HTML email content not escaped

There are several places where text is included in the HTML content without escaping it, which can cause text to seemingly disappear. For example:

21:05:22 duplicacy-util starting, version: <dev>, Git Hash: <unknown><br>

I have next to zero Go experience, but I do see that the standard library includes an HTML templating package; is there a reason why you don't use it?

Thanks for duplicacy-util and the inspiration to use Gmail for backup log aggregation!

Error: missing mandatory check field: 2.storage

Attempting to run duplicacy-util on Windows Server 2012. I consistently get the following:

PS C:\users\daniel> duplicacy-util -f serverfolders
19:21:47 Using global config: C:\Users\daniel\.duplicacy-util\duplicacy-util.yaml
Error: missing mandatory check field: 2.storage

I have also run the command explicitly providing the -sd flag, with the same result. The -v and -d flags provide no further information.

The local config file is in the same directory as the global one, and is named serverfolders.yaml. It contains:

repository: D:/ServerFolders

storage:
    - name: usbdrive
      vss: true
      vss-timeout: 360

copy:
    - from: usbdrive
      to: dreamobjects
      threads: 20

prune:
    - storage: usbdrive
      keep: "0:365 30:180 7:30 1:7"
    - storage: dreamobjects
      keep: "0:365 30:180 7:30 1:7"

check:
    - storage: usbdrive
      all: true
    - storage: dreamobjects
    - all: true

I have tried many variations: repository name (serverfolders), repository location (D:\ServerFolders), storage name (usbdrive), storage location (h:\duplicacy_backup), etc.

The repository has been initialized to use a directory on an external hard drive as storage, and to copy to cloud storage. I have successfully run the backup and copy operations directly with duplicacy several times, but I can't get duplicacy-util started.

Have I misunderstood something about the configuration options?

Make `storage`, `prune`, and `check` optional

I have a use case to only run copy on a repository and never backup. However the storage section is required in the configuration file. I'd like to be able to safely run with -a, but I'd have to make sure to always only run with -copy -check -prune.

I also have a use case to never run prune on certain repositories. The reason for skipping prune is that the docs say that only one device should run prune per storage, so I'd like to only have one machine running prune.

     backup, check        copy, check, prune (to offsite)
                                check, prune (local)
(local source devices) -> (LAN SFTP storage server) -> (offsite cloud storage)

Would you be open to this utility supporting this design via configuration files? Thank you!

Attach log to email?

I have setup duplicacy-util on a couple of remote systems that I do not have easy access to. It would be really nice if the log file was attached to both the success and error email.

Enter the path of the private key file:No private key file is provided

Running duplicacy-util version 1.5 & duplicacy 2.1.1 on Linux Mint 18.3 (based on Ubuntu 16.04)

I'm testing automating my back-ups by doing a

crontab -e

and adding

* * * * * /etc/cron.daily/duplicacy-util
so it's running the duplicacy-util entry in cron.daily every minute.

duplicacy-util in cron.daily looks like this

#!/bin/sh
/usr/local/bin/duplicacy-util -f LaptopHomefolder -a

Duplicacy-util logfile after being invoked like this:

19:30:01 Beginning backup on 10-19-2018 19:30:01
19:30:01 ######################################################################
19:30:01 Backing up to storage rp3-32 with 1 threads
19:30:01 Storage set to sftp://user@localserver:portnumber//mnt/data/backup/laptop
19:30:01 Enter the path of the private key file:No private key file is provided
19:30:01 Failed to load the SFTP storage at sftp://user@localserver:portnumber//mnt/data/backup/laptop: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
19:30:01 Error executing command: exit status 100

However, running duplicacy in the terminal yields the following result

duplicacy backup
Storage set to sftp://user@localserver:portnumber//mnt/data/backup/laptop
Last backup at revision 14 found
Indexing /home/gerjo
Loaded 4 include/exclude pattern(s)
etc
etc
Backup for /directory/ at revision 15 completed
1 directory was not included due to access errors

so obviously duplicacy knows where to find my private key.
Running duplicacy-util from the terminal makes this error even more perplexing:

duplicacy-util -f LaptopHomefolder -a
19:46:15 Using global config: /dir/.duplicacy-util/duplicacy-util.yaml
19:46:15 Using config file: /dir/.duplicacy-util/LaptopHomefolder.yaml
19:46:15 duplicacy-util starting, version: 1.5, Git Hash: f2e7147
19:46:15 Rotating log files
19:46:15 Beginning backup on 10-19-2018 19:46:15
19:46:15 Backing up to storage rp3-32 with 1 threads
19:47:00 Files: 121473 total, 271,463M bytes; 67 new, 95,014K bytes
19:47:00 All chunks: 55485 total, 271,843M bytes; 16 new, 104,392K bytes, 40,303K bytes uploaded
19:47:00 Duration: 45 seconds
19:47:00 Pruning storage rp3-32 using 1 thread(s) -all
19:47:01 Checking storage rp3-32
19:47:37 Operations completed in 1:22

So no complaining about not finding a private key here. I have no idea what's going on here, please help me fix this.

On MacOS using launchd, getting an error - Failed to read the preference file from repository /Users/krishna/Downloads: open /Users/krishna/Downloads/.duplicacy/preferences: operation not permitted

Thank you for this nice wrapper around duplicacy.

When I try to run this using launchd on MacOS, I get an error
'Failed to read the preference file from repository /Users/krishna/Downloads: open /Users/krishna/Downloads/.duplicacy/preferences: operation not permitted'

My repository is the Downloads folder on my Mac.
However, everything works fine when invoking duplicacy-util manually from the command line. What might be causing this failure, and how can I work around it?

Feature: Desktop notifications

The possibility to get email notifications is nice, but a bit overkill for my need. I wouldn't want my inbox to get littered with daily backup reports. I suspect that a desktop notification will give me enough feedback to go on. Some design considerations:

  • Alert on error
  • Notify on success?
  • Enabled by default?

The https://github.com/gen2brain/beeep package could be implemented to get desktop notifications working. @jeffaco If you like I could draft a PR. Would first like to have some consensus on its requirements though.

Less Chattering Log?

I don't really need to see which chunks are uploaded, and because it's on the same spinning HDD I think an option to keep less verbose logs would be great. Thanks!

Backing up to multiple storages: an issue with one storage results into no backup to subsequent storages

Hi, I have configured duplicacy-util to make backup to multiple storages:

storage:
    -   name: sftp_storage_1
        threads: 1
    -   name: sftp_storage_2
        threads: 1
    -   name: cloud_provider
        threads: 1

Recently I discovered that if the first storage is not accessible (offline), duplicacy-util catches error code 100 from duplicacy and stops without running backups for the other storages. Is this intended behaviour? Backing up to multiple destinations is meant to improve reliability (if for some reason one destination fails, another will still have a snapshot for a given day), but it does not work like that with duplicacy-util.

Feature: Set home folder

Even when specifying a global configuration from a different location, the util still defaults the home folder to the user profile location (for logging and storage definition files).

I'm not familiar with Go, but if I'm reading this correctly, you're using go-homedir to read this, and I think there's an issue with a "feature" it has to override the home directory with an environment variable (set to "HOME" in Windows). No matter what I've tried so far, this doesn't seem to work...

Ideally if we can get a "-gf" parameter in duplicacy-util itself to set the global folder (bypassing the go-homedir function call), then any logging and storage configuration files could use that?

Thanks!

Contemplating removing all but YAML configuration files ... input wanted

Since I have no real way to reach my user base, I thought I'd open an issue here.

Currently, duplicacy-util supports reading configuration files in a large variety of formats due to its use of Viper. Viper allows configuration files to be in JSON, TOML, YAML, HCL, or Java properties format. I've always used YAML, and to the best of my knowledge, I never came across anyone that used something else.

I'm contemplating dumping Viper and just sticking to YAML. The problem is that Viper pulls in the kitchen sink (there's a huge number of dependencies). I think fewer dependencies means fewer code paths, which means smaller images and greater reliability.

But I can only do that if folks are only using YAML configuration files.

Do you use something other than YAML for duplicacy-util configuration files? If so, reply here!

Running duplicacy-util_linux_x64_1.5 produces error

In Fedora 29, I'm trying to run duplicacy-util_linux_x64_1.5 by copying and renaming the file to /usr/bin/duplicacy-util, setting permissions with chmod +x then running the file with ./duplicacy-util. When I do this, I receive an error stating: Error: Unable to resolve location for storage directory and the program does not run. I currently have Duplicacy working. How might I be able to get duplicacy-util to run given this error?

Supporting notifications per backup configuration

I am in the process of adding HTTP notification support, but I am having some trouble since notifications are treated as global configuration. The net effect is that it makes it hard to have per-backup URLs and to integrate with services like healthchecks.io.

One of the simplest solutions would be to have per-backup notifications. A similar type of backup automation, Borgmatic, uses hooks per configuration. Have there been thoughts of going this route before?

2 small things, sample config and excludes

Just found this project today, very cool.

A couple of things,

  1. An example .yaml config with examples of all options would be super. I know the README contains most (all?) of the settings, but a config to work off of is super helpful.

  2. Any way to set up excludes for the repository?

Thanks

Edit: I guess using the "quote:" to exclude is possible based on the documentation, I think I answered my own question.

Want to run Duplicacy-Util using config files in central location

I have the following files structure:

C:\Users\Bob\AppData\Roaming\Duplicacy\			# Main folder for Duplicacy, holding executables in top  folder, and config files and logs in subfolders
C:\Users\Bob\AppData\Roaming\Duplicacy\Duplicacy.exe
C:\Users\Bob\AppData\Roaming\Duplicacy\duplicacy-util.exe
C:\Users\Bob\AppData\Roaming\Duplicacy\duplicacy-util.yaml
C:\Users\Bob\AppData\Roaming\Duplicacy\Config\
C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob\
C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob\.duplicacy
C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob\.duplicacy\cache
C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob\.duplicacy\cache\B2-Bob
...
C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob\.duplicacy\cache\B2-Bob\snapshots\Bob\14
C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob\.duplicacy\filters\
C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob\.duplicacy\keyring\
C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob\.duplicacy\log\	# Empty folder created automatically during init
C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob\.duplicacy\preferences\
C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob\Bob.yaml
C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob\log	# Logs are added here when running Duplicacy-Util
C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob\log\Bob.log

duplicacy-util.yaml:

duplicacypath: 'C:\Users\Bob\AppData\Roaming\Duplicacy\duplicacy.exe'

Bob.yaml (located in C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob):

#location of repository (the base directory of the backup)
repository: C:\Users\Bob

#location of storage
storage:
  - name: B2-Bob
    vss: yes
    threads: 18

#prune command, what storage to do it on and what values to use

prune:
  - storage: B2-Bob
    keep: "30:360 7:180 1:30"

#check command and what storage to do it on
check:
  - storage: B2-Bob

Windows Command Line:

C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob>duplicacy-util -d -g "C:\Users\Bob\AppData\Roaming\Duplicacy\duplicacy-util.yaml" -f Bob -backup

Duplicacy-Util output:

C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob>duplicacy-util -d -g "C:\Users\Bob\AppData\Roaming\Duplicacy\duplicacy-util.yaml" -sd "C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob" -f Bob -backup
17:23:59 Using global config: C:\Users\Bob\AppData\Roaming\Duplicacy\duplicacy-util.yaml
17:23:59 Using config file:   C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob\Bob.yaml
17:23:59
17:23:59 Backup Information:
17:23:59   Num  Storage             Threads
17:23:59    1   B2-Bob              18
17:23:59
17:23:59 Prune Information:
17:23:59    1: Storage B2-Bob
      Keep: -keep 30:360 -keep 7:180 -keep 1:30
17:23:59
17:23:59 Check Information:
17:23:59   Num  Storage             All Snapshots
17:23:59    1   B2-Bob
17:23:59
17:23:59
17:23:59 Backup Info: [map[name:B2-Bob vss:true threads:18]]
17:23:59 Copy Info: []
17:23:59 Prune Info: [map[storage:B2-Bob keep:-keep 30:360 -keep 7:180 -keep 1:30]]
17:23:59 Check Info[map[storage:B2-Bob]]
17:23:59 duplicacy-util starting, version: 1.5, Git Hash: f2e7147
17:23:59 Rotating log files
17:23:59 Beginning backup on 04-23-2021 17:23:59
17:24:03 Backing up to storage B2-Bob -vss with 18 threads
17:24:03 Executing: C:\Users\Bob\AppData\Roaming\Duplicacy\duplicacy.exe[backup -storage B2-Bob -stats -threads 18 -vss]
Error executing command: exit status 100
Error: Backup failed. Check the logs for details

Log (located in C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob\log):

17:23:59 Beginning backup on 04-23-2021 17:23:59
17:24:03 ######################################################################
17:24:03 Backing up to storage B2-Bob -vss with 18 threads
17:24:03 Executing: C:\Users\Bob\AppData\Roaming\Duplicacy\duplicacy.exe[backup -storage B2-Bob -stats -threads 18 -vss]
17:24:04 Repository has not been initialized
17:24:04 Error executing command: exit status 100

It appears that Duplicacy-Util is running Duplicacy from within "C:\Users\Bob\AppData\Roaming\Duplicacy", not from within "C:\Users\Bob\AppData\Roaming\Duplicacy\Config\Bob"

How do I fix this? Or is there a better way to keep the config files together?

Question on usage

This program looks perfect for what I'm looking for!

I just have two questions - and I apologize in advance if I'm just being dense and not seeing the answer.

  1. Do you have to first initialize the directories for duplicacy? Or will this script take care of that? I imagine you do, and also do it to set up the cloud storage passwords and such...

  2. Can you specify multiple directories to back up in a single local yaml config file? I probably need to read the Duplicacy docs some more, just not 100% sure on what the repository value does in the local config yaml file.

Thanks for doing this project - it looks like some great work!

Runtime Errors

Hello,

Hate to open another issue, but just last night I ran into an error after executing the program. I had just finished backing up both to a local NAS box and then copying that backup to the cloud service Wasabi. So, to clarify, there was no new data for duplicacy-util to work on, I just wanted to run it to see if any errors would come up.

It seemed to go over the local NAS data fine, however the issue occurred when it worked on copying to wasabi. Below is the error:

21:34:01 Using global config: C:\Users\mdombro\.duplicacy-util\duplicacy-util.yml
21:34:01 Using config file:   C:\Users\mdombro\.duplicacy-util\backup.yml
21:34:01 duplicacy-util starting, version: 1.2, Git Hash: 4aa94bc
21:34:01 Rotating log files
21:34:01 Beginning backup on 08-09-2018 21:34:01
21:34:01 Backing up to storage nas with 10 threads
21:34:02   Files: 1530 total, 8,548M bytes; 1 new, 863 bytes
21:34:02   All chunks: 1762 total, 8,549M bytes; 4 new, 627K bytes, 266K bytes uploaded
21:34:02   Duration: 1 second
21:34:02 Backing up to storage wasabi with 5 threads
21:34:03   Files: 1530 total, 8,548M bytes; 1 new, 863 bytes
21:34:03   All chunks: 1762 total, 8,549M bytes; 4 new, 627K bytes, 266K bytes uploaded
21:34:03   Duration: 1 second
21:34:03 Copying from storage nas to storage wasabi with 10 threads
21:34:04   Duration: 1 second
21:34:04 Pruning storage wasabi
21:34:04 Checking storage wasabi
panic: runtime error: index out of range

goroutine 1 [running]:
main.performBackup(0xc000000018, 0x896880)
        /Users/jeff/local/go/src/duplicacy-util/duplicacy-util.go:498 +0x2eb8
main.obtainLock(0x0)
        /Users/jeff/local/go/src/duplicacy-util/duplicacy-util.go:323 +0x278
main.processArguments(0x0, 0x0)
        /Users/jeff/local/go/src/duplicacy-util/duplicacy-util.go:296 +0x662
main.main()
        /Users/jeff/local/go/src/duplicacy-util/duplicacy-util.go:160 +0x98

My backup.yaml configuration file is as such:

repository: E:\Backup_repo
storage:
    1:
        name: nas
        threads: 10
    2:
        name: wasabi
        threads: 5

copy:
    1:
        from: nas
        to: wasabi
        threads: 10
        
prune:
    1:
        storage: nas
        keep: "0:365 30:180 7:30 1:7"
    
    1:
        storage: wasabi
        keep: "0:365 30:180 7:30 1:7"
        
check:
    1:
        storage: nas
        all: true
    2:
        storage: wasabi
        all: true

I also just re-ran it tonight and got a different error:

19:00:55 Using global config: C:\Users\user\.duplicacy-util\duplicacy-util.yml
19:00:55 Using config file:   C:\Users\user\.duplicacy-util\backup.yml
19:00:55
19:00:55 Backup Information:
19:00:55   Num  Storage             Threads
19:00:55    1   nas                    10
19:00:55    2   wasabi                 5
19:00:55 Copy Information:
19:00:55   Num  From                To                  Threads
19:00:55    1   nas                 wasabi                 10
19:00:55
19:00:55 Prune Information:
19:00:55    1: Storage wasabi
      Keep: -keep 0:365 -keep 30:180 -keep 7:30 -keep 1:7
19:00:55
19:00:55 Check Information:
19:00:55   Num  Storage             All Snapshots
19:00:55    1   nas                     true
19:00:55    2   wasabi                  true
19:00:55
19:00:55 duplicacy-util starting, version: 1.2, Git Hash: 4aa94bc
19:00:55 Rotating log files
19:00:55 Beginning backup on 08-10-2018 19:00:55
19:00:55 Backing up to storage nas with 10 threads
Error executing command: exit status 100

Any pointers of things to try would be super appreciated!

Leading spaces in 'quote' cause backup to fail

In my b2.yaml file, I had my storage defined (during testing) as:

storage:
    -   name: "b2"
        threads: 5
        quote: " -dry-run"

When running this with debug on, it fails with:

... <snip> ...
22:57:01 Beginning backup on 07-12-2020 22:57:01
22:57:01 Backing up to storage b2 with 5 threads  -dry-run
22:57:01 Executing: /usr/local/bin/duplicacy[backup -storage b2 -stats -threads 5  -dry-run]
Error executing command: exit status 3
Error: Backup failed. Check the logs for details

The log file isn't much help:

22:57:01 Beginning backup on 07-12-2020 22:57:01
22:57:01 ######################################################################
22:57:01 Backing up to storage b2 with 5 threads  -dry-run
22:57:01 Executing: /usr/local/bin/duplicacy[backup -storage b2 -stats -threads 5  -dry-run]
22:57:01 The backup command requires no arguments.
22:57:01 
22:57:01 NAME:
22:57:01    duplicacy backup - Save a snapshot of the repository to the storage
22:57:01 
22:57:01 USAGE:
22:57:01    duplicacy backup [command options]  
22:57:01 
22:57:01 OPTIONS:
22:57:01    -hash 			detect file differences by hash (rather than size and timestamp)
22:57:01    -t <tag> 			assign a tag to the backup
22:57:01    -stats 			show statistics during and after backup
22:57:01    -threads <n>			number of uploading threads
22:57:01    -limit-rate <kB/s>		the maximum upload rate (in kilobytes/sec)
22:57:01    -dry-run 			dry run for testing, don't backup anything. Use with -stats and -d
22:57:01    -vss 			enable the Volume Shadow Copy service (Windows and macOS using APFS only)
22:57:01    -vss-timeout <timeout>	the timeout in seconds to wait for the Volume Shadow Copy operation to complete
22:57:01    -storage <storage name> 	backup to the specified storage instead of the default one
22:57:01    -enum-only 			enumerate the repository recursively and then exit
22:57:01    
22:57:01 Error executing command: exit status 3

After some experimentation, it turns out that the leading space in the quote option is causing this failure.
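
For reference, here is the same storage definition with the leading space removed, which avoids the failure described above:

storage:
    -   name: "b2"
        threads: 5
        quote: "-dry-run"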

Possible daylight savings time bug in test?

I'm writing an AUR PKGBUILD for this project, and one of the tests fails with an hour's difference that looks suspiciously like a daylight savings time issue.

=== RUN   TestTimeDiffStringWithDurationDays
--- FAIL: TestTimeDiffStringWithDurationDays (0.00s)
    timeutils_test.go:250: Result was incorrect, got '4 days, 2:02:05', expected '4 days, 3:02:05'.

What else can I provide to be helpful?
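
For illustration only (this is not the project's actual test code), here is a small Go sketch showing how the same nominal interval can differ by an hour when the endpoints are built in a zone with DST versus UTC, which would account for a test expectation being off by exactly one hour:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Hypothetical example: America/New_York falls back one hour on 2018-11-04,
    // so the real elapsed time between these two local wall-clock timestamps is
    // one hour longer than the same wall-clock arithmetic done in UTC.
    loc, err := time.LoadLocation("America/New_York")
    if err != nil {
        panic(err)
    }

    startLocal := time.Date(2018, 11, 2, 1, 0, 0, 0, loc)
    endLocal := time.Date(2018, 11, 6, 4, 2, 5, 0, loc)
    fmt.Println("local zone:", endLocal.Sub(startLocal)) // 100h2m5s (includes the extra DST hour)

    startUTC := time.Date(2018, 11, 2, 1, 0, 0, 0, time.UTC)
    endUTC := time.Date(2018, 11, 6, 4, 2, 5, 0, time.UTC)
    fmt.Println("UTC:       ", endUTC.Sub(startUTC)) // 99h2m5s, i.e. 4 days, 3:02:05
}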

Verbose/debug (-v -d flags) don't seem to pass -d flag to duplicacy

Love the utility! I hope I'm not missing something but I am trying to troubleshoot an issue with keychain access when running as root from launchd and find myself needing the -d output from duplicacy.

However, when duplicacy-util is run with the -v -d flags, they don't seem to make it through to the duplicacy call. And using the quote field in the config file puts the -d flag after the backup command rather than before it, so that fails as well.

Running on OS X.

Error: unable to obtain lock using lockfile

I have created a scheduled task (Windows 7) that starts a duplicacy-util backup/copy job every few hours. Most of the time a previous instance is still running, because the job takes quite a long time.

If the task is already running I'm getting this error message:

Error: unable to obtain lock using lockfile: C:\Users\Username\.duplicacy-util\XyzID.lock

IMHO it shouldn't give an error like this because it's not really an error; it should just say that the task is already running.

The subject line of the error email shows:

duplicacy-util: Backup results for configuration XyzID (FAILURE)

I would like to filter these messages and auto-delete them, but that isn't possible because the subject just says "(FAILURE)".

Wrong description of the -all option for the check operation

The README describes the all option for the check operation as "Should all revisions be checked", but according to the duplicacy documentation, -all means "check snapshots with any id". The source code simply passes -all through to duplicacy, so the behaviour described in the README does not match what the option actually does.

option to add -fossils -resurrect to check?

Hi @jeffaco, I have just learned about your tool and it works great! Thanks a lot. I have a bunch of suggestions/questions which I will split into separate issues.

The first one is about passing additional parameters to check. In my environment I have an sftp storage and a multitude of clients with poor connections, which results in interrupted uploads, so I usually run prune -exhaustive; that in turn sometimes fossilizes chunks belonging to a revision that was still being uploaded. That isn't a problem in itself, because such chunks are normally restored with check -fossils -resurrect. In your tool, however, there is no way to pass additional options to the check command yet, so running it produces a missing-chunk error. Could you please add an option to pass -fossils -resurrect to duplicacy while checking?

Send test mail to Amazon SES fails

Sending a test mail to SES failed:

root@grad:/opt/duplicacy-util# duplicacy-util -tn -sd /opt/duplicacy-util
07:05:12 Using global config: /opt/duplicacy-util/duplicacy-util.yaml
Error: gomail: could not send email 1: 554 Transaction failed: Expected '/', got ;
Error: gomail: could not send email 1: 554 Transaction failed: Expected '/', got ;
Error: gomail: could not send email 1: 554 Transaction failed: Expected '/', got ;

with config file:

notifications:
  onStart: []
  onSkip: ['email']
  onSuccess: ['email']
  onFailure: ['email']

email:
  fromAddress: "Steve Frank <name@domain>"
  toAddress: "Steve Frank <name@domain>"
  serverHostname: email-smtp.us-east-1.amazonaws.com
  serverPort: 587
  authUsername: ID [these are correct in test]
  authPassword: KEY

Unable to send email if global config does not specify all parameters

This is a minor issue as there is a workaround.

I'm using an internal SMTP server which does not require authentication. So my global config looks like:

emailFromAddress: "Duplicacy <[email protected]>"
emailToAddress: "Systems <[email protected]>"
emailServerHostname: smtputil.company.com
emailServerPort: 25
emailAuthUsername: 
emailAuthPassword: 

Running duplicacy-util -tm with the above configuration successfully sends an email out.

However, if I run a backup or other operation and specify -m, I get the following error:

root@tetlu16util01:~/.duplicacy-util# duplicacy-util -f svn -a -m
13:28:25 Using global config: /root/.duplicacy-util/duplicacy-util.yaml
Error: Unable to send E-Mail; required fields missing from global configuration

The workaround I found was to specify a dummy username/password in the file; thankfully our SMTP server doesn't care about these (I guess the gomail library doesn't even send them if the SMTP server doesn't request authentication):

emailAuthUsername: user
emailAuthPassword: pass

Updating the above two lines in the global configuration works for me.

nothing happens (?)

I'm sure I'm doing something wrong, but I can't see what it is.
I downloaded the Windows release and also built my own binary by installing Go and compiling.
But nothing happens when I open it. SmartScreen is disabled and I even tried on another machine.
Via CMD I get this odd message:
[screenshot of the CMD output, 2019-01-11 21:27:42]

Error: exec: "duplicacy": executable file not found in %PATH%

If I run the following command, I get this error:
C:\_Duplicacy\_duplicacy-util\duplicacy-util.exe -version
Error: exec: "duplicacy": executable file not found in %PATH%

Similar commands work fine, for example:
C:\_Duplicacy\_duplicacy-util\duplicacy-util.exe -h
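
duplicacy-util shells out to the duplicacy CLI, so duplicacy.exe has to be findable via PATH. One possible fix (the C:\_Duplicacy directory here is only an assumption about where duplicacy.exe lives) is to add that directory to the PATH for the current session and retry:

set PATH=%PATH%;C:\_Duplicacy
C:\_Duplicacy\_duplicacy-util\duplicacy-util.exe -version

For a permanent fix, add the directory through System Properties > Environment Variables (or in whatever environment your scheduler uses).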

option to work with multiple local configs simultaneously

On one of my clients, I have several different backups running. It would be nice to pass multiple local configs to duplicacy-util at once and get a single email notification at the end. For example:

duplicacy-util.exe -f config_1 -f config_2 -f config_3

instead of

duplicacy-util.exe -f config_1
duplicacy-util.exe -f config_2
duplicacy-util.exe -f config_3

Is it possible to implement?

Is duplicacy-util dead?

Is duplicacy-util dead?
No more answers to messages here, no more updates.

Maybe Duplicacy itself is dead. :(

VSS?

How do I enable VSS for backups?
By adding "vss: true" to the control file?

Storage definitions configuration file structure

Hi @jeffaco, thanks again for this great tool!

I'm writing some scripts to help automate duplicacy and duplicacy-util configuration and deployment (I hope to have ready to share soon), and I have some feedback on the storage definitions config file.

When I first read about this tool I assumed the numbered keys for each item had meaning: 1: under storage must refer to 1: under prune, and so on. But it appears this is not the case; they are really just an array defined with numbered keys. Also, if you happen to skip any numbers in your file (perhaps from copying and pasting), for example 1, 2, 3, 5, 6, only entries 1, 2, and 3 are processed and 5 and 6 are skipped.

Since these numbers don't have any meaning I propose they be removed. This will also simplify my tool that can generate these files.

Please let me know what you think, I can take a stab at implementing this in a backwards compatible way and submit a pull request if you're open to it.

Rewriting your example to the proposed format:

repository: /Volumes/Quicken

storage:
  - name: b2
    threads: 10
  
  - name: azure-direct
    threads: 5

copy:
  - from: b2
    to: azure
    threads: 10

prune:
  - storage: b2
    keep: "0:365 30:180 7:30 1:7"

  - storage: azure
    keep: "0:365 30:180 7:30 1:7"

check:
  - storage: b2
    all: true
  
  - storage: azure
    all: true

No dedicated Command Line Option for "Copy"?

It seems there's no dedicated Command Line Option for "Copy", just a combined one:

-b Perform duplicacy backup/copy operation

So I cannot start a Copy process without a preceding Backup operation?

I want to run Copy without Backup to copy a local storage to a cloud storage. My workaround: I created a "fake" backup with an empty fake folder that runs prior to Copy.
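
A minimal sketch of how the duplicacy-util side of that workaround might look (the repository path and storage names are hypothetical): the repository points at an empty folder, so the backup step is a quick no-op and the copy step does the real work:

repository: E:\EmptyFakeFolder

storage:
    1:
        name: local
        threads: 1

copy:
    1:
        from: local
        to: cloud
        threads: 10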
