wamdam / backy2

backy2: Deduplicating block based backup software for ceph/rbd, image files and devices

Home Page: http://backy2.com/

License: Other

Python 69.01% Makefile 0.55% Shell 0.38% Mako 0.09% CSS 12.08% JavaScript 2.44% HTML 6.64% SCSS 8.81%
backup ceph scrub snapshot python3 postgresql rbd

backy2's Introduction

What's the state of this project?

backy2 is an artefact of my hosting company, where I recognized that there were no sane and reliable backup solutions for block-based / snapshot-based backups in the many-terabyte to petabyte range.

backy2 was the second backup software I wrote for our use cases. The first one (you guessed right, it was called "backy") was designed for .img-based virtual machines and had features tailored to that.

backy2 was designed around our mostly ceph/rbd-based cluster; that's where most of its features come from.

Meanwhile we have switched to local LVM thin pools and have a very clever, fast new pull-backup solution, for which I (of course) wrote a third iteration of backup software. However, it's neither called backy3 nor open source (for now).

That's why this project receives only very minimal maintenance from me and practically no updates. This means you can expect installation issues due to missing libraries on modern operating system versions. However, these should be rather easy to fix, and to some degree I'm willing to do so. So there's no reason to panic if you're using backy2 in 2022 and plan to use it for years to come: the Python code will most likely still be valid in several years, and the libraries it depends on have proven very compatible across newer versions.

The code, however, is stable (no guarantees though), and the software is feature complete; that's the main reason why there have been no significant commits in the last months. We have performed many years of stable backups and restores with it.

What is backy2?

backy2 is a deduplicating, block-based backup software which encrypts and compresses by default.

The primary use cases for backy2 are:

  • fast and bandwidth-efficient backup of ceph/rbd virtual machine images to S3 or NFS storage
  • backup of LVM volumes (e.g. from personal computers) to external USB disks

Main features

Small backups
backy2 deduplicates while reading from the block device and writes each block only once if a block with the same checksum (sha512) already exists.
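The dedup idea can be sketched roughly like this. This is a simplified illustration, not backy2's actual code: the real software addresses stored blocks by its own block uids rather than by checksum, and `store` here is a hypothetical dict-like backend.

```python
import hashlib

def backup_blocks(device_blocks, store):
    """Write each block to the store only once, keyed by its sha512 checksum.

    device_blocks: iterable of bytes objects (fixed-size blocks).
    store: dict-like mapping checksum -> block data (hypothetical backend).
    Returns the list of checksums that make up this backup version.
    """
    version = []
    for block in device_blocks:
        csum = hashlib.sha512(block).hexdigest()
        if csum not in store:      # only previously unseen data hits the backend
            store[csum] = block
        version.append(csum)
    return version
```

A version is then just an ordered list of checksums; two versions sharing identical blocks share the stored data.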
Compressed backups
backy2 compresses all data blocks with the zstandard library, tuned for performance.
Encrypted backups
All data blocks are encrypted by default. Encryption is managed in integer versions; migration and re-keying procedures exist.
Fast backups
With the help of ceph's rbd diff, backy2 reads only the blocks changed since the last backup. We have virtual machines with 600 GB backed up in about 30 seconds with <70 MB/s of bandwidth.
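The changed-block list typically comes from something like `rbd diff --from-snap <last-snap> pool/image@snap --format json`. A minimal parser for that output might look as follows; the exact JSON fields (`offset`, `length`, `exists`) are an assumption based on common ceph releases, so check your version:

```python
import json

def changed_extents(rbd_diff_json: str):
    """Parse `rbd diff --format json` output into (offset, length) extents.

    Extents marked as not existing (discarded/trimmed ranges) are skipped.
    The field names used here are assumptions; verify against your ceph release.
    """
    return [(e["offset"], e["length"])
            for e in json.loads(rbd_diff_json)
            if e.get("exists", "true") == "true"]
```

Only the returned extents then need to be read from the source image.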
Continuable backups and restores
If the data backend storage is unreliable (storage, network, …) and backups or restores can't finish, backy2 can continue them once the outage has ended.
Small required bandwidth to the backup target
As only changed blocks are written to the backup target, a small (i.e. gbit) connection is sufficient even for larger backups. Even with newly created block devices the traffic to the backup target is small, because these devices are usually full of \0 and are deduplicated before ever reaching the target storage.
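Detecting such all-zero blocks before upload is cheap; a sketch of the check (illustrative only, not backy2's code):

```python
def is_sparse(block: bytes) -> bool:
    """True if the block is entirely zero bytes and need not be uploaded.

    bytes.count(0) counts zero bytes without allocating a comparison buffer.
    """
    return block.count(0) == len(block)
```

All blocks that pass this test deduplicate to the same stored object (or are skipped outright), so a freshly created device costs almost no target bandwidth.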
As simple as cp, but as clever as backup needs to be

With a very small set of commands, good --help output and intuitive usage, backy2 feels mostly like cp. That's intentional, because we think a restore must be fool-proof and succeed even if you're woken up at 3am and are drunk.

It must also be hard for you to do stupid things. For example, existing files or rbd volumes will not be overwritten unless you --force, and deletion of young backups fails by default.

Scrubbing with or without source data against bitrot and other data loss

Every backed-up block keeps a checksum with it. When backy2 scrubs a backup, it reads the block from the backup target storage, calculates its checksum and compares it to the stored checksum (and size). If the checksum differs, most likely there was an error when storing or reading the block, or bitrot on the backup target storage.

The block, and the backups it belongs to, are then marked 'invalid', and the block will be re-read for the next backup version even if rbd diff indicates that it hasn't changed.

Scrubbing can also take a percentage value for how many blocks of the backup it should scrub. So you can statistically scrub 16% each day and have a full scrub each week (16 * 7 = 112 > 100).
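Such statistical scrubbing amounts to picking a random sample of blocks per run; a minimal sketch (function name and shape are hypothetical, not backy2's API):

```python
import random

def blocks_to_scrub(block_ids, percentage):
    """Pick a random `percentage` percent of blocks for one scrub run.

    Sampling independently each day, 16% daily gives 112% expected coverage
    per week, so most blocks get checked at least weekly.
    """
    k = max(1, len(block_ids) * percentage // 100)
    return random.sample(block_ids, k)
```

Because each run samples independently, some blocks are scrubbed twice while a few may wait longer; the coverage guarantee is statistical, not exact.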

Note

Even invalid backups can be restored!

Fast restores
With supporting block storage (like ceph/rbd), a sparse restore is possible. This means sparse blocks (i.e. blocks which "don't exist" or are all \0) are skipped on restore.
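On a file target, skipping sparse blocks means seeking over them instead of writing zeroes, which leaves holes on filesystems that support them. A simplified sketch (not backy2's actual restore path; the 4 MiB default block size is an assumption):

```python
def restore_sparse(blocks, target_path, block_size=4 * 1024 * 1024):
    """Restore a list of fixed-size blocks, leaving holes for all-zero blocks.

    Seeking past a block instead of writing it keeps the target file sparse
    on filesystems that support holes.
    """
    with open(target_path, "wb") as f:
        for i, block in enumerate(blocks):
            if block.count(0) == len(block):
                continue                       # leave a hole: nothing written
            f.seek(i * block_size)
            f.write(block)
        f.truncate(len(blocks) * block_size)   # ensure the full logical size
```

Reading the file back still yields zeroes for the holes; only the allocated blocks consume disk space.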
Parallel: backups while scrubbing while restoring

As backy2 is a long-running process, you will of course not want to wait until something else has finished. So there are very few places in backy2 where a global lock is taken (notably the rarely used full cleanup, which you can kill at any time to release the lock).

So you can scrub, backup and restore (multiple times each) on the same machine.

Does not flood your caches
When reading large amounts of data on Linux, the buffers/caches tend to fill with that data (which, in the case of backups, is essentially needed only once). backy2 instructs Linux to forget the data immediately once it's processed.
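On Linux this is typically done with `posix_fadvise(POSIX_FADV_DONTNEED)` after each read; whether backy2 uses exactly this call is an assumption, but the mechanism looks like this:

```python
import os

def read_block_without_caching(fd, offset, size):
    """Read a block, then advise the kernel to drop it from the page cache.

    os.posix_fadvise with POSIX_FADV_DONTNEED is Linux/Unix-only and
    available since Python 3.3.
    """
    os.lseek(fd, offset, os.SEEK_SET)
    data = os.read(fd, size)
    os.posix_fadvise(fd, offset, size, os.POSIX_FADV_DONTNEED)
    return data
```

This keeps a multi-terabyte backup run from evicting the caches your running VMs actually need.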
Backs up very large volumes RAM- and CPU efficiently
We back up multiple terabytes per VM (and this multiple times per night). backy2 typically runs in <1 GB of RAM with these volume sizes. RAM usage depends mostly on simultaneous reads/writes, which are configured through backy.cfg. We have seen ~16 GB of RAM usage with large configured queues for 200 TB images and a backup performance of Ø350 MB/s to an external S3 storage.
Backups can be directly mounted

backy2 brings its own FUSE service. So a simple Linux command makes backups directly mountable - even on another machine:

    root@backy2:~# backy2 fuse /mnt

And on another terminal:

    root@backy2:~# ls -la /mnt/by_version
    drwx------ 0 root root 0 Mai  3 16:14 0c44841a-8d47-11ea-8b2d-3dc6919c2aca
    drwx------ 0 root root 0 Mai  3 16:14 60ae794e-8d46-11ea-8b2d-3dc6919c2aca
    drwx------ 0 root root 0 Mai  3 16:14 9d8cfe80-8d46-11ea-8b2d-3dc6919c2aca

    root@backy2:~# ls -la /mnt/by_version_uid/9d8cfe80-8d46-11ea-8b2d-3dc6919c2aca
    -rw------- 1 root root 280M Mai  3 14:01 data
    -rw------- 1 root root    0 Mai  3 14:01 expire
    -rw------- 1 root root    9 Mai  3 14:01 name
    -rw------- 1 root root    0 Mai  3 14:01 snapshot_name
    -rw------- 1 root root   51 Mai  3 14:01 tags
    -rw------- 1 root root    5 Mai  3 14:01 valid

    root@backy2:~# cat /mnt/by_version_uid/9d8cfe80-8d46-11ea-8b2d-3dc6919c2aca/name
    sometest1

    root@backy2:~# mount /mnt/by_version_uid/9d8cfe80-8d46-11ea-8b2d-3dc6919c2aca/data /mnt

You get the idea. The data file (and any resulting partitions and mounts) is
read/write! Writing to it goes to a temporary local file; the original backup
version is *not* modified.
This means you may even boot a VM from this file from a remote backup.
Automatic tagging of backup versions

You can tag backups with your own tags depending on your use case. In addition, backy2 automatically tags with these tags:

b_daily
b_weekly
b_monthly

It has a clever algorithm that detects how long ago the last backup with a given tag was made for any given image, and applies the tag again once the interval has passed. So you'll see a b_weekly every 7 days (if you keep those backups).
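The interval check behind this can be sketched as follows. The day counts and the function shape are hypothetical illustrations; backy2's real intervals live in its own code:

```python
from datetime import datetime, timedelta

# Hypothetical intervals in days, one per automatic tag.
TAG_INTERVALS = {"b_daily": 1, "b_weekly": 7, "b_monthly": 30}

def due_tags(last_tagged, now):
    """Return the automatic tags whose interval has elapsed.

    last_tagged: maps tag -> datetime of the newest kept backup carrying
    that tag (missing/None if the image was never tagged with it).
    """
    due = []
    for tag, days in TAG_INTERVALS.items():
        last = last_tagged.get(tag)
        if last is None or now - last >= timedelta(days=days):
            due.append(tag)
    return due
```

Because the check looks at the newest *kept* backup per tag, deleting intermediate versions does not break the cadence: the next backup simply gets tagged again.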

Prevents you from doing something stupid

By providing a config value for how old backups need to be before they may be deleted, you can't accidentally delete very young backups.

Also, with backy2 protect you can protect versions from deletion. This is very important when you need to restore a version that would otherwise be deleted within the next hours. During a restore, a lock prevents deletion; by protecting it, however, it cannot be deleted until you decide it's no longer needed.

Also, you'll need --force to overwrite existing files or volumes.

Easy installation
Currently, under Ubuntu 18.04, you simply install the .deb. Please refer to :ref:`installation` for the detailed install process.
Free and Open Source Software
Anyone can review the source code and audit its security and functionality. backy2 is licensed under the LGPLv3 (:ref:`license`).

backy2's People

Contributors

ednt, gschoenberger, pn-d9t, sjbronner, wamdam


backy2's Issues

Status of software

Hi,

Before I start using backy2 for real, can someone confirm the current status please? Last formal release was 2017, so I'm wondering how maintained it is.

Regards,
Andy

Why is the backup time different?

I need to change it to KST time.

[root@localhost ~]# backy2 -c /disk/home/backup/backy.cfg ls
INFO: $ /usr/local/bin/backy2 -c /disk/home/backup/backy.cfg ls
+---------------------+-------------+---------------+-------+--------------+--------------------------------------+-------+-----------+----------------------------+---------------------+
| date | name | snapshot_name | size | size_bytes | uid | valid | protected | tags | expire |
+---------------------+-------------+---------------+-------+--------------+--------------------------------------+-------+-----------+----------------------------+---------------------+
| 2020-03-26 05:37:29 | full_monday | | 25600 | 107374182400 | e1edf5ac-6f23-11ea-9b11-901b0e6098d7 | 1 | 0 | b_daily,b_monthly,b_weekly | 2020-03-26 14:40:00 |
+---------------------+-------------+---------------+-------+--------------+--------------------------------------+-------+-----------+----------------------------+---------------------+
INFO: Backy complete.

[root@localhost ~]# date
Thu Mar 26 15:11:34 KST 2020

New release

Hey,
The latest release is almost 2 years old and some features are missing from the master version.
Is it possible to release a newer version?

Thanks!

Devel question: Why isn't the data checksum hash used as a key for the data backends?

The data backends generate their own unique ids to address the stored objects. Why isn't the data checksum hash used directly as a key?

From the source code I can see that there is mostly a 1:1 relation between block uids and checksums. I say mostly because there is a special case: when the checksum is the same but the length of the block differs, a new block object is generated with a new block uid and a different length, but the same checksum. I don't see how this would be relevant, as we'd still have a hash collision, which is really bad and hopefully extremely rare. Furthermore, the block size shouldn't be changed while backups exist, so there'd be a 1:1 relation in any case.

I fear that I'm missing something fundamental. Why is this level of indirection needed? One possibility I could imagine is that a specific data backend has a constrained key space. But this isn't the case with the current backends. I'd appreciate any insight. Thank you!

assert len(data) == block.size

Hi.
Two backy2 servers are up and running.
Backup data was exported from server 1.
An error occurred while importing and restoring on server 2. How do I fix it?
The DB is PostgreSQL.

server1 metadata

[root@backy2_server1 ~]# backy2 stats
INFO: $ /usr/local/bin/backy2 stats
+----------------------------+--------------------------------------+--------------+-------------+-------------+-------------+-------------+---------------+----------------+-------------+--------------+--------------+---------------+--------------+
| date | uid | name | size bytes | size blocks | bytes read | blocks read | bytes written | blocks written | bytes dedup | blocks dedup | bytes sparse | blocks sparse | duration (s) |
+----------------------------+--------------------------------------+--------------+-------------+-------------+-------------+-------------+---------------+----------------+-------------+--------------+--------------+---------------+--------------+
| 2020-01-02 13:13:38.292383 | f807adce-2d15-11ea-89be-525400e5d9c4 | full_monday | 13002932224 | 3101 | 13002932224 | 3101 | 3083403264 | 736 | 0 | 0 | 9919528960 | 2365 | 121 |
| 2020-01-02 13:16:31.688367 | 7a69d986-2d16-11ea-b1ab-525400e5d9c4 | diff_tuesday | 18004312064 | 4293 | 18004312064 | 4293 | 111411200 | 27 | 3036676096 | 724 | 14856224768 | 3542 | 76 |
+----------------------------+--------------------------------------+--------------+-------------+-------------+-------------+-------------+---------------+----------------+-------------+--------------+--------------+---------------+--------------+
INFO: Backy complete.

server1 export

[root@backy2_server1 ~]# backy2 export f807adce-2d15-11ea-89be-525400e5d9c4 full_monday
INFO: $ /usr/local/bin/backy2 export f807adce-2d15-11ea-89be-525400e5d9c4 full_monday
INFO: Backy complete.

server1 -> server2 (export file rsync)

[root@backy2_server1 ~]# rsync -av full_monday [email protected]:/home/target/
[email protected]'s password:
sending incremental file list
full_monday

sent 349,523 bytes received 35 bytes 139,823.20 bytes/sec
total size is 349,350 speedup is 1.00

server2 import

[root@backy2_server2 target]# backy2 import full_monday
INFO: $ /usr/local/bin/backy2 import full_monday
INFO: Backy complete.

server2 metadata

[root@backy2_server2 ~]# backy2 ls
INFO: $ /usr/local/bin/backy2 ls
+---------------------+-------------+---------------+------+-------------+--------------------------------------+-------+-----------+------+
| date | name | snapshot_name | size | size_bytes | uid | valid | protected | tags |
+---------------------+-------------+---------------+------+-------------+--------------------------------------+-------+-----------+------+
| 2020-01-02 13:11:36 | full_monday | | 3101 | 13002932224 | f807adce-2d15-11ea-89be-525400e5d9c4 | 1 | 0 | |
+---------------------+-------------+---------------+------+-------------+--------------------------------------+-------+-----------+------+
INFO: Backy complete.

restore error

[root@backy2_server2 ~]# backy2 restore f807adce-2d15-11ea-89be-525400e5d9c4 file:///home/restore/test.img
INFO: $ /usr/local/bin/backy2 restore f807adce-2d15-11ea-89be-525400e5d9c4 file:///home/restore/test.img
ERROR: Unexpected exception
ERROR: object of type 'NoneType' has no len()
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/backy2-2.9.18-py3.6.egg/backy2/scripts/backy.py", line 583, in main
func(**func_args)
File "/usr/local/lib/python3.6/site-packages/backy2-2.9.18-py3.6.egg/backy2/scripts/backy.py", line 42, in restore
backy.restore(version_uid, target, sparse, force)
File "/usr/local/lib/python3.6/site-packages/backy2-2.9.18-py3.6.egg/backy2/backy.py", line 294, in restore
assert len(data) == block.size
TypeError: object of type 'NoneType' has no len()
INFO: Backy failed.

RPM

Would be great to have an RPM version of the software, especially with python3 missing from the regular EL7 repos

Backy2 cleanup fails

I'm running backy2 to back up ceph rbd images to a dedicated ceph storage via radosgw.

After deleting backups older than 15 days, I ran "backy2 cleanup" to delete unused objects on radosgw.

Unfortunately the cleanup fails, returning the following error message:

root@osb1:~# backy2 cleanup
INFO: $ /usr/bin/backy2 cleanup
INFO: Cleanup-fast: 230 false positives, 769 data deletions.
ERROR: Unexpected exception
ERROR: timed out
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/backy2/scripts/backy.py", line 740, in main
func(**func_args)
File "/usr/lib/python3/dist-packages/backy2/scripts/backy.py", line 299, in cleanup
backy.cleanup_fast()
File "/usr/lib/python3/dist-packages/backy2/backy.py", line 674, in cleanup_fast
no_del_uids = self.data_backend.rm_many(uid_list)
File "/usr/lib/python3/dist-packages/backy2/data_backends/s3.py", line 198, in rm_many
errors = self.bucket.delete_keys(uids, quiet=True)
File "/usr/local/lib/python3.6/dist-packages/boto/s3/bucket.py", line 728, in delete_keys
while delete_keys2(headers):
File "/usr/local/lib/python3.6/dist-packages/boto/s3/bucket.py", line 717, in delete_keys2
body = response.read()
File "/usr/local/lib/python3.6/dist-packages/boto/connection.py", line 410, in read
self._cached_response = http_client.HTTPResponse.read(self)
File "/usr/lib/python3.6/http/client.py", line 466, in read
return self._readall_chunked()
File "/usr/lib/python3.6/http/client.py", line 573, in _readall_chunked
chunk_left = self._get_chunk_left()
File "/usr/lib/python3.6/http/client.py", line 556, in _get_chunk_left
chunk_left = self._read_next_chunk_size()
File "/usr/lib/python3.6/http/client.py", line 516, in _read_next_chunk_size
line = self.fp.readline(_MAXLINE + 1)
File "/usr/lib/python3.6/socket.py", line 586, in readinto
return self._sock.recv_into(b)
socket.timeout: timed out
INFO: Backy failed.

I'm running the latest version of backy2 (2.10.5) on Ubuntu Bionic.

Does anybody else have that issue?

mount: mount /dev/nbd0 on /mnt/test failed: Function not implemented

    backy2 nbd -r 9353a59c-39df-11e9-80a8-525400597931
    INFO: $ /usr/bin/backy2 nbd -r 9353a59c-39df-11e9-80a8-525400597931
    INFO: Starting to serve nbd on 127.0.0.1:10809
    INFO: You may now start
    INFO: nbd-client -l 127.0.0.1 -p 10809
    INFO: and then get the backup via
    INFO: modprobe nbd
    INFO: nbd-client -N 127.0.0.1 -p 10809 /dev/nbd0

    INFO: Incoming connection from 127.0.0.1:33572
    INFO: [127.0.0.1:33572] Client aborted negotiation
    INFO: Incoming connection from 127.0.0.1:33732
    INFO: [127.0.0.1:33732] Negotiated export: 9353a59c-39df-11e9-80a8-525400597931
    INFO: nbd is read only.

error log:

mount /dev/nbd0 /mnt/test
mount: /dev/nbd0 is write-protected, mounting read-only
mount: mount /dev/nbd0 on /mnt/test failed: Function not implemented

TypeError: SQLite DateTime type only accepts Python datetime and date objects as input.

I'm using SQLite. How do I fix this? It's the same even after reinstalling. :(
#48

[root@localhost ~]# backy2 backup file:///home/guest.img full_monday
INFO: $ /usr/local/bin/backy2 backup file:///home/guest.img full_monday
ERROR: Unexpected exception
ERROR: (builtins.TypeError) SQLite DateTime type only accepts Python datetime and date objects as input.
[SQL: INSERT INTO versions (uid, date, expire, name, snapshot_name, size, size_bytes, valid, protected) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)]
[parameters: [{'uid': 'b5037ffe-4cae-11ea-8221-90b11c43e4ce', 'size_bytes': 23006609408, 'date': '2020-02-11 09:13:03', 'valid': 0, 'protected': 0, 'size': 5486, 'snapshot_name': '', 'name': 'full_monday', 'expire': None}]]
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1173, in _execute_context
context = constructor(dialect, self, conn, *args)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 808, in _init_compiled
param.append(processorskey)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/dialects/sqlite/base.py", line 759, in process
"SQLite DateTime type only accepts Python "
TypeError: SQLite DateTime type only accepts Python datetime and date objects as input.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/backy2-2.10.7-py3.6.egg/backy2/scripts/backy.py", line 742, in main
func(**func_args)
File "/usr/local/lib/python3.6/site-packages/backy2-2.10.7-py3.6.egg/backy2/scripts/backy.py", line 95, in backup
version_uid = backy.backup(name, snapshot_name, source, hints, from_version, tags, expire_date)
File "/usr/local/lib/python3.6/site-packages/backy2-2.10.7-py3.6.egg/backy2/backy.py", line 521, in backup
version_uid = self._prepare_version(name, snapshot_name, source_size, from_version)
File "/usr/local/lib/python3.6/site-packages/backy2-2.10.7-py3.6.egg/backy2/backy.py", line 79, in _prepare_version
version_uid = self.meta_backend.set_version(name, snapshot_name, size, size_bytes, 0)
File "/usr/local/lib/python3.6/site-packages/backy2-2.10.7-py3.6.egg/backy2/meta_backends/sql.py", line 204, in set_version
self.session.commit()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 1036, in commit
self.transaction.commit()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 503, in commit
self._prepare_impl()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 482, in _prepare_impl
self.session.flush()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 2479, in flush
self._flush(objects)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 2617, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/util/langhelpers.py", line 68, in exit
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/util/compat.py", line 153, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 2577, in _flush
flush_context.execute()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/orm/unitofwork.py", line 422, in execute
rec.execute(self)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/orm/unitofwork.py", line 589, in execute
uow,
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/orm/persistence.py", line 245, in save_obj
insert,
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/orm/persistence.py", line 1084, in _emit_insert_statements
c = cached_connections[connection].execute(statement, multiparams)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 982, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/sql/elements.py", line 293, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1101, in _execute_clauseelement
distilled_params,
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1176, in _execute_context
e, util.text_type(statement), parameters, None, None
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1476, in _handle_dbapi_exception
util.raise_from_cause(sqlalchemy_exception, exc_info)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/util/compat.py", line 398, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/util/compat.py", line 152, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1173, in _execute_context
context = constructor(dialect, self, conn, *args)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 808, in _init_compiled
param.append(processorskey)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.13-py3.6-linux-x86_64.egg/sqlalchemy/dialects/sqlite/base.py", line 759, in process
"SQLite DateTime type only accepts Python "
sqlalchemy.exc.StatementError: (builtins.TypeError) SQLite DateTime type only accepts Python datetime and date objects as input.
[SQL: INSERT INTO versions (uid, date, expire, name, snapshot_name, size, size_bytes, valid, protected) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)]
[parameters: [{'uid': 'b5037ffe-4cae-11ea-8221-90b11c43e4ce', 'size_bytes': 23006609408, 'date': '2020-02-11 09:13:03', 'valid': 0, 'protected': 0, 'size': 5486, 'snapshot_name': '', 'name': 'full_monday', 'expire': None}]]
INFO: Backy failed.

After deletion, incorrect information is displayed.

1) backupdata before deletion
[root@localhost ~]# backy2 stats
INFO: $ /usr/local/bin/backy2 stats
+---------------------+--------------------------------------+------+------------+-------------+------------+-------------+---------------+----------------+-------------+--------------+--------------+---------------+--------------+
| date | uid | name | size bytes | size blocks | bytes read | blocks read | bytes written | blocks written | bytes dedup | blocks dedup | bytes sparse | blocks sparse | duration (s) |
+---------------------+--------------------------------------+------+------------+-------------+------------+-------------+---------------+----------------+-------------+--------------+--------------+---------------+--------------+
| 2020-03-17 07:15:08 | fc35739c-681e-11ea-a6f1-90b11c43e4ce | test | 2662662144 | 635 | 2662662144 | 635 | 2662662144 | 635 | 0 | 0 | 0 | 0 | 20 |
+---------------------+--------------------------------------+------+------------+-------------+------------+-------------+---------------+----------------+-------------+--------------+--------------+---------------+--------------+
INFO: Backy complete.

2) rm -> cleanup success
[root@localhost ~]# backy2 rm fc35739c-681e-11ea-a6f1-90b11c43e4ce -f
INFO: $ /usr/local/bin/backy2 rm fc35739c-681e-11ea-a6f1-90b11c43e4ce -f
INFO: Removed backup version fc35739c-681e-11ea-a6f1-90b11c43e4ce with 635 blocks.
INFO: Backy complete.

[root@localhost ~]# backy2 cleanup --full
INFO: $ /usr/local/bin/backy2 cleanup --full
INFO: Cleanup: Removed 635 blobs
INFO: Backy complete.

3) Why is it still displayed in stats after deletion? Help.
[root@localhost ~]# backy2 stats
INFO: $ /usr/local/bin/backy2 stats
+---------------------+--------------------------------------+------+------------+-------------+------------+-------------+---------------+----------------+-------------+--------------+--------------+---------------+--------------+
| date | uid | name | size bytes | size blocks | bytes read | blocks read | bytes written | blocks written | bytes dedup | blocks dedup | bytes sparse | blocks sparse | duration (s) |
+---------------------+--------------------------------------+------+------------+-------------+------------+-------------+---------------+----------------+-------------+--------------+--------------+---------------+--------------+
| 2020-03-17 07:15:08 | fc35739c-681e-11ea-a6f1-90b11c43e4ce | test | 2662662144 | 635 | 2662662144 | 635 | 2662662144 | 635 | 0 | 0 | 0 | 0 | 20 |
+---------------------+--------------------------------------+------+------------+-------------+------------+-------------+---------------+----------------+-------------+--------------+--------------+---------------+--------------+
INFO: Backy complete.

4) I can delete it directly from sqlite
[root@localhost ~]# sqlite3 /disk/home/backy2/backy.sqlite
sqlite> select * from stats;
date|version_uid|version_name|version_size_bytes|version_size_blocks|bytes_read|blocks_read|bytes_written|blocks_written|bytes_found_dedup|blocks_found_dedup|bytes_sparse|blocks_sparse|duration_seconds
2020-03-17 07:15:08|fc35739c-681e-11ea-a6f1-90b11c43e4ce|test|2662662144|635|2662662144|635|2662662144|635|0|0|0|0|20

sqlite> delete from stats where version_uid = 'fc35739c-681e-11ea-a6f1-90b11c43e4ce';

5) But it is very inconvenient to delete manually.

Deleting backups fails

After the fix for #26 , I am unable to delete any backups:

    INFO: $ /opt/backy2/bin/backy2 -c /etc/backy2.cfg rm 10fde482-68cd-11e9-9990-003048c63494
    ERROR: Unexpected exception
    ERROR: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely) (sqlite3.IntegrityError) deleted_blocks.id may not be NULL [SQL: 'INSERT INTO deleted_blocks (uid, size, delete_candidate, time) VALUES (?, ?, ?, ?)'] [parameters: ('d99229d1b3TCsD3X7i6jp9modTzvNLh8', 4194304, 0, 1557314143)]
Traceback (most recent call last):
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
    context)
  File "/usr/lib64/python3.6/site-packages/sqlalchemy/engine/default.py", line 462, in do_execute
    cursor.execute(statement, parameters)
sqlite3.IntegrityError: deleted_blocks.id may not be NULL

That's with a database using the new schema:

# sqlite3 backy.sqlite '.schema deleted_blocks'
CREATE TABLE deleted_blocks (
        id BIGINT NOT NULL, 
        uid VARCHAR(32), 
        size BIGINT, 
        delete_candidate INTEGER NOT NULL, 
        time BIGINT NOT NULL, 
        PRIMARY KEY (id)
);
CREATE INDEX ix_deleted_blocks_uid ON deleted_blocks (uid);

Any idea what is going wrong?
With SQLite3, the size of integers should not matter at all IIUC, since the on-disk size is determined per row when they are stored.

MySQL metadata backend

I'm trying to get MariaDB working as a meta backend but I'm running into some issues.

Using 2.9.17 configured with MySQL meta backend running backy2 initdb results in an error:

sqlalchemy.exc.CompileError: (in table 'stats', column 'version_name'): VARCHAR requires a length on dialect mysql

This was resolved by defining the length of strings in sql.py for the affected entries, see attached patch below.

Next issue I ran into when running initdb is:

sqlalchemy.exc.ProgrammingError: (mysql.connector.errors.ProgrammingError) 1075 (42000): Incorrect table definition; there can be only one auto column and it must be defined as a key [SQL: '\nCREATE TABLE blocks (\n\tuid VARCHAR(32), \n\tversion_uid VARCHAR(36) NOT NULL, \n\tid INTEGER NOT NULL AUTO_INCREMENT, \n\tdate DATETIME NOT NULL, \n\tchecksum VARCHAR(128), \n\tsize BIGINT, \n\tvalid INTEGER NOT NULL, \n\tPRIMARY KEY (version_uid, id), \n\tFOREIGN KEY(version_uid) REFERENCES versions (uid)\n)\n\n']

I'm not sure how to fix this. If I only define one column with primary_key=True I can run initdb successfully, but then I run into errors with duplicated entries when running a backup; example output:

ERROR: (_mysql_exceptions.IntegrityError) (1062, "Duplicate entry 'c2e53e00-1352-11e9-9772-b499baadddb2' for key 'PRIMARY'") [SQL: 'INSERT INTO blocks (uid, version_uid, id, date, checksum, size, valid) VALUES (%s, %s, %s, now(), %s, %s, %s)'] [parameters: ((None, 'c2e53e00-1352-11e9-9772-b499baadddb2', 0, None, 4194304, 1), (None, 'c2e53e00-1352-11e9-9772-b499baadddb2', 1, None, 4194304, 1), (None, 'c2e53e00-1352-11e9-9772-b499baadddb2', 2, None, 4194304, 1), (None, 'c2e53e00-1352-11e9-9772-b499baadddb2', 3, None, 4194304, 1), (None, 'c2e53e00-1352-11e9-9772-b499baadddb2', 4, None, 4194304, 1), (None, 'c2e53e00-1352-11e9-9772-b499baadddb2', 5, None, 4194304, 1), (None, 'c2e53e00-1352-11e9-9772-b499baadddb2', 6, None, 4194304, 1), (None, 'c2e53e00-1352-11e9-9772-b499baadddb2', 7, None, 4194304, 1)  ... displaying 10 of 1000 total bound parameter sets ...  (None, 'c2e53e00-1352-11e9-9772-b499baadddb2', 998, None, 4194304, 1), (None, 'c2e53e00-1352-11e9-9772-b499baadddb2', 999, None, 4194304, 1))]

Any idea what I'm doing wrong? Below is the patch with the changes I've made.

--- /usr/lib/python3/dist-packages/backy2/meta_backends/sql.py.orig	2019-01-08 16:30:12.033365334 +0100
+++ /usr/lib/python3/dist-packages/backy2/meta_backends/sql.py	2019-01-09 12:17:23.255117738 +0100
@@ -30,7 +30,7 @@
     __tablename__ = 'stats'
     date = Column("date", DateTime , default=func.now(), nullable=False)
     version_uid = Column(String(36), primary_key=True)
-    version_name = Column(String, nullable=False)
+    version_name = Column(String(128), nullable=False)
     version_size_bytes = Column(BigInteger, nullable=False)
     version_size_blocks = Column(BigInteger, nullable=False)
     bytes_read = Column(BigInteger, nullable=False)
@@ -48,8 +48,8 @@
     __tablename__ = 'versions'
     uid = Column(String(36), primary_key=True)
     date = Column("date", DateTime , default=func.now(), nullable=False)
-    name = Column(String, nullable=False, default='')
-    snapshot_name = Column(String, nullable=False, server_default='', default='')
+    name = Column(String(128), nullable=False, default='')
+    snapshot_name = Column(String(128), nullable=False, server_default='', default='')
     size = Column(BigInteger, nullable=False)
     size_bytes = Column(BigInteger, nullable=False)
     valid = Column(Integer, nullable=False)
@@ -68,7 +68,7 @@
 class Tag(Base):
     __tablename__ = 'tags'
     version_uid = Column(String(36), ForeignKey('versions.uid'), primary_key=True, nullable=False)
-    name = Column(String, nullable=False, primary_key=True)
+    name = Column(String(128), nullable=False, primary_key=True)
 
     def __repr__(self):
        return "<Tag(version_uid='%s', name='%s')>" % (
@@ -80,7 +80,7 @@
     __tablename__ = 'blocks'
     uid = Column(String(32), nullable=True, index=True)
     version_uid = Column(String(36), ForeignKey('versions.uid'), primary_key=True, nullable=False)
-    id = Column(Integer, primary_key=True, nullable=False)
+    id = Column(Integer, nullable=False)
     date = Column("date", DateTime , default=func.now(), nullable=False)
     checksum = Column(String(128), index=True, nullable=True)
     size = Column(BigInteger, nullable=True)
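If I read SQLAlchemy's behaviour right, the duplicate-entry errors come from MySQL's AUTO_INCREMENT fighting with the block ids backy2 supplies itself (visible in the INSERT above). A possible fix is to keep id in the composite primary key but switch autoincrement off, so the mysql dialect stops emitting AUTO_INCREMENT. A sketch, not a tested patch (the model is trimmed to the relevant columns):

```python
from sqlalchemy import BigInteger, Column, DateTime, Integer, String, func
from sqlalchemy.dialects import mysql
from sqlalchemy.orm import declarative_base
from sqlalchemy.schema import CreateTable

Base = declarative_base()

class Block(Base):
    __tablename__ = 'blocks'
    uid = Column(String(32), nullable=True, index=True)
    version_uid = Column(String(36), primary_key=True, nullable=False)
    # still part of the composite primary key, but no AUTO_INCREMENT:
    # backy2 supplies the block id explicitly on every INSERT
    id = Column(Integer, primary_key=True, autoincrement=False)
    date = Column("date", DateTime, default=func.now(), nullable=False)
    size = Column(BigInteger, nullable=True)

# inspect the DDL the mysql dialect would emit
ddl = str(CreateTable(Block.__table__).compile(dialect=mysql.dialect()))
print(ddl)
```

The emitted DDL keeps PRIMARY KEY (version_uid, id) and contains no AUTO_INCREMENT column, which MySQL accepts.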

Scrubbing needs to be performed per backup version

In a common setup, Backy2 is used to back up snapshots of RBD volumes which are used by KVM machines. This way, most backup versions will share the very same blocks.

Issuing a scrub for each single Backy2 backup version will, to my understanding, scrub the very same blocks many times (e.g. if I keep 20 snapshots and some parts of the system did not change in the last 20 days). Partial scrubbing helps, but does not fully solve the issue.

I'd propose to implement a scrubbing mechanism which allows to scrub all chunks known to the full set of backup versions, which could be used alternatively to scrubbing each single version.
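To illustrate the proposal: instead of scrubbing version by version, collect each distinct block uid once across all versions and scrub that set. A rough sketch of the idea only — the data layout here is hypothetical, not backy2's actual API:

```python
def distinct_blocks(versions):
    """Map each block uid to one representative block across all versions."""
    seen = {}
    for version in versions:
        for block in version['blocks']:
            if block['uid'] is not None:  # sparse blocks carry no uid
                seen.setdefault(block['uid'], block)
    return seen

def scrub_all(versions, scrub_block):
    """Scrub every distinct block exactly once instead of once per version."""
    for uid, block in distinct_blocks(versions).items():
        scrub_block(uid, block)
```

With 20 mostly identical snapshots this reads each shared block once instead of 20 times.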

A way to switch from sqlite to postgresql

Hi,

I had the problem to import an already in use sqlite database to postgresql.
Here is a small HowTo:

  1. Install sqlite3 if not already installed.
  2. Create a file called export_backy2_sqlite
  3. Fill it with:

.output backy2.sql
.mode insert versions
select * from versions;
.mode insert stats
select * from stats;
.mode insert tags
select * from tags;
.mode insert blocks
select * from blocks;
.mode insert deleted_blocks
select * from deleted_blocks;
.quit

  4. In the directory of the SQLite database (/var/lib/backy2), execute:

sqlite3 backy.sqlite -init export_backy2_sqlite

You have to exit sqlite3 by hand, since the .quit or .exit at the end of the file is ignored.

  5. Install joe (an editor which can handle very large files)
  6. Open the created file with

joe backy2.sql

At the beginning of the file enter a new line with

BEGIN;

as content.

Press CTRL+k v to jump to the end of the file and enter

COMMIT;

Save the file with CTRL+k x

Without this, the import takes ages.

  7. Install postgresql if not already done.
  8. Run psql and execute:

create user backy2 with encrypted password 'backy2';
create database backy2;
grant all privileges on database backy2 to backy2;

\q

  9. Adjust your settings in /etc/backy.cfg or create a new file with postgres settings.
  10. Run

backy2 [-c /etc/YourNewFile] initdb

  11. Run

psql -q backy2 < /var/lib/backy2/backy2.sql

And wait .....

That's it.

Now you have your sqlite data imported in postgresql.

I hope this saves someone a long stretch of trial and error.
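The hand-editing steps above (adding BEGIN;/COMMIT; with joe) can also be folded into a small script that dumps the tables as INSERT statements inside one transaction. A sketch using only the standard library — the quoting is deliberately naive and the table list is taken from the schema above:

```python
import sqlite3

TABLES = ['versions', 'stats', 'tags', 'blocks', 'deleted_blocks']

def quote(value):
    """Very naive SQL literal quoting, good enough for backy2's columns."""
    if value is None:
        return 'NULL'
    if isinstance(value, (int, float)):
        return str(value)
    return "'" + str(value).replace("'", "''") + "'"

def dump_inserts(db_path, out_path, tables=TABLES):
    con = sqlite3.connect(db_path)
    with open(out_path, 'w') as out:
        out.write('BEGIN;\n')  # one transaction keeps the psql import fast
        for table in tables:
            cols = [row[1] for row in con.execute(f'PRAGMA table_info({table})')]
            col_list = ', '.join(cols)
            for row in con.execute(f'SELECT * FROM {table}'):
                values = ', '.join(quote(v) for v in row)
                out.write(f'INSERT INTO {table} ({col_list}) VALUES ({values});\n')
        out.write('COMMIT;\n')
```

The resulting file can be fed to psql directly, skipping the editor step entirely.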

(builtins.TypeError) SQLite DateTime type only accepts Python datetime and date objects as input

I'm using SQLite. How can I fix this?

[root@localhost ~]# backy2 backup file:///home/monday/monday.image full_monday
INFO: $ /usr/local/bin/backy2 backup file:///home/monday/monday.image full_monday
ERROR: Unexpected exception
ERROR: (builtins.TypeError) SQLite DateTime type only accepts Python datetime and date objects as input.
[SQL: INSERT INTO versions (uid, date, expire, name, snapshot_name, size, size_bytes, valid, protected) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)]
[parameters: [{'uid': '1315e2e6-35d8-11ea-86af-52540053aa36', 'size_bytes': 10000000000, 'date': '2020-01-13 07:41:13', 'valid': 0, 'protected': 0, 'name': 'full_monday', 'snapshot_name': '', 'size': 2385, 'expire': None}]]
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1173, in _execute_context
context = constructor(dialect, self, conn, *args)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 799, in _init_compiled
param.append(processors[key](compiled_params[key]))
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/dialects/sqlite/base.py", line 759, in process
"SQLite DateTime type only accepts Python "
TypeError: SQLite DateTime type only accepts Python datetime and date objects as input.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/backy2-2.10.6-py3.6.egg/backy2/scripts/backy.py", line 740, in main
func(**func_args)
File "/usr/local/lib/python3.6/site-packages/backy2-2.10.6-py3.6.egg/backy2/scripts/backy.py", line 95, in backup
backy.backup(name, snapshot_name, source, hints, from_version, tags, expire_date)
File "/usr/local/lib/python3.6/site-packages/backy2-2.10.6-py3.6.egg/backy2/backy.py", line 520, in backup
version_uid = self._prepare_version(name, snapshot_name, source_size, from_version)
File "/usr/local/lib/python3.6/site-packages/backy2-2.10.6-py3.6.egg/backy2/backy.py", line 79, in _prepare_version
version_uid = self.meta_backend.set_version(name, snapshot_name, size, size_bytes, 0)
File "/usr/local/lib/python3.6/site-packages/backy2-2.10.6-py3.6.egg/backy2/meta_backends/sql.py", line 204, in set_version
self.session.commit()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 1036, in commit
self.transaction.commit()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 503, in commit
self._prepare_impl()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 482, in _prepare_impl
self.session.flush()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 2479, in flush
self._flush(objects)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 2617, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/util/langhelpers.py", line 68, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/util/compat.py", line 153, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 2577, in _flush
flush_context.execute()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/orm/unitofwork.py", line 422, in execute
rec.execute(self)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/orm/unitofwork.py", line 589, in execute
uow,
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/orm/persistence.py", line 245, in save_obj
insert,
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/orm/persistence.py", line 1084, in _emit_insert_statements
c = cached_connections[connection].execute(statement, multiparams)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 982, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/sql/elements.py", line 287, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1101, in _execute_clauseelement
distilled_params,
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1176, in _execute_context
e, util.text_type(statement), parameters, None, None
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1476, in _handle_dbapi_exception
util.raise_from_cause(sqlalchemy_exception, exc_info)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/util/compat.py", line 398, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/util/compat.py", line 152, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1173, in _execute_context
context = constructor(dialect, self, conn, *args)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 799, in _init_compiled
param.append(processors[key](compiled_params[key]))
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.12-py3.6-linux-x86_64.egg/sqlalchemy/dialects/sqlite/base.py", line 759, in process
"SQLite DateTime type only accepts Python "
sqlalchemy.exc.StatementError: (builtins.TypeError) SQLite DateTime type only accepts Python datetime and date objects as input.
[SQL: INSERT INTO versions (uid, date, expire, name, snapshot_name, size, size_bytes, valid, protected) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)]
[parameters: [{'uid': '1315e2e6-35d8-11ea-86af-52540053aa36', 'size_bytes': 10000000000, 'date': '2020-01-13 07:41:13', 'valid': 0, 'protected': 0, 'name': 'full_monday', 'snapshot_name': '', 'size': 2385, 'expire': None}]]
INFO: Backy failed.

backy2 uses huge amount of RAM backing up RBD objects

RAM usage appears to grow linearly as the backup proceeds and tracks the size of the object being backed up. E.g. a 100GB backup using backup.sh will require 100GB of RAM, or the process gets killed by the OOM killer.
This is on ubuntu 18.04.1 server + backy2_2.9.18 (Earlier versions of OS and backy2 show same symptoms)

Add a "build" section to README

Describe how to build the pex and the deb package, as asked in #1, which resulted in:

make deb
Depending on the database you want to use, you might want to install the matching python3-* database driver after installing the deb, because only the sqlalchemy dependency is listed.

For more details look at the debian directory.

deb packet dependency not correct

Hi,

To back up disks from an older Proxmox Ceph cluster I installed the deb package.
That Proxmox release is based on Debian Jessie.
I noticed that I need python3-sqlalchemy >= 1.1.11
Ok, so I installed python3-pip and downloaded this by hand.
But still no luck: an error occurred at backy2 initdb, in sql.py line 178.
After some investigations I found the 'bug':

alembic_cfg.attributes['connection'] = connection

attributes has been available since alembic 0.7.5:
https://alembic.sqlalchemy.org/en/latest/api/config.html

Debian Jessie ships 0.6.5.

After installing 0.7.6 with
pip3 install alembic==0.7.6

and copying the files to the correct directory, backy2 worked without problems.

The dependencies in the deb packet should be extended to reflect this:
python3-alembic (>= 0.7.5)

It could save some time :)

restore to lvm complains about target size

I tried to restore a backup onto another LVM block device, so I created it a little bit larger than the original (9G). When I try to restore, backy2 complains that the target is too small.

I removed the LV and created a smaller one; backy2 is working now, but surely this is a bug?

root@xen1:/recovery# lvcreate --name tmproot --size 9G vmvg
Logical volume "tmproot" created.

root@xen1:/recovery# backy2 -c /etc/backy-qnap2.cfg restore -f 2cdd1778-4d0b-11e9-b1a8-001999e0507d file:///dev/vmvg/tmproot
INFO: $ /usr/bin/backy2 -c /etc/backy-qnap2.cfg restore -f 2cdd1778-4d0b-11e9-b1a8-001999e0507d file:///dev/vmvg/tmproot
ERROR: Target size is too small. Has 9663676416b, need 8996782080b.
Error opening restore target.

root@xen1:/recovery# lvremove /dev/vmvg/tmproot
Do you really want to remove active logical volume vmvg/tmproot? [y/n]: y
Logical volume "tmproot" successfully removed

root@xen1:/recovery# lvcreate --name tmproot --size 8G vmvg
Logical volume "tmproot" created.

root@xen1:/recovery# backy2 -c /etc/backy-qnap2.cfg restore -f 2cdd1778-4d0b-11e9-b1a8-001999e0507d file:///dev/vmvg/tmproot
INFO: $ /usr/bin/backy2 -c /etc/backy-qnap2.cfg restore -f 2cdd1778-4d0b-11e9-b1a8-001999e0507d file:///dev/vmvg/tmproot
INFO: Restored 1/2124 blocks (0.0%)
INFO: Restored 12/2124 blocks (0.6%)
...

Debian

What packages are needed to make the package?

  • virtualenv
  • python3-dev
  • python3-pexpect

But I am still unable to build the package.

pex.resolver.Untranslateable: Package SourcePackage('file:///root/backy2/build/.pex/psycopg2-2.6.2.tar.gz') is not translateable by ChainedTranslator(EggTranslator, SourceTranslator)
Makefile:27: recipe for target 'build/backy2.pex' failed
make: *** [build/backy2.pex] Error 1

(sqlite3.OperationalError) disk I/O error

How do I fix this disk I/O error? I'm using /backup1 (/dev/sdb1, mounted), GPT partitions.

Device Boot Start End Blocks Id System
/dev/sdb1 1 4294967295 2147483647+ ee GPT

[root@localhost ~]# cat /backup1/backy2/backup/test/Monday_20200427/backy.log
2020-04-27 14:50:16,811 [13631] $ /usr/local/bin/backy2 -c /backup1/backy2/backup/test/Monday_20200427/backy.cfg initdb
2020-04-27 14:50:17,132 [13631] Unexpected exception
2020-04-27 14:50:17,133 [13631] (sqlite3.OperationalError) disk I/O error
[SQL:
CREATE TABLE stats (
date DATETIME NOT NULL,
version_uid VARCHAR(36) NOT NULL,
version_name VARCHAR NOT NULL,
version_size_bytes BIGINT NOT NULL,
version_size_blocks BIGINT NOT NULL,
bytes_read BIGINT NOT NULL,
blocks_read BIGINT NOT NULL,
bytes_written BIGINT NOT NULL,
blocks_written BIGINT NOT NULL,
bytes_found_dedup BIGINT NOT NULL,
blocks_found_dedup BIGINT NOT NULL,
bytes_sparse BIGINT NOT NULL,
blocks_sparse BIGINT NOT NULL,
duration_seconds BIGINT NOT NULL,
PRIMARY KEY (version_uid)
)

]
(Background on this error at: http://sqlalche.me/e/e3q8)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1248, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 590, in do_execute
cursor.execute(statement, parameters)
sqlite3.OperationalError: disk I/O error

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/backy2-2.10.5-py3.6.egg/backy2/scripts/backy.py", line 740, in main
func(**func_args)
File "/usr/local/lib/python3.6/site-packages/backy2-2.10.5-py3.6.egg/backy2/scripts/backy.py", line 459, in initdb
self.backy(initdb=True)
File "/usr/local/lib/python3.6/site-packages/backy2-2.10.5-py3.6.egg/backy2/backy.py", line 49, in __init__
meta_backend.initdb()
File "/usr/local/lib/python3.6/site-packages/backy2-2.10.5-py3.6.egg/backy2/meta_backends/sql.py", line 172, in initdb
Base.metadata.create_all(self.engine, checkfirst=False) # checkfirst False will raise when it finds an existing table
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/sql/schema.py", line 4321, in create_all
ddl.SchemaGenerator, self, checkfirst=checkfirst, tables=tables
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 2058, in _run_visitor
conn._run_visitor(visitorcallable, element, **kwargs)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1627, in _run_visitor
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/sql/visitors.py", line 144, in traverse_single
return meth(obj, **kw)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/sql/ddl.py", line 781, in visit_metadata
_is_metadata_operation=True,
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/sql/visitors.py", line 144, in traverse_single
return meth(obj, **kw)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/sql/ddl.py", line 826, in visit_table
include_foreign_key_constraints, # noqa
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 984, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/sql/ddl.py", line 72, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1046, in _execute_ddl
compiled,
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1288, in _execute_context
e, statement, parameters, cursor, context
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1482, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/util/compat.py", line 178, in raise_
raise exception
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1248, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.16-py3.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 590, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) disk I/O error
[SQL:
CREATE TABLE stats (
date DATETIME NOT NULL,
version_uid VARCHAR(36) NOT NULL,
version_name VARCHAR NOT NULL,
version_size_bytes BIGINT NOT NULL,
version_size_blocks BIGINT NOT NULL,
bytes_read BIGINT NOT NULL,
blocks_read BIGINT NOT NULL,
bytes_written BIGINT NOT NULL,
blocks_written BIGINT NOT NULL,
bytes_found_dedup BIGINT NOT NULL,
blocks_found_dedup BIGINT NOT NULL,
bytes_sparse BIGINT NOT NULL,
blocks_sparse BIGINT NOT NULL,
duration_seconds BIGINT NOT NULL,
PRIMARY KEY (version_uid)
)

]
(Background on this error at: http://sqlalche.me/e/e3q8)
2020-04-27 14:50:17,135 [13631] Backy failed.

How to back up multiple VM diskimages/snapshots from Ceph

Hello,
I'm sure this has been thought of before, so I must be missing something.
I am trying to test Backy2 to back up disk images stored on Ceph RBD. Here is the scenario:
VM ID #101
Disk images: vm-101-disk-0, vm-101-disk-1
Disk #1 Snapshots: snap-vm-101-0-1, snap-vm-102-0-2, snap-vm-101-0-3
Disk #2 Snapshots: snap-vm-101-1-1, snap-vm-102-1-2, snap-vm-101-1-3

I did a backup using the following :
#backy2 backup rbd://vm-101-disk-0@snap-vm-101-0-1 vm-101-disk-0

Now my question is: how do I back up all the disk images/snapshots at once? Do I have to back up each snapshot separately for each disk image? And how does this scale to, say, 50 VMs with 2 disk images each, i.e. 100 disk images in total?

I have a script which auto snapshots all of my VMs daily with auto naming. How do I tell Backy2 to always grab the latest snapshot instead of typing it every time for ALL the disk images?

I hope my question/concern makes sense. Any help would be hugely appreciated!
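The "always grab the latest snapshot" loop is commonly scripted around the rbd CLI. A sketch of that logic — the tabular layout of `rbd snap ls` (first column is the snap id) is assumed here, so check it against your Ceph version:

```python
import subprocess

def latest_snapshot(pool, image, run=subprocess.check_output):
    """Pick the newest snapshot (highest SNAPID) of an RBD image.

    Parses the tabular output of `rbd snap ls pool/image`; the first
    column is the numeric snap id, the second the snapshot name.
    """
    out = run(['rbd', 'snap', 'ls', f'{pool}/{image}'], text=True)
    rows = [line.split() for line in out.splitlines()[1:] if line.strip()]
    return max(rows, key=lambda r: int(r[0]))[1] if rows else None

def backup_latest(pool, images):
    """Back up the newest snapshot of each image with backy2."""
    for image in images:
        snap = latest_snapshot(pool, image)
        if snap:
            subprocess.check_call(
                ['backy2', 'backup', f'rbd://{pool}/{image}@{snap}', image])
```

Feeding it the output of `rbd ls <pool>` covers the 50-VM case without typing snapshot names by hand.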

delete the backend block data

Why isn't the backend block data deleted when I run the backy2 rm -f command?

backupdata

[root@localhost ~]# backy2 stats
INFO: $ /usr/local/bin/backy2 stats
+---------------------+--------------------------------------+----------------+-------------+-------------+-------------+-------------+---------------+----------------+-------------+--------------+--------------+---------------+--------------+
| date | uid | name | size bytes | size blocks | bytes read | blocks read | bytes written | blocks written | bytes dedup | blocks dedup | bytes sparse | blocks sparse | duration (s) |
+---------------------+--------------------------------------+----------------+-------------+-------------+-------------+-------------+---------------+----------------+-------------+--------------+--------------+---------------+--------------+
| 2020-02-12 05:02:02 | 56c3d654-4d54-11ea-96ab-90b11c43e4ce | diff_tuesday | 23008772096 | 5486 | 23008772096 | 5486 | 2720923648 | 649 | 5762973696 | 1374 | 14524874752 | 3463 | 200 |
| 2020-02-12 05:44:23 | 0286f516-4d5a-11ea-832c-90b11c43e4ce | diff_wednesday | 23011131392 | 5487 | 23011131392 | 5487 | 2752643072 | 657 | 8413773824 | 2006 | 11844714496 | 2824 | 306 |
| 2020-02-12 06:36:10 | 7d267ace-4d61-11ea-b4dc-90b11c43e4ce | diff_thursday | 23014932480 | 5488 | 23014932480 | 5488 | 2806775808 | 670 | 11035213824 | 2631 | 9172942848 | 2187 | 200 |
| 2020-02-12 06:54:34 | 1065a362-4d64-11ea-9dfd-90b11c43e4ce | diff_friday | 23017881600 | 5488 | 23017947136 | 5488 | 2788818944 | 665 | 13686013952 | 3263 | 6543114240 | 1560 | 199 |
| 2020-02-12 07:15:48 | 04e3a374-4d67-11ea-9b6f-90b11c43e4ce | diff_saturday | 23029022720 | 5491 | 23029022720 | 5491 | 2774728704 | 662 | 16290676736 | 3884 | 3963617280 | 945 | 203 |
| 2020-02-12 07:42:07 | b04666d6-4d6a-11ea-99f0-90b11c43e4ce | diff_sunday | 23031840768 | 5492 | 23031840768 | 5492 | 2802712576 | 669 | 18970836992 | 4523 | 1258291200 | 300 | 206 |
| 2020-02-12 08:03:41 | ab51bb8c-4d6d-11ea-92de-90b11c43e4ce | full_monday2 | 24360714240 | 5809 | 24360714240 | 5809 | 2734882816 | 653 | 21621637120 | 5155 | 4194304 | 1 | 220 |
+---------------------+--------------------------------------+----------------+-------------+-------------+-------------+-------------+---------------+----------------+-------------+--------------+--------------+---------------+--------------+
INFO: Backy complete.

... backy2 rm -f ...

backupdata deleted

[root@localhost ~]# backy2 ls
INFO: $ /usr/local/bin/backy2 ls
+------+------+---------------+------+------------+-----+-------+-----------+------+
| date | name | snapshot_name | size | size_bytes | uid | valid | protected | tags |
+------+------+---------------+------+------------+-----+-------+-----------+------+
+------+------+---------------+------+------------+-----+-------+-----------+------+
INFO: Backy complete.

Why is the backend data still alive?

[root@localhost ~]# du -sh /disk/home/backy2/backy.sqlite
16M /disk/home/backy2/backy.sqlite

[root@localhost ~]# du -sh /disk/home/backy2/data/
24G /disk/home/backy2/data/

Backy2 status?

Hi, @wamdam, I just found and started using another project: Benji Backup https://github.com/elemental-lf/benji (fork of backy2).

And I just wanted to know the current status of backy2: is it abandoned/deprecated or not?
Is there any hope to get them merged in the future?

CockroachDB metadata backend

Hello,

I'm trying to use Backy2 with CockroachDB. CockroachDB is supposedly fully compatible with PostgreSQL syntax. However, I have an issue when trying to initialize the database (backy2 initdb), as it fails to determine the database version.

    INFO: $ /usr/bin/backy2 initdb
   ERROR: Unexpected exception
   ERROR: Could not determine version from string 'CockroachDB CCL v19.1.2 (x86_64-unknown-linux-gnu, built 2019/06/07 17:32:15, go1.11.6)'

Any chance to see Backy2 compatibility with CockroachDB implemented at some point?

Thanks!

Be able to specify a retention time

I haven't found an easy way to delete backups older than, say, 30 days. Is there any way to specify a retention time, or is such a feature planned?
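Lacking a built-in option, a small wrapper can compute which versions have expired and feed their uids to `backy2 rm` one by one. A sketch of the date filtering only — the (date, uid) input format is an assumption, to be scraped from `backy2 ls` output on your version:

```python
from datetime import datetime, timedelta

def expired_uids(versions, days=30, now=None):
    """Return the uids of versions older than `days`.

    `versions` is a list of (date_string, uid) pairs; the date format
    below matches what backy2's tables display, but verify it locally.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    return [uid for date, uid in versions
            if datetime.strptime(date, '%Y-%m-%d %H:%M:%S') < cutoff]
```

Each returned uid could then be passed to `backy2 rm <uid>` from cron.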

deleted_blocks running out of ids

Error found in backy2's output is:

 [SQL: 'INSERT INTO deleted_blocks (uid, size, delete_candidate, time) VALUES (%(uid)s, %(size)s, %(delete_candidate)s, %(time)s) RETURNING deleted_blocks.id'] [parameters: {'uid': '…', 'size': 4194304, 'delete_candidate': 0, 'time': …}]
Traceback (most recent call last):
  File "/root/.pex/install/SQLAlchemy-1.1.14-py3.5-linux-x86_64.egg.cf435f328b9e35ef42dfcd6909f15b6ac2afeff2/SQLAlchemy-1.1.14-py3.5-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1182, in _execute_context
    context)
  File "/root/.pex/install/SQLAlchemy-1.1.14-py3.5-linux-x86_64.egg.cf435f328b9e35ef42dfcd6909f15b6ac2afeff2/SQLAlchemy-1.1.14-py3.5-linux-x86_64.egg/sqlalchemy/engine/default.py", line 470, in do_execute
    cursor.execute(statement, parameters)
psycopg2.DataError: integer out of range

Solvable by changing the id column in the table deleted_blocks from integer to bigint (any type with more than 4 bytes).
Example in psql:

alter table deleted_blocks alter column id type bigint;

python libraries have too new versions

Hey there,
This looks like a nice piece of software, but I can't test it on my Proxmox system, as the required version of setproctitle is too new. Proxmox has version 1.1.8-1, so I tried to install the newest version, 1.1.10-1, but it requires a newer version of Python than is installed, and that just seems too risky to install without a lot of testing.
So basically I was wondering whether the required version of setproctitle could be dropped back, or is there some compelling reason to keep it at 1.1.10-1?

Thanks

Cannot specify Ceph Client ID

Hi,

it seems everybody else is just using the client.admin key, but I would prefer to use a less privileged key for backup.
For this, id needs to be specified as outlined here:
haiwen/seafile#800 (comment)
i.e. something like:

ceph_client_id = config.get('ceph_client_id')
self.cluster = rados.Rados(conffile=ceph_conffile, rados_id=ceph_client_id)

should be used here:

self.cluster = rados.Rados(conffile=ceph_conffile)

Would you be willing to add this?

Backup scheduler for automatic creation and removal of Backy2 backups

We needed to manage the creation and removal of backups, so we created Schelly-Backy2, an adapter for Schelly that manages Backy2 backups based on custom retention policies (perform backups every day at 3 a.m. and keep 2 daily backups, the last 3 weekly backups, 6 monthly backups, and so on).

I read somewhere else that Backy2 was planning to add support for backup scheduling. Maybe this utility can help you:
https://github.com/flaviostutz/schelly-backy2
https://github.com/flaviostutz/schelly

What do you think?

initdb fails with s3's IllegalLocationConstraintException

version: 2.9.17

# backy2 initdb
    INFO: $ /usr/bin/backy2 initdb
Traceback (most recent call last):
  File "/usr/bin/backy2", line 11, in <module>
    load_entry_point('backy2==2.9.17', 'console_scripts', 'backy2')()
  File "/usr/lib/python3/dist-packages/backy2/scripts/backy.py", line 570, in main
    commands = Commands(args.machine_output, Config)
  File "/usr/lib/python3/dist-packages/backy2/scripts/backy.py", line 27, in __init__
    self.backy = backy_from_config(Config)
  File "/usr/lib/python3/dist-packages/backy2/utils.py", line 45, in backy_from_config
    data_backend = DataBackendLib.DataBackend(config_DataBackend)
  File "/usr/lib/python3/dist-packages/backy2/data_backends/s3.py", line 57, in __init__
    self.bucket = self.conn.create_bucket(bucket_name)
  File "/usr/lib/python3/dist-packages/boto/s3/connection.py", line 621, in create_bucket
    response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>IllegalLocationConstraintException</Code><Message>The unspecified location constraint is incompatible for the region specific endpoint this request was sent to.</Message></Error>

This targets Amazon S3 in eu-central-1.
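The traceback shows the s3 backend unconditionally calling create_bucket, which Amazon rejects on region-specific endpoints unless a location constraint is sent. A possible shape of the fix, sketched against boto 2.x's API (lookup and the location argument do exist there, but this is untested against a live endpoint):

```python
def ensure_bucket(conn, name, region):
    """Fetch the bucket if it already exists; otherwise create it with an
    explicit location constraint (boto 2.x style S3Connection API)."""
    bucket = conn.lookup(name)  # returns None when the bucket is missing
    if bucket is None:
        bucket = conn.create_bucket(name, location=region)
    return bucket
```

Looking the bucket up first also avoids re-creating an existing bucket on every backy2 start.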

Ceph error

Hello,
i have this error:

backy2 backup rbd://ceph-lxc/vm-100-disk-1@backup1 vm-100-disk-1@backup1
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 449, in _build_master
ws.require(requires)
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 745, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 644, in resolve
raise VersionConflict(dist, req)
pkg_resources.VersionConflict: (setproctitle 1.1.8 (/usr/lib/python3/dist-packages), Requirement.parse('setproctitle>=1.1.10'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/bin/backy2", line 5, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2876, in <module>
working_set = WorkingSet._build_master()
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 451, in _build_master
return cls._build_from_requirements(requires)
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 464, in _build_from_requirements
dists = ws.resolve(reqs, Environment())
File "/usr/lib/python3/dist-packages/pkg_resources.py", line 639, in resolve
raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: setproctitle>=1.1.10
root@n1:~#

can you help me ?
thanks

Add per job config file

I am trying to add a new option to the arg parser:

+    parser.add_argument(
+        '-c', '--config', default=None, help='Use a backup specific config file', type=str)
 
     subparsers = parser.add_subparsers()
 
@@ -544,8 +546,10 @@ def main():
         #console_level = logging.INFO
     else:
         console_level = logging.INFO
-
-    Config = partial(_Config, conf_name='backy')
+    if args.config is not None and args.config != '':
+        Config = partial(_Config, conf_name='backy' + '_' + args.config)
+    else:
+        Config = partial(_Config, conf_name='backy')

But I am facing the following exception:

 ERROR: Unexpected exception
  ERROR: backup() got an unexpected keyword argument 'config'
Traceback (most recent call last):
 File "/usr/lib/python3/dist-packages/backy2/scripts/backy.py", line 575, in main
   func(**func_args)
TypeError: backup() got an unexpected keyword argument 'config'
   INFO: Backy failed.

Unfortunately I do not know where to add the new option so that the backup method accepts it.
Thanks, Georg
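The cause of the exception: backy2's main() converts the parsed argparse namespace into a dict and forwards it wholesale to the selected subcommand via func(**func_args), after removing the globally handled options. A new global option like --config must therefore be popped from that dict as well, or every subcommand receives it as an unexpected keyword argument. A minimal sketch of the pattern (the subcommand and variable names are illustrative, not backy2's exact code):

```python
# Sketch of backy2's dispatch pattern (names are illustrative): all
# parsed arguments are forwarded to the subcommand, so a new global
# option must be popped from the dict before the call.
import argparse

def backup(source, destination):
    return (source, destination)

parser = argparse.ArgumentParser()
parser.add_argument('-c', '--config', default=None,
                    help='Use a backup specific config file')
subparsers = parser.add_subparsers()
p = subparsers.add_parser('backup')
p.add_argument('source')
p.add_argument('destination')
p.set_defaults(func=backup)

args = parser.parse_args(['-c', 'jobA', 'backup', 'src', 'dst'])
func_args = dict(args._get_kwargs())
func = func_args.pop('func')
config_name = func_args.pop('config')  # the step the patch is missing
result = func(**func_args)             # no unexpected 'config' kwarg
```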

backy2 initdb failure

Hello,
I am getting the following error when trying to run #backy2 initdb. This is on Debian. I know the doc says Ubuntu 16. But that is not an option for me at this moment.
Thanks!
[screenshot attached: backy2db-error]

Crash when trying to restore to rbd after upgrading ceph cluster

I get the following traceback. Creating the backup works, but if I test a restore, it doesn't work any more.

Tested with 2.9.17 and 2.10.5. How can I help more? Our ceph cluster has been upgraded to 14.2.8.

backy2 restore bb45182e-6332-11ea-aa86-0cc47a886a00 rdb://nvme/vm-127-disk-temp
INFO: $ /usr/bin/backy2 restore bb45182e-6332-11ea-aa86-0cc47a886a00 rdb://nvme/vm-127-disk-temp
ERROR: Unexpected exception
ERROR: No module named 'backy2.io.rdb'
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/backy2/scripts/backy.py", line 740, in main
func(**func_args)
File "/usr/lib/python3/dist-packages/backy2/scripts/backy.py", line 101, in restore
backy.restore(version_uid, target, sparse, force)
File "/usr/lib/python3/dist-packages/backy2/backy.py", line 279, in restore
io = self.get_io_by_source(target)
File "/usr/lib/python3/dist-packages/backy2/backy.py", line 155, in get_io_by_source
IOLib = importlib.import_module('backy2.io.{}'.format(scheme))
File "/usr/lib/python3.5/importlib/init.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 986, in _gcd_import
File "", line 969, in _find_and_load
File "", line 956, in _find_and_load_unlocked
ImportError: No module named 'backy2.io.rdb'
INFO: Backy failed.

initdb is failing with regular local sqlite

/etc/backy.conf:

engine: sqlite:////tml/test.sqlite

#backy2 initdb

INFO: $ /bin/backy2 initdb

ERROR: Unexpected exception
ERROR: (sqlite3.OperationalError) table deleted_blocks already exists [SQL: '\nCREATE TABLE deleted_blocks (\n\tid INTEGER NOT NULL, \n\tuid VARCHAR(32), \n\tsize BIGINT, \n\tdelete_candidate INTEGER NOT NULL, \n\ttime BIGINT NOT NULL, \n\tPRIMARY KEY (id)\n)\n\n'] (Background on this error at: http://sqlalche.me/e/e3q8)
Traceback (most recent call last):
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/engine/default.py", line 507, in do_execute
cursor.execute(statement, parameters)
sqlite3.OperationalError: table deleted_blocks already exists

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/lib/python3.4/site-packages/backy2-2.9.17-py3.4.egg/backy2/scripts/backy.py", line 583, in main
func(**func_args)
File "/usr/lib/python3.4/site-packages/backy2-2.9.17-py3.4.egg/backy2/scripts/backy.py", line 356, in initdb
self.backy(initdb=True)
File "/usr/lib/python3.4/site-packages/backy2-2.9.17-py3.4.egg/backy2/backy.py", line 49, in init
meta_backend.initdb()
File "/usr/lib/python3.4/site-packages/backy2-2.9.17-py3.4.egg/backy2/meta_backends/sql.py", line 169, in initdb
Base.metadata.create_all(self.engine, checkfirst=False) # checkfirst False will raise when it finds an existing table
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/sql/schema.py", line 4004, in create_all
tables=tables)
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1940, in _run_visitor
conn._run_visitor(visitorcallable, element, **kwargs)
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1549, in _run_visitor
**kwargs).traverse_single(element)
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/sql/visitors.py", line 121, in traverse_single
return meth(obj, **kw)
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/sql/ddl.py", line 757, in visit_metadata
_is_metadata_operation=True)
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/sql/visitors.py", line 121, in traverse_single
return meth(obj, **kw)
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/sql/ddl.py", line 791, in visit_table
include_foreign_key_constraints=include_foreign_key_constraints
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/engine/base.py", line 948, in execute
return meth(self, multiparams, params)
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/sql/ddl.py", line 68, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1009, in _execute_ddl
compiled
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1200, in _execute_context
context)
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
exc_info
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/util/compat.py", line 186, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/usr/lib/python3.4/site-packages/SQLAlchemy-1.2.5-py3.4-linux-x86_64.egg/sqlalchemy/engine/default.py", line 507, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) table deleted_blocks already exists [SQL: '\nCREATE TABLE deleted_blocks (\n\tid INTEGER NOT NULL, \n\tuid VARCHAR(32), \n\tsize BIGINT, \n\tdelete_candidate INTEGER NOT NULL, \n\ttime BIGINT NOT NULL, \n\tPRIMARY KEY (id)\n)\n\n'] (Background on this error at: http://sqlalche.me/e/e3q8)
INFO: Backy failed.

SELinux is disabled, there are no other errors, and the sqlite file isn't present and isn't created.
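A note on the traceback: initdb calls create_all(..., checkfirst=False), which issues an unguarded CREATE TABLE and raises as soon as the database the engine actually opens already contains the tables, so the engine URL (sqlite:////tml/test.sqlite, possibly not the intended path) is worth double-checking. The failure class reproduces with plain sqlite3:

```python
# Sketch: reproducing the error class with plain sqlite3. backy2's
# initdb uses create_all(..., checkfirst=False), which issues an
# unguarded CREATE TABLE and therefore raises if the table exists.
import sqlite3

conn = sqlite3.connect(':memory:')
ddl = 'CREATE TABLE deleted_blocks (id INTEGER NOT NULL PRIMARY KEY)'
conn.execute(ddl)
try:
    conn.execute(ddl)        # second, unguarded CREATE TABLE
    raised = False
except sqlite3.OperationalError as exc:
    raised = True
    message = str(exc)       # "table deleted_blocks already exists"
```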

deleted_blocks.id may not be NULL

I was just trying to delete a backup. I don't know this error message. What does it mean? Please help.

[root@localhost ~]# backy2 ls
INFO: $ /usr/local/bin/backy2 ls
+---------------------+----------------+---------------+------+-------------+--------------------------------------+-------+-----------+----------------------------+
| date | name | snapshot_name | size | size_bytes | uid | valid | protected | tags |
+---------------------+----------------+---------------+------+-------------+--------------------------------------+-------+-----------+----------------------------+
| 2019-12-24 02:17:10 | diff_friday | | 7789 | 32667729920 | 7db45032-25f3-11ea-b507-525400e5d9c4 | 1 | 0 | b_daily,b_monthly,b_weekly |
| 2019-12-24 02:35:02 | diff_saturday | | 7789 | 32667992064 | fceae850-25f5-11ea-96df-525400e5d9c4 | 1 | 0 | b_daily,b_monthly,b_weekly |
| 2019-12-24 02:47:58 | diff_sunday | | 7789 | 32668319744 | cb64594a-25f7-11ea-b015-525400e5d9c4 | 1 | 0 | b_daily,b_monthly,b_weekly |
| 2019-12-24 02:02:55 | diff_thursday | | 7789 | 32667271168 | 80537f72-25f1-11ea-b51b-525400e5d9c4 | 1 | 0 | b_daily,b_monthly,b_weekly |
| 2019-12-24 01:35:18 | diff_tuesday | | 7789 | 32667205632 | a497a042-25ed-11ea-976e-525400e5d9c4 | 1 | 0 | b_daily,b_monthly,b_weekly |
| 2019-12-24 01:49:12 | diff_wednesday | | 7789 | 32667205632 | 955fdd22-25ef-11ea-b94c-525400e5d9c4 | 1 | 0 | b_daily,b_monthly,b_weekly |
| 2019-12-24 01:18:44 | full_monday | | 7789 | 32667205632 | 53c5b570-25eb-11ea-a906-525400e5d9c4 | 1 | 0 | b_daily,b_monthly,b_weekly |
+---------------------+----------------+---------------+------+-------------+--------------------------------------+-------+-----------+----------------------------+
INFO: Backy complete.

[root@localhost~]# backy2 rm 53c5b570-25eb-11ea-a906-525400e5d9c4 -f
INFO: $ /usr/local/bin/backy2 rm 53c5b570-25eb-11ea-a906-525400e5d9c4 -f
ERROR: Unexpected exception
ERROR: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely)
(sqlite3.IntegrityError) deleted_blocks.id may not be NULL
[SQL: INSERT INTO deleted_blocks (uid, size, delete_candidate, time) VALUES (?, ?, ?, ?)]
[parameters: ('27e0f57bb3WZTSVmnbwxZy3X7Yr7h7U2', 4194304, 0, 1577164518)]
(Background on this error at: http://sqlalche.me/e/gkpj)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1246, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 581, in do_execute
cursor.execute(statement, parameters)
sqlite3.IntegrityError: deleted_blocks.id may not be NULL

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/backy2-2.9.18-py3.6.egg/backy2/scripts/backy.py", line 583, in main
func(**func_args)
File "/usr/local/lib/python3.6/site-packages/backy2-2.9.18-py3.6.egg/backy2/scripts/backy.py", line 62, in rm
backy.rm(version_uid, force, disallow_rm_when_younger_than_days)
File "/usr/local/lib/python3.6/site-packages/backy2-2.9.18-py3.6.egg/backy2/backy.py", line 345, in rm
num_blocks = self.meta_backend.rm_version(version_uid)
File "/usr/local/lib/python3.6/site-packages/backy2-2.9.18-py3.6.egg/backy2/meta_backends/sql.py", line 380, in rm_version
affected_blocks.delete()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/orm/query.py", line 3752, in delete
delete_op.exec_()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/orm/persistence.py", line 1691, in exec_
self._do_pre()
File "", line 1, in
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/orm/persistence.py", line 1737, in _do_pre
session._autoflush()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 1588, in _autoflush
util.raise_from_cause(e)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/util/compat.py", line 398, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/util/compat.py", line 153, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 1577, in _autoflush
self.flush()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 2470, in flush
self._flush(objects)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 2608, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/util/langhelpers.py", line 68, in exit
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/util/compat.py", line 153, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/orm/session.py", line 2568, in _flush
flush_context.execute()
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/orm/unitofwork.py", line 422, in execute
rec.execute(self)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/orm/unitofwork.py", line 589, in execute
uow,
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/orm/persistence.py", line 245, in save_obj
insert,
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/orm/persistence.py", line 1137, in _emit_insert_statements
statement, params
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 982, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/sql/elements.py", line 287, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1101, in _execute_clauseelement
distilled_params,
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1250, in _execute_context
e, statement, parameters, cursor, context
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1476, in _handle_dbapi_exception
util.raise_from_cause(sqlalchemy_exception, exc_info)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/util/compat.py", line 398, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/util/compat.py", line 152, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1246, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.6/site-packages/SQLAlchemy-1.3.11-py3.6-linux-x86_64.egg/sqlalchemy/engine/default.py", line 581, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely)
(sqlite3.IntegrityError) deleted_blocks.id may not be NULL
[SQL: INSERT INTO deleted_blocks (uid, size, delete_candidate, time) VALUES (?, ?, ?, ?)]
[parameters: ('27e0f57bb3WZTSVmnbwxZy3X7Yr7h7U2', 4194304, 0, 1577164518)]
(Background on this error at: http://sqlalche.me/e/gkpj)
INFO: Backy failed.

Make the CLI more automation-friendly

I really like the concept of this tool, and it does its core functionality very well ... but according to the current documentation and help, anything that should be scripted is a pain to perform.

The first and most important missing feature is the ability to list the backups in a simple form that scripts can work with. This is needed for automated retention policy handling, e.g. to keep 7 daily, 3 weekly, and 1 monthly backups.

As you need the exact UID to be able to rm anything, you need to acquire a proper, date-ordered list first, but there is no support for this in the CLI: its output is unusable for scripting purposes.

You may say that I should start scripting against the sqlite database backend, but:

  • it is dangerous as the internal structure may change any time in the future
  • not convenient for easy tasks
  • every other backup tool has these abilities as backups are NOT created and deleted by humans

I may be missing something, or there may be other frontends to use for this, but I wasn't able to find information about it. Please give me hints if that is the case.

Thank you
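For what it's worth, backy2 does have a -m switch for machine-readable, pipe-separated output (the automation scripts further down this page rely on backy2 -m ls), and that output can be parsed for retention scripting. A sketch (the sample line's field layout is an assumption modelled on the backy2 ls table in the previous issue, not verbatim tool output):

```python
# Sketch: parsing pipe-separated `backy2 -m ls` output for retention
# scripting. The sample line below is an assumption modelled on the
# `backy2 ls` table above, not verbatim tool output.
sample = ('2019-12-24 02:17:10|diff_friday||7789|32667729920|'
          '7db45032-25f3-11ea-b507-525400e5d9c4|1|0|b_daily,b_monthly,b_weekly')

def parse_ls_line(line):
    (date, name, snapshot_name, size, size_bytes,
     uid, valid, protected, tags) = line.split('|')
    return {'date': date, 'name': name, 'uid': uid,
            'tags': tags.split(',')}

row = parse_ls_line(sample)
```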

ERROR: No option 'cluster_name' in section: 'io_rbd'

Commit 4fcc7d2 introduced config options to set cluster_name and rados_name.

Unfortunately this breaks existing configurations and is not yet documented in the config file documentation either.

I suggest changing

        cluster_name = config.get('cluster_name')
        rados_name = config.get('rados_name')

to:

        cluster_name = config.get('cluster_name', 'ceph')
        rados_name = config.get('rados_name', 'client.admin')

to allow omitting the values from the backy2.conf file like probably intended.
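The suggestion matches Python's configparser fallback semantics: without a fallback, a missing option raises NoOptionError (the reported error class), while with a fallback old configs keep working. A sketch of both behaviours (backy2 wraps config access in its own Config class, so the exact signatures in the patch above differ):

```python
# Sketch: why the suggested defaults keep old configs working.
# Without a fallback, configparser raises NoOptionError for a
# missing option; with one, the default is returned.
import configparser

config = configparser.ConfigParser()
config.read_string('[io_rbd]\nceph_conffile: /etc/ceph/ceph.conf\n')

try:
    config.get('io_rbd', 'cluster_name')
    raised = False
except configparser.NoOptionError:
    raised = True

cluster_name = config.get('io_rbd', 'cluster_name', fallback='ceph')
```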

How to backup to s3?

I have commented out type: backy2.data_backends.file so that only backy2.data_backends.s3 remains. Yet when I run a backup:

lvcreate --size 1G --snapshot --name snap /dev/ubuntu-vg/data
backy2 backup file:///dev/ubuntu-vg/snap root
lvremove -y /dev/ubuntu-vg/snap

Backup data gets stored on the host and I'm not seeing any backup data on my S3 bucket.

Here is my config

# backy configuration for backy2 Version 2

[DEFAULTS]
# Where should the backy logfile be placed?
# Backy will by default log INFO, WARNING and ERROR to this log.
# If you also need DEBUG information, please start backy with '-v'.
logfile: /var/log/backy.log

# Default block size. 4MB are recommended.
# DO NOT CHANGE WHEN BACKUPS EXIST!
block_size: 4194304

# Hash function to use. Use a large one to avoid collisions.
# DO NOT CHANGE WHEN BACKUPS EXIST!
hash_function: sha512

# for some operations, full backy or single versions need to be locked for
# simultaneous access. In the lock_dir we will create backy_*.lock files.
lock_dir: /run

# To be able to find other backys running, we need a system-wide unique name
# for all backy processes.
# DO NOT CHANGE WHILE backy2 PROCESSES ARE RUNNING!
process_name: backy2

# Allow rm of backup versions after n days (set to 0 to disable, i.e. to be
# able to delete any version)
disallow_rm_when_younger_than_days: 6


[MetaBackend]
# Of which type is the Metadata Backend Engine?
# Available types:
#   backy2.meta_backends.sql

#######################################
# backy2.meta_backends.sql
#######################################
type: backy2.meta_backends.sql

# Which SQL Server?
# Available servers:
#   sqlite:////path/to/sqlitefile
#   postgresql:///database
#   postgresql://user:password@host:port/database
engine: sqlite:////var/lib/backy2/backy.sqlite


[DataBackend]
# Which data backend to use?
# Available types:
#   backy2.data_backends.file
    backy2.data_backends.s3


#######################################
# backy2.data_backends.file
#######################################
#type: backy2.data_backends.file

# Store data to this path. A structure of 2 folders depth will be created
# in this path (e.g. '0a/33'). Blocks of DEFAULTS.block_size will be stored
# there. This is your backup storage!
#path: /var/lib/backy2/data

# How many writes to perform in parallel. This is useful if your backup space
# can perform parallel writes faster than serial ones.
#simultaneous_writes: 5

# How many reads to perform in parallel. This is useful if your backup space
# can perform parallel reads faster than serial ones.
#simultaneous_reads: 5

# Bandwidth throttling (set to 0 to disable, i.e. use full bandwidth)
# bytes per second
#bandwidth_read: 78643200
#bandwidth_write: 78643200

#######################################
# backy2.data_backends.s3
#######################################
type: backy2.data_backends.s3

# Your s3 access key
aws_access_key_id: JGMSOIKRWE4MHXM7KIRR

# Your s3 secret access key
aws_secret_access_key: 6DFft6LEu2H9AkWpgI3al4PZn8V6auDa9_MAwkAe

# Your aws host (IP or name)
host: 172.18.18.40

# The port to connect to (usually 80 if not secure or 443 if secure)
port: 9000

# Use HTTPS?
is_secure: false

# Store to this bucket name:
bucket_name: backy2

# How many s3 puts to perform in parallel
simultaneous_writes: 5

# How many reads to perform in parallel. This is useful if your backup space
# can perform parallel reads faster than serial ones.
simultaneous_reads: 5

# Bandwidth throttling (set to 0 to disable, i.e. use full bandwidth)
# bytes per second
bandwidth_read: 78643200
bandwidth_write: 78643200


[NBD]
# Where to cache backy blocks when NBD Server is used
# Enterprise version only
cachedir: /tmp


[io_file]
# Configure the file IO (file://<path>)
# This is for a file or a blockdevice (e.g. /dev/sda)

# How many parallel reads are permitted? (also affects the queue length)
simultaneous_reads: 5


[io_rbd]
# Configure the rbd IO (rbd://<pool>/<imagename>[@<snapshotname>])
# This accepts rbd images in the form rbd://pool/image@snapshot or rbd://pool/image

# Where to look for the ceph configfile to read keys and hosts from it
ceph_conffile: /etc/ceph/ceph.conf

# How many parallel reads are permitted? (also affects the queue length)
simultaneous_reads: 10

# When restoring images, new images are created (if you don't --force). For these
# newly created images, use these features:
new_image_features:
    RBD_FEATURE_LAYERING
    RBD_FEATURE_EXCLUSIVE_LOCK
#RBD_FEATURE_STRIPINGV2
#RBD_FEATURE_OBJECT_MAP
#RBD_FEATURE_FAST_DIFF
#RBD_FEATURE_DEEP_FLATTEN
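For comparison, a minimal sketch of a [DataBackend] section in which only the s3 backend is active (an assumption: backy2 expects exactly one uncommented type: option in this section, with all alternatives commented out):

```ini
[DataBackend]
# Which data backend to use?
# Available types:
#   backy2.data_backends.file
#   backy2.data_backends.s3
type: backy2.data_backends.s3
```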

Automation script needs adjustment

To make it work, we had to replace the following line
BACKY_SNAP_VERSION_UID=$(backy2 -m ls -s "$LAST_RBD_SNAP" "$VM"|grep -e '^version'|awk -F '|' '{ print $7 }')
With
BACKY_SNAP_VERSION_UID=$(backy2 -m ls -s "$LAST_RBD_SNAP" "$VM"|awk -F '|' 'NR>1 { print $6 }')

A way to do remote backup

There is no way to have a backy2 agent working on a backed-up machine and pushing backups back to the backup server.

Currently, I have a server dedicated to backy2, with lots of backup space and the sqlite DB in place. If that server can reach the backed up file or block device, it works great.

In my (quite common) scenario, I need to back up VMs located on a SAN-based LVM store. The backup server is a network infrastructure machine and has no HBA to access the SAN directly. The only hosts that can access the SAN are hypervisors with minimal disk space available. If I install backy2 on a hypervisor, I have to somehow make it reach the storage space on the backup server and the sqlite database.

I would love to be able to install a backy2 daemon on a hypervisor, and let it send over the bits it can see from the SAN over to the backup server directly. Alternatively, I could probably export the backy2 server's storage space over NFS to one of the hypervisors, but I'm not so sure about the database.

Documentation script issue

Hi everyone!

I think there is an issue with the script found in the backup section of the docs (http://backy2.com/docs/backup.html). The following line:

BACKY_SNAP_VERSION_UID=$(backy2 -m ls -s "$LAST_RBD_SNAP" "$VM"|awk -F '|' '{ print $6 }')

should be changed to:

BACKY_SNAP_VERSION_UID=$(backy2 -ms ls -s "$LAST_RBD_SNAP" "$VM"|awk -F '|' '{ print $6 }')

The only difference is the -ms. The output was incorrect, since it was returning the string "uid" in addition to the actual version UID. The initial backup then worked as expected, but the differential_backup function did not.

Best regards,

Rodrigo Brayner

Install issue on xenial64

I can't seem to get past unmet dependencies when installing the backy2_2.9.17_all.deb package.

vagrant@ubuntu-xenial:~$ sudo dpkg -i backy2_2.9.17_all.deb
Selecting previously unselected package backy2.
(Reading database ... 54490 files and directories currently installed.)
Preparing to unpack backy2_2.9.17_all.deb ...
Unpacking backy2 (2.9.17) ...
dpkg: dependency problems prevent configuration of backy2:
 backy2 depends on python3-dateutil; however:
  Package python3-dateutil is not installed.
 backy2 depends on python3-psutil; however:
  Package python3-psutil is not installed.
 backy2 depends on python3-setproctitle; however:
  Package python3-setproctitle is not installed.
 backy2 depends on python3-shortuuid; however:
  Package python3-shortuuid is not installed.

dpkg: error processing package backy2 (--install):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 backy2

To reproduce (requires VirtualBox + Vagrant):

$ vagrant init ubuntu/xenial64 && vagrant up && vagrant ssh

wget https://github.com/wamdam/backy2/releases/download/v2.9.17/backy2_2.9.17_all.deb

sudo dpkg -i backy2_2.9.17_all.deb


