rustic-rs / rustic_core
rustic_core - library for fast, encrypted, deduplicated backups that powers rustic-rs
Home Page: https://rustic.cli.rs/ecosystem/rustic-core/
License: Apache License 2.0
With the addition of rustic_backend and the change to a workspace, we need to refine the repository structure a bit; e.g., the root README doesn't make much sense in its current form. We also need to keep in mind future additions of crates to the workspace, e.g. rustic_config and rustic_util_schemas.
rustic currently saves a file's number of (hard) links, as restic does. Other support for hard links is missing.
Restic uses the number-of-links information and more (like the device id) to try to restore hard links during restore. So far this only works for files within the same restore and poses other problems, see restic/restic#3041 (comment).
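The restore side of this can be sketched as: remember the first restored path per (device, inode) pair and hard-link later occurrences to it. The types and names below are illustrative, not rustic or restic code:

```rust
use std::collections::HashMap;
use std::path::PathBuf;

/// Track which (device, inode) pairs have already been restored, so later
/// occurrences of the same inode can be created as hard links to the first path.
#[derive(Default)]
struct HardlinkTracker {
    seen: HashMap<(u64, u64), PathBuf>,
}

impl HardlinkTracker {
    /// Returns Some(existing_path) if this file should become a hard link
    /// to an already-restored file, or None if it is the first occurrence
    /// (in which case the path is recorded).
    fn link_target(&mut self, dev: u64, inode: u64, path: &PathBuf) -> Option<PathBuf> {
        match self.seen.get(&(dev, inode)) {
            Some(first) => Some(first.clone()),
            None => {
                self.seen.insert((dev, inode), path.clone());
                None
            }
        }
    }
}
```

Note this only works within a single restore run, which matches the limitation described above.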
Correct treatment of hard links during backup and restore is hard, but should be better supported; for instance, both backup and restore could offer options to suit users' needs for restoring hard links.
If we encounter an error, we should return an error and let the caller handle it. In the following code example, we encounter errors, but instead of returning them as such, we return Ok(()) and log an error. IMHO, logging is fine, but we should also return an Error when we encounter one. We should fix that all over the code base.
// check header
let header = be.decrypt(&data.split_off(data.len() - header_len as usize))?;
let pack_blobs = PackHeader::from_binary(&header)?.into_blobs();
let mut blobs = index_pack.blobs;
blobs.sort_unstable_by_key(|b| b.offset);
if pack_blobs != blobs {
    error!("pack {id}: Header from pack file does not match the index");
    debug!("pack file header: {pack_blobs:?}");
    debug!("index: {:?}", blobs);
    return Ok(());
}
p.inc(u64::from(header_len) + 4);

// check blobs
for blob in blobs {
    let blob_id = blob.id;
    let mut blob_data = be.decrypt(&data.split_to(blob.length as usize))?;
    // TODO: this is identical to backend/decrypt.rs; unify these two parts!
    if let Some(length) = blob.uncompressed_length {
        blob_data = decode_all(&*blob_data)?;
        if blob_data.len() != length.get() as usize {
            error!("pack {id}, blob {blob_id}: Actual uncompressed length does not fit saved uncompressed length");
            return Ok(());
        }
    }
    let comp_id = hash(&blob_data);
    if blob.id != comp_id {
        error!("pack {id}, blob {blob_id}: Hash mismatch. Computed hash: {comp_id}");
        return Ok(());
    }
    p.inc(blob.length.into());
}
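A minimal sketch of the proposed fix, returning a dedicated error instead of Ok(()). The CheckError type and check_header helper here are hypothetical, not the actual rustic_core API:

```rust
use std::fmt;

/// Hypothetical error type for pack-checking failures (illustrative only).
#[derive(Debug, PartialEq)]
enum CheckError {
    HeaderMismatch { pack: String },
    HashMismatch { pack: String, blob: String },
}

impl fmt::Display for CheckError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::HeaderMismatch { pack } => {
                write!(f, "pack {pack}: header does not match the index")
            }
            Self::HashMismatch { pack, blob } => {
                write!(f, "pack {pack}, blob {blob}: hash mismatch")
            }
        }
    }
}

/// Instead of logging and returning Ok(()), propagate the error to the
/// caller; logging can still happen here or at the call site.
fn check_header(pack: &str, header_matches_index: bool) -> Result<(), CheckError> {
    if !header_matches_index {
        return Err(CheckError::HeaderMismatch { pack: pack.to_string() });
    }
    Ok(())
}
```

With this shape, callers (CLI, GUI, library users) decide whether a mismatch is fatal, instead of the check silently succeeding.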
I've noticed that since I started using rustic, I have a lot of leftover rcloneorig processes. It seems like every time a backup runs, the rclone process doesn't get closed afterwards.
19wolf 5733 3.5 0.1 1284416 49796 ? Sl 20:22 0:00 rcloneorig --config ~/.rclone.conf serve restic --stdio --b2-hard-delete GoogleDrive:restic
19wolf 8356 0.0 0.1 1286080 61220 ? Sl Sep13 0:11 rcloneorig --config ~/.rclone.conf serve restic GoogleDrive:restic --addr localhost:0
19wolf 9513 0.0 0.2 1353844 100768 ? Sl 02:18 1:02 rcloneorig --config ~/.rclone.conf serve restic GoogleDrive:restic --addr localhost:0
19wolf 16279 0.0 0.1 1285184 52564 ? Sl Sep13 0:07 rcloneorig --config ~/.rclone.conf serve restic GoogleDrive:restic --addr localhost:0
19wolf 20556 0.0 0.1 1284928 54792 ? Sl Sep13 0:07 rcloneorig --config ~/.rclone.conf serve restic GoogleDrive:restic --addr localhost:0
19wolf 22057 0.0 0.1 1285440 57584 ? Sl Sep13 0:12 rcloneorig --config ~/.rclone.conf serve restic GoogleDrive:restic --addr localhost:0
The prune command currently loops over index files and processes the contained pack files by either keeping them, repacking them, marking/unmarking them for deletion, or putting them on a list of files to be removed. The actual removal of pack files and the now-unneeded index files is then done at the very end.
This process can be improved in two ways:
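For orientation, the per-pack decision described above could be modeled roughly like this. This is a toy sketch with invented names, not the actual prune implementation, whose logic considers much more state:

```rust
/// Possible outcomes for a pack file during prune (illustrative sketch).
#[derive(Debug, PartialEq)]
enum PackAction {
    Keep,
    Repack,
    MarkForDeletion,
    Remove,
}

/// Toy decision function: fully unused packs are marked first and removed
/// on a later prune run; partially used packs are repacked.
fn decide(used_blobs: usize, total_blobs: usize, marked_for_deletion: bool) -> PackAction {
    if used_blobs == 0 {
        if marked_for_deletion {
            PackAction::Remove
        } else {
            PackAction::MarkForDeletion
        }
    } else if used_blobs < total_blobs {
        PackAction::Repack
    } else {
        PackAction::Keep
    }
}
```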
I would find it nice for us to show support in our README and add their newly created logo (as they seem to have come out of the incubator phase): https://www.apache.org/logos/?#opendal to the README of rustic_core / rustic_backend.
Maybe with a white background or something added, as the logo seems hard to see in dark mode.
Original restic issue: restic/restic#662 has existed since 2016 but is still open.
Many people have asked for it for their use cases, and it seems really simple to implement (7-LoC PoC: restic/restic#662 (comment)).
The restic devs' suggested workaround is to use restic snapshots --json, take the snapshot IDs, run restic cat snapshot <id> for each, and drop the ones where the tree IDs haven't changed, which (when backing up to the cloud) forces some useless actions like uploading data and then deleting it.
It would be great if rustic could implement this to allow more flexible usage.
Hi!
Love the project, thank you so much! When I tried to use my existing restic repository (Hetzner Storagebox), rustic lets me know that the sftp backend is not supported. Am I missing something? :)
Thank you :D
When deleting snapshots, I would like to keep the latest version of a file that has been deleted. An example is:
We pull in both cached and quick_cache. cached basically gives us a very convenient macro, which we only use to cache uid/gid-to-name mappings. But we may be able to replace it with quick_cache, or vice versa. In general, it should be fine to use a single caching implementation and not pull in as many dependencies.
If I want to back up a block device (a byte array), can it support that?

source: PathList
to
source: FileSystem
?

pub fn backup(
    &self,
    opts: &BackupOptions,
    source: PathList,
    snap: SnapshotFile,
) -> RusticResult<SnapshotFile> {
    commands::backup::backup(self, opts, source, snap)
}
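One way to support non-filesystem sources like block devices would be to make backup generic over a reading source instead of hard-wiring PathList. The BackupSource trait and helper below are purely hypothetical, not the actual rustic_core API:

```rust
use std::io::Read;

/// Hypothetical abstraction over things that can be backed up (illustrative).
trait BackupSource {
    fn reader(&mut self) -> &mut dyn Read;
}

/// A raw byte stream, e.g. an opened block device.
struct ByteSource<R: Read> {
    inner: R,
}

impl<R: Read> BackupSource for ByteSource<R> {
    fn reader(&mut self) -> &mut dyn Read {
        &mut self.inner
    }
}

/// Toy "backup": just count the bytes the source yields; a real
/// implementation would chunk, deduplicate, and store the stream.
fn backup_bytes(source: &mut dyn BackupSource) -> std::io::Result<u64> {
    let mut buf = Vec::new();
    source.reader().read_to_end(&mut buf)?;
    Ok(buf.len() as u64)
}
```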
EDIT: Mistakenly opened from other issue
Building rustic 0.6.0 fails on OpenBSD-current, which is related to rustic_core: building rustic_core 0.1.1 on OpenBSD-current results in the errors below.
This issue is related to rustic-rs/rustic#917.
Compiling rustic_core v0.1.1 (/home/code/rustic_core)
error[E0599]: no variant or associated item named `FromErrnoError` found for enum `LocalErrorKind` in the current scope
--> src/backend/local.rs:666:38
|
666 | .map_err(LocalErrorKind::FromErrnoError)?;
| ^^^^^^^^^^^^^^
| |
| variant or associated item not found in `LocalErrorKind`
| help: there is a variant with a similar name: `FromTryIntError`
|
::: src/error.rs:598:1
|
598 | pub enum LocalErrorKind {
| ----------------------- variant or associated item `FromErrnoError` not found for this enum
error[E0599]: no variant or associated item named `FromErrnoError` found for enum `LocalErrorKind` in the current scope
--> src/backend/local.rs:706:38
|
706 | .map_err(LocalErrorKind::FromErrnoError)?;
| ^^^^^^^^^^^^^^
| |
| variant or associated item not found in `LocalErrorKind`
| help: there is a variant with a similar name: `FromTryIntError`
|
::: src/error.rs:598:1
|
598 | pub enum LocalErrorKind {
| ----------------------- variant or associated item `FromErrnoError` not found for this enum
error[E0599]: no variant or associated item named `SettingFilePermissionsFailed` found for enum `LocalErrorKind` in the current scope
--> src/backend/local.rs:749:42
|
749 | .map_err(LocalErrorKind::SettingFilePermissionsFailed)?;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ variant or associated item not found in `LocalErrorKind`
|
::: src/error.rs:598:1
|
598 | pub enum LocalErrorKind {
| ----------------------- variant or associated item `SettingFilePermissionsFailed` not found for this enum
error[E0599]: no variant named `SymlinkingFailed` found for enum `LocalErrorKind`
--> src/backend/local.rs:918:78
|
918 | symlink(linktarget, &filename).map_err(|err| LocalErrorKind::SymlinkingFailed {
| ^^^^^^^^^^^^^^^^ variant not found in `LocalErrorKind`
|
::: src/error.rs:598:1
|
598 | pub enum LocalErrorKind {
| ----------------------- variant `SymlinkingFailed` not found here
error[E0599]: no variant or associated item named `FromErrnoError` found for enum `LocalErrorKind` in the current scope
--> src/backend/local.rs:936:46
|
936 | .map_err(LocalErrorKind::FromErrnoError)?;
| ^^^^^^^^^^^^^^
| |
| variant or associated item not found in `LocalErrorKind`
| help: there is a variant with a similar name: `FromTryIntError`
|
::: src/error.rs:598:1
|
598 | pub enum LocalErrorKind {
| ----------------------- variant or associated item `FromErrnoError` not found for this enum
error[E0599]: no variant or associated item named `FromErrnoError` found for enum `LocalErrorKind` in the current scope
--> src/backend/local.rs:950:46
|
950 | .map_err(LocalErrorKind::FromErrnoError)?;
| ^^^^^^^^^^^^^^
| |
| variant or associated item not found in `LocalErrorKind`
| help: there is a variant with a similar name: `FromTryIntError`
|
::: src/error.rs:598:1
|
598 | pub enum LocalErrorKind {
| ----------------------- variant or associated item `FromErrnoError` not found for this enum
error[E0599]: no variant or associated item named `FromErrnoError` found for enum `LocalErrorKind` in the current scope
--> src/backend/local.rs:954:46
|
954 | .map_err(LocalErrorKind::FromErrnoError)?;
| ^^^^^^^^^^^^^^
| |
| variant or associated item not found in `LocalErrorKind`
| help: there is a variant with a similar name: `FromTryIntError`
|
::: src/error.rs:598:1
|
598 | pub enum LocalErrorKind {
| ----------------------- variant or associated item `FromErrnoError` not found for this enum
error[E0599]: no variant or associated item named `FromErrnoError` found for enum `LocalErrorKind` in the current scope
--> src/backend/local.rs:958:46
|
958 | .map_err(LocalErrorKind::FromErrnoError)?;
| ^^^^^^^^^^^^^^
| |
| variant or associated item not found in `LocalErrorKind`
| help: there is a variant with a similar name: `FromTryIntError`
|
::: src/error.rs:598:1
|
598 | pub enum LocalErrorKind {
| ----------------------- variant or associated item `FromErrnoError` not found for this enum
For more information about this error, try `rustc --explain E0599`.
error: could not compile `rustic_core` (lib) due to 8 previous errors
The copy command may copy duplicated blobs.
How to reproduce:
Take ffmpeg.exe and copy ffmpeg.exe to ffmpeg2.exe and ffmpeg3.exe.
[repository]
repository="source"
password="123456"
[[copy.targets]]
repository="target"
password="789012"
Save this as config.toml, then run:
rustic -P config backup ffmpeg.exe
rustic -P config backup ffmpeg2.exe
rustic -P config backup ffmpeg3.exe
rustic -P config copy
It seems that when copying data blobs in src/command/copy.rs, the index is not updated, so duplicate data blobs are copied.
I realized this when copying a 300GB repo onto a 512GB hard disk and encountering a surprising "disk full".
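The missing check boils down to consulting (and updating) the set of blob IDs already present in the target before copying. A reduced sketch with a plain HashSet standing in for the target repository's index:

```rust
use std::collections::HashSet;

/// Copy only blobs the target does not already have, updating the
/// in-memory index as we go so duplicates within one run are also skipped.
fn blobs_to_copy<'a>(
    target_index: &mut HashSet<&'a str>,
    snapshot_blobs: &[&'a str],
) -> Vec<&'a str> {
    let mut to_copy = Vec::new();
    for &blob in snapshot_blobs {
        // insert() returns false if the id was already present
        if target_index.insert(blob) {
            to_copy.push(blob);
        }
    }
    to_copy
}
```

Without the update step during the run, the three identical ffmpeg blobs above would each be copied once per snapshot.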
Note that rustic copy --init still doesn't ask for the destination password when initializing the destination repository (and it is not specified).
To implement this, we need more rustic_core support...
Originally posted by @aawsome in rustic-rs/rustic#1061 (comment)
And I think we should open an issue about the prune output/statistics. There it should actually become clear why packs are repacked, but there can be multiple reasons: "not fully used" and "too small" can both apply, for example. I haven't had an idea so far how to present such results in a good way...
Originally posted by @aawsome in rustic-rs/rustic#1017 (comment)
And restore is slower even if the files already exist.
time restic restore latest --target=/srv/ -o s3.connections=64
Summary: Restored 25016167 files/dirs (4.092 TiB) in 3:10:02
real 190m23.339s
time rustic -r rclone:xxxxx:yyyyy restore latest --delete /srv/
using no config file, none of these exist: /root/.config/rustic/rustic.toml, /etc/rustic/rustic.toml, ./rustic.toml
[INFO] repository rclone:xxxxx:yyyyy: password is correct.
[INFO] using cache at /root/.cache/rustic/89660cf9b6e34051e0f099b6a3921def13ce9bd0a990d1150b3030b654c1fd8c
[00:00:02] reading index... ████████████████████████████████████████ 143/143
[00:00:00] getting latest snapshot... ████████████████████████████████████████ 63/63
[00:17:19] collecting file information... Files: 0 to restore, 24894663 unchanged, 13253 verified, 0 to modify, 0 additional
Dirs: 0 to restore, 119560 to modify, 0 additional
[INFO] total restore size: 0 B
[INFO] using 4.1 TiB of existing file contents.
[INFO] all file contents are fine.
[00:00:00] restoring file contents... ████████████████████████████████████████ 0 B/0 B 0 B/s (ETA -)
[00:00:00] setting metadata... ⠁
[03:21:47] setting metadata...
restore done.
real 219m11.290s
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
34.41 0.865089 4 200338 68865 openat
21.79 0.547918 3 153419 close
13.03 0.327553 3 87648 newfstatat
5.24 0.131846 3 37568 epoll_ctl
3.69 0.092759 9 9423 3162 connect
3.13 0.078629 3 25061 epoll_wait
2.39 0.060189 6 9423 socket
2.13 0.053659 8 6262 sendto
1.90 0.047768 3 12522 getdents64
1.83 0.046118 14 3131 fchownat
1.56 0.039172 3 12524 fcntl
1.29 0.032372 5 6277 16 recvfrom
1.06 0.026609 4 6261 epoll_create1
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
fchownat(AT_FDCWD, "/srv/srv/PATHTOFILE", 63456, 504, AT_SYMLINK_NOFOLLOW) = 0
chmod("/srv/srv/PATHTOFILE", 0100660) = 0
llistxattr("/srv/srv/PATHTOFILE", NULL, 0) = 0
utimensat(AT_FDCWD, "/srv/srv/PATHTOFILE", [{tv_sec=1576755668, tv_nsec=0} /* 2019-12-19T11:41:08+0000 */, {tv_sec=1576755668, tv_nsec=0} /* 2019-12-19T11:41:08+0000 */], AT_SYMLINK_NOFOLLOW) = 0
newfstatat(AT_FDCWD, "/etc/nsswitch.conf", {st_mode=S_IFREG|0644, st_size=2980, ...}, 0) = 0
openat(AT_FDCWD, "/etc/group", O_RDONLY|O_CLOEXEC) = 7
newfstatat(7, "", {st_mode=S_IFREG|0644, st_size=621, ...}, AT_EMPTY_PATH) = 0
lseek(7, 0, SEEK_SET) = 0
read(7, "root:x:0:\nbin:x:1:\ndaemon:x:2:\ns"..., 4096) = 621
read(7, "", 4096) = 0
close(7) = 0
openat(AT_FDCWD, "/var/lib/sss/mc/group", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/var/lib/sss/mc/group", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
getpid() = 2745
socket(AF_UNIX, SOCK_STREAM, 0) = 7
fcntl(7, F_GETFL) = 0x2 (flags O_RDWR)
fcntl(7, F_SETFL, O_RDWR|O_NONBLOCK) = 0
fcntl(7, F_GETFD) = 0
fcntl(7, F_SETFD, FD_CLOEXEC) = 0
connect(7, {sa_family=AF_UNIX, sun_path="/var/lib/sss/pipes/nss"}, 110) = -1 ENOENT (No such file or directory)
close(7) = 0
rt_sigprocmask(SIG_BLOCK, [HUP USR1 USR2 PIPE ALRM CHLD TSTP URG VTALRM PROF WINCH IO], [], 8) = 0
openat(AT_FDCWD, "/run/systemd/userdb/", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 7
newfstatat(7, "", {st_mode=S_IFDIR|0755, st_size=60, ...}, AT_EMPTY_PATH) = 0
getdents64(7, 0x55ec01ac4880 /* 3 entries */, 32768) = 96
socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 8
connect(8, {sa_family=AF_UNIX, sun_path="/run/systemd/userdb/io.systemd.DynamicUser"}, 45) = 0
epoll_create1(EPOLL_CLOEXEC) = 9
timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC|TFD_NONBLOCK) = 10
epoll_ctl(9, EPOLL_CTL_ADD, 10, {events=EPOLLIN, data={u32=25555648, u64=94472126198464}}) = 0
epoll_ctl(9, EPOLL_CTL_ADD, 8, {events=0, data={u32=28066992, u64=94472128709808}}) = 0
getdents64(7, 0x55ec01ac4880 /* 0 entries */, 32768) = 0
close(7) = 0
epoll_ctl(9, EPOLL_CTL_MOD, 8, {events=EPOLLIN|EPOLLOUT, data={u32=28066992, u64=94472128709808}}) = 0
timerfd_settime(10, TFD_TIMER_ABSTIME, {it_interval={tv_sec=0, tv_nsec=0}, it_value={tv_sec=15055, tv_nsec=569876000}}, NULL) = 0
epoll_wait(9, [{events=EPOLLOUT, data={u32=28066992, u64=94472128709808}}], 8, 0) = 1
sendto(8, "{\"method\":\"io.systemd.UserDataba"..., 133, MSG_DONTWAIT|MSG_NOSIGNAL, NULL, 0) = 133
epoll_ctl(9, EPOLL_CTL_MOD, 8, {events=EPOLLIN, data={u32=28066992, u64=94472128709808}}) = 0
epoll_wait(9, [{events=EPOLLIN, data={u32=28066992, u64=94472128709808}}], 8, 0) = 1
recvfrom(8, "{\"error\":\"io.systemd.UserDatabas"..., 131080, MSG_DONTWAIT, NULL, NULL) = 66
epoll_ctl(9, EPOLL_CTL_MOD, 8, {events=0, data={u32=28066992, u64=94472128709808}}) = 0
epoll_wait(9, [], 8, 0) = 0
epoll_wait(9, [], 8, 0) = 0
epoll_ctl(9, EPOLL_CTL_DEL, 8, NULL) = 0
close(8) = 0
openat(AT_FDCWD, "/", O_RDONLY|O_CLOEXEC|O_PATH|O_DIRECTORY) = 7
openat(7, "etc", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 8
newfstatat(8, "", {st_mode=S_IFDIR|0755, st_size=8192, ...}, AT_EMPTY_PATH) = 0
close(7) = 0
openat(8, "userdb", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = -1 ENOENT (No such file or directory)
close(8) = 0
openat(AT_FDCWD, "/", O_RDONLY|O_CLOEXEC|O_PATH|O_DIRECTORY) = 7
openat(7, "run", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 8
newfstatat(8, "", {st_mode=S_IFDIR|0755, st_size=960, ...}, AT_EMPTY_PATH) = 0
close(7) = 0
openat(8, "userdb", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = -1 ENOENT (No such file or directory)
close(8) = 0
openat(AT_FDCWD, "/", O_RDONLY|O_CLOEXEC|O_PATH|O_DIRECTORY) = 7
openat(7, "run", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 8
newfstatat(8, "", {st_mode=S_IFDIR|0755, st_size=960, ...}, AT_EMPTY_PATH) = 0
close(7) = 0
openat(8, "host", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = -1 ENOENT (No such file or directory)
close(8) = 0
openat(AT_FDCWD, "/", O_RDONLY|O_CLOEXEC|O_PATH|O_DIRECTORY) = 7
openat(7, "usr", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 8
newfstatat(8, "", {st_mode=S_IFDIR|0755, st_size=144, ...}, AT_EMPTY_PATH) = 0
close(7) = 0
openat(8, "local", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 7
newfstatat(7, "", {st_mode=S_IFDIR|0755, st_size=131, ...}, AT_EMPTY_PATH) = 0
close(8) = 0
openat(7, "lib", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 8
newfstatat(8, "", {st_mode=S_IFDIR|0755, st_size=6, ...}, AT_EMPTY_PATH) = 0
close(7) = 0
openat(8, "userdb", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = -1 ENOENT (No such file or directory)
close(8) = 0
openat(AT_FDCWD, "/", O_RDONLY|O_CLOEXEC|O_PATH|O_DIRECTORY) = 7
openat(7, "usr", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 8
newfstatat(8, "", {st_mode=S_IFDIR|0755, st_size=144, ...}, AT_EMPTY_PATH) = 0
close(7) = 0
openat(8, "lib", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 7
newfstatat(7, "", {st_mode=S_IFDIR|0555, st_size=4096, ...}, AT_EMPTY_PATH) = 0
close(8) = 0
openat(7, "userdb", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = -1 ENOENT (No such file or directory)
close(7) = 0
openat(AT_FDCWD, "/etc/userdb/USERNAME_GROUPNAME.group", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/run/userdb/USERNAME_GROUPNAME.group", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/run/host/userdb/USERNAME_GROUPNAME.group", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/local/lib/userdb/USERNAME_GROUPNAME.group", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/lib/userdb/USERNAME_GROUPNAME.group", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
close(9) = 0
close(10) = 0
openat(AT_FDCWD, "/run/systemd/userdb/", O_RDONLY|O_NONBLOCK|O_CLOEXEC|O_DIRECTORY) = 7
newfstatat(7, "", {st_mode=S_IFDIR|0755, st_size=60, ...}, AT_EMPTY_PATH) = 0
getdents64(7, 0x55ec01ac4880 /* 3 entries */, 32768) = 96
socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 8
connect(8, {sa_family=AF_UNIX, sun_path="/run/systemd/userdb/io.systemd.DynamicUser"}, 45) = 0
epoll_create1(EPOLL_CLOEXEC) = 9
timerfd_create(CLOCK_MONOTONIC, TFD_CLOEXEC|TFD_NONBLOCK) = 10
epoll_ctl(9, EPOLL_CTL_ADD, 10, {events=EPOLLIN, data={u32=25555648, u64=94472126198464}}) = 0
epoll_ctl(9, EPOLL_CTL_ADD, 8, {events=0, data={u32=24888512, u64=94472125531328}}) = 0
getdents64(7, 0x55ec01ac4880 /* 0 entries */, 32768) = 0
close(7) = 0
openat(AT_FDCWD, "/", O_RDONLY|O_CLOEXEC|O_PATH|O_DIRECTORY) = 7
openat(7, "etc", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 11
newfstatat(11, "", {st_mode=S_IFDIR|0755, st_size=8192, ...}, AT_EMPTY_PATH) = 0
close(7) = 0
openat(11, "userdb", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = -1 ENOENT (No such file or directory)
close(11) = 0
openat(AT_FDCWD, "/", O_RDONLY|O_CLOEXEC|O_PATH|O_DIRECTORY) = 7
openat(7, "run", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 11
newfstatat(11, "", {st_mode=S_IFDIR|0755, st_size=960, ...}, AT_EMPTY_PATH) = 0
close(7) = 0
openat(11, "userdb", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = -1 ENOENT (No such file or directory)
close(11) = 0
openat(AT_FDCWD, "/", O_RDONLY|O_CLOEXEC|O_PATH|O_DIRECTORY) = 7
openat(7, "run", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 11
newfstatat(11, "", {st_mode=S_IFDIR|0755, st_size=960, ...}, AT_EMPTY_PATH) = 0
close(7) = 0
openat(11, "host", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = -1 ENOENT (No such file or directory)
close(11) = 0
openat(AT_FDCWD, "/", O_RDONLY|O_CLOEXEC|O_PATH|O_DIRECTORY) = 7
openat(7, "usr", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 11
newfstatat(11, "", {st_mode=S_IFDIR|0755, st_size=144, ...}, AT_EMPTY_PATH) = 0
close(7) = 0
openat(11, "local", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 7
newfstatat(7, "", {st_mode=S_IFDIR|0755, st_size=131, ...}, AT_EMPTY_PATH) = 0
close(11) = 0
openat(7, "lib", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 11
newfstatat(11, "", {st_mode=S_IFDIR|0755, st_size=6, ...}, AT_EMPTY_PATH) = 0
close(7) = 0
openat(11, "userdb", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = -1 ENOENT (No such file or directory)
close(11) = 0
openat(AT_FDCWD, "/", O_RDONLY|O_CLOEXEC|O_PATH|O_DIRECTORY) = 7
openat(7, "usr", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 11
newfstatat(11, "", {st_mode=S_IFDIR|0755, st_size=144, ...}, AT_EMPTY_PATH) = 0
close(7) = 0
openat(11, "lib", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 7
newfstatat(7, "", {st_mode=S_IFDIR|0555, st_size=4096, ...}, AT_EMPTY_PATH) = 0
close(11) = 0
openat(7, "userdb", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = -1 ENOENT (No such file or directory)
close(7) = 0
openat(AT_FDCWD, "/", O_RDONLY|O_CLOEXEC|O_PATH|O_DIRECTORY) = 7
openat(7, "etc", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 11
newfstatat(11, "", {st_mode=S_IFDIR|0755, st_size=8192, ...}, AT_EMPTY_PATH) = 0
close(7) = 0
openat(11, "userdb", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = -1 ENOENT (No such file or directory)
close(11) = 0
openat(AT_FDCWD, "/", O_RDONLY|O_CLOEXEC|O_PATH|O_DIRECTORY) = 7
openat(7, "run", O_RDONLY|O_NOFOLLOW|O_CLOEXEC|O_PATH) = 11
The situation is greatly improved by --numeric-id (~16 min to restore), but I don't understand why it chowns and chmods if nothing has changed. It spends half the time on this (I ran it several times sequentially):
strace -cp rustic_pid
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
36.11 1.709402 6 265619 fchownat
23.74 1.123784 4 265620 chmod
22.48 1.063941 4 265620 utimensat
17.67 0.836419 3 265620 llistxattr
------ ----------- ----------- --------- --------- ----------------
100.00 4.733546 4 1062479 total
Perhaps it's worth making numeric-id the default?
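The optimization implied above is to compare the existing file's owner and mode with the desired values first and only issue fchownat/chmod when something differs. A reduced sketch of that decision; the Meta struct and field names are illustrative:

```rust
/// Subset of file metadata relevant for restore (illustrative).
#[derive(Clone, Copy, PartialEq)]
struct Meta {
    uid: u32,
    gid: u32,
    mode: u32,
}

/// Decide which metadata syscalls are actually needed for this file:
/// (needs_chown, needs_chmod).
fn needed_calls(existing: Meta, desired: Meta) -> (bool, bool) {
    let chown = existing.uid != desired.uid || existing.gid != desired.gid;
    let chmod = existing.mode != desired.mode;
    (chown, chmod)
}
```

Since restore already stats existing files to verify their contents, the extra comparison is essentially free, while the saved syscalls dominate the metadata phase shown in the strace summary.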
There are still a lot of instances of unwrap/expect within the code base of rustic_core.
We should activate the clippy lints #![deny(clippy::expect_used)] and #![deny(clippy::unwrap_used)] and tackle this issue. If we want to use rustic_core for GUIs and other things, it shouldn't panic in circumstances that we can fix with good error handling.
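The kind of conversion these lints would force, sketched on a toy function (the port-parsing example is invented for illustration):

```rust
use std::num::ParseIntError;

/// Before: panics on bad input, which a GUI embedding the library
/// cannot recover from.
fn parse_port_panicking(s: &str) -> u16 {
    s.parse().unwrap()
}

/// After: the caller decides what a parse failure means.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    let port = s.parse()?;
    Ok(port)
}
```

With the deny-lints active, the first variant fails clippy, so new unwrap/expect call sites cannot sneak back in.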
I think it would be beneficial to bundle a fully annotated config template in the compiled binary with:
pub static CONFIG_TEMPLATE_TOML: &str = include_str!("config/config_template.toml");
and let users generate a template config with rustic generate config > <path_to_future_config>, or with rustic generate config -o <path_to_future_config> if -o/--output is given by the user.
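A sketch of how that subcommand could behave. The generate_config helper is hypothetical, and the inline template string is a stand-in for the include_str! content:

```rust
use std::fs;
use std::io::{self, Write};
use std::path::PathBuf;

// In the real binary this would come from:
// pub static CONFIG_TEMPLATE_TOML: &str = include_str!("config/config_template.toml");
const CONFIG_TEMPLATE_TOML: &str =
    "# rustic configuration template\n[repository]\n# repository = \"/path/to/repo\"\n";

/// Write the template to the `-o`/`--output` path if given, otherwise
/// to stdout (so `rustic generate config > file` also works).
fn generate_config(output: Option<PathBuf>) -> io::Result<()> {
    match output {
        Some(path) => fs::write(path, CONFIG_TEMPLATE_TOML),
        None => io::stdout().write_all(CONFIG_TEMPLATE_TOML.as_bytes()),
    }
}
```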
Besides, I think it would also be beneficial to package and ship rustic with the /config/ folder and maybe unpack it on installation to ~/.config/rustic/examples.
Documentation Clarity
Error Handling
A few things that I think we should take care of before we can stabilize rustic_core:
- no error! anywhere in the API of the library; they should be actual errors
- info!/warn! we should think about handling differently, and only use them when someone uses rustic_core with the logging feature enabled
- document errors in fallible functions and methods
A good role model here is IMHO: https://github.com/BurntSushi/aho-corasick/blob/master/src/lib.rs
aho-corasick/src/lib.rs has a complete abstract about:
(the restic repository format might be worthwhile to name and link here)
I think it would make sense to raise test coverage and documentation effort by requiring newly created PRs with additions/changes to the code base to include tests and documentation for what they are changing/adding before they can be merged.
I'm still unsure how to implement it, as that would involve checking the code coverage of only the changed files and running the clippy documentation lints only on changed files.
Furthermore, I assume something like this already exists, because it feels like it's not such a specific problem and must be a pretty common need.
Code taken from #106 (webdavfs.rs):

#[derive(Clone, Copy)]
enum RuntimeType {
    Basic,
    ThreadPool,
}

impl RuntimeType {
    fn get() -> Self {
        static RUNTIME_TYPE: OnceLock<RuntimeType> = OnceLock::new();
        *RUNTIME_TYPE.get_or_init(|| {
            let dbg = format!("{:?}", tokio::runtime::Handle::current());
            if dbg.contains("ThreadPool") {
                Self::ThreadPool
            } else {
                Self::Basic
            }
        })
    }
}

// Run some code via block_in_place() or spawn_blocking().
async fn blocking<F, R>(func: F) -> R
where
    F: FnOnce() -> R + Send + 'static,
    R: Send + 'static,
{
    match RuntimeType::get() {
        RuntimeType::Basic => tokio::task::spawn_blocking(func).await.unwrap(),
        RuntimeType::ThreadPool => tokio::task::block_in_place(func),
    }
}
Recently, I found two related libraries with (mostly) the same underlying principles as rustic (see Related libraries). acid-store has the same use case of uploading encrypted, deduplicated data to a backend. elfshaker is mostly useful for directories where most things don't change that often.
I find the user-facing CLI API fascinating, though:
- elfshaker store <snapshot> – capture the state of the current working directory into a named snapshot <snapshot>.
- elfshaker pack <pack name> – capture all 'loose' snapshots into a single pack file (this is what gets you the compression win).
- elfshaker extract <snapshot> – restore the state of a previous snapshot into the current working directory.

As a side project, I'm developing file organization software
and a personal tool for backup and drive management, where it would be beneficial to easily back up directories and easily restore certain files after an action. I thought of using rustic_core for that.
Currently, the general API is relatively verbose for smaller use cases. IMHO, it would need another layer of abstraction for certain "easy mode" use cases, such as a short-term backup -> diff -> restore.
Like really just taking a directory snapshot before changing things in it. Maybe restoring only the files with differences, or easily jumping between the two states: last_snapshot / current state.
I think it would make sense to adopt such a rustic_core::easy_mode::* API design, or really implement it on top of rustic_core in a rustic_easy crate, whichever fits best.
I think the essentials would be (ideas):
- store – capture the state of the current working directory into a named snapshot
- diff – compare two snapshots/paths (in most cases, diff the content of the current working directory against the latest snapshot) and mount a temporary snapshot of the differences to be able to extract data
- merge – combine all available snapshots into a newly named snapshot
- restore – restore the state of a previous snapshot into the current working directory
- extract – just access one file/directory of the snapshot as if it were an archive

As you can see, there is definitely an intersection with the current API approach (#787). Though this "easy mode" (as I would call it for now) would leave a user with a less customizable version of parts of the core API, but with everything needed for minimal repository interaction.
- an on-the-fly created repository either in /tmp/, /usr/tmp/, or in ~/.rustic/tmp/, depending maybe on a persistent = true flag that a user can set if they don't want the OS to clean up the repository after a restart
- … (config.toml) for it, or store the metadata (e.g. backup source) in a local file in ~/.rustic/temporary_repos.toml or something like that

https://github.com/elfshaker/elfshaker
https://github.com/lostatc/acid-store
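The essentials listed above could translate into a data model roughly like this. Purely a sketch of the proposed rustic_easy layer, with diff reduced to path-level comparison; none of these types exist in rustic_core:

```rust
use std::path::PathBuf;

/// A named capture of a set of paths (illustrative stand-in for a real
/// snapshot backed by a repository).
#[derive(Debug, Clone, PartialEq)]
struct Snapshot {
    name: String,
    paths: Vec<PathBuf>,
}

/// `diff` from the list above, reduced to paths only:
/// which paths exist in `new` but not in `old`?
fn diff(old: &Snapshot, new: &Snapshot) -> Vec<PathBuf> {
    new.paths
        .iter()
        .filter(|p| !old.paths.contains(*p))
        .cloned()
        .collect()
}
```

A real easy-mode diff would of course compare content (tree/blob IDs), not just paths, and store/restore would delegate to the core API.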
No; depending on the platform, a different implementation:
Unix - Fuse: https://crates.io/crates/fuser
Windows - WinFSP: https://crates.io/crates/winfsp | https://crates.io/crates/dokan (https://github.com/dokan-dev/dokany)
Mac - macfuse: https://crates.io/crates/fuser
That is also a reason why it may take longer: this changes the way we package things for each platform, which means we need to adapt the whole CI/CD workflows. And while we are at it, we should make it easy to extend again.
Originally posted by @simonsan in rustic-rs/rustic#1029 (reply in thread)
The culprit is most likely in this test and its surroundings, probably something with the runner being limited in RAM usage or similar; most likely a false positive.
#[test]
fn chunk_zeros() {
    let mut reader = repeat(0u8);
    let poly = random_poly().unwrap();
    let rabin = Rabin64::new_with_polynom(6, poly);
    let mut chunker = ChunkIter::new(&mut reader, usize::MAX, rabin);
    let chunk = chunker.next().unwrap().unwrap();
    assert_eq!(constants::MIN_SIZE, chunk.len());
}
Line 319 in 5675dd5
Action-Run: https://github.com/rustic-rs/rustic_core/actions/runs/6240530669/job/16940825841#step:9:481
memory allocation of 5368709120 bytes failed
error: test failed, to rerun pass `--lib`
Error: test failed, to rerun pass `--lib`
Caused by:
process didn't exit successfully: `C:\Users\runneradmin\.rustup\toolchains\nightly-x86_64-pc-windows-msvc\bin\cargo-miri.exe runner D:\a\rustic_core\rustic_core\target\miri\x86_64-pc-windows-msvc\debug\deps\rustic_core-a5df6ff461b5f381.exe` (exit code: 0xc0000409, STATUS_STACK_BUFFER_OVERRUN)
note: test exited abnormally; to see the full output pass --nocapture to the harness.
Error: The process 'C:\Users\runneradmin\.cargo\bin\cargo.exe' failed with exit code 3221226505
Currently, Localtime::Now() is used...
This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.
These branches will be created by Renovate only once you click their checkbox below.
These updates are currently rate-limited. Click on a checkbox below to force their creation now.
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
Cargo.toml
aho-corasick 1.1.2
anyhow 1.0.81
bytes 1.5.0
enum-map 2.7.3
simplelog 0.12.2
rstest 0.18.2
tempfile 3.10.1
crates/backend/Cargo.toml
anyhow 1.0.81
displaydoc 0.2.4
thiserror 1.0.58
log 0.4.21
bytes 1.5.0
derive_setters 0.1.6
humantime 2.1.0
itertools 0.12.1
strum 0.26
strum_macros 0.26
hex 0.4.3
serde 1.0.197
url 2.5.0
clap 4.5.2
merge 0.1.0
shell-words 1.1.0
walkdir 2.5.0
backoff 0.4.0
reqwest 0.11.26
rand 0.8.5
semver 1.0.22
rayon 1.9.0
tokio 1.36.0
opendal 0.45
opendal 0.45
crates/core/Cargo.toml
displaydoc 0.2.4
thiserror 1.0.58
derivative 2.2.0
derive_more 0.99.17
derive_setters 0.1.6
log 0.4.21
crossbeam-channel 0.5.12
pariter 0.5.1
rayon 1.9.0
aes256ctr_poly1305aes 0.2.0
rand 0.8.5
scrypt 0.11.0
binrw 0.13.3
hex 0.4.3
integer-sqrt 0.1.5
serde 1.0.197
serde-aux 4.5.0
serde_derive 1.0.197
serde_json 1.0.114
serde_with 3.7.0
cached 0.49.2
dunce 1.0.4
filetime 0.2.23
ignore 0.4.22
nix 0.28
path-dedot 3.1.1
shell-words 1.1.0
walkdir 2.5.0
cachedir 0.3.1
dirs 5.0.1
clap 4.5.2
merge 0.1.0
dav-server 0.5.8
futures 0.3
runtime-format 0.1.3
bytesize 1.3.0
chrono 0.4.35
enum-map-derive 0.17.0
enumset 1.1.3
gethostname 0.4.3
humantime 2.1.0
itertools 0.12.1
quick_cache 0.4.1
strum 0.26.2
zstd 0.13.0
expect-test 1.4.1
flate2 1.0.28
insta 1.36.1
mockall 0.12.1
pretty_assertions 1.4.0
quickcheck 1.0.3
quickcheck_macros 1.0.0
rustdoc-json 0.8.9
rustup-toolchain 0.1.6
simplelog 0.12.2
tar 0.4.40
sha2 0.10
sha2 0.10
sha2 0.10
xattr 1
crates/testing/Cargo.toml
once_cell 1.19.0
.github/workflows/audit.yml
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
dtolnay/rust-toolchain v1@1482605bfc5719782e1267fd0c0cc350fe7646b8
Swatinem/rust-cache v2@23bce251a8cd2ffc3c1075eaa2367cf899916d84
rustsec/audit-check v1.4.1@dd51754d4e59da7395a4cd9b593f0ff2d61a9b95
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
EmbarkStudios/cargo-deny-action v1@64015a69ee7ee08f6c56455089cdaf6ad974fd15
.github/workflows/careful.yml
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
dtolnay/rust-toolchain v1@1482605bfc5719782e1267fd0c0cc350fe7646b8
taiki-e/install-action v2@d5ead4fdbf0cb2a037f276e7dfb78bbb9dd0ab8c
Swatinem/rust-cache v2@23bce251a8cd2ffc3c1075eaa2367cf899916d84
.github/workflows/ci-heavy.yml
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
dtolnay/rust-toolchain v1@1482605bfc5719782e1267fd0c0cc350fe7646b8
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
dtolnay/rust-toolchain v1@1482605bfc5719782e1267fd0c0cc350fe7646b8
Swatinem/rust-cache v2@23bce251a8cd2ffc3c1075eaa2367cf899916d84
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
dtolnay/rust-toolchain v1@1482605bfc5719782e1267fd0c0cc350fe7646b8
Swatinem/rust-cache v2@23bce251a8cd2ffc3c1075eaa2367cf899916d84
actions/upload-artifact v3@a8a3f3ad30e3422c9c7b888a15615d19a852ae32
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
dtolnay/rust-toolchain v1@1482605bfc5719782e1267fd0c0cc350fe7646b8
Swatinem/rust-cache v2@23bce251a8cd2ffc3c1075eaa2367cf899916d84
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
dtolnay/rust-toolchain v1@1482605bfc5719782e1267fd0c0cc350fe7646b8
taiki-e/install-action v2@d5ead4fdbf0cb2a037f276e7dfb78bbb9dd0ab8c
Swatinem/rust-cache v2@23bce251a8cd2ffc3c1075eaa2367cf899916d84
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
taiki-e/install-action v2@d5ead4fdbf0cb2a037f276e7dfb78bbb9dd0ab8c
.github/workflows/ci.yml
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
dtolnay/rust-toolchain v1@1482605bfc5719782e1267fd0c0cc350fe7646b8
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
dtolnay/rust-toolchain v1@1482605bfc5719782e1267fd0c0cc350fe7646b8
Swatinem/rust-cache v2@23bce251a8cd2ffc3c1075eaa2367cf899916d84
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
dtolnay/rust-toolchain v1@1482605bfc5719782e1267fd0c0cc350fe7646b8
Swatinem/rust-cache v2@23bce251a8cd2ffc3c1075eaa2367cf899916d84
actions/upload-artifact v3@a8a3f3ad30e3422c9c7b888a15615d19a852ae32
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
dtolnay/rust-toolchain v1@1482605bfc5719782e1267fd0c0cc350fe7646b8
Swatinem/rust-cache v2@23bce251a8cd2ffc3c1075eaa2367cf899916d84
.github/workflows/coverage.yaml
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
taiki-e/install-action v2@d5ead4fdbf0cb2a037f276e7dfb78bbb9dd0ab8c
codecov/codecov-action v4@54bcd8715eee62d40e33596ef5e8f0f48dbbccab
.github/workflows/cross-ci.yml
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
.github/workflows/release-plz.yml
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
MarcoIeni/release-plz-action v0.5@f305593c6550f1f2c625387f359e1ee5a58dbb10
.github/workflows/style.yml
actions/checkout v4@b4ffde65f46336ab88eb53be808477a3936bae11
dprint/check v2.2@2f1cf31537886c3bfb05591c031f7744e48ba8a1
.github/workflows/triage.yml
I have triggered the no time set warning after running repair index and prune. Maybe the repair index command can create an index without time.
It may help to solve rustic-rs/rustic#710.
Version: Windows 11, rustic v0.6.0-2-gbf51b1e-nightly
Commands and outputs
> rustic -r F:\restic repair index
using no config file, none of these exist: C:\Users\11951\AppData\Roaming\rustic\config\rustic.toml, C:\ProgramData\rustic\config\rustic.toml, .\rustic.toml
enter repository password: [hidden]
[INFO] repository local:F:\restic: password is correct.
[INFO] using cache at C:\Users\11951\AppData\Local\rustic\81f78009eb062c30070f352e10378f87b9c0ddd81c99e0d8d5b63493f8b80f9d
[00:00:00] listing packs...
[00:00:00] reading index... ██████████████████████████████████████░░ 90/93 [INFO] pack a112374d: size computed by index: 36, actual size: 52806619, will re-read header
[INFO] pack 880777a5: size computed by index: 36, actual size: 51971251, will re-read header
[INFO] pack f05daaf9: size computed by index: 36, actual size: 52733712, will re-read header
[INFO] pack 25375b61: size computed by index: 36, actual size: 55980118, will re-read header
[INFO] pack 6c064abe: size computed by index: 36, actual size: 52520163, will re-read header
[INFO] pack ed0465e6: size computed by index: 36, actual size: 52040902, will re-read header
[INFO] pack 5d6ee649: size computed by index: 36, actual size: 54031920, will re-read header
[INFO] pack 475228b9: size computed by index: 36, actual size: 52872906, will re-read header
[INFO] pack b289b46d: size computed by index: 36, actual size: 53585915, will re-read header
[INFO] pack d3728e9d: size computed by index: 36, actual size: 52075220, will re-read header
[INFO] pack 70144905: size computed by index: 36, actual size: 52610515, will re-read header
[INFO] pack 552e75bb: size computed by index: 36, actual size: 52352924, will re-read header
[INFO] pack 308930d0: size computed by index: 36, actual size: 52562659, will re-read header
[INFO] pack 954d41b7: size computed by index: 36, actual size: 52487149, will re-read header
[INFO] pack 29cb0ef3: size computed by index: 36, actual size: 52279624, will re-read header
[INFO] pack 42eb4e6f: size computed by index: 36, actual size: 52099896, will re-read header
[INFO] pack 07df2f4a: size computed by index: 36, actual size: 52039800, will re-read header
[INFO] pack dfae7ab7: size computed by index: 36, actual size: 53149444, will re-read header
[INFO] pack 150b414f: size computed by index: 36, actual size: 52715801, will re-read header
[INFO] pack 908ec3b8: size computed by index: 36, actual size: 51893345, will re-read header
[INFO] pack 880e8710: size computed by index: 36, actual size: 51957596, will re-read header
[INFO] pack 51f02078: size computed by index: 36, actual size: 52336101, will re-read header
[INFO] pack fd3ae05b: size computed by index: 36, actual size: 52464628, will re-read header
[INFO] pack 5d6aed75: size computed by index: 36, actual size: 52037195, will re-read header
[INFO] pack affbe564: size computed by index: 36, actual size: 54695076, will re-read header
[INFO] pack 638e0d86: size computed by index: 36, actual size: 52260243, will re-read header
[INFO] pack 9cdee713: size computed by index: 36, actual size: 53820003, will re-read header
[INFO] pack 366c274f: size computed by index: 36, actual size: 53600338, will re-read header
[INFO] pack c794fc80: size computed by index: 36, actual size: 52915754, will re-read header
[INFO] pack 9ca047b1: size computed by index: 36, actual size: 53772097, will re-read header
[INFO] pack dac359c1: size computed by index: 36, actual size: 54036129, will re-read header
[INFO] pack 44e7d54a: size computed by index: 36, actual size: 52239590, will re-read header
[INFO] pack 8b28db2a: size computed by index: 36, actual size: 54946946, will re-read header
[INFO] pack 0d782492: size computed by index: 36, actual size: 51942677, will re-read header
[INFO] pack 8271e3d0: size computed by index: 36, actual size: 52007781, will re-read header
[INFO] pack 8e371752: size computed by index: 36, actual size: 52017861, will re-read header
[INFO] pack 7735dace: size computed by index: 36, actual size: 53077163, will re-read header
[INFO] pack b390ce5a: size computed by index: 36, actual size: 51910204, will re-read header
[INFO] pack 22d1266a: size computed by index: 36, actual size: 52889020, will re-read header
[INFO] pack 83251ba4: size computed by index: 36, actual size: 52169344, will re-read header
[INFO] pack c33fbad2: size computed by index: 36, actual size: 52269302, will re-read header
[INFO] pack 5128094c: size computed by index: 36, actual size: 52443242, will re-read header
[INFO] pack d96b2cfc: size computed by index: 36, actual size: 53539410, will re-read header
[INFO] pack 0d3471f9: size computed by index: 36, actual size: 52428192, will re-read header
[INFO] pack 225f223f: size computed by index: 36, actual size: 52743158, will re-read header
[INFO] pack 1a9f96c4: size computed by index: 36, actual size: 52984908, will re-read header
[INFO] pack 47691694: size computed by index: 36, actual size: 43209205, will re-read header
[INFO] pack f891f76d: size computed by index: 36, actual size: 4787948, will re-read header
[INFO] pack ecb3225a: size computed by index: 36, actual size: 4787220, will re-read header
[00:00:02] reading index... ████████████████████████████████████████ 93/93 [00:00:01] reading pack headers ████████████████████████████████████████ 49/49
>rustic -r F:\restic repair index
using no config file, none of these exist: C:\Users\11951\AppData\Roaming\rustic\config\rustic.toml, C:\ProgramData\rustic\config\rustic.toml, .\rustic.toml
enter repository password: [hidden]
[INFO] repository local:F:\restic: password is correct.
[INFO] using cache at C:\Users\11951\AppData\Local\rustic\81f78009eb062c30070f352e10378f87b9c0ddd81c99e0d8d5b63493f8b80f9d
[00:00:00] listing packs...
[00:00:00] reading index... ████████████████████████████████████████ 94/94 [00:00:00] reading pack headers ████████████████████████████████████████ 0/0
>rustic -r F:\restic prune --max-repack 0 --keep-delete 1y
using no config file, none of these exist: C:\Users\11951\AppData\Roaming\rustic\config\rustic.toml, C:\ProgramData\rustic\config\rustic.toml, .\rustic.toml
enter repository password: [hidden]
[INFO] repository local:F:\restic: password is correct.
[INFO] using cache at C:\Users\11951\AppData\Local\rustic\81f78009eb062c30070f352e10378f87b9c0ddd81c99e0d8d5b63493f8b80f9d
[00:00:00] reading index... ████████████████████████████████████████ 95/95 [00:00:00] reading snapshots... ████████████████████████████████████████ 11015/11015 [00:00:00] finding used blobs... ████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 2311/11015 [WARN] Error in cache backend: `Os { code: 2, kind: NotFound, message: "The system cannot find the file specified." }`
[WARN] Error in cache backend: `Os { code: 2, kind: NotFound, message: "The system cannot find the file specified." }`
[WARN] Error in cache backend: `Os { code: 2, kind: NotFound, message: "The system cannot find the file specified." }`
[00:00:00] finding used blobs... ████████░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░ 2312/11015 [WARN] Error in cache backend: `Os { code: 2, kind: NotFound, message: "The system cannot find the file specified." }`
[WARN] Error in cache backend: `Os { code: 2, kind: NotFound, message: "The system cannot find the file specified." }`
[WARN] Error in cache backend: `Os { code: 2, kind: NotFound, message: "The system cannot find the file specified." }`
[WARN] Error in cache backend: `Os { code: 2, kind: NotFound, message: "The system cannot find the file specified." }`
[WARN] Error in cache backend: `Os { code: 2, kind: NotFound, message: "The system cannot find the file specified." }`
[00:00:35] finding used blobs... ██████████████░░░░░░░░░░░░░░░░░░░░░░░░░░ 4007/11015 ^C
>rustic -r F:\restic repair index
using no config file, none of these exist: C:\Users\11951\AppData\Roaming\rustic\config\rustic.toml, C:\ProgramData\rustic\config\rustic.toml, .\rustic.toml
enter repository password: [hidden]
[INFO] repository local:F:\restic: password is correct.
[INFO] using cache at C:\Users\11951\AppData\Local\rustic\81f78009eb062c30070f352e10378f87b9c0ddd81c99e0d8d5b63493f8b80f9d
[00:00:00] listing packs...
[00:00:00] reading index... ████████████████████████████████████████ 95/95 [00:00:00] reading pack headers ████████████████████████████████████████ 0/0
>rustic -r F:\restic prune --max-repack 0 --keep-delete 1y
using no config file, none of these exist: C:\Users\11951\AppData\Roaming\rustic\config\rustic.toml, C:\ProgramData\rustic\config\rustic.toml, .\rustic.toml
enter repository password: [hidden]
enter repository password: [hidden]
enter repository password: [hidden]
[INFO] repository local:F:\restic: password is correct.
[INFO] using cache at C:\Users\11951\AppData\Local\rustic\81f78009eb062c30070f352e10378f87b9c0ddd81c99e0d8d5b63493f8b80f9d
[00:00:00] reading index... ████████████████████████████████████████ 95/95 [00:00:00] reading snapshots... ████████████████████████████████████████ 11015/11015 [00:05:44] finding used blobs... ████████████████████████████████████████ 11015/11015 [00:00:00] getting packs from repository...
[WARN] pack to delete a112374d: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 880777a5: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete f05daaf9: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 25375b61: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 6c064abe: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete ed0465e6: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 5d6ee649: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 475228b9: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete b289b46d: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete d3728e9d: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 70144905: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 552e75bb: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 308930d0: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 954d41b7: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 29cb0ef3: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 42eb4e6f: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 07df2f4a: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete dfae7ab7: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 150b414f: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 908ec3b8: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 880e8710: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 51f02078: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete fd3ae05b: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 5d6aed75: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete affbe564: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 638e0d86: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 9cdee713: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 366c274f: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete c794fc80: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 9ca047b1: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete dac359c1: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 44e7d54a: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 8b28db2a: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 0d782492: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 8271e3d0: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 8e371752: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 7735dace: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete b390ce5a: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 22d1266a: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 83251ba4: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete c33fbad2: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 5128094c: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete d96b2cfc: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 0d3471f9: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 225f223f: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 1a9f96c4: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete 47691694: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete f891f76d: no time set, this should not happen! Keeping this pack.
[WARN] pack to delete ecb3225a: no time set, this should not happen! Keeping this pack.
to repack: 0 packs, 0 blobs, 0 B
this removes: 0 blobs, 0 B
to delete: 0 packs, 0 blobs, 0 B
unindexed: 0 packs, ?? blobs, 0 B
total prune: 0 blobs, 0 B
remaining: 3795579 blobs, 207.2 GiB
unused size after prune: 4.6 GiB (2.22% of remaining size)
packs marked for deletion: 1965, 54.7 GiB
- complete deletion: 0, 0 B
- keep marked: 1965, 54.7 GiB
- recover: 0, 0 B
[00:00:00] rebuilding index...
[00:00:00] removing old index files... ████████████████████████████████████████ 3/3
The dump command reads all blobs directly from the backend without caching them.
So the idea is to add a cache, and use the fact that we know a priori which blobs we need and in which order:
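One way to use that a priori knowledge, as a minimal std-only sketch (BlobCache and its API are hypothetical, not rustic_core types): prefetch the blobs in the known order, then hand each one out exactly once so memory is released as the dump proceeds.

```rust
use std::collections::HashMap;

/// Hypothetical sketch (not a rustic_core type): prefetch blobs in the
/// order the dump will need them, then serve each one exactly once.
struct BlobCache {
    cache: HashMap<u32, Vec<u8>>,
}

impl BlobCache {
    /// `order` is the precomputed list of blob ids; `fetch` reads one blob
    /// from the backend (a real implementation would batch whole pack reads).
    fn prefetch(order: &[u32], fetch: impl Fn(u32) -> Vec<u8>) -> Self {
        Self {
            cache: order.iter().map(|&id| (id, fetch(id))).collect(),
        }
    }

    /// Taking a blob removes it from the cache: each blob in the known
    /// order is needed exactly once, so memory shrinks as the dump progresses.
    fn take(&mut self, id: u32) -> Option<Vec<u8>> {
        self.cache.remove(&id)
    }
}
```

A real implementation would bound the prefetch window instead of loading everything up front, but the key property is the same: the dump order is known before the first byte is written.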
This is the tracking issue to collect the steps needed to support Windows as an OS:
-r C:\path\to\repo
) rustic-rs/rustic#495. Maybe use a feature for this.
It seems this currently only works for Linux/macOS.
links:
https://crates.io/crates/librclone
https://github.com/rclone/rclone/tree/master/librclone
Based on this new feature: https://blog.rust-lang.org/2023/11/16/Rust-1.74.0.html#lint-configuration-through-cargo
It would increase the MSRV, though, so I'm not really sure for now; maybe we should wait for another stable release or two?
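The Rust 1.74 feature linked above lets a workspace centralize lint levels in Cargo.toml; a minimal sketch (lint names chosen for illustration only):

```toml
# workspace root Cargo.toml
[workspace.lints.rust]
unsafe_code = "forbid"

[workspace.lints.clippy]
pedantic = "warn"

# each member crate's Cargo.toml then opts in:
[lints]
workspace = true
```

This would replace the per-crate `#![warn(...)]` attribute blocks at the top of each lib.rs.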
Example using rustic:
rustic backup src1/ src2/ --as-path /path1/ --as-path /path2/
would back up src1/ as /path1/ and src2/ as /path2/.
// for rclone < 1.52.2, setting user/password via env variables doesn't work. This means
// we are setting up an rclone without authentication, which is a security issue!
// (however, it still works, so we give a warning)
in RCloneBackend::new().
I feel like this should be made explicit by introducing a flag the user has to set to agree; otherwise, we should error.
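Such an opt-in could look roughly like this (a hedged sketch; check_rclone_auth and the flag semantics are hypothetical, not existing rustic options):

```rust
/// Hypothetical sketch: gate the insecure fallback behind an explicit opt-in.
/// `allow_insecure` would come from a new (hypothetical) CLI flag or config option.
fn check_rclone_auth(version: (u32, u32, u32), allow_insecure: bool) -> Result<(), String> {
    // rclone >= 1.52.2 accepts user/password via env variables, so auth works.
    if version >= (1, 52, 2) {
        return Ok(());
    }
    if allow_insecure {
        // keep today's behavior, but only after the user explicitly agreed
        eprintln!("warning: rclone < 1.52.2, serving without authentication");
        Ok(())
    } else {
        Err("rclone < 1.52.2 would run without authentication; pass the opt-in flag to allow this".to_string())
    }
}
```

With this, the silent warning becomes a hard error unless the user consciously accepts the risk.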
Hello 👋
rustic_core 0.2.0
I'm noticing a (perhaps) unusual behaviour when taking a snapshot and pushing it over opendal:s3.
The backup progress bar seems to increment, then pause, then increment ... until complete.
When backing up locally, the progress bar moves constantly, which leads me to suspect either:
I haven't had the opportunity to dive in to the implementation, but I was wondering if it would be possible to discuss in the case of:
Could anybody shed some light as to what is going on, and how we might come up with a more performant solution?
My apologies if this is the wrong place to discuss this, and if my understanding is way off.
Many thanks to everyone who is working on this wonderful library.
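For the pause-then-increment pattern described above, one common remedy is to overlap pack building with uploading so the producer never stalls on a network round trip. A std-only sketch (names hypothetical, not the actual rustic_core implementation):

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

/// Sketch of overlapping pack building with uploading: a bounded channel lets
/// the producer keep chunking/compressing while earlier packs upload, instead
/// of blocking the progress bar on every network round trip.
/// Returns the number of packs uploaded.
fn pipelined_upload(packs: Vec<Vec<u8>>, upload: impl Fn(&[u8]) + Send + 'static) -> usize {
    let (tx, rx) = sync_channel::<Vec<u8>>(4); // at most 4 packs in flight
    let uploader = thread::spawn(move || {
        let mut n = 0;
        for pack in rx {
            upload(pack.as_slice());
            n += 1;
        }
        n
    });
    for pack in packs {
        tx.send(pack).unwrap(); // blocks only when the queue is full
    }
    drop(tx); // close the channel so the uploader thread finishes
    uploader.join().unwrap()
}
```

The channel bound caps memory while still hiding upload latency; whether this matches what rustic_core actually does internally would need a look at the implementation.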
Trait: Zeroize: Securely zero memory with a simple trait (Zeroize) built on stable Rust primitives which guarantee the operation will not be “optimized away”.
use secrecy::{CloneableSecret, DebugSecret, ExposeSecret, Secret, Zeroize};
pub struct Password(String);
impl Zeroize for Password {
fn zeroize(&mut self) {
self.0.zeroize();
}
}
impl DebugSecret for Password {}
impl CloneableSecret for Password {}
/// Our Secret Password
pub type SecretPassword = Secret<Password>;
https://crates.io/crates/secrecy
related: rustic-rs/rustic#534
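The redaction/wiping idea behind secrecy can be sketched with std only (RedactedPassword is hypothetical and only illustrates the concept; real code should use the zeroize crate, since a plain overwrite like this one can be optimized away by the compiler):

```rust
/// Minimal std-only sketch of the secrecy/Zeroize idea: wipe the password
/// bytes on drop and redact it from Debug output.
struct RedactedPassword(String);

impl Drop for RedactedPassword {
    fn drop(&mut self) {
        // NOTE: a real implementation must use Zeroize / volatile writes so the
        // compiler cannot optimize the overwrite away; this shows the concept only.
        unsafe {
            for b in self.0.as_bytes_mut() {
                *b = 0;
            }
        }
    }
}

impl std::fmt::Debug for RedactedPassword {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        // never print the actual secret, even in debug logs
        f.write_str("Password([REDACTED])")
    }
}
```

secrecy packages exactly this pattern (plus guaranteed zeroization and an explicit expose_secret() accessor) behind the Secret<T> wrapper shown above.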
I had a look at implementing it myself, but found that the backend traits are not async. I also found in the changelog that async was removed in 0.4. Is this going to change again soonish? If so, I'll defer it.
I would use rust-s3, because AFAICT the official AWS SDK is async-only. What do you think?
Introduction:
Recently, users have encountered corrupted data when using the "max" compression feature in the (Go) tool restic. Such data corruption can lead to data loss and other complications, highlighting the need, beside the fix itself, for robust detection of this class of error so that similar bugs are caught. The PR with restic's solution is linked below.
Explanation of the Feature:
The proposed feature implements blob verification before uploading data. This would detect errors and inconsistencies in the data, preventing corrupted data from being uploaded to the repository. By verifying blobs prior to uploading, rustic can ensure the integrity of stored data and give users greater confidence in their backups. Today, a corruption is not detectable until the user runs the read-data command, which may not happen often, or sometimes never.
As mentioned in the discussion, it would be nice "to have this as optional feature which can switched on using CLI options or the config file." Personally, I'd suggest enabling it by default, because for a backup solution, security and verification should be the default.
Benefits:
Disadvantages:
Possible disadvantage mentioned in the rustic discussion (linked below):
I think the implementation would not be too hard, but it might slow down backup runs a bit...
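The verification step can be sketched as a round-trip check before upload (hypothetical helper; the decode parameter stands in for the real decrypt + decompress pipeline):

```rust
/// Hypothetical sketch of pre-upload blob verification: re-decode the blob we
/// just encoded and check that it round-trips to the original bytes before it
/// is uploaded. `decode` stands in for the real decrypt + decompress pipeline.
fn verify_before_upload(
    original: &[u8],
    encoded: &[u8],
    decode: impl Fn(&[u8]) -> Result<Vec<u8>, String>,
) -> Result<(), String> {
    let roundtrip = decode(encoded)?;
    if roundtrip.as_slice() != original {
        // refuse to upload corrupted data; the caller can re-encode or abort
        return Err("blob verification failed: encoded data does not round-trip".to_string());
    }
    Ok(())
}
```

This is exactly the class of check that would have caught the compression bug described above: the corruption happened during encoding, so decoding right away exposes the mismatch before anything reaches the repository.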
References:
Hope this is written well enough, thank you for your great work and fast support! <3