vitalif / vitastor

Simplified distributed block and file storage with strong consistency, like in Ceph (repository mirror)

Home Page: https://vitastor.io

License: Other

C++ 79.46% Shell 2.60% C 4.37% JavaScript 7.19% Dockerfile 0.43% CMake 0.52% Makefile 0.01% Go 2.01% Python 1.57% Perl 0.68% RPC 1.17%

vitastor's Introduction

Vitastor

Read in Russian

The Idea

Make Clustered Block Storage Fast Again.

Vitastor is a distributed block and file SDS, a direct replacement for Ceph RBD and CephFS, and also for the internal SDSes of public clouds. However, in contrast to them, Vitastor is fast and simple at the same time. The only caveat is that it's still slightly young :-).

Vitastor is architecturally similar to Ceph, which means strong consistency, primary replication, symmetric clustering and automatic data distribution over any number of drives of any size with configurable redundancy (replication or erasure codes/XOR).

Vitastor primarily targets SSD and SSD+HDD clusters with at least a 10 Gbit/s network. It supports TCP and RDMA and, with proper hardware, can achieve 4 KB read and write latency as low as ~0.1 ms, which is roughly 10 times faster than other popular SDSes like Ceph or the internal systems of public clouds.

Vitastor supports the QEMU, NBD and NFS protocols and provides OpenStack, Proxmox and Kubernetes drivers. More drivers can be created easily.

Read more details below in the documentation.

Talks and presentations

Documentation

Author and License

Copyright (c) Vitaliy Filippov (vitalif [at] yourcmc.ru), 2019+

Join Vitastor Telegram Chat: https://t.me/vitastor

All server-side code (OSD, Monitor and so on) is licensed under the terms of Vitastor Network Public License 1.1 (VNPL 1.1), a copyleft license based on GNU GPLv3.0 with the additional "Network Interaction" clause which requires opensourcing all programs directly or indirectly interacting with Vitastor through a computer network and expressly designed to be used in conjunction with it ("Proxy Programs"). Proxy Programs may be made public not only under the terms of the same license, but also under the terms of any GPL-Compatible Free Software License, as listed by the Free Software Foundation. This is a stricter copyleft license than the Affero GPL.

Please note that VNPL doesn't require you to open the code of proprietary software running inside a VM if it's not specially designed to be used with Vitastor.

Basically, you can't use the software in a proprietary environment to provide its functionality to users without opensourcing all intermediary components standing between the user and Vitastor or purchasing a commercial license from the author 😀.

Client libraries (cluster_client and so on) are dual-licensed under the same VNPL 1.1 and also GNU GPL 2.0 or later to allow for compatibility with GPLed software like QEMU and fio.

You can find the full text of VNPL-1.1 in the file VNPL-1.1.txt. GPL 2.0 is also included in this repository as GPL-2.0.txt.

vitastor's People

Contributors

0x00ace, lklimin, lnsyyj, mirrorll, moly7x, mouseratti, necron113, vitalif


vitastor's Issues

[qemu] qemu data error

Hi @vitalif,
I use Windows 10 as the guest OS and found that when it boots, some entries of the QEMU iov array share the same address within one I/O request. For example, in a request with iovcnt = 5, iov[0] and iov[3] use the same address. When the client splits the request into two part_ops, part_op[0] uses iov[0]-iov[2] and is sent to one OSD, while part_op[1] uses iov[3]-iov[4] and is sent to another OSD. If part_op[1] returns earlier than part_op[0], the shared address of iov[0] and iov[3] ends up holding the data of iov[0], which is not the same result as using a qcow2 file as the VM disk.
So, when the request opcode is read, I allocate a separate buffer for each part_op iov to hold the data read from the different OSDs, and memcpy the data into the QEMU iov after all part_ops are done. But when I calculate crc32 over the part_op iovs and the QEMU iov, they sometimes differ. I also tried mprotect-ing the QEMU iov with PROT_READ before all part_ops are done and then mprotect-ing it back to PROT_READ|PROT_WRITE, but this causes a QEMU error: kvm run failed Bad address.
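
A minimal sketch of the workaround described above, with hypothetical names (this is not the actual cluster_client code): each read part_op gets its own bounce buffer, and the data is copied back into the guest iov only after every part has completed, so overlapping iov entries end up with deterministic contents.

#include <sys/uio.h>
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical per-part state: a private bounce buffer instead of reading
// directly into the (possibly overlapping) guest iov entries.
struct read_part
{
    uint64_t offset = 0;          // offset of this part inside the whole request
    std::vector<uint8_t> bounce;  // data returned by the OSD for this part
    bool done = false;
};

// Called once all parts are done: copy them back into the guest iov in
// request order, so a shared address ends up with the data of the last
// region that covers it.
static void copy_parts_to_guest(const std::vector<read_part> &parts, iovec *iov, int iovcnt)
{
    for (const read_part &p : parts)
    {
        size_t copied = 0, skip = p.offset;
        for (int i = 0; i < iovcnt && copied < p.bounce.size(); i++)
        {
            if (skip >= iov[i].iov_len)
            {
                skip -= iov[i].iov_len;
                continue;
            }
            size_t n = std::min(iov[i].iov_len - skip, p.bounce.size() - copied);
            memcpy((uint8_t*)iov[i].iov_base + skip, p.bounce.data() + copied, n);
            copied += n;
            skip = 0;
        }
    }
}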

Cannot remove inode when the cluster is in an error state

With a 3-node, 6-OSD cluster (node1: [osd.1, osd.2], node2: [osd.3, osd.4], node3: [osd.5, osd.6]):

  1. If node3 goes down, some PGs become degraded, and at that point the inode cannot be removed.
  2. After osd_out_time the PGs become active again, so the inode can be removed, and I remove it.
  3. Some time later node3 comes back up and rejoins the cluster, but osd.5 and osd.6 still hold objects of the already removed inode, and during peering they are written back to the other OSDs.

[performance] Performance degradation at 32k and above

Hi @vitalif ,
I tested the performance of vitastor without RDMA.

Below 32k (4k, 8k, 16k), performance is higher than Ceph and latency is lower than Ceph.
At 32k and above (32k, 64k, 128k, 256k, 512k, 1m), performance is lower than Ceph and latency is higher than Ceph.
Both random and sequential workloads were tested, with the same results.

Is there something special about 32k? Can I adjust some parameters to improve performance at 32k and above?

[vitastor-mon] After etcd master switch, the etcd data of vitastor is not synchronized

Hi @vitalif ,

I have 3 machines, deployed etcd cluster, and started vitastor-mon on each machine.
Initial environment:

debian-1 192.168.122.201
debian-2 192.168.122.202
debian-3 192.168.122.203

root@debian-2:/usr/lib/vitastor/mon# ps -ef | grep etcd
etcd      120717       1  1 22:42 ?        00:00:55 /usr/local/bin/etcd -name etcd1 --data-dir /var/lib/etcd1.etcd --advertise-client-urls http://192.168.122.202:2379 --listen-client-urls http://192.168.122.202:2379 --initial-advertise-peer-urls http://192.168.122.202:2380 --listen-peer-urls http://192.168.122.202:2380 --initial-cluster-token vitastor-etcd-1 --initial-cluster etcd0=http://192.168.122.201:2380,etcd1=http://192.168.122.202:2380,etcd2=http://192.168.122.203:2380 --initial-cluster-state new --max-txn-ops=100000 --max-request-bytes=104857600 --auto-compaction-retention=10 --auto-compaction-mode=revision
vitastor  129012       1  0 23:26 ?        00:00:02 node /usr/lib/vitastor/mon/mon-main.js --etcd_url http://192.168.122.201:2379,http://192.168.122.202:2379,http://192.168.122.203:2379 --etcd_prefix /vitastor --etcd_start_timeout 5

root@debian-1:~# etcdctl --endpoints 192.168.122.202:2379 get "" --prefix
/vitastor/config/pgs
{"hash":"6ea319e831e1085b45bc25e164b4ab4d6d63095c"}
/vitastor/mon/master
{"ip":["192.168.122.201"]}

When I stopped the master etcd service, the data in the etcd database did not seem to change.
I found that the data of /vitastor/mon/master in etcd still pointed to the ip of the stopped machine.

root@debian-1:~# systemctl status etcd
● etcd.service - etcd for vitastor
     Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
     Active: inactive (dead)

Jul 27 23:25:31 debian-1 etcd[124808]: peer 263f76c692e97c7c became inactive (message send to peer failed)
Jul 27 23:25:31 debian-1 etcd[124808]: stopped streaming with peer 263f76c692e97c7c (stream Message reader)
Jul 27 23:25:31 debian-1 etcd[124808]: stopped peer 263f76c692e97c7c
Jul 27 23:25:31 debian-1 etcd[124808]: failed to find member dd427e761e03dc4 in cluster d42fce0aa68ba65
Jul 27 23:25:31 debian-1 etcd[124808]: failed to find member dd427e761e03dc4 in cluster d42fce0aa68ba65
Jul 27 23:25:31 debian-1 etcd[124808]: failed to find member 263f76c692e97c7c in cluster d42fce0aa68ba65
Jul 27 23:25:31 debian-1 etcd[124808]: failed to find member 263f76c692e97c7c in cluster d42fce0aa68ba65
Jul 27 23:25:31 debian-1 systemd[1]: etcd.service: Succeeded.
Jul 27 23:25:31 debian-1 systemd[1]: Stopped etcd for vitastor.
Jul 27 23:25:31 debian-1 systemd[1]: etcd.service: Consumed 47.950s CPU time.
root@debian-1:~# systemctl status vitastor-mon.service 
● vitastor-mon.service - Vitastor monitor
     Loaded: loaded (/etc/systemd/system/vitastor-mon.service; disabled; vendor preset: enabled)
     Active: active (running) since Tue 2021-07-27 22:44:04 EDT; 44min ago
   Main PID: 124903 (node)
      Tasks: 7
     Memory: 40.8M
        CPU: 7.851s
     CGroup: /system.slice/vitastor-mon.service
             └─124903 node /usr/lib/vitastor/mon/mon-main.js --etcd_url http://192.168.122.201:2379,http://192.168.122.202:2379,http://192.168.122.203:2379 --etcd_prefix /vitastor --etcd_start_timeout 5

Jul 27 22:44:04 debian-1 systemd[1]: Started Vitastor monitor.
Jul 27 22:44:05 debian-1 node[124903]: Became master
Jul 27 22:44:05 debian-1 node[124903]: PG configuration successfully changed
root@debian-1:~# etcdctl --endpoints 192.168.122.201:2379 get "" --prefix
^C
root@debian-1:~# etcdctl --endpoints 192.168.122.202:2379 get "" --prefix
/vitastor/config/pgs
{"hash":"6ea319e831e1085b45bc25e164b4ab4d6d63095c"}
/vitastor/mon/master
{"ip":["192.168.122.201"]}

compile error.

While compiling the source code, I got the following error:

vitastor-0.7.1/src/cluster_client.cpp:969:13: sorry, unimplemented: non-trivial designated initializers not supported

vitastor-0.7.1/src/cluster_client.cpp:1021:13: sorry, unimplemented: non-trivial designated initializers not supported
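
For reference, this diagnostic usually comes from older g++ releases whose support for C-style designated initializers is limited. A minimal sketch of the pattern and of the usual workaround, using a hypothetical struct (not the actual vitastor code), assuming the error is triggered by a designated-initializer compound literal that skips a non-trivial member:

#include <cstdint>
#include <functional>

struct op_t
{
    uint64_t opcode;
    std::function<void(op_t*)> callback;  // non-trivial member
    uint32_t len;
};

// On some older g++ versions, a GNU-extension compound literal like this,
// which skips over the non-trivial callback member, can be rejected with
// "sorry, unimplemented: non-trivial designated initializers not supported".
op_t *make_op_designated()
{
    return new op_t((op_t){
        .opcode = 1,
        .len = 4096,
    });
}

// Workaround: default-construct the object and assign the members explicitly.
op_t *make_op_portable()
{
    op_t *op = new op_t();
    op->opcode = 1;
    op->callback = [](op_t *o) { (void)o; };
    op->len = 4096;
    return op;
}

Upgrading to a newer g++ is usually the simpler fix; rewriting initializers in this style is only needed when the compiler cannot be changed.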

vitastor etcd, issue with connections?

Sample output from etcd, obtained by running systemctl:

Feb 18 18:50:42 pvecn5 etcd[7406]: {"level":"warn","ts":"2024-02-18T18:50:42.8278+0100","caller":"embed/serve.go:331","msg":"error reading websocket message:websocket: close 1006 (abnormal closure): unexpected EOF"}

Is this normal behaviour?

Cannot run vitastor-cli as a non-root user in the cinder container

Hi everyone,
Running vitastor-cli in the cinder container fails with an error.
As the cinder user:

docker exec -it --user cinder cinder_volume vitastor-cli status
terminate called after throwing an instance of 'std::runtime_error'
  what():  io_uring_queue_init: Cannot allocate memory

As the root user everything is fine:

docker exec -it --user root cinder_volume vitastor-cli status
cluster:
  etcd: 3 / 3 up, 1.1 M database size
  mon:  3 up, master test-vitastore-02
  osd:  6 / 6 up

data:
  raw:   0 B used, 299.2 G / 299.2 G available
  state: 0 B clean
  pools: 1 / 1 active
  pgs:   256 active

io:
  client: 0 B/s rd, 0 op/s rd, 0 B/s wr, 0 op/s wr

As the cinder user, vitastor-cli -h works:

docker exec -it --user cinder cinder_volume vitastor-cli -h
Vitastor command-line tool 1.6.0
(c) Vitaliy Filippov, 2019+ (VNPL-1.1)

COMMANDS:

  vitastor-cli status
  vitastor-cli df
  vitastor-cli ls [-l] [-p POOL] [--sort FIELD] [-r] [-n N] [<glob> ...]
  vitastor-cli create -s|--size <size> [-p|--pool <id|name>] [--parent <parent_name>[@<snapshot>]] <name>
  vitastor-cli create --snapshot <snapshot> [-p|--pool <id|name>] <image>
  vitastor-cli snap-create [-p|--pool <id|name>] <image>@<snapshot>
  vitastor-cli modify <name> [--rename <new-name>] [--resize <size>] [--readonly | --readwrite] [-f|--force] [--down-ok]
  vitastor-cli rm <from> [<to>] [--writers-stopped] [--down-ok]
  vitastor-cli flatten <layer>
  vitastor-cli rm-data --pool <pool> --inode <inode> [--wait-list] [--min-offset <offset>]
  vitastor-cli merge-data <from> <to> [--target <target>]
  vitastor-cli describe [OPTIONS]
  vitastor-cli fix [--objects <objects>] [--bad-osds <osds>] [--part <part>] [--check no]
  vitastor-cli alloc-osd
  vitastor-cli rm-osd [--force] [--allow-data-loss] [--dry-run] <osd_id> [osd_id...]
  vitastor-cli create-pool|pool-create <name> (-s <pg_size>|--ec <N>+<K>) -n <pg_count> [OPTIONS]
  vitastor-cli modify-pool|pool-modify <id|name> [--name <new_name>] [PARAMETERS...]
  vitastor-cli rm-pool|pool-rm [--force] <id|name>
  vitastor-cli ls-pools|pool-ls|ls-pool|pools [-l] [--detail] [--sort FIELD] [-r] [-n N] [--stats] [<glob> ...]

Use vitastor-cli --help <command> for command details or vitastor-cli --help --all for all details.

GLOBAL OPTIONS:
  --config_file FILE  Path to Vitastor configuration file
  --etcd_address URL  Etcd connection address
  --iodepth N         Send N operations in parallel to each OSD when possible (default 32)
  --parallel_osds M   Work with M osds in parallel when possible (default 4)
  --progress 1|0      Report progress (default 1)
  --cas 1|0           Use CAS writes for flatten, merge, rm (default is decide automatically)
  --color 1|0         Enable/disable colored output and CR symbols (default 1 if stdout is a terminal)
  --json              JSON output

The cinder user does have access permissions to /etc/vitastor/vitastor.conf.
vitastor-cli version is 1.6.0.
I followed the manual: https://github.com/vitalif/vitastor/blob/v1.6.0/docs/installation/openstack.ru.md
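
One likely cause (an assumption, not confirmed in this thread) is that the container gives the non-root cinder user a low RLIMIT_MEMLOCK, which older kernels charge io_uring rings against, so io_uring_queue_init() fails with ENOMEM. A minimal C++ sketch to check this from inside the container:

#include <liburing.h>
#include <sys/resource.h>
#include <cstdio>
#include <cstring>

int main()
{
    // Print the locked-memory limit for the current user/container.
    rlimit lim {};
    if (getrlimit(RLIMIT_MEMLOCK, &lim) == 0)
        printf("RLIMIT_MEMLOCK: soft=%llu hard=%llu\n",
            (unsigned long long)lim.rlim_cur, (unsigned long long)lim.rlim_max);

    // Try to create a reasonably large ring; io_uring_queue_init()
    // returns a negative errno on failure.
    io_uring ring;
    int res = io_uring_queue_init(512, &ring, 0);
    if (res < 0)
    {
        fprintf(stderr, "io_uring_queue_init: %s\n", strerror(-res));
        return 1;
    }
    io_uring_queue_exit(&ring);
    printf("io_uring initialized OK\n");
    return 0;
}

If the soft limit turns out to be very small, raising the memlock ulimit for the cinder container (for example in its service definition) should let vitastor-cli work for the cinder user as well.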

[QEMU and vitastor-mon] qemu-img: Unknown protocol 'vitastor' and Type Error

Hi @vitalif, I hope you can help me solve this issue soon 😢😢. I really don't know what I should do now.
Host:

lsb_release -a
Distributor ID: Ubuntu
Description:    Ubuntu 21.04
Release:        21.04
Codename:       hirsute

dpkg -l | grep vitastor
ii  qemu                                  1:5.2+dfsg-10+vitastor1                                              amd64        fast processor emulator, dummy package
ii  vitastor                              0.6.5-1bullseye                                                      amd64        Vitastor, a fast software-defined clustered block storage

Cinder_volume (Inside container):

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.3 LTS
Release:        20.04
Codename:       focal

dpkg -l | grep qemu
ii  qemu                           1:5.2+dfsg-10+vitastor1           amd64        fast processor emulator, dummy package
ii  qemu-block-extra:amd64         1:4.2-3ubuntu6.17                 amd64        extra block backend modules for qemu-system and qemu-utils
ii  qemu-utils                     1:4.2-3ubuntu6.17                 amd64        QEMU utilities

dpkg -l | grep vitastor
ii  qemu                           1:5.2+dfsg-10+vitastor1           amd64        fast processor emulator, dummy package
ii  vitastor                       0.6.5-1bullseye                   amd64        Vitastor, a fast software-defined clustered block storage

Problem 1 (QEMU):

Before rebuilding QEMU:
I had already installed vitastor-cli inside the cinder_volume container, but when I try to use qemu-img (inside the container), it always fails (with both methods):

qemu-img convert -f raw cirros-0.4.0-x86_64-disk.img -p -O raw 'vitastor:etcd_host=127.0.0.1\:2379/v3:image=testimg'
qemu-img: Unknown protocol 'vitastor'

LD_PRELOAD=/usr/lib/x86_64-linux-gnu/qemu/block-vitastor.so qemu-img convert -f raw cirros-0.4.0-x86_64-disk.img -p -O raw 'vitastor:etcd_host=127.0.0.1\:2379/v3:image=testimg'
qemu-img: Unknown protocol 'vitastor'

After rebuilding QEMU:
I rebuilt QEMU (1:4.2-3ubuntu6.17), applying the patch vitastor/patches/qemu-4.2-vitastor.patch.
But it still has the same problem.

qemu-img convert -f raw cirros-0.4.0-x86_64-disk.img -p -O raw 'vitastor:etcd_host=127.0.0.1\:2379/v3:image=testimg'
Failed to initialize module: /usr/lib/x86_64-linux-gnu/qemu/block-vitastor.so
Note: only modules from the same build can be loaded.
Failed to open module: /var/run/qemu/Debian_1_4.2-3ubuntu6.17/block-vitastor.so: failed to map segment from shared object
qemu-img: Unknown protocol 'vitastor'


LD_PRELOAD=/usr/lib/x86_64-linux-gnu/qemu/block-vitastor.so qemu-img convert -f raw cirros-0.4.0-x86_64-disk.img -p -O raw 'vitastor:etcd_host=127.0.0.1\:2379/v3:image=testimg'
qemu-img: /home/moly7x/qemu3/qemu-4.2/util/module.c:125: module_load_file: Assertion `QTAILQ_EMPTY(&dso_init_list)' failed.
Aborted (core dumped)

This is how I rebuilt QEMU (I built it in another VM, Ubuntu 20.04):

sudo apt install dpkg-dev

#Uncomment Deb source
sudo nano /etc/apt/sources.list

sudo apt update
sudo apt-get source qemu=1:4.2-3ubuntu6.17
cd ./qemu-4.2/
sudo apt build-dep qemu=1:4.2-3ubuntu6.17

export QUILT_PATCHES=debian/patches
export QUILT_REFRESH_ARGS="-p ab --no-timestamps --no-index"
sudo quilt import ../qemu-4.2-vitastor.patch
sudo quilt push -a
sudo quilt refresh

dpkg-buildpackage -rfakeroot -b -uc -us

Problem 2 (BigInt):

Every time I start my VM (host), I also need to start etcd manually (it does not autostart).

systemctl status etcd
● etcd.service - etcd for vitastor
     Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2021-10-20 04:12:51 UTC; 3h 31min ago
    Process: 12802 ExecStartPre=chown -R etcd /var/lib/etcd0.etcd (code=exited, status=0/SUCCESS)
   Main PID: 12805 (etcd)
      Tasks: 11
     Memory: 113.1M

After I installed Vitastor, I started the vitastor-mon and vitastor.target services and ran commands like in the README example:

etcdctl --endpoints=http://127.0.0.1:2379 put /vitastor/config/global '{"immediate_commit":"all"}'

etcdctl --endpoints=http://127.0.0.1:2379 put /vitastor/config/pools '{"1":{"name":"testpool","scheme":"replicated","pg_size":2,"pg_minsize":1,"pg_count":256,"failure_domain":"host"}}'

etcdctl --endpoints=http://127.0.0.1:2379 put /vitastor/config/inode/1/1 '{"name":"testimg","size":2147483648}'

Then I ran a fio test:

fio -thread -ioengine=libfio_vitastor.so -name=test -bs=4M -direct=1 -iodepth=16 -rw=write -etcd=127.0.0.1:2379/v3 -image=testimg

test: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=vitastor_cluster, iodepth=16
fio-3.25
Starting 1 thread
No RDMA devices found
[OSD 0] Couldnt initialize RDMA, proceeding with TCP only
Jobs: 1 (f=1): [W(1)][97.3%][w=23.9MiB/s][w=5 IOPS][eta 00m:01s]
test: (groupid=0, jobs=1): err= 0: pid=28987: Thu Sep 23 03:28:17 2021
  write: IOPS=14, BW=57.7MiB/s (60.5MB/s)(2048MiB/35521msec); 0 zone resets
    slat (usec): min=84, max=1037, avg=188.40, stdev=81.25
    clat (msec): min=380, max=2349, avg=1107.34, stdev=345.66
     lat (msec): min=381, max=2349, avg=1107.53, stdev=345.65
    clat percentiles (msec):
     |  1.00th=[  443],  5.00th=[  600], 10.00th=[  701], 20.00th=[  785],
     | 30.00th=[  894], 40.00th=[  995], 50.00th=[ 1062], 60.00th=[ 1150],
     | 70.00th=[ 1250], 80.00th=[ 1435], 90.00th=[ 1603], 95.00th=[ 1754],
     | 99.00th=[ 1989], 99.50th=[ 2022], 99.90th=[ 2366], 99.95th=[ 2366],
     | 99.99th=[ 2366]
   bw (  KiB/s): min= 8159, max=130549, per=100.00%, avg=62443.09, stdev=28879.73, samples=65
   iops        : min=    1, max=   31, avg=14.60, stdev= 7.06, samples=65
  lat (msec)   : 500=1.17%, 750=13.09%, 1000=27.73%, 2000=57.23%, >=2000=0.78%
  cpu          : usr=0.28%, sys=2.74%, ctx=1105, majf=0, minf=290
  IO depths    : 1=0.2%, 2=0.4%, 4=0.8%, 8=58.2%, 16=40.4%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=95.4%, 8=4.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,512,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
  WRITE: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=2048MiB (2147MB), run=35521-35521msec

But after I restarted my VM and checked the vitastor-mon status, there was a problem:

systemctl status vitastor-mon
● vitastor-mon.service - Vitastor monitor
     Loaded: loaded (/etc/systemd/system/vitastor-mon.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2021-10-20 07:46:46 UTC; 47s ago
   Main PID: 318383 (node)
      Tasks: 7
     Memory: 15.3M
     CGroup: /system.slice/vitastor-mon.service
             └─318383 node /usr/lib/vitastor/mon/mon-main.js --etcd_url http://127.0.0.1:2379 --etcd_prefix /vitastor --etcd_start_timeou>

Oct 20 07:46:46 controller systemd[1]: Started Vitastor monitor.
Oct 20 07:46:46 controller node[318383]: Waiting to become master
Oct 20 07:46:51 controller node[318383]: Waiting to become master
Oct 20 07:46:56 controller node[318383]: Waiting to become master
Oct 20 07:47:01 controller node[318383]: Waiting to become master
Oct 20 07:47:06 controller node[318383]: Waiting to become master
Oct 20 07:47:11 controller node[318383]: Waiting to become master
Oct 20 07:47:16 controller node[318383]: Waiting to become master
Oct 20 07:47:21 controller node[318383]: Became master
Oct 20 07:47:21 controller node[318383]: Bad key in etcd: /vitastor/pool/stats/1 = {"used_raw_tb":0.001953125,"total_raw_tb":0.01949596405029297,"raw_to_usable":0.5,"space_efficiency":1}



Oct 20 07:50:40 controller node[318383]: TypeError: Do not know how to serialize a BigInt
Oct 20 07:50:40 controller node[318383]:     at JSON.stringify (<anonymous>)
Oct 20 07:50:40 controller node[318383]:     at Mon.update_total_stats (/usr/lib/vitastor/mon/mon.js:1355:33)
Oct 20 07:50:40 controller node[318383]:     at Timeout._onTimeout (/usr/lib/vitastor/mon/mon.js:1380:18)
Oct 20 07:50:40 controller node[318383]:     at listOnTimeout (internal/timers.js:554:17)
Oct 20 07:50:40 controller node[318383]:     at processTimers (internal/timers.js:497:7)

compile error for tag-0.9.0, may be a code bug

In osd_primary_subops.cpp, the function submit_primary_stab_subops may be missing a pair of braces for the "len" member of struct blockstore_op_t:
void osd_t::submit_primary_stab_subops(osd_op_t *cur_op)
{
    ……
        subops[i].bs_op = new blockstore_op_t((blockstore_op_t){
            .opcode = BS_OP_STABLE,
            .callback = [subop = &subops[i], this](blockstore_op_t *bs_subop)
            {
                handle_primary_bs_subop(subop);
            },
            {
                .len = (uint32_t)stab_osd.len,
            },
            .buf = (void*)(op_data->unstable_writes + stab_osd.start),
        });
        bs->enqueue_op(subops[i].bs_op);
    }
    ……
}

[vitastor] When vdbench rewrites 8000000 small files, it reports an error

Hi @vitalif ,

When I format an xfs file system on a vitastor NBD device and use the vdbench test tool to read and write, the following problems occur occasionally. The performance of vitastor is very good now, and I think it will be perfect with improved stability.

vdbench log

Mar 15, 2022 ..Interval.. .ReqstdOps... ...cpu%...  read ....read..... ....write.... ..mb/sec... mb/sec .xfer.. ...mkdir.... ...rmdir.... ...create... ....open.... ...close.... ...delete... ..getattr... ..setattr... ...a[502/1854]
                            rate   resp total  sys   pct   rate   resp   rate   resp  read write  total    size  rate   resp  rate   resp  rate   resp  rate   resp  rate   resp  rate   resp  rate   resp  rate   resp  rate   resp  
14:57:08.012            1    0.0  0.000  10.9 5.12   0.0    0.0  0.000    0.0  0.000  0.00  0.00   0.00       0   0.0  0.000   0.0  0.000   0.0  0.000   0.0  0.000   0.0  0.000   0.0  0.000   0.0  0.000   0.0  0.000   0.0  0.000  
14:57:08.021      avg_2-1 NaN  0.000 NaN NaN   0.0 NaN  0.000 NaN  0.000 NaN NaN NaN       0 NaN  0.000 NaN  0.000 NaN  0.000 NaN  0.000 NaN  0.000 NaN  0.000 NaN  0.000 NaN  0.000 NaN  0.000                                       
14:57:08.021      std_2-1                                                                                                                                                                                                             
14:57:08.021      max_2-1                                                                                                                                                                                                             
14:57:08.130                                                                                                                                                                                                                          
14:57:08.130 Miscellaneous statistics:                                                                                                                                                                                                
14:57:08.130 (These statistics do not include activity between the last reported interval and shutdown.)                                                                                                                              
14:57:08.130 DIR_EXISTS          Directory may not exist (yet):                   14          4/sec                                                                                                                                   
14:57:08.130 FILE_MAY_NOT_EXIST  File may not exist (yet):                        16          5/sec                                                                                                                                   
14:57:08.130                                                                                                                                                                                                                          
14:57:38.173 Waiting for slave synchronization: hd1-0. Building and validating file structure(s).                                                                                                                                     
14:58:08.063 hd1-0: Completing the creation of internal file structure for anchor=/root/performance/fg-mp1/verilog_1: 4,096,000 files.                                                                                                
14:58:08.202 Waiting for slave synchronization: hd1-0. Building and validating file structure(s).                                                                                                                                     
14:58:38.232 Waiting for slave synchronization: hd1-0. Building and validating file structure(s).                                                                                                                                     
14:59:08.262 Waiting for slave synchronization: hd1-0. Building and validating file structure(s).                                                                                                                                     
14:59:08.411 hd1-0: Completing the creation of internal file structure for anchor=/root/performance/fg-mp2/verilog_2: 4,096,000 files.                                                                                                
14:59:09.000 Starting RD=rd2; elapsed=300; fwdrate=max. For loops: xfersize=64k threads=128                                                                                                                                           
14:59:09.461 hd1-0: 14:59:09.460 op: write  lun: /root/performance/fg-mp1/verilog_1/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_1.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f0064.file lba:            0 0x00000000 xfer:     1024 errno: EINVAL: 'Invalid argument'
14:59:09.475 hd1-0: 14:59:09.470 Write error using file /root/performance/fg-mp1/verilog_1/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_1.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f0064.file                                         
Error:         EINVAL: 'Invalid argument'                                                                          
lba:           0                                                                                                                                                                                                                      
xfersize:      1024                                                                                                
blocks_done:   0                                                                                                                                                                                                                      
bytes_done:    0                                                                                                   
open_for_read: false                                                                                                                                                                                                                  
fhandle:       17                                                                                                  
14:59:09.506 hd1-0: 14:59:09.472 op: read   lun: /root/performance/fg-mp1/verilog_1/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_1.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f0008.file lba:            0 0x00000000 xfer:     2048 errno: EINVAL: 'Invalid argument'
14:59:09.508 hd1-0: 14:59:09.507 Read error using file /root/performance/fg-mp1/verilog_1/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_1.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f0008.file                                          
Error:         EINVAL: 'Invalid argument'
lba:           0                                                                                                                                                                                                                      
xfersize:      2048                                                                                                
blocks_done:   0                                                                                                                                                                                                                      
bytes_done:    0
open_for_read: true
fhandle:       13
14:59:09.633 hd1-0: 14:59:09.633 op: read   lun: /root/performance/fg-mp1/verilog_1/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_1.dir/vdb.6_2.dir/vdb.7_1.dir/vdb_f0474.file lba:            0 0x00000000 xfer:     1024 errno: EINVAL: 'Invalid argument'
14:59:09.635 hd1-0: 14:59:09.634 op: read   lun: /root/performance/fg-mp1/verilog_1/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_1.dir/vdb.6_2.dir/vdb.7_1.dir/vdb_f0473.file lba:            0 0x00000000 xfer:     1024 errno: EINVAL: 'Invalid argument'
14:59:09.635 hd1-0: 14:59:09.635 Read error using file /root/performance/fg-mp1/verilog_1/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_1.dir/vdb.6_2.dir/vdb.7_1.dir/vdb_f0474.file
Error:         EINVAL: 'Invalid argument'
lba:           0
xfersize:      1024
blocks_done:   0
bytes_done:    0
open_for_read: true
fhandle:       97
14:59:09.636 hd1-0: 14:59:09.636 op: read   lun: /root/performance/fg-mp1/verilog_1/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_1.dir/vdb.6_2.dir/vdb.7_1.dir/vdb_f0472.file lba:            0 0x00000000 xfer:     1024 errno: EINVAL: 'Invalid argument'

file size

root@vita-1:~/output# ls -l /root/performance/fg-mp1/verilog_1/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_1.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f0064.file
-rw-r--r-- 1 root root 1024 Mar 15 14:36 /root/performance/fg-mp1/verilog_1/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_1.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f0064.file
root@vita-1:~/output# ls -l /root/performance/fg-mp1/verilog_1/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_1.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f0008.file
-rw-r--r-- 1 root root 2048 Mar 15 14:49 /root/performance/fg-mp1/verilog_1/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_1.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f0008.file
root@vita-1:~/output# ls -l  /root/performance/fg-mp1/verilog_1/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_1.dir/vdb.6_2.dir/vdb.7_1.dir/vdb_f0474.file
-rw-r--r-- 1 root root 1024 Mar 15 15:18 /root/performance/fg-mp1/verilog_1/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_1.dir/vdb.6_2.dir/vdb.7_1.dir/vdb_f0474.file

csi driver: invalid syntax for size

Hey, I think I have everything set up on the latest version, but I'm running into an error in the csi driver about the size. I'm also sending poolId: "2", but I see that value isn't being added to the CLI command.

controllerserver.go:136] vitastor-cli create pvc-00fca7d6-cdf6-48f2-9065-f5cc38a10eab 
-s � --pool  --etcd_address 10.10.0.161:2379,10.10.0.162:2379,10.10.0.163:2379,
10.10.0.164:2379,10.10.0.165:2379,10.10.0.166:2379 --etcd_prefix /vitastor 
failed: Invalid syntax for size: �
controllerserver.go:145] received controller create volume request {
  "capacity_range": {
    "required_bytes": 10737418240
  },
  "name": "pvc-00fca7d6-cdf6-48f2-9065-f5cc38a10eab",
  "parameters": {
    "csi.storage.k8s.io/pv/name": "pvc-00fca7d6-cdf6-48f2-9065-f5cc38a10eab",
    "csi.storage.k8s.io/pvc/name": "test-vitastor-pvc",
    "csi.storage.k8s.io/pvc/namespace": "vitastor-system",
    "etcdPrefix": "/vitastor",
    "etcdUrl": "10.10.0.161:2379,10.10.0.162:2379,10.10.0.163:2379,10.10.0.164:2379,10.10.0.165:2379,10.10.0.166:2379",
    "etcdVolumePrefix": "",
    "poolId": "2"
  },
  "volume_capabilities": [
    {
      "AccessType": {
        "Mount": {
          "fs_type": "ext4"
        }
      },
      "access_mode": {
        "mode": 1
      }
    }
  ]
}

AWS Environment and performance questions

👋 Quick question: what's the expected performance using Vitastor in AWS with local NVMes?

For instance, using d3en.xlarge instances I get these numbers:

fio -ioengine=libaio -direct=1 -invalidate=1 -name=test -bs=4M -iodepth=32 -rw=write -runtime=60 -filename=/dev/nbd0
test: (g=0): rw=write, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=libaio, iodepth=32
fio-3.33
Starting 1 process
Jobs: 1 (f=1): [W(1)][32.1%][eta 02m:11s]
test: (groupid=0, jobs=1): err= 0: pid=5630: Thu Jul 13 17:03:31 2023
  write: IOPS=13, BW=54.7MiB/s (57.4MB/s)(3404MiB/62225msec); 0 zone resets
    slat (usec): min=111, max=2743, avg=551.34, stdev=445.80
    clat (msec): min=43, max=4258, avg=2336.55, stdev=541.18
     lat (msec): min=44, max=4259, avg=2337.10, stdev=541.21
    clat percentiles (msec):
     |  1.00th=[  625],  5.00th=[ 1687], 10.00th=[ 1838], 20.00th=[ 2056],
     | 30.00th=[ 2165], 40.00th=[ 2232], 50.00th=[ 2299], 60.00th=[ 2366],
     | 70.00th=[ 2467], 80.00th=[ 2567], 90.00th=[ 3071], 95.00th=[ 3272],
     | 99.00th=[ 3977], 99.50th=[ 4111], 99.90th=[ 4245], 99.95th=[ 4245],
     | 99.99th=[ 4245]
   bw (  KiB/s): min= 8192, max=122880, per=100.00%, avg=59977.14, stdev=19769.18, samples=112
   iops        : min=    2, max=   30, avg=14.64, stdev= 4.83, samples=112
  lat (msec)   : 50=0.12%, 250=0.82%, 750=0.94%, 2000=15.86%, >=2000=82.26%
  cpu          : usr=0.33%, sys=0.07%, ctx=994, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=96.4%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,851,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=54.7MiB/s (57.4MB/s), 54.7MiB/s-54.7MiB/s (57.4MB/s-57.4MB/s), io=3404MiB (3569MB), run=62225-62225msec

Disk stats (read/write):
  nbd0: ios=51/589, merge=0/286, ticks=1847/1381521, in_queue=1383369, util=99.96%

which is slow.

OS: Debian 12 - no changes
Kernel: 6.1.0-10-cloud-amd64

Do I need to tune sysctl?

Thanks!

[cinder-vitastor] ERROR cinder.volume.drivers.vitastor KeyError: 'total_raw_tb'

Hello @vitalif, when I copy the vitastor driver into the cinder-volume container, I get an error in cinder-volume.log:

2021-10-22 03:56:01.993 25 INFO cinder.manager [req-774e5bce-2a4f-4b8c-90e7-76c60c55c84d - - - - -] Initiating service 3 cleanup
2021-10-22 03:56:01.996 25 INFO cinder.manager [req-774e5bce-2a4f-4b8c-90e7-76c60c55c84d - - - - -] Service 3 cleanup completed.
2021-10-22 03:56:02.039 25 INFO cinder.volume.manager [req-774e5bce-2a4f-4b8c-90e7-76c60c55c84d - - - - -] Initializing RPC dependent components of volume driver VitastorDriver (N/A)
2021-10-22 03:56:02.043 25 ERROR cinder.volume.drivers.vitastor [req-774e5bce-2a4f-4b8c-90e7-76c60c55c84d - - - - -] error getting vitastor pool stats: 'total_raw_tb': KeyError: 'total_raw_tb'
2021-10-22 03:56:02.043 25 ERROR cinder.volume.drivers.vitastor Traceback (most recent call last):
2021-10-22 03:56:02.043 25 ERROR cinder.volume.drivers.vitastor   File "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/drivers/vitastor.py", line 270, in _update_volume_stats
2021-10-22 03:56:02.043 25 ERROR cinder.volume.drivers.vitastor     stats['free_capacity_gb'] = round(1024.0*(pool_stats['total_raw_tb']-pool_stats['used_raw_tb'])/pool_stats['raw_to_usable'], 2)
2021-10-22 03:56:02.043 25 ERROR cinder.volume.drivers.vitastor KeyError: 'total_raw_tb'
2021-10-22 03:56:02.043 25 ERROR cinder.volume.drivers.vitastor
2021-10-22 03:56:02.053 25 INFO cinder.volume.manager [req-774e5bce-2a4f-4b8c-90e7-76c60c55c84d - - - - -] Driver post RPC initialization completed successfully.
2021-10-22 03:57:45.065 25 ERROR cinder.volume.drivers.vitastor [req-381ace47-2928-46e8-aeed-5aa70cd89c79 - - - - -] error getting vitastor pool stats: 'total_raw_tb': KeyError: 'total_raw_tb'
2021-10-22 03:57:45.065 25 ERROR cinder.volume.drivers.vitastor Traceback (most recent call last):
2021-10-22 03:57:45.065 25 ERROR cinder.volume.drivers.vitastor   File "/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/drivers/vitastor.py", line 270, in _update_volume_stats
2021-10-22 03:57:45.065 25 ERROR cinder.volume.drivers.vitastor     stats['free_capacity_gb'] = round(1024.0*(pool_stats['total_raw_tb']-pool_stats['used_raw_tb'])/pool_stats['raw_to_usable'], 2)
2021-10-22 03:57:45.065 25 ERROR cinder.volume.drivers.vitastor KeyError: 'total_raw_tb'
2021-10-22 03:57:45.065 25 ERROR cinder.volume.drivers.vitastor

This is how I configured cinder.conf and copied the driver into the cinder_volume container:

[DEFAULT]
#enabled_backends = lvm-1
enabled_backends = vitastor

[vitastor]
vitastor_etcd_address = http://127.0.0.1:2379
vitastor_pool_id = 1
backend_host = vitastor:hdd
volume_backend_name = vitastor
volume_driver = cinder.volume.drivers.vitastor.VitastorDriver
sudo docker cp ./vitastor/patches/cinder-vitastor.py cinder_volume:/var/lib/kolla/venv/lib/python3.8/site-packages/cinder/volume/drivers/vitastor.py

sudo docker restart cinder_volume

I figured out how to solve this error with a small fix in the driver:

diff --git a/patches/cinder-vitastor.py b/patches/cinder-vitastor.py
index bb7e9f1..4eb29d3 100644
--- a/patches/cinder-vitastor.py
+++ b/patches/cinder-vitastor.py
@@ -266,7 +266,7 @@ class VitastorDriver(driver.CloneableImageVD,
             stats['provisioned_capacity_gb'] = round(total_provisioned/1024.0/1024.0/1024.0, 2)
             pool_stats = pool_stats['responses'][0]['kvs']
             if len(pool_stats):
-                pool_stats = pool_stats[0]
+                pool_stats = pool_stats[0]['value']
                 stats['free_capacity_gb'] = round(1024.0*(pool_stats['total_raw_tb']-pool_stats['used_raw_tb'])/pool_stats['raw_to_usable'], 2)
                 stats['total_capacity_gb'] = round(1024.0*pool_stats['total_raw_tb'], 2)
             stats['backend_state'] = 'up'

cinder-volume.log after the fix:

2021-10-22 04:20:02.101 7 INFO oslo_service.service [req-2823e21c-e355-43a5-b0df-fd9ba5a81132 - - - - -] Starting 1 workers
2021-10-22 04:20:02.105 24 INFO cinder.service [-] Starting cinder-volume node (version 18.0.1)
2021-10-22 04:20:02.115 24 INFO cinder.volume.manager [req-05633245-5463-4266-ada6-c091c662dbe6 - - - - -] Starting volume driver VitastorDriver (N/A)
2021-10-22 04:20:02.121 24 INFO cinder.volume.driver [req-05633245-5463-4266-ada6-c091c662dbe6 - - - - -] Driver hasn't implemented _init_vendor_properties()
2021-10-22 04:20:02.148 24 INFO cinder.keymgr.migration [req-adb12c7b-9135-4215-8fc8-ea5f32c5444d - - - - -] Not migrating encryption keys because the ConfKeyManager's fixed_key is not in use.
2021-10-22 04:20:02.167 24 INFO cinder.volume.manager [req-05633245-5463-4266-ada6-c091c662dbe6 - - - - -] Driver initialization completed successfully.
2021-10-22 04:20:02.171 24 INFO cinder.manager [req-05633245-5463-4266-ada6-c091c662dbe6 - - - - -] Initiating service 3 cleanup
2021-10-22 04:20:02.174 24 INFO cinder.manager [req-05633245-5463-4266-ada6-c091c662dbe6 - - - - -] Service 3 cleanup completed.
2021-10-22 04:20:02.209 24 INFO cinder.volume.manager [req-05633245-5463-4266-ada6-c091c662dbe6 - - - - -] Initializing RPC dependent components of volume driver VitastorDriver (N/A)
2021-10-22 04:20:02.216 24 INFO cinder.volume.manager [req-05633245-5463-4266-ada6-c091c662dbe6 - - - - -] Driver post RPC initialization completed successfully.

I have created a pull request to resolve this: #26

vitastor-nfs issue without --foreground

I'm unable to create the NFS mount with vitastor-nfs running in the background. I also have the same issue with the mount subcommand.

If I run it like this:

vitastor-nfs start --portmap 0 --port 2049 --pool iso-images --fs isofs --foreground

I can mount and use NFS as normal with this.
Mounted using this command:
mount localhost:/ /mnt/isos/ -o mountport=2049,port=2049,nfsvers=3,soft,nolock,tcp

[vitastor-mon] TypeError: Do not know how to serialize a BigInt

Hi @vitalif ,

I use etcd 3.5.0, and after running mon for a while, the following error appears:

root@vita-1:~# node /usr/lib/vitastor/mon/mon-main.js --etcd_url 'http://192.168.1.215:2379,http://192.168.1.216:2379,http://192.168.1.217:2379' --etcd_prefix '/vitastor' --etcd_start_timeout 5
Waiting to become master
Waiting to become master
Waiting to become master
Waiting to become master
Waiting to become master
Became master
Bad key in etcd: /vitastor/pool/stats/1 = {"used_raw_tb":0.014866113662719727}
Bad key in etcd: /vitastor/pool/stats/2 = {"used_raw_tb":0,"total_raw_tb":8.730691909790039,"raw_to_usable":1.5,"space_efficiency":1}
TypeError: Do not know how to serialize a BigInt
    at JSON.stringify (<anonymous>)
    at Mon.update_total_stats (/usr/lib/vitastor/mon/mon.js:1355:33)
    at Timeout._onTimeout (/usr/lib/vitastor/mon/mon.js:1380:18)
    at listOnTimeout (internal/timers.js:554:17)
    at processTimers (internal/timers.js:497:7)
TypeError: Do not know how to serialize a BigInt
    at JSON.stringify (<anonymous>)
    at Mon.update_total_stats (/usr/lib/vitastor/mon/mon.js:1355:33)
    at Timeout._onTimeout (/usr/lib/vitastor/mon/mon.js:1380:18)
    at listOnTimeout (internal/timers.js:554:17)
    at processTimers (internal/timers.js:497:7)
TypeError: Do not know how to serialize a BigInt
    at JSON.stringify (<anonymous>)
    at Mon.update_total_stats (/usr/lib/vitastor/mon/mon.js:1355:33)
    at Timeout._onTimeout (/usr/lib/vitastor/mon/mon.js:1380:18)
    at listOnTimeout (internal/timers.js:554:17)
    at processTimers (internal/timers.js:497:7)

Data in etcd

root@vita-3:~# etcdctl --endpoints 192.168.1.215:2379 get "" --prefix
/vitastor/config/inode/1/1
{"name":"ubuntu18.04-disk-1","size":214748364800}
/vitastor/config/inode/1/2
{"name":"ubuntu18.04-disk-2","size":214748364800}
/vitastor/config/inode/1/3
{"name":"ubuntu18.04-disk-3","size":214748364800}
/vitastor/config/inode/1/4
{"name":"ubuntu18.04-disk-4","size":214748364800}
/vitastor/config/inode/1/5
{"name":"ubuntu18.04-disk-5","size":214748364800}
/vitastor/config/inode/1/6
{"name":"ubuntu18.04-disk-6","size":214748364800}
/vitastor/config/inode/1/7
{"name":"win10-disk-1","size":214748364800}
/vitastor/config/inode/1/8
{"name":"win10-disk-2","size":214748364800}
/vitastor/config/inode/2/1
{"name":"test-image-1","size":214748364800}
/vitastor/config/pgs
{"hash":"1764852fadb3828f3465b9cecaa646f99447240f","items":{"2":{"1":{"osd_set":["1","2","3"],"primary":"1"},"2":{"osd_set":["1","2","3"],"primary":"2"},"3":{"osd_set":["1","2","3"],"primary":"2"},"4":{"osd_set":["1","2","3"],"primary":"1"},"5":{"osd_set":["1","2","3"],"primary":"1"},"6":{"osd_set":["1","2","3"],"primary":"2"},"7":{"osd_set":["1","2","3"],"primary":"1"},"8":{"osd_set":["1","2","3"],"primary":"2"},"9":{"osd_set":["1","2","3"],"primary":"2"},"10":{"osd_set":["1","2","3"],"primary":"2"},"11":{"osd_set":["1","2","3"],"primary":"1"},"12":{"osd_set":["1","2","3"],"primary":"1"},"13":{"osd_set":["1","2","3"],"primary":"1"},"14":{"osd_set":["1","2","3"],"primary":"2"},"15":{"osd_set":["1","2","3"],"primary":"1"},"16":{"osd_set":["1","2","3"],"primary":"1"}}}}
/vitastor/config/pools
{"2":{"name":"ecpool","scheme":"jerasure","pg_size":3,"parity_chunks":1,"pg_minsize":2,"pg_count":16,"failure_domain":"host"}}
/vitastor/history/last_clean_pgs
{"hash":"1764852fadb3828f3465b9cecaa646f99447240f","items":{"2":{"1":{"osd_set":["1","2","3"],"primary":"1"},"2":{"osd_set":["1","2","3"],"primary":"2"},"3":{"osd_set":["1","2","3"],"primary":"2"},"4":{"osd_set":["1","2","3"],"primary":"1"},"5":{"osd_set":["1","2","3"],"primary":"1"},"6":{"osd_set":["1","2","3"],"primary":"2"},"7":{"osd_set":["1","2","3"],"primary":"1"},"8":{"osd_set":["1","2","3"],"primary":"2"},"9":{"osd_set":["1","2","3"],"primary":"2"},"10":{"osd_set":["1","2","3"],"primary":"2"},"11":{"osd_set":["1","2","3"],"primary":"1"},"12":{"osd_set":["1","2","3"],"primary":"1"},"13":{"osd_set":["1","2","3"],"primary":"1"},"14":{"osd_set":["1","2","3"],"primary":"2"},"15":{"osd_set":["1","2","3"],"primary":"1"},"16":{"osd_set":["1","2","3"],"primary":"1"}}}}
/vitastor/inode/stats/1/1
{"raw_used":"16345464832","read":{"count":"19166","usec":"3819692","bytes":"634892288"},"write":{"count":"110613","usec":"313586710","bytes":"8007872512"},"delete":{"count":"0","usec":"0","bytes":"0"}}
/vitastor/inode/stats/1/2
{"raw_used":"0","read":{"count":"509","usec":"8030","bytes":"10932224"},"write":{"count":"0","usec":"0","bytes":"0"},"delete":{"count":"0","usec":"0","bytes":"0"}}
/vitastor/osd/inodestats/1
{"1": {"1": {"delete": {"bytes": 0, "count": 0, "usec": 0}, "read": {"bytes": 171704320, "count": 5310, "usec": 986526}, "write": {"bytes": 2018930688, "count": 27823, "usec": 129020623}}, "2": {"delete": {"bytes": 0, "count": 0, "usec": 0}, "read": {"bytes": 5492736, "count": 271, "usec": 3584}, "write": {"bytes": 0, "count": 0, "usec": 0}}}, "2": {"1": {"delete": {"bytes": 1051459584, "count": 4011, "usec": 4459946}, "read": {"bytes": 1675264, "count": 107, "usec": 32561}, "write": {"bytes": 1023574016, "count": 4249, "usec": 21526417}}}}
/vitastor/osd/inodestats/2
{"1": {"1": {"delete": {"bytes": 0, "count": 0, "usec": 0}, "read": {"bytes": 272969728, "count": 7590, "usec": 1634664}, "write": {"bytes": 3488534528, "count": 48086, "usec": 107710839}}, "2": {"delete": {"bytes": 0, "count": 0, "usec": 0}, "read": {"bytes": 4087808, "count": 218, "usec": 3835}, "write": {"bytes": 0, "count": 0, "usec": 0}}}, "2": {"1": {"delete": {"bytes": 795344896, "count": 3034, "usec": 3999795}, "read": {"bytes": 548864, "count": 10, "usec": 6770}, "write": {"bytes": 795009024, "count": 3209, "usec": 15870628}}}}
/vitastor/osd/inodestats/3
{"1": {"1": {"delete": {"bytes": 0, "count": 0, "usec": 0}, "read": {"bytes": 190218240, "count": 6266, "usec": 1198502}, "write": {"bytes": 2500407296, "count": 34704, "usec": 76855248}}, "2": {"delete": {"bytes": 0, "count": 0, "usec": 0}, "read": {"bytes": 1351680, "count": 20, "usec": 611}, "write": {"bytes": 0, "count": 0, "usec": 0}}}}
/vitastor/osd/space/1
{"1": {"1": 5106302976}, "2": {"1": 0}}
/vitastor/osd/space/2
{"1": {"1": 5625610240}, "2": {"1": 0}}
/vitastor/osd/space/3
{"1": {"1": 5613551616}, "2": {"1": 0}}
/vitastor/osd/state/1
{"addresses": ["192.168.1.215"], "blockstore_enabled": true, "host": "vita-1", "port": 40589, "primary_enabled": true, "state": "up"}
/vitastor/osd/state/2
{"addresses": ["192.168.1.216"], "blockstore_enabled": true, "host": "vita-2", "port": 37407, "primary_enabled": true, "state": "up"}
/vitastor/osd/state/3
{"addresses": ["192.168.1.217"], "blockstore_enabled": true, "host": "vita-3", "port": 38443, "primary_enabled": true, "state": "up"}
/vitastor/osd/stats/1
{"blockstore_ready": true, "free": 3194723106816, "host": "vita-1", "op_stats": {"delete": {"bytes": 0, "count": 7045, "usec": 1650423}, "list": {"bytes": 0, "count": 49, "usec": 66749}, "ping": {"bytes": 0, "count": 5143, "usec": 5558}, "primary_delete": {"bytes": 0, "count": 4011, "usec": 4459946}, "primary_read": {"bytes": 178872320, "count": 5688, "usec": 1022671}, "primary_sync": {"bytes": 0, "count": 9836, "usec": 27972}, "primary_write": {"bytes": 3042504704, "count": 32072, "usec": 150547040}, "read": {"bytes": 222588928, "count": 41594, "usec": 1407061}, "rollback": {"bytes": 0, "count": 0, "usec": 0}, "sec_read_bmp": {"bytes": 0, "count": 0, "usec": 0}, "show_config": {"bytes": 0, "count": 10, "usec": 265}, "stabilize": {"bytes": 0, "count": 7458, "usec": 773075}, "sync": {"bytes": 0, "count": 40, "usec": 3280415}, "sync_stab_all": {"bytes": 0, "count": 0, "usec": 0}, "write": {"bytes": 910262272, "count": 7458, "usec": 2195389}, "write_stable": {"bytes": 2973429760, "count": 68844, "usec": 11774571}}, "recovery_stats": {"degraded": {"bytes": 0, "count": 0}, "misplaced": {"bytes": 0, "count": 0}}, "size": 3199832424448, "subop_stats": {"delete": {"count": 8022, "usec": 7558118}, "list": {"count": 26, "usec": 65231}, "ping": {"count": 2949, "usec": 1218621}, "primary_delete": {"count": 0, "usec": 0}, "primary_read": {"count": 0, "usec": 0}, "primary_sync": {"count": 0, "usec": 0}, "primary_write": {"count": 0, "usec": 0}, "read": {"count": 360, "usec": 2266270}, "rollback": {"count": 0, "usec": 0}, "sec_read_bmp": {"count": 0, "usec": 0}, "show_config": {"count": 4, "usec": 5914}, "stabilize": {"count": 8498, "usec": 8202486}, "sync": {"count": 26, "usec": 2116791}, "sync_stab_all": {"count": 0, "usec": 0}, "write": {"count": 8498, "usec": 26592079}, "write_stable": {"count": 27823, "usec": 99824971}}, "time": "1629356735.338"}
/vitastor/osd/stats/2
{"blockstore_ready": true, "free": 3194203668480, "host": "vita-2", "op_stats": {"delete": {"bytes": 0, "count": 7045, "usec": 1623865}, "list": {"bytes": 0, "count": 44, "usec": 96611}, "ping": {"bytes": 0, "count": 5137, "usec": 5688}, "primary_delete": {"bytes": 0, "count": 3034, "usec": 3999795}, "primary_read": {"bytes": 277606400, "count": 7818, "usec": 1645269}, "primary_sync": {"bytes": 0, "count": 9836, "usec": 27139}, "primary_write": {"bytes": 4283543552, "count": 51295, "usec": 123581467}, "read": {"bytes": 324321280, "count": 62328, "usec": 2005357}, "rollback": {"bytes": 0, "count": 0, "usec": 0}, "sec_read_bmp": {"bytes": 0, "count": 0, "usec": 0}, "show_config": {"bytes": 0, "count": 9, "usec": 230}, "stabilize": {"bytes": 0, "count": 7458, "usec": 609317}, "sync": {"bytes": 0, "count": 37, "usec": 4369828}, "sync_stab_all": {"bytes": 0, "count": 0, "usec": 0}, "write": {"bytes": 908320768, "count": 7458, "usec": 1799322}, "write_stable": {"bytes": 2030223360, "count": 76359, "usec": 12506844}}, "recovery_stats": {"degraded": {"bytes": 0, "count": 0}, "misplaced": {"bytes": 0, "count": 0}}, "size": 3199832424448, "subop_stats": {"delete": {"count": 6068, "usec": 6587641}, "list": {"count": 25, "usec": 77516}, "ping": {"count": 2950, "usec": 1239818}, "primary_delete": {"count": 0, "usec": 0}, "primary_read": {"count": 0, "usec": 0}, "primary_sync": {"count": 0, "usec": 0}, "primary_write": {"count": 0, "usec": 0}, "read": {"count": 181, "usec": 388382}, "rollback": {"count": 0, "usec": 0}, "sec_read_bmp": {"count": 0, "usec": 0}, "show_config": {"count": 3, "usec": 3460}, "stabilize": {"count": 6418, "usec": 6696214}, "sync": {"count": 25, "usec": 49339}, "sync_stab_all": {"count": 0, "usec": 0}, "write": {"count": 6418, "usec": 19721826}, "write_stable": {"count": 48086, "usec": 71810987}}, "time": "1629356742.831"}
/vitastor/osd/stats/3
{"blockstore_ready": true, "free": 3194215858176, "host": "vita-3", "op_stats": {"delete": {"bytes": 0, "count": 7045, "usec": 1260279}, "list": {"bytes": 0, "count": 27, "usec": 30342}, "ping": {"bytes": 0, "count": 5102, "usec": 5591}, "primary_delete": {"bytes": 0, "count": 0, "usec": 0}, "primary_read": {"bytes": 191569920, "count": 6286, "usec": 1199113}, "primary_sync": {"bytes": 0, "count": 9731, "usec": 27622}, "primary_write": {"bytes": 2500407296, "count": 34704, "usec": 76855248}, "read": {"bytes": 191569920, "count": 40990, "usec": 1350239}, "rollback": {"bytes": 0, "count": 0, "usec": 0}, "sec_read_bmp": {"bytes": 0, "count": 0, "usec": 0}, "show_config": {"bytes": 0, "count": 6, "usec": 145}, "stabilize": {"bytes": 0, "count": 7458, "usec": 575058}, "sync": {"bytes": 0, "count": 27, "usec": 3087229}, "sync_stab_all": {"bytes": 0, "count": 0, "usec": 0}, "write": {"bytes": 954507264, "count": 7458, "usec": 1177390}, "write_stable": {"bytes": 3004219392, "count": 76023, "usec": 11734265}}, "recovery_stats": {"degraded": {"bytes": 0, "count": 0}, "misplaced": {"bytes": 0, "count": 0}}, "size": 3199832424448, "subop_stats": {"delete": {"count": 0, "usec": 0}, "list": {"count": 5, "usec": 41493}, "ping": {"count": 2950, "usec": 1137130}, "primary_delete": {"count": 0, "usec": 0}, "primary_read": {"count": 0, "usec": 0}, "primary_sync": {"count": 0, "usec": 0}, "primary_write": {"count": 0, "usec": 0}, "read": {"count": 0, "usec": 0}, "rollback": {"count": 0, "usec": 0}, "sec_read_bmp": {"count": 0, "usec": 0}, "show_config": {"count": 2, "usec": 2287}, "stabilize": {"count": 0, "usec": 0}, "sync": {"count": 5, "usec": 5596}, "sync_stab_all": {"count": 0, "usec": 0}, "write": {"count": 0, "usec": 0}, "write_stable": {"count": 34704, "usec": 51596277}}, "time": "1629356745.388"}
/vitastor/pg/state/2/1
{"peers": [1, 2, 3], "primary": 1, "state": ["active"]}
/vitastor/pg/state/2/10
{"peers": [1, 2, 3], "primary": 2, "state": ["active"]}
/vitastor/pg/state/2/11
{"peers": [1, 2, 3], "primary": 1, "state": ["active"]}
/vitastor/pg/state/2/12
{"peers": [1, 2, 3], "primary": 1, "state": ["active"]}
/vitastor/pg/state/2/13
{"peers": [1, 2, 3], "primary": 1, "state": ["active"]}
/vitastor/pg/state/2/14
{"peers": [1, 2, 3], "primary": 2, "state": ["active"]}
/vitastor/pg/state/2/15
{"peers": [1, 2, 3], "primary": 1, "state": ["active"]}
/vitastor/pg/state/2/16
{"peers": [1, 2, 3], "primary": 1, "state": ["active"]}
/vitastor/pg/state/2/2
{"peers": [1, 2, 3], "primary": 2, "state": ["active"]}
/vitastor/pg/state/2/3
{"peers": [1, 2, 3], "primary": 2, "state": ["active"]}
/vitastor/pg/state/2/4
{"peers": [1, 2, 3], "primary": 1, "state": ["active"]}
/vitastor/pg/state/2/5
{"peers": [1, 2, 3], "primary": 1, "state": ["active"]}
/vitastor/pg/state/2/6
{"peers": [1, 2, 3], "primary": 2, "state": ["active"]}
/vitastor/pg/state/2/7
{"peers": [1, 2, 3], "primary": 1, "state": ["active"]}
/vitastor/pg/state/2/8
{"peers": [1, 2, 3], "primary": 2, "state": ["active"]}
/vitastor/pg/state/2/9
{"peers": [1, 2, 3], "primary": 2, "state": ["active"]}
/vitastor/pg/stats/1/1
{"clean_count": 3205, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3205, "write_osd_set": [1, 2]}
/vitastor/pg/stats/1/10
{"clean_count": 3056, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3056, "write_osd_set": [1, 2]}
/vitastor/pg/stats/1/11
{"clean_count": 3053, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3053, "write_osd_set": [3, 1]}
/vitastor/pg/stats/1/12
{"clean_count": 3034, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3034, "write_osd_set": [3, 1]}
/vitastor/pg/stats/1/13
{"clean_count": 3860, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3860, "write_osd_set": [2, 3]}
/vitastor/pg/stats/1/14
{"clean_count": 3020, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3020, "write_osd_set": [3, 1]}
/vitastor/pg/stats/1/15
{"clean_count": 3015, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3015, "write_osd_set": [1, 2]}
/vitastor/pg/stats/1/16
{"clean_count": 3806, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3806, "write_osd_set": [2, 3]}
/vitastor/pg/stats/1/2
{"clean_count": 3122, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3122, "write_osd_set": [2, 3]}
/vitastor/pg/stats/1/3
{"clean_count": 3104, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3104, "write_osd_set": [3, 1]}
/vitastor/pg/stats/1/4
{"clean_count": 3937, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3937, "write_osd_set": [2, 3]}
/vitastor/pg/stats/1/5
{"clean_count": 3934, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3934, "write_osd_set": [2, 3]}
/vitastor/pg/stats/1/6
{"clean_count": 3050, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3050, "write_osd_set": [1, 2]}
/vitastor/pg/stats/1/7
{"clean_count": 3046, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3046, "write_osd_set": [3, 1]}
/vitastor/pg/stats/1/8
{"clean_count": 3029, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3029, "write_osd_set": [1, 2]}
/vitastor/pg/stats/1/9
{"clean_count": 3060, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 3060, "write_osd_set": [2, 3]}
/vitastor/pg/stats/2/1
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/10
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/11
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/12
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/13
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/14
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/15
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/16
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/2
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/3
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/4
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/5
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/6
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/7
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/8
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pg/stats/2/9
{"clean_count": 0, "degraded_count": 0, "incomplete_count": 0, "misplaced_count": 0, "object_count": 0, "write_osd_set": [1, 2, 3]}
/vitastor/pool/stats/1
{"used_raw_tb":0.014866113662719727}
/vitastor/pool/stats/2
{"used_raw_tb":0,"total_raw_tb":8.730691909790039,"raw_to_usable":1.5,"space_efficiency":1}
/vitastor/stats
{"op_stats":{"delete":{"count":"0","usec":"0","bytes":"0"},"list":{"count":"56","usec":"188520","bytes":"0"},"ping":{"count":"11403","usec":"12441","bytes":"0"},"primary_delete":{"count":"0","usec":"0","bytes":"0"},"primary_read":{"count":"19675","usec":"3827722","bytes":"645824512"},"primary_sync":{"count":"29193","usec":"82119","bytes":"0"},"primary_write":{"count":"110613","usec":"313586710","bytes":"8007872512"},"read":{"count":"130288","usec":"4412920","bytes":"645824512"},"rollback":{"count":"0","usec":"0","bytes":"0"},"sec_read_bmp":{"count":"0","usec":"0","bytes":"0"},"show_config":{"count":"21","usec":"498","bytes":"0"},"stabilize":{"count":"0","usec":"0","bytes":"0"},"sync":{"count":"56","usec":"10731963","bytes":"0"},"sync_stab_all":{"count":"0","usec":"0","bytes":"0"},"write":{"count":"0","usec":"0","bytes":"0"},"write_stable":{"count":"221226","usec":"36015680","bytes":"8007872512"}},"subop_stats":{"delete":{"count":"0","usec":"0"},"list":{"count":"24","usec":"167489"},"ping":{"count":"5873","usec":"2389446"},"primary_delete":{"count":"0","usec":"0"},"primary_read":{"count":"0","usec":"0"},"primary_sync":{"count":"0","usec":"0"},"primary_write":{"count":"0","usec":"0"},"read":{"count":"0","usec":"0"},"rollback":{"count":"0","usec":"0"},"sec_read_bmp":{"count":"0","usec":"0"},"show_config":{"count":"9","usec":"11661"},"stabilize":{"count":"0","usec":"0"},"sync":{"count":"24","usec":"2147704"},"sync_stab_all":{"count":"0","usec":"0"},"write":{"count":"0","usec":"0"},"write_stable":{"count":"110613","usec":"223232235"}},"recovery_stats":{"degraded":{"count":"0","bytes":"0"},"misplaced":{"count":"0","bytes":"0"}},"object_counts":{"object":"52331","clean":"52331","misplaced":"0","degraded":"0","incomplete":"0"}}

[nbd] XFS (nbd0): metadata I/O error

When I use NBD to mount the Vitastor image and read and write to it, the following errors occur:

[Mon Jul 19 10:37:34 2021] XFS (nbd0): Metadata corruption detected at xfs_dinode_verify.part.0+0x172/0x640 [xfs], inode 0x14506b6d dinode
[Mon Jul 19 10:37:34 2021] XFS (nbd0): Unmount and run xfs_repair
[Mon Jul 19 10:37:34 2021] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Mon Jul 19 10:37:34 2021] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Mon Jul 19 10:37:34 2021] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:34 2021] 00000020: 60 f4 e3 3f 07 da c4 4e 60 f4 e3 3f 07 da c4 4e  `..?...N`..?...N
[Mon Jul 19 10:37:34 2021] 00000030: 60 f4 e3 3f 07 da c4 4e 00 00 00 00 00 00 1b f4  `..?...N........
[Mon Jul 19 10:37:34 2021] 00000040: 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01  ................
[Mon Jul 19 10:37:34 2021] 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 ca c6 e6 ca  ................
[Mon Jul 19 10:37:34 2021] 00000060: ff ff ff ff dd d6 04 ae 00 00 00 00 00 00 00 05  ................
[Mon Jul 19 10:37:34 2021] 00000070: 00 00 00 02 00 01 da a8 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:34 2021] XFS (nbd0): Metadata corruption detected at xfs_dinode_verify.part.0+0x172/0x640 [xfs], inode 0x14506b6d dinode
[Mon Jul 19 10:37:34 2021] XFS (nbd0): Unmount and run xfs_repair
[Mon Jul 19 10:37:34 2021] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Mon Jul 19 10:37:34 2021] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Mon Jul 19 10:37:34 2021] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:34 2021] 00000020: 60 f4 e3 3f 07 da c4 4e 60 f4 e3 3f 07 da c4 4e  `..?...N`..?...N
[Mon Jul 19 10:37:34 2021] 00000030: 60 f4 e3 3f 07 da c4 4e 00 00 00 00 00 00 1b f4  `..?...N........
[Mon Jul 19 10:37:34 2021] 00000040: 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01  ................
[Mon Jul 19 10:37:34 2021] 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 ca c6 e6 ca  ................
[Mon Jul 19 10:37:34 2021] 00000060: ff ff ff ff dd d6 04 ae 00 00 00 00 00 00 00 05  ................
[Mon Jul 19 10:37:34 2021] 00000070: 00 00 00 02 00 01 da a8 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:34 2021] XFS (nbd0): Metadata corruption detected at xfs_dinode_verify.part.0+0x172/0x640 [xfs], inode 0x14506b6d dinode
[Mon Jul 19 10:37:34 2021] XFS (nbd0): Unmount and run xfs_repair
[Mon Jul 19 10:37:34 2021] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Mon Jul 19 10:37:34 2021] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Mon Jul 19 10:37:34 2021] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:34 2021] 00000020: 60 f4 e3 3f 07 da c4 4e 60 f4 e3 3f 07 da c4 4e  `..?...N`..?...N
[Mon Jul 19 10:37:34 2021] 00000030: 60 f4 e3 3f 07 da c4 4e 00 00 00 00 00 00 1b f4  `..?...N........
[Mon Jul 19 10:37:34 2021] 00000040: 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01  ................
[Mon Jul 19 10:37:34 2021] 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 ca c6 e6 ca  ................
[Mon Jul 19 10:37:34 2021] 00000060: ff ff ff ff dd d6 04 ae 00 00 00 00 00 00 00 05  ................
[Mon Jul 19 10:37:34 2021] 00000070: 00 00 00 02 00 01 da a8 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:34 2021] XFS (nbd0): Metadata corruption detected at xfs_dinode_verify.part.0+0x172/0x640 [xfs], inode 0x14506b6d dinode
[Mon Jul 19 10:37:34 2021] XFS (nbd0): Unmount and run xfs_repair
[Mon Jul 19 10:37:34 2021] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Mon Jul 19 10:37:34 2021] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Mon Jul 19 10:37:34 2021] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:34 2021] 00000020: 60 f4 e3 3f 07 da c4 4e 60 f4 e3 3f 07 da c4 4e  `..?...N`..?...N
[Mon Jul 19 10:37:34 2021] 00000030: 60 f4 e3 3f 07 da c4 4e 00 00 00 00 00 00 1b f4  `..?...N........
[Mon Jul 19 10:37:34 2021] 00000040: 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01  ................
[Mon Jul 19 10:37:34 2021] 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 ca c6 e6 ca  ................
[Mon Jul 19 10:37:34 2021] 00000060: ff ff ff ff dd d6 04 ae 00 00 00 00 00 00 00 05  ................
[Mon Jul 19 10:37:34 2021] 00000070: 00 00 00 02 00 01 da a8 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Metadata corruption detected at xfs_dinode_verify.part.0+0x172/0x640 [xfs], inode 0x14506b6d dinode
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Unmount and run xfs_repair
[Mon Jul 19 10:37:35 2021] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Mon Jul 19 10:37:35 2021] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Mon Jul 19 10:37:35 2021] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] 00000020: 60 f4 e3 3f 07 da c4 4e 60 f4 e3 3f 07 da c4 4e  `..?...N`..?...N
[Mon Jul 19 10:37:35 2021] 00000030: 60 f4 e3 3f 07 da c4 4e 00 00 00 00 00 00 1b f4  `..?...N........
[Mon Jul 19 10:37:35 2021] 00000040: 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01  ................
[Mon Jul 19 10:37:35 2021] 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 ca c6 e6 ca  ................
[Mon Jul 19 10:37:35 2021] 00000060: ff ff ff ff dd d6 04 ae 00 00 00 00 00 00 00 05  ................
[Mon Jul 19 10:37:35 2021] 00000070: 00 00 00 02 00 01 da a8 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Metadata corruption detected at xfs_dinode_verify.part.0+0x172/0x640 [xfs], inode 0x14506b6d dinode
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Unmount and run xfs_repair
[Mon Jul 19 10:37:35 2021] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Mon Jul 19 10:37:35 2021] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Mon Jul 19 10:37:35 2021] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] 00000020: 60 f4 e3 3f 07 da c4 4e 60 f4 e3 3f 07 da c4 4e  `..?...N`..?...N
[Mon Jul 19 10:37:35 2021] 00000030: 60 f4 e3 3f 07 da c4 4e 00 00 00 00 00 00 1b f4  `..?...N........
[Mon Jul 19 10:37:35 2021] 00000040: 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01  ................
[Mon Jul 19 10:37:35 2021] 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 ca c6 e6 ca  ................
[Mon Jul 19 10:37:35 2021] 00000060: ff ff ff ff dd d6 04 ae 00 00 00 00 00 00 00 05  ................
[Mon Jul 19 10:37:35 2021] 00000070: 00 00 00 02 00 01 da a8 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Metadata corruption detected at xfs_dinode_verify.part.0+0x172/0x640 [xfs], inode 0x14506b6d dinode
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Unmount and run xfs_repair
[Mon Jul 19 10:37:35 2021] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Mon Jul 19 10:37:35 2021] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Mon Jul 19 10:37:35 2021] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] 00000020: 60 f4 e3 3f 07 da c4 4e 60 f4 e3 3f 07 da c4 4e  `..?...N`..?...N
[Mon Jul 19 10:37:35 2021] 00000030: 60 f4 e3 3f 07 da c4 4e 00 00 00 00 00 00 1b f4  `..?...N........
[Mon Jul 19 10:37:35 2021] 00000040: 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01  ................
[Mon Jul 19 10:37:35 2021] 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 ca c6 e6 ca  ................
[Mon Jul 19 10:37:35 2021] 00000060: ff ff ff ff dd d6 04 ae 00 00 00 00 00 00 00 05  ................
[Mon Jul 19 10:37:35 2021] 00000070: 00 00 00 02 00 01 da a8 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Metadata corruption detected at xfs_dinode_verify.part.0+0x172/0x640 [xfs], inode 0x14506b6d dinode
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Unmount and run xfs_repair
[Mon Jul 19 10:37:35 2021] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Mon Jul 19 10:37:35 2021] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Mon Jul 19 10:37:35 2021] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] 00000020: 60 f4 e3 3f 07 da c4 4e 60 f4 e3 3f 07 da c4 4e  `..?...N`..?...N
[Mon Jul 19 10:37:35 2021] 00000030: 60 f4 e3 3f 07 da c4 4e 00 00 00 00 00 00 1b f4  `..?...N........
[Mon Jul 19 10:37:35 2021] 00000040: 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01  ................
[Mon Jul 19 10:37:35 2021] 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 ca c6 e6 ca  ................
[Mon Jul 19 10:37:35 2021] 00000060: ff ff ff ff dd d6 04 ae 00 00 00 00 00 00 00 05  ................
[Mon Jul 19 10:37:35 2021] 00000070: 00 00 00 02 00 01 da a8 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Metadata corruption detected at xfs_dinode_verify.part.0+0x172/0x640 [xfs], inode 0x14506b6d dinode
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Unmount and run xfs_repair
[Mon Jul 19 10:37:35 2021] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Mon Jul 19 10:37:35 2021] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Mon Jul 19 10:37:35 2021] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] 00000020: 60 f4 e3 3f 07 da c4 4e 60 f4 e3 3f 07 da c4 4e  `..?...N`..?...N
[Mon Jul 19 10:37:35 2021] 00000030: 60 f4 e3 3f 07 da c4 4e 00 00 00 00 00 00 1b f4  `..?...N........
[Mon Jul 19 10:37:35 2021] 00000040: 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01  ................
[Mon Jul 19 10:37:35 2021] 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 ca c6 e6 ca  ................
[Mon Jul 19 10:37:35 2021] 00000060: ff ff ff ff dd d6 04 ae 00 00 00 00 00 00 00 05  ................
[Mon Jul 19 10:37:35 2021] 00000070: 00 00 00 02 00 01 da a8 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Metadata corruption detected at xfs_dinode_verify.part.0+0x172/0x640 [xfs], inode 0x14506b6d dinode
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Unmount and run xfs_repair
[Mon Jul 19 10:37:35 2021] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Mon Jul 19 10:37:35 2021] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Mon Jul 19 10:37:35 2021] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] 00000020: 60 f4 e3 3f 07 da c4 4e 60 f4 e3 3f 07 da c4 4e  `..?...N`..?...N
[Mon Jul 19 10:37:35 2021] 00000030: 60 f4 e3 3f 07 da c4 4e 00 00 00 00 00 00 1b f4  `..?...N........
[Mon Jul 19 10:37:35 2021] 00000040: 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01  ................
[Mon Jul 19 10:37:35 2021] 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 ca c6 e6 ca  ................
[Mon Jul 19 10:37:35 2021] 00000060: ff ff ff ff dd d6 04 ae 00 00 00 00 00 00 00 05  ................
[Mon Jul 19 10:37:35 2021] 00000070: 00 00 00 02 00 01 da a8 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Metadata corruption detected at xfs_dinode_verify.part.0+0x172/0x640 [xfs], inode 0x14506b6d dinode
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Unmount and run xfs_repair
[Mon Jul 19 10:37:35 2021] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Mon Jul 19 10:37:35 2021] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Mon Jul 19 10:37:35 2021] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] 00000020: 60 f4 e3 3f 07 da c4 4e 60 f4 e3 3f 07 da c4 4e  `..?...N`..?...N
[Mon Jul 19 10:37:35 2021] 00000030: 60 f4 e3 3f 07 da c4 4e 00 00 00 00 00 00 1b f4  `..?...N........
[Mon Jul 19 10:37:35 2021] 00000040: 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01  ................
[Mon Jul 19 10:37:35 2021] 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 ca c6 e6 ca  ................
[Mon Jul 19 10:37:35 2021] 00000060: ff ff ff ff dd d6 04 ae 00 00 00 00 00 00 00 05  ................
[Mon Jul 19 10:37:35 2021] 00000070: 00 00 00 02 00 01 da a8 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Metadata corruption detected at xfs_dinode_verify.part.0+0x172/0x640 [xfs], inode 0x14506b6d dinode
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Unmount and run xfs_repair
[Mon Jul 19 10:37:35 2021] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Mon Jul 19 10:37:35 2021] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Mon Jul 19 10:37:35 2021] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] 00000020: 60 f4 e3 3f 07 da c4 4e 60 f4 e3 3f 07 da c4 4e  `..?...N`..?...N
[Mon Jul 19 10:37:35 2021] 00000030: 60 f4 e3 3f 07 da c4 4e 00 00 00 00 00 00 1b f4  `..?...N........
[Mon Jul 19 10:37:35 2021] 00000040: 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01  ................
[Mon Jul 19 10:37:35 2021] 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 ca c6 e6 ca  ................
[Mon Jul 19 10:37:35 2021] 00000060: ff ff ff ff dd d6 04 ae 00 00 00 00 00 00 00 05  ................
[Mon Jul 19 10:37:35 2021] 00000070: 00 00 00 02 00 01 da a8 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Metadata corruption detected at xfs_dinode_verify.part.0+0x172/0x640 [xfs], inode 0x14506b6d dinode
[Mon Jul 19 10:37:35 2021] XFS (nbd0): Unmount and run xfs_repair
[Mon Jul 19 10:37:35 2021] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Mon Jul 19 10:37:35 2021] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Mon Jul 19 10:37:35 2021] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:37:35 2021] 00000020: 60 f4 e3 3f 07 da c4 4e 60 f4 e3 3f 07 da c4 4e  `..?...N`..?...N
[Mon Jul 19 10:37:35 2021] 00000030: 60 f4 e3 3f 07 da c4 4e 00 00 00 00 00 00 1b f4  `..?...N........
[Mon Jul 19 10:37:35 2021] 00000040: 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01  ................
[Mon Jul 19 10:37:35 2021] 00000050: 00 00 00 02 00 00 00 00 00 00 00 00 ca c6 e6 ca  ................
[Mon Jul 19 10:37:35 2021] 00000060: ff ff ff ff dd d6 04 ae 00 00 00 00 00 00 00 05  ................
[Mon Jul 19 10:37:35 2021] 00000070: 00 00 00 02 00 01 da a8 00 00 00 00 00 00 00 00  ................
[Mon Jul 19 10:38:42 2021] XFS (nbd0): Corruption warning: Metadata has LSN (7:27) ahead of current LSN (3:28176). Please unmount and run xfs_repair (>= v4.3) to resolve.
[Mon Jul 19 10:38:42 2021] XFS (nbd0): Metadata CRC error detected at xfs_inobt_read_verify+0x12/0x90 [xfs], xfs_inobt block 0x12e423e8 
[Mon Jul 19 10:38:42 2021] XFS (nbd0): Unmount and run xfs_repair
[Mon Jul 19 10:38:42 2021] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Mon Jul 19 10:38:42 2021] 00000000: 54 5a 69 66 32 00 00 00 00 00 00 00 00 00 00 00  TZif2...........
[Mon Jul 19 10:38:42 2021] 00000010: 00 00 00 00 00 00 00 07 00 00 00 07 00 00 00 1b  ................
[Mon Jul 19 10:38:42 2021] 00000020: 00 00 00 3b 00 00 00 07 00 00 00 1a 80 00 00 00  ...;............
[Mon Jul 19 10:38:42 2021] 00000030: 91 05 fc 00 da 62 04 38 4c 9f 27 c8 4d 97 2b f8  .....b.8L.'.M.+.
[Mon Jul 19 10:38:42 2021] 00000040: 4e 7d e2 78 4e fd 8b b8 4f 77 0d f8 50 66 fe f9  N}.xN...Ow..Pf..
[Mon Jul 19 10:38:42 2021] 00000050: 51 60 2a 79 52 46 e0 f9 53 40 0c 79 54 26 c2 f9  Q`*yRF..S@.yT&..
[Mon Jul 19 10:38:42 2021] 00000060: 55 1f ee 79 56 06 a4 fa 56 ff d0 7a 57 e6 86 fa  U..yV...V..zW...
[Mon Jul 19 10:38:42 2021] 00000070: 58 df b2 7b 59 c6 68 fb 5a bf 94 7b 5b af 85 7b  X..{Y.h.Z..{[..{
[Mon Jul 19 10:38:42 2021] XFS: metadata IO error: 1716 callbacks suppressed
[Mon Jul 19 10:38:42 2021] XFS (nbd0): metadata I/O error in "xfs_btree_read_buf_block.constprop.0+0x95/0xd0 [xfs]" at daddr 0x12e423e8 len 8 error 74
[Mon Jul 19 10:38:42 2021] XFS (nbd0): xfs_do_force_shutdown(0x1) called from line 296 of file fs/xfs/xfs_trans_buf.c. Return address = 00000000f86604ca
[Mon Jul 19 10:38:42 2021] XFS (nbd0): I/O Error Detected. Shutting down filesystem
[Mon Jul 19 10:38:42 2021] XFS (nbd0): Please unmount the filesystem and rectify the problem(s)
[Mon Jul 19 10:38:42 2021] XFS (nbd0): xfs_difree_inobt: xfs_inobt_lookup() returned error -117.

[vitastor-cli rm] Layer to remove argument is missing

In release 0.6.6, I get this error when trying to remove an inode (in release 0.6.5 it removes normally).
This error also prevents the cinder-vitastor driver from removing volumes in OpenStack.

vitastor-cli rm --etcd_address 127.0.0.1:2379/v3 --pool 1 --inode 1
Layer to remove argument is missing

vitastor-cli rm --etcd_address 127.0.0.1:2379/v3 --pool 1 --inode 1 --parallel_osds 16 --iodepth 32
Layer to remove argument is missing

Inode data in etcd (I ran the "Create global configuration", "Create pool configuration" and "Name an image" commands exactly as in the README):

etcdctl --endpoints=http://127.0.0.1:2379 get /vitastor/config/inode/1/1

/vitastor/config/inode/1/1
{"name":"testimg","size":2147483648}

[vitastor-osd] Slow op from client

Hi @vitalif ,

When I use fio to run a sequential 64k write test inside the QEMU virtual machine (numjobs=16, iodepth=4), "Slow op from client..." messages appear and then the fio writes hang...

This is an intermittent problem; it does not happen 100% of the time.
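For reference, a sketch of a fio invocation matching the parameters described above; the block device path inside the VM is an assumption:

# sequential 64k writes, numjobs=16, iodepth=4; /dev/vdb is a placeholder for the attached Vitastor disk
fio -name=seqwrite -ioengine=libaio -direct=1 -rw=write -bs=64k -numjobs=16 -iodepth=4 -filename=/dev/vdb -runtime=60 -time_based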

Oct 21 14:11:16 vita-3 vitastor-osd[1535]: [OSD 3] avg latency for subop 15 (ping): 117 us
Oct 21 14:11:19 vita-3 vitastor-osd[1535]: [OSD 3] avg latency for subop 15 (ping): 113 us
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 8: primary_write id=18910 inode=1000000000013 offset=c08f55000 len=6000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434829 1000000000017:4160000 v986 offset=10000 len=10000 state=0 wait=3 (detail=5152768)
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434831 1000000000017:4480000 v1012 offset=10000 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434833 1000000000017:4360000 v1007 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434835 1000000000017:3e40000 v1010 offset=10000 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434837 1000000000017:4100000 v979 offset=10000 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434839 1000000000017:4080000 v1019 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434841 1000000000017:3e60000 v1055 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434843 1000000000017:3d60000 v1050 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434845 1000000000017:3f00000 v1026 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434847 1000000000017:4400000 v1025 offset=10000 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434849 1000000000017:4300000 v1040 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434851 1000000000017:4240000 v1104 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434853 1000000000017:3f60000 v1062 offset=10000 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434855 1000000000017:3e00000 v1031 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434857 1000000000017:3e80000 v1044 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434859 1000000000017:3d00000 v1029 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434861 1000000000017:4000000 v1060 offset=10000 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434863 1000000000017:4260000 v1054 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434865 1000000000017:4440000 v993 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 9: write_stable id=41434872 1000000000013:140b000000 v342 offset=0 len=1000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 13: write_stable id=26818522 1000000000017:3ce0000 v1044 offset=10000 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 13: write_stable id=26818524 1000000000017:40a0000 v1052 offset=10000 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 13: write_stable id=26818526 1000000000017:4020000 v1076 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 13: write_stable id=26818528 1000000000017:4420000 v1003 offset=10000 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 13: write_stable id=26818530 1000000000017:45c0000 v983 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 13: write_stable id=26818532 1000000000017:3e20000 v993 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 13: write_stable id=26818534 1000000000017:44e0000 v941 offset=0 len=10000 state=0
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873223 inode=1000000000017 offset=42d0000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873225 inode=1000000000017 offset=3ed0000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873226 inode=1000000000017 offset=3ff0000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873242 inode=1000000000017 offset=3d50000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873245 inode=1000000000017 offset=3cd0000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873254 inode=1000000000017 offset=3df0000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873255 inode=1000000000017 offset=4590000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873267 inode=1000000000017 offset=44d0000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873272 inode=1000000000017 offset=4190000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873276 inode=1000000000017 offset=4130000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873279 inode=1000000000017 offset=3f80000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873285 inode=1000000000017 offset=45b0000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873293 inode=1000000000017 offset=3f20000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873294 inode=1000000000017 offset=4320000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873295 inode=1000000000017 offset=41a0000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873296 inode=1000000000017 offset=4380000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873297 inode=1000000000017 offset=4390000 len=10000 state=1
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873300 inode=1000000000017 offset=3d80000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873302 inode=1000000000017 offset=4140000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873304 inode=1000000000017 offset=41b0000 len=10000 state=1
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873305 inode=1000000000017 offset=3f90000 len=10000 state=1
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873307 inode=1000000000017 offset=3fa0000 len=10000 state=4
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: [OSD 3] Slow op from client 14: primary_write id=115873310 inode=1000000000017 offset=4150000 len=10000 state=1
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: Journal: used_start=004ea000 next_free=004ca000 dirty_start=004b9000 trim_to=004ea000 trim_to_refs=1
Oct 21 14:11:22 vita-3 vitastor-osd[1535]: Flusher: queued=2 first=dirty,1000000000017:3bc0000 trim_wanted=1 dequeuing=0 trimming=0 cur=1 target=1 active=0 syncing=0
Oct 21 14:11:25 vita-3 vitastor-osd[1535]: [OSD 3] avg latency for subop 15 (ping): 105 us

License clarification

What is the intention behind the very atypical license? My understanding is that you want to keep all low-level API libraries/proxy gateways that are specifically made to expose Vitastor to applications (whether commercially licensed or not) open source. For example, if I were to implement my own CSI driver for Kubernetes, I would have to make it open source. Is that understanding correct?

Does vitastor have data read and write demo?

Hi @vitastor,
I have a question and would like to ask your advice. Does Vitastor have a demo for reading and writing data? I looked at src/test_cluster_client.cpp, but it seems it does not write data to a real cluster. Could you provide a complete read/write demo?
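For reference, the simplest way to read and write real data in a running cluster without writing C++ is fio with the Vitastor engine, as used elsewhere in this document; the image name "testimg" is an assumption:

fio -thread -ioengine=libfio_vitastor.so -name=test -bs=4M -direct=1 -iodepth=16 -rw=write -image=testimg
fio -thread -ioengine=libfio_vitastor.so -name=test -bs=4k -direct=1 -iodepth=128 -rw=randread -image=testimg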

Is there a libvirt patch for vitastor

Hi Vitaliy,
I want to use the virsh command to manage virtual machines through libvirt, but libvirt does not support Vitastor. Is there a corresponding patch?

root@vitastor-test:~# virsh  define vm_win10
error: Failed to define domain from vm_win10
error: unsupported configuration: unknown protocol type 'vitastor'
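For reference, the Vitastor source tree ships libvirt patches alongside the QEMU ones; a sketch, assuming a patch file named like libvirt-7.0-vitastor.diff in the patches/ directory of the checkout (both the exact name and location vary between versions, so check your source tree):

# hypothetical file name; pick the patch matching your libvirt version from the Vitastor sources
cd libvirt-7.0.0
patch -p1 < /root/vitastor/patches/libvirt-7.0-vitastor.diff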

mon ceph osd tree

I am new to Vitastor. How do I add and delete monitors? Is there a display command similar to ceph osd tree? If the cluster has both SSD and HDD disks, can I manually create two types of pools? Also, where can I find detailed RDMA configuration parameters? Thank you.
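For reference, a sketch of separate SSD and HDD pools, assuming OSDs are started with tags "ssd" / "hdd" and that your version supports the "osd_tags" pool parameter; the pool format follows the configuration used elsewhere in this document, and vitastor-cli status (shown later in this document) gives a cluster overview:

# a sketch: two pools pinned to differently tagged OSDs via "osd_tags" (verify support in your version)
etcdctl --endpoints=http://127.0.0.1:2379 put /vitastor/config/pools \
  '{"1":{"name":"ssd-pool","scheme":"replicated","pg_size":2,"pg_minsize":1,"pg_count":128,"failure_domain":"host","osd_tags":"ssd"},"2":{"name":"hdd-pool","scheme":"replicated","pg_size":2,"pg_minsize":1,"pg_count":128,"failure_domain":"host","osd_tags":"hdd"}}'
vitastor-cli status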

etcd-operator adoption

Hey, I know that Vitastor uses etcd for storing metadata. There is an opportunity to install Vitastor in Kubernetes, but it seems there is no reliable solution to provide such an etcd cluster in Kubernetes.

We've initiated a group effort to create a generic, multi-purpose etcd-operator. The project is currently in its early development phase, so there's a good opportunity for you to influence it.

https://github.com/aenix-io/etcd-operator

Our development process is open, and we are in discussions with sig-etcd about the possibility of making this the official version under Kubernetes-SIGs. Our goal is to bring together all potential adopters.

We would be pleased if you could present your official stance on this initiative and respond to the following questions:

  • Whether your project requires an etcd-operator at all.
  • Would you like to adopt and start using it once a stable version is released?
  • How important is it to you that the project continues to develop under the control of the official sig-etcd?
  • Would you be interested in joining the development and discussions to help address architectural challenges?

We welcome any other ideas and suggestions you have regarding this initiative.

[vitastor-mon] TypeError: Cannot read property 'bytes' of undefined

Hi @vitalif ,

When deploying the first OSD, vitastor-mon reported an error.

etcd

root@debian11-1:~/github/vitastor/mon# etcdctl --endpoints 192.168.40.215:2379 get "" --prefix
/vitastor/config/pgs
{"hash":"50af243b05d60b8a73dc76fb2bcf1bfc80ac4f8f"}
/vitastor/mon/master
{"ip":["192.168.40.215"]}
/vitastor/osd/inodestats/1
{}
/vitastor/osd/space/1
{}
/vitastor/osd/state/1
{"addresses": ["192.168.40.215"], "blockstore_enabled": true, "host": "debian11-1", "port": 39213, "primary_enabled": true, "state": "up"}
/vitastor/osd/stats/1
{"blockstore_ready": true, "free": 53656027136, "host": "debian11-1", "op_stats": {"delete": {"bytes": 0, "count": 0, "usec": 0}, "list": {"bytes": 0, "count": 0, "usec": 0}, "ping": {"bytes": 0, "count": 0, "usec": 0}, "primary_delete": {"bytes": 0, "count": 0, "usec": 0}, "primary_read": {"bytes": 0, "count": 0, "usec": 0}, "primary_sync": {"bytes": 0, "count": 0, "usec": 0}, "primary_write": {"bytes": 0, "count": 0, "usec": 0}, "read": {"bytes": 0, "count": 0, "usec": 0}, "rollback": {"bytes": 0, "count": 0, "usec": 0}, "sec_read_bmp": {"bytes": 0, "count": 0, "usec": 0}, "show_config": {"bytes": 0, "count": 0, "usec": 0}, "stabilize": {"bytes": 0, "count": 0, "usec": 0}, "sync": {"bytes": 0, "count": 0, "usec": 0}, "sync_stab_all": {"bytes": 0, "count": 0, "usec": 0}, "write": {"bytes": 0, "count": 0, "usec": 0}, "write_stable": {"bytes": 0, "count": 0, "usec": 0}}, "recovery_stats": {"degraded": {"bytes": 0, "count": 0}, "misplaced": {"bytes": 0, "count": 0}}, "size": 53656027136, "subop_stats": {"delete": {"count": 0, "usec": 0}, "list": {"count": 0, "usec": 0}, "ping": {"count": 0, "usec": 0}, "primary_delete": {"count": 0, "usec": 0}, "primary_read": {"count": 0, "usec": 0}, "primary_sync": {"count": 0, "usec": 0}, "primary_write": {"count": 0, "usec": 0}, "read": {"count": 0, "usec": 0}, "rollback": {"count": 0, "usec": 0}, "sec_read_bmp": {"count": 0, "usec": 0}, "show_config": {"count": 0, "usec": 0}, "stabilize": {"count": 0, "usec": 0}, "sync": {"count": 0, "usec": 0}, "sync_stab_all": {"count": 0, "usec": 0}, "write": {"count": 0, "usec": 0}, "write_stable": {"count": 0, "usec": 0}}, "time": "1644562625.233"}
/vitastor/stats
{"op_stats":{},"subop_stats":{},"recovery_stats":{},"object_counts":{"object":"0","clean":"0","misplaced":"0","degraded":"0","incomplete":"0"}}

vitastor-mon

root@debian11-1:~/test/vitastor# systemctl status vitastor-mon.service 
● vitastor-mon.service - Vitastor monitor
     Loaded: loaded (/etc/systemd/system/vitastor-mon.service; disabled; vendor preset: enabled)
     Active: active (running) since Fri 2022-02-11 01:50:09 EST; 5min ago
   Main PID: 1174 (node)
      Tasks: 7
     Memory: 42.0M
        CPU: 642ms
     CGroup: /system.slice/vitastor-mon.service
             └─1174 node /usr/lib/vitastor/mon/mon-main.js --etcd_url http://192.168.40.215:2379 --etcd_prefix /vitastor --etcd_start_timeout 5

Feb 11 01:55:51 debian11-1 node[1174]: TypeError: Cannot read property 'bytes' of undefined
Feb 11 01:55:51 debian11-1 node[1174]:     at Mon.sum_op_stats (/usr/lib/vitastor/mon/mon.js:1348:91)
Feb 11 01:55:51 debian11-1 node[1174]:     at Mon.update_total_stats (/usr/lib/vitastor/mon/mon.js:1501:26)
Feb 11 01:55:51 debian11-1 node[1174]:     at Timeout._onTimeout (/usr/lib/vitastor/mon/mon.js:1557:18)
Feb 11 01:55:51 debian11-1 node[1174]:     at listOnTimeout (internal/timers.js:554:17)
Feb 11 01:55:51 debian11-1 node[1174]:     at processTimers (internal/timers.js:497:7)

[deploy] libvitastor_client.so.0 strongly relies on the compilation environment

Hi @vitalif ,

When I copied the compiled Vitastor .so files and binaries to a new machine and installed the Vitastor-patched QEMU and libvirt there, the new machine could not start the VM.

It seems that libvitastor_client.so.0 strongly depends on the compilation environment.

root@vitastor-2:~# ldd /usr/local/lib/x86_64-linux-gnu/libvitastor_client.so.0
	linux-vdso.so.1 (0x00007fff9d395000)
	libtcmalloc_minimal.so.4 => /lib/x86_64-linux-gnu/libtcmalloc_minimal.so.4 (0x00007f5117023000)
	liburing.so.1 => /lib/x86_64-linux-gnu/liburing.so.1 (0x00007f511701e000)
	libibverbs.so.1 => /lib/x86_64-linux-gnu/libibverbs.so.1 (0x00007f5116ffe000)
	libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f5116e31000)
	libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f5116e17000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f5116c52000)
	libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f5116b0c000)
	libnl-route-3.so.200 => /lib/x86_64-linux-gnu/libnl-route-3.so.200 (0x00007f5116a91000)
	libnl-3.so.200 => /lib/x86_64-linux-gnu/libnl-3.so.200 (0x00007f5116a6e000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f5116a4c000)
	libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f5116a46000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f5117282000)
root@vitastor-2:~# virsh start ubuntu18.04-1 
error: Failed to start domain 'ubuntu18.04-1'
error: internal error: qemu unexpectedly closed the monitor: Failed to open module: libvitastor_client.so.0: cannot open shared object file: No such file or directory
2021-08-06T09:15:47.297337Z qemu-system-x86_64: -blockdev {"driver":"vitastor","etcd_host":"172.16.0.10:2379","etcd_prefix":"/vitastor","image":"ubuntu18.04-disk-1","node-name":"libvirt-3-storage","auto-read-only":true,"discard":"unmap"}: Unknown driver 'vitastor'

root@vitastor-2:~# cat /etc/ld.so.conf.d/x86_64-linux-gnu.conf 
# Multiarch support
/usr/local/lib/x86_64-linux-gnu
/lib/x86_64-linux-gnu
/usr/lib/x86_64-linux-gnu

root@vitastor-2:~# ls -l /usr/local/lib/x86_64-linux-gnu/
total 18616
-rw-r--r-- 1 root root  2505896 Aug  6 03:53 libfio_vitastor_blk.so
-rw-r--r-- 1 root root   250576 Aug  6 03:53 libfio_vitastor_sec.so
-rw-r--r-- 1 root root   137000 Aug  6 03:53 libfio_vitastor.so
lrwxrwxrwx 1 root root       20 Aug  6 03:53 libvitastor_blk.so -> libvitastor_blk.so.0
lrwxrwxrwx 1 root root       24 Aug  6 03:53 libvitastor_blk.so.0 -> libvitastor_blk.so.0.6.5
-rw-r--r-- 1 root root  5776360 Aug  6 03:53 libvitastor_blk.so.0.6.5
lrwxrwxrwx 1 root root       23 Aug  6 03:53 libvitastor_client.so -> libvitastor_client.so.0
lrwxrwxrwx 1 root root       27 Aug  6 03:53 libvitastor_client.so.0 -> libvitastor_client.so.0.6.5
-rw-r--r-- 1 root root 10382568 Aug  6 03:53 libvitastor_client.so.0.6.5
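For reference, when libraries are copied manually into /usr/local/lib/x86_64-linux-gnu, the dynamic linker cache usually has to be refreshed before QEMU can find them; a sketch:

# refresh the linker cache and confirm the library is now resolvable
ldconfig
ldconfig -p | grep vitastor_client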

[vitastor] probably corrupted filesystem

Hi @vitalif ,

I have a Vitastor cluster; I created an 11 TB image in Vitastor, mapped it with vitastor-nbd, formatted it as XFS, mounted it locally and wrote files to it.

root@vita-1:~# etcdctl --endpoints=192.168.1.215:2379 put /vitastor/config/pools '{"1":{"name":"testpool","scheme":"replicated","pg_size":3,"pg_minsize":1,"pg_count":256,"failure_domain":"host"}}'

root@vita-1:~# etcdctl --endpoints=http://192.168.1.215:2379/v3 put /vitastor/config/inode/1/1 '{"name":"testimg","size":12094627905536}'

root@vita-1:~# vitastor-nbd map --etcd_address 192.168.1.215:2379/v3 --image testimg

root@vita-1:~# mkfs.xfs /dev/nbd0

root@vita-1:~# mount /dev/nbd0 /home/vita-xfs/

During data writing the vitastor-nbd process died; when I mount it again:

root@vita-1:~# mount /dev/nbd0 /home/vita-xfs/
mount: /home/vita-xfs: cannot mount; probably corrupted filesystem on /dev/nbd0.
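For reference, a sketch of the recovery steps the kernel messages elsewhere in this document suggest — re-map the image and run xfs_repair before mounting again (the etcd address and paths follow the commands above):

root@vita-1:~# vitastor-nbd map --etcd_address 192.168.1.215:2379/v3 --image testimg
root@vita-1:~# xfs_repair /dev/nbd0
root@vita-1:~# mount /dev/nbd0 /home/vita-xfs/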

object(s) degraded

Hey, I have a clean v0.8.6 setup with 5 nodes, but I'm having some issues.

Running this ec pool:

{
  "2": {
    "name": "ecpool",
    "scheme": "ec",
    "pg_size": 4,
    "parity_chunks": 2,
    "pg_minsize": 2,
    "pg_count": 256,
    "failure_domain": "host"
  }
}

I have some images set up:

# vitastor-cli ls
NAME                                      POOL    SIZE  FLAGS  PARENT
testimg                                   ecpool  10 G      -
testimg2                                  ecpool  10 G      -
pvc-ab034644-084d-44db-8c7f-91ad0c20da9a  ecpool  10 G      -
pvc-7e2921cc-9c4b-4120-89d2-12355cc4ce22  ecpool  10 G      -

I was running fio in a container on one of the PVC images when things stopped working. None of the servers crashed (all 5 are still up), but now it won't recover.

Separate OSDs have this error

[OSD 2] 20 object(s) degraded
[OSD 2] 20 object(s) degraded

[OSD 3] 3 object(s) degraded
[OSD 3] 3 object(s) degraded
root@bmhv4:/home/ubuntu# vitastor-cli status
  cluster:
    etcd: 5 / 6 up, 1.2 M database size
    mon:  5 up, master bmhv4
    osd:  5 / 5 up

  data:
    raw:   31.3 G used, 8.4 T / 8.4 T available
    state: 7.8 G clean, 2.9 M degraded
    pools: 0 / 1 active
    pgs:   196 active
           21 repeering+degraded+has_degraded+has_unclean
           37 repeering+degraded+has_unclean
           2 stopping+degraded+has_degraded+has_unclean

  io:
    client: 0 B/s rd, 0 op/s rd, 0 B/s wr, 0 op/s wr

OSD2 crash?

[PG 2/202] is offline (248 objects)
[PG 2/202] is peering (0 objects)
[PG 2/202] is peering (0 objects)
[PG 2/204] is offline (249 objects)
[PG 2/204] is peering (0 objects)
[PG 2/204] is peering (0 objects)
[PG 2/186] Got object list from OSD 2 (local): 249 object versions (248 of them stable)
[PG 2/187] Got object list from OSD 2 (local): 250 object versions (249 of them stable)
[PG 2/189] Got object list from OSD 2 (local): 250 object versions (249 of them stable)
[PG 2/191] Got object list from OSD 2 (local): 250 object versions (249 of them stable)
[PG 2/192] Got object list from OSD 2 (local): 249 object versions (249 of them stable)
[PG 2/194] Got object list from OSD 2 (local): 248 object versions (248 of them stable)
[PG 2/202] Got object list from OSD 2 (local): 248 object versions (248 of them stable)
[PG 2/204] Got object list from OSD 2 (local): 249 object versions (249 of them stable)
[PG 2/186] Got object list from OSD 3: 249 object versions (248 of them stable)
[PG 2/186] Got object list from OSD 4: 249 object versions (248 of them stable)
[PG 2/186] Got object list from OSD 1: 249 object versions (248 of them stable)
[PG 2/186] 248 clean objects on target OSD set 1, 2, 3, 4
[PG 2/186] is active + has_unclean (248 objects)
terminate called after throwing an instance of 'std::runtime_error'
  what():  Error while doing local stabilize operation: Device or resource busy
No RDMA devices found
[OSD 2] Couldn't initialize RDMA, proceeding with TCP only
Reading blockstore metadata
[OSD 2] reporting to etcd at ["10.10.0.161:2379", "10.10.0.162:2379", "10.10.0.163:2379", "10.10.0.164:2379", "10.10.0.165:2379", "10.10.0.166:2379"] every 5 seconds
Key /vitastor/osd/state/2 already exists in etcd, OSD 2 is still up
  listening at: 10.10.0.161:37757
[OSD 2] Force stopping
No RDMA devices found
[OSD 2] Couldn't initialize RDMA, proceeding with TCP only
Reading blockstore metadata
[OSD 2] reporting to etcd at ["10.10.0.161:2379", "10.10.0.162:2379", "10.10.0.163:2379", "10.10.0.164:2379", "10.10.0.165:2379", "10.10.0.166:2379"] every 5 seconds
Successfully subscribed to etcd at 10.10.0.161:2379/v3
Metadata entries loaded: 31779, free blocks: 15592256 / 15624035
Reading blockstore journal
Journal entries loaded: 1610, free journal space: 30814208 bytes (00e73000..0110f000 is used), free blocks: 15592255 / 15624035
[PG 2/129] is starting (0 objects)
[PG 2/131] is starting (0 objects)
[PG 2/132] is starting (0 objects)

OSD3 crash?

[PG 2/9] 1 objects on OSD set 3(1), 5(2), 4(3)
[PG 2/9] is active + has_degraded (252 objects)
[PG 2/9] is active (252 objects)
[PG 2/9] is offline (252 objects)
[PG 2/9] is peering (0 objects)
[PG 2/9] is peering (0 objects)
[PG 2/9] Got object list from OSD 3 (local): 253 object versions (252 of them stable)
[PG 2/9] Got object list from OSD 4: 253 object versions (252 of them stable)
[PG 2/9] Got object list from OSD 5: 253 object versions (252 of them stable)
[OSD 3] client 10 disconnected
[OSD 3] Stopping client 10 (regular client)
[PG 2/9] Got object list from OSD 1: 253 object versions (252 of them stable)
[PG 2/9] 252 clean objects on target OSD set 1, 3, 5, 4
[PG 2/9] is active + has_unclean (252 objects)
terminate called after throwing an instance of 'std::runtime_error'
  what():  Error while doing local stabilize operation: Device or resource busy
No RDMA devices found
[OSD 3] Couldn't initialize RDMA, proceeding with TCP only
Reading blockstore metadata
[OSD 3] reporting to etcd at ["10.10.0.161:2379", "10.10.0.162:2379", "10.10.0.163:2379", "10.10.0.164:2379", "10.10.0.165:2379", "10.10.0.166:2379"] every 5 seconds
Key /vitastor/osd/state/3 already exists in etcd, OSD 3 is still up
  listening at: 10.10.0.165:38025
[OSD 3] Force stopping
No RDMA devices found
[OSD 3] Couldn't initialize RDMA, proceeding with TCP only
Reading blockstore metadata
[OSD 3] reporting to etcd at ["10.10.0.161:2379", "10.10.0.162:2379", "10.10.0.163:2379", "10.10.0.164:2379", "10.10.0.165:2379", "10.10.0.166:2379"] every 5 seconds
Successfully subscribed to etcd at 10.10.0.165:2379/v3
Metadata entries loaded: 64134, free blocks: 15559901 / 15624035
Reading blockstore journal
Journal entries loaded: 1976, free journal space: 29929472 bytes (002fa000..0066e000 is used), free blocks: 15559901 / 15624035
[PG 2/2] is starting (0 objects)
[PG 2/3] is starting (0 objects)
[PG 2/6] is starting (0 objects)
[PG 2/8] is starting (0 objects)
[PG 2/9] is starting (0 objects)
[PG 2/10] is starting (0 objects)
[PG 2/14] is starting (0 objects)
[PG 2/19] is starting (0 objects)
[PG 2/21] is starting (0 objects)
[PG 2/22] is starting (0 objects)
[PG 2/26] is starting (0 objects)
[PG 2/31] is starting (0 objects)
[PG 2/32] is starting (0 objects)

I'm also seeing a bit of this around:

Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -16 (Device or resource busy)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -32 (Broken pipe)
Error while doing flush on OSD 2: -16 (Device or resource busy)
Error while doing flush on OSD 2: -16 (Device or resource busy)
Error while doing flush on OSD 2: -16 (Device or resource busy)
Error while doing flush on OSD 2: -16 (Device or resource busy)
Error while doing flush on OSD 2: -16 (Device or resource busy)
Error while doing flush on OSD 2: -16 (Device or resource busy)
Error while doing flush on OSD 2: -16 (Device or resource busy)
Error while doing flush on OSD 2: -16 (Device or resource busy)
Error while doing flush on OSD 2: -16 (Device or resource busy)
Error while doing flush on OSD 2: -16 (Device or resource busy)
Error while doing flush on OSD 2: -16 (Device or resource busy)
Error while doing flush on OSD 2: -16 (Device or resource busy)
Error while doing flush on OSD 2: -16 (Device or resource busy)
Error while doing flush on OSD 2: -16 (Device or resource busy)

[Question] What is mon.js optimize_change() model?

In optimize_change(), I understand it tries to create new PGs with a penalty applied to old PGs.
As far as I can see, add_pg_ means the amount by which a PG increases, and del_pg_ means the amount by which it decreases. Is that right?

But what's the meaning of "'max: '+all_pg_names.map(pg_name => (
    prev_weights[pg_name] ? `${pg_size+1}*add_${pg_name} - ${pg_size+1}*del_${pg_name}` : `${pg_size+1-move_weights[pg_name]}*${pg_name}`
)).join(' + ')+';\n';" ?
Why (pg_size + 1) ?

And in the OSD weight calculation, in "osd_pg_count = all_weights[osd]*pg_effsize/total_weight*pg_count - rm_osd_pg_count", why use rm_osd_pg_count?

Lower write speeds than Ceph

Hello! I probably misconfigured something and this is the explanation for my issue, but I am not sure what exactly :)

I am trying to use Vitastor at home with 2 Intel NUCs, 2 consumer NVMe SSDs and Proxmox. I ran the benchmarks with Ceph (default Proxmox config) as suggested here, removed ceph, installed Vitastor (--disable_data_fsync false and --immediate_commit none) and ran the same tests again (both times imported the same debian qcow2 to not test an empty image).

Everything was faster, except for the writes with iodepth more than 1. I do not know a lot about storage benchmarks, but noticed that importing disks to Vitastor in Proxmox is significantly slower than with Ceph, and this seems to be well reflected by the tests too. Here are some details:

iodepth  rw         bs  iops     iops, ceph
16       write      4M  40.38    194.85
1        write      4M  34.38    forgot to record
128      randwrite  4k  1365.33  9126.15
1        randwrite  4k  282.48   122.82

Can you please help me understand what I am doing wrong here (except for using cheap hardware, hehe)?

The commands just in case I tested something irrelevant :)

fio -thread -ioengine=libfio_vitastor.so -name=test -bs=4M -direct=1 -iodepth=16 -rw=write -image=testimg
fio -ioengine=rbd -direct=1 -name=test -bs=4M -iodepth=16 -rw=write -pool=pool1 -runtime=60 -rbdname=testimg
fio -thread -ioengine=libfio_vitastor.so -name=test -bs=4k -direct=1 -iodepth=128 -rw=randwrite -image=testimg
fio -ioengine=rbd -direct=1 -name=test -bs=4k -iodepth=128 -rw=randwrite -pool=pool1 -runtime=60 -rbdname=testimg

[qemu] qemu 6.1 patch failed

Hi @vitalif ,

Debian released the official version 11 (bullseye), the apt mirror was updated to QEMU 6.1, and the existing patch for QEMU 5.x fails to apply to QEMU 6.1.

root@vita-1:~/debsource/qemu-6.1+dfsg# patch -p1 < /root/vitastor/qemu-5.1-vitastor.patch 
patching file qapi/block-core.json
Hunk #1 FAILED at 2807.
Hunk #2 succeeded at 3763 with fuzz 1 (offset 119 lines).
Hunk #3 FAILED at 4006.
Hunk #4 succeeded at 4541 with fuzz 1 (offset 147 lines).
Hunk #5 FAILED at 4666.
3 out of 5 hunks FAILED -- saving rejects to file qapi/block-core.json.rej
patching file scripts/modules/module_block.py
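For reference, later Vitastor checkouts ship per-version QEMU patches; a sketch, assuming a file named like qemu-6.1-vitastor.patch exists in the patches/ directory of the source tree (the name and location are assumptions — check your checkout):

# hypothetical file name; use the patch matching the QEMU version being built
cd qemu-6.1+dfsg
patch -p1 < /root/vitastor/patches/qemu-6.1-vitastor.patch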

[vitastor] Structure needs cleaning

Hi @vitalif ,

When I use NBD to create an XFS filesystem, mount it to a local directory, and write files to the mount point, I get the following:

root@vita-1:~/yujiang/vdbench# ls -l /root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1219.file
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1219.file': Structure needs cleaning
root@vita-1:~/yujiang/vdbench# ls -l /root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1219.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1220.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1221.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1222.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1224.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1223.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1225.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1227.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1226.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1229.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1230.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1234.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1232.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1233.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1231.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1228.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1235.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1236.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1237.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1238.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1239.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1242.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1240.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1243.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1241.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1244.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1245.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1247.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1246.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1248.file': Structure needs cleaning
ls: cannot access '/root/performance/fg-mp2/verilog_2/vdb.1_1.dir/vdb.2_1.dir/vdb.3_1.dir/vdb.4_1.dir/vdb.5_2.dir/vdb.6_1.dir/vdb.7_1.dir/vdb_f1249.file': Structure needs cleaning
total 4368
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1000.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1001.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1002.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1003.file
-rw-r--r-- 1 root root    8192 Mar 15 15:32 vdb_f1004.file
-rw-r--r-- 1 root root   32768 Mar 16  2022 vdb_f1005.file
-rw-r--r-- 1 root root  131072 Mar 16  2022 vdb_f1006.file
-rw-r--r-- 1 root root    8192 Mar 15 15:33 vdb_f1007.file
-rw-r--r-- 1 root root    4096 Mar 15 15:33 vdb_f1008.file
-rw-r--r-- 1 root root    8192 Mar 15 15:32 vdb_f1009.file
-rw-r--r-- 1 root root   65536 Mar 15 15:31 vdb_f1010.file
-rw-r--r-- 1 root root   16384 Mar 16  2022 vdb_f1011.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1012.file
-rw-r--r-- 1 root root   65536 Mar 15 15:31 vdb_f1013.file
-rw-r--r-- 1 root root   16384 Mar 16  2022 vdb_f1014.file
-rw-r--r-- 1 root root    4096 Mar 15 15:20 vdb_f1015.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1016.file
-rw-r--r-- 1 root root   32768 Mar 16  2022 vdb_f1017.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1018.file
-rw-r--r-- 1 root root    8192 Mar 15 15:32 vdb_f1019.file
-rw-r--r-- 1 root root    4096 Mar 15 15:22 vdb_f1020.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1021.file
-rw-r--r-- 1 root root    8192 Mar 15 15:32 vdb_f1022.file
-rw-r--r-- 1 root root    8192 Mar 15 15:33 vdb_f1023.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1024.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1025.file
-rw-r--r-- 1 root root    8192 Mar 15 15:22 vdb_f1026.file
-rw-r--r-- 1 root root    8192 Mar 15 15:20 vdb_f1027.file
-rw-r--r-- 1 root root    4096 Mar 15 15:32 vdb_f1028.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1029.file
-rw-r--r-- 1 root root   32768 Mar 16  2022 vdb_f1030.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1031.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1032.file
-rw-r--r-- 1 root root   65536 Mar 15  2022 vdb_f1033.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1034.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1035.file
-rw-r--r-- 1 root root    8192 Mar 15 15:31 vdb_f1036.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1037.file
-rw-r--r-- 1 root root    8192 Mar 15  2022 vdb_f1038.file
-rw-r--r-- 1 root root   32768 Mar 16  2022 vdb_f1039.file
-rw-r--r-- 1 root root    8192 Mar 15 15:32 vdb_f1040.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1041.file
-rw-r--r-- 1 root root    4096 Mar 15  2022 vdb_f1042.file
-rw-r--r-- 1 root root    8192 Mar 15 15:33 vdb_f1043.file
-rw-r--r-- 1 root root    8192 Mar 15 15:23 vdb_f1044.file
-rw-r--r-- 1 root root   32768 Mar 15 15:23 vdb_f1045.file
-rw-r--r-- 1 root root    8192 Mar 15 15:22 vdb_f1046.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1047.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1048.file
-rw-r--r-- 1 root root    8192 Mar 15  2022 vdb_f1049.file
-rw-r--r-- 1 root root    8192 Mar 15 15:31 vdb_f1050.file
-rw-r--r-- 1 root root    8192 Mar 15 15:32 vdb_f1051.file
-rw-r--r-- 1 root root    4096 Mar 16  2022 vdb_f1052.file
-rw-r--r-- 1 root root    8192 Mar 15 14:50 vdb_f1053.file
-rw-r--r-- 1 root root    8192 Mar 15 15:32 vdb_f1054.file
-rw-r--r-- 1 root root   16384 Mar 15  2022 vdb_f1055.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1056.file
-rw-r--r-- 1 root root    8192 Mar 15  2022 vdb_f1057.file
-rw-r--r-- 1 root root    8192 Mar 15 15:22 vdb_f1058.file
-rw-r--r-- 1 root root    8192 Mar 15 15:22 vdb_f1059.file
-rw-r--r-- 1 root root 1048576 Mar 15 15:31 vdb_f1060.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1061.file
-rw-r--r-- 1 root root    8192 Mar 15 15:31 vdb_f1062.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1063.file
-rw-r--r-- 1 root root   16384 Mar 16  2022 vdb_f1064.file
-rw-r--r-- 1 root root   16384 Mar 15 15:31 vdb_f1065.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1066.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1067.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1068.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1069.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1070.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1071.file
-rw-r--r-- 1 root root    8192 Mar 15 15:20 vdb_f1072.file
-rw-r--r-- 1 root root    8192 Mar 15 15:31 vdb_f1073.file
-rw-r--r-- 1 root root    4096 Mar 16  2022 vdb_f1074.file
-rw-r--r-- 1 root root    8192 Mar 15 15:33 vdb_f1075.file
-rw-r--r-- 1 root root    8192 Mar 15 15:18 vdb_f1076.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1077.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1078.file
-rw-r--r-- 1 root root    4096 Mar 15  2022 vdb_f1079.file
-rw-r--r-- 1 root root   65536 Mar 16  2022 vdb_f1080.file
-rw-r--r-- 1 root root    8192 Mar 15 15:33 vdb_f1081.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1082.file
-rw-r--r-- 1 root root   65536 Mar 16  2022 vdb_f1083.file
-rw-r--r-- 1 root root    8192 Mar 15  2022 vdb_f1084.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1085.file
-rw-r--r-- 1 root root   32768 Mar 16  2022 vdb_f1086.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1087.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1088.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1089.file
-rw-r--r-- 1 root root   65536 Mar 15 15:20 vdb_f1090.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1091.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1092.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1093.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1094.file
-rw-r--r-- 1 root root    8192 Mar 15 15:33 vdb_f1095.file
-rw-r--r-- 1 root root    8192 Mar 15  2022 vdb_f1096.file
-rw-r--r-- 1 root root    4096 Mar 15 15:19 vdb_f1097.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1098.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1099.file
-rw-r--r-- 1 root root  131072 Mar 15 15:33 vdb_f1100.file
-rw-r--r-- 1 root root   65536 Mar 15 15:19 vdb_f1101.file
-rw-r--r-- 1 root root  131072 Mar 16  2022 vdb_f1102.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1103.file
-rw-r--r-- 1 root root   32768 Mar 16  2022 vdb_f1104.file
-rw-r--r-- 1 root root    8192 Mar 15  2022 vdb_f1105.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1106.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1107.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1108.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1109.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1110.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1111.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1112.file
-rw-r--r-- 1 root root    8192 Mar 15 15:33 vdb_f1113.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1114.file
-rw-r--r-- 1 root root   32768 Mar 16  2022 vdb_f1115.file
-rw-r--r-- 1 root root    4096 Mar 16  2022 vdb_f1116.file
-rw-r--r-- 1 root root   65536 Mar 16  2022 vdb_f1117.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1118.file
-rw-r--r-- 1 root root    8192 Mar 15 14:26 vdb_f1119.file
-rw-r--r-- 1 root root   32768 Mar 16  2022 vdb_f1120.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1121.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1122.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1123.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1124.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1125.file
-rw-r--r-- 1 root root   16384 Mar 16  2022 vdb_f1126.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1127.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1128.file
-rw-r--r-- 1 root root   32768 Mar 16  2022 vdb_f1129.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1130.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1131.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1132.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1133.file
-rw-r--r-- 1 root root    8192 Mar 15 15:20 vdb_f1134.file
-rw-r--r-- 1 root root    8192 Mar 15 15:33 vdb_f1135.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1136.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1137.file
-rw-r--r-- 1 root root    8192 Mar 15 15:35 vdb_f1138.file
-rw-r--r-- 1 root root    8192 Mar 15  2022 vdb_f1139.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1140.file
-rw-r--r-- 1 root root  131072 Mar 15 15:19 vdb_f1141.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1142.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1143.file
-rw-r--r-- 1 root root    8192 Mar 15 15:23 vdb_f1144.file
-rw-r--r-- 1 root root   32768 Mar 16  2022 vdb_f1145.file
-rw-r--r-- 1 root root    8192 Mar 15 15:33 vdb_f1146.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1147.file
-rw-r--r-- 1 root root   65536 Mar 16  2022 vdb_f1148.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1149.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1150.file
-rw-r--r-- 1 root root    8192 Mar 15 15:33 vdb_f1151.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1152.file
-rw-r--r-- 1 root root  131072 Mar 15 15:35 vdb_f1153.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1154.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1155.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1156.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1157.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1158.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1159.file
-rw-r--r-- 1 root root    8192 Mar 15  2022 vdb_f1160.file
-rw-r--r-- 1 root root   16384 Mar 15 15:35 vdb_f1161.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1162.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1163.file
-rw-r--r-- 1 root root   65536 Mar 16  2022 vdb_f1164.file
-rw-r--r-- 1 root root    8192 Mar 15 15:32 vdb_f1165.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1166.file
-rw-r--r-- 1 root root    8192 Mar 15 15:22 vdb_f1167.file
-rw-r--r-- 1 root root    8192 Mar 15 15:35 vdb_f1168.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1169.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1170.file
-rw-r--r-- 1 root root  131072 Mar 16  2022 vdb_f1171.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1172.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1173.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1174.file
-rw-r--r-- 1 root root   65536 Mar 15 15:33 vdb_f1175.file
-rw-r--r-- 1 root root   16384 Mar 16  2022 vdb_f1176.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1177.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1178.file
-rw-r--r-- 1 root root    4096 Mar 16  2022 vdb_f1179.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1180.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1181.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1182.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1183.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1184.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1185.file
-rw-r--r-- 1 root root    8192 Mar 15 15:33 vdb_f1186.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1187.file
-rw-r--r-- 1 root root    4096 Mar 16  2022 vdb_f1188.file
-rw-r--r-- 1 root root   65536 Mar 15  2022 vdb_f1189.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1190.file
-rw-r--r-- 1 root root    8192 Mar 15 15:33 vdb_f1191.file
-rw-r--r-- 1 root root    4096 Mar 15 15:23 vdb_f1192.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1193.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1194.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1195.file
-rw-r--r-- 1 root root    4096 Mar 15 15:19 vdb_f1196.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1197.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1198.file
-rw-r--r-- 1 root root   65536 Mar 16  2022 vdb_f1199.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1200.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1201.file
-rw-r--r-- 1 root root   65536 Mar 15 15:32 vdb_f1202.file
-rw-r--r-- 1 root root   16384 Mar 16  2022 vdb_f1203.file
-rw-r--r-- 1 root root    4096 Mar 16  2022 vdb_f1204.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1205.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1206.file
-rw-r--r-- 1 root root    4096 Mar 16  2022 vdb_f1207.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1208.file
-rw-r--r-- 1 root root   16384 Mar 15 15:33 vdb_f1209.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1210.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1211.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1212.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1213.file
-rw-r--r-- 1 root root    4096 Mar 15 17:49 vdb_f1214.file
-rw-r--r-- 1 root root   16384 Mar 16  2022 vdb_f1215.file
-rw-r--r-- 1 root root    4096 Mar 16  2022 vdb_f1216.file
-rw-r--r-- 1 root root    8192 Mar 15  2022 vdb_f1217.file
-rw-r--r-- 1 root root    8192 Mar 16  2022 vdb_f1218.file
-????????? ? ?    ?          ?            ? vdb_f1219.file
-????????? ? ?    ?          ?            ? vdb_f1220.file
-????????? ? ?    ?          ?            ? vdb_f1221.file
-????????? ? ?    ?          ?            ? vdb_f1222.file
-????????? ? ?    ?          ?            ? vdb_f1223.file
-????????? ? ?    ?          ?            ? vdb_f1224.file
-????????? ? ?    ?          ?            ? vdb_f1225.file
-????????? ? ?    ?          ?            ? vdb_f1226.file
-????????? ? ?    ?          ?            ? vdb_f1227.file
-????????? ? ?    ?          ?            ? vdb_f1228.file
-????????? ? ?    ?          ?            ? vdb_f1229.file
-????????? ? ?    ?          ?            ? vdb_f1230.file
-????????? ? ?    ?          ?            ? vdb_f1231.file
-????????? ? ?    ?          ?            ? vdb_f1232.file
-????????? ? ?    ?          ?            ? vdb_f1233.file
-????????? ? ?    ?          ?            ? vdb_f1234.file
-????????? ? ?    ?          ?            ? vdb_f1235.file
-????????? ? ?    ?          ?            ? vdb_f1236.file
-????????? ? ?    ?          ?            ? vdb_f1237.file
-????????? ? ?    ?          ?            ? vdb_f1238.file
-????????? ? ?    ?          ?            ? vdb_f1239.file
-????????? ? ?    ?          ?            ? vdb_f1240.file
-????????? ? ?    ?          ?            ? vdb_f1241.file
-????????? ? ?    ?          ?            ? vdb_f1242.file
-????????? ? ?    ?          ?            ? vdb_f1243.file
-????????? ? ?    ?          ?            ? vdb_f1244.file
-????????? ? ?    ?          ?            ? vdb_f1245.file
-????????? ? ?    ?          ?            ? vdb_f1246.file
-????????? ? ?    ?          ?            ? vdb_f1247.file
-????????? ? ?    ?          ?            ? vdb_f1248.file
-????????? ? ?    ?          ?            ? vdb_f1249.file

dmesg log

[Tue Mar 15 17:49:36 2022] 00000040: 00 00 00 00 00 00 00 00 62 30 41 a4 0f 78 b3 5b  ........b0A..x.[
[Tue Mar 15 17:49:36 2022] 00000050: 62 30 bc 01 39 e0 48 c0 62 30 40 d1 39 9f 6c 5a  [email protected]
[Tue Mar 15 17:49:36 2022] 00000060: 00 00 00 00 00 00 08 00 00 00 00 00 00 00 00 01  ................
[Tue Mar 15 17:49:36 2022] 00000070: 00 00 00 00 00 00 00 01 00 00 00 02 00 00 00 00  ................
[Tue Mar 15 17:49:36 2022] XFS (nbd0): Metadata corruption detected at xfs_buf_ioend+0x162/0x570 [xfs], xfs_inode block 0x381ee15c8 xfs_inode_buf_verify
[Tue Mar 15 17:49:36 2022] XFS (nbd0): Unmount and run xfs_repair
[Tue Mar 15 17:49:36 2022] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Tue Mar 15 17:49:36 2022] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Tue Mar 15 17:49:36 2022] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Tue Mar 15 17:49:36 2022] 00000020: 62 30 41 a4 19 b9 45 c2 49 4e 81 a4 03 02 00 00  b0A...E.IN......
[Tue Mar 15 17:49:36 2022] 00000030: 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 00  ................
[Tue Mar 15 17:49:36 2022] 00000040: 00 00 00 00 00 00 00 00 62 30 41 a4 0f 78 b3 5b  ........b0A..x.[
[Tue Mar 15 17:49:36 2022] 00000050: 62 30 bc 01 39 e0 48 c0 62 30 40 d1 39 9f 6c 5a  [email protected]
[Tue Mar 15 17:49:36 2022] 00000060: 00 00 00 00 00 00 08 00 00 00 00 00 00 00 00 01  ................
[Tue Mar 15 17:49:36 2022] 00000070: 00 00 00 00 00 00 00 01 00 00 00 02 00 00 00 00  ................
[Tue Mar 15 17:49:36 2022] XFS (nbd0): Metadata corruption detected at xfs_buf_ioend+0x162/0x570 [xfs], xfs_inode block 0x381ee15c8 xfs_inode_buf_verify
[Tue Mar 15 17:49:36 2022] XFS (nbd0): Unmount and run xfs_repair
[Tue Mar 15 17:49:36 2022] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Tue Mar 15 17:49:36 2022] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Tue Mar 15 17:49:36 2022] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Tue Mar 15 17:49:36 2022] 00000020: 62 30 41 a4 19 b9 45 c2 49 4e 81 a4 03 02 00 00  b0A...E.IN......
[Tue Mar 15 17:49:36 2022] 00000030: 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 00  ................
[Tue Mar 15 17:49:36 2022] 00000040: 00 00 00 00 00 00 00 00 62 30 41 a4 0f 78 b3 5b  ........b0A..x.[
[Tue Mar 15 17:49:36 2022] 00000050: 62 30 bc 01 39 e0 48 c0 62 30 40 d1 39 9f 6c 5a  [email protected]
[Tue Mar 15 17:49:36 2022] 00000060: 00 00 00 00 00 00 08 00 00 00 00 00 00 00 00 01  ................
[Tue Mar 15 17:49:36 2022] 00000070: 00 00 00 00 00 00 00 01 00 00 00 02 00 00 00 00  ................
[Tue Mar 15 17:49:36 2022] XFS (nbd0): Metadata corruption detected at xfs_buf_ioend+0x162/0x570 [xfs], xfs_inode block 0x381ee15c8 xfs_inode_buf_verify
[Tue Mar 15 17:49:36 2022] XFS (nbd0): Unmount and run xfs_repair
[Tue Mar 15 17:49:36 2022] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Tue Mar 15 17:49:36 2022] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Tue Mar 15 17:49:36 2022] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Tue Mar 15 17:49:36 2022] 00000020: 62 30 41 a4 19 b9 45 c2 49 4e 81 a4 03 02 00 00  b0A...E.IN......
[Tue Mar 15 17:49:36 2022] 00000030: 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 00  ................
[Tue Mar 15 17:49:36 2022] 00000040: 00 00 00 00 00 00 00 00 62 30 41 a4 0f 78 b3 5b  ........b0A..x.[
[Tue Mar 15 17:49:36 2022] 00000050: 62 30 bc 01 39 e0 48 c0 62 30 40 d1 39 9f 6c 5a  [email protected]
[Tue Mar 15 17:49:36 2022] 00000060: 00 00 00 00 00 00 08 00 00 00 00 00 00 00 00 01  ................
[Tue Mar 15 17:49:36 2022] 00000070: 00 00 00 00 00 00 00 01 00 00 00 02 00 00 00 00  ................
[Tue Mar 15 17:49:36 2022] XFS (nbd0): Metadata corruption detected at xfs_buf_ioend+0x162/0x570 [xfs], xfs_inode block 0x381ee15c8 xfs_inode_buf_verify
[Tue Mar 15 17:49:36 2022] XFS (nbd0): Unmount and run xfs_repair
[Tue Mar 15 17:49:36 2022] XFS (nbd0): First 128 bytes of corrupted metadata buffer:
[Tue Mar 15 17:49:36 2022] 00000000: 49 4e 81 a4 03 02 00 00 00 00 00 00 00 00 00 00  IN..............
[Tue Mar 15 17:49:36 2022] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00  ................
[Tue Mar 15 17:49:36 2022] 00000020: 62 30 41 a4 19 b9 45 c2 49 4e 81 a4 03 02 00 00  b0A...E.IN......
[Tue Mar 15 17:49:36 2022] 00000030: 00 00 00 00 00 00 00 00 00 00 00 01 00 00 00 00  ................
[Tue Mar 15 17:49:36 2022] 00000040: 00 00 00 00 00 00 00 00 62 30 41 a4 0f 78 b3 5b  ........b0A..x.[
[Tue Mar 15 17:49:36 2022] 00000050: 62 30 bc 01 39 e0 48 c0 62 30 40 d1 39 9f 6c 5a  [email protected]
[Tue Mar 15 17:49:36 2022] 00000060: 00 00 00 00 00 00 08 00 00 00 00 00 00 00 00 01  ................
[Tue Mar 15 17:49:36 2022] 00000070: 00 00 00 00 00 00 00 01 00 00 00 02 00 00 00 00  ................
[Tue Mar 15 17:49:36 2022] XFS (nbd0): Metadata corruption detected at xfs_buf_ioend+0x162/0x570 [xfs], xfs_inode block 0x381ee15c8 xfs_inode_buf_verify
[Tue Mar 15 17:49:36 2022] XFS (nbd0): Unmount and run xfs_repair
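
The kernel message already names the recovery path: unmount the filesystem and run xfs_repair. A minimal sketch of those steps, assuming the corrupted XFS is the one on /dev/nbd0 and that /root/performance/fg-mp2 is its mount point (both assumptions, the exact mount point is not shown above):

umount /root/performance/fg-mp2      # assumption: mount point of the corrupted XFS on /dev/nbd0
xfs_repair /dev/nbd0                 # offline repair, as requested by the kernel log
mount /dev/nbd0 /root/performance/fg-mp2

Note that xfs_repair may refuse to run while the log is dirty; mounting once to replay it, or zeroing the log with xfs_repair -L (at the cost of possible data loss), are the usual ways around that.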

Where is the mon (monitor) log?

[root@node1 mon]# cat vitastor-mon.service
[Unit]
Description=Vitastor monitor
After=network-online.target local-fs.target time-sync.target
Wants=network-online.target local-fs.target time-sync.target

[Service]
Restart=always
ExecStart=node /usr/lib/vitastor/mon/mon-main.js
WorkingDirectory=/

run:
journalctl -xeu vitastor-mon.service
-- No entries --
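
One thing to check (an assumption, not confirmed by the report): systemd normally expects an absolute path in ExecStart, and the monitor's output only reaches journalctl once the unit actually runs. A minimal sketch of a unit that should make the monitor's output visible in the journal, assuming node is installed at /usr/bin/node:

[Unit]
Description=Vitastor monitor
After=network-online.target local-fs.target time-sync.target
Wants=network-online.target local-fs.target time-sync.target

[Service]
Restart=always
# Absolute interpreter path (assumption: node lives in /usr/bin)
ExecStart=/usr/bin/node /usr/lib/vitastor/mon/mon-main.js
WorkingDirectory=/
# Explicitly send stdout/stderr to the journal so journalctl -u vitastor-mon.service shows them
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target

After editing, run systemctl daemon-reload, restart the unit, and check journalctl -u vitastor-mon.service again.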

qemu-img disk image upload error: Co-routine re-entered recursively

I installed Vitastor 0.5.11 and QEMU from yum on CentOS 7.9, and the following command reported an error:

[root@c7-dev ~]# qemu-img convert -f qcow2 -p /o/disk/f33.qcow2 -O raw 'vitastor:etcd_host=192.168.15.11\:2379/v3:pool=1:inode=2:size=13098811392'
Co-routine re-entered recursively
Aborted
[root@c7-dev ~]#

fio runs normally.

[vitastor] Vitastor often hangs on I/O

While installing Ubuntu 18.04 in a virtual machine from a mounted installation ISO, the VM stopped making progress at the "install kernel" step, and I saw the following log in syslog.

It stays stuck there; it appears to be hanging and cannot continue.

Jul 12 16:59:57 vitastor-1 vitastor-osd[248596]: [OSD 2] avg latency for subop 15 (ping): 185 us
Jul 12 16:59:57 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_RX: client=0x555c3200e5a0 len=28 prog=536903814 vers=1 proc=6 type=0 status=0 serial=4689
Jul 12 16:59:57 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_TX_QUEUE: client=0x555c3200e5a0 len=188 prog=536903814 vers=1 proc=6 type=1 status=0 serial=4689
Jul 12 16:59:57 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_RX: client=0x555c3200e5a0 len=40 prog=536903814 vers=1 proc=344 type=0 status=0 serial=4690
Jul 12 16:59:57 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_TX_QUEUE: client=0x555c3200e5a0 len=3924 prog=536903814 vers=1 proc=344 type=1 status=0 serial=4690
Jul 12 16:59:59 vitastor-1 vitastor-osd[248623]: [OSD 1] avg latency for subop 15 (ping): 236 us
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34062 inode=1000000000004 offset=1280640000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34064 inode=1000000000004 offset=1280680000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34065 inode=1000000000004 offset=12806a0000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34066 inode=1000000000004 offset=12806c0000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34068 inode=1000000000004 offset=1280700000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34070 inode=1000000000004 offset=1280740000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34071 inode=1000000000004 offset=1280760000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34072 inode=1000000000004 offset=1280780000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34074 inode=1000000000004 offset=12807c0000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34076 inode=1000000000004 offset=1280800000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34077 inode=1000000000004 offset=28800000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34079 inode=1000000000004 offset=28840000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34080 inode=1000000000004 offset=28860000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34081 inode=1000000000004 offset=28880000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34083 inode=1000000000004 offset=288c0000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34085 inode=1000000000004 offset=28900000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34086 inode=1000000000004 offset=28920000 len=14000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34087 inode=1000000000004 offset=808c3d000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34088 inode=1000000000004 offset=808c3e000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34104 inode=1000000000004 offset=88a010000 len=8000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34121 inode=1000000000004 offset=808c3f000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34122 inode=1000000000004 offset=808c40000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34123 inode=1000000000004 offset=28620000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34124 inode=1000000000004 offset=808c41000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34125 inode=1000000000004 offset=808c42000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34126 inode=1000000000004 offset=808c43000 len=2000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34127 inode=1000000000004 offset=808c45000 len=6000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34128 inode=1000000000004 offset=808c4b000 len=2000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34129 inode=1000000000004 offset=808c4d000 len=2000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34130 inode=1000000000004 offset=808c4f000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34131 inode=1000000000004 offset=808c50000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34132 inode=1000000000004 offset=808c51000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34133 inode=1000000000004 offset=808c52000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34134 inode=1000000000004 offset=808c53000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34135 inode=1000000000004 offset=28640000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34137 inode=1000000000004 offset=808c54000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34138 inode=1000000000004 offset=808c55000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34139 inode=1000000000004 offset=808c56000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34140 inode=1000000000004 offset=808c57000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34141 inode=1000000000004 offset=808c58000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34142 inode=1000000000004 offset=808c59000 len=5000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34143 inode=1000000000004 offset=808c5e000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34144 inode=1000000000004 offset=808c5f000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34151 inode=1000000000004 offset=28940000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34160 inode=1000000000004 offset=28700000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34162 inode=1000000000004 offset=28740000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34164 inode=1000000000004 offset=28780000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34165 inode=1000000000004 offset=287a0000 len=14000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34171 inode=1000000000004 offset=808c80000 len=5000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248623]: [OSD 1] Slow op from client 11: primary_write id=34172 inode=1000000000004 offset=808c85000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34061 inode=1000000000004 offset=1280620000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34063 inode=1000000000004 offset=1280660000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34067 inode=1000000000004 offset=12806e0000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34069 inode=1000000000004 offset=1280720000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34073 inode=1000000000004 offset=12807a0000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34075 inode=1000000000004 offset=12807e0000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34078 inode=1000000000004 offset=28820000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34082 inode=1000000000004 offset=288a0000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34084 inode=1000000000004 offset=288e0000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34136 inode=1000000000004 offset=28660000 len=12000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34145 inode=1000000000004 offset=808c60000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34146 inode=1000000000004 offset=808c61000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34147 inode=1000000000004 offset=808c62000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34148 inode=1000000000004 offset=808c63000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34149 inode=1000000000004 offset=808c64000 len=3000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34150 inode=1000000000004 offset=808c67000 len=3000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34152 inode=1000000000004 offset=28960000 len=a000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34153 inode=1000000000004 offset=808c6a000 len=2000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34154 inode=1000000000004 offset=808c6c000 len=2000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34155 inode=1000000000004 offset=808c6e000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34156 inode=1000000000004 offset=808c6f000 len=4000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34157 inode=1000000000004 offset=808c73000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34158 inode=1000000000004 offset=808c74000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34159 inode=1000000000004 offset=808c75000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34161 inode=1000000000004 offset=28720000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34163 inode=1000000000004 offset=28760000 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34166 inode=1000000000004 offset=808c76000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34167 inode=1000000000004 offset=808c77000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34168 inode=1000000000004 offset=808c78000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34169 inode=1000000000004 offset=808c79000 len=5000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 8: primary_write id=34170 inode=1000000000004 offset=808c7e000 len=2000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158793 1000000000004:1280640000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158795 1000000000004:1280680000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158797 1000000000004:12806a0000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158799 1000000000004:12806c0000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158801 1000000000004:1280700000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158803 1000000000004:1280740000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158805 1000000000004:1280760000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158807 1000000000004:1280780000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158809 1000000000004:12807c0000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158811 1000000000004:1280800000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158813 1000000000004:28800000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158815 1000000000004:28840000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158817 1000000000004:28860000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158819 1000000000004:28880000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158821 1000000000004:288c0000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158823 1000000000004:28900000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158825 1000000000004:28920000 v1 offset=0 len=14000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158827 1000000000004:808c20000 v10 offset=1d000 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158831 1000000000004:88a000000 v2 offset=10000 len=8000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158833 1000000000004:808c40000 v1 offset=0 len=1000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158835 1000000000004:28620000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158837 1000000000004:28640000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158839 1000000000004:28940000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158841 1000000000004:28700000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158843 1000000000004:28740000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158845 1000000000004:28780000 v1 offset=0 len=20000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158847 1000000000004:287a0000 v1 offset=0 len=14000
Jul 12 17:00:00 vitastor-1 vitastor-osd[248596]: [OSD 2] Slow op from client 10: write_stable id=158849 1000000000004:808c80000 v1 offset=0 len=5000
Jul 12 17:00:00 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_RX: client=0x555c3200e5a0 len=28 prog=536903814 vers=1 proc=6 type=0 status=0 serial=4691
Jul 12 17:00:00 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_TX_QUEUE: client=0x555c3200e5a0 len=188 prog=536903814 vers=1 proc=6 type=1 status=0 serial=4691
Jul 12 17:00:00 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_RX: client=0x555c3200e5a0 len=40 prog=536903814 vers=1 proc=344 type=0 status=0 serial=4692
Jul 12 17:00:00 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_TX_QUEUE: client=0x555c3200e5a0 len=3924 prog=536903814 vers=1 proc=344 type=1 status=0 serial=4692
Jul 12 17:00:02 vitastor-1 vitastor-osd[248623]: [OSD 1] avg latency for subop 15 (ping): 202 us
Jul 12 17:00:03 vitastor-1 vitastor-osd[248596]: [OSD 2] avg latency for subop 15 (ping): 232 us
Jul 12 17:00:03 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_RX: client=0x555c3200e5a0 len=28 prog=536903814 vers=1 proc=6 type=0 status=0 serial=4693
Jul 12 17:00:03 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_TX_QUEUE: client=0x555c3200e5a0 len=188 prog=536903814 vers=1 proc=6 type=1 status=0 serial=4693
Jul 12 17:00:03 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_RX: client=0x555c3200e5a0 len=40 prog=536903814 vers=1 proc=344 type=0 status=0 serial=4694
Jul 12 17:00:03 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_TX_QUEUE: client=0x555c3200e5a0 len=3924 prog=536903814 vers=1 proc=344 type=1 status=0 serial=4694
Jul 12 17:00:06 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_RX: client=0x555c3200e5a0 len=28 prog=536903814 vers=1 proc=6 type=0 status=0 serial=4695
Jul 12 17:00:06 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_TX_QUEUE: client=0x555c3200e5a0 len=188 prog=536903814 vers=1 proc=6 type=1 status=0 serial=4695
Jul 12 17:00:06 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_RX: client=0x555c3200e5a0 len=40 prog=536903814 vers=1 proc=344 type=0 status=0 serial=4696
Jul 12 17:00:06 vitastor-1 libvirtd[406644]: RPC_SERVER_CLIENT_MSG_TX_QUEUE: client=0x555c3200e5a0 len=3924 prog=536903814 vers=1 proc=344 type=1 status=0 serial=4696
Jul 12 17:00:08 vitastor-1 vitastor-osd[248623]: [OSD 1] avg latency for subop 15 (ping): 228 us

[vitastor-osd] read subop failed

Hi @vitalif ,

When I ran a read test with fio in the Ubuntu 18.04 virtual machine, the error "read subop failed: retval = -11 (expected 131072)" appeared.

I created two 200G disks from Vitastor: one is used as the system disk, and the other as a test disk for fio. When testing the test disk with fio, randread at 4K, 8K, 16K and 256K block sizes is all normal, but the error occurs with randread at 512K and 1M.

The fio commands I used for testing:

fio -ioengine=libaio -numjobs=16 -direct=1 -size=150g -iodepth=512 -runtime=600 -rw=randread -ba=512k -bs=512k -filename=/dev/vdb -name=iscsi_libaio_512_randread_512k -group_reporting

fio -ioengine=libaio -numjobs=16 -direct=1 -size=150g -iodepth=512 -runtime=600 -rw=randread -ba=1m -bs=1m -filename=/dev/vdb -name=iscsi_libaio_512_randread_1m -group_reporting

I uploaded the log to:
https://github.com/lnsyyj/vitastor-log/tree/main/read-subop-failed

[qemu] No Bootable device

After installing the system onto the Vitastor image from a CD-ROM, the virtual machine cannot find a bootable device.

root@vitastor-1:~/qemuxml# cat ubuntu_18.04
<domain type='kvm' id='14'>
  <name>ubuntu18.04</name>
  <uuid>b80d3d14-b99c-4db9-86b2-b03b9166139e</uuid>
  <metadata>
    <libosinfo:libosinfo xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
      <libosinfo:os id="http://ubuntu.com/ubuntu/18.04"/>
    </libosinfo:libosinfo>
  </metadata>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-5.2'>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <vmport state='off'/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Haswell-noTSX-IBRS</model>
    <vendor>Intel</vendor>
    <feature policy='require' name='vme'/>
    <feature policy='require' name='ss'/>
    <feature policy='require' name='vmx'/>
    <feature policy='require' name='pdcm'/>
    <feature policy='require' name='f16c'/>
    <feature policy='require' name='rdrand'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='arat'/>
    <feature policy='require' name='tsc_adjust'/>
    <feature policy='require' name='umip'/>
    <feature policy='require' name='md-clear'/>
    <feature policy='require' name='stibp'/>
    <feature policy='require' name='arch-capabilities'/>
    <feature policy='require' name='ssbd'/>
    <feature policy='require' name='xsaveopt'/>
    <feature policy='require' name='pdpe1gb'/>
    <feature policy='require' name='abm'/>
    <feature policy='require' name='ibpb'/>
    <feature policy='require' name='amd-stibp'/>
    <feature policy='require' name='amd-ssbd'/>
    <feature policy='require' name='skip-l1dfl-vmentry'/>
    <feature policy='require' name='pschange-mc-no'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>destroy</on_reboot>
  <on_crash>destroy</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='network' device='disk'>
      <target dev='vda' bus='virtio' />
      <driver name='qemu' type='raw' />
      <!-- name is Vitastor image name -->
      <!-- config (optional) is the path to Vitastor's configuration file -->
      <!-- query (optional) is Vitastor's etcd_prefix -->
      <source protocol='vitastor' name='ubuntutestimg' query='/vitastor' config='/etc/vitastor/vitastor.conf'>
        <!-- hosts = etcd addresses -->
        <host name='192.168.3.100' port='2379' />
      </source>
      <!-- required because Vitastor only supports 4k physical sectors -->
      <blockio physical_block_size="4096" logical_block_size="512" />
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/home/ubuntu-18.04.5-live-server-amd64.iso' index='1'/>
      <backingStore/>
      <target dev='sda' bus='sata'/>
      <readonly/>
      <alias name='sata0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <alias name='pci.6'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:e4:1c:4c'/>
      <source network='default' portid='357baf2b-1003-4b96-a502-acc1effd0aab' bridge='virbr0'/>
      <target dev='vnet40'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-14-ubuntu18.04/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <graphics type='spice' port='5907' autoport='yes' listen='127.0.0.1'>
      <listen type='address' address='127.0.0.1'/>
      <image compression='off'/>
    </graphics>
    <sound model='ich9'>
      <alias name='sound0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <redirdev bus='usb' type='spicevmc'>
      <alias name='redir0'/>
      <address type='usb' bus='0' port='2'/>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
      <alias name='redir1'/>
      <address type='usb' bus='0' port='3'/>
    </redirdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/urandom</backend>
      <alias name='rng0'/>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </rng>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+0</label>
    <imagelabel>+0:+0</imagelabel>
  </seclabel>
</domain>

root@vitastor-1:~/qemuxml# virsh define ubuntu_18.04
root@vitastor-1:~/qemuxml# virsh start ubuntu18.04

The system installs successfully onto the Vitastor image from the CD. After detaching the CD-ROM and restarting the virtual machine, the following appears:

Booting from Hard Disk...
Boot failed: not a bootable disk
No Bootable device.

[libvirt] error: internal error: argument key 'etcd_prefix' must not have null value

Hi @vitalif ,

I applied Vitastor's libvirt patch, and I want to use virsh to start a virtual machine defined from the template below through libvirt. When I start the virtual machine, the following output appears.

Can you help me take a look?

  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
      <disk type='network' device='disk'>
        <target dev='vda' bus='virtio' />
        <driver name='qemu' type='raw' />
        <!-- name is Vitastor image name -->
        <!-- config (optional) is the path to Vitastor's configuration file -->
        <!-- query (optional) is Vitastor's etcd_prefix -->
        <source protocol='vitastor' name='vitastor-inode-1' query='/vitastor' config='/etc/vitastor/vitastor.conf'>
          <!-- hosts = etcd addresses -->
          <host name='192.168.3.100' port='2379' />
        </source>
        <!-- required because Vitastor only supports 4k physical sectors -->
        <blockio physical_block_size="4096" logical_block_size="512" />
      </disk>

root@vitastor-1:~/qemuxml# virsh define win10_3
Domain 'vdi_win10_3' defined from win10_3

root@vitastor-1:~/qemuxml# virsh start win10_3
error: Failed to start domain 'win10_3'
error: internal error: argument key 'etcd_prefix' must not have null value

[vitastor-driver] No weighed backend found

When I moved OpenStack from the Wallaby version to the Train version, I got this error:

2021-11-10 09:39:44.991 9 WARNING cinder.scheduler.host_manager [req-27c3b7cb-2b35-45d9-b80d-5c18ed453932 686c0fefd96f432298b513a5004dfdd6 2a373c256e054055a9a55559248f0e49 - default default] volume service is down. (host: ops-debian-10@lvm-1)
2021-11-10 09:39:44.992 9 INFO cinder.scheduler.base_filter [req-27c3b7cb-2b35-45d9-b80d-5c18ed453932 686c0fefd96f432298b513a5004dfdd6 2a373c256e054055a9a55559248f0e49 - default default] Filtering removed all hosts for the request with volume ID 'd96f0577-deda-4574-8e98-a700b04751ac'. Filter results: AvailabilityZoneFilter: (start: 0, end: 0), CapacityFilter: (start: 0, end: 0), CapabilitiesFilter: (start: 0, end: 0)
2021-11-10 09:39:44.992 9 WARNING cinder.scheduler.filter_scheduler [req-27c3b7cb-2b35-45d9-b80d-5c18ed453932 686c0fefd96f432298b513a5004dfdd6 2a373c256e054055a9a55559248f0e49 - default default] No weighed backend found for volume with properties: {'id': '4d302413-95d3-4c31-88d7-3ef72ce97004', 'name': 'vitastor', 'description': '', 'is_public': True, 'projects': [], 'extra_specs': {'volume_backend_name': 'vitastor'}, 'qos_specs_id': None, 'created_at': '2021-11-10T02:38:37.000000', 'updated_at': None, 'deleted_at': None, 'deleted': False}
2021-11-10 09:39:44.993 9 INFO cinder.message.api [req-27c3b7cb-2b35-45d9-b80d-5c18ed453932 686c0fefd96f432298b513a5004dfdd6 2a373c256e054055a9a55559248f0e49 - default default] Creating message record for request_id = req-27c3b7cb-2b35-45d9-b80d-5c18ed453932
2021-11-10 09:39:44.998 9 ERROR cinder.scheduler.flows.create_volume [req-27c3b7cb-2b35-45d9-b80d-5c18ed453932 686c0fefd96f432298b513a5004dfdd6 2a373c256e054055a9a55559248f0e49 - default default] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. No weighed backends available: cinder.exception.NoValidBackend: No valid backend was found. No weighed backends available

I figured out that in the Train version the get_volume_stats method (inside the interface driver) is empty, while in Wallaby it isn't.

Wallaby interface driver

    def get_volume_stats(self, refresh=False):
        """Get volume stats.
        If 'refresh' is True, run update the stats first.
        """
        if not self._stats or refresh:
            self._update_volume_stats()

        return self._stats

Train interface driver

    def get_volume_stats(self, refresh=False):
        """Return the current state of the volume service.
        If 'refresh' is True, run the update first.
        For replication the following state should be reported:
        replication = True (None or false disables replication)
        """
        return

I created pull request #30 to override this method in cinder-vitastor.py.
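
A minimal sketch of such an override (assuming the Vitastor driver, like the Wallaby base class shown above, keeps its cached stats in self._stats and has an _update_volume_stats() helper; names are taken from the snippets above and not verified against the actual pull request):

    def get_volume_stats(self, refresh=False):
        """Return volume stats, refreshing them first if requested.

        Overridden because the Train base class returns None, which makes
        the scheduler report "No weighed backends available".
        """
        if not self._stats or refresh:
            self._update_volume_stats()
        return self._stats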
