openmpdk / xnvme
xNVMe: cross-platform libraries and tools for NVMe devices

Home Page: https://xnvme.io/

License: Other


xnvme's Introduction

xNVMe Logo

xNVMe: cross-platform libraries and tools for NVMe devices


See: https://xnvme.io/docs for documentation

  • xNVMe, base NVMe specification (1.4) available as library and CLI xnvme
    • Memory Management
    • NVMe command interface | Synchronous commands | Asynchronous commands
    • Helpers / convenience functions for common operations
    • CLI-library for convenient derivative work
    • Multiple backend implementations | Linux SPDK | Linux IOCTL | Linux io_uring | Linux libaio | FreeBSD SPDK | FreeBSD IOCTL
  • libxnvme, base NVMe Specification available as library and via CLI xnvme
  • libxnvme_nvm, The NVM Commands Set
  • libxnvme_znd, The Zoned Command Set available as a library and via CLI zoned
  • libkvs, SNIA KV API implemented [TODO]
  • libocssd, Open-Channel 2.0 support [TODO]
  • libWHATEVERYOUWANT, Go ahead and implement what you need [TODO]

Contact and Contributions

xNVMe is in active development and maintained by Simon A. F. Lund [email protected]; pull requests are most welcome. See CONTRIBUTORS.md for a list of contributors to the current and previous versions of xNVMe. For contributor guidelines, have a look at the online documentation:


xnvme's Issues

xnvme_queue_drain hangs when connection is dropped in spdk backend

When I try to issue I/O commands while the connection to the SPDK device is dropped, the function xnvme_queue_poke hangs. This happens with the SPDK backend but not when using nvme-cli. The behavior I expect is that the function returns an error that I can handle on my side.

When using nvme-cli, this is what I get in the debug log:

# DBG:xnvme_be_linux_nvme.c:ioctl_wrap-137: INFO: ioctl(NVME_IOCTL_IO64_CMD), err(-1), errno(11)
# DBG:xnvme_be_linux_nvme.c:ioctl_wrap-144: INFO: retconv: -1 and set errno
# DBG:xnvme_be_linux_nvme.c:xnvme_be_linux_nvme_cmd_io-255: FAILED: ioctl_wrap(), err: -11
# DBG:xnvme_be_posix_async_emu.c:_posix_async_emu_poke-136: FAILED: sync.cmd_io{v}(), err: -11
# ERR: 'got completion errors': {errno: 5, msg: 'Input/output error'}

Here is how to reproduce the issue:

  1. Create the SPDK device:
./build/bin/nvmf_tgt &
./scripts/rpc.py nvmf_create_transport -t TCP -u 16384 -m 8 -c 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -d SPDK_Controller1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a  127.0.0.1 -s 4420
./scripts/rpc.py bdev_null_create Null0 8589934592 4096
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null0
  2. Run examples/xnvme_single_async (change opts.nsid to 1) and stop with a breakpoint at line 96 (before issuing the command).
  3. Configure iptables to drop incoming connections:
sudo iptables -I INPUT -p tcp -i lo --dport 4420 -j DROP

Then xnvme_queue_drain should hang

To restore iptables, run:

sudo iptables -D INPUT -p tcp -i lo --dport 4420 -j DROP

No targets error while make project after make

I cloned the xNVMe repository, and since I did not have meson installed, make failed.
After installing meson, I re-ran make, but the error below occurred.
I think it is caused by the third-party build failing, but I'm not sure.

CMake Error at CMakeLists.txt:255 (message):
  Failed building SPDK/DPDK


-- Configuring incomplete, errors occurred!
See also "/home/oslab4/xNVMe/build/CMakeFiles/CMakeOutput.log".
See also "/home/oslab4/xNVMe/build/CMakeFiles/CMakeError.log".


Configuration failed


Makefile:53: recipe for target 'config' failed
make[1]: *** [config] Error 1
make[1]: Leaving directory '/home/oslab4/xNVMe'
Makefile:47: recipe for target 'default' failed
make: *** [default] Error 2
root@oslab4 ~/xNVMe (master)$
root@oslab4 ~/xNVMe (master)$ make                                                  <-- my command
## xNVMe: make info
OSTYPE:
PLATFORM: Linux
CC: cc
CXX: g++
MAKE: make
CTAGS: ctags
NPROC: 32
## xNVMe: make tags
/bin/sh: 1: ctags: not found
## xNVMe: make default
$( case $( uname -s ) in ( Linux ) echo "make" ;; ( FreeBSD | OpenBSD | NetBSD ) echo "gmake" ;; ( * ) echo Unrecognized ;; esac) build
make[1]: Entering directory '/home/oslab4/xNVMe'
## xNVMe: make info
OSTYPE:
PLATFORM: Linux
CC: cc
CXX: g++
MAKE: make
CTAGS: ctags
NPROC: 32
## xNVMe: make build
cd build && $( case $( uname -s ) in ( Linux ) echo "make" ;; ( FreeBSD | OpenBSD | NetBSD ) echo "gmake" ;; ( * ) echo Unrecognized ;; esac)
make[2]: Entering directory '/home/oslab4/xNVMe/build'
make[2]: *** No targets specified and no makefile found.  Stop.
make[2]: Leaving directory '/home/oslab4/xNVMe/build'
Makefile:74: recipe for target 'build' failed
make[1]: *** [build] Error 2
make[1]: Leaving directory '/home/oslab4/xNVMe'
Makefile:47: recipe for target 'default' failed
make: *** [default] Error 2

Provide documentation for contributions

Currently, little to no information is provided on how to contribute to xNVMe.
This needs to be fixed :)

  • Provide a Contributors section in the docs
  • Update GitHub contributor practices

Deprecation-warnings on MacOS

In addition to the format-specifiers causing build warnings (addressed in #205), the following deprecation warnings and additional warnings need some attention:

[24/202] Compiling C object lib/libxnvme-shared.dylib.p/xnvme_be_macos_dev.c.o
../lib/xnvme_be_macos_dev.c:33:42: warning: 'kIOMasterPortDefault' is deprecated: first deprecated in macOS 12.0 [-Wdeprecated-declarations]
        matching_dictionary = IOBSDNameMatching(kIOMasterPortDefault, 0, basename(devname));
                                                ^~~~~~~~~~~~~~~~~~~~
                                                kIOMainPortDefault
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/IOKit.framework/Headers/IOKitLib.h:133:19: note: 'kIOMasterPortDefault' has been explicitly marked deprecated here
const mach_port_t kIOMasterPortDefault
                  ^
../lib/xnvme_be_macos_dev.c:34:49: warning: 'kIOMasterPortDefault' is deprecated: first deprecated in macOS 12.0 [-Wdeprecated-declarations]
        ioservice_device = IOServiceGetMatchingService(kIOMasterPortDefault, matching_dictionary);
                                                       ^~~~~~~~~~~~~~~~~~~~
                                                       kIOMainPortDefault
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/IOKit.framework/Headers/IOKitLib.h:133:19: note: 'kIOMasterPortDefault' has been explicitly marked deprecated here
const mach_port_t kIOMasterPortDefault
                  ^
../lib/xnvme_be_macos_dev.c:78:7: warning: incompatible pointer types passing 'IONVMeSMARTInterface ***' (aka 'struct IONVMeSMARTInterface ***') to parameter of type 'LPVOID *' (aka 'void **') [-Wincompatible-pointer-types]
                                         nvme_smart_interface);
                                         ^~~~~~~~~~~~~~~~~~~~
../lib/xnvme_be_macos_dev.c:178:19: warning: incompatible pointer to integer conversion passing 'IONVMeSMARTInterface **' (aka 'struct IONVMeSMARTInterface **') to parameter of type 'io_object_t' (aka 'unsigned int') [-Wint-conversion]
                IOObjectRelease(state->nvme_smart_interface);
                                ^~~~~~~~~~~~~~~~~~~~~~~~~~~
/Library/Developer/CommandLineTools/SDKs/MacOSX.sdk/System/Library/Frameworks/IOKit.framework/Headers/IOKitLib.h:268:14: note: passing argument to parameter 'object' here
        io_object_t     object );
                        ^
../lib/xnvme_be_macos_dev.c:262:27: warning: incompatible pointer to integer conversion assigning to 'io_object_t' (aka 'unsigned int') from 'void *' [-Wint-conversion]
                state->ioservice_device = NULL;
                                        ^ ~~~~
5 warnings generated.

Adjust format-strings

Currently the code-base uses format-specifiers that do not always match the width of the printf() arguments.
E.g. %zu is used for uint64_t.
This should be cleaned up such that all format-specifiers for fixed-width types use the macros defined in <inttypes.h>.
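For example, printing a 64-bit quantity with %zu only happens to work on platforms where size_t is 64-bit, while the <inttypes.h> macros are correct everywhere. The names slba and fmt_slba below are illustrative, not xNVMe code:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Portable formatting of a uint64_t: %zu matches size_t (whose width
 * varies per platform), while PRIu64/PRIx64 always match uint64_t. */
int fmt_slba(char *buf, size_t cap, uint64_t slba)
{
	return snprintf(buf, cap, "slba: %" PRIu64 " (0x%" PRIx64 ")",
			slba, slba);
}
```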

Changes for v0.7.0 (next)

API / Command-Sets

  • Preparation for KV
    • API: xnvme_kv_* command-construction helpers
    • CLI: kv command-line tool
    • CIJOE tests verifying behavior via the cli tool
    • #270
  • Support for FDP
  • #242
  • #269

Backends

Documentation

Fabrics

  • docs: fabrics test

Python

Rust

License

Infrastructure

Refresh ``io_uring_cmd/ucmd`` support

xNVMe supports shipping NVMe-commands via the experimental uring_cmd interface, aka ioctl() over io_uring, aka asynchronous ioctls, aka efficient NVMe passthru-commands; in xNVMe this is io_uring_cmd/ucmd.
This issue sums up the tasks for updating the current io_uring_cmd/ucmd state and releasing xNVMe v0.2.0 with it:

  • be:linux:sync: add support for iovec payloads
    • in: lib/xnvme_be_linux_nvme.c
    • Add an implementation of xnvme_be_linux_sync_nvme_cmd_iov() using the NVMe-iovec-ioctl
    • This is to provide use of iovec payloads in case the io_uring_cmd/ucmd interface is not available, i.e. a synchronous fallback which can enable async code via the emu and thrpool async implementations.
  • be:linux:async:ucmd: adjust to changes in Linux Kernel 5.17
    • in: lib/xnvme_be_linux_async_ucmd.c
    • Removed deprecated ioctls
    • Removed deprecated structs
    • Add the flag for indirect-command on io_uring/sqe-construction
  • be:linux:async:ucmd: add support for iovec payloads
    • in: lib/xnvme_be_linux_async_ucmd.c
    • in: xnvme_be_linux_ucmd_iov() using NVMe-64-iovec-ioctl

Progress on: #51

With the above in place, xNVMe can ship NVMe-commands over io_uring using regular contiguous buffers as well as iovecs, and enables access to the NVMe-command completion-result, which is useful for commands such as zone-append.

Next is adding the use of bigsqe. Currently it seems this is not strictly required to reach the goal of shipping NVMe-commands; also, with the command inline, accessing the completion result is not possible. So bigsqe should be useful for commands such as NVM read/write, but not so much for ZNS zone-append. Regardless, enabling the interface allows comparing the performance gained by inlining the NVMe-command when possible.
This could then be added to the ucmd backend in the following manner:

  • api: add toggles for experimental features
    • in: libxnvme.h
    • extend xnvme_queue_opts with XNVME_QUEUE_EFEAT1 and XNVME_QUEUE_EFEAT2
    • This is to provide toggles for enabling experimental features like bigsqe
  • be:linux:async:ucmd: add internal header
    • add include/xnvme_be_linux_ucmd.h for XNVME_QUEUE_EFEAT{1,2} toggles
    • add structs if needed in include/xnvme_be_linux_ucmd.h
  • be:linux:async:ucmd: init big-sqe when XNVME_QUEUE_EFEAT2 is set
  • be:linux:async:ucmd: cmd big-sqe when XNVME_QUEUE_EFEAT2 is set

Progress on: #54

Python Package improvements

Notes on things to improve

  • Cleanup path-constructing using pathlib for cross-platform paths
  • Provide module doc-strings to the aux/* code
  • Windows support, including addition to CI
  • Documentation on how to use the Cython header and Cython bindings

Additional notes in: docs/python/index.rst

"xnvme list/enum" core dumps if set option --disable-spdk

repro

  1. ./configure --enable-debug --disable-be-spdk --disable-be-fbsd --disable-tools-fio;make -j4;make install
  2. xnvme list

xnvme_enumerate()

Segmentation fault

Root cause: after disabling SPDK or FreeBSD support, g_xnvme_be_registry[i]->dev is NULL for the spdk and fbsd entries.
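A defensive fix is to skip registry entries whose device interface was compiled out. The sketch below uses simplified, hypothetical types (the real g_xnvme_be_registry layout differs) to show the NULL guard:

```c
#include <stddef.h>

/* Simplified, hypothetical stand-ins for the real xNVMe backend types */
struct be_dev_ops {
	int (*enumerate)(void);
};

struct backend {
	const char *name;
	struct be_dev_ops *dev; /* NULL when the backend is compiled out */
};

static int enum_linux(void)
{
	return 0;
}

static struct be_dev_ops linux_ops = {enum_linux};

/* spdk and fbsd disabled at configure-time => their dev is NULL */
static struct backend registry[] = {
	{"linux", &linux_ops},
	{"spdk", NULL},
	{"fbsd", NULL},
};

/* Enumerate via every backend, guarding against disabled entries */
int enumerate_all(void)
{
	for (size_t i = 0; i < sizeof(registry) / sizeof(registry[0]); ++i) {
		if (!registry[i].dev || !registry[i].dev->enumerate)
			continue; /* skip instead of dereferencing NULL */
		registry[i].dev->enumerate();
	}
	return 0;
}
```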

Improve introspection

  • Provide information on which flags the xNVMe library was compiled with
    • The information itself is the state of all the config-variables as can be seen in meson.build, e.g. XNVME_VERSION_MAJOR, XNVME_BE_SPDK_ENABLED etc.
    • Information must be exposed via the CLI: xnvme library-info
    • Information must be retrievable via the API
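One way to expose compile-time configuration is to bake the config macros into a string that both the API call and the CLI sub-command could return. The macro and function names below are assumptions for illustration, not the actual xNVMe build-system output:

```c
#include <string.h>

/* Assume the build system defines flags such as XNVME_BE_SPDK_ENABLED;
 * default it here so the sketch is self-contained. */
#ifndef XNVME_BE_SPDK_ENABLED
#define XNVME_BE_SPDK_ENABLED 0
#endif

#define XNVME_STR_(x) #x
#define XNVME_STR(x) XNVME_STR_(x)

/* Hypothetical API: return the compiled-in configuration as text */
const char *xnvme_library_info(void)
{
	return "XNVME_BE_SPDK_ENABLED=" XNVME_STR(XNVME_BE_SPDK_ENABLED);
}
```

The same stringification trick extends to any number of flags, and the CLI sub-command would simply print the returned string.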

MacOS/Darwin support

xNVMe currently does not support MacOS/Darwin, to enable that, the following would be needed:

  • Adjust the meson/the-build-system for MacOS/Darwin (#27)
  • Update documentation and scripts/pkgs for build/install on MacOS/Darwin (#42)
  • Add build-testing to ci/GitHUB Actions (#42)
  • Since GHA supports macOS, it should be possible to do so similar to the build-testing of Linux
  • Add a MacOS/Darwin NVMe-capable backend to xNVMe
  • Device interface and enumeration would be similar to that of FreeBSD
  • Async. I/O interfaces using mixins: aio, emu, and thrpool
  • Sync. I/O interfaces using the psync implementation equivalent to Linux and FreeBSD
  • Sync. I/O interface using the NVMe driver ioctl() interface in MacOS/Darwin, if possible
  • Admin interface specific to the NVMe driver ioctl() interface in MacOS/Darwin, if possible
  • progress: #55

The HW support-goal would be restricted to the new generation of M1 hardware.

The above would conclude initial MacOS/Darwin support, extended functionality would be:

  • CI also doing functional testing in the same manner as Linux/FreeBSD using cloud-init images in a qemu-guest
  • An efficient user-space NVMe-driver

CI backlog

To reduce the time spent on build-testing, the build-linux target could utilize prepared Docker images with all build-requirements pre-installed. Additionally, such an image would serve as a convenience for xNVMe users: a good-to-go build-environment with all the bells and whistles. We currently provide one, but only for Debian.

Adding to this, the same could be provided for a runtime-environment, such that the current CI-testing can be expanded to verify runtime-requirements. Once this is done, it would be good to extend core-testing of xNVMe using the ramdisk backend. Here are things to do:

Changes for v0.3.0

Release: https://github.com/OpenMPDK/xNVMe/releases/tag/v0.3.0

Expecting to address the following in the next (v0.3.0) release of xNVMe

  • Adjustment to changes on io_uring indirect-command
    • If any adjustments are needed
    • NOTE: Dropping this, as the 'indirect-command' approach is not making it upstream in the near future. However, the embedded-command approach is; it will take its place as io_uring_cmd.
  • Addition of io_uring embedded-command approach aka big-sqe / big-cqe
    • #54
    • #84
    • NOTE: This requires changing the liburing-wrap to target master rather than a fixed tag (was liburing-2.1); this must be noted explicitly in the CHANGELOG, and a new release made as soon as liburing-2.2 is released
  • Documentation of dynamically loading xNVMe via C and Python
  • Documentation for contributors
  • Fixes and re-factoring

Progress can be tracked on the pull-requests and issues mentioned above, and integration on the next branch.

EFAULT when using NVME_IOCTL_IO64_CMD_VEC via emu/thrpool

When running fio using setup of:

--xnvme_async=emu
--xnvme_sync=nvme
--xnvme_iovec=1

Then EFAULT occurs. This does not happen when using NVME_IOCTL_IO64_CMD_VEC via --xnvme_async=io_uring_cmd. This seems to indicate a setup issue in the emu implementation of cmd_iov(). To further investigate and fix this:

  • Create a xnvme_cmd_passv() specific test in tests/cmd.c
  • Add the test to cijoe-pkg-xnvme
  • Reproduce and troubleshoot
  • Fix be:async:thrpool

Changes for v0.4.0

Single-purpose release for this release:

  • liburing subproject switch from tracking master to liburing-2.2
    • This is pending the release of liburing-2.2

Bug in xnvme_fioe_reset_wp()

Hi,

I hit this bug running fio with random writes over xnvme, targeting an NVMe ZNS device. fio fails after some time.
I have a fix, but I can't create a PR.

I can't git push my local branch (based off master) with the fix.
I get:
$ git push origin fix_xnvme_fioe_reset_wp
Username for 'https://github.com': pierrelabat
Password for 'https://[email protected]':
remote: Permission to OpenMPDK/xNVMe.git denied to pierrelabat.
fatal: unable to access 'https://github.com/OpenMPDK/xNVMe.git/': The requested URL returned error: 403

What is the process you use to create a PR?

Regards
Pierre

Unable to open device with fab URI

I'm working with version 0.0.29 on CentOS 7.
I am trying to open a device using the format fab:174.60.77.139:4420?nsid=1, but all the examples fail to accept it as a valid format.

Help will be appreciated.

Towards distribution without library bundling and vendoring

Library bundling has helped provide library dependencies unavailable via packages on a given system.
However, it causes symbol-collisions once xNVMe is linked with other projects which may also link with a bundled library.
Thus, to avoid such issues and to enable Linux distro-packaging, this needs to be addressed.
This will be a long-running effort including the tasks:

  • Do not bundle liburing
  • Do not bundle SPDK/NVMe
    • Upstream the patches in subprojects/packagefiles/patches to DPDK and SPDK
    • Adjust documentation and scripts in toolbox/pkgs/* on building SPDK/NVMe for consumption by xNVMe
    • Adjust make and meson
  • Do not vendor fio
    • Upstream external I/O engine changes (subnqn / hostnqn / xnvme_mem)
    • Setup CI for using the upstream fio engine on Linux / FreeBSD / Windows
    • Fix issue on Windows
    • Remove vendoring / build of fio as subproject
    • Adjust documentation on using fio with the upstream xNVMe engine

Issue when opening namespace after spdk_tgt restarted on spdk backend

How to reproduce

spdk setup:

./build/bin/nvmf_tgt
./scripts/rpc.py nvmf_create_transport -t TCP -u 16384 -m 8 -c 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -d SPDK_Controller1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a  127.0.0.1 -s 4420
./scripts/rpc.py bdev_null_create Null0 8589934592 4096
./scripts/rpc.py bdev_null_create Null1 8589934592 4096
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1

A program opens namespace 1, then SPDK restarts with the same setup; after that, the program tries to open namespace 2.

Explanation:

Since the controller of namespace 1 is in a bad state, opening namespace 2 fails to reuse the ctrlr and will either fail outright or try to probe the device, which modifies the cref list and causes unwanted behavior.

Suggested solution

It was suggested to me on the SPDK Slack channel to call spdk_nvme_ctrlr_reset when detecting a failed controller; this allows namespace 2 to connect while keeping the same controller.
Another thing I needed to modify is to not close I/O qpairs from the previous controllers, since they have already been closed.
I also added a function _verify_ctrlr_ok that sends a keep-alive admin command to make sure there is communication with the storage device; this command may be changed to any other simple admin command, such as fetching a log page.
I have pushed a fix suggestion in #209; the changes are not trivial and may warrant a discussion.

Unable to use spdk backend using latest libxnvme version 21

Compiling and linking your code with xNVMe
Referring -> https://xnvme.io/docs/latest/backends/xnvme_be_spdk/index.html

CMakeLists.txt
target_compile_options(${PROJECT_NAME} PUBLIC -MMD)
target_compile_options(${PROJECT_NAME} PUBLIC -MP)
target_compile_options(${PROJECT_NAME} PUBLIC -MF)
target_compile_options(${PROJECT_NAME} PUBLIC -fPIE)

target_link_libraries(${PROJECT_NAME} -Wl,--whole-archive)
target_link_libraries(${PROJECT_NAME} -Wl,--no-as-needed)
target_link_libraries(${PROJECT_NAME} -lxnvme)
target_link_libraries(${PROJECT_NAME} -Wl,--no-whole-archive)
target_link_libraries(${PROJECT_NAME} -Wl,--as-needed)

Error:
[2020-12-15 15:19:58.023298] rpc.c: 216:spdk_rpc_register_method: ERROR: duplicate RPC rpc_get_methods registered...
[2020-12-15 15:19:58.023413] rpc.c: 216:spdk_rpc_register_method: ERROR: duplicate RPC spdk_get_version registered...
[2020-12-15 15:19:58.023429] rpc.c: 216:spdk_rpc_register_method: ERROR: duplicate RPC sock_impl_get_options registered...
[2020-12-15 15:19:58.023439] rpc.c: 216:spdk_rpc_register_method: ERROR: duplicate RPC sock_impl_set_options registered...
EAL: RTE_MEMPOOL tailq is already registered
PANIC in tailqinitfn_rte_mempool_tailq():
Cannot initialize tailq: RTE_MEMPOOL

Please suggest something that I can try to make this work.

Add hostnqn to open opts for spdk backend

Currently there is no way to specify hostnqn; this is an issue for an SPDK server that is told which hostnqns it should have.
Here is an example of an SPDK setup:

./build/bin/nvmf_tgt &
./scripts/rpc.py nvmf_create_transport -t TCP -u 16384 -m 8 -c 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -d SPDK_Controller1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a  127.0.0.1 -s 4420
./scripts/rpc.py bdev_null_create Null0 8589934592 4096
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null0
./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:b7dde740-f468-4d33-b253-55b20ac647c0

To connect to subsystem nqn.2016-06.io.spdk:cnode1, the hostnqn nqn.2014-08.org.nvmexpress:uuid:b7dde740-f468-4d33-b253-55b20ac647c0 must be used.
If hostnqn isn't specified in the ctrlr opts, then SPDK uses a random UUID to connect with.
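A possible shape for the requested option: carry hostnqn in the open-opts and fall back to SPDK's random UUID-based NQN when unset. The struct below is a simplified stand-in, not the real struct xnvme_opts:

```c
#include <stddef.h>

/* Hypothetical, trimmed-down open-opts; the real struct xnvme_opts
 * carries many more fields. */
struct opts {
	const char *hostnqn; /* NULL => backend picks a random UUID NQN */
	unsigned int nsid;
};

static struct opts opts_default(void)
{
	return (struct opts){.hostnqn = NULL, .nsid = 1};
}
```

A caller would then set opts.hostnqn to the NQN the target expects before opening the device, and the backend would pass it through to the controller options.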

Feature Request: PCI configuration space access support

New to xNVMe but so far I love what I have been reading and experimenting with in terms of device commands.

The ability to access the NVMe device's PCI memory space and configuration space, including all capabilities, as well as the ability to set registers. Not sure if this is already implemented, but if it is, some documentation for it would be splendid.

Appreciate everyone's combined effort!

3p-information is stale

The third-party information, e.g. which versions of fio, SPDK, and liburing xNVMe is built against, has gone stale.
The reason is that xnvme_3p.py is broken; this came with the move to meson, where the git-submodules were also removed.
Thus, the script xnvme_3p.py needs fixing to again produce relevant information about the 'meson-subprojects', as well as the OS version that xNVMe is built against.

Introduce an open-ended CLI for device configuration/options

Currently, arguments are passed one-to-one from a command-line argument such as "--be" to opts.be.
Thus, when a new option is introduced, all CLI tools need updating; lately this was the case for --mem, --subnqn, and --hostnqn.
To avoid this, an open-ended means of passing be-options should be provided.
It could look something like:

xnvme info /dev/nvme0n1 --opts "be.mem=hugepage,be.async=io_uring_cmd,..."
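Parsing such a string reduces to splitting on ',' and '='. A minimal sketch, where the function name and splitting rules are assumptions rather than actual xNVMe CLI code:

```c
#include <string.h>

/* Parse "k1=v1,k2=v2,..." in place, invoking cb (may be NULL) per pair.
 * Returns the number of pairs parsed; malformed entries are skipped.
 * Note: the input buffer is modified. */
int parse_opts(char *opts, void (*cb)(const char *key, const char *val))
{
	int n = 0;
	char *tok = opts;

	while (tok && *tok) {
		char *next = strchr(tok, ',');
		if (next)
			*next++ = '\0'; /* terminate token, advance to next */

		char *eq = strchr(tok, '=');
		if (eq) {
			*eq = '\0'; /* split "key=val" into two strings */
			if (cb)
				cb(tok, eq + 1);
			n++;
		}
		tok = next;
	}
	return n;
}
```

With this in place, new backend options need no per-tool flag plumbing; tools hand the --opts string to one shared parser.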

Opening new namespace when device already open fails with SPDK backend

We have started a discussion of this issue on Discord.
The issue happens on version 0.3.0, and I've confirmed it on CentOS 7 and Ubuntu 20.

Reproduction steps:
setup nvmf app, create 2 devices and add only one to the subsystem

./build/bin/nvmf_tgt &
./scripts/rpc.py nvmf_create_transport -t TCP -u 16384 -m 8 -c 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -d SPDK_Controller1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a  127.0.0.1 -s 4420
./scripts/rpc.py bdev_null_create Null0 8589934592 4096
./scripts/rpc.py bdev_null_create Null1 8589934592 4096
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null0

Run the following code (note the sleep)

#include <libxnvme.h>
#include <libxnvme_pp.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
        struct xnvme_opts opts1 = xnvme_opts_default();
        struct xnvme_opts opts2 = xnvme_opts_default();
        struct xnvme_dev *dev1;
        struct xnvme_dev *dev2;

        opts1.nsid = 1;
        opts2.nsid = 2;
        dev1 = xnvme_dev_open("127.0.0.1:4420", &opts1);
        sleep(20);
        dev2 = xnvme_dev_open("127.0.0.1:4420", &opts2);
        xnvme_dev_close(dev1);
        xnvme_dev_close(dev2);

        return 0;
}

When it starts sleeping, add the other namespace:

sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1

Here are the logs:

# DBG:xnvme_be.c:xnvme_be_factory-667: INFO: obtained backend instance be: 'spdk'
# DBG:xnvme_be_spdk_dev.c:_spdk_env_init-230: INFO: SPDK NVMe PCIe Driver registration -- BEGIN
# DBG:xnvme_be_spdk_dev.c:_spdk_env_init-235: INFO: skipping, already registered.
# DBG:xnvme_be_spdk_dev.c:_spdk_env_init-237: INFO: SPDK NVMe PCIe Driver registration -- END
[2022-06-09 13:20:24.558367] Starting SPDK v21.10 git sha1 4e4f11ff7 / DPDK 21.08.0 initialization...
[2022-06-09 13:20:24.558980] [ DPDK EAL parameters: [2022-06-09 13:20:24.559200] spdk [2022-06-09 13:20:24.559442] --no-shconf [2022-06-09 13:20:24.559667] -c 0x1 [2022-06-09 13:20:24.559891] --log-level=lib.eal:6 [2022-06-09 13:20:24.560124] --log-level=lib.cryptodev:5 [2022-06-09 13:20:24.560329] --log-level=user1:6 [2022-06-09 13:20:24.560526] --iova-mode=pa [2022-06-09 13:20:24.560754] --base-virtaddr=0x200000000000 [2022-06-09 13:20:24.560959] --match-allocations [2022-06-09 13:20:24.561153] --file-prefix=spdk_pid105448 [2022-06-09 13:20:24.561391] ]
EAL: No available 1048576 kB hugepages reported
TELEMETRY: No legacy callbacks, legacy socket not created
# DBG:xnvme_be_spdk_dev.c:_spdk_env_init-260: INFO: spdk_env_is_initialized: 1
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-798: ############################
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-799: INFO: trtype: PCIe, # 1 of 4
EAL: Cannot find device (0127:00:00.1)
EAL: Failed to attach device on primary process
[2022-06-09 13:20:24.702748] nvme.c: 838:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
[2022-06-09 13:20:24.702767] nvme.c: 915:spdk_nvme_probe: *ERROR*: Create probe context failed
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-810: FAILED: probe a:0, e:-1, i:0
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-798: ############################
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-799: INFO: trtype: TCP, # 2 of 4
trid:
  trstring: 'TCP'
  trtype: 0x3
  adrfam: 0x1
  traddr: '127.0.0.1'
  trsvcid: '4420'
  subnqn: 'nqn.2016-06.io.spdk:cnode1'
  priority: 0x0
[2022-06-09 13:20:25.111446] nvme_ctrlr.c: 704:nvme_ctrlr_set_intel_support_log_pages: *WARNING*: [nqn.2016-06.io.spdk:cnode1] Intel log pages not supported on Intel drive!
# DBG:xnvme_be_spdk_dev.c:attach_cb-453: INFO: nsid: 1
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-798: ############################
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-799: INFO: trtype: RDMA, # 3 of 4
[2022-06-09 13:20:25.111555] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe trtype 1 not available
[2022-06-09 13:20:25.111570] nvme.c: 915:spdk_nvme_probe: *ERROR*: Create probe context failed
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-810: FAILED: probe a:1, e:-1, i:0
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-798: ############################
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-799: INFO: trtype: FC, # 4 of 4
# DBG:xnvme_be_spdk_dev.c:_xnvme_be_spdk_ident_to_trid-322: FAILED: unsupported trtype: FC
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-804: SKIP/FAILED: ident_to_trid()
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-840: INFO: open() : OK
# DBG:xnvme_be_spdk_admin.c:xnvme_be_spdk_sync_cmd_admin-66: FAILED: xnvme_cmd_ctx_cpl_status()
# DBG:xnvme_be.c:xnvme_be_dev_idfy-475: INFO: !xnvme_adm_idfy_ctrlr_csi(CSI_ZONED)
# DBG:xnvme_be.c:xnvme_be_dev_idfy-499: INFO: no positive response to idfy(ZNS)
# DBG:xnvme_be_spdk_admin.c:xnvme_be_spdk_sync_cmd_admin-66: FAILED: xnvme_cmd_ctx_cpl_status()
# DBG:xnvme_be.c:xnvme_be_dev_idfy-510: INFO: !xnvme_adm_idfy_ctrlr_csi(CSI_FS)
# DBG:xnvme_be.c:xnvme_be_dev_idfy-538: INFO: no positive response to idfy(FS)
# DBG:xnvme_be_spdk_admin.c:xnvme_be_spdk_sync_cmd_admin-66: FAILED: xnvme_cmd_ctx_cpl_status()
# DBG:xnvme_be.c:xnvme_be_dev_idfy-546: INFO: not csi-specific id-NVM
# DBG:xnvme_be.c:xnvme_be_dev_idfy-547: INFO: falling back to NVM assumption
# DBG:xnvme_be.c:xnvme_be_factory-672: INFO: obtained device handle
# DBG:xnvme_be.c:xnvme_be_factory-667: INFO: obtained backend instance be: 'spdk'
# DBG:xnvme_be_spdk_dev.c:_spdk_env_init-224: INFO: already initialized
# DBG:xnvme_be_spdk_dev.c:_spdk_env_init-260: INFO: spdk_env_is_initialized: 1
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-759: INFO: found dev->ident.uri: '127.0.0.1:4420' via cref_lookup()
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-770: FAILED: !spdk_nvme_ns_is_active(nsid:0x2)
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-798: ############################
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-799: INFO: trtype: PCIe, # 1 of 4
EAL: Cannot find device (0127:00:00.1)
EAL: Failed to attach device on primary process
[2022-06-09 13:20:45.228927] nvme.c: 838:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
[2022-06-09 13:20:45.228948] nvme.c: 915:spdk_nvme_probe: *ERROR*: Create probe context failed
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-810: FAILED: probe a:0, e:-1, i:0
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-798: ############################
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-799: INFO: trtype: TCP, # 2 of 4
trid:
  trstring: 'TCP'
  trtype: 0x3
  adrfam: 0x1
  traddr: '127.0.0.1'
  trsvcid: '4420'
  subnqn: 'nqn.2016-06.io.spdk:cnode1'
  priority: 0x0
# DBG:xnvme_be_spdk_dev.c:attach_cb-453: INFO: nsid: 2
# DBG:xnvme_be_spdk_dev.c:attach_cb-462: FAILED: !spdk_nvme_ns_is_active(opts->nsid:0x2)
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-810: FAILED: probe a:0, e:0, i:0
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-798: ############################
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-799: INFO: trtype: RDMA, # 3 of 4
[2022-06-09 13:20:45.399107] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe trtype 1 not available
[2022-06-09 13:20:45.399157] nvme.c: 915:spdk_nvme_probe: *ERROR*: Create probe context failed
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-810: FAILED: probe a:0, e:-1, i:0
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-798: ############################
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-799: INFO: trtype: FC, # 4 of 4
# DBG:xnvme_be_spdk_dev.c:_xnvme_be_spdk_ident_to_trid-322: FAILED: unsupported trtype: FC
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-804: SKIP/FAILED: ident_to_trid()
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-786: FAILED: max attempts exceeded
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_dev_open-852: FAILED: xnvme_be_spdk_state_init()
# DBG:xnvme_be.c:xnvme_be_factory-687: INFO: skipping backend due to err: -6
# DBG:xnvme_be.c:xnvme_be_factory-667: INFO: obtained backend instance be: 'linux'
# DBG:xnvme_be_linux_dev.c:xnvme_be_linux_dev_open-51: INFO: open() : opts->oflags: 0x4, flags: 0x2, opts->create_mode: 0x180
# DBG:xnvme_be_linux_dev.c:xnvme_be_linux_dev_open-56: FAILED: open(uri: '127.0.0.1:4420') : state->fd: '-1', errno: 2
# DBG:xnvme_be_linux_dev.c:xnvme_be_linux_dev_open-63: FAILED: open(uri: '127.0.0.1:4420') : state->fd: '-1', errno: 2
# DBG:xnvme_be.c:xnvme_be_factory-687: INFO: skipping backend due to err: -2
# DBG:xnvme_be.c:xnvme_be_factory-639: INFO: skipping be: 'fbsd'; !enabled
# DBG:xnvme_be.c:xnvme_be_factory-667: INFO: obtained backend instance be: 'posix'
# DBG:xnvme_be_posix_dev.c:xnvme_be_posix_dev_open-46: INFO: open() : opts->oflags: 0x4, flags: 0x2, opts->create_mode: 0x180
# DBG:xnvme_be_posix_dev.c:xnvme_be_posix_dev_open-51: FAILED: open(uri: '127.0.0.1:4420'), state->fd: '-1', errno: 2
# DBG:xnvme_be.c:xnvme_be_factory-687: INFO: skipping backend due to err: -2
# DBG:xnvme_be.c:xnvme_be_factory-639: INFO: skipping be: 'windows'; !enabled
# DBG:xnvme_be.c:xnvme_be_factory-692: FAILED: no backend for uri: '127.0.0.1:4420'
# DBG:xnvme_dev.c:xnvme_dev_open-158: FAILED: failed opening uri: 127.0.0.1:4420
# DBG:xnvme_be_spdk_dev.c:_cref_deref-109: INFO: refcount: 0 => detaching

If you run the same code, but with both namespaces already attached beforehand, you get:

# DBG:xnvme_be.c:xnvme_be_factory-667: INFO: obtained backend instance be: 'spdk'
# DBG:xnvme_be_spdk_dev.c:_spdk_env_init-230: INFO: SPDK NVMe PCIe Driver registration -- BEGIN
# DBG:xnvme_be_spdk_dev.c:_spdk_env_init-235: INFO: skipping, already registered.
# DBG:xnvme_be_spdk_dev.c:_spdk_env_init-237: INFO: SPDK NVMe PCIe Driver registration -- END
[2022-06-09 13:22:59.612356] Starting SPDK v21.10 git sha1 4e4f11ff7 / DPDK 21.08.0 initialization...
[2022-06-09 13:22:59.612949] [ DPDK EAL parameters: [2022-06-09 13:22:59.613180] spdk [2022-06-09 13:22:59.613416] --no-shconf [2022-06-09 13:22:59.613651] -c 0x1 [2022-06-09 13:22:59.613850] --log-level=lib.eal:6 [2022-06-09 13:22:59.614100] --log-level=lib.cryptodev:5 [2022-06-09 13:22:59.614353] --log-level=user1:6 [2022-06-09 13:22:59.614645] --iova-mode=pa [2022-06-09 13:22:59.614844] --base-virtaddr=0x200000000000 [2022-06-09 13:22:59.615067] --match-allocations [2022-06-09 13:22:59.615477] --file-prefix=spdk_pid105572 [2022-06-09 13:22:59.615888] ]
EAL: No available 1048576 kB hugepages reported
TELEMETRY: No legacy callbacks, legacy socket not created
# DBG:xnvme_be_spdk_dev.c:_spdk_env_init-260: INFO: spdk_env_is_initialized: 1
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-798: ############################
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-799: INFO: trtype: PCIe, # 1 of 4
EAL: Cannot find device (0127:00:00.1)
EAL: Failed to attach device on primary process
[2022-06-09 13:22:59.759757] nvme.c: 838:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
[2022-06-09 13:22:59.759778] nvme.c: 915:spdk_nvme_probe: *ERROR*: Create probe context failed
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-810: FAILED: probe a:0, e:-1, i:0
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-798: ############################
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-799: INFO: trtype: TCP, # 2 of 4
trid:
  trstring: 'TCP'
  trtype: 0x3
  adrfam: 0x1
  traddr: '127.0.0.1'
  trsvcid: '4420'
  subnqn: 'nqn.2016-06.io.spdk:cnode1'
  priority: 0x0
[2022-06-09 13:23:00.195438] nvme_ctrlr.c: 704:nvme_ctrlr_set_intel_support_log_pages: *WARNING*: [nqn.2016-06.io.spdk:cnode1] Intel log pages not supported on Intel drive!
# DBG:xnvme_be_spdk_dev.c:attach_cb-453: INFO: nsid: 1
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-798: ############################
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-799: INFO: trtype: RDMA, # 3 of 4
[2022-06-09 13:23:00.195547] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe trtype 1 not available
[2022-06-09 13:23:00.195564] nvme.c: 915:spdk_nvme_probe: *ERROR*: Create probe context failed
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-810: FAILED: probe a:1, e:-1, i:0
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-798: ############################
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-799: INFO: trtype: FC, # 4 of 4
# DBG:xnvme_be_spdk_dev.c:_xnvme_be_spdk_ident_to_trid-322: FAILED: unsupported trtype: FC
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-804: SKIP/FAILED: ident_to_trid()
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-840: INFO: open() : OK
# DBG:xnvme_be_spdk_admin.c:xnvme_be_spdk_sync_cmd_admin-66: FAILED: xnvme_cmd_ctx_cpl_status()
# DBG:xnvme_be.c:xnvme_be_dev_idfy-475: INFO: !xnvme_adm_idfy_ctrlr_csi(CSI_ZONED)
# DBG:xnvme_be.c:xnvme_be_dev_idfy-499: INFO: no positive response to idfy(ZNS)
# DBG:xnvme_be_spdk_admin.c:xnvme_be_spdk_sync_cmd_admin-66: FAILED: xnvme_cmd_ctx_cpl_status()
# DBG:xnvme_be.c:xnvme_be_dev_idfy-510: INFO: !xnvme_adm_idfy_ctrlr_csi(CSI_FS)
# DBG:xnvme_be.c:xnvme_be_dev_idfy-538: INFO: no positive response to idfy(FS)
# DBG:xnvme_be_spdk_admin.c:xnvme_be_spdk_sync_cmd_admin-66: FAILED: xnvme_cmd_ctx_cpl_status()
# DBG:xnvme_be.c:xnvme_be_dev_idfy-546: INFO: not csi-specific id-NVM
# DBG:xnvme_be.c:xnvme_be_dev_idfy-547: INFO: falling back to NVM assumption
# DBG:xnvme_be.c:xnvme_be_factory-672: INFO: obtained device handle
# DBG:xnvme_be.c:xnvme_be_factory-667: INFO: obtained backend instance be: 'spdk'
# DBG:xnvme_be_spdk_dev.c:_spdk_env_init-224: INFO: already initialized
# DBG:xnvme_be_spdk_dev.c:_spdk_env_init-260: INFO: spdk_env_is_initialized: 1
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-759: INFO: found dev->ident.uri: '127.0.0.1:4420' via cref_lookup()
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-779: INFO: re-using previously attached controller
# DBG:xnvme_be_spdk_dev.c:xnvme_be_spdk_state_init-840: INFO: open() : OK
# DBG:xnvme_be_spdk_admin.c:xnvme_be_spdk_sync_cmd_admin-66: FAILED: xnvme_cmd_ctx_cpl_status()
# DBG:xnvme_be.c:xnvme_be_dev_idfy-475: INFO: !xnvme_adm_idfy_ctrlr_csi(CSI_ZONED)
# DBG:xnvme_be.c:xnvme_be_dev_idfy-499: INFO: no positive response to idfy(ZNS)
# DBG:xnvme_be_spdk_admin.c:xnvme_be_spdk_sync_cmd_admin-66: FAILED: xnvme_cmd_ctx_cpl_status()
# DBG:xnvme_be.c:xnvme_be_dev_idfy-510: INFO: !xnvme_adm_idfy_ctrlr_csi(CSI_FS)
# DBG:xnvme_be.c:xnvme_be_dev_idfy-538: INFO: no positive response to idfy(FS)
# DBG:xnvme_be_spdk_admin.c:xnvme_be_spdk_sync_cmd_admin-66: FAILED: xnvme_cmd_ctx_cpl_status()
# DBG:xnvme_be.c:xnvme_be_dev_idfy-546: INFO: not csi-specific id-NVM
# DBG:xnvme_be.c:xnvme_be_dev_idfy-547: INFO: falling back to NVM assumption
# DBG:xnvme_be.c:xnvme_be_factory-672: INFO: obtained device handle
# DBG:xnvme_be_spdk_dev.c:_cref_deref-109: INFO: refcount: 0 => detaching

It seems this issue is caused by xNVMe not handling a certain event sent by the target (SPDK_NVME_ASYNC_EVENT_NS_ATTR_CHANGED).
Here is the code in SPDK which should handle it:

void
nvme_ctrlr_process_async_event(struct spdk_nvme_ctrlr *ctrlr,
			       const struct spdk_nvme_cpl *cpl)
{
	union spdk_nvme_async_event_completion event;
	struct spdk_nvme_ctrlr_process *active_proc;
	int rc;

	event.raw = cpl->cdw0;

	if ((event.bits.async_event_type == SPDK_NVME_ASYNC_EVENT_TYPE_NOTICE) &&
	    (event.bits.async_event_info == SPDK_NVME_ASYNC_EVENT_NS_ATTR_CHANGED)) {
		nvme_ctrlr_clear_changed_ns_log(ctrlr);

		rc = nvme_ctrlr_identify_active_ns(ctrlr);
		if (rc) {
			return;
		}
		nvme_ctrlr_update_namespaces(ctrlr);
		nvme_io_msg_ctrlr_update(ctrlr);
	}

	if ((event.bits.async_event_type == SPDK_NVME_ASYNC_EVENT_TYPE_NOTICE) &&
	    (event.bits.async_event_info == SPDK_NVME_ASYNC_EVENT_ANA_CHANGE)) {
		if (!ctrlr->opts.disable_read_ana_log_page) {
			rc = nvme_ctrlr_update_ana_log_page(ctrlr);
			if (rc) {
				return;
			}
			nvme_ctrlr_parse_ana_log_page(ctrlr, nvme_ctrlr_update_ns_ana_states,
						      ctrlr);
		}
	}

	active_proc = nvme_ctrlr_get_current_process(ctrlr);
	if (active_proc && active_proc->aer_cb_fn) {
		active_proc->aer_cb_fn(active_proc->aer_cb_arg, cpl);
	}
}

It is defined in nvme_ctrlr.c.

To solve it on our end we created a workaround (probably not a good one) in xnvme_be_spdk_state_init: when probing fails at the maximum attempt, we call these functions:

nvme_ctrlr_clear_changed_ns_log(ctrlr);
nvme_ctrlr_identify_active_ns(ctrlr);
nvme_ctrlr_update_namespaces(ctrlr);
nvme_io_msg_ctrlr_update(ctrlr);

and then run xnvme_be_spdk_state_init again.

@safl what do you think should be our approach to solve this?

xnvme info should indicate which arguments are required

When running xnvme info in a fabrics setup the required arguments are uri, dev-nsid and subnqn (if there are more subsystems).
However, this is not stated anywhere and the error from a missing argument is "ERR: 'xnvme_dev_open()': {errno: -2, msg: 'No such file or directory'}", which is not very helpful.

Ideally, both the error message and xnvme info --help should indicate which arguments are required.

Included SPDK fails enumerating a fabrics target with a bdev_null

The included SPDK fails when trying to enumerate a fabrics target with a bdev_null.
The fabrics target can be set up with the following commands:

sudo ./scripts/rpc.py nvmf_create_transport -t TCP -u 16384 -m 8 -c 8192
sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -d SPDK_Controller1
sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a  127.0.0.1 -s 4420
sudo ./scripts/rpc.py bdev_null_create Null0 8589934592 4096
sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null0

If xnvme enum 127.0.0.1:4420 is called after setting up the target, it will fail the enumeration.

If the target is set up with a separate SPDK repo (v22.09), then there are no issues and the bdev_null is enumerated as expected.

Clean up flow of xnvme_opts when using xnvmec

xnvme_opts_default gets overwritten when calling commands from the CLI.
That is, when calling an xnvme command from the CLI, e.g. xnvme info, the opts get overwritten by xnvmec_cli_to_opts in xnvmec.
Looking at the code in xnvmec.c (line ~1840):

// opts are created
struct xnvme_opts opts = xnvme_opts_default();
// opts are overwritten
if (xnvmec_cli_to_opts(cli, &opts)) {
	xnvmec_perr("xnvmec_cli_to_opts()", errno);
	return -1;
}

Furthermore, when xnvme_dev_open is called, opts->rdwr is reset to the default that was overwritten.
Overall it is a bit messy and not working as intended.

A solution could be to add a new function, e.g. xnvme_opts_set_defaults, that takes opts as an argument and sets the unset values to their defaults.
It could then be called after xnvmec_cli_to_opts.

Changes for v0.6.0 (``next``)

Two tasks are postponed to v0.7.0: #184
See the list of commits on PR #170, with PRs categorized as:

API

CLI and Tools

  • CLI-improvements
  • toolbox:cijoe: add test for xdd

Ramdisk

  • testing: #168
  • be:ramdisk: add iovec support
  • be:ramdisk: add write-zeroes

Linux

  • be:linux: hugepage support
  • SQPOLL optimizations and toggles

Windows

  • Non-NVMe block-device support on Windows
  • Experimental usage for Windows IORING

Fabrics

  • cijoe: create tests for fabrics
  • be:spdk: process spdk events before reusing controller

Third-party

Toolbox (mk, cijoe, misc.)

  • ci: add scan-build to workflow
  • ci,toolbox: add ramdisk testing
  • cijoe: create tests for fabrics
  • Refactoring of Python make

Zoned-Namespaces (ZNS) Support in software/firmware level via backports

Is there a particular reason why something like this hasn't been ported in an update for users of older software and/or hardware? In theory it was meant to be possible to implement without dedicated hardware support for NVMe 2.0 🤔

And could an easier way be brought forward in the future to use ZNS across multiple platforms, if support is not directly provided by specific vendors of the hardware and the SSDs that come with them? 🙏🏽

api-change: command-preppers vs. command-construct+submit

Currently the command-set APIs have helper functions such as:

  • xnvme_nvm_read()
  • xnvme_znd_append()

These construct and submit commands in a single function call. This is convenient for the common case of synchronous calls, and for non-batched async commands.

However, it poses issues for:

  • Batch / bulk submission
  • Expansion of NVMe functionality

To increase flexibility, the construction of a command should be separated from its submission. The APIs can then also split the different parts of command construction: e.g. encoding whether a payload is iovec or buffers, and constructing the opcode and related cdws, separately from the setup of e.g. uuid.

Going forward, the APIs will move towards this separation.

Issue of linking libxnvme.a(built with spdk) with myproject

Context: I will try to explain the existing project setup and the error encountered while linking libxnvme.a.

  • B (.so) links libxnvme (.a)
  • A (another module, .so) links B (.so)
  • X (another module, .so) links B (.so)

  1. I am working on B. B calls the libxnvme API. I used the linker flags below for linking libxnvme (.a) into B; this linkage is required to use the SPDK API.
    ;# LINKAGE
    target_link_libraries(${B_NAME} -Wl,--whole-archive)
    target_link_libraries(${B_NAME} -Wl,--no-as-needed)
    target_link_libraries(${B_NAME} ${libxnvme.a})
    target_link_libraries(${B_NAME} -Wl,--no-whole-archive)
    target_link_libraries(${B_NAME} -Wl,--as-needed)

Result: duplicate-symbol linker errors ("defined twice") were observed for a few SPDK methods.

  2. Workaround of the issue mentioned in 1:
    I used the same set of linker flags in the CMakeLists of module A and removed them from B. With this I was able to use the SPDK API.
    This resolved the duplicate-symbol linking error.

  3. Now X complains that it cannot find xnvme_enumerate() in B.so.
    Had I been able to use the approach in 1 in the first place (without the duplicate-symbol error), I would not have needed the workaround in 2.
    Error: ImportError: X.so: undefined symbol: xnvme_enumerate

  4. To resolve the error in 3, I tried putting the same linker options in the CMakeLists of X, as described in 1.
    It worked.
    But this approach makes the libxnvme APIs available/exposed in modules where they are not required.

This approach is problematic, as it leads to exposing the libxnvme API (along with the SPDK API) everywhere.

A bit similar issue on stackoverflow: https://stackoverflow.com/questions/805555/ld-linker-question-the-whole-archive-option

Please feel free to ask more information in order to resolve the issue.

xnvme enum is not working

[root@localhost xnvme-0.0.21]# xnvme-driver reset
0000:01:00.0 (144d a808): vfio-pci -> nvme
[root@localhost xnvme-0.0.21]#
[root@localhost xnvme-0.0.21]# xnvme-driver
0000:01:00.0 (144d a808): nvme -> vfio-pci
[root@localhost xnvme-0.0.21]# xnvme enum

xnvme_enumerate()

Starting SPDK v20.07 / DPDK 20.05.0 initialization...
[ DPDK EAL parameters: xnvme -c 0x1 --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
EAL: No available hugepages reported in hugepages-1048576kB
EAL: VFIO support initialized
EAL: Cannot allocate memzone list
EAL: FATAL: Cannot init memzone
EAL: Cannot init memzone
Failed to initialize DPDK

DBG:xnvme_be_spdk.c:_spdk_env_init-244: FAILED: spdk_env_init(), err: -19

DBG:xnvme_be_spdk.c:_spdk_env_init-253: INFO: spdk_env_is_initialized: 0

DBG:xnvme_be_spdk.c:xnvme_be_spdk_enumerate-717: FAILED: _spdk_env_init()

DBG:xnvme_be.c:xnvme_enumerate-716: FAILED: spdk->enumerate(...), err: 'No such process', i: 0

DBG:xnvme_be_nosys.c:xnvme_be_nosys_enumerate-141: FAILED: not implemented(possibly intentional)

DBG:xnvme_be.c:xnvme_enumerate-716: FAILED: fbsd->enumerate(...), err: 'Function not implemented', i: 2

xnvme_enumeration:
capacity: 100
nentries: 0
entries: ~

xnvme_tests_enum is missing flag for backend

The test xnvme_tests_enum is missing a flag to specify the desired backend.
Without this it will only test the SPDK backend.
Since it is also part of the vfio testplan, it should be expanded to actually test the correct backend.

Compiling failed

xNVMe-0.0.24/tests/ident.c:170:9: error: missing initializer for field ‘schm’ of ‘struct xnvme_ident’ [-Werror=missing-field-initializers]
struct xnvme_ident ident = { 0 };

It looks like this gcc version does not accept initialization like struct xnvme_ident ident = { 0 }; when -Werror=missing-field-initializers is in effect.
Using memset() instead avoids the error.

gcc version 8.3.1 20190311 (Red Hat 8.3.1-3) (GCC)

Normalize error-handling on sync-io

The codebase needs a review of error-handling on synchronous I/O: it should always fill out the completion-status. It should do so since async emulation utilizes the sync interface implementation, and thus relies on the completion-status for error-handling.

Initial step here, is part of the refactoring of Common Backend Implementations (CBI):

This should be done equivalently for e.g. the Linux Block shim; generally, all the sync implementations should be gone over to iron this out.

Memory-backed pseudo-device

Currently, testing xNVMe relies on having a device available to exercise the different code-paths.
On Linux, the null-block device is a great vehicle for testing xNVMe with block devices, and qemu provides NVMe emulation on which xNVMe can test the NVMe command-set specific features. xNVMe uses these extensively; however, it is a bit cumbersome to bring that infrastructure up just to verify things like the async-emulation code of emu and thrpool, and core library logic.

A lot of core xNVMe functionality could be tested by having a pseudo-backend mimicking the behavior of a device while remaining within the library. Such a pseudo-device-backend is useful for testing core-library logic and makes perf measurements of library encapsulations simpler.

Work has started on such a device-backend in pull-request #38.

In its current form it can replace the use of the null-block device; taking it further, it could replace the general use of qemu by mimicking the functionality of the Zoned Command Set and Key Value Command Set. It does not aim to replace either null-block or qemu-nvme; it should "just" implement naive responses to I/O and admin commands, such that the various core-library features and command-construction APIs can be exercised.

TODO

  • Add testing using cijoe_runner in the non-qemu/build workflows
    • build-linux
    • build-macos
    • build-windows
  • Add support for iovec, that is, implement the cmd_iov()
  • cmd_admin: handling of 'admin' commands, primarily get-log and get/set feat as defined in the base-spec
  • cmd_io: handling of 'optional' commands, e.g. simple-copy as defined for the NVM Command Set
  • cmd_io: handling of commands as defined in the Zoned Namespace Command Set
  • cmd_io: handling of commands as defined in the Key Value Command Set

Add cijoe-pkg-xnvme to xNVMe repository

Currently the test-package used for testing xNVMe lives in a separate repository: https://github.com/refenv/cijoe-pkg-xnvme
When adding new features, that repository needs to be updated and made available by publishing a pre-release version of the package on PyPI.
To use those changes, one does: pip3 install --user --pre cijoe-pkg-xnvme.

This works, sort of. However, it would provide for better parallel work if cijoe-pkg-xnvme were provided within the xNVMe repository: the testing of a feature could then be provided in lock-step with the PR on xNVMe, allowing for more parallel/pipelined work.

Steps:

  • Add cijoe-pkg-xnvme to scripts/cijoe-pkg-xnvme
  • Adjust CI to install the cijoe-package "directly" from the repository rather than via PyPI
  • Consider whether the cijoe-pkg-xnvme should be removed from its current home https://github.com/refenv/cijoe-pkg-xnvme
  • Re-consider whether deployment to PyPI should happen at all, potentially as outlined below.

The use of the package can be:

  • PR:
    • No deployment to PyPI
    • install package from the xNVMe repository
  • next:
    • Deploy package to PyPI as pre-release-package
    • install package from PyPI using pip3 install --pre
  • rc:
    • Deploy package to PyPI
    • Install via PyPI: pip3 install cijoe-pkg-xnvme --version-matching-tag
  • tags:
    • No deployment to PyPI
    • Install via PyPI: pip3 install cijoe-pkg-xnvme --version-matching-tag
  • main:
    • No deployment to PyPI
    • Install via PyPI, latest: pip3 install --user cijoe-pkg-xnvme

SPDK backend uses detached controller

I have found the following issue when working with version 0.3.0 and using ip:port for opening devices. Here is the flow:

  1. Create 5 namespaces (should happen with at least 2)
  2. Open namespaces and issue commands
  3. Close namespaces
  4. Create namespace, open and issue commands
    In step 4 it crashes sporadically; when debugging the issue I noticed that the controller has garbage values at this stage.

This is what I see when debugging:

  1. _cref_insert is called 5 times with the same controller, creating 5 entries
  2. _cref_lookup is called 5 times
  3. when closing the device _cref_deref is called 5 times removing the first entry of the controller out of 5 and detaching the controller
  4. _cref_lookup is called and raises the refcount of the next entry (same controller) to 2

It seems to me that the root cause of the issue is creating the 5 entries instead of only 1. Assuming _cref_deref would be called enough times to clear every entry in the table, it would detach the controller 5 times, which should not happen.

With these changes the crashes stop:

diff --git a/lib/xnvme_be_spdk_dev.c b/lib/xnvme_be_spdk_dev.c
index 09f2ffc..91bd247 100644
--- a/lib/xnvme_be_spdk_dev.c
+++ b/lib/xnvme_be_spdk_dev.c
@@ -69,6 +69,15 @@ _cref_insert(struct xnvme_ident *ident, struct spdk_nvme_ctrlr *ctrlr)
                XNVME_DEBUG("FAILED: !ctrlr");
                return -EINVAL;
        }
+       for (int i = 0; i < XNVME_BE_SPDK_CREFS_LEN; ++i) { // skip insert when an entry for this uri exists
+               if (!g_cref[i].refcount) {
+                       continue;
+               }
+
+               if (!strncmp(g_cref[i].uri, ident->uri, XNVME_IDENT_URI_LEN - 1)) {
+                       return 0;
+               }
+       }

        for (int i = 0; i < XNVME_BE_SPDK_CREFS_LEN; ++i) {
                if (g_cref[i].refcount) {

Unable to open device with PCI URI

After configuring and installing

$  ./configure --enable-debug --disable-tests --disable-examples --disable-tools --disable-tools-fio --enable-be-spdk
$ make
$ sudo make install

And building the example application, using the pci identifier to enable the SPDK backend:

#include <stdio.h>
#include <libxnvme.h>

int
main(int argc, char** argv)
{
    struct xnvme_dev* dev = xnvme_dev_open("pci://0000:af:00.0");
    if (!dev) {
        perror("xnvme_dev_open");
        return 1;
    }
    xnvme_dev_pr(dev, XNVME_PR_DEF);
    xnvme_dev_close(dev);

    return 0;
}

This address corresponds to the NVMe SSD on my system:

af:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM173X

Expected behavior is something like the example output in the docs. Instead I get:

xnvme_dev_open: No such device or address

I've tried numerous variations on the PCI string but none have worked. What is the needed configuration to enable the SPDK backend?

FreeBSD 13.1 breakage

The FreeBSD releng/13.0, which the cloud-init images used in the CI environment are based on, is end-of-life.
This broke the CI as installation of ninja leads to binary-incompatibility issues.

Upgrading to 13.1 breaks SPDK/DPDK; fixes for these will be available from SPDK v22.09.
Thus, the build-and-test jobs on FreeBSD are currently disabled.

xnvmec: add a simple pretty-printer for io-stats

In libxnvme_util.h there are helpers for simple elapsed-wallclock-timers:

  • xnvme_timer_start()
  • xnvme_timer_stop()
  • xnvme_timer_bw_pr() -- printing mebibytes/second

This set of helpers takes a struct xnvme_timer as argument, whereas the following take a cli-instance as argument, which in turn has the timer initialized for convenience. Additionally, in the cli-library these helpers are provided:

  • xnvmec_timer_start()
  • xnvmec_timer_stop()
  • xnvmec_timer_bw_pr() -- printing mebibytes/second

It would be nice with the addition of a convenience function for printing IOPS, e.g. an xnvmec_timer_iops_pr(). Even better would be a re-work of how this stat-printing works, possibly an xnvmec_iostats_pr(). This would be consumed by the various tools providing "performance" estimates in the example-code.

Re-vamp the testing infrastructure

The cijoe testing infrastructure has been re-written to facilitate pytest as the testrunner and improve reproducibility: https://cijoe.readthedocs.io/en/latest/

The motivation for switching, and for the re-tooling of cijoe, is manifold:

glue-tests
xNVMe relies on the parametrization of small C-programs to test functionality. This was previously done by writing a bash-script (a CIJOE testcase) and having the cijoe testrunner produce the parameter-set executing the test. The "glue-tests" also serve for parametrization of applications such as fio to test the different I/O paths supported by xNVMe, for testing the command-line utilities provided with xNVMe, etc.

  • Improve "glue-tests"
    • Glue-tests were written in Bash; they are now written in Python as pytests. Why is this great? By relying on pytest infrastructure for parametrization and fixtures for configuration-state, the boiler-plate is drastically reduced.
    • To quantify the improvement of less per-test boiler-plate, have a look at PR (#148): this switch reduces the testing-infra code by about 3000 lines.
    • Changing the cijoe testrunner from a custom testrunner, with its own testcase, testsuite and testplan definitions, to a pytest-plugin is done to leverage the prior pytest-knowledge of potential newcomers and contributors to xNVMe. E.g. reading a pytest conftest and a bunch of pytest-tests should be easier to jump into than the previous state of the cijoe testrunner.
  • Reproducibility
    • A lot of testing is done via GitHub Actions, which in turn rely on spinning up qemu-guests with emulated NVMe devices. Reproducing these qemu-guests and testing locally was not trivial; thus testing often relied entirely on waiting for time on CI.
    • Now, the workflow-steps of a GitHub Action can be expressed by cijoe and run locally, bootstrapping the exact same guests and reproducing errors found. This reduces the dependency on the GitHub infra, as the cijoe workflows can be run locally in a guest, on hardware, or on other CI infrastructure available elsewhere.

The major milestones

  • Re-implementation cijoe in Python utilizing pytest as the test-runner
  • Release cijoe re-implementation
  • Port cijoe-pkg-linux
  • Port cijoe-pkg-qemu
  • Port cijoe-pkg-fio
  • Port cijoe-pkg-xnvme
  • Adjust GitHub CI - #148
    • adjust "build-and-test"
    • adjust "docgen"

Testcase porting check-list

Tests are being ported; the last major ones are the fio-encapsulated tests, doing parameter-construction for external engines.

  • examples-xnvme_hello.sh
  • examples-xnvme_io_async_read.sh
  • examples-xnvme_io_async_write.sh
  • examples-zoned_io_async_append.sh
  • examples-zoned_io_async_read.sh
  • examples-zoned_io_async_write.sh
  • examples-zoned_io_sync_append.sh
  • examples-zoned_io_sync_read.sh
  • examples-zoned_io_sync_write.sh
  • lblk_enum.sh
  • lblk_idfy.sh
  • lblk_info.sh
  • lblk_read.sh
  • lblk_write.sh
  • lblk_write_uncor.sh
  • lblk_write_zeroes.sh
  • xnvme_enum.sh
  • xnvme_enum_fabrics.sh
  • xnvme_feature_get.sh
  • xnvme_feature_set.sh
  • xnvme_file_copy_sync.sh
  • xnvme_fioe.sh
  • xnvme_format.sh
  • xnvme_idfy.sh
  • xnvme_idfy_ctrlr.sh
  • xnvme_idfy_ns.sh
  • xnvme_info.sh
  • xnvme_kvs_enum.sh
    • Done, but needs improvement
  • xnvme_kvs_exist.sh
  • xnvme_kvs_idfy_ns.sh
  • xnvme_kvs_io.sh
  • xnvme_kvs_list.sh
  • xnvme_kvs_retrieve.sh
  • xnvme_kvs_store_opt.sh
  • xnvme_library_info.sh
  • xnvme_log-erri.sh
  • xnvme_log-health.sh
  • xnvme_log.sh
  • xnvme_padc.sh
  • xnvme_pioc.sh
  • xnvme_sanitize.sh
  • xnvme_tests_async_intf01.sh
  • xnvme_tests_async_intf02.sh
  • xnvme_tests_async_intf03.sh
  • xnvme_tests_async_intf04.sh
  • xnvme_tests_enum_any_be_multi.sh
  • xnvme_tests_enum_any_be_open.sh
  • xnvme_tests_enum_multi.sh
  • xnvme_tests_enum_open.sh
  • xnvme_tests_lblk_io.sh
  • xnvme_tests_lblk_scopy.sh
  • xnvme_tests_lblk_write_uncorrectable.sh
  • xnvme_tests_lblk_zero.sh
  • xnvme_tests_scc_idfy.sh
  • xnvme_tests_scc_scopy_async.sh
  • xnvme_tests_scc_scopy_msrc_async.sh
  • xnvme_tests_scc_scopy_msrc_sync.sh
  • xnvme_tests_scc_support.sh
  • xnvme_tests_znd_append.sh
  • xnvme_tests_znd_state.sh
  • xnvme_tests_znd_zrwa_flush.sh
  • xnvme_tests_znd_zrwa_flush_explicit.sh
  • xnvme_tests_znd_zrwa_flush_implicit.sh
  • xnvme_tests_znd_zrwa_idfy.sh
  • xnvme_tests_znd_zrwa_open_with_zrwa.sh
  • xnvme_tests_znd_zrwa_open_without_zrwa.sh
  • xnvme_tests_znd_zrwa_support.sh
  • xpy_ctypes_bin_dev_open.sh
  • xpy_ctypes_bin_enumerate.sh
  • xpy_ctypes_bin_libconf.sh
  • xpy_cython_bindings_pytest.sh
  • xpy_cython_header_pytest.sh
  • zoned_append.sh
  • zoned_changes.sh
  • zoned_enum.sh
  • zoned_idfy_ctrlr.sh
  • zoned_idfy_ns.sh
  • zoned_mgmt_open.sh
  • zoned_read.sh
  • zoned_report.sh
  • zoned_report_all.sh
  • #147
  • zoned_report_some.sh
  • zoned_write.sh
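As a sketch of what the porting entails, a shell testcase from the list above (e.g. lblk_info.sh) reduces to a pytest function that runs the CLI tool and asserts on its exit code. The sketch below uses plain pytest and subprocess; the actual port goes through cijoe's fixtures and transports, whose exact names are not reproduced here, and the commands are placeholders since the real tools need a device.

```python
"""Hedged sketch: porting shell testcases to pytest.

Uses plain pytest + subprocess; the real cijoe port routes commands
through cijoe's transport instead. Commands below are placeholders
('echo' stands in for the xNVMe CLI tools, which require a device).
"""
import subprocess

import pytest

# Placeholder invocations; a real port would run e.g. "lblk info <dev>".
CASES = [
    ["echo", "xnvme", "info"],
    ["echo", "lblk", "info"],
]


@pytest.mark.parametrize("cmd", CASES, ids=lambda c: "-".join(c[1:]))
def test_tool_exits_zero(cmd):
    # Like the .sh scripts, the ported test asserts on the exit code.
    proc = subprocess.run(cmd, capture_output=True, text=True)
    assert proc.returncode == 0, proc.stderr
```

With this shape, each .sh file in the checklist becomes one parametrized case or one small test function, and pytest handles selection and reporting.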

Future fixing

toolbox/cijoe-pkg-xnvme/src/cijoe/xnvme/tests/logic/test_xnvme_tests_enum.py
toolbox/cijoe-pkg-xnvme/src/cijoe/xnvme/tests/tools/test_xnvme.py
toolbox/cijoe-pkg-xnvme/src/cijoe/xnvme/tests/logic/test_xnvme_tests_ioworker.py

The 'test_xnvme_tests_ioworker.py' needs to be expanded with a test_verify_sync() that replaces the xnvme_tests_ioworker_sync.sh from the 'next' branch.
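A minimal sketch of such a test_verify_sync() follows; the command-line arguments and any cijoe fixture usage are assumptions not taken from the source, so a placeholder command is used in place of the real ioworker invocation.

```python
"""Hedged sketch of the missing test_verify_sync().

The real test would invoke the ioworker verify-path with a synchronous
backend (as xnvme_tests_ioworker_sync.sh did), via cijoe; the command
here is a placeholder, since the exact CLI arguments are not known.
"""
import subprocess


def test_verify_sync():
    # Placeholder for running the ioworker with a sync backend;
    # 'true' simply exits 0 so the sketch is runnable as-is.
    proc = subprocess.run(["true"])
    assert proc.returncode == 0
```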
