
Issue while doing ./mkfs.nvfuse -f · nvfuse · OPEN

nvfuse commented on May 30, 2024

Issue while doing ./mkfs.nvfuse -f

Comments (7)

yongseokoh commented on May 30, 2024

Hi,

This message sometimes appears when the block descriptors (BDs) are not written to the SSD properly. Can you build with the "NVFUSE_BD_DEBUG" definition to verify the BDs?
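
If it helps, a minimal sketch of the usual compile-time pattern is below, assuming NVFUSE_BD_DEBUG is a preprocessor definition enabled at build time (for example by adding -DNVFUSE_BD_DEBUG to CFLAGS); the descriptor struct and the dump helper are illustrative placeholders, not the actual nvfuse code.

#include <stdio.h>
#include <stdint.h>

/* Hypothetical block-descriptor layout, for illustration only; the real
 * fields live in the nvfuse headers. */
struct bd_example {
    uint32_t magic;        /* e.g. the bdbdbdbd magic printed by mkfs */
    uint32_t start_block;
    uint32_t num_blocks;
};

#ifdef NVFUSE_BD_DEBUG
/* Compiled in only when built with -DNVFUSE_BD_DEBUG: dump the BD that was
 * just written so it can be compared with what is read back from the SSD. */
static void dump_bd(const struct bd_example *bd)
{
    printf("bd: magic=%08x start=%u blocks=%u\n",
           bd->magic, bd->start_block, bd->num_blocks);
}
#else
#define dump_bd(bd) ((void)0)  /* no-op when the debug define is absent */
#endif

Comparing the values dumped at format time with the ones printed when the BDs are read back is a quick way to see whether they actually reached the SSD.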

Thanks


Vinayak099 commented on May 30, 2024

I will try that and let you know.

Thanks


shounak1 commented on May 30, 2024

Hi,
I have the same issue, but I get a segmentation fault when I run ./mkfs.nvfuse -f. I am running nvfuse, SPDK, and DPDK on QEMU.

Here is my QEMU command to start the VM:

sudo ./qemu-system-x86_64 -m 9000 \
  -cdrom ~/Downloads/ubuntu-14.04.5-server-amd64.iso \
  -drive file=~/vdisk/16gb.img, \
  -drive file=~/vdisk/nvme_disk.img,if=none,id=drv1 \
  -device nvme,drive=drv1,serial=foo1 \
  --enable-kvm -smp 16 \
  -cpu qemu64,+ssse3,+sse4.1,+sse4.2,+x2apic \
  -device e1000,netdev=net0 \
  -netdev user,id=net0,hostfwd=tcp::2222-:22 \
  -display none

Here is the output of ./mkfs.nvfuse -f:

 appname = (null)
 cpu core mask = 1
 qdepth = 512 
 buffer size = 0 MB
 need format = 1 
 need mount = 0 
 preallocation = 0 
 spdk_setup: filename = SPDK, qdepth = 1024 
Initialization complete.
Starting SPDK v18.07-pre / DPDK 18.02.0 initialization...
[ DPDK EAL parameters: hello_world -c 1 -m 8192 --file-prefix=spdk1 --base-virtaddr=0x1000000000 --proc-type=auto ]
EAL: Detected 16 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: Multi-process socket /var/run/.spdk1_unix
EAL: Probing VFIO support...
EAL: WARNING: cpu flags constant_tsc=no nonstop_tsc=no -> using unreliable clock cycles !
Initializing NVMe Controllers
EAL: PCI device 0000:00:03.0 on NUMA socket 0
EAL:   probe driver: 8086:5845 spdk_nvme
Attaching to 0000:00:03.0
nvme_qpair.c: 112:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES (09) sqid:0 cid:95 nsid:0 cdw10:0000000b cdw11:0000001f
nvme_qpair.c: 283:nvme_qpair_print_completion: *NOTICE*: INVALID FIELD (00/02) sqid:0 cid:95 cdw0:0 sqhd:0005 p:1 m:0 dnr:1
nvme_ctrlr.c:1239:nvme_ctrlr_configure_aer: *NOTICE*: nvme_ctrlr_configure_aer failed!
nvme_qpair.c: 112:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) sqid:0 cid:95 nsid:ffffffff cdw10:007f00c0 cdw11:00000000
nvme_qpair.c: 283:nvme_qpair_print_completion: *NOTICE*: INVALID OPCODE (00/01) sqid:0 cid:95 cdw0:0 sqhd:0006 p:1 m:0 dnr:1
nvme_ctrlr.c: 384:nvme_ctrlr_set_intel_support_log_pages: *ERROR*: nvme_ctrlr_cmd_get_log_page failed!
Attached to 0000:00:03.0
Using controller QEMU NVMe Ctrl       (foo1                ) with 1 namespaces.
  Namespace ID: 1 size: 17GB
 NVMe: sector size = 512, number of sectors = 33554432
 NVMe: total capacity = 0.016TB
 called: spdk init 
 alloc io qpair for nvme 
 spdk init: Ok
 total blks = 33554432 
 ipc init 
 rte_socket_id() = 0 
 rte_lcore_id() = 0 
 send ring = 0x10b2558e00, recv ring = 0x10b2538c00
 send ring = 0x10b2518a00, recv ring = 0x10b2538c00
 send ring = 0x10b24f8800, recv ring = 0x10b2538c00
 send ring = 0x10b24d8600, recv ring = 0x10b2538c00
 send ring = 0x10b24b8400, recv ring = 0x10b2538c00
 send ring = 0x10b2498200, recv ring = 0x10b2538c00
 send ring = 0x10b2478000, recv ring = 0x10b2538c00
 send ring = 0x10b2457e00, recv ring = 0x10b2538c00
 send ring = 0x10b2437c00, recv ring = 0x10b2538c00
 send ring = 0x10b2417a00, recv ring = 0x10b2538c00
 send ring = 0x10b23dfe00, recv ring = 0x10b2538c00
 send ring = 0x10b23bfc00, recv ring = 0x10b2538c00
 send ring = 0x10b239fa00, recv ring = 0x10b2538c00
 send ring = 0x10b237f800, recv ring = 0x10b2538c00
 send ring = 0x10b235f600, recv ring = 0x10b2538c00
 send ring = 0x10b233f400, recv ring = 0x10b2538c00
Segmentation fault (core dumped)

Here is the output from gdb:

GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.3) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from mkfs.nvfuse...done.

warning: exec file is newer than core file.
[New LWP 24251]
[New LWP 24252]
[New LWP 24253]
[New LWP 24254]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `./mkfs.nvfuse -f -m'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  rte_mempool_default_cache (mp=<optimized out>, mp=<optimized out>, lcore_id=<optimized out>)
    at /home/shounak/spdk/dpdk/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1080
1080		if (mp->cache_size == 0)
(gdb) bt
#0  rte_mempool_default_cache (mp=<optimized out>, mp=<optimized out>, lcore_id=<optimized out>)
    at /home/shounak/spdk/dpdk/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1080
#1  rte_mempool_get_bulk (n=1, obj_table=0x7ffc53c2cfc0, mp=0x0) at /home/shounak/spdk/dpdk/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1338
#2  rte_mempool_get (obj_p=0x7ffc53c2cfc0, mp=0x0) at /home/shounak/spdk/dpdk/x86_64-native-linuxapp-gcc/include/rte_mempool.h:1365
#3  nvfuse_put_channel_id (ipc_ctx=ipc_ctx@entry=0x7ffc53c2d140, channel_id=channel_id@entry=1) at nvfuse_ipc_ring.c:343
#4  0x000000000040bb4a in nvfuse_ipc_init (ipc_ctx=ipc_ctx@entry=0x7ffc53c2d140) at nvfuse_ipc_ring.c:526
#5  0x0000000000404e4f in nvfuse_configure_spdk (io_manager=io_manager@entry=0x7ffc53c2db60, ipc_ctx=ipc_ctx@entry=0x7ffc53c2d140, cpu_core_mask=<optimized out>, 
    qdepth=qdepth@entry=1024) at nvfuse_api.c:161
#6  0x0000000000403095 in main (argc=<optimized out>, argv=<optimized out>) at mkfs.nvfuse.c:40
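
From frames #0 through #3, rte_mempool_get() is being called with mp = 0x0, so the fault is a NULL mempool pointer dereferenced at mp->cache_size inside rte_mempool_default_cache(); it looks like the pool that nvfuse_put_channel_id() expects was never created in this environment. A small illustrative sketch of that failure mode (not the nvfuse source; the function and pool names are placeholders):

#include <stdio.h>
#include <rte_mempool.h>

/* Illustration only: a lookup-plus-check that reports a missing pool
 * instead of crashing the way frame #0 does. */
static void *get_from_pool_or_warn(const char *pool_name)
{
    struct rte_mempool *mp = rte_mempool_lookup(pool_name);
    void *obj = NULL;

    if (mp == NULL) {
        /* This is the situation in the backtrace: mp == 0x0, and
         * rte_mempool_get() dereferences it inside
         * rte_mempool_default_cache() at mp->cache_size. */
        fprintf(stderr, "mempool '%s' not found\n", pool_name);
        return NULL;
    }
    if (rte_mempool_get(mp, &obj) != 0)  /* pool exists but has no free objects */
        return NULL;
    return obj;
}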

Is there any problem with my setup, or is there anything I am doing wrong? Please help!


xxks-kkk commented on May 30, 2024

@shounak1 I'm kind of curious: your log shows

Using controller QEMU NVMe Ctrl       (foo1                ) with 1 namespaces.

Are you using QEMU to emulate the NVMe device? Which software are you using? I want to try this code, but I don't have an NVMe SSD at hand. Thanks!


yongseokoh commented on May 30, 2024

Hi,

I have been using a 4TB commercial NVMe SSD for development. I believe NVFUSE can work with QEMU, but there have been many modifications to SPDK, and NVFUSE must be modified along with them. Sorry, my research interests are currently shifting to other work; I will focus on this issue as soon as possible.

Thanks


yongseokoh commented on May 30, 2024

Hi,

The hello world example (examples/helloworld/start_helloworld.sh) is currently working, and format and mount operations are included in it. Please refer to the following terminal log.

Thanks

Yongseok

ysoh@ysoh-desktop:~/nvfuse/examples/helloworld$ sudo ./start_helloworld.sh
[sudo] password for ysoh:
INFO[nvfuse_api.c|nvfuse_parse_args():292:cpu-1] appname = helloworld
INFO[nvfuse_api.c|nvfuse_parse_args():293:cpu-1] cpu core mask = 7
INFO[nvfuse_api.c|nvfuse_parse_args():294:cpu-1] qdepth = 512
INFO[nvfuse_api.c|nvfuse_parse_args():295:cpu-1] buffer size = 0 MB
INFO[nvfuse_api.c|nvfuse_parse_args():296:cpu-1] need format = 1
INFO[nvfuse_api.c|nvfuse_parse_args():297:cpu-1] need mount = 1
INFO[nvfuse_api.c|nvfuse_parse_args():298:cpu-1] preallocation = 0
INFO[nvfuse_api.c|nvfuse_parse_args():299:cpu-1] config file = nvme.conf
Starting DPDK 17.08.0 initialization...
[ DPDK EAL parameters: bdevtest -c 7 --file-prefix=spdk_pid5939 ]
EAL: Detected 4 lcore(s)
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
Total cores available: 3
Occupied cpu socket mask is 0x1
reactor.c: 354:_spdk_reactor_run: NOTICE: Reactor started on core 1 on socket 0
reactor.c: 354:_spdk_reactor_run: NOTICE: Reactor started on core 2 on socket 0
reactor.c: 354:_spdk_reactor_run: NOTICE: Reactor started on core 0 on socket 0
copy_engine_ioat.c: 305:copy_engine_ioat_init: NOTICE: Ioat Copy Engine Offload Enabled
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL: probe driver: 1c5c:c161 spdk_nvme
nvme_qpair.c: 112:nvme_admin_qpair_print_command: NOTICE: IDENTIFY (06) sqid:0 cid:63 nsid:2 cdw10:00000000 cdw11:00000000
nvme_qpair.c: 283:nvme_qpair_print_completion: NOTICE: INVALID NAMESPACE OR FORMAT (00/0b) sqid:0 cid:63 cdw0:0 sqhd:0005 p:1 m:0 dnr:0
nvme_qpair.c: 112:nvme_admin_qpair_print_command: NOTICE: IDENTIFY (06) sqid:0 cid:63 nsid:3 cdw10:00000000 cdw11:00000000
nvme_qpair.c: 283:nvme_qpair_print_completion: NOTICE: INVALID NAMESPACE OR FORMAT (00/0b) sqid:0 cid:63 cdw0:0 sqhd:0006 p:1 m:0 dnr:0
nvme_qpair.c: 112:nvme_admin_qpair_print_command: NOTICE: IDENTIFY (06) sqid:0 cid:63 nsid:4 cdw10:00000000 cdw11:00000000
nvme_qpair.c: 283:nvme_qpair_print_completion: NOTICE: INVALID NAMESPACE OR FORMAT (00/0b) sqid:0 cid:63 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
nvme_qpair.c: 112:nvme_admin_qpair_print_command: NOTICE: IDENTIFY (06) sqid:0 cid:63 nsid:5 cdw10:00000000 cdw11:00000000
nvme_qpair.c: 283:nvme_qpair_print_completion: NOTICE: INVALID NAMESPACE OR FORMAT (00/0b) sqid:0 cid:63 cdw0:0 sqhd:0008 p:1 m:0 dnr:0
nvme_qpair.c: 112:nvme_admin_qpair_print_command: NOTICE: IDENTIFY (06) sqid:0 cid:63 nsid:6 cdw10:00000000 cdw11:00000000
nvme_qpair.c: 283:nvme_qpair_print_completion: NOTICE: INVALID NAMESPACE OR FORMAT (00/0b) sqid:0 cid:63 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
nvme_qpair.c: 112:nvme_admin_qpair_print_command: NOTICE: IDENTIFY (06) sqid:0 cid:63 nsid:7 cdw10:00000000 cdw11:00000000
nvme_qpair.c: 283:nvme_qpair_print_completion: NOTICE: INVALID NAMESPACE OR FORMAT (00/0b) sqid:0 cid:63 cdw0:0 sqhd:000a p:1 m:0 dnr:0
nvme_qpair.c: 112:nvme_admin_qpair_print_command: NOTICE: IDENTIFY (06) sqid:0 cid:63 nsid:8 cdw10:00000000 cdw11:00000000
nvme_qpair.c: 283:nvme_qpair_print_completion: NOTICE: INVALID NAMESPACE OR FORMAT (00/0b) sqid:0 cid:63 cdw0:0 sqhd:000b p:1 m:0 dnr:0
allocate event on lcore = 0
allocate event on lcore = 1
allocate event on lcore = 2
bdev name = 0x555555d185a0
blocklen = 512b blockcnt = 6251216896
INFO[nvfuse_api.c|nvfuse_create_handle():346:cpu1] NVMe: sector size = 512, number of sectors = 6251216896
INFO[nvfuse_api.c|nvfuse_create_handle():348:cpu1] NVMe: total capacity = 2.911TB
INFO[nvfuse_mkfs.c|nvfuse_format():401:cpu1]-------------------------------------------------------------------
INFO[nvfuse_mkfs.c|nvfuse_format():402:cpu1] Formatting NVFUSE ...
INFO[nvfuse_mkfs.c|nvfuse_format():403:cpu1] Warning: your data will be removed permanently...
INFO[nvfuse_mkfs.c|nvfuse_format():404:cpu1]--------------------Option------------------------------------------
INFO[nvfuse_mkfs.c|nvfuse_format():428:cpu1] spdk: io_target = 0x7ffc54000980
INFO[nvfuse_mkfs.c|nvfuse_format():434:cpu1] sectors = 6251216896, blocks = 781402112
INFO[nvfuse_mkfs.c|nvfuse_format():442:cpu1] bg size = 128MB
INFO[nvfuse_mkfs.c|nvfuse_format():443:cpu1] num bgs = 23846
INFO[nvfuse_mkfs.c|nvfuse_format_write_bd():255:cpu1]
INFO[nvfuse_mkfs.c|nvfuse_print_bd():197:cpu1] magic = bdbdbdbd bytes
INFO[nvfuse_mkfs.c|nvfuse_print_bd():198:cpu1] inode size = 4096 bytes
INFO[nvfuse_mkfs.c|nvfuse_print_bd():199:cpu1] bd_bg_start = 1
INFO[nvfuse_mkfs.c|nvfuse_print_bd():200:cpu1] bd_ibitmap_start = 2
INFO[nvfuse_mkfs.c|nvfuse_print_bd():201:cpu1] bd_ibitmap_size = 1 blocks
INFO[nvfuse_mkfs.c|nvfuse_print_bd():202:cpu1] bd_dbitmap_start = 3
INFO[nvfuse_mkfs.c|nvfuse_print_bd():203:cpu1] bd_dbitmap_size = 1 blocks
INFO[nvfuse_mkfs.c|nvfuse_print_bd():204:cpu1] itable start = 4
INFO[nvfuse_mkfs.c|nvfuse_print_bd():205:cpu1] itable size = 4096 blocks
INFO[nvfuse_mkfs.c|nvfuse_print_bd():206:cpu1] dtable start = 4100
INFO[nvfuse_mkfs.c|nvfuse_print_bd():207:cpu1] dtable size = 28668 blocks
INFO[nvfuse_mkfs.c|nvfuse_print_bd():208:cpu1] bd end = 32768
INFO[nvfuse_mkfs.c|nvfuse_format_write_bd():257:cpu1]
DEBUG[nvfuse_mkfs.c|nvfuse_alloc_root_inode_direct():131:cpu1] write inode = 0 on 4 block
DEBUG[nvfuse_mkfs.c|nvfuse_alloc_root_inode_direct():131:cpu1] write inode = 1 on 5 block
DEBUG[nvfuse_mkfs.c|nvfuse_alloc_root_inode_direct():131:cpu1] write inode = 2 on 6 block
DEBUG[nvfuse_mkfs.c|nvfuse_alloc_root_inode_direct():131:cpu1] write inode = 3 on 7 block
DEBUG[nvfuse_mkfs.c|nvfuse_alloc_root_inode_direct():131:cpu1] write inode = 4 on 8 block
DEBUG[nvfuse_mkfs.c|nvfuse_alloc_root_inode_direct():131:cpu1] write inode = 5 on 9 block
DEBUG[nvfuse_mkfs.c|nvfuse_alloc_root_inode_direct():131:cpu1] write inode = 6 on 10 block
DEBUG[nvfuse_mkfs.c|nvfuse_alloc_root_inode_direct():131:cpu1] write inode = 7 on 11 block
INFO[nvfuse_mkfs.c|nvfuse_format():469:cpu1] inodes per bg = 4096
INFO[nvfuse_mkfs.c|nvfuse_format():470:cpu1] blocks per bg = 32768
INFO[nvfuse_mkfs.c|nvfuse_format():488:cpu1] NVFUSE writes the superblock on 0 block
INFO[nvfuse_mkfs.c|nvfuse_format():491:cpu1] NVFUSE capability
INFO[nvfuse_mkfs.c|nvfuse_format():492:cpu1] max file size = 4.004TB
INFO[nvfuse_mkfs.c|nvfuse_format():494:cpu1] max files per directory = 7fffffff
INFO[nvfuse_mkfs.c|nvfuse_format():495:cpu1] NVFUSE was formatted successfully. (1.716 sec)
INFO[nvfuse_mkfs.c|nvfuse_format():496:cpu1]-------------------------------------------------------------------
INFO[nvfuse_core.c|nvfuse_mount():1387:cpu1]start
INFO[nvfuse_core.c|nvfuse_mount():1416:cpu1]mempool size for index: 20480
INFO[nvfuse_core.c|nvfuse_mount():1447:cpu1]Allocation of BP_MEMPOOL type = 0 0x7ffc64c7df00
INFO[nvfuse_core.c|nvfuse_mount():1410:cpu1]mempool size for master: 11520
INFO[nvfuse_core.c|nvfuse_mount():1447:cpu1]Allocation of BP_MEMPOOL type = 1 0x7ffc6507df00
INFO[nvfuse_core.c|nvfuse_mount():1428:cpu1]mempool size for key: 11091968
INFO[nvfuse_core.c|nvfuse_mount():1447:cpu1]Allocation of BP_MEMPOOL type = 2 0x7ffc6547df00
INFO[nvfuse_core.c|nvfuse_mount():1434:cpu1]mempool size for value: 5545984
INFO[nvfuse_core.c|nvfuse_mount():1447:cpu1]Allocation of BP_MEMPOOL type = 3 0x7ffc6587df00
INFO[nvfuse_core.c|nvfuse_mount():1422:cpu1]mempool size for pair: 4096
INFO[nvfuse_core.c|nvfuse_mount():1447:cpu1]Allocation of BP_MEMPOOL type = 4 0x7ffc66c7df00
INFO[nvfuse_core.c|nvfuse_mount():1453:cpu1] mempool size for value: 147456
INFO[nvfuse_buffer_cache.c|nvfuse_init_buffer_cache():607:cpu1] mempool for bh head size = 7340032
INFO[nvfuse_buffer_cache.c|nvfuse_init_buffer_cache():622:cpu1] create mempool for bc head size = 436207616
INFO[nvfuse_buffer_cache.c|nvfuse_init_buffer_cache():676:cpu1] Buffer cache size = 3052.352 MB
INFO[nvfuse_buffer_cache.c|nvfuse_init_buffer_cache():682:cpu1] Set Default Buffer Cache = 3052MB
INFO[nvfuse_inode_cache.c|nvfuse_init_ictx_cache():302:cpu1] ictx cache size = 4194304
INFO[nvfuse_core.c|nvfuse_scan_superblock():1952:cpu1] sectors = 6251216896, blocks = 781402112
INFO[nvfuse_core.c|nvfuse_scan_superblock():1978:cpu1]root ino = 2
INFO[nvfuse_core.c|nvfuse_scan_superblock():1979:cpu1]no of sectors = 1956118528
INFO[nvfuse_core.c|nvfuse_scan_superblock():1980:cpu1]no of blocks = 781385728
INFO[nvfuse_core.c|nvfuse_scan_superblock():1981:cpu1]no of used blocks = 0
INFO[nvfuse_core.c|nvfuse_scan_superblock():1982:cpu1]no of inodes per bg = 4096
INFO[nvfuse_core.c|nvfuse_scan_superblock():1983:cpu1]no of blocks per bg = 32768
INFO[nvfuse_core.c|nvfuse_scan_superblock():1984:cpu1]no of free inodes = 97673208
INFO[nvfuse_core.c|nvfuse_scan_superblock():1985:cpu1]no of free blocks = 683617127
INFO[nvfuse_core.c|nvfuse_mount():1591:cpu1] no_of_bgs = 23846
INFO[nvfuse_core.c|nvfuse_mount():1592:cpu1] mempool for bg node size = 572304
INFO[nvfuse_core.c|nvfuse_mount():1694:cpu1] flush worker is disabled.
INFO[nvfuse_core.c|nvfuse_check_flush_dirty():2926:cpu1] Flush complets
INFO[nvfuse_core.c|nvfuse_mount():1720:cpu1] DIRTY_FLUSH_POLICY: DELAY
INFO[nvfuse_core.c|nvfuse_mount():1744:cpu1] NVFUSE has been successfully mounted.
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():491:cpu1] ipc init
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():492:cpu1] rte_socket_id() = 0
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():493:cpu1] rte_lcore_id() = 1
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef25fd900, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef25bd500, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef259d300, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef257d100, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef255cf00, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef253cd00, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef251cb00, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef24fc900, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef24dc700, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef24bc500, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef249c300, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef247c100, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef245bf00, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef243bd00, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef241bb00, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():521:cpu1] send ring = 0x7ffef23fb900, recv ring = 0x7ffef25dd700
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():541:cpu1] IPC init in primary core
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_init():588:cpu1] IPC initialized successfully for Primary Core
Write Buf: Hello World!
Read Buf: Hello World!
INFO[nvfuse_core.c|nvfuse_umount():1769:cpu1] start nvfuse_umount
INFO[nvfuse_core.c|nvfuse_check_flush_dirty():2926:cpu1] Flush complets
INFO[nvfuse_core.c|nvfuse_umount():1795:cpu1] free bp mempool 0 (num = 5)
INFO[nvfuse_core.c|nvfuse_umount():1795:cpu1] free bp mempool 1 (num = 5)
INFO[nvfuse_core.c|nvfuse_umount():1795:cpu1] free bp mempool 2 (num = 5)
INFO[nvfuse_core.c|nvfuse_umount():1795:cpu1] free bp mempool 3 (num = 5)
INFO[nvfuse_core.c|nvfuse_umount():1795:cpu1] free bp mempool 4 (num = 5)
INFO[nvfuse_buffer_cache.c|nvfuse_deinit_buffer_cache():751:cpu1] > buffer cache hit rate = 0.997580
INFO[nvfuse_core.c|nvfuse_umount():1843:cpu1] NVFUSE has been successfully unmounted.
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_exit():596:cpu1] ipc deinit ...
INFO[nvfuse_ipc_ring.c|nvfuse_ipc_exit():598:cpu1] Release channel = 0


maxwellxxx commented on May 30, 2024

Hello! Are there any user guides, such as an example of nvme.conf?
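
For what it's worth, the helloworld log above prints "config file = nvme.conf" during SPDK initialization, and the SPDK of that era (v18.07-pre appears earlier in this thread) used a legacy INI-style configuration format, so nvme.conf presumably follows it. A minimal sketch under that assumption, with the PCI address taken from the log above (replace it with your own controller's address, e.g. from lspci):

[Nvme]
  TransportID "trtype:PCIe traddr:0000:01:00.0" Nvme0

If the repository ships a sample nvme.conf next to the helloworld script, that file would be the authoritative reference.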

