
cloudxy's Introduction

Cloudxy

Automatically exported from https://code.google.com/p/cloudxy/

Cloudxy is a generic, open-source platform that provides adjustable compute capacity in the cloud: you can scale capacity as your computing requirements change, and you can recover a virtual machine from any snapshot point when a failure occurs.

Cloudxy is composed of HLFS (an HDFS-based log-structured file system) and ECMS (an Elastic Cloud Management System).

HLFS

The HLFS subsystem (strictly speaking, a block-level storage system rather than a file system) is a distributed VM image store for ECMS. It provides highly available block-level storage volumes that can be attached to Xen virtual machines through its tapdisk driver. A similar KVM-oriented project is Sheepdog, but the two have different architectures.

Compared with Sheepdog, HLFS currently offers greater scalability and reliability, since it stands on the shoulders of the Hadoop Distributed File System (HDFS). HLFS also supports advanced volume-management features such as snapshots (including snapshot trees), cloning, thin provisioning, and caching.

The main ideas behind HLFS are:

Use the log-structured file system (LFS) approach to build an online image storage system on top of HDFS, inheriting HDFS's reliability and scalability.
The LFS approach lets the storage system support random access to online images.
The LFS approach also makes the storage system more efficient and makes snapshots easy to take.

ECMS

The ECMS subsystem is a virtual-machine management system for HLFS storage environments. Its current focus is smart scheduling and life-cycle management of virtual machines.

The longer-term goal for ECMS is to build a virtual IDC, which will include developing a virtual-resource definition language and visual management of virtual resources. We want users to be able to define their own virtual resources (for example, a single-instance MySQL service, a master-slave MySQL service, or anything else) and to instantiate, reuse, and deploy them.

Authors: Hua Kang [email protected] and Weiwei Jia [email protected]

cloudxy's People

Contributors

harrywei


cloudxy's Issues

fedora16 loop device number

What steps will reproduce the problem?
1. Try to raise the maximum number of loop devices.

What is the expected output? What do you see instead?
The default number of loop devices in Fedora is 8. If I want to increase the
number of loop devices, which configuration file do I need to edit?
There is no /etc/modprobe.d/aliases.conf or /etc/modprobe.conf.

What version of the product are you using? On what operating system?
Linux fedora 3.1.0-7.fc16.x86_64

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 8 May 2012 at 5:21
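A likely fix (an assumption based on how the loop driver works, not from the original thread): when loop is built as a module, the device count is the max_loop module parameter, set from a modprobe.d options file; the file name below is arbitrary. If loop is built into the kernel instead, pass max_loop=64 on the kernel command line.

```
# /etc/modprobe.d/loop.conf -- raise the number of loop devices
options loop max_loop=64
```

After unloading and reloading the module (or rebooting), /dev/loop0 through /dev/loop63 should exist.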

Install QEMU errors

What steps will reproduce the problem?
1. git clone git://git.qemu.org/qemu.git
2. cd qemu
3. cp ../hlfs/patches/hlfs_driver_for_qemu.patch ./ 
4. patch -p1 < hlfs_driver_for_qemu.patch
5. Modify the hard-coded paths
6. ./configure 

What is the expected output? What do you see instead?
Expected output:
No "not found" packages.

See instead:
jiawei@jiawei-laptop:~/workshop3/qemu$ ./configure
Package ncurses was not found in the pkg-config search path.
Perhaps you should add the directory containing `ncurses.pc'
to the PKG_CONFIG_PATH environment variable
No package 'ncurses' found
Install prefix    /usr/local
BIOS directory    /usr/local/share/qemu
binary directory  /usr/local/bin
library directory /usr/local/lib
libexec directory /usr/local/libexec
include directory /usr/local/include
config directory  /usr/local/etc
local state directory   /usr/local/var
Manual directory  /usr/local/share/man
ELF interp prefix /usr/gnemul/qemu-%M
Source path       /home/jiawei/workshop3/qemu
C compiler        cc
Host C compiler   cc
Objective-C compiler cc
CFLAGS            -O2 -D_FORTIFY_SOURCE=2 
-I/home/jiawei/workshop3/hlfs/3part/log/include 
-I/home/jiawei/workshop3/hlfs/src/include -I/usr/include/glib-2.0 
-I/usr/lib/glib-2.0/include
QEMU_CFLAGS       -Werror -fPIE -DPIE -m32 -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 
-D_LARGEFILE_SOURCE -Wstrict-prototypes -Wredundant-decls -Wall -Wundef 
-Wwrite-strings -Wmissing-prototypes -fno-strict-aliasing  
-fstack-protector-all -Wendif-labels -Wmissing-include-dirs -Wempty-body 
-Wnested-externs -Wformat-security -Wformat-y2k -Winit-self 
-Wignored-qualifiers -Wold-style-declaration -Wold-style-definition 
-Wtype-limits  -I/usr/include/libpng12   -I/usr/include/pixman-1  
LDFLAGS           -Wl,--warn-common -Wl,-z,relro -Wl,-z,now -pie -m32 -g 
make              make
install           install
python            python
smbd              /usr/sbin/smbd
host CPU          i386
host big endian   no
target list       i386-softmmu x86_64-softmmu alpha-softmmu arm-softmmu 
cris-softmmu lm32-softmmu m68k-softmmu microblaze-softmmu microblazeel-softmmu 
mips-softmmu mipsel-softmmu mips64-softmmu mips64el-softmmu or32-softmmu 
ppc-softmmu ppcemb-softmmu ppc64-softmmu sh4-softmmu sh4eb-softmmu 
sparc-softmmu sparc64-softmmu s390x-softmmu xtensa-softmmu xtensaeb-softmmu 
unicore32-softmmu i386-linux-user x86_64-linux-user alpha-linux-user 
arm-linux-user armeb-linux-user cris-linux-user m68k-linux-user 
microblaze-linux-user microblazeel-linux-user mips-linux-user mipsel-linux-user 
or32-linux-user ppc-linux-user ppc64-linux-user ppc64abi32-linux-user 
sh4-linux-user sh4eb-linux-user sparc-linux-user sparc64-linux-user 
sparc32plus-linux-user unicore32-linux-user s390x-linux-user 
tcg debug enabled no
gprof enabled     no
sparse enabled    no
strip binaries    yes
profiler          no
static build      no
-Werror enabled   yes
pixman            system
SDL support       no
curses support    yes
curl support      no
mingw32 support   no
Audio drivers     oss
Extra audio cards ac97 es1370 sb16 hda
Block whitelist   
Mixer emulation   no
VirtFS support    no
VNC support       yes
VNC TLS support   no
VNC SASL support  no
VNC JPEG support  yes
VNC PNG support   yes
xen support       no
brlapi support    no
bluez  support    no
Documentation     no
NPTL support      yes
GUEST_BASE        yes
PIE               yes
vde support       no
Linux AIO support no
ATTR/XATTR support yes
Install blobs     yes
KVM support       yes
TCG interpreter   no
fdt support       no
preadv support    yes
fdatasync         
madvise           yes
posix_madvise     yes
sigev_thread_id   yes
uuid support      no
libcap-ng support no
vhost-net support yes
Trace backend     nop
Trace output file trace-<pid>
spice support     no (/)
rbd support       no
xfsctl support    no
nss used          no
usb net redir     no
OpenGL support    yes
libiscsi support  no
build guest agent yes
seccomp support   no
coroutine backend ucontext
GlusterFS support no
HLFS support        no
virtio-blk-data-plane no
gcov              gcov
gcov enabled      no
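The failure above is pkg-config not finding ncurses.pc, exactly as the message says. A hedged sketch of the two usual fixes (package names and the pkgconfig path below are assumptions that vary by distro):

```shell
# 1) Install the ncurses development package, e.g.:
#      Debian/Ubuntu: sudo apt-get install libncurses5-dev
#      Fedora:        sudo yum install ncurses-devel
# 2) Or, if ncurses.pc is installed in a non-default location, tell
#    pkg-config where to search (example path is an assumption):
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH
# Verify what pkg-config can now see (guarded so it works without ncurses):
command -v pkg-config >/dev/null && pkg-config --exists ncurses \
    && echo "ncurses found" || echo "ncurses still missing"
```

Then re-run ./configure.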



Original issue reported on code.google.com by [email protected] on 21 Jan 2013 at 8:49

Install QEMU errors

What steps will reproduce the problem?
1. git clone git://git.qemu.org/qemu.git
2. cd qemu
3. cp ../hlfs/patches/hlfs_driver_for_qemu.patch ./ 
4. patch -p1 < hlfs_driver_for_qemu.patch
5. Modify the hard-coded paths
6. ./configure 
7. make (no errors)

What is the expected output? What do you see instead?
Expected output:
make completes without errors.

See instead:
[snipped]
  CC    block/hlfs.o
block/hlfs.c:16:24: error: qemu-error.h: No such file or directory
block/hlfs.c:17:25: error: qemu_socket.h: No such file or directory
block/hlfs.c:18:23: error: block_int.h: No such file or directory
block/hlfs.c:19:20: error: bitops.h: No such file or directory
block/hlfs.c: In function ‘hlbs_open’:
block/hlfs.c:89: error: dereferencing pointer to incomplete type
block/hlfs.c:110: error: dereferencing pointer to incomplete type
block/hlfs.c: At top level:
block/hlfs.c:120: error: expected declaration specifiers or ‘...’ before 
‘QEMUOptionParameter’
cc1: warnings being treated as errors
block/hlfs.c: In function ‘hlbs_create’:
block/hlfs.c:137: error: implicit declaration of function ‘error_report’
block/hlfs.c:137: error: nested extern declaration of ‘error_report’
block/hlfs.c:141: error: ‘options’ undeclared (first use in this function)
block/hlfs.c:141: error: (Each undeclared identifier is reported only once
block/hlfs.c:141: error: for each function it appears in.)
block/hlfs.c:142: error: ‘BLOCK_OPT_SIZE’ undeclared (first use in this 
function)
block/hlfs.c:142: error: left-hand operand of comma expression has no effect
block/hlfs.c:142: error: left-hand operand of comma expression has no effect
block/hlfs.c:142: error: left-hand operand of comma expression has no effect
block/hlfs.c:142: error: left-hand operand of comma expression has no effect
block/hlfs.c:144: error: ‘BLOCK_OPT_BACKING_FILE’ undeclared (first use in 
this function)
block/hlfs.c:144: error: left-hand operand of comma expression has no effect
block/hlfs.c:144: error: left-hand operand of comma expression has no effect
block/hlfs.c:144: error: left-hand operand of comma expression has no effect
block/hlfs.c:144: error: left-hand operand of comma expression has no effect
block/hlfs.c:146: error: ‘BLOCK_OPT_PREALLOC’ undeclared (first use in this 
function)
block/hlfs.c:146: error: left-hand operand of comma expression has no effect
block/hlfs.c:146: error: left-hand operand of comma expression has no effect
block/hlfs.c:146: error: left-hand operand of comma expression has no effect
block/hlfs.c:146: error: left-hand operand of comma expression has no effect
block/hlfs.c:147: error: left-hand operand of comma expression has no effect
block/hlfs.c:147: error: value computed is not used
block/hlfs.c:147: error: left-hand operand of comma expression has no effect
block/hlfs.c:149: error: left-hand operand of comma expression has no effect
block/hlfs.c:149: error: value computed is not used
block/hlfs.c:149: error: left-hand operand of comma expression has no effect
block/hlfs.c:264: error: implicit declaration of function 
‘g_key_file_set_uint64’
block/hlfs.c:264: error: nested extern declaration of 
‘g_key_file_set_uint64’
block/hlfs.c:310: error: implicit declaration of function ‘parse_from_uri’
block/hlfs.c:310: error: nested extern declaration of ‘parse_from_uri’
block/hlfs.c: In function ‘hlbs_close’:
block/hlfs.c:333: error: dereferencing pointer to incomplete type
block/hlfs.c: In function ‘hlbs_getlength’:
block/hlfs.c:342: error: dereferencing pointer to incomplete type
block/hlfs.c: In function ‘hlbs_get_allocated_file_size’:
block/hlfs.c:348: error: dereferencing pointer to incomplete type
block/hlfs.c: In function ‘hlbs_write’:
block/hlfs.c:356: error: dereferencing pointer to incomplete type
block/hlfs.c: In function ‘hlbs_read’:
block/hlfs.c:367: error: dereferencing pointer to incomplete type
block/hlfs.c: In function ‘hlbs_flush’:
block/hlfs.c:377: error: dereferencing pointer to incomplete type
block/hlfs.c: At top level:
block/hlfs.c:382: error: expected declaration specifiers or ‘...’ before 
‘QEMUSnapshotInfo’
block/hlfs.c: In function ‘hlbs_snapshot_create’:
block/hlfs.c:385: error: dereferencing pointer to incomplete type
block/hlfs.c:386: error: ‘sn_info’ undeclared (first use in this function)
block/hlfs.c: In function ‘hlbs_snapshot_goto’:
block/hlfs.c:396: error: dereferencing pointer to incomplete type
block/hlfs.c:412: error: dereferencing pointer to incomplete type
block/hlfs.c: In function ‘hlbs_snapshot_delete’:
block/hlfs.c:425: error: dereferencing pointer to incomplete type
block/hlfs.c: At top level:
block/hlfs.c:431: error: expected declaration specifiers or ‘...’ before 
‘QEMUSnapshotInfo’
block/hlfs.c: In function ‘hlbs_snapshot_list’:
block/hlfs.c:433: error: dereferencing pointer to incomplete type
block/hlfs.c:438: error: ‘QEMUSnapshotInfo’ undeclared (first use in this 
function)
block/hlfs.c:438: error: ‘sn_tab’ undeclared (first use in this function)
block/hlfs.c:453: error: ‘psn_tab’ undeclared (first use in this function)
block/hlfs.c: At top level:
block/hlfs.c:460: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or 
‘__attribute__’ before ‘hlbs_create_options’
block/hlfs.c:476: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or 
‘__attribute__’ before ‘bdrv_hlbs’
block/hlfs.c: In function ‘bdrv_hlbs_init’:
block/hlfs.c:499: error: implicit declaration of function ‘bdrv_register’
block/hlfs.c:499: error: nested extern declaration of ‘bdrv_register’
block/hlfs.c:499: error: ‘bdrv_hlbs’ undeclared (first use in this function)
make: *** [block/hlfs.o] Error 1
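The missing headers suggest a QEMU/patch version mismatch rather than a broken patch: qemu-error.h, qemu_socket.h, block_int.h and bitops.h were moved or renamed in later QEMU trees. Pinning the checkout to the release the patch targets, as later reports in this tracker do with v1.3.0, should avoid these errors. A sketch, assuming an existing qemu checkout with the patch already copied in (the patch path is taken from the steps above):

```shell
# Run inside the qemu checkout; guarded so it is a no-op anywhere else.
if [ -d .git ] && git rev-parse -q --verify v1.3.0 >/dev/null 2>&1; then
    git reset --hard v1.3.0                # match the patch's header layout
    git apply hlfs_driver_for_qemu.patch
else
    echo "not a qemu checkout with tag v1.3.0; skipping"
fi
```

After pinning, re-run ./configure and make.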



Original issue reported on code.google.com by [email protected] on 21 Jan 2013 at 9:37

[BUG] unit test for hlfs_take_snapshot gets incorrect results

What steps will reproduce the problem?
0. mkdir /tmp/testenv && mkdir /tmp/unittest
1. cd snapshot/build && cmake ../src && ./build_local.sh
2. cd ../src/snapshot/unittest/build/  &&  cmake ..
3. ./test
4. cd /tmp/testenv/testfs && cat snapshot.txt

[NOTE] This is in our snapshot branch, so you must download all of its sources.

What is the expected output? What do you see instead?
expected output
================
+snapshot0#18446744071634696256#8240#
+ #18446744071634696300#16660#
++#18446744071634696345#25080#
+##@#18446744071634696380#33500#
+..#18446744071634696424#41920#
+ **#18446744071634696469#50340#
+1234#18446744071634696524#58760#

see instead
============
+snapshot#18446744071634696256#8240#
+#18446744071634696300#16660#
+#18446744071634696345#25080#
+###18446744071634696380#33500#
+.#18446744071634696424#41920#
+ *#18446744071634696469#50340#
+123#18446744071634696524#58760#


Please use labels and text to provide additional information.
Please see the snapshot branch for details.


Original issue reported on code.google.com by [email protected] on 28 Dec 2011 at 11:17

After installing QEMU, we cannot use 'qemu-img' with the hlfs driver

What steps will reproduce the problem?
1. git clone git://git.qemu.org/qemu.git
2. cd qemu
3. git reset --hard v1.3.0
4. cp ../hlfs/patches/hlfs_driver_for_qemu.patch ./ 
5. git apply hlfs_driver_for_qemu.patch
6. Modify the hard-coded paths
7. ./configure 
8. make
9. sudo make install
10. qemu-img create -f hlfs hlfs:local:///tmp/testenv/testfs 10G 

What is the expected output? What do you see instead?
Expected output:
A 10G HLFS block device is created.

See instead:
$ qemu-img create -f hlfs hlfs:local:///tmp/testenv/testfs 10G
qemu-img: Unknown file format 'hlfs'


NOTE:
If we change directory into qemu and execute
"./qemu-img create -f hlfs hlfs:local:///tmp/testenv/testfs 10G", we get the
right answer, as follows.

$ ./qemu-img create -f hlfs hlfs:local:///tmp/testenv/testfs 10G
** Message: enter func bdrv_hlbs_init
** Message: leave func bdrv_hlbs_init
Formatting 'hlfs:local:///tmp/testenv/testfs', fmt=hlfs size=10737418240 
** Message: enter func hlbs_create
** Message: enter func parse_vdiname
** Message: 999 filename is local:///tmp/testenv/testfs
** Message: leave func parse_vdiname
enter func init_storage_handler
loc [fs:testfs], 

uri:local:///tmp/testenv/testfs,head:local,dir:/tmp/testenv,fsname:testfs,hostna
me:default,port:0,user:kanghua
leave func init_storage_handler
enter func deinit_storage_handler
disconnect succ
leave func deinit_storage_handler
** Message: leave func hlbs_create
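Since the in-tree ./qemu-img works while the installed qemu-img reports "Unknown file format", the shell is almost certainly resolving a different, older qemu-img on PATH (or a stale cached lookup). A sketch of how to check, assuming the default /usr/local install prefix:

```shell
# Which binary does the shell actually run?
command -v qemu-img || echo "qemu-img not on PATH"
# After `sudo make install`, clear the shell's cached command lookups:
hash -r 2>/dev/null || true
# Make sure the install prefix comes first on PATH (prefix is an assumption):
export PATH=/usr/local/bin:$PATH
```

If an older distro-packaged qemu-img sits earlier on PATH, either remove it or always invoke the freshly built binary by full path.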



Original issue reported on code.google.com by [email protected] on 1 Feb 2013 at 12:29

Hard-coded paths when using system() for the unit tests' setup and teardown

Bug description
=======
In our unit tests we need setup (initialization) and teardown (cleanup)
steps. At first we wrote all of this from scratch ourselves, but to reduce
coupling (a core concern of unit testing) we now use system() calls to run
existing apps, either our own (such as mkfs.hlfs) or the system's, for setup
and teardown, which is also convenient. This raises a question: when you
call system(), how do you determine the app's path? (Here we mean the case
where our apps are not installed on the machine.)

Bug analysis (using test_hlfs_take_snapshot.c as an example)
====================================
[snip]

static void 
hlfs_take_snapshot_setup(Fixture *fixture, const void *data) {
    const char *test_dir = (const char *)data;
    g_print("test env dir is %s\n", test_dir);
    char *fs_dir = g_build_filename(test_dir, "testfs", NULL);
//  g_assert(g_mkdir(fs_dir, 0700) == 0);
    char *uri = g_malloc0(128);
    g_assert(uri != NULL);
    snprintf(uri, 128, "%s%s", "local://", fs_dir);
//  char *uri = g_build_path(tmp, fs_dir, NULL);
    g_print("uri is %s\n", uri);
    int status;
    char cmd[256];
    memset(cmd, 0, sizeof(cmd));
    sprintf(cmd, "%s %s %s %s %d %s %d %s %d", "../../../../output/bin/mkfs.hlfs", 
                                "-u", uri,
                                "-b", 8192,
                                "-s", 67108864,
                                "-m", 1024);
    g_message("cmd is [%s]", cmd);
    status = system(cmd);
    fixture->uri = uri;
    g_print("fixture->uri is %s\n", fixture->uri);
    fixture->ctrl = init_hlfs(fixture->uri);
    g_assert(fixture->ctrl != NULL);
    int ret = hlfs_open(fixture->ctrl, 1);
    g_assert(ret == 0);
//  g_key_file_free(sb_keyfile);
//  g_free(sb_file_path);
    g_free(fs_dir);
    return ;
}
[snip]
static void 
hlfs_take_snapshot_tear_down(Fixture *fixture, const void *data) {
    const char *test_dir = (const char *) data;
    g_print("clean dir path: %s\n", test_dir);
    char *fs_dir = g_build_filename(test_dir, "testfs", NULL);
    int status;
    char cmd[256];
    memset(cmd, 0, sizeof(cmd));
    sprintf(cmd, "%s %s %s", "rm", "-r", fs_dir);
    g_message("cmd is [%s]", cmd);
    status = system(cmd);
    g_free(fs_dir);
    g_free(fixture->uri);
    hlfs_close(fixture->ctrl);
    deinit_hlfs(fixture->ctrl);
    return;
}
[snip]

This test case uses a hard-coded path, which is bad and bound to cause
problems, as here:
sprintf(cmd, "%s %s %s %s %d %s %d %s %d", 
"../../../../output/bin/mkfs.hlfs", 
                                "-u", uri,
                                "-b", 8192,
                                "-s", 67108864,
                                "-m", 1024);
If the tester's mkfs.hlfs is not at ../../../../output/bin/mkfs.hlfs, it
breaks.


Bug solutions
============
Two options:
1. Install all apps: install everything we build into the system
   directories, so they can be run as conveniently as system commands.
2. Add the directory of our built apps to the PATH environment variable.

I prefer option 1 and will fix it that way; if you have other opinions,
feel free to send them over for discussion ;-) 
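The second option (putting the build output directory on PATH) can be sketched as follows; OUTPUT_BIN is a hypothetical location for the build tree's binaries, adjust it to your checkout:

```shell
# Put the build output directory on PATH so the unit tests can invoke
# mkfs.hlfs without a hard-coded relative path.
OUTPUT_BIN="$HOME/hlfs/output/bin"          # assumed location
export PATH="$OUTPUT_BIN:$PATH"
# The fixture's system() command then shrinks to:
#   mkfs.hlfs -u "$uri" -b 8192 -s 67108864 -m 1024
```

This keeps the test source free of build-tree layout assumptions, at the cost of requiring the environment to be set up before running the tests.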


Advisors: Chen Lijun, Kang Hua
Tester: Weiwei Jia
Follow-up owner: Weiwei Jia


Original issue reported on code.google.com by [email protected] on 1 Jan 2012 at 5:42

In HLFS hdfs mode, we cannot boot a VM correctly

What steps will reproduce the problem?
1. git clone git://git.qemu.org/qemu.git
2. cd qemu
3. git reset --hard v1.3.0
4. wget 
http://cloudxy.googlecode.com/svn/trunk/hlfs/patches/hlfs_driver_for_qemu_1.3.0.
patch 
5. git apply hlfs_driver_for_qemu_1.3.0.patch
6. Modify the hard-coded paths
7. ./configure 
8. make
9. sudo make install
10. wget http://cloudxy.googlecode.com/files/linux-0.2.img.zip
11. unzip linux-0.2.img.zip 
12. hadoop fs -mkdir /tmp/testenv
13. qemu-img convert linux-0.2.img  hlfs:hdfs:///tmp/testenv/testfs
14. qemu-system-x86_64 -m 512 -drive file=hlfs:hdfs:///tmp/testenv/testfs

What is the expected output? What do you see instead?
Expected output:
The base Linux OS image is converted and boots correctly.

See instead:
[...]
read block from seg:0#9915708 size:8192
read len 1024
** Message: leave func hlbs_read
** Message: enter func hlbs_read
Hlfs Read Req 
pos:10148864,read_len:4096,last_segno:0,last_offset:18168213,cur_file_len:194001
92
read offset:10148864,read len:4096
need to read muti block
need to read first block
--Entering func dbcache_query_block
block_no 1238 will be queried
NO item in hash table
not find in cache!
storage address:9915708
enter func read_block_fast
offset :9915708,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#9915708 size:8192
fist offset:1024
start db: 1239 end db: 1239
need to read last block
--Entering func dbcache_query_block
block_no 1239 will be queried
NO item in hash table
not find in cache!
storage address:9923900
enter func read_block_fast
offset :9923900,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#9923900 size:8192
leave func hlfs_read
** Message: leave func hlbs_read
** Message: enter func hlbs_read
Hlfs Read Req 
pos:10152960,read_len:32768,last_segno:0,last_offset:18168213,cur_file_len:19400
192
read offset:10152960,read len:32768
need to read muti block
need to read first block
--Entering func dbcache_query_block
block_no 1239 will be queried
NO item in hash table
not find in cache!
storage address:9923900
enter func read_block_fast
offset :9923900,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#9923900 size:8192
fist offset:5120
start db: 1240 end db: 1243
--Entering func dbcache_query_block
block_no 1240 will be queried
NO item in hash table
not find in cache!
storage address:9932092
enter func read_block_fast
offset :9932092,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#9932092 size:8192
offset: 13312
--Entering func dbcache_query_block
block_no 1241 will be queried
NO item in hash table
not find in cache!
storage address:9940284
enter func read_block_fast
offset :9940284,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#9940284 size:8192
offset: 21504
--Entering func dbcache_query_block
block_no 1242 will be queried
NO item in hash table
not find in cache!
storage address:9948476
enter func read_block_fast
offset :9948476,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#9948476 size:8192
offset: 29696
need to read last block
--Entering func dbcache_query_block
block_no 1243 will be queried
NO item in hash table
not find in cache!
storage address:9973229
enter func read_block_fast
offset :9973229,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#9973229 size:8192
leave func hlfs_read
** Message: leave func hlbs_read
** Message: enter func hlbs_read
Hlfs Read Req 
pos:10185728,read_len:65536,last_segno:0,last_offset:18168213,cur_file_len:19400
192
read offset:10185728,read len:65536
need to read muti block
need to read first block
--Entering func dbcache_query_block
block_no 1243 will be queried
NO item in hash table
not find in cache!
storage address:9973229
enter func read_block_fast
offset :9973229,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#9973229 size:8192
fist offset:5120
start db: 1244 end db: 1251
--Entering func dbcache_query_block
block_no 1244 will be queried
NO item in hash table
not find in cache!
storage address:9981421
enter func read_block_fast
offset :9981421,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#9981421 size:8192
offset: 13312
--Entering func dbcache_query_block
block_no 1245 will be queried
NO item in hash table
not find in cache!
storage address:9989613
enter func read_block_fast
offset :9989613,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#9989613 size:8192
offset: 21504
--Entering func dbcache_query_block
block_no 1246 will be queried
NO item in hash table
not find in cache!
storage address:9997805
enter func read_block_fast
offset :9997805,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#9997805 size:8192
offset: 29696
--Entering func dbcache_query_block
block_no 1247 will be queried
NO item in hash table
not find in cache!
storage address:10005997
enter func read_block_fast
offset :10005997,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#10005997 size:8192
offset: 37888
--Entering func dbcache_query_block
block_no 1248 will be queried
NO item in hash table
not find in cache!
storage address:10014189
enter func read_block_fast
offset :10014189,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#10014189 size:8192
offset: 46080
--Entering func dbcache_query_block
block_no 1249 will be queried
NO item in hash table
not find in cache!
storage address:10022381
enter func read_block_fast
offset :10022381,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#10022381 size:8192
offset: 54272
--Entering func dbcache_query_block
block_no 1250 will be queried
NO item in hash table
not find in cache!
storage address:10030573
enter func read_block_fast
offset :10030573,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#10030573 size:8192
offset: 62464
need to read last block
--Entering func dbcache_query_block
block_no 1251 will be queried
NO item in hash table
not find in cache!
storage address:10038765
enter func read_block_fast
offset :10038765,segno:0,last_offset:18168213,last_rsegfile_offset:18168213
using pre open read file handler
read block from seg:0#10038765 size:8192
leave func hlfs_read
** Message: leave func hlbs_read
 we should do clean in silent period ;access timestamp:1360340274185,cur timestamp:1360340275491
 we should do clean in silent period ;access timestamp:1360340274185,cur timestamp:1360340276491
 time wait res for cond is :0 !
--total dirty block:1,oldest block no:0--
--blocks_count:1,buff_len:8192--
ib_amount we need 0 ibs
ib_amount we need 0 ibs
 db_cur_no:0 db_offset:25
 is level1 -- db_cur_no:0 db_offset:25
COMPRESSED: db_offset:8217
COMPRESSED: ib_offset:8217
to update inode ...
last offset:18168213 , last segno:0 log head len:25 iboffset:8217
inode address's offset 18176430 , give it 18176430
to fill log header ...
ib_amount we need 0 ibs
enter func update_inode_index
 is level1 -- db_cur_no:0 db_offset:25
-----dbno:0,idx:0, blocks:18168238,db_offset:25
log size:8369,log header:25,inode:136,inode map:16,dbnum:1,ibnum:0
Exception in thread "Thread-4173" org.apache.hadoop.ipc.RemoteException: 
java.io.IOException: Append is not supported. Please see the dfs.support.append 
configuration parameter.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1455)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:718)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1439)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1435)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1278)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1433)

    at org.apache.hadoop.ipc.Client.call(Client.java:1150)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy0.append(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy0.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:799)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:788)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:175)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:702)
Call to 
org.apache.hadoop.conf.FileSystem::append((Lorg/apache/hadoop/fs/Path;)Lorg/apac
he/hadoop/fs/FSDataOutputStream;) failed!
**
ERROR:/home/jiawei/workshop3/hlfs/src/logger/segfile_handler_optmize.c:122:prev_
open_wsegfile: assertion failed: (0)
Aborted
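The RemoteException above says it outright: this Hadoop release ships with append support disabled, and HLFS appends to its segment files. A sketch of the property to add to hdfs-site.xml (file location varies by installation), followed by an HDFS restart:

```xml
<!-- hdfs-site.xml: enable the append support required by HLFS's log writes -->
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```

Note that on this Hadoop generation append was considered experimental, which is presumably why it is off by default.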

Original issue reported on code.google.com by [email protected] on 8 Feb 2013 at 4:38

QEMU can't boot a system with an HLFS disk attached when booting from CD-ROM

What steps will reproduce the problem?

First: hlfs is the new trunk, r1641.

01: cd qemu
02: git reset --hard v1.3.0
03: wget -c 
http://cloudxy.googlecode.com/svn/branches/hlfs/person/harry/hlfs/patches/hlfs_d
river_for_qemu.patch
04: git apply patches/hlfs_driver_for_qemu.patch
05: sudo apt-get install libsdl1.2-dev
06: ./configure --enable-kvm --enable-debug --enable-hlfs --enable-sdl
07: make -j8 && sudo make install
08: mkdir /tmp/testenv
09: ./qemu-img create -f hlfs hlfs:local:///tmp/testenv/testfs 10G
10: cd qemu/x86_64-softmmu$ 
11: ./qemu-system-x86_64 -hda hlfs:local:///tmp/testenv/testfs  -cdrom 
/home/kanghua/ubuntu-12.04-mini.iso -boot d -m 512 -no-acpi

output:

enter func init_storage_handler
loc [fs:testfs], 

uri:local:///tmp/testenv/testfs,head:local,dir:/tmp/testenv,fsname:testfs,hostna
me:default,port:0,user:kanghua
leave func init_storage_handler
enter func init_from_superblock
SEGMENT_SIZE:67108864,HBLOCK_SIZE:8192
father uri:(null)
enter func get_cur_latest_segment_info
how much file :3

7777 file:alive_snapshot.txt,size:43,time:1359795453

7777 file:snapshot.txt,size:60,time:1359797807

7777 file:superblock,size:115,time:1359794584

7777 file:,size:0,time:0

leave func get_cur_latest_segment_info
Raw Hlfs Ctrl Init Over ! 
uri:local:///tmp/testenv/testfs,max_fs_size:10240,seg_size:67108864,block_size:8
192,last_segno:0,last_offset:0,start_segno:0,io_nonactive_period:10

(process:6963): GLib-ERROR **: /build/buildd/glib2.0-2.32.3/./glib/gmem.c:165: 
failed to allocate 1688849860263936 bytes
enter func seg_clean_task
Trace/breakpoint trap (core dumped)


another patch

01: cd qemu
02: make clean
03: rm block/hlfs.c 
04: git reset --hard v1.3.0
05: wget -c git 
http://cloudxy.googlecode.com/svn/trunk/hlfs/patches/hlfs_driver_for_qemu_1.3.0.
patch
06: git apply hlfs_driver_for_qemu_1.3.0.patch
06: ./configure --enable-kvm --enable-debug --enable-hlfs --enable-sdl
07: make -j8 && sudo make install
08: ./qemu-img create -f hlfs hlfs:local:///tmp/testenv/testfs 10G
09: cd qemu/x86_64-softmmu
10: ./qemu-system-x86_64 -hda hlfs:local:///tmp/testenv/testfs  -cdrom 
/home/kanghua/ubuntu-12.04-mini.iso -boot d -m 512 -no-acpi

output:

enter func init_storage_handler
loc [fs:testfs], 

uri:local:///tmp/testenv/testfs,head:local,dir:/tmp/testenv,fsname:testfs,hostna
me:default,port:0,user:kanghua
leave func init_storage_handler
enter func init_from_superblock
SEGMENT_SIZE:67108864,HBLOCK_SIZE:8192
father uri:(null)
enter func get_cur_latest_segment_info
how much file :1

7777 file:superblock,size:115,time:1359869795

7777 file:,size:0,time:0

leave func get_cur_latest_segment_info
Raw Hlfs Ctrl Init Over ! 
uri:local:///tmp/testenv/testfs,max_fs_size:10240,seg_size:67108864,block_size:8
192,last_segno:0,last_offset:0,start_segno:0,io_nonactive_period:10
enter func seg_clean_task
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359869853521

(process:15535): GLib-ERROR **: /build/buildd/glib2.0-2.32.3/./glib/gmem.c:165: 
failed to allocate 1688849860263936 bytes
Trace/breakpoint trap (core dumped)


Additionally:

    If I boot from the CD-ROM alone, without attaching the HLFS disk, it boots; but with the HLFS image attached, the pop-up display window hangs on the attached image.

01: ./qemu-system-x86_64 -hda hlfs:local:///tmp/testenv/testfs  -cdrom 
/home/kanghua/ubuntu-12.04-mini.iso -boot d -m 512 -no-acpi


Original issue reported on code.google.com by littlesmartsmart on 3 Feb 2013 at 6:28


a goto statement error in snapshot api

Bug summary
========
Recently, while compiling a C program (one file containing many functions), an error related to the C goto statement appeared. Strangely, the goto statements in the other functions of the same file did not trigger the error (I replaced the goto statements in the failing function and tested; it ran fine). Details follow.

Bug test environment
==========
gcc version
------------------
jiawei@jiawei-laptop:~/workshop15/snapshot/build$ gcc -v
Using built-in specs.
Target: i486-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 4.4.3-4ubuntu5' 
--with-bugurl=file:///usr/share/doc/gcc-4.4/README.Bugs 
--enable-languages=c,c++,fortran,objc,obj-c++ --prefix=/usr --enable-shared 
--enable-multiarch --enable-linker-build-id --with-system-zlib 
--libexecdir=/usr/lib --without-included-gettext --enable-threads=posix 
--with-gxx-include-dir=/usr/include/c++/4.4 --program-suffix=-4.4 --enable-nls 
--enable-clocale=gnu --enable-libstdcxx-debug --enable-plugin --enable-objc-gc 
--enable-targets=all --disable-werror --with-arch-32=i486 --with-tune=generic 
--enable-checking=release --build=i486-linux-gnu --host=i486-linux-gnu 
--target=i486-linux-gnu
Thread model: posix
gcc version 4.4.3 (Ubuntu 4.4.3-4ubuntu5) 

OS details
-----------------------
Distributor ID: Ubuntu
Description:    Ubuntu 10.04.3 LTS
Release:    10.04
Codename:   lucid
Linux jiawei-laptop 2.6.32-37-generic #81-Ubuntu SMP Fri Dec 2 20:35:14 UTC 
2011 i686 GNU/Linux
cmake version 2.8.1

Bug reproduction
========
1. Check out the snapshot branch:
svn checkout http://cloudxy.googlecode.com/svn/trunk/  snapshot
2. Build libhlfs:
cd snapshot/build && cmake ../src && make all

At this point the following error appears:
jiawei@jiawei-laptop:~/workshop15/snapshot/build$ make all
Makefile:103: warning: overriding commands for target "all"
Makefile:69: warning: ignoring old commands for target "all"
Scanning dependencies of target hlfs
[  3%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_stat.c.o
[  7%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_close.c.o
[ 11%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_read.c.o
[ 15%] Building C object CMakeFiles/hlfs.dir/storage/init_hlfs.c.o
[ 19%] Building C object CMakeFiles/hlfs.dir/storage/deinit_hlfs.c.o
[ 23%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_ctrl.c.o
[ 26%] Building C object CMakeFiles/hlfs.dir/storage/log_write_task.c.o
[ 30%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_open.c.o
[ 34%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_write.c.o
[ 38%] Building C object CMakeFiles/hlfs.dir/common/logger.c.o
[ 42%] Building C object CMakeFiles/hlfs.dir/backend/hdfs_storage.c.o
[ 46%] Building C object CMakeFiles/hlfs.dir/backend/local_storage.c.o
[ 50%] Building C object CMakeFiles/hlfs.dir/clean/clean_route.c.o
[ 53%] Building C object CMakeFiles/hlfs.dir/utils/segment_cleaner.c.o
[ 57%] Building C object CMakeFiles/hlfs.dir/utils/storage_helper.c.o
[ 61%] Building C object CMakeFiles/hlfs.dir/utils/misc.c.o
[ 65%] Building C object CMakeFiles/hlfs.dir/utils/address.c.o
[ 69%] Building C object CMakeFiles/hlfs.dir/snapshot/snapshot_helper.c.o
/home/jiawei/workshop15/snapshot/src/snapshot/snapshot_helper.c: In function ‘load_all_ss’:
/home/jiawei/workshop15/snapshot/src/snapshot/snapshot_helper.c:144: error: jump into scope of identifier with variably modified type
make[3]: *** [CMakeFiles/hlfs.dir/snapshot/snapshot_helper.c.o] Error 1
make[2]: *** [CMakeFiles/hlfs.dir/all] Error 2
make[1]: *** [CMakeFiles/all.dir/rule] Error 2
make: *** [all] Error 2


Bug analysis
=========
First, part of the source (all in snapshot_helper.c):
[.....]
int load_all_ss(struct back_storage *storage, GHashTable *ss_hashtable)
{
    int ret = 0;
    int i = 0;
    g_message("%s -- 77 dbg", __func__);
    if (-1 == storage->bs_file_is_exist(storage, SNAPSHOT_FILE)) {
        HLOG_ERROR("snapshot.txt is not exist");
        ret = -1;
        goto out;                // this goto statement triggers the error shown above
    }
    bs_file_info_t *file_info = storage->bs_file_info(storage, SNAPSHOT_FILE);
    if (NULL == file_info) {
        HLOG_ERROR("get snapshot info error!");
        ret = -1;
        goto out;
    }
    uint32_t file_size = file_info->size; 
    g_free(file_info);
    HLOG_DEBUG("file_size : %u", file_size);
    char buf[file_size];
    memset(buf, 0, file_size);
    bs_file_t file = storage->bs_file_open(storage, SNAPSHOT_FILE, BS_READONLY);
    if (file == NULL) {
        HLOG_ERROR("open snapshot.txt error");
        ret = -2;
        goto out;
    }
.......
[snip]
........
    g_strfreev(sss);
    storage->bs_file_close(storage, SNAPSHOT_FILE); 
#if 1
out:
    if (NULL != file) {
        storage->bs_file_close(storage, file);
    }
#endif
    return ret;
}
[.....]

When I replace all of the goto statements in the function above with direct returns, everything works. But strangely, snapshot_helper.c contains other functions that also use goto, and none of them triggers the error. Why? Below is the source of another function from the same file.

[.....]
int dump_snapshot_delmark(struct back_storage *storage, const char *snapshot_file, const char *ssname){
    if(snapshot_file == NULL || ssname == NULL || storage == NULL){
        return -1;
    }
    int ret = 0;
    int len = 0;
    bs_file_t file = NULL;
    if (-1 == storage->bs_file_is_exist(storage, snapshot_file)) {
        HLOG_DEBUG("cp file not exist, create cp file");
        file = storage->bs_file_create(storage,snapshot_file);
        if (NULL == file) {
            HLOG_ERROR("can not create cp file %s", snapshot_file);
            goto out2;
        }
        storage->bs_file_close(storage, file);
    }
    file = storage->bs_file_open(storage, snapshot_file, BS_WRITEABLE);
    if (NULL == file) {
        HLOG_ERROR("can not open ss file %s", snapshot_file);
        goto out2;
    }
    char snapshot_delmark_text[1024];
    memset(snapshot_delmark_text, 0, 1024);
    len = snapshot_delmark2text(ssname, snapshot_delmark_text);
    HLOG_DEBUG("cp text is %s", snapshot_delmark_text);
    if (len != storage->bs_file_append(storage, file, snapshot_delmark_text, len)) {
        HLOG_ERROR("write cp file error, write bytes %d", ret);
        ret = -1;
        goto out2;
    }
out2:
    if (NULL != file) {
        storage->bs_file_close(storage, file);
    }
    return ret;
}
[.....]

This function also contains goto statements, yet when I replace only the gotos in load_all_ss, the build succeeds. In other words, the gotos in dump_snapshot_delmark and the other functions in this file are all legal.

The full source of snapshot_helper.c is available at:
http://cloudxy.googlecode.com/svn/branches/snapshot/src/snapshot/snapshot_helper.c

Note: running the test steps above may require installing some software; see:
http://code.google.com/p/cloudxy/wiki/HlfsUserManual


Bug fix
========
The bug can be fixed quickly and simply: replace all of the goto statements in load_all_ss and the error disappears. But I do not understand the cause of the error, so I hope everyone takes a look and we can work out the essence of the problem together. I also googled the error; someone hit the same one, but I did not fully follow their discussion. For reference:
http://objectmix.com/c/251271-error-undefined-behaviour.html
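
The gcc message actually names the cause: C99 (6.8.6.1) forbids a goto that jumps into the scope of a variably modified type. In load_all_ss, the early `goto out` statements occur before the VLA declaration `char buf[file_size];`, and the `out:` label lies inside that VLA's scope, so each early goto jumps into it. dump_snapshot_delmark only uses fixed-size arrays, which is why its gotos are legal. Below is a minimal sketch of one fix that keeps the gotos by replacing the VLA with heap allocation; `load_demo` is a hypothetical stand-in, not the real HLFS code:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch: mirror load_all_ss's early-exit structure without a VLA.
 * With `char buf[file_size];` (a VLA), the earlier `goto out` would be
 * rejected: it jumps into the scope of a variably modified type.
 * An ordinary pointer plus malloc has no such scope restriction. */
static int load_demo(size_t file_size, int fail_early) {
    int ret = 0;
    char *buf = NULL;            /* declared before any goto */
    if (fail_early) {
        ret = -1;
        goto out;                /* legal: no VLA scope is entered */
    }
    buf = malloc(file_size);
    if (buf == NULL) {
        ret = -2;
        goto out;
    }
    memset(buf, 0, file_size);   /* stand-in for reading snapshot.txt */
out:
    free(buf);                   /* free(NULL) is a no-op */
    return ret;
}
```

The same effect could be had by moving the buffer declaration above the first goto, but the buffer size is only known after the existence checks; heap allocation keeps the original control flow intact.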



Note: I have already fixed this bug by replacing the goto statements. To restore the original behavior, just uncomment the corresponding goto statements in load_all_ss and comment out the matching return statements.


Advisors: 陈莉君, 康华
Tester: 贾威威
Follow-up owner: 贾威威


Original issue reported on code.google.com by [email protected] on 1 Jan 2012 at 4:09

Install QEMU problems

What steps will reproduce the problem?
1. git clone git://git.qemu.org/qemu.git
2. cd qemu
3. git reset --hard v1.3.0
4. cp ../hlfs/patches/hlfs_driver_for_qemu.patch ./ 
5. patch -p1 < hlfs_driver_for_qemu.patch
6. Modify the dead path
7. ./configure 
8. make
9. ldd ./qemu-img

What is the expected output? What do you see instead?
Expected output:
1. No warnings or errors.
2. The lib*.so entries resolve to the paths I expect.

See instead:
1. Some warnings; see the attached file for details.
2. Some lib*.so entries resolve to paths I cannot explain!
jiawei@jiawei-laptop:~/workshop4/qemu$ ldd ./qemu-img 
    linux-gate.so.1 =>  (0x003f0000)
    librt.so.1 => /lib/tls/i686/cmov/librt.so.1 (0x00636000)
    libgthread-2.0.so.0 => /usr/lib/libgthread-2.0.so.0 (0x00781000)
    libglib-2.0.so.0 => /usr/lib/libglib-2.0.so.0 (0x00179000)
    libz.so.1 => /lib/libz.so.1 (0x0053f000)
    libhlfs.so => /usr/lib/libhlfs.so (0x00285000)
    liblog4c.so.3 => not found
    libhdfs.so.0 => not found
    libjvm.so => /usr/lib/libjvm.so (0x00e00000)
    libpthread.so.0 => /lib/tls/i686/cmov/libpthread.so.0 (0x008c5000)
    libc.so.6 => /lib/tls/i686/cmov/libc.so.6 (0x11bbf000)
    /lib/ld-linux.so.2 (0x0015c000)
    libhdfs.so.0 => /home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/hadoop/lib32/libhdfs.so.0 (0x00607000)
    liblog4c.so.3 => /home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so.3 (0x00398000)
    libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x003f1000)
    libm.so.6 => /lib/tls/i686/cmov/libm.so.6 (0x00110000)
    libdl.so.2 => /lib/tls/i686/cmov/libdl.so.2 (0x00136000)
    libexpat.so.1 => /lib/libexpat.so.1 (0x0032b000)
    libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00674000)

My key lib/include paths in the qemu/configure file were changed as follows:
[...]
    GLIB_DIR1_INC=/usr/lib/glib-2.0/include
    GLIB_DIR2_INC=/usr/include/glib-2.0
    HLFS_DIR=/home/jiawei/workshop3/hlfs
    LOG4C_DIR=$HLFS_DIR/3part/log
    HDFS_DIR=$HLFS_DIR/3part/hadoop
    JVM_DIR=/usr/lib/jvm/java-6-openjdk

    if [ `getconf LONG_BIT` -eq "64" ];then
        CLIBS="-L$LOG4C_DIR/lib64"
        CLIBS="-L$HDFS_DIR/lib64 $CLIBS"
        CLIBS="-L$HLFS_DIR/output/lib64  $CLIBS"
        CLIBS="-L$JVM_DIR/jre/lib/amd64/server/ $CLIBS"
    fi

    if [ `getconf LONG_BIT` -eq "32" ];then
        CLIBS="-L$LOG4C_DIR/lib32"
        CLIBS="-L$HDFS_DIR/lib32 $CLIBS"
        CLIBS="-L$JVM_DIR/jre/lib/i386/server $CLIBS"
        CLIBS="-L$HLFS_DIR/output/lib32  $CLIBS"
    fi

    CFLAGS="-I$GLIB_DIR1_INC"
    CFLAGS="-I$GLIB_DIR2_INC $CFLAGS"
    CFLAGS="-I$HLFS_DIR/src/include $CFLAGS"
    CFLAGS="-I$LOG4C_DIR/include $CFLAGS"

    hlfs_libs="$CLIBS -lhlfs -llog4c -lglib-2.0 -lgthread-2.0 -lrt -lhdfs -ljvm"
    if compile_prog "$CFLAGS" "$CLIBS $hlfs_libs" ; then
        hlfs=yes
        libs_tools="$hlfs_libs $libs_tools"
        libs_softmmu="$hlfs_libs $libs_softmmu"
    else
        if test "$hlfs" = "yes" ; then
            feature_not_found "hlfs block device"
        fi
        hlfs=no
    fi
fi
[...]

I cannot understand why libhdfs.so.0 resolves to /home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/hadoop/lib32/libhdfs.so.0 and liblog4c.so.3 to /home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so.3, even though ldd first reports both as "not found".


Original issue reported on code.google.com by [email protected] on 21 Jan 2013 at 6:00

Attachments:

[Bug] hlfs_find_inode_before_time gets an incorrect result in light of our wiki description

Bug summary
========
hlfs_find_inode_before_time is responsible for finding, given a fuzzy time point from the user, an inode addr from before that time. Strictly speaking it may only return an inode addr from before the given time; if no such inode exists, it should return an error. But our current hlfs_find_inode_before_time implementation deviates from this and needs to be fixed.

The intended use of hlfs_find_inode_before_time is as follows.
Rolling back to a checkpoint (knowing only a fuzzy time point): call find_inode_before_time to obtain the inode structure, then call get_inode_info() to obtain the inode information and decide from it whether the user's requirement is met. If so, start the rollback: call init_hlfs() to obtain the global control structure, then call hlfs_open_by_inode() in read-only or writable mode.

Test code
=========
http://cloudxy.googlecode.com/svn/branches/snapshot/src/snapshot/unittest/test_hlfs_find_inode_before_time.c


Test results
=========
jiawei@jiawei-laptop:~/workshop15/cloudxy1/branches/snapshot/src/snapshot/unittest/build$ ./test_hlfs_find_inode_before_time
** Message: enter func main
** Message: leave func main
[snipped]
** Message: current time [0], inode addr is [8240]
** Message: current time [1328112969626], inode addr is [16660]
** Message: current time [1328112970028], inode addr is [84020]
** Message: leave func test_hlfs_find_inode_before_time
clean dir path: /home/jiawei/workshop15/cloudxy1/branches/snapshot/src/snapshot/unittest/build
OK


jiawei@jiawei-laptop:~/workshop15/cloudxy1/branches/snapshot/src/snapshot/unittest/build$ cat testfs/snapshot.txt
+T0@@##$$1328112969644@@##$$8240@@##$$
+T1@@##$$1328112969689@@##$$16660@@##$$T0
+T2@@##$$1328112969724@@##$$25080@@##$$T1
+T3@@##$$1328112969769@@##$$33500@@##$$T2
+T4@@##$$1328112969814@@##$$41920@@##$$T3
+T5@@##$$1328112969858@@##$$50340@@##$$T4
+T6@@##$$1328112969903@@##$$58760@@##$$T5
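
For reference when reading the dump above, the records appear to be '+'-prefixed, "@@##$$"-separated fields in struct snapshot order: sname, timestamp, inode addr, up snapshot name. A parsing sketch follows; the format is inferred from this dump, not from a spec:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

#define SS_SEP "@@##$$"

/* Field names follow struct snapshot; 80 matches HLFS_FILE_NAME_MAX. */
struct ss_rec {
    char sname[80];
    uint64_t timestamp;
    uint64_t inode_addr;
    char up_sname[80];
};

/* Hypothetical parser for one snapshot.txt record, e.g.
 * "+T1@@##$$1328112969689@@##$$16660@@##$$T0". Returns 0 on success. */
static int parse_ss_record(const char *line, struct ss_rec *rec) {
    if (line[0] != '+')                       /* only handle "add" marks */
        return -1;
    char buf[256];
    snprintf(buf, sizeof buf, "%s", line + 1);
    memset(rec, 0, sizeof *rec);
    char *fields[4] = {0};
    char *p = buf;
    for (int i = 0; i < 4 && p != NULL; i++) {
        fields[i] = p;
        char *sep = strstr(p, SS_SEP);
        if (sep != NULL) { *sep = '\0'; p = sep + strlen(SS_SEP); }
        else p = NULL;
    }
    if (fields[0] == NULL || fields[1] == NULL || fields[2] == NULL)
        return -1;
    snprintf(rec->sname, sizeof rec->sname, "%s", fields[0]);
    rec->timestamp = strtoull(fields[1], NULL, 10);
    rec->inode_addr = strtoull(fields[2], NULL, 10);
    if (fields[3] != NULL)                    /* root snapshots have no parent */
        snprintf(rec->up_sname, sizeof rec->up_sname, "%s", fields[3]);
    return 0;
}
```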

Test analysis
========
Comparing the test output against snapshot.txt shows the problem clearly: at time 0 an inode addr (8240) is still returned, which is obviously wrong since the earliest snapshot timestamp is 1328112969644. The next result has the same problem: time 1328112969626 is also earlier than every snapshot, yet addr 16660 is returned.


Bug status
=======
This bug is not fixed yet; I will fix it right away. If you have better suggestions about it, let's discuss and adopt the best approach.

Original issue reported on code.google.com by [email protected] on 1 Feb 2012 at 5:16

revise snapshot bug

Bug summary
=======
        T1                        T2                        T3                        T4
|-------------------|    |-------------------|    |-------------------|    |-------------------|
|  null   |   T1    |    |   T1    |   T2    |    |   T2    |   T3    |    |   T3    |   T4    |
|-------------------|    |-------------------|    |-------------------|    |-------------------|    .......

Note: in each box above, the first field is the up snapshot name and the second field is the current snapshot name.

To delete T2, we must change T3's up snapshot name field to T1; only then is the chain correctly revised. The result should be:

        T1                        T3                        T4
|-------------------|    |-------------------|    |-------------------|
|  null   |   T1    |    |   T1    |   T3    |    |   T3    |   T4    |
|-------------------|    |-------------------|    |-------------------|    .......

But the current revise_snapshot_relation in our code does not implement this.

Code after the fix
==============

static gboolean predicate_same_upname_snapshot(gpointer key, gpointer value, gpointer user_data){
       char *ss_name = (char *)key;
       struct snapshot *ss = (struct snapshot *)value;
       char *del_ss_name = (char *)user_data;
-      if(g_strcmp0(ss_name, del_ss_name) == 0){
+      if(g_strcmp0(ss->up_sname, del_ss_name) == 0){
          return TRUE;
       }
       return FALSE;
}

static void revise_snapshot_relation(GHashTable *ss_hashtable, GList *remove_list){
     int i;
     for(i = 0; i < g_list_length(remove_list); i++){
        char *ss_name = g_list_nth_data(remove_list, i);
        struct snapshot *ss = g_hash_table_lookup(ss_hashtable, ss_name);
        g_assert(ss != NULL);
        char *up_ss_name = ss->up_sname;
        struct snapshot *revise_ss = g_hash_table_find(ss_hashtable, predicate_same_upname_snapshot, ss_name);
        if(revise_ss != NULL){
            snprintf(revise_ss->up_sname, HLFS_FILE_NAME_MAX, "%s", ss->up_sname);
        }
        g_hash_table_remove(ss_hashtable, ss->sname);
-       g_free(ss);
     }
     return;
}

Test code
========
jiawei@jiawei-laptop:~/workshop15/cloudxy/branches/snapshot/test$ cat test1.c
#include <glib.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>

#define HLFS_FILE_NAME_MAX            (80)

struct snapshot {
    uint64_t timestamp;
    uint64_t inode_addr;
    char sname[HLFS_FILE_NAME_MAX];
    char up_sname[HLFS_FILE_NAME_MAX]; /* for tree style snapshot */
} __attribute__((packed));


/* load all snapshot will remove del snapshot and revise relation upname */
static gboolean predicate_same_upname_snapshot(gpointer key, gpointer value, gpointer user_data){
        g_message("enter func %s", __func__);
       char * ss_name = (char*)key;
       struct snapshot *ss = (struct snapshot*)value;
       char * del_ss_name = (char*)user_data;
       if(g_strcmp0(ss->up_sname,del_ss_name) == 0){
        g_message("leave func %s", __func__);
          return TRUE;
       }
        g_message("leave func %s", __func__);
       return FALSE;
}

static void revise_snapshot_relation(GHashTable *ss_hashtable, GList *remove_list){
    g_message("enter func %s", __func__);
     int i = 0;
     g_message("del list length is %d", g_list_length(remove_list));
     for(i = 0; i < g_list_length(remove_list); i++){
        char * ss_name = g_list_nth_data(remove_list,i);
        struct snapshot *ss = g_hash_table_lookup(ss_hashtable,ss_name);
        g_message("99 dbg del ss %s", ss->sname);
        g_assert(ss!=NULL);
        char *up_ss_name = ss->up_sname;
        struct snapshot *revise_ss = g_hash_table_find (ss_hashtable,predicate_same_upname_snapshot,ss_name);
        if(revise_ss !=NULL){
           snprintf(revise_ss->up_sname,HLFS_FILE_NAME_MAX,"%s",ss->up_sname);
        }
        g_message("99 dbg revise %s", revise_ss->sname);
        g_hash_table_remove(ss_hashtable, ss->sname);
        g_message("99 dbg revise %s", revise_ss->sname);
//        g_free(ss);
     }
    g_message("leave func %s", __func__);
     return ;
}

int main(int argc, char **argv) {
    int i = 0;
    char up_sname[128];
    memset(up_sname, 0, 128);
    struct snapshot ss[4];
    memset(ss, 0, sizeof(ss));  /* zero the whole array, not just 4 bytes */
    GHashTable *ss_hashtable = g_hash_table_new_full(g_str_hash, g_str_equal, NULL, NULL);
    for (i = 0; i < 4; i++) {
        ss[i].timestamp = i;
        ss[i].inode_addr = i;
        sprintf(ss[i].sname, "%s%d", "T", i);
        sprintf(ss[i].up_sname, "%s", up_sname);
        memset(up_sname, 0, 128);
        strcpy(up_sname, ss[i].sname);
        g_hash_table_insert(ss_hashtable, ss[i].sname, &ss[i]);
    }
    GList *to_remove_ss_list = NULL;
    to_remove_ss_list = g_list_append(to_remove_ss_list, "T1");
//    to_remove_ss_list = g_list_append(to_remove_ss_list, "T2");
    g_message("list is %s", (char *) to_remove_ss_list->data);
    revise_snapshot_relation(ss_hashtable, to_remove_ss_list);
    g_message("99 dbg");
    GList *values = g_hash_table_get_values(ss_hashtable);
    for (i = 0; i < g_list_length(values); i++) {
        struct snapshot *value = g_list_nth_data(values, i);
        printf("snapshot %d is -------------\n", i);
        printf("name is %s\n", value->sname);
        printf("up name is %s\n", value->up_sname);
    }
    g_hash_table_destroy(ss_hashtable);
    return 0;
}

Output
========
jiawei@jiawei-laptop:~/workshop15/cloudxy/branches/snapshot/test/build$ ./test
** Message: list is T1
** Message: enter func revise_snapshot_relation
** Message: del list length is 1
** Message: 99 dbg del ss T1
** Message: enter func predicate_same_upname_snapshot
** Message: leave func predicate_same_upname_snapshot
** Message: enter func predicate_same_upname_snapshot
** Message: leave func predicate_same_upname_snapshot
** Message: 99 dbg revise T2
** Message: 99 dbg revise T2
** Message: leave func revise_snapshot_relation
** Message: 99 dbg
snapshot 0 is -------------
name is T0
up name is
snapshot 1 is -------------
name is T3
up name is T2
snapshot 2 is -------------
name is T2
up name is T0

Current bug status
==========
This bug has not been handled yet. I hope 康华 can review it before I fix it; my test case itself may also have problems.

Problems with the test case
====================
When I pass destroy functions to g_hash_table_new_full (for automatic freeing), I get a "free invalid pointer" error; help is welcome. (A likely cause: the snapshot structs live in main's stack array ss[4] and the keys point into them, so a g_free destroy function would be freeing stack memory.)

Original issue reported on code.google.com by [email protected] on 29 Jan 2012 at 4:54

QEMU clone question

What steps will reproduce the problem?
1. git clone git://git.qemu.org/qemu.git
2. cd qemu
3. git reset --hard v1.3.0
4. cp ../hlfs/patches/hlfs_driver_for_qemu.patch ./ 
5. git apply hlfs_driver_for_qemu.patch
6. Modify the dead path
7. ./configure 
8. make
9. ./qemu-img create -f hlfs hlfs:local:///tmp/testenv/testfs 10G
10. ./qemu-img create -f hlfs hlfs:local:///tmp/testenv/testfs1 10G
11. ./qemu-img snapshot -c snapshot1 hlfs:local:///tmp/testenv/testfs 
12. ./qemu-img rebase -b local:///tmp/testenv/testfs%snapshot1 hlfs:local:///tmp/testenv/testfs1

What is the expected output? What do you see instead?
Expected output:
$ cat /tmp/testenv/testfs1/superblock 

[METADATA]
uri=local:///tmp/testenv/testfs1
block_size=8192
segment_size=67108864
max_fs_size=1024
is_compress=0
from_segno=0
father_uri=local:///tmp/testenv/testfs
father_ss=snapshot1
snapshot_inode=0

See instead:
$ cat /tmp/testenv/testfs1/superblock 

[METADATA]
uri=local:///tmp/testenv/testfs1
block_size=8192
is_compress=0
segment_size=67108864
max_fs_size=10240


Original issue reported on code.google.com by [email protected] on 23 Jan 2013 at 12:58

restore_last_segno_file is executed even when we use HLFS with the local backend

What steps will reproduce the problem?
NOTE: you should do this in my branch
1. My test case is test.c, located at http://cloudxy.googlecode.com/svn/branches/harry/hlfs/src/snapshot/unittest/test.c
2. cd branches/harry/hlfs/src/snapshot/unittest/build
3. cmake .. && make
4. ./test -u local:///tmp/testenv/testfs -r 4096 -a 40960

What is the expected output? What do you see instead?
[expected output]
TEST: uri is local:///tmp/testenv/testfs, request size is 4096, total size is 40960
TEST  hlfs open over 
test hlfs write
offset:4096
offset:8192
offset:12288
offset:16384
offset:20480
offset:24576
offset:28672
offset:32768
offset:36864
offset:40960
TEST  hlfs write over 
test hlfs read
offset:4096
offset:8192
offset:12288
offset:16384
offset:20480
offset:24576
offset:28672
offset:32768
offset:36864
offset:40960

[see instead]
TEST: uri is local:///tmp/testenv/testfs, request size is 4096, total size is 40960
TEST  hlfs open over 
test hlfs write
offset:4096
offset:8192
offset:12288
offset:16384
offset:20480
offset:24576
offset:28672
offset:32768
offset:36864
offset:40960
TEST  hlfs write over 
test hlfs read
offset:4096
offset:8192
offset:12288
offset:16384
offset:20480
offset:24576
offset:28672
offset:32768
offset:36864
offset:40960
cp: No FileSystem for scheme: local
rm: No FileSystem for scheme: local
mv: No FileSystem for scheme: local

That is to say, when we run HLFS with the local backend, the restore action is still performed. I think we should fix it so the restore only happens with the HDFS backend.
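
The proposed fix can be sketched as a scheme check. The helper name below is hypothetical; it assumes the backend is identifiable from the URI prefix, as in the URIs used throughout these reports:

```c
#include <string.h>

/* Hypothetical guard: restore_last_segno_file should only run for the
 * HDFS backend, so check the URI scheme before invoking it. */
static int should_restore_last_segno(const char *uri) {
    return strncmp(uri, "hdfs://", 7) == 0;
}
```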




Original issue reported on code.google.com by [email protected] on 19 Dec 2011 at 2:16

fedora 16 xen-bridge

What steps will reproduce the problem?
1. Running HVM with QEMU and trying to create a VM. If I restart the bridge via /etc/xen/scripts/network-bridge, the bridge works but the host's IP becomes unavailable. If I make the IP available again, the bridge does not work.


What is the expected output? What do you see instead?
Both of them should work.

What version of the product are you using? On what operating system?
Linux fedora 3.1.0-7.fc16.x86_64

Please provide any additional information below.


Original issue reported on code.google.com by [email protected] on 8 May 2012 at 5:26

HLFS Clone operation has a bug

What steps will reproduce the problem?
1. svn checkout 
http://cloudxy.googlecode.com/svn/branches/hlfs/person/harry/hlfs  hlfs 
2. cd hlfs/build;
3. cmake ../src/;
4, make all;
5, cd ../src/tools/;
6, ./mkfs.hlfs -u local:///tmp/testenv/testfs -b 8192 -s 67108864 -m 1024
7, ./mkfs.hlfs -u local:///tmp/testenv/testfs1 -b 8192 -s 67108864 -m 10240
8, ./snapshot.hlfs -u local:///tmp/testenv/testfs -s snapshot1
9, ./clone.hlfs -f local:///tmp/testenv/testfs%snapshot1 -s 
local:///tmp/testenv/testfs1

What is the expected output? What do you see instead?
Expected output:
$ cat /tmp/testenv/testfs1/superblock 

[METADATA]
uri=local:///tmp/testenv/testfs1
block_size=8192
segment_size=67108864
max_fs_size=1024
is_compress=0
from_segno=0
father_uri=local:///tmp/testenv/testfs
father_ss=snapshot1
snapshot_inode=0

See instead:

$ cat /tmp/testenv/testfs1/superblock 

[METADATA]
uri=local:///tmp/testenv/testfs1
block_size=8192
segment_size=67108864
max_fs_size=1024
is_compress=0
from_segno=13803037682521079809
father_uri=local:///tmp/testenv/testfs
father_ss=snapshot1
snapshot_inode=0


NOTE: See from_segno option for their differences.


Original issue reported on code.google.com by [email protected] on 23 Jan 2013 at 11:49

[BUG]mkfs.hlfs -u hdfs://namenode:8020/tmp/testenv/testfs -b 8192 -s 67108864 -m 1024

Environment
====
Distributed environment with three machines:
namenode     222.24.10.147
datanode1    222.24.10.153
datanode2    222.24.22.6

All machines run CentOS 5.4, hadoop-0.20.2-cdh3u2, jdk1.6.0_29.

Steps executed on the namenode
=========================
1. cd hadoop-0.20.2-cdh3u2 && bin/hadoop fs -mkdir hdfs://namenode:8020/tmp/testenv
2. cd ../cloudxy/trunk/hlfs/build
3. cmake ../src/ && make all
4. cd ../output/bin
5. mkfs.hlfs -u hdfs://namenode:8020/tmp/testenv/testfs -b 8192 -s 67108864 -m 1024

What is the expected output? What do you see instead?
======================================================
Expected: the testfs directory and its superblock file are created successfully.
Instead, the following error appears:
enter func init_storage_handler
enter func parse_from_uri
leave func parse_from_uri
loc [fs:testfs], [path:tmp/testenv/testfs]

hdfs -- enter func hdfs_connect
enter func parse_from_uri
leave func parse_from_uri
hdfs -- leave func hdfs_connect
leave func init_storage_handler
hdfs -- enter func hdfs_file_mkdir
hdfs -- enter func build_hdfs_path
enter func parse_from_uri
leave func parse_from_uri
full path is tmp/testenv/testfs
hdfs -- leave func build_hdfs_path
hdfs mkdir:tmp/testenv/testfs
** Message: hdfs_file_mkdir -- full path is tmp/testenv/testfs
** Message: can not mkdir for our fs hdfs://namenode:8020/tmp/testenv/testfs


Please use labels and text to provide additional information.


Original issue reported on code.google.com by [email protected] on 8 Dec 2011 at 1:50

iscsiadm can not discovery the HLFS target

What steps will reproduce the problem?
1. Create a target with HLFS lun according to the wiki page 
http://code.google.com/p/cloudxy/wiki/Support_iSCSI
2. Install iSCSI tools:
$ sudo apt-get install open-iscsi
3. Discover the HLFS target:
$ sudo iscsiadm -m discovery -t st -p localhost

What is the expected output? What do you see instead?
Nothing was found.

Please use labels and text to provide additional information.

Original issue reported on code.google.com by [email protected] on 24 Jan 2013 at 11:27

./qemu-img info hlfs:local:///tmp/testenv/testfs errors

What steps will reproduce the problem?
1. git clone git://git.qemu.org/qemu.git
2. cd qemu
3. git reset --hard v1.3.0
4. cp ../hlfs/patches/hlfs_driver_for_qemu.patch ./ 
5. patch -p1 < hlfs_driver_for_qemu.patch
6. Modify the dead path
7. ./configure 
8. make
9. ./qemu-img create -f hlfs hlfs:local:///tmp/testenv/testfs 10G
10. ./qemu-img info hlfs:local:///tmp/testenv/testfs 

What is the expected output? What do you see instead?
Expected output:
Show hlfs qemu image infos with no errors

See instead:
See two attach files for two conditions. 



Original issue reported on code.google.com by [email protected] on 22 Jan 2013 at 2:57

Attachments:

Log output problem

When we start HLFS for the first time, snapshot.txt and alive_snapshot.txt do not exist yet. That is normal, but our log still prints ERROR, which may greatly reduce the credibility of our logs. The log is as follows:

....
20120131 16:01:40.811 DEBUG    hlfslog- [/home/jiawei/workshop15/cloudxy1/branches/snapshot/src/backend/local_storage.c][local_file_is_exist][110]full path /home/jiawei/workshop15/cloudxy1/branches/snapshot/src/snapshot/unittest/build/testfs/snapshot.txt
20120131 16:01:40.811 ERROR    hlfslog- [/home/jiawei/workshop15/cloudxy1/branches/snapshot/src/utils/storage_helper.c][file_get_contents][552]file is not exist
20120131 16:01:40.811 DEBUG    hlfslog- [/home/jiawei/workshop15/cloudxy1/branches/snapshot/src/backend/local_storage.c][local_file_close][95]local -- enter func local_file_close
20120131 16:01:40.811 DEBUG    hlfslog- [/home/jiawei/workshop15/cloudxy1/branches/snapshot/src/backend/local_storage.c][local_file_close][99]local -- leave func local_file_close
20120131 16:01:40.811 DEBUG    hlfslog- [/home/jiawei/workshop15/cloudxy1/branches/snapshot/src/utils/storage_helper.c][file_get_contents][590]leave func file_get_contents
20120131 16:01:40.811 ERROR    hlfslog- [/home/jiawei/workshop15/cloudxy1/branches/snapshot/src/snapshot/snapshot_helper.c][load_all_snapshot][144]can not read snapshot content!
20120131 16:01:40.811 ERROR    hlfslog- [/home/jiawei/workshop15/cloudxy1/branches/snapshot/src/snapshot/snapshot_helper.c][load_snapshot_by_name][249]load all ss error

.....

Original issue reported on code.google.com by [email protected] on 31 Jan 2012 at 4:20

lock bug

There is a hidden bug in the basic snapshot API hlfs_list_all_snapshots(...); see the analysis below.

int hlfs_list_all_snapshots(const char *uri, char **ss_name_array) {
          .....
    if (0 > rewrite_snapshot_file(storage, ss_hashtable)) {
        HLOG_ERROR("rewrite snapshot.txt error");
        ret = -6;
        goto out;
    }
          .....
}

int rewrite_snapshot_file(struct back_storage *storage, GHashTable *ss_hashtable)
{
    HLOG_DEBUG("enter func %s", __func__);
    if((0 == storage->bs_file_is_exist(storage, SNAPSHOT_FILE)) && 
            (0 > storage->bs_file_delete(storage, SNAPSHOT_FILE))) {
        HLOG_ERROR("remove snapshot.txt failed");
        return -1;
    }
    bs_file_t file = storage->bs_file_create(storage, SNAPSHOT_FILE);
    if (file == NULL) {
        HLOG_ERROR("create snapshot.txt error");
        return -2;
    }
    storage->bs_file_close(storage, file);
    GList *list = g_hash_table_get_values(ss_hashtable);
    g_list_foreach(list, dump_ss_one_by_one, storage);
    g_list_free(list);
    HLOG_DEBUG("leave func %s", __func__);
    return 0;
}

void dump_ss_one_by_one(gpointer data, gpointer storage)
{
    HLOG_DEBUG("enter func %s", __func__);
    if (NULL == data || NULL == storage) {
        HLOG_ERROR("Param error");
        return ;
    }
    struct snapshot *ss = (struct snapshot *) data;
    char *file_name = SNAPSHOT_FILE;
    if (0 > dump_snapshot((struct back_storage *)storage, file_name, ss)) {
        HLOG_ERROR("dump ss error");
        return ;
    }
    HLOG_DEBUG("leave func %s", __func__);
}

hlfs_list_all_snapshots calls rewrite_snapshot_file, rewrite_snapshot_file calls dump_ss_one_by_one, and dump_ss_one_by_one in turn calls dump_snapshot. Here lies the problem: these file I/O operations should be performed under a lock, but no lock is taken. hlfs_rm_snapshot has the same problem; both must take the lock.
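
The locking pattern can be sketched with an ordinary mutex. The wrapper below is a hypothetical pattern, not the real HLFS locking API: every path that rewrites snapshot.txt funnels through one lock, so the delete/create/append sequence inside rewrite_snapshot_file cannot interleave with another writer.

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical serialization of snapshot.txt I/O: hlfs_list_all_snapshots,
 * hlfs_rm_snapshot, and any other caller of rewrite_snapshot_file would
 * take the same mutex around the rewrite. */
static pthread_mutex_t snapshot_file_lock = PTHREAD_MUTEX_INITIALIZER;

static int with_snapshot_file_lock(int (*op)(void *), void *arg) {
    pthread_mutex_lock(&snapshot_file_lock);
    int ret = op(arg);    /* e.g. a thunk calling rewrite_snapshot_file() */
    pthread_mutex_unlock(&snapshot_file_lock);
    return ret;
}

/* Stand-in for a snapshot.txt rewrite, used for illustration only. */
static int demo_rewrite(void *arg) {
    (void)arg;
    return 0;
}
```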


Original issue reported on code.google.com by [email protected] on 6 Jan 2012 at 4:09

Install OS with "qemu-system-x86_64" command into HLFS backend storage errors

What steps will reproduce the problem?
1. git clone git://git.qemu.org/qemu.git
2. cd qemu
3. git reset --hard v1.3.0
4. cp ../hlfs/patches/hlfs_driver_for_qemu.patch ./ 
5. git apply hlfs_driver_for_qemu.patch
6. Modify the dead path
7. ./configure 
8. make
9. ./qemu-img create -f hlfs hlfs:local:///tmp/testenv/testfs 10G
10. cd x86_64-softmmu
11. ./qemu-system-x86_64 -hda hlfs:local:///tmp/testenv/testfs /home/jiawei/test/ubuntu-12.04.1-desktop-amd64.iso -boot d -m 512 -no-acpi

What is the expected output? What do you see instead?
Expected output:
Install OS successfully

See instead:
jiawei@jiawei-laptop:~/workshop4/qemu/x86_64-softmmu$ ./qemu-system-x86_64 -hda hlfs:local:///tmp/testenv/testfs /home/jiawei/test/ubuntu-12.04.1-desktop-i386.iso -boot d -m 512 -no-acpi
** Message: enter func bdrv_hlbs_init
** Message: leave func bdrv_hlbs_init
enter func bdrv_new
leave func bdrv_new
999 enter func bdrv_open
999$$ filename is hlfs:local:///tmp/testenv/testfs
999 enter func find_image_format
enter func bdrv_new
leave func bdrv_new
** Message: enter func hlbs_open
** Message: 999 filename is hlfs:local:///tmp/testenv/testfs
** Message: 999 filename is local:///tmp/testenv/testfs
** Message: enter func parse_vdiname
** Message: 999 filename is local:///tmp/testenv/testfs
** Message: leave func parse_vdiname
enter func init_storage_handler
loc [fs:testfs], 

uri:local:///tmp/testenv/testfs,head:local,dir:/tmp/testenv,fsname:testfs,hostname:default,port:0,user:kanghua
leave func init_storage_handler
enter func init_from_superblock
SEGMENT_SIZE:67108864,HBLOCK_SIZE:8192
father uri:(null)
enter func get_cur_latest_segment_info
how much file :1

7777 file:superblock,size:115,time:1359731400

7777 file:,size:0,time:0

leave func get_cur_latest_segment_info
Raw Hlfs Ctrl Init Over ! 
uri:local:///tmp/testenv/testfs,max_fs_size:10240,seg_size:67108864,block_size:8192,last_segno:0,last_offset:0,start_segno:0,io_nonactive_period:10
enter func seg_clean_task
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359731408899
Data Block Cache Init Over ! 
cache_size:4096,block_size:8192,flush_interval:5,flush_trigger_level:80,flush_once_size:64
--iblock_size:8192,icache_size:1024,invalidate_trigger_level:90,invalidate_once_size:64
-- flush worker doing --
Indirect  Block Cache Init Over ! 
icache_size:1024,iblock_size:8192,invalidate_trigger_level:90,invalidate_once_size:64
enter func hlfs_open
inode no 0 , inode address 0
empty filesystem testfs
create new fs inode !
ctrl->rw_inode_flag:1
do not need read alive snapshot file
leave func hlfs_open
** Message: 1leave func hlbs_open
** Message: enter func hlbs_getlength
** Message: leave func hlbs_getlength
** Message: enter func hlbs_read
Hlfs Read Req pos:0,read_len:2048,last_segno:0,last_offset:0,cur_file_len:0
read offset:0,read len:2048
only need to read one block: 0
--Entering func dbcache_query_block
The hash table of block_map is empty
NO item in hash table
not find in cache!
beyond current inode's length:0
read len 2048
** Message: leave func hlbs_read
** Message: enter func hlbs_close
hlfs close over !
enter func deinit_storage_handler
disconnect succ
leave func deinit_storage_handler
 time wait res for cond is :1 !
-- flush worker should exit --
--flush worker exit--
--Entering func icache_destroy
--Leaving func icache_destroy
leave func seg_clean_task
** Message: leave func hlbs_close
999 after find_image_format, ret is 2048
enter func bdrv_new
leave func bdrv_new
** Message: enter func hlbs_open
** Message: 999 filename is hlfs:local:///tmp/testenv/testfs
** Message: 999 filename is local:///tmp/testenv/testfs
** Message: enter func parse_vdiname
** Message: 999 filename is local:///tmp/testenv/testfs
** Message: leave func parse_vdiname
enter func init_storage_handler
loc [fs:testfs], 

uri:local:///tmp/testenv/testfs,head:local,dir:/tmp/testenv,fsname:testfs,hostname:default,port:0,user:kanghua
leave func init_storage_handler
enter func init_from_superblock
SEGMENT_SIZE:67108864,HBLOCK_SIZE:8192
father uri:(null)
enter func get_cur_latest_segment_info
how much file :1

7777 file:superblock,size:115,time:1359731400

7777 file:,size:0,time:0

leave func get_cur_latest_segment_info
Raw Hlfs Ctrl Init Over ! 
uri:local:///tmp/testenv/testfs,max_fs_size:10240,seg_size:67108864,block_size:8192,last_segno:0,last_offset:0,start_segno:0,io_nonactive_period:10
enter func seg_clean_task
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359731409900
Data Block Cache Init Over ! 
cache_size:4096,block_size:8192,flush_interval:5,flush_trigger_level:80,flush_once_size:64
--iblock_size:8192,icache_size:1024,invalidate_trigger_level:90,invalidate_once_size:64
-- flush worker doing --
Indirect  Block Cache Init Over ! 
icache_size:1024,iblock_size:8192,invalidate_trigger_level:90,invalidate_once_size:64
enter func hlfs_open
inode no 0 , inode address 0
empty filesystem testfs
create new fs inode !
ctrl->rw_inode_flag:1
do not need read alive snapshot file
leave func hlfs_open
** Message: 1leave func hlbs_open
** Message: enter func hlbs_getlength
** Message: leave func hlbs_getlength
** Message: enter func hlbs_getlength
** Message: leave func hlbs_getlength
qemu-system-x86_64: -hda hlfs:local:///tmp/testenv/testfs: drive with bus=0, unit=0 (index=0) exists
jiawei@jiawei-laptop:~/workshop4/qemu/x86_64-softmmu$ ./qemu-system-x86_64 -hda hlfs:local:///tmp/testenv/testfs /home/jiawei/test/ubuntu-12.04.1-desktop-amd64.iso -boot d -m 512 -no-acpi
** Message: enter func bdrv_hlbs_init
** Message: leave func bdrv_hlbs_init
enter func bdrv_new
leave func bdrv_new
999 enter func bdrv_open
999$$ filename is hlfs:local:///tmp/testenv/testfs
999 enter func find_image_format
enter func bdrv_new
leave func bdrv_new
** Message: enter func hlbs_open
** Message: 999 filename is hlfs:local:///tmp/testenv/testfs
** Message: 999 filename is local:///tmp/testenv/testfs
** Message: enter func parse_vdiname
** Message: 999 filename is local:///tmp/testenv/testfs
** Message: leave func parse_vdiname
enter func init_storage_handler
loc [fs:testfs], 

uri:local:///tmp/testenv/testfs,head:local,dir:/tmp/testenv,fsname:testfs,hostname:default,port:0,user:kanghua
leave func init_storage_handler
enter func init_from_superblock
SEGMENT_SIZE:67108864,HBLOCK_SIZE:8192
father uri:(null)
enter func get_cur_latest_segment_info
how much file :1

7777 file:superblock,size:115,time:1359731400

7777 file:,size:0,time:0

leave func get_cur_latest_segment_info
Raw Hlfs Ctrl Init Over ! 
uri:local:///tmp/testenv/testfs,max_fs_size:10240,seg_size:67108864,block_size:8192,last_segno:0,last_offset:0,start_segno:0,io_nonactive_period:10
enter func seg_clean_task
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359731455435
Data Block Cache Init Over ! 
cache_size:4096,block_size:8192,flush_interval:5,flush_trigger_level:80,flush_once_size:64
--iblock_size:8192,icache_size:1024,invalidate_trigger_level:90,invalidate_once_size:64
-- flush worker doing --
Indirect  Block Cache Init Over ! 
icache_size:1024,iblock_size:8192,invalidate_trigger_level:90,invalidate_once_size:64
enter func hlfs_open
inode no 0 , inode address 0
empty filesystem testfs
create new fs inode !
ctrl->rw_inode_flag:1
do not need read alive snapshot file
leave func hlfs_open
** Message: 1leave func hlbs_open
** Message: enter func hlbs_getlength
** Message: leave func hlbs_getlength
** Message: enter func hlbs_read
Hlfs Read Req pos:0,read_len:2048,last_segno:0,last_offset:0,cur_file_len:0
read offset:0,read len:2048
only need to read one block: 0
--Entering func dbcache_query_block
The hash table of block_map is empty
NO item in hash table
not find in cache!
beyond current inode's length:0
read len 2048
** Message: leave func hlbs_read
** Message: enter func hlbs_close
hlfs close over !
enter func deinit_storage_handler
disconnect succ
leave func deinit_storage_handler
 time wait res for cond is :1 !
-- flush worker should exit --
--flush worker exit--
--Entering func icache_destroy
--Leaving func icache_destroy
leave func seg_clean_task
** Message: leave func hlbs_close
999 after find_image_format, ret is 2048
enter func bdrv_new
leave func bdrv_new
** Message: enter func hlbs_open
** Message: 999 filename is hlfs:local:///tmp/testenv/testfs
** Message: 999 filename is local:///tmp/testenv/testfs
** Message: enter func parse_vdiname
** Message: 999 filename is local:///tmp/testenv/testfs
** Message: leave func parse_vdiname
enter func init_storage_handler
loc [fs:testfs], 

uri:local:///tmp/testenv/testfs,head:local,dir:/tmp/testenv,fsname:testfs,hostname:default,port:0,user:kanghua
leave func init_storage_handler
enter func init_from_superblock
SEGMENT_SIZE:67108864,HBLOCK_SIZE:8192
father uri:(null)
enter func get_cur_latest_segment_info
how much file :1

7777 file:superblock,size:115,time:1359731400

7777 file:,size:0,time:0

leave func get_cur_latest_segment_info
Raw Hlfs Ctrl Init Over ! 
uri:local:///tmp/testenv/testfs,max_fs_size:10240,seg_size:67108864,block_size:8192,last_segno:0,last_offset:0,start_segno:0,io_nonactive_period:10
enter func seg_clean_task
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359731456436
Data Block Cache Init Over ! 
cache_size:4096,block_size:8192,flush_interval:5,flush_trigger_level:80,flush_once_size:64
--iblock_size:8192,icache_size:1024,invalidate_trigger_level:90,invalidate_once_size:64
-- flush worker doing --
Indirect  Block Cache Init Over ! 
icache_size:1024,iblock_size:8192,invalidate_trigger_level:90,invalidate_once_size:64
enter func hlfs_open
inode no 0 , inode address 0
empty filesystem testfs
create new fs inode !
ctrl->rw_inode_flag:1
do not need read alive snapshot file
leave func hlfs_open
** Message: 1leave func hlbs_open
** Message: enter func hlbs_getlength
** Message: leave func hlbs_getlength
** Message: enter func hlbs_getlength
** Message: leave func hlbs_getlength
qemu-system-x86_64: -hda hlfs:local:///tmp/testenv/testfs: drive with bus=0, unit=0 (index=0) exists
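A likely explanation for the "drive with bus=0, unit=0 (index=0) exists" failure above, offered here as an assumption rather than a confirmed diagnosis: QEMU treats a bare positional argument as another disk image on index 0, which collides with the image already given via -hda. Attaching the installer ISO with -cdrom would avoid the collision:

```shell
# Sketch of a corrected invocation (assumes the same paths as the report):
./qemu-system-x86_64 -hda hlfs:local:///tmp/testenv/testfs \
    -cdrom /home/jiawei/test/ubuntu-12.04.1-desktop-i386.iso \
    -boot d -m 512 -no-acpi
```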




Original issue reported on code.google.com by [email protected] on 1 Feb 2013 at 3:20

[HLFS]HLFS Pool driver for Libvirt V1 multi-line comment warning

What steps will reproduce the problem?
1. Set up the HLFS environment; see http://code.google.com/p/cloudxy/wiki/HlfsUserManual
2. Set up the libvirt environment:
  a. git clone git://libvirt.org/libvirt.git
  b. git reset --hard v1.0.1
  c. wget http://cloudxy.googlecode.com/svn/branches/hlfs/person/harry/hlfs/patches/hlfs_driver_for_libvirt_pool.patch
  d. git apply hlfs_driver_for_libvirt_pool.patch
  (Here you need to adjust some library and header search paths in the libvirt/src/Makefile.am file.)
  e. ./autogen.sh
  f. ./configure
  g. make 

output:

make[3]: *** Waiting for unfinished jobs....
  CC     libvirt_driver_storage_impl_la-storage_backend_hlfs.lo
In file included from /home/kanghua/hlfs/src/include/snapshot_helper.h:14:0,
                 from storage/storage_backend_hlfs.c:19:
/home/kanghua/hlfs/src/include/snapshot.h:56:2: error: multi-line comment [-Werror=comment]
cc1: all warnings being treated as errors


Let's look at snapshot.h:
    54  
    55      struct snapshot *hlfs_get_all_snapshots(const char *uri, int *num_entries);
    56      //struct snapshot *__hlfs_get_all_snapshots(struct back_storage\ *storage, 
    57      int *num_entries);
    58  

Changing it to the following fixes the problem. What happens is that the trailing backslash at the end of the // comment splices the next source line into the comment, which is exactly what triggers the multi-line comment warning; giving the continuation line its own // marker avoids it.
    54  
    55      struct snapshot *hlfs_get_all_snapshots(const char *uri, int *num_entries);
    56      //struct snapshot *__hlfs_get_all_snapshots(struct back_storage *storage, 
    57      //int *num_entries);
    58  
Please use labels and text to provide additional information.

Original issue reported on code.google.com by littlesmartsmart on 6 Feb 2013 at 11:52

hlfs_rm_snapshot remove a snapshot not taken

Bug summary
========
hlfs_rm_snapshot is designed to delete a snapshot. But if a snapshot was never taken and we try to delete it anyway, we should tell the user that the snapshot to be deleted does not exist. Consider this scenario: a user misspells the snapshot name when deleting it, and because the system gives no warning, the user believes the snapshot was deleted.

Test environment
=========
Distributor ID: Ubuntu
Description:    Ubuntu 10.04.3 LTS
Release:        10.04
Codename:       lucid
Linux jiawei-laptop 2.6.32-37-generic #81-Ubuntu SMP Fri Dec 2
20:35:14 UTC 2011 i686 GNU/Linux

Test case
=========
#include <glib.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include "api/hlfs.h"
#include "hlfs_log.h"

#define REQ_SIZE 4096
#define TOTAL_SIZE 40960

typedef struct {
       struct hlfs_ctrl *ctrl;
} Fixture;

static void
do_snapshot(Fixture *fixture, int i) {
       g_message("enter func %s", __func__);
       char buffer[128];
       memset(buffer, 0, 128);
       if (0 == i) {
               sprintf(buffer, "%s%d", "snapshot", i);
               g_message("%d buffer is [%s]", i, buffer);
               int ret = hlfs_take_snapshot(fixture->ctrl, buffer);
               g_assert(ret == 0);
       } else if (1 == i) {
               sprintf(buffer, "%s", " ");
               g_message("%d buffer is [%s]", i, buffer);
               int ret = hlfs_take_snapshot(fixture->ctrl, buffer);
               g_assert(ret == 0);
       } else if (2 == i) {
               sprintf(buffer, "%s", "+");
               g_message("%d buffer is [%s]", i, buffer);
               int ret = hlfs_take_snapshot(fixture->ctrl, buffer);
               g_assert(ret == 0);
       } else if (3 == i) {
               sprintf(buffer, "%s", "##@");
               g_message("%d buffer is [%s]", i, buffer);
               int ret = hlfs_take_snapshot(fixture->ctrl, buffer);
               g_assert(ret == 0);
       } else if (4 == i) {
               sprintf(buffer, "%s", "..");
               g_message("%d buffer is [%s]", i, buffer);
               int ret = hlfs_take_snapshot(fixture->ctrl, buffer);
               g_assert(ret == 0);
       } else if (5 == i) {
               sprintf(buffer, "%s", " **");
               g_message("%d buffer is [%s]", i, buffer);
               int ret = hlfs_take_snapshot(fixture->ctrl, buffer);
               g_assert(ret == 0);
       } else if (6 == i) {
               sprintf(buffer, "%s", "1234");
               g_message("%d buffer is [%s]", i, buffer);
               int ret = hlfs_take_snapshot(fixture->ctrl, buffer);
               g_assert(ret == 0);
       }
       g_message("leave func %s", __func__);
       return ;
}

static void
take_snapshot(Fixture *fixture, const void *data) {
       g_message("enter func %s", __func__);
       char content[REQ_SIZE];
       int offset = 0;
       int i = 0;

       memset(content, 0, REQ_SIZE);
       while (offset < TOTAL_SIZE) {
               int ret1 = hlfs_write(fixture->ctrl, content, REQ_SIZE, offset);
               g_assert_cmpint(ret1, ==, REQ_SIZE);
               do_snapshot(fixture, i);
               offset += REQ_SIZE;
               i += 1;
       }
       g_message("leave func %s", __func__);
       return;
}

static void
hlfs_rm_snapshot_setup(Fixture *fixture, const void *data) {
       g_message("enter func %s", __func__);
       const char *uri = (const char *)data;
       fixture->ctrl = init_hlfs(uri);
       int ret = hlfs_open(fixture->ctrl, 1);
       g_assert_cmpint(ret, == , 0);
       g_assert(fixture->ctrl != NULL);
       take_snapshot(fixture, data);
       g_message("leave func %s", __func__);
       return ;
}

static void
test_hlfs_rm_snapshot(Fixture *fixture, const void *data) {
       g_message("enter func %s", __func__);
       const char *uri = (const char *) data;
       int ret = 0;
       ret = hlfs_rm_snapshot(uri, "snapshot0");
       g_assert(ret == 0);
       ret = hlfs_rm_snapshot(uri, "bug here");
       g_assert(ret == 0);
       g_message("leave func %s", __func__);
       return ;
}

static void
hlfs_rm_snapshot_tear_down(Fixture *fixture, const void *data) {
       g_message("enter func %s", __func__);
       hlfs_close(fixture->ctrl);
       deinit_hlfs(fixture->ctrl);
       g_message("leave func %s", __func__);
       return;
}

int main(int argc, char **argv) {
       g_message("enter func %s", __func__);
       if (log4c_init()) {
               g_message("log4c init error!");
       }
       g_test_init(&argc, &argv, NULL);
       g_test_add("/misc/hlfs_rm_snapshot",
                               Fixture,
                               "local:///tmp/testenv/testfs",
                               hlfs_rm_snapshot_setup,
                               test_hlfs_rm_snapshot,
                               hlfs_rm_snapshot_tear_down);
       g_message("leave func %s", __func__);
       return g_test_run();
}


Test case analysis
===========
hlfs_take_snapshot is called to create the following snapshots:
"snapshot0"
" "
"+"
"##@"
".."
" **"
"1234"

hlfs_rm_snapshot is called to delete the following snapshots:
"snapshot0"
"bug here"

Expected output
=========
+snapshot0#1325165574277#8240#
+ #1325165574333#16660#
++#1325165574401#25080#
+##@#1325165574445#33500#
+..#1325165574490#41920#
+ **#1325165574535#50340#
+1234#1325165574579#58760#
-snapshot0###

Actual output
========
+snapshot0#1325165574277#8240#
+ #1325165574333#16660#
++#1325165574401#25080#
+##@#1325165574445#33500#
+..#1325165574490#41920#
+ **#1325165574535#50340#
+1234#1325165574579#58760#
-snapshot0###
-bug here###

Bug analysis
=======
The snapshot "bug here" was never taken before we deleted it, so a delete record for it should not be written to snapshot.txt; at the very least the user should get a message such as "the snapshot you are deleting does not exist".

Root cause
==============
Before deleting a snapshot, the code never checks whether that snapshot currently exists.

Current status
=================
This bug is not fixed yet; I will deal with it as soon as possible.


Original issue reported on code.google.com by [email protected] on 29 Dec 2011 at 1:58

valgrind for test_hlfs_take_snapshot

Bug description
=====
Running test_hlfs_take_snapshot under valgrind revealed many memory leaks.
The scan log is shared below; anyone interested is welcome to help fix them.

Reproducing the bug
=====
1. Check out the snapshot branch
svn checkout http://cloudxy.googlecode.com/svn/trunk/  snapshot
2. Build libhlfs
cd snapshot/build && cmake ../src && make all
3. Build the unit tests
cd ../src/snapshot/unittest/build && cmake .. && make all
4. Scan with valgrind
valgrind --log-file=memcheck.log --tool=memcheck --leak-check=full --show-reachable=yes ./test_hlfs_take_snapshot

Then open memcheck.log to see the details, or view them via the following link:
http://paste.org/43063

Bug analysis
=====
These are all memory leaks and other memory problems currently present in HLFS.

Bug fixes
=====
Since there are quite a few of them, each one needs to be analyzed and fixed individually. We have decided to pool everyone's effort to resolve these bugs: anyone interested can reproduce them with the steps above and fix some of them, or browse part of the bugs via the following link and then fix them:
http://paste.org/43063


Advisors: Chen Lijun, Kang Hua
Completed by: Jia Weiwei
Follow-up owner: Jia Weiwei


Original issue reported on code.google.com by [email protected] on 1 Jan 2012 at 6:29

test_hlfs_tree_snapshots error

Bug summary
=======
Today I tested building a tree of snapshots, but an error occurred while creating a branch. The root cause is not yet clear; I am investigating.

Reproducing the bug
=======
1. Check out the snapshot branch
svn checkout http://cloudxy.googlecode.com/svn/branches/snapshot/ hlfs
2. cd hlfs/build && cmake ../src && make all
3. cd ../src/snapshot/unittest/build && cmake ..
4. make all
5. ./test_hlfs_tree_snapshots

Test source
========
http://cloudxy.googlecode.com/svn/branches/snapshot/src/snapshot/unittest/test_hlfs_tree_snapshots.c


Execution results
=========
Expected result
---------------
Everything works; no assertion fires.

Actual result
--------------
jiawei@jiawei-laptop:~/workshop15/snapshot/src/snapshot/unittest/build$ ./test_hlfs_tree_snapshots 
/misc/hlfs_take_snapshot: test env dir is /home/jiawei/workshop15/snapshot/src/snapshot/unittest/build
uri is local:///home/jiawei/workshop15/snapshot/src/snapshot/unittest/build/testfs
** Message: cmd is [../mkfs.hlfs -u local:///home/jiawei/workshop15/snapshot/src/snapshot/unittest/build/testfs -b 8192 -s 67108864 -m 1024]
** Message: key file data :
[METADATA]
uri=local:///home/jiawei/workshop15/snapshot/src/snapshot/unittest/build/testfs
block_size=8192
segment_size=67108864
max_fs_size=1024

** Message: sb file path /home/jiawei/workshop15/snapshot/src/snapshot/unittest/build/testfs/superblock
** Message: sb file path 1/home/jiawei/workshop15/snapshot/src/snapshot/unittest/build/testfs/superblock
** Message: sb file path 2/home/jiawei/workshop15/snapshot/src/snapshot/unittest/build/testfs/superblock
** Message: append size:148
fixture->uri is local:///home/jiawei/workshop15/snapshot/src/snapshot/unittest/build/testfs
** Message: get_current_time is 1326271479821
** Message: length is 4096
** Message: get_current_time is 1326271479873
** Message: length is 8192
** Message: 1 buffer is [T1]
** Message: get_current_time is 1326271479919
** Message: length is 12288
** Message: 2 buffer is [T2]
** Message: get_current_time is 1326271479963
** Message: length is 16384
** Message: 3 buffer is [T3]
** Message: get_current_time is 1326271480007
** Message: length is 20480
** Message: 4 buffer is [T4]
** Message: get_current_time is 1326271480052
** Message: length is 24576
** Message: 5 buffer is [T5]
** Message: get_current_time is 1326271480109
** Message: length is 28672
** Message: 6 buffer is [T6]
** Message: get_current_time is 1326271480154
** Message: length is 32768
** Message: 7 buffer is [T7]
** Message: get_current_time is 1326271480209
** Message: length is 36864
** Message: 8 buffer is [T8]
** Message: get_current_time is 1326271480254
** Message: length is 40960
** Message: 9 buffer is [T9]
** Message: get_current_time is 1326271480298
** Message: length is 45056
** Message: 10 buffer is [T10]
** Message: get_current_time is 1326271481349
** Message: length is 24576
** Message: get_current_time is 1326271481389
** Message: length is 24576
** Message: 1 buffer is [T11]
**
ERROR:/home/jiawei/workshop15/snapshot/src/snapshot/unittest/test_hlfs_tree_snapshots.c:219:test_hlfs_tree_snapshots: assertion failed (ret1 == REQ_SIZE): (-1 == 4096)
Aborted


Bug analysis
=======
Here is part of the log:
20120111 08:44:41.423 DEBUG    hlfslog- [/home/jiawei/workshop15/snapshot/src/backend/local_storage.c][local_file_pread][137]local -- leave func local_file_pread
20120111 08:44:41.423 ERROR    hlfslog- [/home/jiawei/workshop15/snapshot/src/utils/misc.c][read_block][104]bs_file_pread's size is not equal to block_size
20120111 08:44:41.423 DEBUG    hlfslog- [/home/jiawei/workshop15/snapshot/src/backend/local_storage.c][local_file_close][94]local -- enter func local_file_close
20120111 08:44:41.423 DEBUG    hlfslog- [/home/jiawei/workshop15/snapshot/src/backend/local_storage.c][local_file_close][98]local -- leave func local_file_close
20120111 08:44:41.423 DEBUG    hlfslog- [/home/jiawei/workshop15/snapshot/src/utils/misc.c][read_block][112]leave func read_block
20120111 08:44:41.424 ERROR    hlfslog- [/home/jiawei/workshop15/snapshot/src/common/logger.c][load_block_by_no][228]can not read block for storage address 25308
20120111 08:44:41.424 ERROR    hlfslog- [/home/jiawei/workshop15/snapshot/src/storage/hlfs_write.c][hlfs_write][70]can not read logic block: 1

The read_block source is as follows:
char *read_block(struct back_storage *storage, uint64_t storage_address, uint32_t block_size)
{
    HLOG_DEBUG("enter func %s", __func__);
    int ret = 0;
    uint32_t offset = get_offset(storage_address);
    char segfile_name[SEGMENT_FILE_NAME_MAX];   /* not const: build_segfile_name writes into it */
    ret = build_segfile_name(get_segno(storage_address), segfile_name);
    if (-1 == ret) {
        HLOG_ERROR("build_segfile_name error");
        return NULL;
    }

    bs_file_t file = storage->bs_file_open(storage, segfile_name, BS_READONLY);
    if (file == NULL) {
        HLOG_ERROR("can not open segment file %s", segfile_name);
        return NULL;
    }

    char *block = (char *)g_malloc0(block_size);
    if (NULL == block) {
        HLOG_ERROR("Allocate Error!");
        goto out;
    }
    if (block_size != storage->bs_file_pread(storage, file, block, block_size, offset)) {
        HLOG_ERROR("bs_file_pread's size is not equal to block_size");
        g_free(block);
        block = NULL;
    }

out:
    storage->bs_file_close(storage, file);
    HLOG_DEBUG("leave func %s", __func__);
    return block;
}

Clearly something went wrong while reading this segment file: fewer than block_size bytes were read, which caused the error.

Current status of this bug
==========
This bug is not fixed yet; I will find the cause and fix it as soon as possible. Anyone interested is welcome to help fix it ;-)


Original issue reported on code.google.com by [email protected] on 11 Jan 2012 at 12:26

The dead path in configure file of HLFS driver for QEMU

What steps will reproduce the problem?
1. git clone git://git.qemu.org/qemu.git
2. cd qemu
3. cp ../hlfs/patches/hlfs_driver_for_qemu.patch ./ 
4. patch -p1 < hlfs_driver_for_qemu.patch
5. Modify the dead path

What is the expected output? What do you see instead?
Expected:
The user should not have to modify this hard-coded path by hand.

Instead:
Currently, these paths must be modified manually.

Original issue reported on code.google.com by [email protected] on 21 Jan 2013 at 6:46

Check HLFS Patch for QEMU, which fails with errors when executing "git apply hlfs_driver_for_qemu.patch"

What steps will reproduce the problem?
1. git clone git://git.qemu.org/qemu.git
2. cd qemu
3. cp ../hlfs/patches/hlfs_driver_for_qemu.patch ./ 
4. git apply hlfs_driver_for_qemu.patch

NOTE: Step 3 simply copies the HLFS patch for QEMU into the qemu directory.

What is the expected output? What do you see instead?
Expected output:
HLFS patch was patched well.

See instead:
error: patch failed: configure:223
error: configure: patch does not apply



Original issue reported on code.google.com by [email protected] on 21 Jan 2013 at 4:32

QEMU VM hangs up when install OS to hlfs image

What steps will reproduce the problem?
1. apply HLFS patch to QEMU.
2. Run a VM with HLFS image.
3. Install the OS to HLFS image.

What is the expected output? What do you see instead?
The VM hangs up.

Please use labels and text to provide additional information.


Original issue reported on code.google.com by [email protected] on 21 Jan 2013 at 1:30

Environment variable CLASSPATH not set!

What steps will reproduce the problem?
  0. Build QEMU with hlfs driver(http://code.google.com/p/cloudxy/wiki/HLFS_SUPPORT_QEMU) && Build HLFS ENV.(http://code.google.com/p/cloudxy/wiki/HlfsUserManual) && Build hadoop pesudo ENV.( https://ccp.cloudera.com/display/CDHDOC/CDH3+Installation)
  1. git clone git://libvirt.org/libvirt.git
  2. git reset --hard v1.0.1
  3. wget http://cloudxy.googlecode.com/svn/branches/hlfs/person/harry/hlfs/patches/hlfs_driver_for_libvirt_network_disk.patch
  4. git apply hlfs_driver_for_libvirt_network_disk.patch
  5. ./autogen.sh
  6. ./configure
  7. make
  8. sudo make install
  9. hadoop fs -mkdir /tmp/testenv
  10. wget http://cloudxy.googlecode.com/files/linux-0.2.img.zip
  11. unzip linux-0.2.img.zip
  12. qemu-img convert linux-0.2.img hlfs:hdfs:///tmp/testenv/testfs
  13. virsh create hlfs_hdfs.xml
$ cat > hlfs_hdfs.xml 
<domain type='qemu'>
<name>testvm</name>
<memory>1048576</memory>
<os>
<type arch='x86_64'>hvm</type>
</os>
<devices>
<disk type='network'>
<source protocol="hlfs" name="hdfs:///tmp/testenv/testfs"/>
<target dev='hda' bus='ide'/>
</disk>
<graphics type='vnc' port='-1' autoport='yes'/>
</devices>
</domain>


What is the expected output? What do you see instead?
Expected output:
create a VM successfully.

See instead:
$ ./virsh create hlfs_hdfs.xml 
error:create domain failure from hlfs_hdfs.xml 
error:internal error process exited while connecting to monitor: ** Message: enter func bdrv_hlbs_init
** Message: leave func bdrv_hlbs_init
enter func bdrv_new
leave func bdrv_new
999 enter func bdrv_open
999$$ filename is hlfs:hdfs:///tmp/testenv/testfs
999 enter func find_image_format
enter func bdrv_new
leave func bdrv_new
** Message: enter func hlbs_open
** Message: 999 filename is hlfs:hdfs:///tmp/testenv/testfs
** Message: 999 filename is hdfs:///tmp/testenv/testfs
** Message: enter func parse_vdiname
** Message: 999 filename is hdfs:///tmp/testenv/testfs
** Message: leave func parse_vdiname
enter func init_storage_handler
loc [fs:testfs], 

uri:hdfs:///tmp/testenv/testfs,head:hdfs,dir:/tmp/testenv,fsname:testfs,hostname:default,port:0,user:kanghua
Environment variable CLASSPATH not set!
fs is null, hdfsConnect error!
ret is not 0, so error happened
leave func init_storage_handler
[uri:hdfs:///tmp/testenv/testfs] can not accessable
init raw hlfs ctrl failed
enter func hlfs_open
error params :flag 1
**
ERROR:block/hlfs.c:

NOTE: Actually, my CLASSPATH is set correctly, as follows.
$ cat /home/jiawei/.bashrc
[...]
export HLFS_HOME=/home/jiawei/workshop3/hlfs
export LOG_HOME=$HLFS_HOME/3part/log
export SNAPPY_HOME=$HLFS_HOME/3part/snappy
export HADOOP_HOME=$HLFS_HOME/3part/hadoop
export JAVA_HOME=/usr/lib/jvm/java-6-sun
export PATH=/usr/bin/:/usr/local/bin/:/bin/:/usr/sbin/:/sbin/:$JAVA_HOME/bin/
#export LD_LIBRARY_PATH=$JAVAHOME/lib
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/i386/server/:$HADOOP_HOME/lib32/:$LOG_HOME/lib32/:$SNAPPY_HOME/lib32/:$HLFS_HOME/output/lib32/:/usr/lib/
export PKG_CONFIG_PATH=/usr/lib/pkgconfig/:/usr/share/pkgconfig/
export CFLAGS="-L/usr/lib -L/lib -L/usr/lib64"
export CXXFLAGS="-L/usr/lib -L/lib"
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/htmlconverter.jar:$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib/charsets.jar:$JAVA_HOME/jre/lib/deploy.jar:$JAVA_HOME/jre/lib/javaws.jar:$JAVA_HOME/jre/lib/jce.jar:$JAVA_HOME/jre/lib/jsse.jar:$JAVA_HOME/jre/lib/management-agent.jar:$JAVA_HOME/jre/lib/plugin.jar:$JAVA_HOME/jre/lib/resources.jar:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/jre/lib/:$JAVA_HOME/lib/:/usr/lib/hadoop-0.20/conf:/usr/lib/jvm/java-6-sun/lib/tools.jar:/usr/lib/hadoop-0.20:/usr/lib/hadoop-0.20/hadoop-core-0.20.2-cdh3u2.jar:/usr/lib/hadoop-0.20/lib/ant-contrib-1.0b3.jar:/usr/lib/hadoop-0.20/lib/aspectjrt-1.6.5.jar:/usr/lib/hadoop-0.20/lib/aspectjtools-1.6.5.jar:/usr/lib/hadoop-0.20/lib/commons-cli-1.2.jar:/usr/lib/hadoop-0.20/lib/commons-codec-1.4.jar:/usr/lib/hadoop-0.20/lib/commons-daemon-1.0.1.jar:/usr/lib/hadoop-0.20/lib/commons-el-1.0.jar:/usr/lib/hadoop-0.20/lib/commons-httpclient-3.1.jar:/usr/lib/hadoop-0.20/lib/commons-logging-1.0.4.jar:/usr/lib/hadoop-0.20/lib/commons-logging-api-1.0.4.jar:/usr/lib/hadoop-0.20/lib/commons-net-1.4.1.jar:/usr/lib/hadoop-0.20/lib/core-3.1.1.jar:/usr/lib/hadoop-0.20/lib/hadoop-fairscheduler-0.20.2-cdh3u2.jar:/usr/lib/hadoop-0.20/lib/hsqldb-1.8.0.10.jar:/usr/lib/hadoop-0.20/lib/jackson-core-asl-1.5.2.jar:/usr/lib/hadoop-0.20/lib/jackson-mapper-asl-1.5.2.jar:/usr/lib/hadoop-0.20/lib/jasper-compiler-5.5.12.jar:/usr/lib/hadoop-0.20/lib/jasper-runtime-5.5.12.jar:/usr/lib/hadoop-0.20/lib/jets3t-0.6.1.jar:/usr/lib/hadoop-0.20/lib/jetty-6.1.26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/jetty-servlet-tester-6.1.26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/jetty-util-6.1.26.cloudera.1.jar:/usr/lib/hadoop-0.20/lib/jsch-0.1.42.jar:/usr/lib/hadoop-0.20/lib/junit-4.5.jar:/usr/lib/hadoop-0.20/lib/kfs-0.2.2.jar:/usr/lib/hadoop-0.20/lib/log4j-1.2.15.jar:/usr/lib/hadoop-0.20/lib/mockito-all-1.8.2.jar:/usr/lib/hadoop-0.20/lib/oro-2.0.8.jar:/usr/lib/hadoop-0.20/lib/servlet-api-2.5-20081211.jar:/usr/lib/hadoop-0.20/lib/servlet-api-2.5-6.1.14.jar:/usr/lib/hadoop-0.20/lib/slf4j-api-1.4.3.jar:/usr/lib/hadoop-0.20/lib/slf4j-log4j12-1.4.3.jar:/usr/lib/hadoop-0.20/lib/xmlenc-0.52.jar:/usr/lib/hadoop-0.20/lib/jsp-2.1/jsp-2.1.jar:/usr/lib/hadoop-0.20/lib/jsp-2.1/jsp-api-2.1.jar


Original issue reported on code.google.com by [email protected] on 9 Feb 2013 at 1:28

Bug when compiling libhlfs; fix it by yourself

What steps will reproduce the problem?
1. cd hlfs/build
2. cmake ../src/
3. make all

What is the expected output? What do you see instead?

Expected output
-----------------
Scanning dependencies of target hlfs
[  5%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_write.c.o
[ 11%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_open.c.o
[ 16%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_read.c.o
[ 22%] Building C object CMakeFiles/hlfs.dir/storage/deinit_hlfs.c.o
[ 27%] Building C object CMakeFiles/hlfs.dir/storage/log_write_task.c.o
[ 33%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_ctrl.c.o
[ 38%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_stat.c.o
[ 44%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_close.c.o
[ 50%] Building C object CMakeFiles/hlfs.dir/storage/init_hlfs.c.o
[ 55%] Building C object CMakeFiles/hlfs.dir/common/logger.c.o
[ 61%] Building C object CMakeFiles/hlfs.dir/backend/local_storage.c.o
[ 66%] Building C object CMakeFiles/hlfs.dir/backend/hdfs_storage.c.o
[ 72%] Building C object CMakeFiles/hlfs.dir/clean/clean_route.c.o
[ 77%] Building C object CMakeFiles/hlfs.dir/utils/segment_cleaner.c.o
[ 83%] Building C object CMakeFiles/hlfs.dir/utils/misc.c.o
[ 88%] Building C object CMakeFiles/hlfs.dir/utils/address.c.o
[ 94%] Building C object CMakeFiles/hlfs.dir/utils/storage_helper.c.o
Linking C shared library /home/jiawei/workshop1/cloudxy/trunk/hlfs/output/lib32/libhlfs.so
[ 94%] Built target hlfs
Scanning dependencies of target mkfs.hlfs
[100%] Building C object CMakeFiles/mkfs.hlfs.dir/tools/hlfs_mkfs.c.o
Linking C executable /home/jiawei/workshop1/cloudxy/trunk/hlfs/output/bin/mkfs.hlfs
[100%] Built target mkfs.hlfs
Scanning dependencies of target all
[100%] Built target all
Built target hlfs
Built target mkfs.hlfs
Scanning dependencies of target nbd_ops
Building C object CMakeFiles/nbd_ops.dir/tools/nbd_ops.c.o
Linking C executable /home/jiawei/workshop1/cloudxy/trunk/hlfs/output/bin/nbd_ops
Built target nbd_ops
Scanning dependencies of target seg.clean
Building C object CMakeFiles/seg.clean.dir/tools/hlfs_seg_clean.c.o
Linking C executable /home/jiawei/workshop1/cloudxy/trunk/hlfs/output/bin/seg.clean
Built target seg.clean
Scanning dependencies of target segcalc.hlfs
Building C object CMakeFiles/segcalc.hlfs.dir/tools/hlfs_seg_usage_calc.c.o
Linking C executable /home/jiawei/workshop1/cloudxy/trunk/hlfs/output/bin/segcalc.hlfs
Built target segcalc.hlfs
Scanning dependencies of target tapdisk_ops
Building C object CMakeFiles/tapdisk_ops.dir/tools/tapdisk_ops.c.o
Linking C executable /home/jiawei/workshop1/cloudxy/trunk/hlfs/output/bin/tapdisk_ops
Built target tapdisk_ops


See instead
--------------
Scanning dependencies of target hlfs
[  5%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_write.c.o
[ 11%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_open.c.o
[ 16%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_read.c.o
[ 22%] Building C object CMakeFiles/hlfs.dir/storage/deinit_hlfs.c.o
[ 27%] Building C object CMakeFiles/hlfs.dir/storage/log_write_task.c.o
[ 33%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_ctrl.c.o
[ 38%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_stat.c.o
[ 44%] Building C object CMakeFiles/hlfs.dir/storage/hlfs_close.c.o
[ 50%] Building C object CMakeFiles/hlfs.dir/storage/init_hlfs.c.o
[ 55%] Building C object CMakeFiles/hlfs.dir/common/logger.c.o
[ 61%] Building C object CMakeFiles/hlfs.dir/backend/local_storage.c.o
[ 66%] Building C object CMakeFiles/hlfs.dir/backend/hdfs_storage.c.o
[ 72%] Building C object CMakeFiles/hlfs.dir/clean/clean_route.c.o
[ 77%] Building C object CMakeFiles/hlfs.dir/utils/segment_cleaner.c.o
[ 83%] Building C object CMakeFiles/hlfs.dir/utils/misc.c.o
[ 88%] Building C object CMakeFiles/hlfs.dir/utils/address.c.o
[ 94%] Building C object CMakeFiles/hlfs.dir/utils/storage_helper.c.o
Linking C shared library
/home/jiawei/workshop1/cloudxy/trunk/hlfs/output/lib32/libhlfs.so
[ 94%] Built target hlfs
Scanning dependencies of target mkfs.hlfs
[100%] Building C object CMakeFiles/mkfs.hlfs.dir/tools/hlfs_mkfs.c.o
Linking C executable
/home/jiawei/workshop1/cloudxy/trunk/hlfs/output/bin/mkfs.hlfs
/usr/bin/ld: warning: libexpat.so.1, needed by
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so,
not found (try using -rpath or -rpath-link)
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_SetEndElementHandler'
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_ParserCreate'
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_GetErrorCode'
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_SetUserData'
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_GetCurrentColumnNumber'
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_ParseBuffer'
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_GetCurrentLineNumber'
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_SetCommentHandler'
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_ErrorString'
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_ParserFree'
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_SetStartElementHandler'
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_Parse'
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_GetBuffer'
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_GetCurrentByteIndex'
/home/jiawei/workshop1/cloudxy/trunk/hlfs/build/../3part/log/lib32/liblog4c.so:
undefined reference to `XML_SetCharacterDataHandler'
collect2: ld returned 1 exit status
make[3]: *** [/home/jiawei/workshop1/cloudxy/trunk/hlfs/output/bin/mkfs.hlfs]
Error 1
make[2]: *** [CMakeFiles/mkfs.hlfs.dir/all] Error 2
make[1]: *** [CMakeFiles/all.dir/rule] Error 2
make: *** [all] Error 2


How to fix it?
1. cd /usr/lib/
2. sudo ln -s ../../lib/libexpat.so.0.5.0 libexpat.so.1


Original issue reported on code.google.com by [email protected] on 17 Dec 2011 at 5:19

Install OS with "qemu-system-x86_64" command into HLFS backend storage

What steps will reproduce the problem?
1. git clone git://git.qemu.org/qemu.git
2. cd qemu
3. git reset --hard v1.3.0
4. cp ../hlfs/patches/hlfs_driver_for_qemu.patch ./
5. git apply hlfs_driver_for_qemu.patch
6. Modify the dead path
7. ./configure
8. make
9. cd x86_64-softmmu
10. ./qemu-system-x86_64 -hda hlfs:local:///tmp/testenv/testfs -cdrom 
/home/jiawei/test/ubuntu-12.04.1-desktop-amd64.iso -boot d -m 512 -no-acpi

What is the expected output? What do you see instead?
Expected output:
Install OS correctly and produce some hlfs segment files into HLFS backend 
storage dir.

See instead:
[...]
do not need flush now
-- flush worker doing --
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723547552
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723548552
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723549553
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723550553
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723551553
 time wait res for cond is :0 !
--blocks_count:0,buff_len:0--
do not need flush now
-- flush worker doing --
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723552553
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723553553
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723554553
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723555553
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723556554
 time wait res for cond is :0 !
--blocks_count:0,buff_len:0--
[...]

NOTE: The following words are printed circularly.

--blocks_count:0,buff_len:0--
do not need flush now
-- flush worker doing --
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723672567
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723673567
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723674567
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723675567
 we should do clean in silent period ;access timestamp:0,cur timestamp:1359723676567


Original issue reported on code.google.com by [email protected] on 1 Feb 2013 at 1:09

[RFC]restart hlfs

After HLFS is unmounted and then mounted again, the unmount-to-remount
sequence can take many forms. Below I list a few examples. I am not yet
sure which handling scheme is best, and I hope everyone can discuss it
so we can settle the question together.

Note: the reason several schemes are possible at all is that HLFS
currently operates on a single file, so the context does not need to be
fully replaced.

Scheme 1:

mount hlfs:   init_hlfs ------>  hlfs_open  ......
unmount hlfs: hlfs_close ------->  deinit_hlfs ......
mount hlfs:   init_hlfs ------>  hlfs_open  ......
unmount hlfs: hlfs_close ------->  deinit_hlfs ......
.....

Scheme 2:

mount hlfs:   init_hlfs ------>  hlfs_open  ......
unmount hlfs: hlfs_close  ......
mount hlfs:   hlfs_open  ......
unmount hlfs: hlfs_close  ......
.....
mount hlfs:   hlfs_open .......
.....

Analysis of the two schemes
==============
The root reason both schemes are possible is that HLFS currently
targets a single file, and for the cloudxy project a single file may
well be enough. Scheme 1 is the general approach and works for any
file system. Scheme 2, in my view, only suits a single-file system
like our current one, where the context does not need to be fully
replaced; it is also quite convenient. Still, we have to pick one
scheme. I am currently using Scheme 2, which is a bit of a hack.
Please discuss, and I hope you will propose your own scheme.
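To make the two call sequences concrete, here is a compilable sketch. The struct layout and function bodies below are hypothetical stand-ins for the real HLFS entry points, kept only so the two sequences can run side by side:

```c
#include <stddef.h>

/* Stub context and entry points; the real ones live in src/storage/. */
typedef struct hlfs_ctrl { int initialized; int opened; } hlfs_ctrl;
static hlfs_ctrl g_ctrl;

static hlfs_ctrl *init_hlfs(const char *uri) { (void)uri; g_ctrl.initialized = 1; return &g_ctrl; }
static void deinit_hlfs(hlfs_ctrl *c) { c->initialized = 0; }
static int hlfs_open(hlfs_ctrl *c, int rw) { (void)rw; return c->initialized ? (c->opened = 1, 0) : -1; }
static int hlfs_close(hlfs_ctrl *c) { c->opened = 0; return 0; }

/* Scheme 1: every unmount tears the context down completely. */
int run_scheme1(void) {
    hlfs_ctrl *c = init_hlfs("local:///tmp/testfs");
    if (hlfs_open(c, 1) != 0) return -1;
    hlfs_close(c);
    deinit_hlfs(c);                        /* unmount: full teardown */
    c = init_hlfs("local:///tmp/testfs");  /* remount: re-initialize */
    if (hlfs_open(c, 1) != 0) return -1;
    hlfs_close(c);
    deinit_hlfs(c);
    return 0;
}

/* Scheme 2: the context survives; a remount is just another open. */
int run_scheme2(void) {
    hlfs_ctrl *c = init_hlfs("local:///tmp/testfs");
    if (hlfs_open(c, 1) != 0) return -1;
    hlfs_close(c);                         /* unmount keeps the context */
    if (hlfs_open(c, 1) != 0) return -1;   /* remount reuses it         */
    hlfs_close(c);
    deinit_hlfs(c);                        /* final teardown            */
    return 0;
}
```

The sketch shows the asymmetry under discussion: in Scheme 2 the context allocated by init_hlfs outlives individual mounts, which only works because one context maps to one file.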




Original issue reported on code.google.com by [email protected] on 30 Jan 2012 at 5:50

[BUG] hlfs_open error on first start

Bug overview
========
When HLFS starts for the first time, we call
find_lastest_alive_snapshot(...). At that point the alive_snapshot.txt
file does not exist yet, so find_lastest_alive_snapshot reports an
error on the first start.

Test case
========
#include <glib.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include "api/hlfs.h"
#include "hlfs_log.h"
#include "storage.h"
#include "storage_helper.h"

#define REQ_SIZE 4096
#define TOTAL_SIZE 40960

typedef struct {
    struct hlfs_ctrl *ctrl;
    char *uri;
} Fixture;

static void
hlfs_take_snapshot_setup(Fixture *fixture, const void *data) {
    const char *test_dir = (const char *)data;
    g_print("test env dir is %s\n", test_dir);
    char *fs_dir = g_build_filename(test_dir, "testfs", NULL);
//    g_assert(g_mkdir(fs_dir, 0700) == 0);
    char *uri = g_malloc0(128);
    g_assert(uri != NULL);
    snprintf(uri, 128, "%s%s", "local://", fs_dir);
//    char *uri = g_build_path(tmp, fs_dir, NULL);
    g_print("uri is %s\n", uri);
    int status;
    char cmd[256];
    memset(cmd, 0, sizeof(cmd));
    snprintf(cmd, sizeof(cmd), "%s %s %s %s %d %s %d %s %d", "../mkfs.hlfs",
                                "-u", uri,
                                "-b", 8192,
                                "-s", 67108864,
                                "-m", 1024);
    g_message("cmd is [%s]", cmd);
    status = system(cmd);
#if 0
    GKeyFile *sb_keyfile = g_key_file_new();
    g_key_file_set_string(sb_keyfile, "METADATA", "uri", uri);
    g_key_file_set_integer(sb_keyfile, "METADATA", "block_size", 8196);
    g_key_file_set_integer(sb_keyfile, "METADATA", "segment_size", 67108864);
    g_key_file_set_integer(sb_keyfile, "METADATA", "max_fs_size", 671088640);
    gchar *content = g_key_file_to_data(sb_keyfile, NULL, NULL);
    char *sb_file_path = g_build_filename(fs_dir, "superblock", NULL);
    g_print("sb file path is %s\n", sb_file_path);
    GError *error = NULL;
    if (TRUE != g_file_set_contents(sb_file_path, content, strlen(content) + 1, &error)) {
        g_print("error msg is %s", error->message);
        error = NULL;
    }
#endif
    fixture->uri = uri;
    g_print("fixture->uri is %s\n", fixture->uri);
    fixture->ctrl = init_hlfs(fixture->uri);
    g_assert(fixture->ctrl != NULL);
    int ret = 0;
    ret = hlfs_open(fixture->ctrl, 1);
    g_message("ret is %d", ret);
    g_assert(ret == 0);
//    g_key_file_free(sb_keyfile);
//    g_free(sb_file_path);
    g_free(fs_dir);
    return ;
}

static void
do_snapshot(Fixture *fixture, int i) {
    /* Snapshot names for rounds 0..9; round 8 deliberately reuses "T5"
     * and expects hlfs_take_snapshot() to fail with -2 (duplicate name). */
    static const char *names[] = { "T0", "T1", "T2", "T3", "T4",
                                   "T5", "T6", "T7", "T5", "T9" };
    if (i < 0 || i > 9)
        return;
    g_message("%d buffer is [%s]", i, names[i]);
    int ret = hlfs_take_snapshot(fixture->ctrl, (char *) names[i]);
    g_message("ret is %d", ret);
    g_assert(ret == (8 == i ? -2 : 0));
    return;
}

static void
do_snapshot1(Fixture *fixture, int i) {
    /* Second round of names; round 8 again reuses an existing name
     * ("T15") and expects -2 (duplicate snapshot name). */
    static const char *names[] = { "T10", "T11", "T12", "T13", "T14",
                                   "T15", "T16", "T17", "T15", "T19" };
    if (i < 0 || i > 9)
        return;
    g_message("%d buffer is [%s]", i, names[i]);
    int ret = hlfs_take_snapshot(fixture->ctrl, (char *) names[i]);
    g_message("ret is %d", ret);
    g_assert(ret == (8 == i ? -2 : 0));
    return;
}

static void
test_hlfs_take_snapshot(Fixture *fixture, const void *data) {
    char content[REQ_SIZE];
    int offset = 0;
    int i = 0;

    memset(content, 0, REQ_SIZE);
    while (offset <= TOTAL_SIZE) {
        int ret1 = hlfs_write(fixture->ctrl, content, REQ_SIZE, offset);
        g_assert_cmpint(ret1, ==, REQ_SIZE);
        do_snapshot(fixture, i);
        offset += REQ_SIZE;
        i += 1;
    }
    hlfs_close(fixture->ctrl);
    hlfs_open(fixture->ctrl, 1);
    offset = 0;
    i = 0;

    memset(content, 0, REQ_SIZE);
    while (offset <= TOTAL_SIZE) {
        int ret1 = hlfs_write(fixture->ctrl, content, REQ_SIZE, offset);
        g_assert_cmpint(ret1, ==, REQ_SIZE);
        do_snapshot1(fixture, i);
        offset += REQ_SIZE;
        i += 1;
    }

    return ;
}

static void
hlfs_take_snapshot_tear_down(Fixture *fixture, const void *data) {
    const char *test_dir = (const char *) data;
    g_print("clean dir path: %s\n", test_dir);
    char *fs_dir = g_build_filename(test_dir, "testfs", NULL);
#if 0
    pid_t status;
    const char cmd[256];
    memset((char *) cmd, 0, 256);
    sprintf((char *) cmd, "%s %s %s", "rm", "-r", fs_dir);
    g_message("cmd is [%s]", cmd);
    status = system(cmd);

    struct back_storage *storage = init_storage_handler(fixture->uri);
    g_assert(storage != NULL);
    int nums = 0;
    bs_file_info_t *infos = storage->bs_file_list_dir(storage, ".", &nums);
    g_assert(infos != NULL);
    bs_file_info_t *info = infos;
    int i = 0;
    g_message("nums is %d", nums);
    for (i = 0; i < nums; i++) {
        g_message("info name is %s", info->name);
        char *tmp_file = g_build_filename(fs_dir, info->name, NULL);
        g_message("tmp file name is [%s]", tmp_file);
        g_assert(g_remove(tmp_file) == 0);
        g_free(tmp_file);
        info += 1;
    }
//    char *sb_file = g_build_filename(fs_dir, "superblock", NULL);
//    g_assert(g_remove(sb_file) == 0);
    g_assert(g_remove(fs_dir) == 0);
    g_free(fixture->uri);
    g_free(fs_dir);
//    g_free(sb_file);
    g_free(storage);
    g_free(infos);
#endif
    g_free(fs_dir);
    g_free(fixture->uri);
    hlfs_close(fixture->ctrl);
    deinit_hlfs(fixture->ctrl);
    return;
}

int main(int argc, char **argv) {
    if (log4c_init()) {
        g_message("log4c init error!");
    }
    g_test_init(&argc, &argv, NULL);
    g_test_add("/misc/hlfs_take_snapshot",
                Fixture,
                g_get_current_dir(),
                hlfs_take_snapshot_setup,
                test_hlfs_take_snapshot,
                hlfs_take_snapshot_tear_down);
    return g_test_run();
}

Output
========
jiawei@jiawei-laptop:~/workshop15/cloudxy1/branches/snapshot/src/snapshot/unitte
st/build$ ./test_hlfs_take_snapshot
/misc/hlfs_take_snapshot: test env dir is 
/home/jiawei/workshop15/cloudxy1/branches/snapshot/src/snapshot/unittest/build
uri is 
local:///home/jiawei/workshop15/cloudxy1/branches/snapshot/src/snapshot/unittest
/build/testfs
** Message: cmd is [../mkfs.hlfs -u 
local:///home/jiawei/workshop15/cloudxy1/branches/snapshot/src/snapshot/unittest
/build/testfs -b 8192 -s 67108864 -m 1024]
** Message: can not mkdir for our fs 
local:///home/jiawei/workshop15/cloudxy1/branches/snapshot/src/snapshot/unittest
/build/testfs
fixture->uri is 
local:///home/jiawei/workshop15/cloudxy1/branches/snapshot/src/snapshot/unittest
/build/testfs
** Message: ret is -1
**
ERROR:/home/jiawei/workshop15/cloudxy1/branches/snapshot/src/snapshot/unittest/t
est_hlfs_take_snapshot.c:67:hlfs_take_snapshot_setup: assertion failed: (ret == 
0)
Aborted

HLFS log output
==========
[snipped]
[110]full path 
/home/jiawei/workshop15/cloudxy1/branches/snapshot/src/snapshot/unittest/build/t
estfs/alive_snapshot.txt
20120130 06:36:06.591 ERROR    hlfslog- 
[/home/jiawei/workshop15/cloudxy1/branches/snapshot/src/utils/storage_helper.c][
file_get_contents][551]file is not exist
20120130 06:36:06.591 DEBUG    hlfslog- 
[/home/jiawei/workshop15/cloudxy1/branches/snapshot/src/backend/local_storage.c]
[local_file_close][95]local -- enter func local_file_close
20120130 06:36:06.591 DEBUG    hlfslog- 
[/home/jiawei/workshop15/cloudxy1/branches/snapshot/src/backend/local_storage.c]
[local_file_close][99]local -- leave func local_file_close
20120130 06:36:06.591 ERROR    hlfslog- 
[/home/jiawei/workshop15/cloudxy1/branches/snapshot/src/snapshot/snapshot_helper
.c][load_all_alive_snapshot][274]can not read snapshot content!

Bug analysis
=======
On the very first start we check whether alive_snapshot.txt exists and
return an error if it does not. On a first start the file certainly
does not exist yet, which is why the HLFS log prints the errors shown
above.

Fix options
========
1. Create the file in advance.
2. If the file is found to be missing, treat it as a first start and
   set alive_ss_name to NULL.
......
I currently use option 2: when the file does not exist, alive_ss_name
is set to NULL, meaning there is no snapshot yet; when the first
snapshot is taken, its up name is NULL, and alive_ss_name is then
updated. This option may still be problematic, though, because the
file can be missing in more cases than just the first start.
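A minimal sketch of option 2, under stated assumptions: load_alive_snapshot_name and the file_get_contents stand-in below are hypothetical names, placeholders for the real code in snapshot_helper.c and storage_helper.c. The point is only that a missing alive_snapshot.txt becomes "no snapshot yet" instead of an error:

```c
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for storage_helper.c's file_get_contents(): returns the file
 * contents as a malloc'd string, or NULL when the file does not exist. */
static char *file_get_contents_stub(const char *path) {
    FILE *f = fopen(path, "rb");
    if (f == NULL)
        return NULL;                      /* file absent */
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    fseek(f, 0, SEEK_SET);
    char *buf = malloc((size_t)len + 1);
    if (buf == NULL) { fclose(f); return NULL; }
    if (len > 0 && fread(buf, 1, (size_t)len, f) != (size_t)len) {
        free(buf);
        fclose(f);
        return NULL;
    }
    buf[len] = '\0';
    fclose(f);
    return buf;
}

/* Option 2: a missing alive_snapshot.txt is not an error; it just means
 * no snapshot has been taken yet, so *alive_ss_name becomes NULL. */
int load_alive_snapshot_name(const char *path, char **alive_ss_name) {
    char *content = file_get_contents_stub(path);
    if (content == NULL) {
        *alive_ss_name = NULL;            /* first start: no snapshot */
        return 0;                         /* no longer an error       */
    }
    *alive_ss_name = content;             /* caller frees it          */
    return 0;
}
```

As the report notes, the remaining risk is that the file can also be missing for reasons other than a first start (e.g. accidental deletion), and this sketch cannot tell those cases apart.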

Original issue reported on code.google.com by [email protected] on 30 Jan 2012 at 7:29
