s3backer's Issues

Abort trap doing anything on Mac OS X 10.6

What steps will reproduce the problem?
1. Install fuse etc. with MacPorts
2. Download and build s3backer-1.3.1
3. Try the example command at http://code.google.com/p/s3backer/wiki/CreatingANewFilesystem

I modified the Makefile to remove -O3 and ran s3backer under GDB to get a backtrace. Here's the result:

(gdb) run --blockSize=128k --size=1t --listBlocks rptb1-backup-test /Users/rb/tmp/mnt
Starting program: /Users/rb/opt/s3backer-1.3.1/s3backer --blockSize=128k --size=1t --listBlocks rptb1-backup-test /Users/rb/tmp/mnt
Reading symbols for shared libraries .+++++++......... done
s3backer: auto-detecting block size and total file size...

Program received signal SIGABRT, Aborted.
0x00007fff8315e3d6 in __kill ()
(gdb) bt
#0  0x00007fff8315e3d6 in __kill ()
#1  0x00007fff831fe913 in __abort ()
#2  0x00007fff831e2ff0 in __stack_chk_fail ()
#3  0x000000010000cb66 in http_io_get_auth (buf=0x7fff5fbf1960 'A' <repeats 27 
times>, "=", bufsiz=200, config=0x10001aca0, method=0x100014a5f "HEAD", 
ctype=0x0, md5=0x0, date=0x7fff5fbf1a30 "Tue, 21 Sep 2010 22:23:00 GMT", 
headers=0x0, resource=0x7fff5fbf17c9 "/00000000") at http_io.c:1530
#4  0x0000000100009b1f in http_io_detect_sizes (s3b=0x100506220, 
file_sizep=0x7fff5fbf1c08, block_sizep=0x7fff5fbf1c3c) at http_io.c:664
#5  0x0000000100010890 in validate_config () at s3b_config.c:1118
#6  0x000000010000e10a in s3backer_get_config (argc=6, argv=0x7fff5fbff4b0) at 
s3b_config.c:491
#7  0x0000000100001106 in main (argc=6, argv=0x7fff5fbff4b0) at main.c:40

I've tried various combinations of options.  Adding -d, --debug, and 
--debug-http does not produce any extra output.

Original issue reported on code.google.com by [email protected] on 21 Sep 2010 at 10:42

Not sure blockCacheMaxDirty is working

I have blockCacheMaxDirty=10 in /etc/fstab, and "ps auxww | grep s3backer" 
confirms that it was passed to s3backer when it was started.

Nevertheless, although I've unmounted my reiserfs loop filesystem and "sync" 
has successfully run to completion, s3backer has been actively writing 
additional blocks to S3 for several minutes, far more than 10 of them.

It doesn't appear to me that blockCacheMaxDirty is working.

Here are my /etc/fstab entries:

s3backer#jik2-backup-dev /mnt/s3backer-dev fuse accessFile=/etc/passwd-s3fs,compress=9,rrs,blockCacheFile=/var/cache/s3backer-dev-cache,size=100G,blockSize=128k,blockCacheSize=327680,blockCacheThreads=6,blockCacheMaxDirty=10,noatime,noauto
/mnt/s3backer-dev/file /mnt/s3backer-fs reiserfs loop,noatime,noauto

Original issue reported on code.google.com by [email protected] on 20 Oct 2010 at 11:42

Multi-part upload

S3 now supports multipart uploads. I have no exact idea how this might improve s3backer, but I thought it was worth mentioning in case the feature could be employed to make s3backer better/faster/more reliable.

http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?uploadobjusingmpu.html
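
For reference, a minimal sketch of the first of the protocol's three REST steps (initiating the upload) using libcurl; the bucket and key are placeholders and the Authorization/Date headers s3backer normally signs with are omitted, so a real request would be rejected:

    #include <curl/curl.h>

    /* Step 1 of 3: POST /key?uploads returns an UploadId. Parts are then
     * sent with PUT /key?partNumber=N&uploadId=ID, and the upload is
     * finished with POST /key?uploadId=ID listing the part ETags.
     * Placeholder URL; auth headers omitted, so S3 would answer 403. */
    int main(void)
    {
        CURL *curl = curl_easy_init();
        CURLcode rc;

        if (curl == NULL)
            return 1;
        curl_easy_setopt(curl, CURLOPT_URL,
            "https://example-bucket.s3.amazonaws.com/00000000?uploads");
        curl_easy_setopt(curl, CURLOPT_POST, 1L);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, 0L);
        rc = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK ? 0 : 1;
    }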

Original issue reported on code.google.com by [email protected] on 15 Nov 2010 at 11:16

Support for RRS (Reduced Redundancy Storage)

This is a feature request for the new (cheaper) RRS that Amazon started
offering recently:

http://aws.amazon.com/s3/faqs/#How_do_I_specify_that_I_want_to_store_my_data_using_RRS

Objects need to be 'PUT' with a different storage class setting to make use
of the new storage class.
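
For illustration, a hedged libcurl sketch of attaching that storage class to a PUT. The header name x-amz-storage-class and value REDUCED_REDUNDANCY are the documented ones; the URL is a placeholder and the authentication headers are omitted:

    #include <curl/curl.h>

    /* Sketch: PUT an object with the RRS storage class. Placeholder URL;
     * the Authorization/Date headers s3backer normally adds are omitted. */
    int main(void)
    {
        CURL *curl = curl_easy_init();
        struct curl_slist *hdrs = NULL;
        CURLcode rc;

        if (curl == NULL)
            return 1;
        hdrs = curl_slist_append(hdrs, "x-amz-storage-class: REDUCED_REDUNDANCY");
        curl_easy_setopt(curl, CURLOPT_URL,
            "https://example-bucket.s3.amazonaws.com/00000000");
        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);        /* HTTP PUT */
        curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t)0);
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
        rc = curl_easy_perform(curl);
        curl_slist_free_all(hdrs);
        curl_easy_cleanup(curl);
        return rc == CURLE_OK ? 0 : 1;
    }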

Original issue reported on code.google.com by [email protected] on 25 May 2010 at 10:08

Configure doesn't find fuse4x from MacPorts

What steps will reproduce the problem?

1. Install fuse4x via MacPorts.  (MacFUSE project is long dead.)

2. Unpack s3backer and try ./configure.  It will complain there is no libfuse.

As far as I can tell, configure is not using pkg-config to look for fuse.  
pkg-config can find it just fine.

You can work around this with a command like

  CFLAGS='-I /opt/local/include' LIBS='-L /opt/local/lib' ./configure

What version of the product are you using? On what operating system?

s3backer-1.3.2 on Mac OS X 10.7.  uname -a says

Darwin Albatross.local 11.0.0 Darwin Kernel Version 11.0.0: Sat Jun 18 12:57:44 PDT 2011; root:xnu-1699.22.73~1/RELEASE_I386 i386

Please provide any additional information below.

It builds without error. I can run the steps on the Mac wiki page, create a DMG, and put stuff in it. However, everything seems to hang on unmount, and the S3 management console shows no data written. I'll follow up when I've investigated further.

Original issue reported on code.google.com by [email protected] on 4 Aug 2011 at 3:03

Can't compile on a Mac OS X 10.5 (u_int, u_long and u_char undefined)

What steps will reproduce the problem?
1. Download, unpack. 
2. ./configure
3. make
4. Make shows errors (below).

I am on Mac OS X 10.5.

If I add the lines:

typedef unsigned int u_int;
typedef unsigned long u_long;
typedef unsigned char u_char;

to s3backer.h, everything compiles OK.
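
As a quick check of whether the build environment provides these BSD-style aliases at all, a minimal probe program (hypothetical, not part of s3backer) can be compiled before patching the header:

    #include <sys/types.h>   /* should provide u_int, u_long, u_char */
    #include <stdio.h>

    /* Probe: if this compiles, the platform headers already define the
     * BSD typedefs and s3backer.h may only need to include <sys/types.h>;
     * if it fails, the explicit typedefs above are the workaround. */
    int main(void)
    {
        u_int  i = 1;
        u_long l = 2;
        u_char c = 3;

        printf("%u %lu %u\n", i, l, (unsigned)c);
        return 0;
    }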

Original make errors:

In file included from main.c:25:
s3backer.h:84: error: syntax error before ‘u_int’
s3backer.h:84: warning: no semicolon at end of struct or union
s3backer.h:85: warning: type defaults to ‘int’ in declaration of 
‘block_bits’
s3backer.h:85: warning: data definition has no type or storage class
s3backer.h:89: error: syntax error before ‘connect_timeout’
s3backer.h:89: warning: type defaults to ‘int’ in declaration of
‘connect_timeout’
s3backer.h:89: warning: data definition has no type or storage class
s3backer.h:90: error: syntax error before ‘io_timeout’
s3backer.h:90: warning: type defaults to ‘int’ in declaration of 
‘io_timeout’
s3backer.h:90: warning: data definition has no type or storage class
s3backer.h:91: error: syntax error before ‘initial_retry_pause’
s3backer.h:91: warning: type defaults to ‘int’ in declaration of
‘initial_retry_pause’
s3backer.h:91: warning: data definition has no type or storage class
s3backer.h:92: error: syntax error before ‘max_retry_pause’
s3backer.h:92: warning: type defaults to ‘int’ in declaration of
‘max_retry_pause’
s3backer.h:92: warning: data definition has no type or storage class
s3backer.h:93: error: syntax error before ‘min_write_delay’
s3backer.h:93: warning: type defaults to ‘int’ in declaration of
‘min_write_delay’
s3backer.h:93: warning: data definition has no type or storage class
s3backer.h:94: error: syntax error before ‘cache_time’
s3backer.h:94: warning: type defaults to ‘int’ in declaration of 
‘cache_time’
s3backer.h:94: warning: data definition has no type or storage class
s3backer.h:95: error: syntax error before ‘cache_size’
s3backer.h:95: warning: type defaults to ‘int’ in declaration of 
‘cache_size’
s3backer.h:95: warning: data definition has no type or storage class
s3backer.h:97: warning: built-in function ‘log’ declared as non-function
s3backer.h:102: error: syntax error before ‘}’ token
s3backer.h:132: error: syntax error before ‘u_int’
s3backer.h:132: warning: function declaration isn’t a prototype
main.c: In function ‘main’:
main.c:41: error: dereferencing pointer to incomplete type
main.c:41: error: ‘u_long’ undeclared (first use in this function)
main.c:41: error: (Each undeclared identifier is reported only once
main.c:41: error: for each function it appears in.)
main.c:41: error: syntax error before ‘getpid’
main.c:42: error: dereferencing pointer to incomplete type
main.c:42: error: dereferencing pointer to incomplete type
make[1]: *** [main.o] Error 1
make: *** [all] Error 2

Original issue reported on code.google.com by [email protected] on 9 Jul 2008 at 4:19

Wish: throttling option

I'm using s3backer on one of my home servers to rsync my photo collection to S3, but this causes my ssh sessions on other machines to be heavily disturbed (a lot of typing latency). I've tried to throttle rsync (--bwlimit), but since s3backer caches writes, the rsync throttling is not very effective. So I would love to see a (good) throttling/bandwidth-limiting mechanism implemented in s3backer, please ;)
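
For what it's worth, a common way to implement such a limit is a token bucket in front of each upload. A minimal sketch follows (hypothetical, not s3backer code; the rate, burst cap, and 50ms sleep granularity are all assumptions):

    #include <time.h>

    /* Token bucket: throttle_take() blocks until n bytes' worth of
     * tokens have accumulated at rate_bytes_per_sec. */
    struct throttle {
        double rate_bytes_per_sec;
        double tokens;
        struct timespec last;
    };

    static void throttle_take(struct throttle *t, double n)
    {
        struct timespec now, nap = { 0, 50 * 1000 * 1000 };  /* 50ms */

        for (;;) {
            clock_gettime(CLOCK_MONOTONIC, &now);
            t->tokens += t->rate_bytes_per_sec
                * ((now.tv_sec - t->last.tv_sec)
                 + (now.tv_nsec - t->last.tv_nsec) / 1e9);
            t->last = now;
            if (t->tokens > t->rate_bytes_per_sec)   /* cap burst at ~1s */
                t->tokens = t->rate_bytes_per_sec;
            if (t->tokens >= n) {
                t->tokens -= n;
                return;
            }
            nanosleep(&nap, NULL);
        }
    }

    int main(void)
    {
        struct throttle t = { 1024.0 * 1024, 0, { 0, 0 } };  /* 1 MB/s */
        int i;

        clock_gettime(CLOCK_MONOTONIC, &t.last);
        for (i = 0; i < 10; i++)
            throttle_take(&t, 128 * 1024);   /* one 128k block per call */
        return 0;
    }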

Original issue reported on code.google.com by [email protected] on 19 Oct 2009 at 12:46

Attempting to mount initial drive while s3backer is running corrupts data

What steps will reproduce the problem?
1. mount the first filesystem 
2. mount the child filesystem, and create a bunch of data 
3. unmount both filesystems
4. ps -Af | grep s3backer, and note the process is still running (sending data)
5. mount the first filesystem again.  Note that 'file' is now corrupted.

What is the expected output? What do you see instead?
I would expect it to either fail to mount the first filesystem, or wait for it 
to finish sending the necessary data.  As is, it appears to be killing the 
original process.
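
One conventional guard against a second mount racing a still-flushing instance is an exclusive advisory lock. A minimal sketch, hypothetical and not how s3backer currently behaves (the lock file path is an assumption):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    /* Refuse to start if another instance still holds the lock file;
     * the lock is released automatically when that process exits. */
    int main(void)
    {
        int fd = open("/var/run/s3backer-backup.lock", O_RDWR | O_CREAT, 0600);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
            fprintf(stderr, "another s3backer instance is still "
                            "mounted/flushing; refusing to mount\n");
            return 1;
        }
        /* ... proceed with the mount ... */
        close(fd);
        return 0;
    }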

What version of the product are you using? On what operating system?
centos 6, s3backer 1.3.4

Please provide any additional information below.
fstab:
s3backer#backup /s3/dev/s3backer/backup fuse noauto,size=500g,blockSize=1024k,encrypt,compress=9,passwordFile=xxxxxxxxxxxx,accessFile=xxxxxxxxxxxx,blockCacheSize=512000,md5CacheSize=512000,md5CacheTime=0,blockCacheFile=/s3/cache/backup/cachefile,blockCacheWriteDelay=60000,blockCacheThreads=5,timeout=60,listBlocks,rrs,ssl,insecure,vhost  0 0

# backup disk looped onto s3backer mount
/s3/dev/s3backer/backup/file /s3/backup ext4 noauto,loop,noatime,nodiratime,sync 0 0

Original issue reported on code.google.com by [email protected] on 30 Apr 2013 at 2:48

memory leak?

On my s3backer test system, which is backing up several gigabytes of photos to my freshly created s3backer partition, I see a steady increase in memory usage by s3backer. I started monitoring this because the backup process kept crashing due to the unavailability of the loopback mount, which in turn was due to s3backer having stopped (see the dmesg output below; I'm not sure where the start of the error is):

...
nmbd invoked oom-killer: gfp_mask=0x1201d2, order=0, oomkilladj=0
Pid: 3050, comm: nmbd Tainted: G        W  2.6.28.4 #11
Call Trace:
 [<c01345f6>] oom_kill_process+0x4d/0x17c
 [<c01349aa>] out_of_memory+0x133/0x15d
 [<c01364a4>] __alloc_pages_internal+0x2ce/0x373
 [<c0137af8>] __do_page_cache_readahead+0x74/0x152
 [<c0137ea6>] do_page_cache_readahead+0x3d/0x47
 [<c0133f6a>] filemap_fault+0x133/0x2e1
 [<c0123596>] __wake_up_bit+0x25/0x2a
 [<c013c44d>] __do_fault+0x3f/0x2da
 [<c013d617>] handle_mm_fault+0x205/0x423
 [<c01100ba>] do_page_fault+0x238/0x556
 [<c010fe82>] do_page_fault+0x0/0x556
 [<c03c31e2>] error_code+0x6a/0x70
Mem-Info:
DMA per-cpu:
CPU    0: hi:    0, btch:   1 usd:   0
Normal per-cpu:
CPU    0: hi:  186, btch:  31 usd: 130
Active_anon:42305 active_file:38 inactive_anon:42423
 inactive_file:767 unevictable:0 dirty:1 writeback:6 unstable:0
 free:1250 slab:2226 mapped:19 pagetables:617 bounce:0
DMA free:1828kB min:92kB low:112kB high:136kB active_anon:4372kB
inactive_anon:4604kB active_file:12kB inactive_file:216kB unevictable:0kB
present:15868kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 427 427
Normal free:3172kB min:2596kB low:3244kB high:3892kB active_anon:164848kB
inactive_anon:165088kB active_file:140kB inactive_file:2852kB
unevictable:0kB present:437768kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0
DMA: 1*4kB 4*8kB 0*16kB 0*32kB 2*64kB 1*128kB 0*256kB 1*512kB 1*1024kB
0*2048kB 0*4096kB = 1828kB
Normal: 151*4kB 7*8kB 3*16kB 5*32kB 8*64kB 4*128kB 1*256kB 0*512kB 1*1024kB
0*2048kB 0*4096kB = 3172kB
826 total pagecache pages
0 pages in swap cache
Swap cache stats: add 890828, delete 890828, find 45044/67748
Free swap  = 0kB
Total swap = 610252kB
114400 pages RAM
2165 pages reserved
8404 pages shared
102350 pages non-shared
Out of memory: kill process 9655 (s3backer) score 15018 or a child
Killed process 9655 (s3backer)
Buffer I/O error on device loop0, logical block 261160960
...

After that I've watched cat /proc/`pidof s3backer`/status | grep Vm for
some time, and it shows the numbers steadily increasing over time.

I'm no expert at debugging C, so I may be completely wrong about a suspected leak, but the fact remains that s3backer keeps crashing on me on this system.

This is the commandline I use for the s3backer mount:
s3backer --vhost --blockCacheFile=/var/cache/s3backer/s3b-cache --blockCacheSize=256 ******* /mnt/s3backer

Original issue reported on code.google.com by [email protected] on 15 Oct 2009 at 9:30

Block 00000000 disappeared after power outage

What steps will reproduce the problem?
1. Computer was turned off due to an electrical failure
2.
3.

What is the expected output? What do you see instead?
Expected output: to be able to access the filesystem stored with s3backer again.
Current result: s3backer is not able to get the 00000000 file from AWS, since it does not exist.

What version of the product are you using? On what operating system?
1.3.1 on Ubuntu 8.10

Please provide any additional information below.
Everything was working fine until a power outage happened. After that, I tried to mount the file system again with no success. First I thought I had a permissions problem, but after trying some things I saw that the 00000000 block is no longer on AWS, and that is why it is failing.

Original issue reported on code.google.com by [email protected] on 3 Mar 2010 at 2:12

Feature request: change cache size

It would be great to be able to change the size of the on-disk block cache 
between invocations of s3backer without having to throw away the entire cache. 
If the cache size is reduced, it should just be a matter of calling ftruncate 
to chop off the top of it; if it's grown, it should be possible to simply add 
on to the end of the file. It's very bad that if I realize my cache size is 
wrong, I have to throw away the whole thing and start over, which incurs a 
significant performance (and cost) hit while the cache is being repopulated.
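
In the simplest case described above, shrinking would look something like the sketch below: a hypothetical standalone tool that assumes cache entries are laid out sequentially by block number with no trailing metadata, which may not match s3backer's actual cache file format:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Usage: shrinkcache <cachefile> <new_block_count> <block_size>
     * Truncates the cache file to new_block_count * block_size bytes. */
    int main(int argc, char **argv)
    {
        int fd;
        off_t new_size;

        if (argc != 4) {
            fprintf(stderr, "usage: %s <cachefile> <blocks> <blocksize>\n",
                argv[0]);
            return 1;
        }
        new_size = (off_t)strtoull(argv[2], NULL, 10)
                 * (off_t)strtoull(argv[3], NULL, 10);
        fd = open(argv[1], O_RDWR);
        if (fd < 0 || ftruncate(fd, new_size) < 0) {
            perror(argv[1]);
            return 1;
        }
        close(fd);
        return 0;
    }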

Original issue reported on code.google.com by [email protected] on 19 Oct 2010 at 4:27

s3backer never releases memory and reaches a peak after 10-15 days

What steps will reproduce the problem?
1. Remounting s3backer
2.
3.

What is the expected output? What do you see instead?
ps -uH `pidof s3backer`

Output:

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:00 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:00 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:00 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:01 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:01 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:01 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:01 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:01 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:01 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:01 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:01 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:01 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:01 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:01 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:01 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer
root      4485  0.0  2.3 878512 88808 ?        Ssl  01:42   0:01 
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost 
--listBlocks --debug-http quarem3-production-backup /s3backer

--------------------------------------------------------------------------


lsof -p `pidof s3backer`
Output:


COMMAND   PID USER   FD   TYPE             DEVICE SIZE/OFF     NODE NAME
s3backer 4485 root  cwd    DIR             202,65     4096        2 /
s3backer 4485 root  rtd    DIR             202,65     4096        2 /
s3backer 4485 root  txt    REG             202,65   347828   294400 
/usr/local/s3backer/bin/s3backer
s3backer 4485 root  mem    REG             202,65    91096   294091 
/lib64/libz.so.1.2.3
s3backer 4485 root  mem    REG             202,65   224328   267630 
/usr/lib64/libssl3.so
s3backer 4485 root  mem    REG             202,65   317168   294115 
/lib64/libldap-2.4.so.2.5.6
s3backer 4485 root  mem    REG             202,65    63304   285445 
/lib64/liblber-2.4.so.2.5.6
s3backer 4485 root  mem    REG             202,65   386040   280450 
/lib64/libfreebl3.so
s3backer 4485 root  mem    REG             202,65    43392   287443 
/lib64/libcrypt-2.12.so
s3backer 4485 root  mem    REG             202,65  1286744   265631 
/usr/lib64/libnss3.so
s3backer 4485 root  mem    REG             202,65   243096   287440 
/lib64/libnspr4.so
s3backer 4485 root  mem    REG             202,65    17096   285261 
/lib64/libplds4.so
s3backer 4485 root  mem    REG             202,65   177952   285270 
/usr/lib64/libnssutil3.so
s3backer 4485 root  mem    REG             202,65    21256   283747 
/lib64/libplc4.so
s3backer 4485 root  mem    REG             202,65   183896   285435 
/usr/lib64/libsmime3.so
s3backer 4485 root  mem    REG             202,65   108728   285457 
/usr/lib64/libsasl2.so.2.0.23
s3backer 4485 root  mem    REG             202,65   124624   285408 
/lib64/libselinux.so.1
s3backer 4485 root  mem    REG             202,65    17256   285418 
/lib64/libcom_err.so.2.1
s3backer 4485 root  mem    REG             202,65   915736   285419 
/lib64/libkrb5.so.3.3
s3backer 4485 root  mem    REG             202,65    12592   280447 
/lib64/libkeyutils.so.1.3
s3backer 4485 root  mem    REG             202,65   181632   285417 
/lib64/libk5crypto.so.3.1
s3backer 4485 root  mem    REG             202,65    46336   285416 
/lib64/libkrb5support.so.0.1
s3backer 4485 root  mem    REG             202,65   272360   285420 
/lib64/libgssapi_krb5.so.2.2
s3backer 4485 root  mem    REG             202,65   167648   267620 
/lib64/libexpat.so.1.5.2
s3backer 4485 root  mem    REG             202,65  1953536   294417 
/usr/lib64/libcrypto.so.1.0.1e
s3backer 4485 root  mem    REG             202,65   444040   294443 
/usr/lib64/libssl.so.1.0.1e
s3backer 4485 root  mem    REG             202,65   164024   294470 
/usr/lib64/libssh2.so.1.0.1
s3backer 4485 root  mem    REG             202,65   343544   294489 
/usr/lib64/libcurl.so.4.1.1
s3backer 4485 root  mem    REG             202,65   156872   282897 
/lib64/ld-2.12.so
s3backer 4485 root  mem    REG             202,65  1908792   282898 
/lib64/libc-2.12.so
s3backer 4485 root  mem    REG             202,65    22536   282900 
/lib64/libdl-2.12.so
s3backer 4485 root  mem    REG             202,65   141576   265675 
/lib64/libpthread-2.12.so
s3backer 4485 root  mem    REG             202,65   586000   264533 
/usr/lib64/libsqlite3.so.0.8.6
s3backer 4485 root  mem    REG             202,65    47064   282907 
/lib64/librt-2.12.so
s3backer 4485 root  mem    REG             202,65   113952   265677 
/lib64/libresolv-2.12.so
s3backer 4485 root  mem    REG             202,65   209120   268225 
/lib64/libidn.so.11.6.1
s3backer 4485 root  mem    REG             202,65   150712   268884 
/usr/lib64/libnsspem.so
s3backer 4485 root  mem    REG             202,65    10352   268891 
/usr/lib64/libnsssysinit.so
s3backer 4485 root  mem    REG             202,65   254008   267587 
/usr/lib64/libsoftokn3.so
s3backer 4485 root  mem    REG             202,65    27424   265111 
/lib64/libnss_dns-2.12.so
s3backer 4485 root  mem    REG             202,65    65928   265113 
/lib64/libnss_files-2.12.so
s3backer 4485 root  mem    REG             202,65   221728   283744 
/lib64/libfuse.so.2.8.3
s3backer 4485 root    0u   CHR                1,3      0t0     3593 /dev/null
s3backer 4485 root    1u   CHR                1,3      0t0     3593 /dev/null
s3backer 4485 root    2u   CHR                1,3      0t0     3593 /dev/null
s3backer 4485 root    3u  unix 0xffff8800aaa7d3c0      0t0 10468420 socket
s3backer 4485 root    4u   CHR             10,229      0t0     6315 /dev/fuse
s3backer 4485 root    5u  IPv4           10523049      0t0      TCP 
ip-10-84-201-9.ec2.internal:38000->XXXXXXXX.amazonaws.com:https (CLOSE_WAIT)
s3backer 4485 root    6u   REG             202,65     9216   285288 
/etc/pki/nssdb/cert9.db
s3backer 4485 root    7u   REG             202,65    11264   285348 
/etc/pki/nssdb/key4.db
s3backer 4485 root    8u  IPv4           10523311      0t0      TCP 
ip-X.X.X.X.ec2.internal:39297->XXXXXXXX.amazonaws.com:https (CLOSE_WAIT)
s3backer 4485 root    9u   REG             202,65      512    12333 
/var/tmp/etilqs_PCnEh0NTRnLafgM (deleted)
s3backer 4485 root   10u   REG             202,65        0    12402 
/var/tmp/etilqs_kwaINYMcSch8taP (deleted)
s3backer 4485 root   11u   REG             202,65     2048    17431 
/var/tmp/etilqs_14hb2M6Espx6yd3 (deleted)
s3backer 4485 root   12u  IPv4           10522912      0t0      TCP 
ip-X-X-X-XX.ec2.internal:39213->XXXXXXXX.amazonaws.com:https (CLOSE_WAIT)
s3backer 4485 root   13u   REG             202,65      512    17625 
/var/tmp/etilqs_dmz8dBtg0Wip1X2 (deleted)
s3backer 4485 root   14u   REG             202,65        0    17626 
/var/tmp/etilqs_6h9jnSb83YWnVTZ (deleted)
s3backer 4485 root   15u   REG             202,65     2048    17737 
/var/tmp/etilqs_OVTKh6mwgpGsooP (deleted)
s3backer 4485 root   16u  sock                0,6      0t0 10523297 can't 
identify protocol
s3backer 4485 root   17u  IPv4           10523326      0t0      TCP 
ip-X-X-X-X.ec2.internal:39301->XXXXXXXX.amazonaws.com:https (CLOSE_WAIT)
s3backer 4485 root   18u  IPv4           10523868      0t0      TCP 
ip-X-X-X-X.ec2.internal:60583->XXXXXXXX.amazonaws.com:https (CLOSE_WAIT)
s3backer 4485 root   19u  IPv4           10523314      0t0      TCP 
ip-X-X-X-X.ec2.internal:39299->XXXXXXXX.amazonaws.com:https (CLOSE_WAIT)
s3backer 4485 root   20u  IPv4           10523050      0t0      TCP 
ip-X-X-X-X.ec2.internal:39249->XXXXXXXX.amazonaws.com:https (CLOSE_WAIT)


What version of the product are you using? On what operating system?

s3backer version 1.3.7 (r496)

Please provide any additional information below.

* Machine: Amazon medium instance with 4GB RAM
* OS: CentOS Linux release 6.0 x86_64
* Configuration:
/usr/local/s3backer/bin/s3backer --blockSize=64k --size=50g --ssl --vhost --listBlocks --debug-http XXXXXX-XXXXX-backup /s3backer

Original issue reported on code.google.com by [email protected] on 5 May 2014 at 1:16

maxRetryPause doesn't have desired effect on Leopard (Mac OS X 10.5)

What steps will reproduce the problem?

1. Mount s3backer file with large "--maxRetryPause" value:
s3backer --prefix=macos --size=75M --filename=s3-backup3-remote.dmg --maxRetryPause=300000 -d -f bucket mnt-bucket

2. Copy something to that file with dd:
dd if=local-75M-non-empty-file of=mnt-bucket/s3-backup3-remote.dmg bs=4096

3. Shut down network interface.

4. After attempt #8 s3backer exits and file system unmounts.
And at this moment in system.log you have this message:

Jul 10 17:07:28 macbook kernel[0]: MacFUSE: force ejecting (no response from user space 5)

When starting, s3backer prints the correct maxRetryPause value and uses it, but MacFUSE has its own timeout option, "daemon_timeout", which has some default value. After that timeout, Tiger (10.4) shows a "File system timeout" dialog box with some useful options, but Leopard does not. It just kills the user process and unmounts the file system.

So I believe it is worth mentioning in the man page that one has to include the "daemon_timeout" option in the command line arguments when maxRetryPause has a non-default value. Or maybe just set it to some very high value by default. A sketch of that coupling follows.
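
A minimal sketch of what that coupling might look like (hypothetical; daemon_timeout is MacFUSE's option name, but the variable names and the 60-second safety margin are assumptions):

    #include <stdio.h>

    /* Derive a MacFUSE daemon_timeout (seconds) from max_retry_pause
     * (milliseconds) so the kernel never gives up before s3backer's
     * retry loop does. */
    int main(void)
    {
        unsigned max_retry_pause_ms = 300000;           /* --maxRetryPause */
        char opt[64];

        snprintf(opt, sizeof(opt), "-odaemon_timeout=%u",
            max_retry_pause_ms / 1000 + 60);
        printf("extra fuse_main argument: %s\n", opt);  /* daemon_timeout=360 */
        return 0;
    }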

This is the latest version (1.04) of s3backer, on Mac OS X 10.5.4.

s3backer startup log:

2008-07-10 17:05:36 INFO: created s3backer using
http://s3.amazonaws.com/du-backup3
s3backer: auto-detecting block size and total file size...
2008-07-10 17:05:36 DEBUG: HEAD
http://s3.amazonaws.com/du-backup3/macos00000000
s3backer: auto-detected block size=4k and total size=75m
2008-07-10 17:05:37 DEBUG: s3backer config:
2008-07-10 17:05:37 DEBUG:         accessId: 
2008-07-10 17:05:37 DEBUG:        accessKey: "****"
2008-07-10 17:05:37 DEBUG:       accessFile: "/Users/demon/.s3backer_passwd"
2008-07-10 17:05:37 DEBUG:           access: "private"
2008-07-10 17:05:37 DEBUG:     assume_empty: false
2008-07-10 17:05:37 DEBUG:          baseURL: "http://s3.amazonaws.com/"
2008-07-10 17:05:37 DEBUG:           bucket: "du-backup3"
2008-07-10 17:05:37 DEBUG:           prefix: "macos"
2008-07-10 17:05:37 DEBUG:            mount:
"/Users/demon/mounts/mnt-du-backup3"
2008-07-10 17:05:37 DEBUG:         filename: "s3-backup3-remote.dmg"
2008-07-10 17:05:37 DEBUG:       block_size: - (4096)
2008-07-10 17:05:37 DEBUG:       block_bits: 12
2008-07-10 17:05:37 DEBUG:        file_size: 75M (78643200)
2008-07-10 17:05:37 DEBUG:       num_blocks: 19200
2008-07-10 17:05:37 DEBUG:        file_mode: 0600
2008-07-10 17:05:37 DEBUG:        read_only: false
2008-07-10 17:05:37 DEBUG:  connect_timeout: 30s
2008-07-10 17:05:37 DEBUG:       io_timeout: 30s
2008-07-10 17:05:37 DEBUG: initial_retry_pause: 200ms
2008-07-10 17:05:37 DEBUG:  max_retry_pause: 300000ms
2008-07-10 17:05:37 DEBUG:  min_write_delay: 500ms
2008-07-10 17:05:37 DEBUG:       cache_time: 10000ms
2008-07-10 17:05:37 DEBUG:       cache_size: 10000 entries
2008-07-10 17:05:37 DEBUG: fuse_main arguments:
2008-07-10 17:05:37 DEBUG:   [0] = "s3backer"
2008-07-10 17:05:37 DEBUG:   [1] = "-o"
2008-07-10 17:05:37 DEBUG:   [2] = "kernel_cache,fsname=s3backer,use_ino,entry_timeout=31536000,negative_timeout=31536000,attr_timeout=31536000,default_permissions,nodev,nosuid"
2008-07-10 17:05:37 DEBUG:   [3] = "-d"
2008-07-10 17:05:37 DEBUG:   [4] = "-f"
2008-07-10 17:05:37 DEBUG:   [5] = "/Users/demon/mounts/mnt-du-backup3"
2008-07-10 17:05:37 INFO: s3backer process 10403 for
/Users/demon/mounts/mnt-du-backup3 started


And it dies like that:

2008-07-10 17:06:28 DEBUG: PUT http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:06:59 NOTICE: HTTP operation timeout: PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:06:59 INFO: retrying query (attempt #2): PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:06:59 ERROR: curl error: couldn't resolve host name
2008-07-10 17:06:59 INFO: retrying query (attempt #3): PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:06:59 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:00 INFO: retrying query (attempt #4): PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:00 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:02 INFO: retrying query (attempt #5): PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:02 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:05 INFO: retrying query (attempt #6): PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:05 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:11 INFO: retrying query (attempt #7): PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:11 ERROR: curl error: couldn't resolve host name
2008-07-10 17:07:24 INFO: retrying query (attempt #8): PUT
http://s3.amazonaws.com/du-backup3/macos0000002e
2008-07-10 17:07:24 ERROR: curl error: couldn't resolve host name

(end of the messages. s3backer exits)

Original issue reported on code.google.com by [email protected] on 10 Jul 2008 at 9:29

S3Backer via Crontab

What steps will reproduce the problem?
1. Add S3Backer mount command to crontab
2. Let it run automatically via cron
3. Get the error: warning: no accessId specified ...

What is the expected output? What do you see instead?
Mounted S3 Volume :)

What version of the product are you using? On what operating system?
S3Backer 1.3.7 / Ubuntu 12.04 LTS

Please provide any additional information below.
S3Backer works fine if I run it as a sudo user. I also specified the path to the access id file as a flag to the s3backer command and made sure the file is in place (I tried both the home directory of the user and /root).

Any help would be very appreciated :)

Original issue reported on code.google.com by [email protected] on 31 Jul 2013 at 1:21

Segfault while creating filesystem

I am running s3backer in test mode with:

./s3backer --test --prefix=s3backer --size=2g /tmp /mnt

Then:

mke2fs -b 4096 -F /mnt/file

After writing some blocks, I get a segfault in the kernel log:
Aug 10 23:07:56 sleepless s3backer: test_io: write 000180c8 started (zero
block)
Aug 10 23:07:56 sleepless s3backer: test_io: write 000180c9 started (zero
block)
Aug 10 23:07:56 sleepless s3backer: test_io: write 000180ca started (zero
block)
Aug 10 23:07:56 sleepless s3backer: test_io: write 000180cb started (zero
block)
Aug 10 23:07:56 sleepless kernel: [101497.099010] s3backer[6531]: segfault
at 0000011b eip b7ebe86a esp b7065060 error 4

System is Ubuntu 8.04.1 (hardy). Latest updates installed.
s3backer configured and compiled correctly.

When I enable the debug output of FUSE and s3backer and force it stay in
the foreground, this does _not_ happen. 

The machine is running on a Dual-Core-CPU (Athlon64 X2): Linux sleepless
2.6.24-19-generic #1 SMP Fri Jul 11 23:41:49 UTC 2008 i686 GNU/Linux






Original issue reported on code.google.com by [email protected] on 10 Aug 2008 at 9:15

Fsck says filesystem was uncleanly unmounted even though umount was used to unmount

I'm using s3backer with an ext2 filesystem.
Recently, I unmounted the ext2 loop filesystem mount and then unmounted the 
s3backer filesystem. It unmounted immediately even though I believe that there 
were pending writes that had not yet been flushed to S3.
Then I remounted the s3backer filesystem and ran fsck on the "file" file in it. 
Fsck reported that the device was not cleanly unmounted and forced an fsck, 
thus I believe confirming that there were pending writes that had not yet been 
flushed.
Shouldn't umount of the FUSE filesystem block until all pending writes are 
flushed to disk? This is how local filesystems work, obviously.
If not, then what's the recommended way to ensure that data is not lost?
I'm using r437 on Linux (Fedora 14).
Thanks.

Original issue reported on code.google.com by [email protected] on 15 Oct 2010 at 2:53

  • Merged into: #40

stats file doesn't seem to update

It doesn't seem like the stats file updates when it should. On many occasions, 
I've looked at the stats file a few seconds, a few minutes, or even many 
minutes apart and found it to have exactly the same contents as it did before, 
even though there had been much s3backer activity since the last time I looked 
at it.

I noticed that it appears in the filesystem as a file rather than a device. I 
wonder if the kernel or fuse is caching its contents because it knows that 
nothing has written to the file since the last time it was read? If so, then 
perhaps it has to be made a device rather than a file so that it'll get reread 
every time it is opened?
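
If kernel caching is indeed the cause, FUSE offers a per-open escape hatch: setting direct_io on the file handle forces every read back into the filesystem. A minimal sketch (hypothetical, not s3backer's actual code; a working filesystem would also need getattr/read handlers):

    #define FUSE_USE_VERSION 26
    #include <fuse.h>
    #include <string.h>

    /* Disable kernel page caching for the synthetic stats file so each
     * read regenerates fresh contents instead of replaying cached bytes. */
    static int my_open(const char *path, struct fuse_file_info *fi)
    {
        if (strcmp(path, "/stats") == 0)
            fi->direct_io = 1;
        return 0;
    }

    static const struct fuse_operations ops = {
        .open = my_open,
        /* .getattr and .read omitted for brevity */
    };

    int main(int argc, char *argv[])
    {
        return fuse_main(argc, argv, &ops, NULL);
    }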

Original issue reported on code.google.com by [email protected] on 21 Oct 2010 at 8:13

Data corruption hazard (make cache non-volatile)

The current cache implementation introduces volatility into the system. While a filesystem backed by s3backer may be journaled, there's still a high risk of data loss. For example, if there is a system failure with dirty blocks in the cache, there is a likelihood that the filesystem journal will get out of sync with what's actually on S3. The journal can't help you in this case because, as far as it's concerned, the data has already been written [to s3backer's cache]. The issue is compounded when the blocks are uploaded out of order.

The easiest solution is probably to make the cache non-volatile so that the 
system can later recover.
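
A minimal sketch of the idea: persist each cached block to a backing file and fsync before the write is acknowledged, so a crash cannot lose a block the journal already believes is written (the file layout and names are assumptions, not s3backer's design):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Store one cached block durably at its slot in the cache file. */
    static int cache_store(int fd, unsigned block_num, const char *buf,
                           size_t block_size)
    {
        off_t off = (off_t)block_num * (off_t)block_size;

        if (pwrite(fd, buf, block_size, off) != (ssize_t)block_size)
            return -1;
        return fsync(fd);          /* durability point */
    }

    int main(void)
    {
        char block[4096];
        int fd;

        memset(block, 0xAB, sizeof(block));
        fd = open("cachefile", O_RDWR | O_CREAT, 0600);
        if (fd < 0 || cache_store(fd, 7, block, sizeof(block)) < 0) {
            perror("cache_store");
            return 1;
        }
        close(fd);
        return 0;
    }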

Original issue reported on code.google.com by jonsview on 14 Aug 2009 at 1:49

Support FUSE-passed mount options from /etc/fstab


Quoting from email thread...

-----------------------

Like this

-o size=4T -o blockSize=1M -o blockCacheSize=10

which works with /etc/fstab like so

s3backer#bucket /mnt/bucket fuse
size=4T,blockSize=1M,blockCacheSize=10 0 0

Then you can just mount with

mount /mnt/bucket

If I have to use custom flags on the command line like:
s3backer --size=XXXX --blockSize=1M --blockCacheSize=10 then I can't
use the fuse app with /etc/fstab or autofs because it doesn't
uniformly fit.

See the s3fs on googlecode here for examples.

Sure would be awesome if this is a quick fix.  I would love to
automount s3backer.

-----------------------

Original issue reported on code.google.com by [email protected] on 14 Apr 2009 at 9:09

enhancement request with patch: track zero blocks after startup even if --listBlocks wasn't specified

Even if --listBlocks wasn't specified, it makes sense to keep track of when 
zero blocks are read or written so that they don't have to be read or written 
repeatedly. The attached patch accomplishes this as follows:

* Change the non-zero block map into a zero block map, i.e., a bit in the map 
is set if the corresponding block is zero, rather than being set if it's 
non-zero. This change is not, strictly speaking, entirely necessary, since I 
could have just left it as a non-zero map and then checked for the opposite bit 
value, but I think it logically makes more sense for it to be zero map, and 
hence the code is clearer this way, because what we're really interested in 
knowing is the fact that a block is zero so we don't need to read or write it.

* Create an empty zero map when initializing http_io if --listBlocks wasn't 
specified.

* Add a bit to the zero map if we try to read a block and get ENOENT.

* Add a bit to the zero map if we write a zero block that wasn't previously zero. (A minimal sketch of such a map appears after this list.)
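
For concreteness, a minimal sketch of a zero-block bit map of the kind described above (the names and word size are assumptions, not the patch's actual code):

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* One bit per block: set means the block is known to be all zeroes. */
    #define WORD_BITS   (sizeof(unsigned long) * CHAR_BIT)

    static void zmap_set(unsigned long *map, size_t block)
    {
        map[block / WORD_BITS] |= 1UL << (block % WORD_BITS);
    }

    static void zmap_clear(unsigned long *map, size_t block)
    {
        map[block / WORD_BITS] &= ~(1UL << (block % WORD_BITS));
    }

    static int zmap_test(const unsigned long *map, size_t block)
    {
        return (map[block / WORD_BITS] >> (block % WORD_BITS)) & 1;
    }

    int main(void)
    {
        size_t num_blocks = 8388608;              /* e.g. 1t of 128k blocks */
        unsigned long *map = calloc((num_blocks + WORD_BITS - 1) / WORD_BITS,
                                    sizeof(*map));

        if (map == NULL)
            return 1;
        zmap_set(map, 42);                        /* read returned ENOENT */
        printf("block 42 zero? %d\n", zmap_test(map, 42));
        zmap_clear(map, 42);                      /* non-zero data written */
        free(map);
        return 0;
    }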

This is actually the first patch of five I intend to submit in this area, if 
it's OK with you. They are:

1. This patch (track zero instead of non-zero blocks, and track even when 
--listBlocks wasn't specified).

2. Make --listBlocks happen in the background in a separate thread after the 
filesystem is mounted (this should be relatively easy to do now that I've done 
patch 1).

3. When a block that we expect to exist in S3 isn't there when we try to read 
it, restore it from the cache if possible.

4. When a block that we expect to exist in S3 isn't there when we do 
--listBlocks, restore it from the cache if possible.

5. Add an option to rerun --listBlocks periodically in the background while 
s3backer is running.

Patches 3-5 deserve some explanation. My concern is that, to a very small extent with regular S3 storage, and to a much larger (and, over time, even likely) extent with reduced redundancy storage (RRS), blocks could simply disappear from S3 without any intervention on our part. I'm using s3backer to store my backups with rsync, so I'm using RRS, since all the data I'm saving exists on my desktop as well. However, the doc for RRS says that it should only be used for data that can be restored easily, and indeed it can be in this case, since, for performance reasons, my s3backer cache is big enough to hold my entire backup filesystem. Ergo, it makes a great deal of sense to teach s3backer how to automatically restore dropped blocks.

Please let me know your thoughts about this patch and my plans for the rest of 
them. Especially since I think I may need some guidance from you when 
implementing patches 3-5 :-).

Thanks,

  jik

Original issue reported on code.google.com by [email protected] on 24 Oct 2010 at 7:45


can't read block zero meta-data: Operation not permitted

Hello,

When I try to execute "s3backer mybucket /root/mybucket/", I get the following 
error:

root@dev-intranet:~# s3backer mybucket /root/mybucket/
s3backer: auto-detecting block size and total file size...
s3backer: can't read block zero meta-data: Operation not permitted

I use Debian and the latest version of s3backer.

Has anyone else had this error?

Thank you, and sorry for my English.

Original issue reported on code.google.com by [email protected] on 2 Jan 2013 at 2:52

make fails for s3backer ver 1.2.2 and 1.2.3 on CentOS 5.x x86_64

What steps will reproduce the problem?
1. download tarball (1.2.2) or checkout svn (1.2.3)
2. run configure -- all OK, no errors
3. make fails with the following errors:
...
http_io.c: In function 'http_io_list_prepper':
http_io.c:440: error: 'CURLOPT_HTTP_CONTENT_DECODING' undeclared (first use
in this function)
http_io.c:440: error: (Each undeclared identifier is reported only once
http_io.c:440: error: for each function it appears in.)
http_io.c: In function 'http_io_read_prepper':
http_io.c:762: error: 'CURLOPT_HTTP_CONTENT_DECODING' undeclared (first use
in this function)
make[1]: *** [http_io.o] Error 1
make[1]: Leaving directory `/root/s3backer-1.2.2'
make: *** [all] Error 2
...

All libraries/dependencies are satisfied.
gcc version: gcc (GCC) 4.1.2 20071124 (Red Hat 4.1.2-42)
OS Version: CentOS release 5.2 (Final)
kernel: 2.6.18-8.el5.028stab031.1 (x86_64)
Build host is an OpenVZ x86_64 container running on an x86_64 hardware node
powered by AMD Athlon 64 X2 Dual Core Processor 4200+

Note: getting an svn checkout of version 1.2.3 to produce a good configure requires editing autogen.sh and commenting out the line that sources cleanup.sh the first time you run autogen.sh; you can uncomment the line (i.e., source cleanup.sh) after the first run. Otherwise autogen.sh fails due to cleanup.sh trying to delete non-existent dirs.

Original issue reported on code.google.com by [email protected] on 7 Apr 2009 at 4:52

Should be able to resize s3backer device (maybe already can?)

Hi,

It really needs to be possible to resize an S3 device. For example, if I create 
a partition now to hold my offsite backup, and then discover in five years that 
I've filled it and need more space, I should be able to simply grow the S3 
device and then run resize_reiserfs or whatever to grow the underlying 
filesystem.

Ditto for shrinking a device.

The s3backer documentation has all kinds of dire warnings about doing this, 
claiming that data won't read back properly if the size of a device is changed, 
but I can't figure out why this is the case.

It seems to me that, as long as the block size isn't changed, if the device is 
grown, more blocks will simply be added on to the end of it.

Shrinking a device is a bit more complicated, since the old files left over 
would need to be cleaned up somehow, but it seems like it should be relatively 
easy to code something to make that happen automatically (coupled with a 
listBlocks) when a filesystem is shrunk.

But aside from the shrinking complication mentioned above, if I'm right that 
it's possible to grow a device simply by increasing its size and then 
specifying "force" the next time you mount, then the documentation should be 
updated to say that, and the warning you get when you mount a filesystem with a 
larger size should be edited to be less dire.

However, the warning should definitely remain dire when the block size is 
changed!
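
The intuition that growth is benign when the block size is unchanged can be checked in a few lines: the S3 object key for a byte offset depends only on offset / blockSize, so enlarging the device leaves every existing mapping intact (a hedged illustration; the %08x key format matches the block names seen elsewhere in these reports):

    #include <stdio.h>

    /* Map a byte offset to its S3 block key. The device size never
     * appears: growing the device only adds new keys past the old end. */
    int main(void)
    {
        unsigned long long block_size = 131072;   /* 128k */
        unsigned long long offsets[] = { 0, 131072, 999424000ULL };
        int i;

        for (i = 0; i < 3; i++)
            printf("offset %llu -> key %08llx\n", offsets[i],
                offsets[i] / block_size);
        return 0;
    }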

Original issue reported on code.google.com by [email protected] on 22 Oct 2010 at 7:26

I/O error on mounted filesystem, Unknown error:0 in debug output

$ opt/s3backer/bin/s3backer --blockSize=128k --size=1t --listBlocks rptb1-backup-test2 tmp/rptb1-backup-test2
$ file tmp/rptb1-backup-test2
tmp/rptb1-backup-test2: cannot open `tmp/rptb1-backup-test2' (Input/output error)

So I tried:

$ opt/s3backer/bin/s3backer -s -d --blockSize=128k --size=1t --listBlocks rptb1-backup-test2 tmp/rptb1-backup-test2
s3backer: auto-detecting block size and total file size...
2010-09-22 17:22:01 DEBUG: HEAD 
http://s3.amazonaws.com/rptb1-backup-test2/00000000
2010-09-22 17:22:01 DEBUG: rec'd 404 response: HEAD 
http://s3.amazonaws.com/rptb1-backup-test2/00000000
s3backer: auto-detection failed; using configured block size 128k and file size 1t
s3backer: listing non-zero blocks...2010-09-22 17:22:01 DEBUG: GET 
http://s3.amazonaws.com/rptb1-backup-test2?prefix=&max-keys=256
2010-09-22 17:22:01 DEBUG: success: GET 
http://s3.amazonaws.com/rptb1-backup-test2?prefix=&max-keys=256
done
s3backer: found 0 non-zero blocks
2010-09-22 17:22:01 DEBUG: s3backer config:
2010-09-22 17:22:01 DEBUG:                test mode: "false"
2010-09-22 17:22:01 DEBUG:                 accessId: [redacted]
2010-09-22 17:22:01 DEBUG:                accessKey: "****"
2010-09-22 17:22:01 DEBUG:               accessFile: 
"/Users/rb/.s3backer_passwd"
2010-09-22 17:22:01 DEBUG:               accessType: private
2010-09-22 17:22:01 DEBUG:                  baseURL: "http://s3.amazonaws.com/"
2010-09-22 17:22:01 DEBUG:                   bucket: "rptb1-backup-test2"
2010-09-22 17:22:01 DEBUG:                   prefix: ""
2010-09-22 17:22:01 DEBUG:              list_blocks: true
2010-09-22 17:22:01 DEBUG:                    mount: "tmp/rptb1-backup-test2"
2010-09-22 17:22:01 DEBUG:                 filename: "file"
2010-09-22 17:22:01 DEBUG:           stats_filename: "stats"
2010-09-22 17:22:01 DEBUG:               block_size: 128k (131072)
2010-09-22 17:22:01 DEBUG:                file_size: 1t (1099511627776)
2010-09-22 17:22:01 DEBUG:               num_blocks: 8388608
2010-09-22 17:22:01 DEBUG:                file_mode: 0600
2010-09-22 17:22:01 DEBUG:                read_only: false
2010-09-22 17:22:01 DEBUG:                 compress: 0
2010-09-22 17:22:01 DEBUG:               encryption: (none)
2010-09-22 17:22:01 DEBUG:                 password: ""
2010-09-22 17:22:01 DEBUG:                  timeout: 30s
2010-09-22 17:22:01 DEBUG:      initial_retry_pause: 200ms
2010-09-22 17:22:01 DEBUG:          max_retry_pause: 30000ms
2010-09-22 17:22:01 DEBUG:          min_write_delay: 500ms
2010-09-22 17:22:01 DEBUG:           md5_cache_time: 10000ms
2010-09-22 17:22:01 DEBUG:           md5_cache_size: 10000 entries
2010-09-22 17:22:01 DEBUG:         block_cache_size: 1000 entries
2010-09-22 17:22:01 DEBUG:      block_cache_threads: 20 threads
2010-09-22 17:22:01 DEBUG:      block_cache_timeout: 0ms
2010-09-22 17:22:01 DEBUG:  block_cache_write_delay: 250ms
2010-09-22 17:22:01 DEBUG:    block_cache_max_dirty: 0 blocks
2010-09-22 17:22:01 DEBUG:         block_cache_sync: false
2010-09-22 17:22:01 DEBUG:               read_ahead: 4 blocks
2010-09-22 17:22:01 DEBUG:       read_ahead_trigger: 2 blocks
2010-09-22 17:22:01 DEBUG:   block_cache_cache_file: ""
2010-09-22 17:22:01 DEBUG:    block_cache_no_verify: "false"
2010-09-22 17:22:01 DEBUG: fuse_main arguments:
2010-09-22 17:22:01 DEBUG:   [0] = "opt/s3backer/bin/s3backer"
2010-09-22 17:22:01 DEBUG:   [1] = "-ofsname=http://s3.amazonaws.com/rptb1-backup-test2/"
2010-09-22 17:22:01 DEBUG:   [2] = "-o"
2010-09-22 17:22:01 DEBUG:   [3] = "kernel_cache,allow_other,use_ino,max_readahead=0,subtype=s3backer,entry_timeout=31536000,negative_timeout=31536000,attr_timeout=0,default_permissions,nodev,nosuid,daemon_timeout=600"
2010-09-22 17:22:01 DEBUG:   [4] = "-s"
2010-09-22 17:22:01 DEBUG:   [5] = "-d"
2010-09-22 17:22:01 DEBUG:   [6] = "tmp/rptb1-backup-test2"
2010-09-22 17:22:01 INFO: s3backer process 48602 for tmp/rptb1-backup-test2 
started
unique: 0, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.8
flags=0x00000000
max_readahead=0x00100000
   INIT: 7.8
   flags=0x00000000
   max_readahead=0x00000000
   max_write=0x00400000
   unique: 0, error: 0 (Unknown error: 0), outsize: 40
unique: 0, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 0, error: 0 (Unknown error: 0), outsize: 96
unique: 0, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 0, error: 0 (Unknown error: 0), outsize: 96
unique: 0, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 0, error: 0 (Unknown error: 0), outsize: 96
unique: 1, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 1, error: 0 (Unknown error: 0), outsize: 96
unique: 0, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 0, error: 0 (Unknown error: 0), outsize: 96
unique: 0, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 0, error: 0 (Unknown error: 0), outsize: 96
unique: 0, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 0, error: 0 (Unknown error: 0), outsize: 96
unique: 2, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 2, error: 0 (Unknown error: 0), outsize: 96
unique: 3, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 3, error: 0 (Unknown error: 0), outsize: 96
unique: 1, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 1, error: 0 (Unknown error: 0), outsize: 96
unique: 0, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 0, error: 0 (Unknown error: 0), outsize: 128
unique: 2, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 2, error: 0 (Unknown error: 0), outsize: 128
unique: 3, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 3, error: 0 (Unknown error: 0), outsize: 96
unique: 1, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 1, error: 0 (Unknown error: 0), outsize: 128
unique: 3, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 3, error: 0 (Unknown error: 0), outsize: 128
unique: 4, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 4, error: 0 (Unknown error: 0), outsize: 128
unique: 5, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 5, error: 0 (Unknown error: 0), outsize: 128
unique: 6, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 6, error: 0 (Unknown error: 0), outsize: 128
unique: 7, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 7, error: 0 (Unknown error: 0), outsize: 96
unique: 8, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 8, error: 0 (Unknown error: 0), outsize: 128
unique: 7, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 7, error: 0 (Unknown error: 0), outsize: 128
unique: 9, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 9, error: 0 (Unknown error: 0), outsize: 96
unique: 9, opcode: GETATTR (3), nodeid: 1, insize: 40
   unique: 9, error: 0 (Unknown error: 0), outsize: 128
unique: 10, opcode: STATFS (17), nodeid: 1, insize: 40
   unique: 10, error: 0 (Unknown error: 0), outsize: 96

Original issue reported on code.google.com by [email protected] on 22 Sep 2010 at 4:23

Make Error

What steps will reproduce the problem?
1. $ PKG_CONFIG_PATH=/usr/local/lib/pkgconfig ./configure
2. $ make

What is the expected output? What do you see instead?
Expected: No Errors

Actual:
make  all-recursive
Making all in debian
make[2]: Nothing to be done for `all'.
gcc -DHAVE_CONFIG_H -I.    -D__FreeBSD__=10 -D_FILE_OFFSET_BITS=64 
-I/usr/local/include/fuse   -g -O3 -pipe -Wall -Waggregate-return -Wcast-align 
-Wchar-subscripts -Wcomment -Wformat -Wimplicit -Wmissing-declarations 
-Wmissing-prototypes -Wnested-externs -Wno-long-long -Wparentheses 
-Wpointer-arith -Wredundant-decls -Wreturn-type -Wswitch -Wtrigraphs 
-Wuninitialized -Wunused -Wwrite-strings -Wshadow -Wstrict-prototypes 
-Wcast-qual  -MT main.o -MD -MP -MF .deps/main.Tpo -c -o main.o main.c
In file included from s3backer.h:47,
                 from main.c:25:
/usr/local/include/curl/curl.h:52:23: error: osreldate.h: No such file or directory
make[2]: *** [main.o] Error 1
make[1]: *** [all-recursive] Error 1
make: *** [all] Error 2

What version of the product are you using? On what operating system?
s3backer: 1.3.2
Mac OS X: 10.6.7

Please provide any additional information below.

Error encountered while following instructions on the BuildAndInstall wiki page.

Original issue reported on code.google.com by [email protected] on 14 Jul 2011 at 8:28

Writing to s3backer uses too much CPU

What steps will reproduce the problem?

1. On a small EC2 instance, start s3backer. Use blockSize=1M. Optionally
try using blockCacheWriteDelay>0 although it doesn't solve this specific
problem.
2. Format it and mount it. I tested several filesystems, ZFS(FUSE) caused
s3backer to use most CPU, JFS least, and Ext2 was in the middle.
3. Using 'dd' create a 100MB test file, initializing it from /dev/zero (or
optionally from /dev/urandom for slightly different results)
4. Copy the 100MB test file to the filesystem mounted on s3backer. Watch
the s3backer process using 100% CPU for more than a minute, while the
throughput is quite modest (1MB/s - 5MB/s, depending on the filesystem).

What is the expected output?

s3backer should be I/O-bound, not CPU-bound. At such low speeds it
shouldn't use 100% CPU. The only two CPU-intensive operations it performs
are AFAIK md5 calculation and zero-block checking. Both of them should be
considerably faster than 1-5 MB/s, which makes me think there is a bug
somewhere. For example, could s3backer be calculating the MD5 hash of the
entire 1MB block each time a 4096-byte sector is written?
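
The arithmetic behind that suspicion can be made concrete with a back-of-the-envelope check using the numbers above (a hedged illustration, not a measurement):

    #include <stdio.h>

    /* If every 4096-byte sector write re-hashes its whole 1 MiB block,
     * copying a 100 MB file implies hashing roughly 25 GB of data. */
    int main(void)
    {
        unsigned long long file   = 100ULL * 1000 * 1000;  /* 100MB test file */
        unsigned long long sector = 4096;
        unsigned long long block  = 1024 * 1024;           /* blockSize=1M */
        unsigned long long writes = file / sector;

        printf("sector writes: %llu\n", writes);           /* ~24414 */
        printf("bytes hashed if MD5 runs per write: ~%llu GB\n",
            writes * block / 1000000000ULL);               /* ~25 GB */
        return 0;
    }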

What version of the product are you using? On what operating system?

r277 from SVN. OS is Ubuntu Intrepid (8.10) 32-bit on a small EC2 instance.

Original issue reported on code.google.com by onestone on 29 Sep 2008 at 1:18

Cannot configure s3backer in FreeBSD 9

What steps will reproduce the problem?
1.Download s3backer-1.3.7.tar.gz and extract
2.Install required libraries( curl, fuse, fusefs-s3fs-1.71_1)
3.Go to the s3backer directory and run the configure script

root@ES141-Jan1:/cbdir/aafak/s3backer-1.3.7 # ./configure 
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... scripts/install-sh -c -d
checking for gawk... no
checking for mawk... no
checking for nawk... nawk
checking whether make sets $(MAKE)... yes
checking whether make sets $(MAKE)... (cached) yes
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking for style of include used by make... GNU
checking dependency style of gcc... gcc3
checking for pkg-config... /usr/local/bin/pkg-config
checking pkg-config is at least version 0.19... yes
checking for FUSE... yes
checking for curl_version in -lcurl... yes
checking for BIO_new in -lcrypto... yes
checking for XML_ParserCreate in -lexpat... yes
checking for fuse_version in -lfuse... yes
checking for compressBound in -lz... yes
configure: error: version of curl is not 7.16.2 or later

What is the expected output? What do you see instead?

configure: error: version of curl is not 7.16.2 or later

What version of the product are you using? On what operating system?
s3backer-1.3.7
OS: Freebsd9

Please provide any additional information below.
root@ES141-Jan1:/cbdir/aafak # pkg_info | grep curl
curl-7.34.0         Non-interactive tool to get files from FTP, GOPHER, HTTP(S)
root@ES141-Jan1:/cbdir/aafak # 
The installed curl version is 7.34.0, but configure says the version of curl is not 7.16.2 or later.

root@ES141-Jan1:/cbdir/aafak # pkg_info | grep fuse
fuse-1.1.1_2        Free Unix (Sinclair ZX-)Spectrum Emulator
fusefs-kmod-0.3.9.p1.20080208_11 Kernel module for fuse
fusefs-libs-2.9.3_1 FUSE allows filesystem implementation in userspace
fusefs-s3fs-1.71_1  FUSE-based file system backed by Amazon S3
libconfuse-2.7      Configuration file parsing library
root@ES141-Jan1:/cbdir/aafak # 

My all packages:
root@ES141-Jan1:/cbdir/aafak # pkg_info 
DTraceToolkit-0.99_1 Collection of useful scripts for DTrace
ap22-mod_wsgi3-3.5  Python WSGI adapter module for Apache
apache22-2.2.27_4   Version 2.2.x of Apache web server with prefork MPM.
apr-1.4.8.1.5.2     Apache Portability Library
atk-2.8.0           GNOME accessibility toolkit (ATK)
bsdpan-Class-Load-0.20 Class::Load - a working (require "Class::Name") and more
bsdpan-Config-Tiny-2.20 Config::Tiny - Read/Write .ini style files with as 
little c
bsdpan-DBD-mysql-4.025 DBD::mysql - MySQL driver for the Perl5 Database 
Interface 
bsdpan-DBI-1.630    DBI - Database independent interface for Perl
bsdpan-Data-OptList-0.108 Data::OptList - parse and validate simple name/value 
option
bsdpan-Dist-CheckConflicts-0.09 Dist::CheckConflicts - declare version 
conflicts for your d
bsdpan-List-MoreUtils-0.33 List::MoreUtils - Provide the stuff missing in 
List::Util
bsdpan-Log-Dispatch-2.41 Log::Dispatch - Dispatches messages to one or more 
outputs
bsdpan-Module-Implementation-0.07 Module::Implementation - Loads one of several 
alternate und
bsdpan-Package-Stash-0.36 Package::Stash - routines for manipulating stashes
bsdpan-Package-Stash-XS-0.28 Package::Stash::XS - faster and more correct 
implementation
bsdpan-Parallel-ForkManager-1.05 Parallel::ForkManager - A simple parallel 
processing fork m
bsdpan-Params-Util-1.07 Params::Util - Simple, compact and correct 
param-checking f
bsdpan-Sub-Install-0.927 Sub::Install - install subroutines into packages easily
bsdpan-Test-Deep-0.112 Test::Deep - Extremely flexible deep comparison
bsdpan-Test-Fatal-0.013 Test::Fatal - incredibly simple helpers for testing 
code wi
bsdpan-Test-NoWarnings-1.04 Test::NoWarnings - Make sure you didn't emit any 
warnings w
bsdpan-Test-Tester-0.109 Test::Tester - Ease testing test modules built with 
Test::B
bsdpan-Try-Tiny-0.18 Try::Tiny - minimal try/catch with proper preservation of $
bsdpan-mha4mysql-manager-0.55 Unknown perl module
bsdpan-mha4mysql-node-0.54 Unknown perl module
ca_root_nss-3.15.4  The root certificate bundle from the Mozilla Project
cairo-1.10.2_5,2    Vector graphics library with cross-device output support
cidr-2.3.2_1        RFC 1878 subnet calculator / helper
compositeproto-0.4.2 Composite extension headers
cups-client-1.5.4_1 Common UNIX Printing System: Library cups
curl-7.34.0         Non-interactive tool to get files from FTP, GOPHER, HTTP(S)
damageproto-1.2.1   Damage extension headers
db42-4.2.52_5       The Berkeley DB package, revision 4.2
dejavu-2.33         Bitstream Vera Fonts clone with a wider range of characters
desktop-file-utils-0.22_1 Couple of command line utilities for working with 
desktop e
dmidecode-2.12      Tool for dumping DMI (SMBIOS) contents in human-readable fo
dri2proto-2.8       DRI2 prototype headers
encodings-1.0.4,1   X.Org Encoding fonts
erlang-15.b.03.1_1,3 A functional programming language from Ericsson
expat-2.1.0         XML 1.0 parser written in C
fixesproto-5.0      Fixes extension headers
font-bh-ttf-1.0.3   X.Org Bigelow & Holmes TTF font
font-misc-ethiopic-1.0.3 X.Org miscellaneous Ethiopic font
font-misc-meltho-1.0.3 X.Org miscellaneous Meltho font
font-util-1.3.0     Create an index of X font files in a directory
fontconfig-2.10.93,1 An XML-based font configuration API for X Windows
freeglut-2.8.1      An alternative to the OpenGL Utility Toolkit (GLUT) library
freetype2-2.4.12_1  A free and portable TrueType font rendering engine
fuse-1.1.1_2        Free Unix (Sinclair ZX-)Spectrum Emulator
fusefs-kmod-0.3.9.p1.20080208_11 Kernel module for fuse
fusefs-libs-2.9.3_1 FUSE allows filesystem implementation in userspace
fusefs-s3fs-1.71_1  FUSE-based file system backed by Amazon S3
ganglia-monitor-core-3.4.0_1 Ganglia cluster monitor, monitoring daemon
gdbm-1.10           GNU database manager
gdk-pixbuf2-2.28.2  Graphic library for GTK+
gettext-0.18.3      GNU gettext package
glib-2.36.3         Some useful routines of C programming (current stable versi
gnomehier-3.0       A utility port that creates the GNOME directory tree
graphite2-1.2.3     Rendering capabilities for complex non-Roman writing system
gtk-update-icon-cache-2.24.22 Gtk-update-icon-cache utility from the Gtk+ toolkit
gtk2-2.24.22_1      Gimp Toolkit for X11 GUI (previous stable version)
harfbuzz-0.9.19     OpenType text shaping engine
hicolor-icon-theme-0.12 A high-color icon theme shell from the FreeDesktop project
icu-50.1.2          International Components for Unicode (from IBM)
inputproto-2.3      Input extension headers
ipmitool-1.8.12_3   CLI to manage IPMI systems
jasper-1.900.1_12   An implementation of the codec specified in the JPEG-2000 s
java-zoneinfo-2013.d Updated Java timezone definitions
javavmwrapper-2.4_3 Wrapper script for various Java Virtual Machines
jbigkit-1.6         Lossless compression for bi-level images such as scanned pa
jpeg-8_4            IJG's jpeg compression utilities
kbproto-1.0.6       KB extension headers
libGL-7.6.1_4       OpenGL library that renders using GLX or DRI
libGLU-9.0.0        OpenGL utility library
libICE-1.0.8,1      Inter Client Exchange library for X11
libSM-1.2.1,1       Session Management library for X11
libX11-1.6.0,1      X11 library
libXau-1.0.8        Authentication Protocol library for X11
libXaw-1.0.11,2     X Athena Widgets library
libXcomposite-0.4.4,1 X Composite extension library
libXcursor-1.1.14   X client-side cursor loading library
libXdamage-1.1.4    X Damage extension library
libXdmcp-1.1.1      X Display Manager Control Protocol library
libXext-1.3.2,1     X11 Extension library
libXfixes-5.0.1     X Fixes extension library
libXft-2.3.1        Client-sided font API for X applications
libXi-1.7.2,1       X Input extension library
libXinerama-1.1.3,1 X11 Xinerama library
libXmu-1.1.1,1      X Miscellaneous Utilities libraries
libXp-1.0.2,1       X print library
libXpm-3.5.10       X Pixmap library
libXrandr-1.4.2     X Resize and Rotate extension library
libXrender-0.9.8    X Render extension library
libXt-1.1.4,1       X Toolkit library
libXtst-1.2.2       X Test extension
libXxf86vm-1.1.3    X Vidmode Extension
libconfuse-2.7      Configuration file parsing library
libdrm-2.4.17_1     Userspace interface to kernel Direct Rendering Module servi
libffi-3.0.13       Foreign Function Interface
libfontenc-1.1.2    The fontenc Library
libgcrypt-1.5.2     General purpose crypto library based on code used in GnuPG
libgpg-error-1.12   Common error values for all GnuPG components
libiconv-1.14_1     A character set conversion library
libpciaccess-0.13.2 Generic PCI access library
libpthread-stubs-0.3_3 This library provides weak aliases for pthread functions
libspectrum-1.1.1   Handling of ZX-Spectrum emulator files formats
libxcb-1.9.1        The X protocol C-language Binding (XCB) library
libxml2-2.8.0_2     XML parser library for GNOME
libxslt-1.1.28_1    The XSLT C library for GNOME
mkfontdir-1.0.7     Create an index of X font files in a directory
mkfontscale-1.1.1   Creates an index of scalable font files for X
mysql-client-5.5.25 Multithreaded SQL database (client)
mysql-client-5.6.12 Multithreaded SQL database (client)
mysql-server-5.5.25 Multithreaded SQL database (server)
net-snmp-5.7.2_3    An extendable SNMP implementation
open-motif-2.3.4    Motif X11 Toolkit (industry standard GUI (IEEE 1295))
openjdk6-b27_6      Oracle's Java 6 virtual machine release under the GPL v2
pango-1.34.1_1      An open-source framework for the layout and rendering of i1
pciids-20131225     Database of all known IDs used in PCI devices
pcre-8.33           Perl Compatible Regular Expressions library
perl-5.14.4         Practical Extraction and Report Language
perl5-5.16.3_6      Practical Extraction and Report Language
pixman-0.30.0       Low-level pixel manipulation library
pkg_replace-0.8.0   A utility for upgrading installed packages
pkgconf-0.9.2_1     Utility to help to configure compiler and linker flags
png-1.5.17          Library for manipulating PNG images
printproto-1.0.5    Print extension headers
pure-ftpd-1.0.36    A small, easy to set up, fast, and very secure FTP server
py27-fail2ban-0.8.9 Scans log files and bans IP that makes too many password fa
python-2.7_1,2      The "meta-port" for the default version of Python interpret
python2-2_2         The "meta-port" for version 2 of the Python interpreter
python27-2.7.5_1    Interpreted object-oriented programming language
rabbitmq-3.0.4      RabbitMQ is an implementation of AMQP
randrproto-1.4.0    Randr extension headers
recordproto-1.14.2  RECORD extension headers
renderproto-0.11.1  RenderProto protocol headers
rrdtool-1.4.7_2     Round Robin Database Tools
rsync-3.0.9_3       Network file distribution/synchronization utility
serf-1.2.1_1        Serf HTTP client library
sg3_utils-1.37      Set of utilities that send SCSI commands to devices
shared-mime-info-1.1 MIME types database from the freedesktop.org project
smartmontools-6.2   S.M.A.R.T. disk monitoring tools
sqlite3-3.7.17_1    SQL database engine in a C library
subversion-1.8.1    Version control system
sudo-1.8.8          Allow others to run commands as root
tiff-4.0.3          Tools and library routines for working with TIFF images
vim-lite-7.3.1314_2 Vi "workalike", with many additional features (Lite package
xbitmaps-1.1.1      X.Org bitmaps data
xcb-util-0.3.9_1,1  A module with libxcb/libX11 extension/replacement libraries
xcb-util-renderutil-0.3.8 Convenience functions for the Render extension
xextproto-7.2.1     XExt extension headers
xf86vidmodeproto-2.3.1 XFree86-VidModeExtension extension headers
xineramaproto-1.2.1 Xinerama extension headers
xmlstarlet-1.4.2    Command Line XML Toolkit
xorg-fonts-truetype-7.7_1 X.Org TrueType fonts
xproto-7.0.24       X11 protocol headers
zoneinfo-2012.c     Updated timezone definitions
root@ES141-Jan1:/cbdir/aafak # 

I had used it in my Ubuntu setup, but I am not able to do it in FreeBSD 9.
Can you please provide the steps to do it?

Original issue reported on code.google.com by [email protected] on 7 Jan 2015 at 11:01

EU buckets support

It seems s3backer doesn't support EU buckets:

EU buckets MUST be addressed using the virtual hosted style method:
http://<bucket>.s3.amazonaws.com/

See:
http://docs.amazonwebservices.com/AmazonS3/latest/index.html?VirtualHosting.html
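
For contrast, the two addressing styles look like this (the second is the one
EU buckets require); the --vhost flag seen in other reports in this collection
selects the virtual hosted style:

Path style:           http://s3.amazonaws.com/<bucket>/<key>
Virtual hosted style: http://<bucket>.s3.amazonaws.com/<key>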

Original issue reported on code.google.com by [email protected] on 1 Oct 2009 at 7:51

Feature request: dynamic threads

It would be really neat if s3backer were able to adjust the number of active 
threads based on performance.

Here's my thinking...

When too many threads have been configured for the available upload bandwidth, 
packets start to get lost and s3backer starts to see operation timeouts. 
Suppose s3backer took advantage of this as follows:

* Allow the user to configure a minimum and maximum number of active threads.
* Start out using the maximum configured number of threads.
* Each time an operation timeout happens, decrease the active thread count by 
1, unless it's already at the minimum.
* Whenever a certain number of writes to S3 occurs without any timeouts, 
increase the active thread count by 1 unless it's already at the maximum.
* Log the active thread count each time it is decreased or increased, so that 
the user can determine from his logs the optimal number of threads to use.

With this approach, I believe that s3backer will hover most of the time around 
the optimal active thread count, with occasional short-lived detours lower or 
higher.
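
A minimal sketch of this adjust-on-timeout scheme in C, with hypothetical
names (struct thread_governor, raise_after) rather than s3backer's actual
internals:

#include <pthread.h>
#include <syslog.h>

/* AIMD-style governor: one timeout steps the target thread count down;
 * a run of clean writes steps it back up. */
struct thread_governor {
    pthread_mutex_t lock;
    unsigned min_threads, max_threads;  /* user-configured bounds */
    unsigned active;                    /* current target thread count */
    unsigned clean_writes;              /* successes since the last change */
    unsigned raise_after;               /* e.g. raise after 1000 clean writes */
};

static void
governor_on_timeout(struct thread_governor *g)
{
    pthread_mutex_lock(&g->lock);
    if (g->active > g->min_threads) {
        g->active--;
        syslog(LOG_INFO, "timeout: active threads now %u", g->active);
    }
    g->clean_writes = 0;
    pthread_mutex_unlock(&g->lock);
}

static void
governor_on_success(struct thread_governor *g)
{
    pthread_mutex_lock(&g->lock);
    if (++g->clean_writes >= g->raise_after && g->active < g->max_threads) {
        g->active++;
        g->clean_writes = 0;
        syslog(LOG_INFO, "healthy: active threads now %u", g->active);
    }
    pthread_mutex_unlock(&g->lock);
}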

I took a stab at implementing this but the code is sufficiently complex that I 
didn't feel like I could do it justice in the time I have available. It would 
probably be easier for the guy who wrote the code. ;-)


Original issue reported on code.google.com by [email protected] on 19 Oct 2010 at 4:25

Simple locking mechanism to prevent simultaneous mounts

With the data loss hazard in issue 9, it might be wise to use a small file or
meta information to indicate that the filesystem is mounted. A simple locking
mechanism will protect against unintended unsafe mounts (e.g. laptop did not
unmount cleanly due to network issues).
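
A minimal sketch of the probe half in C, assuming a well-known lock object per
bucket; bucket_is_locked and lock_url are hypothetical, and request signing is
omitted for brevity:

#include <curl/curl.h>

/* HEAD a "lock" object in the bucket: 200 means another client holds the
 * mount, 404 means it is free.  Returns -1 on curl setup failure. */
static int
bucket_is_locked(const char *lock_url)
{
    CURL *curl = curl_easy_init();
    long status = 0;

    if (curl == NULL)
        return -1;
    curl_easy_setopt(curl, CURLOPT_URL, lock_url);
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);         /* HEAD request */
    if (curl_easy_perform(curl) == CURLE_OK)
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
    curl_easy_cleanup(curl);
    return status == 200;
}

Note that probe-then-create is not atomic on S3, so this narrows the window
for a double mount rather than eliminating it entirely.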

Original issue reported on code.google.com by jonsview on 14 Aug 2009 at 1:56

Cannot mount S3 bucket as a loopback device in FreeBSD 9

What steps will reproduce the problem?
1. Download s3backer-1.3.7.tar.gz and extract it
2. Install the required libraries (curl, fuse, fusefs-s3fs-1.71_1)
3. Go to the s3backer directory and run the configure script
4. root@Es140-Nov29:~/aafak/s3backer-1.3.7 # make
5. root@Es140-Nov29:~/aafak/s3backer-1.3.7 # make install
6. root@Es140-Nov29:~/aafak # mkdir s3b.mnt demo.mnt
7. root@Es140-Nov29:~/aafak # s3backer --readOnly s3backer-demo s3b.mnt
   s3backer: auto-detecting block size and total file size...
   s3backer: auto-detected block size=4k and total size=20m
8. root@Es140-Nov29:~/aafak # mount -o loop s3b.mnt/file demo.mnt
   mount: s3b.mnt/file: mount option <loop> is unknown: Invalid argument

What is the expected output? What do you see instead?
It should mount. Instead, it fails with:

mount: s3b.mnt/file: mount option <loop> is unknown: Invalid argument

What version of the product are you using? On what operating system?
s3backer-1.3.7 (with the changes from revision 498 applied):
+++ s3b_config.c    (working copy)
@@ -421,7 +421,9 @@
     "-onegative_timeout=31536000",
     "-oattr_timeout=0",             // because statistics file length changes
     "-odefault_permissions",
+#ifndef __FreeBSD__
     "-onodev",
+#endif
     "-onosuid",
 #ifdef __APPLE__
     "-odaemon_timeout=" FUSE_MAX_DAEMON_TIMEOUT_STRING,


OS: FreeBSD 9
curl-7.34.0 
fusefs-libs-2.9.3_1 
fusefs-s3fs-1.71_1 
libconfuse-2.7     


Please provide any additional information below.
Attached config logs.

Can anyone please help me mount s3backer as a loopback device in FreeBSD 9?
I also searched Google for a FreeBSD loop mount option; it suggested mdconfig,
but I could not find mdconfig on FreeBSD 9.
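
For reference, FreeBSD's equivalent of a Linux loop mount is a vnode-backed
md(4) device; mdconfig(8) is part of the FreeBSD base system. Assuming the
file contains a filesystem FreeBSD can mount (e.g. UFS), something like:

mdconfig -a -t vnode -f s3b.mnt/file -u 0
mount /dev/md0 demo.mnt

and, after unmounting, detach with:

mdconfig -d -u 0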


Original issue reported on code.google.com by [email protected] on 8 Jan 2015 at 11:38

s3backer fails on buckets outside the "US Standard" region

What steps will reproduce the problem?
1. Make an S3 bucket in region "Ireland".
2. Apply s3backer to it.

What is the expected output? What do you see instead?

s3backer fails to read or write to the bucket with messages like

* Failed writing received data to disk/application

What version of the product are you using? On what operating system?

s3backer-1.3.2 on Mac OS X 10.7

rb@Crane$ uname -a
Darwin Crane.local 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun  7 16:33:36 PDT 2011; root:xnu-1504.15.3~1/RELEASE_I386 i386

Please provide any additional information below.

Reading the source code, I found the debug-http flag and used it. Amazon is
sending back HTTP 301 "Moved Permanently" responses to requests.

< HTTP/1.1 301 Moved Permanently
...
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Thu, 04 Aug 2011 15:28:54 GMT
< Connection: close
< Server: AmazonS3

These do not seem to include a Location header pointing to the new endpoint.

I'm happy to report that s3backer seems to be working fine for me with a bucket 
in the US Standard region.
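
Untested here, but the --vhost flag (see the EU buckets report above) makes
s3backer use virtual hosted style URLs, which DNS routes to the bucket's own
regional endpoint and which should avoid the 301:

s3backer --vhost <bucket> <mountpoint>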

Original issue reported on code.google.com by [email protected] on 4 Aug 2011 at 3:46

Unable to mount on FreeBSD

What steps will reproduce the problem?
1. Attempt to mount via: s3backer --blockSize=128k --size=1t -d --listBlocks bucket-name /vol/s3

What is the expected output? What do you see instead?
Expected: File system to be mounted
Error messages:
mount_fusefs: -o dev: option not supported
fuse: failed to mount file system: Operation now in progress

What version of the product are you using? On what operating system?
s3backer version 1.3.7 (r496)
Freebsd 9.2-RELEASE
fusefs-kmod-0.3.9.p1.20080208_11
fusefs-libs-2.9.3_1

Please provide any additional information below.
Attached is the full output when in debug mode.

Original issue reported on code.google.com by [email protected] on 4 Nov 2013 at 4:14

Repeated 400 Bad Request errors

Very often, after some errors (HTTP timeout, 500), s3backer starts to
generate "400 Bad Request" errors and stays stuck in that condition (until
the retry timeout expires and the give-up message is logged).

By using tcpdump I have found the same pattern:

(Some network error occurs; unplugging the cable is enough.)

20:04:35.281374 IP macbook.58450 > s3.amazonaws.com.http: tcp 365
E...4.@.@.'....eH....R.Pc....e..P...Ss..PUT /du-backup3/macos0000080f HTTP/1.1
Us
20:04:35.603823 IP s3.amazonaws.com.http > macbook.58450: tcp 25
E([email protected]..&JD..HTTP/1.1 100 Continue



20:04:55.613733 IP s3.amazonaws.com.http > macbook.58450: tcp 630
E([email protected]..&R...HTTP/1.1 400 Bad Request
x-amz-request-id
20:04:55.614898 IP s3.amazonaws.com.http > macbook.58450: tcp 5
H......e.P.R.e..c...P..&9...0


These messages repeat until the retry timeout.

It looks like s3backer starts a PUT request, S3 answers "100 Continue",
nothing happens for 20 seconds, and then S3 says "400 Bad Request". (The
20-second silence suggests the request body is never sent after the network
error, so S3 most likely times the request out and reports that as 400.)
s3backer complains in syslog, waits, and repeats the same pattern.
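
One possible client-side mitigation, sketched with standard libcurl options
(not necessarily what s3backer actually sets): abort a transfer that stalls
instead of waiting for S3 to time it out.

#include <curl/curl.h>

/* Abort any transfer that moves fewer than 1 kB/s for 30 seconds,
 * rather than waiting for S3's 400 response. */
static void
set_stall_detection(CURL *curl)
{
    curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1024L); /* bytes/sec */
    curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, 30L);    /* seconds */
}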

It happens with s3backer 1.0.4 on Mac OS X 10.5.4

s3backer connect string:

s3backer --prefix=macos --size=75M --filename=<local-file>
--maxRetryPause=5000000 -o daemon_timeout=3600 <bucket> <local-dir>

I am writing a file with dd:
dd if=<another local file> of=<local-file on s3backer> bs=4096

tcpdump called like this:
tcpdump -i en1 -A -q 'tcp and (((ip[2:2] - ((ip[0]&0xf)<<2)) -
((tcp[12]&0xf0)>>2)) != 0)'

Original issue reported on code.google.com by [email protected] on 10 Jul 2008 at 12:21

Can't mount anymore when updating to 1.3.3

I created a reiserfs filesystem successfully with s3backer version 1.3.1:

s3backer --encrypt --vhost --blockSize=128k --size=5M --listBlocks mybucket 
mnts3/

It worked fine, but after updating to version 1.3.3 I can't mount it anymore.
Has the encryption format changed between versions 1.3.1 and 1.3.3?

I can still use the filesystem with s3backer 1.3.1 but not with 1.3.3.

Original issue reported on code.google.com by [email protected] on 7 Jun 2012 at 11:49

Feature request: list blocks in background

listBlocks could be done in the background in a worker thread while the
filesystem is simultaneously used for reading and writing. It doesn't seem
necessary to block the mount until listBlocks has finished; a sketch follows.
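
A minimal sketch of the idea, with hypothetical names (list_blocks_worker,
populate_block_bitmap, struct s3b_state); until the worker finishes, reads
would have to treat unlisted blocks as possibly non-zero:

#include <err.h>
#include <pthread.h>

struct s3b_state;                                   /* hypothetical */
void populate_block_bitmap(struct s3b_state *s);    /* hypothetical: does the S3 LISTs */

static void *
list_blocks_worker(void *arg)
{
    populate_block_bitmap(arg);
    return NULL;
}

static void
start_background_list(struct s3b_state *state)
{
    pthread_t tid;

    if (pthread_create(&tid, NULL, list_blocks_worker, state) != 0)
        err(1, "pthread_create");
    pthread_detach(tid);                            /* mount proceeds immediately */
}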

Original issue reported on code.google.com by [email protected] on 20 Oct 2010 at 2:32

on OS X: http_io.c:759:22: error: use of undeclared identifier 'HOST_NAME_MAX'

cc -DHAVE_CONFIG_H -I.    -D_FILE_OFFSET_BITS=64 -D_DARWIN_USE_64_BIT_INODE 
-I/usr/local/Cellar/fuse4x/0.9.2/include/fuse  -g -O3 -pipe -Wall 
-Waggregate-return -Wcast-align -Wchar-subscripts -Wcomment -Wformat -Wimplicit 
-Wmissing-declarations -Wmissing-prototypes -Wnested-externs -Wno-long-long 
-Wparentheses -Wpointer-arith -Wredundant-decls -Wreturn-type -Wswitch 
-Wtrigraphs -Wuninitialized -Wunused -Wwrite-strings -Wshadow 
-Wstrict-prototypes -Wcast-qual  -D_FILE_OFFSET_BITS=64 
-D_DARWIN_USE_64_BIT_INODE -I/usr/local/Cellar/fuse4x/0.9.2/include/fuse  -MT 
reset.o -MD -MP -MF .deps/reset.Tpo -c -o reset.o reset.c

http_io.c:759:22: error: use of undeclared identifier 'HOST_NAME_MAX'
        char content[HOST_NAME_MAX + 64];
                     ^
1 error generated.
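
A likely portability fix (a sketch, not the project's official patch): OS X
does not define HOST_NAME_MAX, so supply a fallback before its use in
http_io.c:

#include <limits.h>

#ifndef HOST_NAME_MAX
#define HOST_NAME_MAX 255   /* matches _POSIX_HOST_NAME_MAX */
#endif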


Version 1.3.5

OS X: 10.8.4-x86_64
Xcode: 4.6.2
CLT: 4.6.0.0.1.1365549073

Original issue reported on code.google.com by [email protected] on 5 Jun 2013 at 2:28

Needed to add pthread library check to configure.ac

I tried to build r437 from subversion and it complained about pthread_create 
not being there. I had to make this change to make it build:

--- configure.ac~   2010-10-14 23:17:21.977142077 -0400
+++ configure.ac    2010-10-14 23:17:23.055284667 -0400
@@ -51,6 +51,8 @@ PKG_CHECK_MODULES(FUSE, fuse,,
     [AC_MSG_ERROR(["fuse" not found in pkg-config])])

 # Check for required libraries
+AC_CHECK_LIB(pthread, pthread_create,,
+   [AC_MSG_ERROR([required library pthread missing])])
 AC_CHECK_LIB(curl, curl_version,,
    [AC_MSG_ERROR([required library libcurl missing])])
 AC_CHECK_LIB(crypto, BIO_new,,

Thanks for creating and maintaining this package!

Original issue reported on code.google.com by [email protected] on 15 Oct 2010 at 4:00

Feature request: block de-duplication

I am a complete noob when it comes to FUSE and have only a basic academic
understanding of file systems, but I would like to contribute an improvement
to s3backer as my final graduation project. I was thinking about implementing
a basic deduplication mechanism, where the dedup indexes are stored on the
client machine, like the cache file. I think this could improve S3 storage
usage and make s3backer even more cost effective.
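
A minimal sketch of such a client-side index in C, assuming MD5 content
hashing via OpenSSL (which s3backer already links against); all names here
are hypothetical, and the linear scan is only for clarity; a real index would
be a hash table persisted like the cache file:

#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <openssl/md5.h>

/* One index entry: content hash -> S3 block number storing that content. */
struct dedup_entry {
    unsigned char md5[MD5_DIGEST_LENGTH];
    uint64_t      block_num;
};

/* Return the block number already storing identical content,
 * or (uint64_t)-1 if the content is new. */
static uint64_t
dedup_lookup(const void *data, size_t len,
             const struct dedup_entry *index, size_t nentries)
{
    unsigned char digest[MD5_DIGEST_LENGTH];
    size_t i;

    MD5(data, len, digest);
    for (i = 0; i < nentries; i++) {
        if (memcmp(index[i].md5, digest, MD5_DIGEST_LENGTH) == 0)
            return index[i].block_num;
    }
    return (uint64_t)-1;
}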

I was also thinking about using s3backer as an archiving solution for a
production environment. As I would be storing only cold data, eventual
consistency would not be a problem, since the time between a read and its
preceding write would be very large. Is there any scenario where I can get
corrupted data when the cache lives on a persistent storage device? Let's not
consider the case where that device itself fails.

Thanks in advance,

Original issue reported on code.google.com by [email protected] on 25 Mar 2014 at 1:01

Allow configuring a limit on the number of outstanding dirty blocks

s3backer currently allows an unlimited number of blocks in the block cache
to be dirty at the same time.

Some situations may want to limit this number to avoid the degree of
inconsistency that can occur in case of a crash.

Suggest adding a new flag `--blockCacheMaxDirty=NUMBLOCKS`. When a write is
attempted while the maximum number of dirty blocks has already been reached,
that write would block until a dirty block is flushed; see the sketch below.
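
A minimal sketch of the blocking behavior, with hypothetical names; a real
implementation would fold this into the block cache's existing locking:

#include <pthread.h>

struct dirty_gate {
    pthread_mutex_t lock;
    pthread_cond_t  drained;
    unsigned        num_dirty;
    unsigned        max_dirty;          /* --blockCacheMaxDirty */
};

/* Called before dirtying a block: waits while the cache is at the limit. */
static void
dirty_gate_enter(struct dirty_gate *g)
{
    pthread_mutex_lock(&g->lock);
    while (g->num_dirty >= g->max_dirty)
        pthread_cond_wait(&g->drained, &g->lock);
    g->num_dirty++;
    pthread_mutex_unlock(&g->lock);
}

/* Called after a dirty block has been written out to S3. */
static void
dirty_gate_exit(struct dirty_gate *g)
{
    pthread_mutex_lock(&g->lock);
    g->num_dirty--;
    pthread_cond_signal(&g->drained);
    pthread_mutex_unlock(&g->lock);
}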

Original issue reported on code.google.com by [email protected] on 1 Oct 2009 at 7:59

Problems with headers on FreeBSD

What steps will reproduce the problem?
1. Attempt to compile on FreeBSD
2. Observe errors about locating libraries, then header issues

What is the expected output? What do you see instead?

When you try to compile on FreeBSD, configure seems to skip searching
/usr/local for headers and libraries. It also doesn't seem to have the
command-line option for passing in curl's path that I've seen elsewhere
(e.g. --with-curl=/usr/local). Those problems were easily fixed, though
(via setting LDFLAGS=-L/usr/local/lib CPPFLAGS=-I/usr/local/include).

When trying to compile, however, I got a ton of header errors. Once I added
"#include <sys/types.h>" to all files that include s3backer.h, everything
compiled successfully under both GCC and Clang. I did note one minor warning
under Clang, however:
s3b_config.c:1047:39: warning: comparison of unsigned expression < 0 is always 
false [-Wtautological-compare]
        if (config.http_io.key_length < 0 || config.http_io.key_length > EVP_MAX_KEY_LENGTH) {
            ~~~~~~~~~~~~~~~~~~~~~~~~~ ^ ~
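
If key_length is unsigned, the "< 0" half of that test is dead code; a
possible fix (a sketch, not the project's actual patch) keeps only the
meaningful bound:

/* key_length is unsigned, so "< 0" can never be true */
if (config.http_io.key_length > EVP_MAX_KEY_LENGTH) {
    /* ... existing error handling ... */
}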

Original issue reported on code.google.com by [email protected] on 4 Nov 2013 at 3:12
