
FUSE-based file system backed by Amazon S3

License: GNU General Public License v2.0


s3fs-fuse's Introduction

s3fs

s3fs allows Linux, macOS, and FreeBSD to mount an S3 bucket via FUSE (Filesystem in Userspace).
s3fs lets you operate on files and directories in an S3 bucket as if they were in a local file system.
s3fs preserves the native object format for files, allowing use of other tools like AWS CLI.


Features

  • large subset of POSIX including reading/writing files, directories, symlinks, mode, uid/gid, and extended attributes
  • compatible with Amazon S3, and other S3-based object stores
  • allows random writes and appends
  • large files via multi-part upload
  • renames via server-side copy
  • optional server-side encryption
  • data integrity via MD5 hashes
  • in-memory metadata caching
  • local disk data caching
  • user-specified regions, including Amazon GovCloud
  • authenticate via v2 or v4 signatures

Installation

Many systems provide pre-built packages:

  • Amazon Linux via EPEL:

    sudo amazon-linux-extras install epel
    sudo yum install s3fs-fuse
    
  • Arch Linux:

    sudo pacman -S s3fs-fuse
    
  • Debian 9 and Ubuntu 16.04 or newer:

    sudo apt install s3fs
    
  • Fedora 27 or newer:

    sudo dnf install s3fs-fuse
    
  • Gentoo:

    sudo emerge net-fs/s3fs
    
  • RHEL and CentOS 7 or newer via EPEL:

    sudo yum install epel-release
    sudo yum install s3fs-fuse
    
  • SUSE 12 and openSUSE 42.1 or newer:

    sudo zypper install s3fs
    
  • macOS 10.12 and newer via Homebrew:

    brew install --cask macfuse
    brew install gromgit/fuse/s3fs-mac
    
  • FreeBSD:

    pkg install fusefs-s3fs
    
  • Windows:

Windows has its own installation procedure; see this link.

Otherwise consult the compilation instructions.

Examples

s3fs supports the standard AWS credentials file stored in ${HOME}/.aws/credentials. Alternatively, s3fs supports a custom passwd file. Finally, s3fs recognizes the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables.
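
For example, a minimal sketch of the environment-variable approach (the key values, bucket name, and mount point below are placeholders):

# credentials are taken from the environment, so no passwd file is needed
export AWS_ACCESS_KEY_ID=ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=SECRET_ACCESS_KEY
s3fs mybucket /path/to/mountpoint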

The s3fs password file can be placed in either of two default locations:

  • a .passwd-s3fs file in the user's home directory (i.e. ${HOME}/.passwd-s3fs)
  • the system-wide /etc/passwd-s3fs file

Enter your credentials in a file ${HOME}/.passwd-s3fs and set owner-only permissions:

echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
chmod 600 ${HOME}/.passwd-s3fs

Run s3fs with an existing bucket mybucket and directory /path/to/mountpoint:

s3fs mybucket /path/to/mountpoint -o passwd_file=${HOME}/.passwd-s3fs

If you encounter any errors, enable debug output:

s3fs mybucket /path/to/mountpoint -o passwd_file=${HOME}/.passwd-s3fs -o dbglevel=info -f -o curldbg

You can also mount on boot by adding the following line to /etc/fstab:

mybucket /path/to/mountpoint fuse.s3fs _netdev,allow_other 0 0

If you use s3fs with a non-Amazon S3 implementation, specify the URL and path-style requests:

s3fs mybucket /path/to/mountpoint -o passwd_file=${HOME}/.passwd-s3fs -o url=https://url.to.s3/ -o use_path_request_style

or, in /etc/fstab:

mybucket /path/to/mountpoint fuse.s3fs _netdev,allow_other,use_path_request_style,url=https://url.to.s3/ 0 0

Note: You may also want to create the global credential file first:

echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs

Note 2: You may also need to make sure the netfs service (or its equivalent) is started on boot; see the sketch below.
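
A hedged sketch, assuming a SysV-init system such as CentOS 6; on systemd-based distributions the _netdev option together with remote-fs.target normally takes care of this:

# SysV init (e.g. CentOS 6): start netfs at boot so _netdev mounts are processed
sudo chkconfig netfs on
# systemd: ensure remote filesystems are mounted once the network is up
sudo systemctl enable remote-fs.target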

Limitations

Generally S3 cannot offer the same performance or semantics as a local file system. More specifically:

  • random writes or appends to files require rewriting the entire object, optimized with multi-part upload copy
  • metadata operations such as listing directories have poor performance due to network latency
  • non-AWS providers may have eventual consistency so reads can temporarily yield stale data (AWS offers read-after-write consistency since Dec 2020)
  • no atomic renames of files or directories
  • no coordination between multiple clients mounting the same bucket
  • no hard links
  • inotify detects only local modifications, not external ones by other clients or tools

References

  • CSI for S3 - Kubernetes CSI driver
  • docker-s3fs-client - Docker image containing s3fs
  • goofys - similar to s3fs but has better performance and less POSIX compatibility
  • s3backer - mount an S3 bucket as a single file
  • S3Proxy - combine with s3fs to mount Backblaze B2, EMC Atmos, Microsoft Azure, and OpenStack Swift buckets
  • s3ql - similar to s3fs but uses its own object format
  • YAS3FS - similar to s3fs but uses SNS to allow multiple clients to mount a bucket

Frequently Asked Questions

License

Copyright (C) 2010 Randy Rizun [email protected]

Licensed under the GNU GPL version 2

s3fs-fuse's People

Contributors

adamqqqplay, arkamar, carstengrohmann, driskell, eryugey, flandr, fly3366, gaul, ggtakec, jirapong, juliogonzalez, kahing, kristjanvalur, liuyongqing, lutzfinsterle2019, macos-fuse-t, mapreri, nkkashyap, nturner, orozery, psyvision, robbkistler, rockuw, rrizun, siketyan, swordsreversed, tlevi, vadimeremeev, vincentbernat, vvoidv


s3fs-fuse's Issues

s3fs has crashed with a segfault

I saw s3fs segfault this morning with the following message:

Jan 13 03:29:56 monitoring kernel: [1322370.223444] s3fs[7788]: segfault at 0 ip 00000000004275db sp 00007f2660b7d7c0 error 4 in s3fs[400000+4b000]

There are no accompanying messages in the logs, but I know there was some minor writing to S3 about 5 minutes before that.

The s3fs daemon had been running since 9 Jan with more than 1.5 TB written in that time. Before that I ran the same version of s3fs on the same host for 2 weeks with no errors (but with lower overall throughput, I think), and before that I used the older 1.61 version.

I'm running Debian Wheezy 7.3 with s3fs 1.74 (I think I used r499 for compilation):

root@monitoring:~# s3fs --version
Amazon Simple Storage Service File System 1.74
Copyright (C) 2010 Randy Rizun <[email protected]>
License GPL2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

need help installing on FreeBSD 10

I'm trying to compile from source on FreeBSD 10. Can someone help me with this?

[root@host /usr/local/fuse/s3fs-fuse-master]# ./autogen.sh
configure.ac:29: installing './compile'
configure.ac:25: installing './config.guess'
configure.ac:25: installing './config.sub'
configure.ac:26: installing './install-sh'
configure.ac:26: installing './missing'
src/Makefile.am: installing './depcomp'
parallel-tests: installing './test-driver'

[root@host /usr/local/fuse/s3fs-fuse-master]# ./configure
checking build system type... amd64-unknown-freebsd10.0
checking host system type... amd64-unknown-freebsd10.0
checking target system type... amd64-unknown-freebsd10.0
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... ./install-sh -c -d
checking for gawk... no
checking for mawk... no
checking for nawk... nawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking for g++... no
checking for c++... c++
checking whether the C++ compiler works... yes
checking for C++ compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether c++ accepts -g... yes
checking for style of include used by make... GNU
checking dependency style of c++... gcc3
checking for gcc... no
checking for cc... cc
checking whether we are using the GNU C compiler... yes
checking whether cc accepts -g... yes
checking for cc option to accept ISO C89... none needed
checking whether cc understands -c and -o together... yes
checking dependency style of cc... gcc3
checking s3fs build with nettle(GnuTLS)... no
checking s3fs build with OpenSSL... no
checking s3fs build with GnuTLS... no
checking s3fs build with NSS... no
./configure: 4270: Syntax error: word unexpected (expecting ")")
[root@host /usr/local/fuse/s3fs-fuse-master]#

Installing on Ubuntu 14.10

Hello

After running:

./configure --prefix=/usr --with-openssl

you get:

./configure: line 4271: syntax error near unexpected token `common_lib_checking,'
./configure: line 4271: `PKG_CHECK_MODULES(common_lib_checking, fuse >= 2.8.4 libcurl >= 7.0 libxml-2.0 >= 2.6)'

All dependencies were installed according to the documentation.

Regards,

Adrian

tmp file path

If I create a new file (for example like this:

dd if=/dev/zero of=output.dat bs=10M count=5

)

the file is first created in a temp folder and then uploaded to S3. Is there an option to specify the location where these files are created? Or is this the use_cache option?
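
For reference, a hedged sketch of pointing the cache at a specific directory with the use_cache option (the cache path below is a placeholder; whether this also covers the temporary staging location may depend on the s3fs version):

# keep locally cached/staged object data under /var/tmp/s3fs-cache
mkdir -p /var/tmp/s3fs-cache
s3fs mybucket /path/to/mountpoint -o passwd_file=${HOME}/.passwd-s3fs -o use_cache=/var/tmp/s3fs-cache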

Ok to use mount as a shared file directory across a cluster of servers?

What I'd like to do is use an S3 bucket as a mount point for a cluster of compute instances. The configuration would do more reading than writing, so I figure that would be OK. I thought of this solution when I was looking at auto-scaling and a Hadoop configuration, and thought it would be better for scaling and management to automount an S3 bucket via s3fs. I feel that support for write-locking would prevent corruption, even though the system I use doesn't overwrite files, just renames them by appending _N. Any pros/cons to this solution?

Operation not permitted on subdirectory

Hello,

I would like to connect to the S3-like API of e24cloud, but I am unable to list any subdirectories or read any files. I am only able to list the root directory.

I ran s3fs:

$ sudo s3fs -o allow_other -f -d rownosc-debug aa -ourl='e24files.com/'
    set_moutpoint_attribute(3379): PROC(uid=0, gid=0) - MountPoint(uid=1000, gid=1000, mode=40775)
s3fs_init(2650): init
s3fs_check_service(2968): check services.
    CheckBucket(2366): check a bucket.
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com/
    RequestPerform(1587): HTTP response code 200
s3fs_getattr(716): [path=/.Trash]
    HeadRequest(2006): [tpath=/.Trash]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com/.Trash
    RequestPerform(1587): HTTP response code 404
    RequestPerform(1611): HTTP response code 404 was returned, returning ENOENT
    HeadRequest(2006): [tpath=/.Trash/]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com/.Trash/
    RequestPerform(1587): HTTP response code 404
    RequestPerform(1611): HTTP response code 404 was returned, returning ENOENT
    HeadRequest(2006): [tpath=/.Trash_$folder$]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com/.Trash_%24folder%24
    RequestPerform(1587): HTTP response code 404
    RequestPerform(1611): HTTP response code 404 was returned, returning ENOENT
  list_bucket(2268): [path=/.Trash]
    ListBucketRequest(2410): [tpath=/.Trash]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com?delimiter=/&prefix=.Trash/&max-keys=1000
    RequestPerform(1587): HTTP response code 200
    append_objects_from_xml_ex(2354): contents_xp->nodesetval is empty.
    append_objects_from_xml_ex(2354): contents_xp->nodesetval is empty.
s3fs_getattr(716): [path=/.Trash-1000]
    HeadRequest(2006): [tpath=/.Trash-1000]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com/.Trash-1000
    RequestPerform(1587): HTTP response code 404
    RequestPerform(1611): HTTP response code 404 was returned, returning ENOENT
    HeadRequest(2006): [tpath=/.Trash-1000/]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com/.Trash-1000/
    RequestPerform(1587): HTTP response code 404
    RequestPerform(1611): HTTP response code 404 was returned, returning ENOENT
    HeadRequest(2006): [tpath=/.Trash-1000_$folder$]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com/.Trash-1000_%24folder%24
    RequestPerform(1587): HTTP response code 404
    RequestPerform(1611): HTTP response code 404 was returned, returning ENOENT
  list_bucket(2268): [path=/.Trash-1000]
    ListBucketRequest(2410): [tpath=/.Trash-1000]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com?delimiter=/&prefix=.Trash-1000/&max-keys=1000
    RequestPerform(1587): HTTP response code 200
    append_objects_from_xml_ex(2354): contents_xp->nodesetval is empty.
    append_objects_from_xml_ex(2354): contents_xp->nodesetval is empty.
s3fs_getattr(716): [path=/]
s3fs_getattr(716): [path=/]
s3fs_getattr(716): [path=/]
s3fs_access(2709): [path=/][mask=X_OK ]
s3fs_opendir(2077): [path=/][flags=100352]
s3fs_readdir(2225): [path=/]
  list_bucket(2268): [path=/]
    ListBucketRequest(2410): [tpath=/]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com?delimiter=/&prefix=&max-keys=1000
    RequestPerform(1587): HTTP response code 200
  readdir_multi_head(2146): [path=/][list=0]
    Request(3305): [count=7]
    AddStat(247): add stat cache entry[path=/button.html]
MultiRead(3247): failed a request(404: rownosc-debug.e24files.com/css/)
MultiRead(3247): failed a request(404: rownosc-debug.e24files.com/fonts/)
MultiRead(3247): failed a request(404: rownosc-debug.e24files.com/img/)
MultiRead(3247): failed a request(404: rownosc-debug.e24files.com/js/)
MultiRead(3247): failed a request(404: rownosc-debug.e24files.com/scss/)
MultiRead(3247): failed a request(404: rownosc-debug.e24files.com/less/)
    GetStat(170): stat cache hit [path=/button.html][time=1415675872][hit count=0]
    readdir_multi_head(2208): Could not find /css file in stat cache.
    readdir_multi_head(2208): Could not find /fonts file in stat cache.
    readdir_multi_head(2208): Could not find /img file in stat cache.
    readdir_multi_head(2208): Could not find /js file in stat cache.
    readdir_multi_head(2208): Could not find /less file in stat cache.
    readdir_multi_head(2208): Could not find /scss file in stat cache.
s3fs_getattr(716): [path=/]
s3fs_getattr(716): [path=/button.html]
    GetStat(170): stat cache hit [path=/button.html][time=1415675872][hit count=1]
s3fs_getattr(716): [path=/css]
    HeadRequest(2006): [tpath=/css]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com/css
    RequestPerform(1587): HTTP response code 200
  list_bucket(2268): [path=/css]
    ListBucketRequest(2410): [tpath=/css]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com?delimiter=/&prefix=css/&max-keys=1000
    RequestPerform(1587): HTTP response code 200
    append_objects_from_xml_ex(2354): contents_xp->nodesetval is empty.
    AddStat(247): add stat cache entry[path=/css/]
    GetStat(170): stat cache hit [path=/css/][time=1415675872][hit count=0]
s3fs_getattr(716): [path=/fonts]
    HeadRequest(2006): [tpath=/fonts]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com/fonts
    RequestPerform(1587): HTTP response code 200
  list_bucket(2268): [path=/fonts]
    ListBucketRequest(2410): [tpath=/fonts]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com?delimiter=/&prefix=fonts/&max-keys=1000
    RequestPerform(1587): HTTP response code 200
    append_objects_from_xml_ex(2354): contents_xp->nodesetval is empty.
    AddStat(247): add stat cache entry[path=/fonts/]
    GetStat(170): stat cache hit [path=/fonts/][time=1415675872][hit count=0]
s3fs_getattr(716): [path=/img]
    HeadRequest(2006): [tpath=/img]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com/img
    RequestPerform(1587): HTTP response code 200
  list_bucket(2268): [path=/img]
    ListBucketRequest(2410): [tpath=/img]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com?delimiter=/&prefix=img/&max-keys=1000
    RequestPerform(1587): HTTP response code 200
    AddStat(247): add stat cache entry[path=/img/]
    GetStat(170): stat cache hit [path=/img/][time=1415675872][hit count=0]
s3fs_getattr(716): [path=/js]
    HeadRequest(2006): [tpath=/js]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com/js
    RequestPerform(1587): HTTP response code 200
  list_bucket(2268): [path=/js]
    ListBucketRequest(2410): [tpath=/js]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com?delimiter=/&prefix=js/&max-keys=1000
    RequestPerform(1587): HTTP response code 200
    append_objects_from_xml_ex(2354): contents_xp->nodesetval is empty.
    AddStat(247): add stat cache entry[path=/js/]
    GetStat(170): stat cache hit [path=/js/][time=1415675872][hit count=0]
s3fs_getattr(716): [path=/less]
    HeadRequest(2006): [tpath=/less]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com/less
    RequestPerform(1587): HTTP response code 200
  list_bucket(2268): [path=/less]
    ListBucketRequest(2410): [tpath=/less]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com?delimiter=/&prefix=less/&max-keys=1000
    RequestPerform(1587): HTTP response code 200
    append_objects_from_xml_ex(2354): contents_xp->nodesetval is empty.
    AddStat(247): add stat cache entry[path=/less/]
    GetStat(170): stat cache hit [path=/less/][time=1415675872][hit count=0]
s3fs_getattr(716): [path=/scss]
    HeadRequest(2006): [tpath=/scss]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com/scss
    RequestPerform(1587): HTTP response code 200
  list_bucket(2268): [path=/scss]
    ListBucketRequest(2410): [tpath=/scss]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com?delimiter=/&prefix=scss/&max-keys=1000
    RequestPerform(1587): HTTP response code 200
    AddStat(247): add stat cache entry[path=/scss/]
    GetStat(170): stat cache hit [path=/scss/][time=1415675873][hit count=0]
s3fs_opendir(2077): [path=/][flags=100352]
s3fs_readdir(2225): [path=/]
  list_bucket(2268): [path=/]
    ListBucketRequest(2410): [tpath=/]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com?delimiter=/&prefix=&max-keys=1000
    RequestPerform(1587): HTTP response code 200
  readdir_multi_head(2146): [path=/][list=0]
    GetStat(170): stat cache hit [path=/button.html][time=1415675872][hit count=2]
    GetStat(170): stat cache hit [path=/css/][time=1415675872][hit count=1]
    GetStat(170): stat cache hit [path=/fonts/][time=1415675872][hit count=1]
    GetStat(170): stat cache hit [path=/img/][time=1415675872][hit count=1]
    GetStat(170): stat cache hit [path=/js/][time=1415675872][hit count=1]
    GetStat(170): stat cache hit [path=/less/][time=1415675872][hit count=1]
    GetStat(170): stat cache hit [path=/scss/][time=1415675873][hit count=1]
    Request(3305): [count=0]
    GetStat(170): stat cache hit [path=/button.html][time=1415675878][hit count=3]
    GetStat(170): stat cache hit [path=/css/][time=1415675878][hit count=2]
    GetStat(170): stat cache hit [path=/fonts/][time=1415675878][hit count=2]
    GetStat(170): stat cache hit [path=/img/][time=1415675878][hit count=2]
    GetStat(170): stat cache hit [path=/js/][time=1415675878][hit count=2]
    GetStat(170): stat cache hit [path=/less/][time=1415675878][hit count=2]
    GetStat(170): stat cache hit [path=/scss/][time=1415675878][hit count=2]
s3fs_opendir(2077): [path=/][flags=100352]
s3fs_readdir(2225): [path=/]
  list_bucket(2268): [path=/]
    ListBucketRequest(2410): [tpath=/]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com?delimiter=/&prefix=&max-keys=1000
    RequestPerform(1587): HTTP response code 200
  readdir_multi_head(2146): [path=/][list=0]
    GetStat(170): stat cache hit [path=/button.html][time=1415675878][hit count=4]
    GetStat(170): stat cache hit [path=/css/][time=1415675878][hit count=3]
    GetStat(170): stat cache hit [path=/fonts/][time=1415675878][hit count=3]
    GetStat(170): stat cache hit [path=/img/][time=1415675878][hit count=3]
    GetStat(170): stat cache hit [path=/js/][time=1415675878][hit count=3]
    GetStat(170): stat cache hit [path=/less/][time=1415675878][hit count=3]
    GetStat(170): stat cache hit [path=/scss/][time=1415675878][hit count=3]
    Request(3305): [count=0]
    GetStat(170): stat cache hit [path=/button.html][time=1415675878][hit count=5]
    GetStat(170): stat cache hit [path=/css/][time=1415675878][hit count=4]
    GetStat(170): stat cache hit [path=/fonts/][time=1415675878][hit count=4]
    GetStat(170): stat cache hit [path=/img/][time=1415675878][hit count=4]
    GetStat(170): stat cache hit [path=/js/][time=1415675878][hit count=4]
    GetStat(170): stat cache hit [path=/less/][time=1415675878][hit count=4]
    GetStat(170): stat cache hit [path=/scss/][time=1415675878][hit count=4]
s3fs_opendir(2077): [path=/][flags=100352]
s3fs_readdir(2225): [path=/]
  list_bucket(2268): [path=/]
    ListBucketRequest(2410): [tpath=/]
    RequestPerform(1571): connecting to URL rownosc-debug.e24files.com?delimiter=/&prefix=&max-keys=1000
    RequestPerform(1587): HTTP response code 200
  readdir_multi_head(2146): [path=/][list=0]
    GetStat(170): stat cache hit [path=/button.html][time=1415675878][hit count=6]
    GetStat(170): stat cache hit [path=/css/][time=1415675878][hit count=5]
    GetStat(170): stat cache hit [path=/fonts/][time=1415675878][hit count=5]
    GetStat(170): stat cache hit [path=/img/][time=1415675878][hit count=5]
    GetStat(170): stat cache hit [path=/js/][time=1415675878][hit count=5]
    GetStat(170): stat cache hit [path=/less/][time=1415675878][hit count=5]
    GetStat(170): stat cache hit [path=/scss/][time=1415675878][hit count=5]
    Request(3305): [count=0]
    GetStat(170): stat cache hit [path=/button.html][time=1415675878][hit count=7]
    GetStat(170): stat cache hit [path=/css/][time=1415675878][hit count=6]
    GetStat(170): stat cache hit [path=/fonts/][time=1415675878][hit count=6]
    GetStat(170): stat cache hit [path=/img/][time=1415675878][hit count=6]
    GetStat(170): stat cache hit [path=/js/][time=1415675878][hit count=6]
    GetStat(170): stat cache hit [path=/less/][time=1415675878][hit count=6]
    GetStat(170): stat cache hit [path=/scss/][time=1415675878][hit count=6]
s3fs_getattr(716): [path=/css]
    GetStat(170): stat cache hit [path=/css/][time=1415675878][hit count=7]
s3fs_getattr(716): [path=/]
s3fs_access(2709): [path=/css][mask=X_OK ]
    GetStat(170): stat cache hit [path=/css/][time=1415675878][hit count=8]
s3fs_access(2709): [path=/css][mask=X_OK ]
    GetStat(170): stat cache hit [path=/css/][time=1415675879][hit count=9]

And then I do some activity:

/tmp/s3fs-fuse-master$ ls -lah aa/
razem 12K
drwxrwxrwx 1 root root    0 sty  1  1970 .
drwxrwxr-x 7 adas adas 4,0K lis 11 03:47 ..
---------- 1 root root 3,6K lis 10 18:29 button.html
d--------- 1 root root    0 lis 10 18:30 css
d--------- 1 root root    0 lis 10 18:30 fonts
d--------- 1 root root    0 lis 10 18:30 img
d--------- 1 root root    0 lis 10 17:24 js
d--------- 1 root root    0 lis 10 18:30 less
d--------- 1 root root    0 lis 10 18:31 scss
/tmp/s3fs-fuse-master$ ls -lah aa/css
ls: nie można otworzyć katalogu aa/css: Operacja niedozwolona

("Operacja niedozwolona" means "Operation not permitted" in Polish.)

How can I debug that?

curl ssl problems

as described on

https://code.google.com/p/s3fs/issues/detail?id=343&can=1
https://code.google.com/p/s3fs/issues/detail?id=257&can=1
https://code.google.com/p/s3fs/issues/detail?id=235&can=1

the problem with curl error 35 (CURLE_SSL_CONNECT_ERROR) still exists in version 1.74. In my case it was triggered by a heavy mv of 180 files, 1 GB per file.

System: Ubuntu 12.04

Mar  7 10:04:30 vm s3fs: connecting to URL https://bucket123.s3-eu-west-1.amazonaws.com?delimiter=/&prefix=path/file/&max-keys=1000
Mar  7 10:04:30 vm s3fs: HTTP response code 200
Mar  7 10:04:30 vm s3fs: contents_xp->nodesetval is empty.
Mar  7 10:04:30 vm s3fs: contents_xp->nodesetval is empty.
Mar  7 10:04:30 vm s3fs: copying... [path=/path/file]
Mar  7 10:04:30 vm s3fs: connecting to URL https://bucket123.s3-eu-west-1.amazonaws.com/path/file
Mar  7 10:06:13 vm s3fs: ### CURLE_RECV_ERROR
Mar  7 10:06:15 vm s3fs: ### retrying...
Mar  7 10:06:15 vm s3fs: Retry request. [type=2][url=https://bucket123.s3-eu-west-1.amazonaws.com/path/file][path=/path/file]
Mar  7 10:06:15 vm s3fs: ###curlCode: 35  msg: SSL connect error

s3fs crashes with segfault

Version: 1.76

We have a script that parses some files and writes output to S3. s3fs-fuse constantly crashes and we get a "Transport is not connected" error.

The only log message in the messages log is:
kernel: s3fs[28576]: segfault at 0 ip 0000000000423679 sp 00007f3d87569870 error 4 in s3fs[400000+51000]

We tried running with s3fs -f and... it does not crash at all. So we cannot get any real output besides that message in the messages log file.

Not sure how to go about this. I think we can get a dump file if that will help.

Thanks!

Andrew

s3fs crashes on ls

I can mount the bucket that I want, but I hit a crash on ls. After the crash I see "Device not configured", and s3fs seems to unmount it automatically(?).

$ cd /mnt/s3fs-test/
$ ls
(It's ok.)
$ cd logs
(logs has many log files)
$ ls
ls: fts_read: Device not configured
$ ls
ls: .: No such file or directory

This is the log with the -f -d options; the actual command was: s3fs BUCKET_NAME /mnt/s3fs-test/ -o uid=UID -o gid=GID -o use_cache=/tmp -d -f

(snip)
    Request(3305): [count=20]
    AddStat(247): add stat cache entry[path=/logs/(snip)20141016-03_0.gz]
    AddStat(247): add stat cache entry[path=/logs/(snip)20141020-11_0.gz]
    AddStat(247): add stat cache entry[path=/logs/(snip)20141020-10_0.gz]
MultiRead(3264): failed to read(remaining: 16 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 15 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 14 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 13 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 12 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 11 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 10 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 9 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 8 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 7 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 6 code: 7  msg: Couldn't connect to server), so retry this.
    AddStat(247): add stat cache entry[path=/logs/(snip)20141019-14_0.gz]
    AddStat(247): add stat cache entry[path=/logs/(snip)20141019-16_0.gz]
    AddStat(247): add stat cache entry[path=/logs/(snip)20141019-13_0.gz]
    AddStat(247): add stat cache entry[path=/logs/(snip)20141019-07_0.gz]
    AddStat(247): add stat cache entry[path=/logs/(snip)20141019-17_0.gz]
    AddStat(247): add stat cache entry[path=/logs/(snip)20141020-07_0.gz]
MultiRead(3264): failed to read(remaining: 10 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 9 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 8 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 7 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 6 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 5 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 4 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 3 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 2 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 1 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 0 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 10 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 9 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 8 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 7 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 6 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 5 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 4 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 3 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 2 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 1 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 0 code: 7  msg: Couldn't connect to server), so retry this.
MultiRead(3264): failed to read(remaining: 10 code: 7  msg: Couldn't connect to server), so retry this.
multi_head_retry_callback(2119): Over retry count(3) limit(/logs/(snip)20141018-04_0.gz).
MultiRead(3264): failed to read(remaining: 9 code: 7  msg: Couldn't connect to server), so retry this.
multi_head_retry_callback(2119): Over retry count(3) limit(/logs/(snip)20141018-02_0.gz).
MultiRead(3264): failed to read(remaining: 8 code: 7  msg: Couldn't connect to server), so retry this.
multi_head_retry_callback(2119): Over retry count(3) limit(/logs/(snip)20141018-01_0.gz).
MultiRead(3264): failed to read(remaining: 7 code: 7  msg: Couldn't connect to server), so retry this.
multi_head_retry_callback(2119): Over retry count(3) limit(/logs/(snip)20141017-06_0.gz).
MultiRead(3264): failed to read(remaining: 6 code: 7  msg: Couldn't connect to server), so retry this.
multi_head_retry_callback(2119): Over retry count(3) limit(/logs/(snip)20141017-02_0.gz).
MultiRead(3264): failed to read(remaining: 5 code: 7  msg: Couldn't connect to server), so retry this.
multi_head_retry_callback(2119): Over retry count(3) limit(/logs/(snip)20141016-07_0.gz).
MultiRead(3264): failed to read(remaining: 4 code: 7  msg: Couldn't connect to server), so retry this.
multi_head_retry_callback(2119): Over retry count(3) limit(/logs/(snip)20141016-05_0.gz).
MultiRead(3264): failed to read(remaining: 3 code: 7  msg: Couldn't connect to server), so retry this.
multi_head_retry_callback(2119): Over retry count(3) limit(/logs/(snip)20141018-12_0.gz).
MultiRead(3264): failed to read(remaining: 2 code: 7  msg: Couldn't connect to server), so retry this.
multi_head_retry_callback(2119): Over retry count(3) limit(/logs/(snip)20141018-19_0.gz).
MultiRead(3264): failed to read(remaining: 1 code: 7  msg: Couldn't connect to server), so retry this.
multi_head_retry_callback(2119): Over retry count(3) limit(/logs/(snip)20141018-21_0.gz).
MultiRead(3264): failed to read(remaining: 0 code: 7  msg: Couldn't connect to server), so retry this.
multi_head_retry_callback(2119): Over retry count(3) limit(/logs/(snip)20141016-04_0.gz).
    GetStat(170): stat cache hit [path=/logs/(snip)20141016-03_0.gz][time=1414677745][hit count=0]
    readdir_multi_head(2208): Could not find /logs/(snip)20141016-04_0.gz file in stat cache.
    readdir_multi_head(2208): Could not find /logs/(snip)20141016-05_0.gz file in stat cache.
    readdir_multi_head(2208): Could not find /logs/(snip)20141016-07_0.gz file in stat cache.
    readdir_multi_head(2208): Could not find /logs/(snip)20141017-02_0.gz file in stat cache.
    readdir_multi_head(2208): Could not find /logs/(snip)20141017-06_0.gz file in stat cache.
    readdir_multi_head(2208): Could not find /logs/(snip)20141018-01_0.gz file in stat cache.
    (snip)
    GetStat(170): stat cache hit [path=/logs/(snip)20141030-07_0.gz][time=1414678058][hit count=0]
s3fs_destroy(2689): destroy

$ s3fs --version
Amazon Simple Storage Service File System V1.78 with OpenSSL

I use current(2014-10-30) s3fs-fuse master 77d4d06
Mac OS X 10.9.5

Related issue(?)
s3fs appears to crash on ls with permission failure

InvalidRequest Error while trying to connect to new Frankfurt Region

Hello,

we currently aren't able to connect to S3 buckets in the new region (Frankfurt); the error message provided is the following:

CheckBucket(2400): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidRequest</Code><Message>The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.</Message><RequestId> ...

As far as I know the new region only supports the newest version of the Amazon API (V4), because it was deployed after Jan 2014: http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html

Does anybody have a clue how to enable V4 and thus enable support for the new S3 regions?

Thanks!

Can't install - package requirements

Hi,

I've installed s3fs before on other Ubuntu systems but this time I'm running into issues. This is also the first time I'm using a Bitnami Ubuntu stack so it's completely possible that's related. Here's my install attempt:

$ sudo ./configure
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... no
checking for mawk... mawk
checking whether make sets $(MAKE)... yes
checking for g++... g++
checking whether the C++ compiler works... yes
checking for C++ compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking for style of include used by make... GNU
checking dependency style of g++... gcc3
checking for pkg-config... /opt/bitnami/common/bin/pkg-config
checking pkg-config is at least version 0.9.0... yes
checking for DEPS... no
configure: error: Package requirements (fuse >= 2.8.4 libcurl >= 7.0 libxml-2.0 >= 2.6 libcrypto >= 0.9) were not met:

No package 'fuse' found
No package 'libcurl' found
No package 'libxml-2.0' found
No package 'libcrypto' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

Alternatively, you may set the environment variables DEPS_CFLAGS
and DEPS_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.

The interesting thing is that the packages listed ARE installed and DO meet the version requirements:

$ pkg-config --list-all
iso-codes        iso-codes - ISO country, language, script and currency codes and translations
libtasn1         libtasn1 - Library for ASN.1 and DER manipulation
gnutls           GnuTLS - Transport Security Layer implementation for the GNU system
p11-kit-1        p11-kit - Library and proxy module for properly loading and sharing PKCS
libsepol         libsepol - SELinux policy library
openssl          OpenSSL - Secure Sockets Layer and cryptography libraries and tools
xkeyboard-config XKeyboardConfig - X Keyboard configuration data
librtmp          librtmp - RTMP implementation
libselinux       libselinux - SELinux utility library
fuse             fuse - Filesystem in Userspace
libcrypto        OpenSSL-libcrypto - OpenSSL cryptography library
zlib             zlib - zlib compression library
libxml-2.0       libXML - libXML library version2.
com_err          com_err - Common error description library
udev             udev - udev
usbutils         usbutils - USB device database
libidn           Libidn - IETF stringprep, nameprep, punycode, IDNA text processing.
libcurl          libcurl - Library to transfer files with ftp, http, etc.
gnutls-extra     GnuTLS-extra - Additional add-ons for GnuTLS licensed under GPL
dbus-python      dbus-python - Python bindings for D-Bus
libssl           OpenSSL - Secure Sockets Layer and cryptography libraries

Obviously pkg-config can see those packages so I'm not sure why the installer can't. Any ideas what's going on?

Cheers,

Jacob

0 byte files

We have a problem where we are getting 0-byte files written back to S3. I can't seem to find a way to replicate it, as it seems to be quite random. We can't tell if it is an s3fs issue or something on our side.

I was wondering if there is a way of preventing 0-byte files from being written?

Thanks.

cannot mount Google Cloud storage buckets

I'm using CentOS 6.5, FUSE 2.9.3, s3fs version 1.78 (from Git), everything compiled from scratch; there is no old fuse or s3fs.

I created a key under "Interoperable Storage Access Keys" in the Google API Console, entered it into .passwd-s3fs, changed permissions, and did everything else, but when trying to mount I get this:

$ s3fs -d -d mybucket google-cloud-storage/
FUSE library version: 2.9.3
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.13
flags=0x0000b07b
max_readahead=0x00020000
s3fs: Failed to access bucket.

Have I forgotten something?

truncate error

I'm using s3fs as a storage directory for Bacula backups, and quite often I see the following error in the Bacula logs:

2014-01-25 04:30:10   xxxx-sd JobId 267: Fatal error: label.c:463 Truncate error on device "FileStorage" (/mnt/backups): ERR=dev.c:1831 Unable to truncate device "FileStorage" (/mnt/backups). ERR=Operation not permitted 

I'm running s3fs v1.74 on Debian Wheezy x64 AWS instance with the following commandline:

s3fs xxxxxxxxxxx /mnt/backups -o allow_other,retries=10,connect_timeout=30,readwrite_timeout=30,use_cache=/mnt/cache

I recently observed the same error, at nearly the same frequency, with s3fs v1.61 on an i686 Debian Squeeze instance.

verify Content-MD5 during get object

s3fs-fuse does not guarantee the integrity of downloaded objects, as #64 demonstrates. It should instead check against the ETag header returned from S3. Implementing this will have some complications in the parallel multi-get request code path.
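
Not part of the original report, but a hedged sketch of a manual spot check, assuming the object was uploaded without multipart (so its ETag equals the MD5 of its contents) and that the AWS CLI is available; the bucket and key names are placeholders:

# MD5 of the file as read through the s3fs mount
md5sum /path/to/mountpoint/file.bin
# ETag reported by S3 for the same object
aws s3api head-object --bucket mybucket --key file.bin --query ETag --output text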

comparison to s3fuse

Hi,

I have used s3fuse for a while (https://code.google.com/p/s3fuse/). Can you tell me any advantages of your project over s3fuse?
The other projects you link to cannot compete in maturity as far as I can say, but your project and s3fuse seem to be stable, and active. It says it is based on your project.
Maybe you can share insight on why another project has been created, instead of them working together with you? The difference I see is that s3fuse also provides access to google storage. From a user perspective it seems to be a good idea to merge the projects.

Thanks for your great work,
joe

Permission issue when mounted via /etc/fstab but not when called from the CLI

When I mount via /etc/fstab I can't create or copy files as a regular user:

$ whoami
jay
$ id -u 
1001
$ id -g 
1001
$ ls -ld /mnt/shared
drwxr-xr-x 2 jay jay 4096 2014-09-26 12:42 /mnt/shared
$ sudo mount /mnt/shared
$ ls -ld /mnt/shared 
drwxrwxrwx 1 jay jay 0 1970-01-01 01:00 /mnt/shared
$ cd /mnt/shared
$ touch test
touch: cannot touch `test': Input/output error

Here's the /etc/fstab entry:

s3fs#mybucket /mnt/shared fuse default_acl='public-read',use_cache='/mnt/.s3-cache',rw,del_cache,noatime,allow_other,uid=1001,gid=1001

and here's the line from the mount command:

fuse on /ebs/mnt/shared type fuse (rw,noatime,allow_other)

But if I unmount it, and then mount using the s3fs command, everything works as expected:

$ sudo umount /mnt/shared
$ ls -ld /mnt/shared
drwxr-xr-x 2 jay jay 4096 2014-09-26 12:42 /mnt/shared
$ sudo /usr/bin/s3fs mybucket /mnt/shared -o default_acl='public-read' -o use_cache='/mnt/.s3-cache' -o rw -o del_cache -o noatime -o allow_other -o uid=1001 -o gid=1001
$ mount
...
fuse on /ebs/mnt/shared type fuse (rw,nosuid,nodev,noatime,allow_other)
...
$ ls -ld /mnt/shared
drwxrwxrwx 1 jay jay 0 1970-01-01 01:00 /mnt/shared
$ touch test
$ ls -l test
-rw-r--r-- 1 jay jay 0 2014-09-26 16:37 test

Have I got the /etc/fstab entry wrong? There is a slight difference shown by calling mount. Here's the mount output when mounted via /etc/fstab:

fuse on /ebs/mnt/shared type fuse (rw,noatime,allow_other)

and here's what it looks like when mounted from the s3fs commandline:

fuse on /ebs/mnt/shared type fuse (rw,nosuid,nodev,noatime,allow_other)

Is it a mistake on my part?

s3fs -u should return 0 if there are no lost multiparts

root@monitoring:~# s3fs -u XXXXXXXX
Utility Mode

Lists the parts that have been uploaded for a specific multipart upload.

There is no list.
root@monitoring:~# echo $?
1

I think s3fs should return 0 in the case of a successful operation (no matter whether there are any lost multiparts and whether they were deleted) and non-zero otherwise (i.e. on errors). Currently it looks like you can't easily distinguish between an s3fs error and there simply being no lost multiparts.

I don't insist that my view is the only correct one; I just wanted to hear other opinions on this matter.
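
To illustrate the point, a hedged sketch of the kind of wrapper script this behaviour makes awkward (the bucket name is a placeholder):

# with the current behaviour, "no lost multiparts" and a real failure
# both end up in the else branch
if s3fs -u mybucket; then
    echo "multipart check OK"
else
    echo "s3fs -u returned non-zero: an error, or simply nothing to list?" >&2
fi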

chmod doesn't appear to be working properly

Using s3fs version 1.74, I'm seeing the following behavior:

[root@vnpdb2 20130925]# pwd
/mnt/s3/quantumcloud/mands_files/Phase6c/Subset/20130925
[root@vnpdb2 20130925]# ls -l CAL*
-rwxrwx--- 1 root root 744 Dec 19 04:40 CALENDAR.ctrl
-rw-rw---- 1 root root   0 Dec 19 04:40 CALENDAR.dat
[root@vnpdb2 20130925]# chmod 777 CAL*
[root@vnpdb2 20130925]# ls -l CAL*
-rwxrwx--- 1 root root 744 Dec 19 04:40 CALENDAR.ctrl
-rw-rw---- 1 root root   0 Dec 19 04:40 CALENDAR.dat
[root@vnpdb2 20130925]# mount | grep quantumcloud
fuse on /mnt/s3/quantumcloud type fuse (rw,nosuid,nodev,allow_other)

The only thing I'm seeing in the s3fs log output is:

s3fs_getattr(691): [path=/mands_files/Phase6c/Subset/20130925/CALENDAR.ctrl]
    GetStat(170): stat cache hit [path=/mands_files/Phase6c/Subset/20130925][time=1393347318][hit count=2]
    GetStat(170): stat cache hit [path=/mands_files/Phase6c/Subset][time=1393347318][hit count=2]
    GetStat(170): stat cache hit [path=/mands_files/Phase6c][time=1393347318][hit count=1]
    GetStat(170): stat cache hit [path=/mands_files][time=1393347318][hit count=1]
    GetStat(170): stat cache hit [path=/mands_files/Phase6c/Subset/20130925/CALENDAR.ctrl][time=1393347318][hit count=1]
s3fs_getattr(691): [path=/mands_files/Phase6c/Subset/20130925/CALENDAR.dat]
    GetStat(170): stat cache hit [path=/mands_files/Phase6c/Subset/20130925][time=1393347318][hit count=3]
    GetStat(170): stat cache hit [path=/mands_files/Phase6c/Subset][time=1393347318][hit count=3]
    GetStat(170): stat cache hit [path=/mands_files/Phase6c][time=1393347318][hit count=2]
    GetStat(170): stat cache hit [path=/mands_files][time=1393347318][hit count=2]
    GetStat(170): stat cache hit [path=/mands_files/Phase6c/Subset/20130925/CALENDAR.dat][time=1393347318][hit count=1]
s3fs_opendir(2050): [path=/mands_files/Phase6c/Subset/20130925][flags=100352]
    GetStat(170): stat cache hit [path=/mands_files/Phase6c/Subset/20130925][time=1393347318][hit count=4]
    GetStat(170): stat cache hit [path=/mands_files/Phase6c/Subset][time=1393347318][hit count=4]
    GetStat(170): stat cache hit [path=/mands_files/Phase6c][time=1393347318][hit count=3]
    GetStat(170): stat cache hit [path=/mands_files][time=1393347318][hit count=3]
    GetStat(170): stat cache hit [path=/mands_files/Phase6c/Subset][time=1393347318][hit count=5]

-Wsign-compare warning during make

g++ -DPACKAGE_NAME="s3fs" -DPACKAGE_TARNAME="s3fs" -DPACKAGE_VERSION="1.77" -DPACKAGE_STRING="s3fs\ 1.77" -DPACKAGE_BUGREPORT="" -DPACKAGE_URL="" -DPACKAGE="s3fs" -DVERSION="1.77" -DHAVE_MALLOC_TRIM=1 -DSTDC_HEADERS=1 -DHAVE_SYS_TYPES_H=1 -DHAVE_SYS_STAT_H=1 -DHAVE_STDLIB_H=1 -DHAVE_STRING_H=1 -DHAVE_MEMORY_H=1 -DHAVE_STRINGS_H=1 -DHAVE_INTTYPES_H=1 -DHAVE_STDINT_H=1 -DHAVE_UNISTD_H=1 -I. -D_FILE_OFFSET_BITS=64 -I/usr/include/fuse -I/usr/include/libxml2 -g -O2 -Wall -D_FILE_OFFSET_BITS=64 -MT fdcache.o -MD -MP -MF .deps/fdcache.Tpo -c -o fdcache.o fdcache.cpp
fdcache.cpp: In member function ‘int FdEntity::Load(off_t, off_t)’:
fdcache.cpp:800:61: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
if((*iter)->bytes >= (2 * S3fsCurl::GetMultipartSize()) && !nomultipart){ // default 20MB

Add option to use "path style" requests

At the moment s3fs-fuse only supports virtual-host-style requests, with the bucket name prepended to the Host header, i.e.:

DELETE /puppy.jpg HTTP/1.1
User-Agent: dotnet
Host: mybucket.s3.amazonaws.com
Date: Tue, 15 Jan 2008 21:20:27 +0000
x-amz-date: Tue, 15 Jan 2008 21:20:27 +0000
Authorization: signatureValue

However, some S3-compatible APIs do not support this style of request and instead need the bucket name to be in the path, i.e.:

DELETE /mybucket/puppy.jpg HTTP/1.1
User-Agent: dotnet
Host: s3.amazonaws.com
Date: Tue, 15 Jan 2008 21:20:27 +0000
x-amz-date: Tue, 15 Jan 2008 21:20:27 +0000
Authorization: signatureValue

I'm currently working on a patch that would allow this style of request to be used, enabled with a configuration option such as use_path_style_request.
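
For reference, the option that eventually shipped is spelled use_path_request_style (see the Examples section above); a hedged usage sketch with a placeholder endpoint:

# put the bucket name in the request path instead of the Host header
s3fs mybucket /path/to/mountpoint -o url=https://s3.example.com/ -o use_path_request_style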

Use s3fs with a proxy - no write permission

Hello,

I'm using s3fs behind a transparent Squid proxy.
I'm using the allow_other option.

I can read data from S3 without any problem; S3 and my directory are in sync.
But if I try for example

 mkdir /mnt/s3/test/

I always get:

 mkdir: cannot create directory ‘/mnt/s3/test/’: Operation not permitted

For this machine, I have to use a proxy. I've tried on another machine, which works fine without the proxy (it's the same accessKey/secretKey and the same bucket). If I activate the proxy on this other machine, I get the same problem (without changing anything else).

I've looked at the HTTP data. I get the following message from Amazon for the PUT method:

<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>

I have the same problem if I want to copy/create a file in the mounted directory.
It seems Amazon is recalculating the signature only in the case of a PUT method.

Following Issue 59, I've configured Squid with:

via off
forwarded_for off
ignore_expect_100 on
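
Not part of the original report, but since s3fs talks to S3 through libcurl, one way to make the proxy hop explicit while debugging is to point libcurl at the proxy via the standard proxy environment variables; a hedged sketch with a placeholder proxy address:

# libcurl honours the standard proxy environment variables
export http_proxy=http://proxy.example:3128
export https_proxy=http://proxy.example:3128
s3fs mybucket /mnt/s3 -o allow_other -o dbglevel=info -f -o curldbg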

Cannot find pkg-config when configured with any SSL backend except openssl

Hello,

The s3fs ./configure step won't generate makefiles if the --with-gnutls, --with-nettle, or --with-nss option is used. The reported error is that the ./configure step cannot find pkg-config, even though it is installed. Overriding this with an environment variable bypasses the issue. Example log for GnuTLS:

user@localhost$ ./configure --with-gnutls
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking for g++... g++
checking whether the C++ compiler works... yes
checking for C++ compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking for style of include used by make... GNU
checking dependency style of g++... gcc3
checking for gcc... gcc
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether gcc understands -c and -o together... yes
checking dependency style of gcc... gcc3
checking s3fs build with nettle(GnuTLS)... no
checking s3fs build with OpenSSL... no
checking s3fs build with GnuTLS... yes
checking s3fs build with NSS... no
checking compile s3fs with... GnuTLS-gcrypt
checking for gcry_control in -lgnutls... no
checking for gcry_control in -lgcrypt... yes
checking for DEPS... no
configure: error: in `/home/jollyroger/debian/packages/s3fs/tmp/s3fs-fuse-1.78-pre':
configure: error: The pkg-config script could not be found or is too old.  Make sure it
is in your PATH or set the PKG_CONFIG environment variable to the full
path to pkg-config.

Alternatively, you may set the environment variables DEPS_CFLAGS
and DEPS_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.

To get pkg-config, see .
See `config.log' for more details

Once we override the PKG_CONFIG environment variable, the ./configure step succeeds.

user@localhost$ PKG_CONFIG=/usr/bin/pkg-config ./configure --with-gnutls
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking for g++... g++
checking whether the C++ compiler works... yes
checking for C++ compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking for style of include used by make... GNU
checking dependency style of g++... gcc3
checking for gcc... gcc
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether gcc understands -c and -o together... yes
checking dependency style of gcc... gcc3
checking s3fs build with nettle(GnuTLS)... no
checking s3fs build with OpenSSL... no
checking s3fs build with GnuTLS... yes
checking s3fs build with NSS... no
checking compile s3fs with... GnuTLS-gcrypt
checking for gcry_control in -lgnutls... no
checking for gcry_control in -lgcrypt... yes
checking for DEPS... yes
checking gnutls is build with... gcrypt
checking for malloc_trim... yes
checking that generated files are newer than configure... done
configure: creating ./config.status
config.status: creating Makefile
config.status: creating src/Makefile
config.status: creating test/Makefile
config.status: creating doc/Makefile
config.status: executing depfiles commands

Dependencies available on the system:

dpkg -l pkg-config libfuse-dev libxml2-dev libcurl4-gnutls-dev libgnutls-dev                                                
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                 Version         Architecture    Description
+++-====================-===============-===============-=============================================
ii  libcurl4-gnutls-dev: 7.37.1-1        amd64           development files and documentation for libcu
ii  libfuse-dev          2.9.3-15        amd64           Filesystem in Userspace (development)
ii  libgnutls-dev        2.12.23-17      amd64           GNU TLS library - development files
ii  libxml2-dev:amd64    2.9.1+dfsg1-4   amd64           Development files for the GNOME XML library
ii  pkg-config           0.28-1          amd64           manage compile and link flags for libraries

s3fs appears to crash on ls with permission failure

Use case:

  • S3fs latest as of 3/23/14, e.g. 1.74
  • Arrange S3 bucket and IAM role permissions so that account A can write to account B's bucket (we wrote to a top-level pseudo-folder, e.g. "files/whatever")
  • Copy files from A to B with a 'brute force' tool like the AWS CLI (sync or copy)
  • Observe that the object ACL does not grant account B owner rights (because account A wrote the file)

Now ...
List the top-level bucket with s3fs and see that s3fs is working.
List the top-level folder and observe the segfault:
Mar 23 11:44:52 ip-10-0-0-62 kernel: s3fs[23263]: segfault at 0 ip 0000000000423639 sp 00007f9a1ee22760 error 4 in s3fs[400000+51000]

The thing is, s3fs will segfault if any file in a listing fails permissions in this way.
Suggested fix:

  • Catch and trap any ACL error
  • Print out message object: XXXX cannot be (whatever the operation was)
  • Continue to next operation

s3fs-fuse doesn't allow rsync delta transfer.

It is my understanding that s3fs-fuse is supposed to support rsync, but it does not appear to support rsync's most important/unique feature, which is delta encoding.

After changing the ID3 tags on some MP3s and rerunning rsync:

~$ rsync -avsh --delete --progress --no-whole-file --inplace ~/Random-Songs ~/DreamObjects

rsync runs for about as long as I would expect the 50 MB to take to upload at ~2 Mbps, and reports the following:

sent 114.96K bytes  received 121.78K bytes  1.07K bytes/sec
total size is 52.16M  speedup is 220.30

Am I doing anything incorrectly or is this simply a limitation of rsync/S3/s3fs-fuse?

Thank you

The full output is:

~$ rsync -avsh --delete --progress --no-whole-file --inplace ~/Random-Songs ~/DreamObjects
sending incremental file list
Random-Songs/Aztec Camera -  Just Like Gold.mp3
       3.15M 100%   75.06MB/s    0:00:00 (xfer#1, to-check=11/13)
Random-Songs/Aztec Camera -  Lost Outside The Tunnel.mp3
       3.30M 100%   69.76MB/s    0:00:00 (xfer#2, to-check=10/13)
Random-Songs/Aztec Camera -  We Could Send Letters.mp3
       4.75M 100%   43.97MB/s    0:00:00 (xfer#3, to-check=9/13)
Random-Songs/Aztec Camera - Just Like Gold (7')-5jtzu8RdicQ.mp4
       7.26M 100%   40.26MB/s    0:00:00 (xfer#4, to-check=8/13)
Random-Songs/Aztec Camera - Lost Outside the Tunnel-909-YjBIjAY.mp4
       6.16M 100%  514.79kB/s    0:00:11 (xfer#5, to-check=7/13)
Random-Songs/Aztec Camera - Mattress Of Wire.mp3
       2.64M 100%    1.11MB/s    0:00:02 (xfer#6, to-check=6/13)
Random-Songs/Aztec Camera - Mattress of Wire-N2ozbgxQyJE.mp4
       5.07M 100%  227.41kB/s    0:00:21 (xfer#7, to-check=5/13)
Random-Songs/Aztec Camera - We Could Send Letters (7' Version)-wO_6Y4oT7UQ.mp4
      10.35M 100%  331.93kB/s    0:00:30 (xfer#8, to-check=4/13)
Random-Songs/Seven Saturdays - Eleven Eleven.mp3
       4.73M 100%    2.81MB/s    0:00:01 (xfer#9, to-check=3/13)

sent 114.96K bytes  received 121.78K bytes  1.07K bytes/sec
total size is 52.16M  speedup is 220.30

incron / lsyncd (inotify or fsevents) not detecting changes at S3.

Hi,

I've been pulling my hair out all day trying to find out why updates to an s3fs-mounted directory are not detected by incron or lsyncd.

Then I happened to come across this:

http://unix.stackexchange.com/questions/100837/incrontab-doesnt-detect-modifications-on-a-s3fs-mount

I am running the latest fuse:

fuse-devel-2.9.3-1.el6.x86_64
fuse-libs-2.9.3-1.el6.x86_64
fuse-2.9.3-1.el6.x86_64

I built the RPMs myself for CentOS 6.5.

and the latest s3fs:

[root@ip-10-0-4-78 www]# s3fs --v
Amazon Simple Storage Service File System V1.77 with OpenSSL

Yet when the S3 bucket is updated, the change is visible when I view the s3fs mount point, but incron and lsyncd do not react, nor do they show any activity in their logs.

However, if I go into the s3fs mount and make an update there, everything works as it should.

So why is nothing detected when a change happens on the S3 side?

Thanks
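
A hedged workaround sketch: inotify (and hence incron/lsyncd) only fires for changes made through the local kernel, not for objects written to S3 by other clients, so the usual substitute is to poll the mount on a schedule. The path and interval below are placeholders.

#!/bin/sh
# Poll an s3fs mount and report differences between successive listings.
# Caveat: each pass stats every object over the network, so keep the
# interval and the tree size reasonable. Requires GNU find.
MOUNT=/mnt/s3bucket
PREV=$(mktemp)
CURR=$(mktemp)
find "$MOUNT" -type f -printf '%p %s %T@\n' | sort > "$PREV"
while sleep 60; do
    find "$MOUNT" -type f -printf '%p %s %T@\n' | sort > "$CURR"
    if ! diff -q "$PREV" "$CURR" > /dev/null; then
        echo "change detected at $(date)"
        diff "$PREV" "$CURR"
    fi
    cp "$CURR" "$PREV"
done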

du shows incorrect usage stats

root@monitoring:/mnt/backups# ls -l Vol0001 
-rw-r----- 1 root root 548618 Jan  7 08:15 Vol0001
root@monitoring:/mnt/backups# du -chs Vol0001 
512 Vol0001
512 total

I'm running s3fs 1.74 (r499) on Debian Wheezy 7.3
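
One likely explanation, hedged: du sums allocated blocks (st_blocks), and s3fs of this vintage reports little or no block usage for objects, so the block-based total collapses to 512. If the goal is simply a usable total, GNU du can sum file sizes instead:

# Sum apparent sizes (st_size) rather than allocated blocks:
du -chs --apparent-size Vol0001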

s3fs[2008]: segfault at 0 ip 0000000000423639 sp 00007fe075d006f0 error 4 in s3fs[400000+51000]

Amazon Simple Storage Service File System V1.78 with OpenSSL
fuse-2.9.3
libcurl-devel-7.19.7-40.el6_6.1.x86_64
libcurl-7.19.7-40.el6_6.1.x86_64

There is plenty of space on the target directory's partition.
Actually, the problem occurs when I try not to use the cache; I get:
s3fs[2008]: segfault at 0 ip 0000000000423639 sp 00007fe075d006f0 error 4 in s3fs[400000+51000]

A problem with libcurl? It's a 29 GB EC2 instance and the file is only 18 GB for our tests.

Fuse 2.8.4 really required?

Hi,

I'm using CentOS 6.5. It comes with FUSE 2.8.3.

Is there any particular reason why fuse 2.8.4 is required? Are there any problems with 2.8.3?

Thanks

Operation not permitted using ahbe_conf

I'm trying to use ahbe_conf to set specific headers, but when it is enabled I get an "Operation not permitted" error while copying files to my bucket.

My ahbe_conf contains only the line ".js Content-Type charset=utf-8".

I tried to use the example file to test, but I get the same errors.

Version of s3fs being used (s3fs --version): 1.74

Version of fuse being used (pkg-config --modversion fuse): 2.9.3

System information (uname -a): Linux "hostname" 3.4.66-55.43.amzn1.x86_64 #1 SMP Wed Oct 16 06:26:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

Distro (cat /etc/issue): Amazon Linux AMI release 2013.09

/etc/fstab entry (if applicable): s3fs#"bucket_name" /export/s3/lib fuse _netdev,use_cache=/tmp,use_rrs=1,default_acl=public-read,ahbe_conf=/etc/s3fs/ahbe.conf,allow_other,uid=222,gid=500 0 0

S3FS IAM roles

Hi;

Is there an example of how to use s3fs with IAM roles, please? If my server has an IAM role attached to it, how do I tell s3fs to use it in its configuration?

Thanks
Ali.
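
A minimal sketch, assuming a build recent enough to have the iam_role option (it is referenced in a later report here) and an instance profile named my-instance-role attached to the server; bucket, mount point, and role name are placeholders. With iam_role set, s3fs fetches temporary credentials from the instance metadata, so no access keys or passwd file are needed:

# Mount using the instance's IAM role:
s3fs mybucket /path/to/mountpoint -o iam_role="my-instance-role" -o allow_other

# The same thing as an /etc/fstab entry:
mybucket /path/to/mountpoint fuse.s3fs _netdev,allow_other,iam_role=my-instance-role 0 0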

umask CLI option doesn't handle octal umask value

I ran into an issue with subdirectories in an S3 bucket not having x-amz-meta-*** headers and therefore appearing to have d--------- permissions. The user, group members, and others could not list the subdirectories.

I followed the suggestion of setting umask=022 in issue 370 from the Google Code issues (https://code.google.com/p/s3fs/issues/detail?id=370), which forces subdirectories to have drwxr-xr-x permissions. The user and group members could then list the subdirectories, but others still could not.

It turns out that the parser code for umask values doesn't handle string representations of octal values correctly. The parser code dropped leading zeros and treated the remaining digits as base-10 integers; thus, it was treating octal 022 as base-10 22, i.e., octal 026.

The workaround is to set umask values to hexadecimal values, which the parser code does handle correctly. Thus, for 022, use 0x12.
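
A concrete example of that workaround (bucket and mount point are placeholders): 0x12 is the hexadecimal spelling of octal 022, so until the octal parsing is fixed the intended mask can be passed as:

s3fs mybucket /path/to/mountpoint -o umask=0x12 -o allow_other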

s3fs swift on top of radosgw

I have set up Ceph 0.72.2 with radosgw (S3/Swift-compatible API) as a local S3 storage cluster.

The python-swiftclient (Swift) client connects to the local node http://hmetrics002 and works well, allowing upload/download/list operations on the bucket 'backup'.

> export ST_AUTH=http://hmetrics002/auth
> export ST_USER=metrics-backup:swift
> export ST_KEY=XXUv90NDQtzaTn5ogda8wj1it66hwC67IAaE7XW
> swift -V 1.0 upload backup test.txt

> swift -V 1.0 list backup
test.txt

Here is the log of trying to mount this S3-compatible bucket using the latest compiled s3fs-fuse. Note that we have to use exported environment variables, because the access key provided by Ceph (metrics-backup:swift) already contains the ':' separator character, so neither an /etc/passwd-s3fs nor a ~/.passwd-s3fs file can be used.

Anyway, the mount of this local S3-compatible storage fails, and I can't figure out what to change to overcome this:

export AWSACCESSKEYID=metrics-backup:swift
export AWSSECRETACCESSKEY=XXUv90NDQtzaTn5ogda8wj1it66hwC67IAaE7XW
sudo src/s3fs backup /mnt/s3fs-fuse -o use_cache=/tmp,url=http://hmetrics002 -f --debug
    set_moutpoint_attribute(3293): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
s3fs: could not determine how to establish security credentials

Here are the details on the versions:

 > sudo pkg-config --modversion fuse
2.8.6

> sudo src/s3fs --version
Amazon Simple Storage Service File System 1.76

> grep s3fs /var/log/syslog
Mar 31 16:50:12 metrics s3fs: init $Rev: 497 $

> uname -a
Linux hmetrics001 3.2.0-31-generic #50-Ubuntu SMP  x86_64 x86_64 x86_64 GNU/Linux

I should mention that nothing interesting appears in syslog.
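
One thing worth ruling out, offered as a hedged guess: plain sudo resets the environment by default, so the exported AWSACCESSKEYID/AWSSECRETACCESSKEY may never reach the s3fs process, which would produce exactly the "could not determine how to establish security credentials" message. Either preserve the environment or pass the variables on the sudo command line:

# Preserve the caller's exported variables:
sudo -E src/s3fs backup /mnt/s3fs-fuse -o use_cache=/tmp,url=http://hmetrics002 -f --debug

# Or set them explicitly for the sudo invocation:
sudo AWSACCESSKEYID=metrics-backup:swift \
     AWSSECRETACCESSKEY=XXUv90NDQtzaTn5ogda8wj1it66hwC67IAaE7XW \
     src/s3fs backup /mnt/s3fs-fuse -o use_cache=/tmp,url=http://hmetrics002 -f --debug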

Access Amazon using HTTPS

I can't seem to get url=https://s3.amazonaws.com working.

Using 1.78, built following the Debian 7.x instructions (Vagrant using VirtualBox on Mac OS X):

git clone https://github.com/s3fs-fuse/s3fs-fuse
cd s3fs-fuse/
./autogen.sh
./configure --prefix=/usr --with-openssl
make
sudo make install

When I try to mount it using

sudo vi /etc/fstab
sudo mount /mnt/test-bucket

with url=https://s3.amazonaws.com in the fstab line => mount fails (the mount point shows the ??? marks)

When I remove url=https://s3.amazonaws.com from the fstab line,

sudo mount /mnt/test-bucket succeeds

s3fs only works properly with -f option?

Hi

I've successfully set up s3fs on an EC2 instance, connecting to S3 via an IAM role (using the iam_role option).

Everything works fine when I invoke s3fs using the -f option - and I see the log of what's going on in the terminal window.

However, if I try to run it without the -f option, or add it to /etc/fstab so that it runs at boot and mounts automatically, it doesn't work.
The command returns fine, but when I try to access the local directory containing the mount, I get:

s3fs: unable to access MOUNTPOINT /mnt/batches: Transport endpoint is not connected

I can get it working again by running umount and then s3fs with the -f option again.

Any clues as to what I'm doing wrong?
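
A hedged diagnostic sketch (bucket, mount point, and role name are placeholders): without -f the process daemonizes, so whatever fails afterwards, for instance fetching the IAM credentials in the background, is only visible in syslog. If your build supports the dbglevel option, turn the log level up and then read the log to see why the daemonized mount dropped:

s3fs mybucket /mnt/batches -o iam_role="my-role" -o dbglevel=info
grep s3fs /var/log/syslog | tail -n 50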

mount fails on eu-central-1 hosting

bash-4.2# /opt/bin/s3fs -o allow_other -f -d logicfish-crux-ports ~/mnt
set_moutpoint_attribute(3291): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
s3fs_init(2595): init
s3fs_check_service(2894): check services.
CheckBucket(2228): check a bucket.
RequestPerform(1467): connecting to URL http://logicfish-crux-ports.s3.amazonaws.com/
RequestPerform(1595): ### CURLE_HTTP_RETURNED_ERROR
RequestPerform(1600): HTTP response code =400
s3fs: Failed to access bucket.
bash-4.2#
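
One sketch of what typically resolves this, hedged: eu-central-1 accepts only v4 signatures and its regional endpoint, so, assuming a build with v4 signature support, point s3fs at the region explicitly (bucket and mount point taken from the report above):

/opt/bin/s3fs logicfish-crux-ports ~/mnt -o allow_other \
    -o url=https://s3-eu-central-1.amazonaws.com -o endpoint=eu-central-1 -f -d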

Moving a directory containing more than 1000 files truncates the directory

Move to an s3fs mount point and run:

mkdir test test2
cd test
for i in {0..1234}; do echo Hello > $i; done
ls | wc -l

The directory contains 1235 files.
Now do:

cd ..
mv test test2
ls test2/test/ | wc -l

You may see an error such as
mv: cannot move 'test' to 'test2/test': Input/output error
and the directory contains only 1000 files.

From a quick look at the code, maybe this is the problem:

query += "&max-keys=1000";
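
A quick, hedged way to confirm that the server-side copy stopped at the first 1000 keys is to count the keys directly with the AWS CLI, bypassing s3fs's own (possibly truncated) listing; this assumes the CLI is configured for the same bucket, whose name is a placeholder here:

aws s3 ls s3://mybucket/test2/test/ --recursive | wc -l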

use_sse is ignored when creating new files

The x-amz-server-side-encryption header is not sent when creating new files, only when files are flushed from the cache.

This means that when you set a policy on your bucket like the one described at http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html, which enforces that only encrypted objects can be stored in the bucket, you get 'Operation not permitted' errors, because the policy blocks all PUT requests that don't have the SSE header.

The functions create_file_object and create_directory_object in s3fs.cpp currently call PutRequest like this:

return s3fscurl.PutRequest(path, meta, -1, false);

It should probably be called like this:

return s3fscurl.PutRequest(path, meta, -1, S3fsCurl::GetUseSse());
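
A possible repro, assuming a bucket policy that denies PUTs lacking the x-amz-server-side-encryption header (as in the linked AWS documentation); the bucket and paths are placeholders:

s3fs mybucket /mnt/sse-test -o use_sse -o passwd_file=${HOME}/.passwd-s3fs
touch /mnt/sse-test/newfile   # fails with "Operation not permitted": the zero-byte create PUT omits the SSE header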

Can see folders but can't mount them

The problem is that I can mount the whole bucket and view all the folders inside, but when I mount a path like :/images on the folder mnt, I can't cd into it; I get the error "Transport endpoint is not connected". The images folder was created through the AWS console and filled by the W3TC WordPress plugin. If I create any new folder through the AWS console, I can mount it normally and use it; it's only that "images" folder and its subfolders that give the mentioned error when mounted.
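
A hedged workaround, assuming the cause is that the plugin only uploaded "images/..." object keys and never created a directory object for the prefix itself (the bucket name is a placeholder): create such an object with the AWS CLI, then retry the sub-path mount:

aws s3api put-object --bucket mybucket --key images/ --content-type application/x-directory
s3fs mybucket:/images /path/to/mountpoint -o passwd_file=${HOME}/.passwd-s3fs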

FreeBSD issue

I downloaded version 1.78 but cannot compile it on FreeBSD:

openssl_auth.cpp:97:10: error: static_cast from 'pthread_t' (aka 'pthread *') to 'unsigned long' is not allowed

Directory Representation

Hi,

Could you explain your design for mapping a directory structure onto Amazon S3? If I understand correctly, you use metadata headers like "application/x-directory" to create an object representing a single directory. I did not get as far as how you name those objects, or how you handle adding new objects to a directory. My own approach would be to hash the actual directory path and add the filenames into the object, one name per line, so that a directory can be listed quickly without listing the whole bucket contents. It would be great if you could give me a hand here, though. You must have spent some time working this out, so I don't see the point in reinventing the wheel when I can profit from your experience. I hope to use the solution for my cloud storage framework CloudFusion, in an effort to integrate Google Storage, which seems to have restrictions on creating directories similar to Amazon S3's.

Thanks,
joe42
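
A hedged illustration of the scheme, pieced together from the debug logs later in this document rather than from a design statement: a directory is stored as a (typically zero-byte) object whose key is the directory's full path, marked with the application/x-directory content type, and listing a directory is a prefix+delimiter query against the bucket, not a read of the directory object's contents. The exact key form has varied; the logs probe "name", "name/", and "name_$folder$". With placeholder names:

# Inspect the directory-marker object and its content type:
aws s3api head-object --bucket mybucket --key photos/

# Listing the directory corresponds to a delimiter-limited prefix query:
aws s3api list-objects --bucket mybucket --prefix photos/ --delimiter /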

empty file is written to s3

I'm not sure whether this is related to #11, but I've decided to create a separate issue.

While writing to s3fs, we quite often see that a 0-byte file is actually written.
The file size was 75,661,483,206 bytes the last time we saw this, and about that size in the previous cases. I think we only see this for files that big, while smaller files (around 20 GB) are written OK.

We use the following command to run s3fs:

s3fs -d foobar-backups /mnt/backups -o allow_other,retries=10,connect_timeout=30,readwrite_timeout=30,use_cache=/mnt/cache -o passwd_file=/etc/foobar-backups

The s3fs version is 1.74.

The logs for the case are the following:

Feb 23 06:30:07 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122304][hit count=37]
Feb 23 06:30:07 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:07 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:07 bacula s3fs: HTTP response code 200
Feb 23 06:30:07 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:07 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122607][hit count=0]
Feb 23 06:30:07 bacula s3fs: file locked(/bacula_client_pool_0198 - /mnt/cache/.foobar-backups.stat/bacula_client_pool_0198)
Feb 23 06:30:07 bacula s3fs: file unlocked(/bacula_client_pool_0198)
Feb 23 06:30:09 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122607][hit count=1]
Feb 23 06:30:09 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122609][hit count=2]
Feb 23 06:30:09 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198?uploads
Feb 23 06:30:09 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198?uploadId=1NY2wdJHkfH8NDzefATUGqr.MAJ8AhTfBp4UvuQML527Sgva96MMD6qsF7TEg1Tq3bloudT2AmsSooKaa.qjjZ2qOHrJ.7aS0QEOOn59zs5191lEc.jcu.4Iz2rUT7Jl
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=0]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=1]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=2]
Feb 23 06:30:10 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: file locked(/bacula_client_pool_0198 - /mnt/cache/.foobar-backups.stat/bacula_client_pool_0198)
Feb 23 06:30:10 bacula s3fs: file unlocked(/bacula_client_pool_0198)
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=0]
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 204
Feb 23 06:30:10 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 404
Feb 23 06:30:10 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Feb 23 06:30:10 bacula s3fs: Body Text: 
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198/
Feb 23 06:30:10 bacula s3fs: HTTP response code 404
Feb 23 06:30:10 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Feb 23 06:30:10 bacula s3fs: Body Text: 
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198_%24folder%24
Feb 23 06:30:10 bacula s3fs: HTTP response code 404
Feb 23 06:30:10 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Feb 23 06:30:10 bacula s3fs: Body Text: 
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com?delimiter=/&prefix=bacula_client_pool_0198/&max-keys=1000
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: contents_xp->nodesetval is empty.
Feb 23 06:30:10 bacula s3fs: contents_xp->nodesetval is empty.
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 404
Feb 23 06:30:10 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Feb 23 06:30:10 bacula s3fs: Body Text: 
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198/
Feb 23 06:30:10 bacula s3fs: HTTP response code 404
Feb 23 06:30:10 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Feb 23 06:30:10 bacula s3fs: Body Text: 
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198_%24folder%24
Feb 23 06:30:10 bacula s3fs: HTTP response code 404
Feb 23 06:30:10 bacula s3fs: HTTP response code 404 was returned, returning ENOENT
Feb 23 06:30:10 bacula s3fs: Body Text: 
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com?delimiter=/&prefix=bacula_client_pool_0198/&max-keys=1000
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: contents_xp->nodesetval is empty.
Feb 23 06:30:10 bacula s3fs: contents_xp->nodesetval is empty.
Feb 23 06:30:10 bacula s3fs: create zero byte file object.
Feb 23 06:30:10 bacula s3fs: uploading... [path=/bacula_client_pool_0198][fd=-1][size=0]
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: file locked(/bacula_client_pool_0198 - /mnt/cache/.foobar-backups.stat/bacula_client_pool_0198)
Feb 23 06:30:10 bacula s3fs: file unlocked(/bacula_client_pool_0198)
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=0]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=1]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=2]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=3]
Feb 23 06:30:10 bacula s3fs: copying... [path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: connecting to URL http://foobar-backups.s3.amazonaws.com/bacula_client_pool_0198
Feb 23 06:30:10 bacula s3fs: HTTP response code 200
Feb 23 06:30:10 bacula s3fs: add stat cache entry[path=/bacula_client_pool_0198]
Feb 23 06:30:10 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=0]
Feb 23 06:44:01 bacula /USR/SBIN/CRON[30386]: (root) CMD (/usr/local/bin/cleanup_s3fs_cache)
Feb 23 06:44:02 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393122610][hit count=1]
Feb 23 07:44:01 bacula /USR/SBIN/CRON[11138]: (root) CMD (/usr/local/bin/cleanup_s3fs_cache)
Feb 23 07:44:01 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393123442][hit count=2]
Feb 23 08:44:01 bacula /USR/SBIN/CRON[24259]: (root) CMD (/usr/local/bin/cleanup_s3fs_cache)
Feb 23 08:44:01 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393127041][hit count=3]
Feb 23 09:44:02 bacula /USR/SBIN/CRON[3633]: (root) CMD (/usr/local/bin/cleanup_s3fs_cache)
Feb 23 09:44:02 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393130641][hit count=4]
Feb 23 10:44:01 bacula /USR/SBIN/CRON[14815]: (root) CMD (/usr/local/bin/cleanup_s3fs_cache)
Feb 23 10:44:02 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393134242][hit count=5]
Feb 23 11:44:01 bacula /USR/SBIN/CRON[25959]: (root) CMD (/usr/local/bin/cleanup_s3fs_cache)
Feb 23 11:44:02 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393137842][hit count=6]
Feb 23 12:44:01 bacula /USR/SBIN/CRON[4655]: (root) CMD (/usr/local/bin/cleanup_s3fs_cache)
Feb 23 12:44:01 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393141442][hit count=7]
Feb 23 12:48:46 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393145041][hit count=8]
Feb 23 12:48:46 bacula s3fs: stat cache hit [path=/bacula_client_pool_0198][time=1393145326][hit count=9]
Feb 23 12:48:46 bacula s3fs: delete stat cache entry[path=/bacula_client_pool_0198]
Feb 23 12:48:46 bacula s3fs: file locked(/bacula_client_pool_0198 - /mnt/cache/.foobar-backups.stat/bacula_client_pool_0198)
Feb 23 12:48:46 bacula s3fs: file unlocked(/bacula_client_pool_0198)

Do you have any idea what could be the reason for this behaviour and how we could fix it?

Thanks.

Cache files are blank

Please look at this binary comparison image:
[image: image-differences]
On the left is the source image file downloaded from the S3 console, and on the right is the cache file created by the s3fs mount.

Is this the expected behaviour? I noticed that the cache doesn't make a speed difference, as the file is downloaded from S3 each time. I tested the latency to my web server in a few ways:

  • Direct image load (no s3 mount): ~200 latency
  • Direct image load (with mount, no cache): ~1000 latency
  • Direct image load (with mount, with cache): ~1000 latency

This led me to believe that the cache isn't working properly, so I did the binary comparison and found that the bytes of the source and the cache file don't match up.

Urgent help would be much appreciated.

Thanks,
Matan
