
geesefs's Issues

Disk cache doesn't seem to work

The filesystem is mounted with this command:

/opt/geesefs/bin/geesefs --cache /large/geesefs-cache --dir-mode 0750 --file-mode 0640 --cache-file-mode 0640 --cache-to-disk-hits 1 --memory-limit 4000 --max-flushers 32 --max-parallel-parts 32 --part-sizes 25 -f my-bucket /mnt/my-bucket

I can access files in /mnt/my-bucket and no errors are reported, but nothing is stored in /large/geesefs-cache no matter how many times a file is accessed.

Is it broken, or am I doing something wrong?

$ /opt/geesefs/bin/geesefs --version
geesefs version 0.30.8
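
A hedged first check, reusing only flags that already appear in reports on this page (treat the log path as a placeholder): rerun in the foreground with logging enabled, read the same file twice, and watch whether the cache directory fills up:

/opt/geesefs/bin/geesefs --cache /large/geesefs-cache --cache-to-disk-hits 1 --debug_fuse --log-file /tmp/geesefs-debug.log -f my-bucket /mnt/my-bucket
cat /mnt/my-bucket/some-file > /dev/null
cat /mnt/my-bucket/some-file > /dev/null
ls -la /large/geesefs-cache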

geesefs started to panic with R2 recently

Hello, since yesterday I've been getting errors in syslog, and some programs get stuck:

Oct 27 06:40:29 ip-172-31-87-196 /usr/bin/geesefs[1444]: main.ERROR stacktrace from panic: deref inode 1896 (rpm/stable/repodata) by 4 from 2
goroutine 431264 [running]:
runtime/debug.Stack(0xc0adce2bd8, 0xf7baa0, 0xc05aa5f4b0)
        /opt/hostedtoolcache/go/1.16.15/x64/src/runtime/debug/stack.go:24 +0x9f
github.com/yandex-cloud/geesefs/api/common.LogPanic(0xc0adce2f10)
        /home/runner/work/geesefs/geesefs/api/common/panic_logger.go:32 +0x76
panic(0xf7baa0, 0xc05aa5f4b0)
        /opt/hostedtoolcache/go/1.16.15/x64/src/runtime/panic.go:965 +0x1b9
github.com/yandex-cloud/geesefs/internal.(*Inode).DeRef(0xc063d27680, 0x4, 0x768)
        /home/runner/work/geesefs/geesefs/internal/handles.go:361 +0x3a8
github.com/yandex-cloud/geesefs/internal.(*Goofys).ForgetInode(0xc000408240, 0x133cad8, 0xc05aa63fb0, 0xc0597c1cb0, 0x0, 0x18)
        /home/runner/work/geesefs/geesefs/internal/goofys.go:1080 +0xd4
github.com/yandex-cloud/geesefs/api/common.FusePanicLogger.ForgetInode(0x134cdc0, 0xc000408240, 0x133cad8, 0xc05aa63fb0, 0xc0597c1cb0, 0x0, 0x0)
        /home/runner/work/geesefs/geesefs/api/common/panic_logger.go:61 +0x89
github.com/jacobsa/fuse/fuseutil.(*fileSystemServer).handleOp(0xc0003b5260, 0xc0003589c0, 0x133cad8, 0xc05aa63fb0, 0xf4af20, 0xc05b8070e0)
        /home/runner/go/pkg/mod/github.com/vitalif/[email protected]/fuseutil/file_system.go:160 +0xb58
created by github.com/jacobsa/fuse/fuseutil.(*fileSystemServer).ServeOps
        /home/runner/go/pkg/mod/github.com/vitalif/[email protected]/fuseutil/file_system.go:123 +0x1a5
Oct 27 06:40:29 ip-172-31-87-196 /usr/bin/geesefs[1444]: fuse.ERROR *fuseops.BatchForgetOp error: input/output error

The command used to mount the bucket:

/usr/bin/geesefs packages /home/ubuntu/r2 -o rw,user_id=1000,group_id=1000,--cheap,--file-mode=0666,--dir-mode=0777,--endpoint=https://****.r2.cloudflarestorage.com,--shared-config=/home/ubuntu/.r2_auth,--memory-limit=2050,--gc-interval=100,--max-flushers=2,--max-parallel-parts=3,--max-parallel-copy=2,dev,suid

Does not remount automatically if geesefs is killed by the OOM killer

The OOM killer fired during a backup:

Oct 20 18:03:22 storage-new kernel: [15748.775876] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/system.slice/mnt-s3-support_files.mount,task=geesefs,pid=883,uid=0
Oct 20 18:03:22 storage-new kernel: [15748.775915] Out of memory: Killed process 883 (geesefs) total-vm:4499164kB, anon-rss:2332184kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:5776kB oom_score_adj:0
Oct 20 18:03:22 storage-new kernel: [15748.861969] oom_reaper: reaped process 883 (geesefs), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB

The proxmox-backup client reports:

catalog upload error - channel closed
Error: error at "140201/test000.ts": Transport endpoint is not connected (os error 107)

mount -a
df -h /mnt/s3/support_files
df: /mnt/s3/support_files: Transport endpoint is not connected
umount -f /mnt/s3/support_files
mount -a
df -h /mnt/s3/support_files
Filesystem Size Used Avail Use% Mounted on
support-files 1.0P 0 1.0P 0% /mnt/s3/support_files
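
A possible workaround, sketched here as an assumption rather than a supported recipe: run the mount under a systemd service (the unit name and paths below are hypothetical) so the foreground process is restarted after an OOM kill instead of leaving a dead mountpoint behind:

[Unit]
Description=geesefs mount for support_files (hypothetical unit)
After=network-online.target

[Service]
ExecStart=/usr/bin/geesefs -f support-files /mnt/s3/support_files
ExecStopPost=/bin/fusermount -u -z /mnt/s3/support_files
Restart=on-failure

[Install]
WantedBy=multi-user.target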

Unstable when copying many files

We use a tool written in C# that copies many files every night (approx. 3,000 files, 30 GB total) from local storage to an S3 bucket mounted as a disk using geesefs. The process starts at 3:00 AM and after about 30 minutes shows the following errors:

Dec 28 07:31:39 wp-angstrem /sbin/geesefs[159621]: fuse.ERROR *fuseops.LookUpInodeOp error: permission denied
Dec 28 03:30:50 wp-angstrem wigwam-api[21726]: fail: Microsoft.AspNetCore.Diagnostics.ExceptionHandlerMiddleware[1]
Dec 28 03:30:50 wp-angstrem wigwam-api[21726]: An unhandled exception has occurred while executing the request.
Dec 28 03:30:50 wp-angstrem wigwam-api[21726]: System.UnauthorizedAccessException: Access to the path '/archive-s3/webplanner/12/36/1236584.b3db' is denied.
Dec 28 03:30:50 wp-angstrem wigwam-api[21726]: ---> System.IO.IOException: Permission denied
Dec 28 03:30:50 wp-angstrem wigwam-api[21726]: --- End of inner exception stack trace ---

When our tool copies files from one local disk to another, everything works fine, so it seems to be a geesefs issue. Can you please explain what this error means?

Enormous memory usage

I have a bucket with over 6 million objects mounted using geesefs like this:

/opt/geesefs/bin/geesefs --dir-mode 0750 --file-mode 0640 --cache-file-mode 0640 -f my-bucket /mnt/my-bucket

After running rsync for almost two days to synchronize data that mostly already exists in the bucket, I noticed very high memory usage:

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
3390745 www-data  20   0   10.5g   7.8g   7312 R 146.5  33.5   1502:09 geesefs

And it keeps growing.

Trust evaluate failures

I'm attempting to use geesefs on an M1 Mac. I've downloaded the appropriate binary, made it executable, moved it to /usr/local/bin, and installed macfuse.

When I run geesefs {bucket_name} {mount_point}, I get main.FATAL Unable to mount file system, see syslog for details.

In the console, I see a number of Trust evaluate failure: [leaf TemporalValidity] entries. How can I resolve these so I can use geesefs?

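The Trust evaluate failure entries come from macOS Gatekeeper/code-signing checks on the downloaded binary rather than from geesefs itself. A common workaround (an assumption here, not project guidance) is to clear the quarantine attribute or ad-hoc sign the binary:

xattr -d com.apple.quarantine /usr/local/bin/geesefs
codesign --force --sign - /usr/local/bin/geesefs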

Deadlock when renaming non empty folder

Hi, we experience a hang on a geesefs-mounted filesystem when renaming a folder (300 files, 125 GB). It was discussed in a support thread at Yandex Cloud, ticket number 166384642671346, but since it also relates to geesefs itself we came here.

The behavior: after running mv match/Editing/MIX_Match_DEL match/Editing/MIX_Match_DELME it hangs; ls match/ succeeds, but ls match/Editing hangs. Other folders rename successfully, so only a single folder (storage.yandexcloud.net/match/Editing/MIX_Match_DEL) has been found to be affected by this issue so far.

> geesefs --version
geesefs version 0.31.8

A stack trace that may be needed, taken while the filesystem was in this state: grs.txt.

The geesefs log itself contains production data that should not be published here. Is there a chance you could access the corresponding support thread (166384642671346) at Yandex Cloud?

Unable to mount OCI Object Storage

I'm trying to use geesefs with Oracle Cloud (OCI) Object Storage.

I can use s3fs, but not geesefs:

s3fs -o url=https://<namespace>.compat.objectstorage.<region>.oraclecloud.com -o use_path_request_style -o passwd_file=s3fs-credentials <bucket> <mount-point>

Works!

./geesefs --endpoint https://<namespace>.compat.objectstorage.<region>.oraclecloud.com --debug_s3 --debug_fuse --log-file log.txt -o use_path_request_style <bucket> <mount-point>

or

AWS_ACCESS_KEY_ID=<id> AWS_SECRET_ACCESS_KEY='<key>' ./geesefs --endpoint https://<namespace>.compat.objectstorage.<region>.oraclecloud.com --debug_s3 --debug_fuse --log-file log.txt <bucket> <mount-point>

Log:

s3.DEBUG HEAD https://<namespace>.compat.objectstorage.<region>.oraclecloud.com/<bucket> = 404 []
s3.DEBUG Opc-Request-Id = [jed-1:j_JLhhEZA3g17ufgrdzPoWTN6AsLA1nz8ZREULNI7QWHAs_PSYsMSpRomoTg2OiF]
s3.DEBUG X-Api-Id = [s3-compatible]
s3.DEBUG X-Amz-Request-Id = [jed-1:j_JLhhEZA3g17ufgrdzPoWTN6AsLA1nz8ZREULNI7QWHAs_PSYsMSpRomoTg2OiF]
s3.DEBUG Date = [Sat, 13 Aug 2022 12:36:07 GMT]
main.ERROR Unable to access '<bucket>': bucket <bucket> does not exist
main.FATAL Mounting file system: Mount: initialization failed
main.FATAL Unable to mount file system, see syslog for details

Wasabi S3 compatibility issue: s3/ListObjects status code 500

Wasabi is another S3 cloud provider with very competitive speeds and prices. Unfortunately, there seems to be a compatibility issue with geesefs that is not present with goofys or s3fs, related to some part of the S3 spec where they differ.

To reproduce:

geesefs --endpoint https://s3.wasabisys.com mybucket /mnt

mkdir /mnt/a
ls /mnt/a     [SUCCEEDS]
ls /mnt       [ALWAYS SUCCEEDS]

geesefs SIGINT
geesefs --endpoint https://s3.wasabisys.com mybucket /mnt

ls /mnt/a     [FAILS --> ls: reading directory '/mnt/a': Resource temporarily unavailable]
ls /mnt       [ALWAYS SUCCEEDS]
ls /mnt/a     [SUCCEEDS]

No matter how many directories deep, listing a directory fails with status code 500 errors unless its parent directory was read first (or recently?).
On that failure, these errors occur:

2022/08/03 19:15:16.014305 s3.DEBUG DEBUG: Validate Response s3/ListObjects failed, attempt 2/3, error InternalError: We encountered an internal error.  Please retry the operation again later.
        status code: 500, request id: <redacted>, host id: <redacted>
2022/08/03 19:15:16.210893 s3.DEBUG DEBUG: Request s3/ListObjects Details:
---[ REQUEST POST-SIGN ]-----------------------------
GET /demo8?marker=a.%F4%8F%BF%BF&prefix= HTTP/1.1
Host: s3.wasabisys.com
User-Agent: GeeseFS/0.31.5 (go1.16.15; linux; amd64)
Accept-Encoding: identity
Authorization: AWS4-HMAC-SHA256 Credential=<redacted>/20220803/us-east-1/s3/aws4_request, SignedHeaders=accept-encoding;host;x-amz-content-sha256;x-amz-date, Signature=<redacted>
X-Amz-Content-Sha256: <redacted>
X-Amz-Date: <redacted>


-----------------------------------------------------
2022/08/03 19:15:16.326267 s3.DEBUG DEBUG: Response s3/ListObjects Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 500 Internal Server Error
Connection: close
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Wed, 03 Aug 2022 23:15:16 GMT
Server: WasabiS3/7.5.1035-2022-06-08-c4b39686a7 (head07)
X-Amz-Bucket-Region: us-east-1
X-Amz-Id-2: <redacted>
X-Amz-Request-Id: <redacted>

I have also reached out to Wasabi support to inquire about this issue, but given your understanding of geesefs's internals, if you could provide a solution to the above, or some sort of compatibility mode, it would be greatly appreciated. Wasabi offers a free trial which may aid debugging, and I would be more than happy to do anything possible on my side to help.
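
One experiment that may be worth trying, suggested only by the Backblaze fstab recipe further down this page (which also had to change the listing mode): switch the list API version and see whether the 500s disappear:

geesefs --endpoint https://s3.wasabisys.com --list-type 1 mybucket /mnt   (or --list-type 2)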

Unable to run within container

When trying to run geesefs within a container, the error no such file or directory is returned.

For example:

$ ls -la 
total 29712
drwxrwxr-x  2 ubuntu ubuntu     4096 Jun 18 08:00 .
drwxr-xr-x 37 ubuntu ubuntu    20480 Jun 18 07:59 ..
-rwxrwxr-x  1 root   root   30397828 May 24 10:59 geesefs-linux-amd64

$ docker run --rm -w $PWD -v $PWD:$PWD -it --privileged busybox ls -la 
total 29696
drwxrwxr-x    2 1000     1000          4096 Jun 18 08:00 .
drwxr-xr-x    3 root     root          4096 Jun 18 08:06 ..
-rwxrwxr-x    1 root     root      30397828 May 24 10:59 geesefs-linux-amd64

$ docker run --rm -w $PWD -v $PWD:$PWD -it --privileged busybox ./geesefs-linux-amd64
exec ./geesefs-linux-amd64: no such file or directory

I suspect geesefs is trying to resolve some dynamically linked library and failing with that generic error message.
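
A quick way to confirm the linking suspicion with standard tooling: busybox images ship no glibc, so a dynamically linked binary fails with exactly this generic exec error:

$ ldd ./geesefs-linux-amd64
# "not a dynamic executable"  -> static binary, should run in busybox
# a list of .so dependencies  -> needs a glibc-based image (e.g. ubuntu, debian)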

Does it support "Assume Role" and SSO with AWS S3?

The temporary tokens are located in ~/.aws/sso/cache/.

I'm not entirely familiar with how it works, but maybe it is described here: https://docs.aws.amazon.com/sdkref/latest/guide/feature-assume-role-credentials.html

When I simply try to run:

./geesefs --profile ...

It gives:

Sep 21 01:55:25 ip-172-31-5-46 /home/ubuntu/geesefs[270616]: main.ERROR Unable to access 'github-download-experiment': BadRequest: Bad Request
        status code: 400, request id: 9STZND1BF01FSJS3, host id: Mn3fgLxQz8H6RcK0E4gmMezWANNBtPyClFiG+3HVgA1vXpvfgspwxxo0nKrisAHc13rjqEPzDiM=
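
A possible workaround while SSO support is unclear, assuming AWS CLI v2 is available (its export-credentials command materializes temporary keys from the SSO cache; the profile name below is a placeholder):

eval "$(aws configure export-credentials --profile my-sso-profile --format env)"
./geesefs github-download-experiment s3fuse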

A mounted Cloudflare R2 bucket sooner or later gets stuck completely

During the last week, I've experienced multiple hangs with a mounted R2 bucket. It always happens at the end, during writing.

The mount command I use is nothing special; I've tried it with and without the --cheap option:

geesefs --endpoint=https://id.r2.cloudflarestorage.com --shared-config=/home/ubuntu/.r2_auth --memory-limit=550 bucket r2

The shared config file is the following:

[default]
aws_access_key_id = ***
aws_secret_access_key = ***

The rsync process got stuck around 11:22, but I can't say more precisely:

building file list ... 
1402 files to consider
cannot delete non-empty directory: deb/dists/stable/main/binary-arm64
cannot delete non-empty directory: deb/dists/stable/main/binary-arm64
cannot delete non-empty directory: deb/dists/stable/main/binary-amd64
cannot delete non-empty directory: deb/dists/stable/main/binary-amd64
cannot delete non-empty directory: deb/dists/stable/main
cannot delete non-empty directory: deb/dists/stable/main
cannot delete non-empty directory: deb/dists/stable
cannot delete non-empty directory: deb/dists/stable
cannot delete non-empty directory: deb/dists/lts/main/binary-arm64
cannot delete non-empty directory: deb/dists/lts/main/binary-arm64
cannot delete non-empty directory: deb/dists/lts/main/binary-amd64
cannot delete non-empty directory: deb/dists/lts/main/binary-amd64
cannot delete non-empty directory: deb/dists/lts/main
cannot delete non-empty directory: deb/dists/lts/main
cannot delete non-empty directory: deb/dists/lts
cannot delete non-empty directory: deb/dists/lts
cannot delete non-empty directory: deb/dists
cannot delete non-empty directory: rpm/lts/repodata
deb/pool/main/c/clickhouse/clickhouse-client_22.7.6.74_amd64.deb
         75,152 100%   40.42MB/s    0:00:00 (xfr#1, to-chk=1107/1402)
deb/pool/main/c/clickhouse/clickhouse-client_22.7.6.74_arm64.deb
         75,146 100%    6.51MB/s    0:00:00 (xfr#2, to-chk=1106/1402)
deb/pool/main/c/clickhouse/clickhouse-client_22.8.6.71_amd64.deb
         75,274 100%    4.79MB/s    0:00:00 (xfr#3, to-chk=1105/1402)
deb/pool/main/c/clickhouse/clickhouse-client_22.8.6.71_arm64.deb
         75,274 100%    1.84MB/s    0:00:00 (xfr#4, to-chk=1104/1402)
deb/pool/main/c/clickhouse/clickhouse-client_22.9.3.18_amd64.deb
         86,612 100%    1.84MB/s    0:00:00 (xfr#5, to-chk=1099/1402)
deb/pool/main/c/clickhouse/clickhouse-client_22.9.3.18_arm64.deb
         86,622 100%    1.59MB/s    0:00:00 (xfr#6, to-chk=1098/1402)
deb/pool/main/c/clickhouse/clickhouse-common-static-dbg_22.7.6.74_amd64.deb
    872,235,938 100%   18.30MB/s    0:00:45 (xfr#7, to-chk=1079/1402)
deb/pool/main/c/clickhouse/clickhouse-common-static-dbg_22.7.6.74_arm64.deb
    648,380,416  79%    7.38MB/s    0:00:22 # it's stuck here

Here's a log file geesefs.log

Crashes with the following error

fatal error: sync: unlock of unlocked mutex

goroutine 12 [running]:
runtime.throw({0xfe66ae, 0xc025d56f60})
/usr/lib/golang/src/runtime/panic.go:1198 +0x71 fp=0xc00004bed8 sp=0xc00004bea8 pc=0x43b931
sync.throw({0xfe66ae, 0x1})
/usr/lib/golang/src/runtime/panic.go:1184 +0x1e fp=0xc00004bef8 sp=0xc00004bed8 pc=0x469d5e
sync.(*Mutex).unlockSlow(0xc00035a7b8, 0xffffffff)
/usr/lib/golang/src/sync/mutex.go:196 +0x3c fp=0xc00004bf20 sp=0xc00004bef8 pc=0x478cdc
sync.(*Mutex).Unlock(0x0)
/usr/lib/golang/src/sync/mutex.go:190 +0x29 fp=0xc00004bf40 sp=0xc00004bf20 pc=0x478c69
sync.(*Cond).Wait(0xc025d56f60)
/usr/lib/golang/src/sync/cond.go:55 +0x7e fp=0xc00004bf78 sp=0xc00004bf40 pc=0x476e9e
github.com/yandex-cloud/geesefs/internal.(*Goofys).FDCloser(0xc00035a6c0)
/root/geesefs/internal/goofys.go:344 +0x67 fp=0xc00004bfc8 sp=0xc00004bf78 pc=0xd2c647
github.com/yandex-cloud/geesefs/internal.newGoofys·dwrap·33()
/root/geesefs/internal/goofys.go:268 +0x26 fp=0xc00004bfe0 sp=0xc00004bfc8 pc=0xd2c1c6
runtime.goexit()
/usr/lib/golang/src/runtime/asm_amd64.s:1581 +0x1 fp=0xc00004bfe8 sp=0xc00004bfe0 pc=0x46eee1
created by github.com/yandex-cloud/geesefs/internal.newGoofys
/root/geesefs/internal/goofys.go:268 +0xaf1

goroutine 1 [select, 2215 minutes]:
github.com/jacobsa/fuse.(*MountedFileSystem).Join(0xc0003eaea0, {0x11cd1a8, 0xc0000a8000})
/root/go/pkg/mod/github.com/vitalif/[email protected]/mounted_file_system.go:43 +0x7b
main.main.func2(0xc0000ef4a0)
/root/geesefs/main.go:242 +0x63e
github.com/urfave/cli.HandleAction({0xe35900, 0xc0000c7b40}, 0xd)
/root/go/pkg/mod/github.com/urfave/[email protected]/app.go:514 +0xa8
github.com/urfave/cli.(*App).Run(0xc000396380, {0xc0003b2300, 0xd, 0x10})
/root/go/pkg/mod/github.com/urfave/[email protected]/app.go:274 +0x754
main.main()
/root/geesefs/main.go:254 +0x165

goroutine 21 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000116600)
/root/go/pkg/mod/[email protected]/stats/view/worker.go:276 +0xb9
created by go.opencensus.io/stats/view.init.0
/root/go/pkg/mod/[email protected]/stats/view/worker.go:34 +0x92

goroutine 3 [syscall, 2215 minutes]:
os/signal.signal_recv()
/usr/lib/golang/src/runtime/sigqueue.go:169 +0x98
os/signal.loop()
/usr/lib/golang/src/os/signal/signal_unix.go:24 +0x19
created by os/signal.Notify.func1.1
/usr/lib/golang/src/os/signal/signal.go:151 +0x2c

goroutine 35 [select, 2215 minutes]:
io.(*pipe).Read(0xc0004c0120, {0xc0003e8000, 0x1000, 0x7f38c67fba4f})
/usr/lib/golang/src/io/pipe.go:57 +0xb7
io.(*PipeReader).Read(0x0, {0xc0003e8000, 0x100000000000000, 0x7f38f4e25638})
/usr/lib/golang/src/io/pipe.go:134 +0x25
bufio.(*Scanner).Scan(0xc00005df28)
/usr/lib/golang/src/bufio/scan.go:215 +0x865
github.com/sirupsen/logrus.(*Entry).writerScanner(0xc0003bead4, 0xc0004b2078, 0xc0004b4140)
/root/go/pkg/mod/github.com/sirupsen/[email protected]/writer.go:59 +0xa5
created by github.com/sirupsen/logrus.(*Entry).WriterLevel
/root/go/pkg/mod/github.com/sirupsen/[email protected]/writer.go:51 +0x3d6

goroutine 11 [semacquire]:
sync.runtime_SemacquireMutex(0x7f38a5a64658, 0x6f, 0xc000063e70)
/usr/lib/golang/src/runtime/sema.go:71 +0x25
sync.(*Mutex).lockSlow(0xc000756658)
/usr/lib/golang/src/sync/mutex.go:138 +0x165
sync.(*Mutex).Lock(...)
/usr/lib/golang/src/sync/mutex.go:81
github.com/yandex-cloud/geesefs/internal.(*Inode).TryFlush(0xc000756600)
/root/geesefs/internal/file.go:1166 +0x12f
github.com/yandex-cloud/geesefs/internal.(*Goofys).Flusher(0xc00035a6c0)
/root/geesefs/internal/goofys.go:551 +0x393
created by github.com/yandex-cloud/geesefs/internal.newGoofys
/root/geesefs/internal/goofys.go:264 +0xa2f

goroutine 13 [runnable]:
syscall.Syscall(0x0, 0xb, 0xc033ed4000, 0x21000)
/usr/lib/golang/src/syscall/asm_linux_amd64.s:20 +0x5
syscall.read(0xc0003dd8c0, {0xc033ed4000, 0x20300a, 0x41375d})
/usr/lib/golang/src/syscall/zsyscall_linux_amd64.go:687 +0x4d
syscall.Read(...)
/usr/lib/golang/src/syscall/syscall_unix.go:189
internal/poll.ignoringEINTRIO(...)
/usr/lib/golang/src/internal/poll/fd_unix.go:582
internal/poll.(*FD).Read(0xc0003dd8c0, {0xc033ed4000, 0x21000, 0x21000})
/usr/lib/golang/src/internal/poll/fd_unix.go:163 +0x285
os.(*File).read(...)
/usr/lib/golang/src/os/file_posix.go:32
os.(*File).Read(0xc0004b2100, {0xc033ed4000, 0xc000477e08, 0x988bb9})
/usr/lib/golang/src/os/file.go:119 +0x5e
github.com/jacobsa/fuse/internal/buffer.(*InMessage).Init(0xc02a177b80, {0x11b5d40, 0xc0004b2100})
/root/go/pkg/mod/github.com/vitalif/[email protected]/internal/buffer/in_message.go:59 +0x42
github.com/jacobsa/fuse.(*Connection).readMessage(0xc0000a7110)
/root/go/pkg/mod/github.com/vitalif/[email protected]/connection.go:342 +0x49
github.com/jacobsa/fuse.(*Connection).ReadOp(0xc0000a7110)
/root/go/pkg/mod/github.com/vitalif/[email protected]/connection.go:401 +0x48
github.com/jacobsa/fuse/fuseutil.(*fileSystemServer).ServeOps(0xc00044f960, 0xc0000a7110)
/root/go/pkg/mod/github.com/vitalif/[email protected]/fuseutil/file_system.go:106 +0x7f
github.com/jacobsa/fuse.Mount.func1()
/root/go/pkg/mod/github.com/vitalif/[email protected]/mount.go:80 +0x38
created by github.com/jacobsa/fuse.Mount
/root/go/pkg/mod/github.com/vitalif/[email protected]/mount.go:79 +0x351

goroutine 14 [chan receive, 2215 minutes]:
main.registerSIGINTHandler.func1()
/root/geesefs/main.go:53 +0x5e
created by main.registerSIGINTHandler
/root/geesefs/main.go:51 +0x116

goroutine 149811 [IO wait]:
internal/poll.runtime_pollWait(0x7f38a6204e88, 0x72)
/usr/lib/golang/src/runtime/netpoll.go:303 +0x85
internal/poll.(*pollDesc).wait(0xc00040a700, 0xc0002a9500, 0x0)
/usr/lib/golang/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
/usr/lib/golang/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00040a700, {0xc0002a9500, 0x1465, 0x1465})
/usr/lib/golang/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc00040a700, {0xc0002a9500, 0xc0002a9505, 0x279})
/usr/lib/golang/src/net/fd_posix.go:56 +0x29
net.(*conn).Read(0xc026d54000, {0xc0002a9500, 0x6, 0xc00005f7f8})
/usr/lib/golang/src/net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc00000f4e8, {0xc0002a9500, 0x0, 0x4101ed})
/usr/lib/golang/src/crypto/tls/conn.go:778 +0x3d
bytes.(*Buffer).ReadFrom(0xc0000aa278, {0x11b35a0, 0xc00000f4e8})
/usr/lib/golang/src/bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0000aa000, {0x11b5940, 0xc026d54000}, 0x2c0)
/usr/lib/golang/src/crypto/tls/conn.go:800 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc0000aa000, 0x0)
/usr/lib/golang/src/crypto/tls/conn.go:607 +0x112
crypto/tls.(*Conn).readRecord(...)
/usr/lib/golang/src/crypto/tls/conn.go:575
crypto/tls.(*Conn).Read(0xc0000aa000, {0xc01c5ff000, 0x1000, 0x0})
/usr/lib/golang/src/crypto/tls/conn.go:1278 +0x16f
net/http.(*persistConn).Read(0xc0000d0480, {0xc01c5ff000, 0xc0088ac7e0, 0xc00005fd30})
/usr/lib/golang/src/net/http/transport.go:1926 +0x4e
bufio.(*Reader).fill(0xc025fa1b00)
/usr/lib/golang/src/bufio/bufio.go:101 +0x103
bufio.(*Reader).Peek(0xc025fa1b00, 0x1)
/usr/lib/golang/src/bufio/bufio.go:139 +0x5d
net/http.(*persistConn).readLoop(0xc0000d0480)
/usr/lib/golang/src/net/http/transport.go:2087 +0x1ac
created by net/http.(*Transport).dialConn
/usr/lib/golang/src/net/http/transport.go:1747 +0x1e05

goroutine 149781 [IO wait]:
internal/poll.runtime_pollWait(0x7f38a6205910, 0x72)
/usr/lib/golang/src/runtime/netpoll.go:303 +0x85
internal/poll.(*pollDesc).wait(0xc0262be480, 0xc000b6e000, 0x0)
/usr/lib/golang/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
/usr/lib/golang/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0262be480, {0xc000b6e000, 0x149e, 0x149e})
/usr/lib/golang/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc0262be480, {0xc000b6e000, 0xc000b6e005, 0x10d})
/usr/lib/golang/src/net/fd_posix.go:56 +0x29
net.(*conn).Read(0xc026d54010, {0xc000b6e000, 0xc026f3e120, 0xc0004747f8})
/usr/lib/golang/src/net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc001c3c588, {0xc000b6e000, 0x0, 0x4101ed})
/usr/lib/golang/src/crypto/tls/conn.go:778 +0x3d
bytes.(*Buffer).ReadFrom(0xc0000aa978, {0x11b35a0, 0xc001c3c588})
/usr/lib/golang/src/bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0000aa700, {0x11b5940, 0xc026d54010}, 0x149e)
/usr/lib/golang/src/crypto/tls/conn.go:800 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc0000aa700, 0x0)
/usr/lib/golang/src/crypto/tls/conn.go:607 +0x112
crypto/tls.(*Conn).readRecord(...)
/usr/lib/golang/src/crypto/tls/conn.go:575
crypto/tls.(*Conn).Read(0xc0000aa700, {0xc000b4b000, 0x1000, 0x0})
/usr/lib/golang/src/crypto/tls/conn.go:1278 +0x16f
net/http.(*persistConn).Read(0xc00035ad80, {0xc000b4b000, 0x44ef20, 0xc000474ec8})
/usr/lib/golang/src/net/http/transport.go:1926 +0x4e
bufio.(*Reader).fill(0xc03aeb8b40)
/usr/lib/golang/src/bufio/bufio.go:101 +0x103
bufio.(*Reader).Peek(0xc03aeb8b40, 0x1)
/usr/lib/golang/src/bufio/bufio.go:139 +0x5d
net/http.(*persistConn).readLoop(0xc00035ad80)
/usr/lib/golang/src/net/http/transport.go:2087 +0x1ac
created by net/http.(*Transport).dialConn
/usr/lib/golang/src/net/http/transport.go:1747 +0x1e05

goroutine 149782 [select]:
net/http.(*persistConn).writeLoop(0xc00035ad80)
/usr/lib/golang/src/net/http/transport.go:2386 +0xfb
created by net/http.(*Transport).dialConn
/usr/lib/golang/src/net/http/transport.go:1748 +0x1e65

goroutine 150294 [runnable]:
syscall.Syscall(0x14, 0xb, 0xc014c476e0, 0x2)
/usr/lib/golang/src/syscall/asm_linux_amd64.s:20 +0x5
github.com/jacobsa/fuse.writev(0xc0000a7110, {0xc02931d3e0, 0x2, 0xdfdda0})
/root/go/pkg/mod/github.com/vitalif/[email protected]/writev.go:20 +0x97
github.com/jacobsa/fuse.(*Connection).Reply(0xc0000a7110, {0x11cd218, 0xc028a5f050}, {0x0, 0x0})
/root/go/pkg/mod/github.com/vitalif/[email protected]/connection.go:522 +0x625
github.com/jacobsa/fuse/fuseutil.(*fileSystemServer).handleOp(0x1c, 0x113, {0x11cd218, 0xc028a5f050}, {0xdfdda0, 0xc02a177ac0})
/root/go/pkg/mod/github.com/vitalif/[email protected]/fuseutil/file_system.go:241 +0xac5
created by github.com/jacobsa/fuse/fuseutil.(*fileSystemServer).ServeOps
/root/go/pkg/mod/github.com/vitalif/[email protected]/fuseutil/file_system.go:123 +0x205

goroutine 149818 [IO wait]:
internal/poll.runtime_pollWait(0x7f38a6539598, 0x72)
/usr/lib/golang/src/runtime/netpoll.go:303 +0x85
internal/poll.(*pollDesc).wait(0xc03ae9c380, 0xc0002aaa00, 0x0)
/usr/lib/golang/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
/usr/lib/golang/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc03ae9c380, {0xc0002aaa00, 0x149e, 0x149e})
/usr/lib/golang/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc03ae9c380, {0xc0002aaa00, 0xc0002aaa05, 0x10d})
/usr/lib/golang/src/net/fd_posix.go:56 +0x29
net.(*conn).Read(0xc026d54130, {0xc0002aaa00, 0x60, 0xc0004787f8})
/usr/lib/golang/src/net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc00030dec0, {0xc0002aaa00, 0x0, 0x4101ed})
/usr/lib/golang/src/crypto/tls/conn.go:778 +0x3d
bytes.(*Buffer).ReadFrom(0xc0000ab078, {0x11b35a0, 0xc00030dec0})
/usr/lib/golang/src/bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0000aae00, {0x11b5940, 0xc026d54130}, 0x149e)
/usr/lib/golang/src/crypto/tls/conn.go:800 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc0000aae00, 0x0)
/usr/lib/golang/src/crypto/tls/conn.go:607 +0x112
crypto/tls.(*Conn).readRecord(...)
/usr/lib/golang/src/crypto/tls/conn.go:575
crypto/tls.(*Conn).Read(0xc0000aae00, {0xc00b60e000, 0x1000, 0x0})
/usr/lib/golang/src/crypto/tls/conn.go:1278 +0x16f
net/http.(*persistConn).Read(0xc0000d0c60, {0xc00b60e000, 0x44ef20, 0xc000478ec8})
/usr/lib/golang/src/net/http/transport.go:1926 +0x4e
bufio.(*Reader).fill(0xc045463b60)
/usr/lib/golang/src/bufio/bufio.go:101 +0x103
bufio.(*Reader).Peek(0xc045463b60, 0x1)
/usr/lib/golang/src/bufio/bufio.go:139 +0x5d
net/http.(*persistConn).readLoop(0xc0000d0c60)
/usr/lib/golang/src/net/http/transport.go:2087 +0x1ac
created by net/http.(*Transport).dialConn
/usr/lib/golang/src/net/http/transport.go:1747 +0x1e05

goroutine 149829 [IO wait]:
internal/poll.runtime_pollWait(0x7f38a5a8edb8, 0x72)
/usr/lib/golang/src/runtime/netpoll.go:303 +0x85
internal/poll.(*pollDesc).wait(0xc00040ad00, 0xc000b81500, 0x0)
/usr/lib/golang/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
/usr/lib/golang/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00040ad00, {0xc000b81500, 0x1465, 0x1465})
/usr/lib/golang/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc00040ad00, {0xc000b81500, 0xc0004c0ba0, 0x3})
/usr/lib/golang/src/net/fd_posix.go:56 +0x29
net.(*conn).Read(0xc026d54138, {0xc000b81500, 0x1, 0xc0004767f8})
/usr/lib/golang/src/net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc001c3c5b8, {0xc000b81500, 0x0, 0x4101ed})
/usr/lib/golang/src/crypto/tls/conn.go:778 +0x3d
bytes.(*Buffer).ReadFrom(0xc0002ae278, {0x11b35a0, 0xc001c3c5b8})
/usr/lib/golang/src/bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0002ae000, {0x11b5940, 0xc026d54138}, 0x0)
/usr/lib/golang/src/crypto/tls/conn.go:800 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc0002ae000, 0x0)
/usr/lib/golang/src/crypto/tls/conn.go:607 +0x112
crypto/tls.(*Conn).readRecord(...)
/usr/lib/golang/src/crypto/tls/conn.go:575
crypto/tls.(*Conn).Read(0xc0002ae000, {0xc00e3c0000, 0x1000, 0x101000000d360f5})
/usr/lib/golang/src/crypto/tls/conn.go:1278 +0x16f
net/http.(*persistConn).Read(0xc00035afc0, {0xc00e3c0000, 0x40b6bd, 0x60})
/usr/lib/golang/src/net/http/transport.go:1926 +0x4e
bufio.(*Reader).fill(0xc012cb6300)
/usr/lib/golang/src/bufio/bufio.go:101 +0x103
bufio.(*Reader).Peek(0xc012cb6300, 0x1)
/usr/lib/golang/src/bufio/bufio.go:139 +0x5d
net/http.(*persistConn).readLoop(0xc00035afc0)
/usr/lib/golang/src/net/http/transport.go:2087 +0x1ac
created by net/http.(*Transport).dialConn
/usr/lib/golang/src/net/http/transport.go:1747 +0x1e05

goroutine 149830 [select]:
net/http.(*persistConn).writeLoop(0xc00035afc0)
/usr/lib/golang/src/net/http/transport.go:2386 +0xfb
created by net/http.(*Transport).dialConn
/usr/lib/golang/src/net/http/transport.go:1748 +0x1e65

goroutine 149756 [IO wait]:
internal/poll.runtime_pollWait(0x7f38cc7a3310, 0x72)
/usr/lib/golang/src/runtime/netpoll.go:303 +0x85
internal/poll.(*pollDesc).wait(0xc00693c100, 0xc000b80000, 0x0)
/usr/lib/golang/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
/usr/lib/golang/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00693c100, {0xc000b80000, 0x149e, 0x149e})
/usr/lib/golang/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc00693c100, {0xc000b80000, 0xc000b80005, 0x10d})
/usr/lib/golang/src/net/fd_posix.go:56 +0x29
net.(*conn).Read(0xc026d54008, {0xc000b80000, 0x60, 0xc0002287f8})
/usr/lib/golang/src/net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc00030de90, {0xc000b80000, 0x0, 0x4101ed})
/usr/lib/golang/src/crypto/tls/conn.go:778 +0x3d
bytes.(*Buffer).ReadFrom(0xc0000aa5f8, {0x11b35a0, 0xc00030de90})
/usr/lib/golang/src/bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0000aa380, {0x11b5940, 0xc026d54008}, 0x149e)
/usr/lib/golang/src/crypto/tls/conn.go:800 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc0000aa380, 0x0)
/usr/lib/golang/src/crypto/tls/conn.go:607 +0x112
crypto/tls.(*Conn).readRecord(...)
/usr/lib/golang/src/crypto/tls/conn.go:575
crypto/tls.(*Conn).Read(0xc0000aa380, {0xc007b42000, 0x1000, 0x0})
/usr/lib/golang/src/crypto/tls/conn.go:1278 +0x16f
net/http.(*persistConn).Read(0xc01db58000, {0xc007b42000, 0x44ef20, 0xc000228ec8})
/usr/lib/golang/src/net/http/transport.go:1926 +0x4e
bufio.(*Reader).fill(0xc026bd5e60)
/usr/lib/golang/src/bufio/bufio.go:101 +0x103
bufio.(*Reader).Peek(0xc026bd5e60, 0x1)
/usr/lib/golang/src/bufio/bufio.go:139 +0x5d
net/http.(*persistConn).readLoop(0xc01db58000)
/usr/lib/golang/src/net/http/transport.go:2087 +0x1ac
created by net/http.(*Transport).dialConn
/usr/lib/golang/src/net/http/transport.go:1747 +0x1e05

goroutine 149819 [select]:
net/http.(*persistConn).writeLoop(0xc0000d0c60)
/usr/lib/golang/src/net/http/transport.go:2386 +0xfb
created by net/http.(*Transport).dialConn
/usr/lib/golang/src/net/http/transport.go:1748 +0x1e65

goroutine 149757 [select]:
net/http.(*persistConn).writeLoop(0xc01db58000)
/usr/lib/golang/src/net/http/transport.go:2386 +0xfb
created by net/http.(*Transport).dialConn
/usr/lib/golang/src/net/http/transport.go:1748 +0x1e65

goroutine 149812 [select]:
net/http.(*persistConn).writeLoop(0xc0000d0480)
/usr/lib/golang/src/net/http/transport.go:2386 +0xfb
created by net/http.(*Transport).dialConn
/usr/lib/golang/src/net/http/transport.go:1748 +0x1e65

Symlinks feature works incorrectly: symlinks are not restored.

Steps to reproduce:

  1. Mount the folder /tmp/gfs:
    ./geesefs --endpoint https://s3.amazonaws.com/ test:geesefs-cloud /tmp/gfs
  2. Create data files and symlinks that point to them. Example folder structure:
/tmp/gfs/data/log.txt
/tmp/gfs/data/log2.txt
/tmp/gfs/symlinks/log.txt -> /tmp/gfs/data/log.txt
/tmp/gfs/symlinks/log2.txt -> /tmp/gfs/data/log2.txt
  3. Check your S3 bucket with the stored files. All is good: x-amz-meta---symlink-target is present on the symlink objects in S3.
  4. umount /tmp/gfs/
  5. Use geesefs to mount the /tmp/gfs folder again:
    ./geesefs --endpoint https://s3.amazonaws.com/ test:geesefs-cloud /tmp/gfs
  6. Run ls -la /tmp/gfs/symlinks/. All files inside the symlinks directory are no longer symlinks.

Does not work out of the box (printing "bucket does not exist" in the log)

$ ./geesefs-linux-amd64 github-download-experiment s3fuse
2022/04/09 16:22:18.098230 main.FATAL Unable to mount file system, see syslog for details

It prints that the bucket does not exist:

Apr  9 16:28:43 ip-172-31-32-168 /home/ubuntu/geesefs-linux-amd64[211205]: main.ERROR Unable to access 'github-download-experiment': bucket github-download-experiment does not exist
Apr  9 16:28:43 ip-172-31-32-168 /home/ubuntu/geesefs-linux-amd64[211205]: main.FATAL Mounting file system: Mount: initialization failed

But the bucket does exist:

$ aws s3 ls s3://github-download-experiment
                           PRE zip/

Credentials have been set up:

$ ls -l ~/.aws/
total 8
-rw------- 1 ubuntu ubuntu  45 Apr  9 02:04 config
-rw------- 1 ubuntu ubuntu 116 Apr  9 02:04 credentials

I tried several different variants:

 ./geesefs-linux-amd64 github-download-experiment s3fuse
 ./geesefs-linux-amd64 github-download-experiment ./s3fuse
 ./geesefs-linux-amd64 s3://github-download-experiment s3fuse

Nothing works.
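
One thing worth ruling out, under the assumption that the HeadBucket probe is being answered by the wrong region (some providers answer wrong-region requests with an error that geesefs would report as a missing bucket): pass the bucket's region or regional endpoint explicitly (us-east-1 below is a placeholder):

./geesefs-linux-amd64 --region us-east-1 github-download-experiment s3fuse
./geesefs-linux-amd64 --endpoint https://s3.us-east-1.amazonaws.com github-download-experiment s3fuse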

Memory used

Two buckets are mounted.

I noticed that the application consumes quite a lot of memory for one of them:

[screenshots: memory usage of the geesefs process for the first bucket]

Is this normal?

The second bucket:

[screenshots: memory usage of the geesefs process for the second bucket]

How to ignore certificate validation

I have a custom MinIO endpoint with a certificate issued by cert-manager. When I try to use geesefs with the https endpoint, I get an error that the certificate authority is not valid.

How can I either turn off https and use http, or ignore the TLS certificate check?

Thanks
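
No TLS-skip flag shows up anywhere on this page, but a plain-HTTP endpoint does appear in another report here (--endpoint=http://host:port), so if your MinIO server also listens on plain HTTP, switching the scheme may be the simplest route (host and port are placeholders):

geesefs --endpoint http://minio.internal:9000 mybucket /mnt/mybucket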

main.ERROR stacktrace from panic: runtime error: slice bounds out of range [:2] with capacity 0

main.ERROR stacktrace from panic: runtime error: slice bounds out of range [:2] with capacity 0
goroutine 277553 [running]:
runtime/debug.Stack(0xc000615970, 0x10af760, 0xc017d0a000)
        /opt/hostedtoolcache/go/1.16.8/x64/src/runtime/debug/stack.go:24 +0x9f
github.com/yandex-cloud/geesefs/api/common.LogPanic(0xc000615f38)
        /home/runner/work/geesefs/geesefs/api/common/panic_logger.go:32 +0x76
panic(0x10af760, 0xc017d0a000)
        /opt/hostedtoolcache/go/1.16.8/x64/src/runtime/panic.go:965 +0x1b9
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc0018fa600)
        /home/runner/work/geesefs/geesefs/internal/dir.go:840 +0x165
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc0018fa300)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc0018f9e00)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc000b62d80)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc0013ec180)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc000ea9500)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc00070d680)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
github.com/yandex-cloud/geesefs/internal.(*DirHandle).ReadDir(0xc012fd4020, 0x0, 0x0, 0xc001356060, 0x1, 0x1)
        /home/runner/work/geesefs/geesefs/internal/dir.go:572 +0x210
github.com/yandex-cloud/geesefs/internal.(*Goofys).ReadDir(0xc000416000, 0x133b0b8, 0xc013d9a150, 0xc003696840, 0x0, 0xc016902d20)
        /home/runner/work/geesefs/geesefs/internal/goofys.go:1127 +0x265
github.com/yandex-cloud/geesefs/api/common.FusePanicLogger.ReadDir(0x134b380, 0xc000416000, 0x133b0b8, 0xc013d9a150, 0xc003696840, 0x0, 0x0)
        /home/runner/work/geesefs/geesefs/api/common/panic_logger.go:101 +0x8c
github.com/jacobsa/fuse/fuseutil.(*fileSystemServer).handleOp(0xc0003bf3c0, 0xc00048e9c0, 0x133b0b8, 0xc013d9a150, 0xf4b320, 0xc003696840)
        /home/runner/go/pkg/mod/github.com/vitalif/[email protected]/fuseutil/file_system.go:183 +0xc87
created by github.com/jacobsa/fuse/fuseutil.(*fileSystemServer).ServeOps
        /home/runner/go/pkg/mod/github.com/vitalif/[email protected]/fuseutil/file_system.go:123 +0x1a5

fuse.ERROR *fuseops.ReadDirOp error: input/output error

Hard links support

Thanks @vitalif,
really cool driver, performance is great!
Are there any plans to support hard links?
Sometimes they're really needed.

macOS cannot copy to mounted directory

Hi everyone!

macOS 10.15.7 (19H1615)

I've installed the prebuilt binary for mac/amd64.

In one window I run this:

sudo geesefs -f --endpoint https://storage.yandexcloud.net/ my-videos videos
2022/04/20 16:28:07.687106 main.INFO File system has been successfully mounted.

In the other window:

ls
ls: videos: No such file or directory
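
A hedged guess at the cause: the filesystem was mounted by root via sudo, and FUSE hides a mount from other users unless allow_other is set; the option appears in several other reports on this page and can be passed with -o:

sudo geesefs -f -o allow_other --endpoint https://storage.yandexcloud.net/ my-videos videos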

Binary only installation does not allow to work with fstab

When you use geesefs by just downloading the binary, adding an entry to fstab produces the following error:

$ sudo mount -a
/bin/sh: 1: geesefs: not found

What should you do to tell FUSE that the handler for the fuse.geesefs filesystem type resides in the binary you've just downloaded? (This should be added to the installation documentation, I think.)
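
A minimal sketch of one answer, assuming the failure is plain PATH resolution (the error above comes from /bin/sh being unable to find a program named geesefs): put the downloaded binary, or a symlink to it, where root's PATH can see it under that name:

sudo cp ./geesefs-linux-amd64 /usr/local/bin/geesefs
sudo ln -s /usr/local/bin/geesefs /usr/bin/geesefs   # if /usr/local/bin is not in root's PATH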

Running inside docker without privileges?

I am trying to run geesefs inside docker.

All works fine when I add privileges to container
docker run --rm -it --device /dev/fuse --cap-add SYS_ADMIN <image> /bin/bash

But it doesn't work without --cap-add

$ docker run --rm -it -v /lib/modules:/lib/modules <image> /bin/bash

root@6b40303f1416:$ modprobe fuse
root@6b40303f1416:$ AWS_ACCESS_KEY_ID=<key> AWS_SECRET_ACCESS_KEY=<secret> geesefs -f --endpoint <endpoint> <bucket>:<path> /tmp/geesefs-mountpoint
/usr/bin/fusermount: fuse device not found, try 'modprobe fuse' first
2022/02/04 10:28:24.830729 main.FATAL Mounting file system: Mount: mount: running /usr/bin/fusermount: exit status 1

Any way to make it work?

Dockerfile

FROM ubuntu:focal

RUN apt-get update -qq && apt-get install -y fuse kmod && rm -rf /var/lib/apt/lists/*

COPY ./geesefs-linux-amd64 /usr/bin/geesefs

RUN mkdir /tmp/geesefs-mountpoint
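
For reference, a minimal-privilege invocation that avoids full --privileged but still satisfies FUSE: mounting needs the fuse device plus CAP_SYS_ADMIN inside the container, which is a general Docker/FUSE constraint rather than something geesefs can work around. On AppArmor-enabled hosts the profile may need relaxing too:

docker run --rm -it --device /dev/fuse --cap-add SYS_ADMIN --security-opt apparmor=unconfined <image> /bin/bash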

Docker bind mount "permission denied"

How can I launch a docker container that maps a mounted directory on the host?

docker -v /home/user/src:/target ...

gives

docker: Error response from daemon: error while creating mount source path '/home/user/src': mkdir /home/user/src: file exists.

while

--mount type=bind,source=/home/user/src,destination=/data

gives

docker: Error response from daemon: invalid mount config for type "bind": stat /home/user/src: permission denied.
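
A hedged explanation and sketch: the Docker daemon runs as root, and a FUSE mount made by a regular user is invisible to root unless it was mounted with allow_other, which in turn requires user_allow_other in /etc/fuse.conf (both options appear in other reports on this page):

echo user_allow_other | sudo tee -a /etc/fuse.conf
geesefs -o allow_other my-bucket /home/user/src
docker run --mount type=bind,source=/home/user/src,destination=/data ...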

Does the cache work at all?

I specified a cache directory, and no matter what I do (uploading or downloading several times), the directory never fills up with cached data. Some cache is clearly used during uploads, because copying into the mounted filesystem completes quickly, so a cache exists somewhere; but I would like to control its location.

Problem with reading many files

I try to execute ls (1,500 small files in the directory bucket_name/directory), but it freezes.
After ~20 min I get the message:
ls: reading directory '.': Software caused connection abort and lost connection :(

I can execute ls on the same directory with goofys without problems.

Recipe section with an fstab example for Backblaze B2?

Hey Yandex,

Great work on GeeseFS, it's much faster than s3fs in my usage! An /etc/fstab example for Backblaze B2 would probably help a bunch of people who want to use geesefs:

bucket /path/to/mountpoint fuse.geesefs _netdev,allow_other,--file-mode=0666,--dir-mode=0777,--list-type=1,--endpoint=https://s3.eu-central-003.backblazeb2.com,--shared-config=/path/to/backblaze/config.aws.toml 0 0

I guess most people would be able to easily figure out the --shared-config bit as I did, but I was surprised by the need to change --list-type.

Having a small section of recipes for different providers might be nice to have, but it would be another thing to maintain.

symlinks and case sensitivity

Hi,

I've run into an issue with symlinks on geesefs where the case sensitivity of the target is lost, despite the metadata returning it correctly. For example, if I sync some files across, the case is shown correctly the first time.

1st time (after rsync):

$ ls -al /opt/geesefs/galaxy/v1/data.galaxyproject.org/byhand/xenTro1/bwa_index/xenTro1.fa
lrw-r--r-- 1 ubuntu ubuntu 0 May 18 18:21 /opt/geesefs/galaxy/v1/data.galaxyproject.org/byhand/xenTro1/bwa_index/xenTro1.fa -> ../seq/xenTro1.fa

2nd time (after unmount/remount)

$ ls -al /opt/geesefs/galaxy/v1/data.galaxyproject.org/byhand/xenTro1/bwa_index/xenTro1.fa
lrw-r--r-- 1 ubuntu ubuntu 17 May 18 17:54 /opt/geesefs/galaxy/v1/data.galaxyproject.org/byhand/xenTro1/bwa_index/xenTro1.fa -> ../seq/xentro1.fa

As can be seen, the target's case is no longer preserved, and therefore, the link is broken.

The metadata does preserve case correctly.

-----------------------------------------------------
2022/05/18 18:25:15.579592 s3.DEBUG DEBUG: Response s3/HeadObject Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Type: binary/octet-stream
Date: Wed, 18 May 2022 18:25:16 GMT
Etag: "d41d8cd98f00b204e9800998ecf8427e"
Last-Modified: Wed, 18 May 2022 18:21:59 GMT
Server: AmazonS3
X-Amz-Meta---Symlink-Target: ../seq/xenTro1.fa
X-Amz-Request-Id: 6RTAQPQGTEBKAY48
Content-Length: 0

This is on Amazon S3. I did see issue #14 and understand that a direct HEAD on the symlink is needed for things to work correctly. But this case-sensitivity issue seems different?

I'm using the latest geesefs.

VERSION:
   0.31.1

main.FATAL Mounting file system: Mount: mount: running /usr/bin/fusermount: exit status 1

OS: Ubuntu 20.04.1 LTS
Installation and launch method:

~> wget https://github.com/yandex-cloud/geesefs/releases/latest/download/geesefs-linux-amd64
~> chmod a+x ./geesefs-linux-amd64
~> ./geesefs-linux-amd64 <bucket-name> <mounted-folder>

~/.aws/credentials

[default]
aws_access_key_id = <AWS_ACCESS_KEY_ID>
aws_secret_access_key = <AWS_SECRET_ACCESS_KEY>
$ ~/geesefs-linux-amd64 <bucket-name> <mounted-folder>

/etc/fstab

....
<bucket-name> <mounted-folder> fuse.geesefs _netdev,allow_other,--file-mode=0666,--dir-mode=0777 0 0

After launching I get the following error:
main.FATAL Unable to mount file system, see syslog for details

Looking in /var/log/syslog I see:
/home/egor/geesefs-linux-amd64[53342]: main.FATAL Mounting file system: Mount: mount: running /usr/bin/fusermount: exit status 1
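
Two quick checks before digging deeper (generic FUSE diagnostics, not geesefs-specific): fusermount exits with status 1 both when the fuse device is missing and when allow_other is requested without the matching /etc/fuse.conf setting, which the fstab entry above would hit when mounting as a non-root user:

ls -l /dev/fuse                        # the device must exist (modprobe fuse)
grep user_allow_other /etc/fuse.conf   # required for the allow_other option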

Can we use geesefs for writing data to S3?

I cannot get writes to S3 through FUSE to work, using a geesefs mount created with the following command:

geesefs-linux-amd64  --file-mode=0666 --dir-mode=0777 --uid=1000 test-bucket-7d7fsdfs ./mnt

Creating files or directories in ./mnt doesn't change anything in test-bucket-7d7fsdfs.
At the same time, copying files to the bucket via the aws CLI works:

aws --endpoint-url=https://storage.yandexcloud.net s3 cp some-file s3://test-bucket-7d7fsdfs/
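
A hedged observation: the working aws call above passes --endpoint-url=https://storage.yandexcloud.net explicitly, while the geesefs mount command does not, so the two may not be talking to the same service at all. Worth retrying the mount with the endpoint spelled out:

geesefs-linux-amd64 --endpoint https://storage.yandexcloud.net --file-mode=0666 --dir-mode=0777 --uid=1000 test-bucket-7d7fsdfs ./mnt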

main.ERROR stacktrace from panic: runtime error: slice bounds out of range

Environment:

4.18.0-240.10.1.el8_3.x86_64
CentOS Linux release 8.3.2011

geesefs version 0.28.4

fuse-overlayfs-1.6-1.module_el8.4.0+886+c9a8d9ad.x86_64
fuse3-libs-3.2.1-12.el8.x86_64
fuse3-3.2.1-12.el8.x86_64
s3fs-fuse-1.90-1.el8.x86_64
fuse-common-3.2.1-12.el8.x86_64
fuse-2.9.7-12.el8.x86_64
fuse-libs-2.9.7-12.el8.x86_64

The situation shown below occurs at a random moment.
It can be reproduced with 100% probability simply by browsing directories.
Moreover, no dependence on a large number of files has been noticed.

What other data is needed for analysis?

2021/10/04 22:52:39.303735 main.ERROR stacktrace from panic: runtime error: slice bounds out of range [:2] with capacity 0
goroutine 86 [running]:
runtime/debug.Stack(0xc000da3930, 0x10af760, 0xc000376990)
        /opt/hostedtoolcache/go/1.16.8/x64/src/runtime/debug/stack.go:24 +0x9f
github.com/yandex-cloud/geesefs/api/common.LogPanic(0xc000da3f38)
        /home/runner/work/geesefs/geesefs/api/common/panic_logger.go:32 +0x76
panic(0x10af760, 0xc000376990)
        /opt/hostedtoolcache/go/1.16.8/x64/src/runtime/panic.go:965 +0x1b9
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc000b70d80)
        /home/runner/work/geesefs/geesefs/internal/dir.go:840 +0x165
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc000b70c00)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc000b70780)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc000b70300)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc000473e00)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc000473980)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc0002e9c80)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc0002e8c00)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
github.com/yandex-cloud/geesefs/internal.(*DirHandle).ReadDir(0xc000d92780, 0x0, 0x0, 0xc000da8530, 0x1, 0x1)
        /home/runner/work/geesefs/geesefs/internal/dir.go:572 +0x210
github.com/yandex-cloud/geesefs/internal.(*Goofys).ReadDir(0xc0000b0000, 0x133b0b8, 0xc000c90720, 0xc000a3ec00, 0x0, 0x0)
        /home/runner/work/geesefs/geesefs/internal/goofys.go:1127 +0x265
github.com/yandex-cloud/geesefs/api/common.FusePanicLogger.ReadDir(0x134b380, 0xc0000b0000, 0x133b0b8, 0xc000c90720, 0xc000a3ec00, 0x0, 0x0)
        /home/runner/work/geesefs/geesefs/api/common/panic_logger.go:101 +0x8c
github.com/jacobsa/fuse/fuseutil.(*fileSystemServer).handleOp(0xc000684460, 0xc000118680, 0x133b0b8, 0xc000c90720, 0xf4b320, 0xc000a3ec00)
        /home/runner/go/pkg/mod/github.com/vitalif/[email protected]/fuseutil/file_system.go:183 +0xc87
created by github.com/jacobsa/fuse/fuseutil.(*fileSystemServer).ServeOps
        /home/runner/go/pkg/mod/github.com/vitalif/[email protected]/fuseutil/file_system.go:123 +0x1a5

2021/10/04 22:52:39.303756 fuse.DEBUG Op 0x00000040        connection.go:502] -> Error: "input/output error"
2021/10/04 22:52:39.303767 fuse.ERROR *fuseops.ReadDirOp error: input/output error

Suspected memory leak

Version 0.30.9

A scenario that reproduces it well: once an hour, find walks the bucket (about 230,000 files) and deletes whatever matches a condition.
In this mode, over 5 days the geesefs process eats up 25 GB of memory and is killed by the OOM killer.
Options used:

-o rw,allow_other,--file-mode=0666,--dir-mode=0777,--uid=yyy,--gid=xxx,--shared-config=/etc/passwd-geesefs,--endpoint=http://host:port,--http-timeout=10s,--retry-interval=5s,--list-type=2,dev,suid,--debug,--log-file=/tmp/log111.txt

I found ticket #23 and read about the PPROF environment variable, but for some reason the port does not open; the debug log contains only "2022/04/28 13:42:11.549886 main.INFO File system has been successfully mounted."
I checked /proc/21217/environ; the variable is set:

xargs --null --max-args=1 echo < /proc/21217/environ | grep PPROF
PPROF=6060

The S3 service is not Yandex's.
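
For reference, once the PPROF listener does open, a heap profile can be pulled with standard Go tooling (assuming it listens on localhost:6060, as PPROF=6060 suggests):

go tool pprof http://localhost:6060/debug/pprof/heap
curl -s http://localhost:6060/debug/pprof/heap > heap.pprof   # or save the raw profile for later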

SIGSEGV invalid memory address or nil pointer dereference

2021/10/10 16:10:15.618312 main.INFO File system has been successfully mounted.
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xe4e690]

goroutine 16751 [running]:
github.com/yandex-cloud/geesefs/internal.(*S3Backend).MultipartBlobAdd(0xc000642000, 0xc0008491d0, 0x3b8000, 0xc07b46c080, 0xc000849170)
        /home/runner/work/geesefs/geesefs/internal/backend_s3.go:974 +0xb0
github.com/yandex-cloud/geesefs/internal.(*Inode).FlushPart(0xc0008e9680, 0x15)
        /home/runner/work/geesefs/geesefs/internal/file.go:1747 +0x390
github.com/yandex-cloud/geesefs/internal.(*Inode).SendUpload.func4(0xc0008e9680, 0x15, 0x6900000, 0x500000)
        /home/runner/work/geesefs/geesefs/internal/file.go:1404 +0x5b
created by github.com/yandex-cloud/geesefs/internal.(*Inode).SendUpload
        /home/runner/work/geesefs/geesefs/internal/file.go:1402 +0x291

VERSION:
0.28.7

What I run:

find /mnt/s3_zprivatedatastor_geesefs/junk/recode/src/ -mindepth 1 -maxdepth 1 -type d -printf "%P\0" | sort -z -r | xargs -0 -n1 -P8 -I sRc rsync -arvhW --size-only --remove-source-files --progress '/mnt/s3_zprivatedatastor_geesefs/junk/recode/src/sRc' '/mnt/s3_zprivatedatastor_geesefs/data/src/'

Essentially, I am moving ~2 GB videos from standard storage to cold storage (I previously mounted with the default storage class and am fixing that).

This time I mounted like this:
sudo nohup geesefs -f --uid=1000 --gid=1000 --dir-mode=0777 --file-mode=0777 --memory-limit=8000 --storage-class=STANDARD_IA -o allow_other zprivatedatastor /mnt/s3_zprivatedatastor_geesefs

The error occurs when the rsync process is interrupted with SIGINT; it reproduces 100% of the time for me.

No official package repositories

Hello,

This project cannot really be taken seriously without officially supported package repositories for the most common distros (Debian/Ubuntu and CentOS).

Weird things happen while copying dir in MC

I mounted a YC S3 bucket in a YC compute VM to run some tests. I put in a dir with about 1k files (random docs, several KB to 50 MB in size, ~1.5 GB total). Then I tried to copy the directory from the mount point to the local FS with Midnight Commander and got some strange results. At a random point it shows a warning dialog telling me that the target file already exists. If I choose to overwrite all existing files, it goes on copying and shows really weird info like this:

[screenshot: MC copy dialog showing garbled progress information]
Note that the processed-files counters on the right side show the correct number of files and the correct total source dir size.

Eventually it gets the job done and all the files are actually OK, but this behaviour is really weird.
This issue seems to happen only in MC; cp -ir works just fine. It also seems to depend on the number of files in the dir: with ~1k files it happens every time I try, with ~500 files it happens sometimes, and with ~300 files I couldn't reproduce it.

Feature request: set file's `mtime`

Hello, thanks for a great tool; it's much faster than s3fs.

We have a feature request that would make it an order of magnitude more useful with rsync.

Right now I have to sync files as rsync --no-times --size-only --delete src dst, which has an obvious flaw: it works more or less for big files, but when a file's size remains the same, the file won't be updated.
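
Until mtime is supported, a hedged workaround is rsync's checksum mode, which compares file contents instead of size and modification time; it is slower, since every file is read on both sides, but it does catch same-size changes:

rsync --no-times --checksum --delete src dst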

There were headers present in the request which were not signed in case of load balancing

I use geesefs with MinIO. When it talks directly to MinIO it works fine, but for HA and LB reasons I use a cluster of 2 WatchGuard XTM 515 UTM devices. This cluster balances requests between a few servers and works fine for my web services; for example, I distribute files from my MinIO servers through it. But when I set up geesefs to use the cluster instead of a standalone MinIO server, I get the following error when simply copying a file from the mounted drive to local:

Dec 27 15:46:54 wp-angstrem /sbin/geesefs[158010]: main.ERROR Error reading 0 +131072 of webplanner/12/02/1202669.b3db: AccessDenied: There were headers present in the request which were not signed
        status code: 400, request id: 16C49D6368BE555C, host id:
Dec 27 15:46:54 wp-angstrem /sbin/geesefs[158010]: fuse.ERROR *fuseops.VectoredReadOp error: invalid argument

At the same time I can download the file via https or the aws CLI. This is the mount record in fstab:

wp3d-archive /archive-s3 fuse.geesefs _netdev,allow_other,--file-mode=0660,--dir-mode=0770,--uid=1001,--gid=1001,--shared-config=/etc/passwd-s3fs,--endpoint=https://s3-cold.bazissoft.ru,--region=ru-central 0 0

Interestingly, when I copy a file to the mounted drive and then copy it back, everything works fine; I think because of caching.

geesefs version 0.30.4

Quota support?

Can geesefs support file system quotas on a "per-directory" basis? (Say just top level directories, immediately under the bucket root.)
a. Have they been implemented?
b. If not, are they feasible?

Cannot connect to s3 bucket at startup

I have added this line to /etc/fstab:

s3_bucket /mnt/aws fuse.geesefs _netdev,allow_other,--file-mode=0666,--dir-mode=0777,--profile=default,--uid=1000,--gid=1000,--shared-config=/home/user/.aws/credentials,--endpoint=https://s3.amazonaws.com,--memory-limit=4000,--max-flushers=32,--max-parallel-parts=32,--part-sizes=25 0

But I'm getting this error in the system logs, and s3_bucket is not mounted at startup:

s3.ERROR code=RequestError msg=send request failed, err=Head "https://s3.amazonaws.com/s3_bucket/5fzi2ktkalwlgdso7dsahfjuy95t712it": dial tcp: lookup s3.amazonaws.com: Temporary failure in name resolution

but I can connect manually with this command:

geesefs --endpoint https://s3.amazonaws.com s3_bucket /mnt/aws/
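
A sketch of one common fix, assuming the failure is purely ordering (the mount runs before DNS is up, despite _netdev): let systemd mount it lazily on first access instead of at boot by adding x-systemd.automount to the options, e.g.:

s3_bucket /mnt/aws fuse.geesefs _netdev,x-systemd.automount,allow_other,--file-mode=0666,--dir-mode=0777,--endpoint=https://s3.amazonaws.com 0 0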

fuse.ERROR writeMessage: invalid argument [80 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0]

Hello.

When trying to connect to Yandex Object Storage I get this error:

-----------------------------------------------------
2021/09/30 16:19:52.769225 fuse.DEBUG Op 0x00000001        connection.go:411] <- init
2021/09/30 16:19:52.769257 fuse.DEBUG Op 0x00000001        connection.go:500] -> OK ()
2021/09/30 16:19:52.769278 fuse.ERROR writeMessage: invalid argument [80 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0]
2021/09/30 16:19:52.769294 main.INFO File system has been successfully mounted.
2021/09/30 16:19:52.785066 s3.DEBUG DEBUG: Response s3/ListMultipartUploads Details:
---[ RESPONSE ]--------------------------------------

As a result, the directory does get mounted, but:

ls -lha
ls: cannot access share: Connection refused

d??????????  ? ?    ?       ?            ? share

Both with the default launch parameters:

./geesefs -f --debug_s3 --debug_fuse --debug share /mnt/share

and with various parameters changed (in particular, --dir-mode and --file-mode), the result and the error do not change.

Meanwhile, running

./goofys -f --debug_s3 --debug_fuse share /mnt/share

mounts correctly:

-----------------------------------------------------
2021/09/30 16:34:03.086817 fuse.DEBUG Op 0x00000001        connection.go:408] <- init
2021/09/30 16:34:03.086842 fuse.DEBUG Op 0x00000001        connection.go:491] -> OK ()
2021/09/30 16:34:03.086871 main.INFO File system has been successfully mounted.
2021/09/30 16:34:03.098758 s3.DEBUG DEBUG: Response s3/ListMultipartUploads Details:
---[ RESPONSE ]--------------------------------------
drwxr-xr-x.  2 root root 4.0K Sep 30 16:34 share

Could you point out the cause and a way to fix it?
Thanks!

Build fails

"Ubuntu 20.04.3 LTS"
go version go1.13.8 linux/amd64

go get github.com/yandex-cloud/geesefs
# github.com/golang-jwt/jwt
go/src/github.com/golang-jwt/jwt/ecdsa.go:135:4: r.FillBytes undefined (type *big.Int has no field or method FillBytes)
go/src/github.com/golang-jwt/jwt/ecdsa.go:136:4: s.FillBytes undefined (type *big.Int has no field or method FillBytes)
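
The failing calls point at the toolchain rather than the project: big.Int.FillBytes was added in Go 1.15, so go1.13.8 cannot compile the golang-jwt dependency. Checking and upgrading the toolchain should clear it (the snap channel below is one of several ways to get a newer Go on Ubuntu 20.04):

go version                          # needs go1.15 or newer for big.Int.FillBytes
sudo snap install go --classic      # or install from https://go.dev/dl/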

fstab config, geesefs version 0.30.9

With this line, the profile is picked up and files in the bucket are accessible:
bucket /mnt/mountpoint fuse.geesefs _netdev,allow_other,--file-mode=0666,--dir-mode=0777,--profile=default 0 0

But with this one, it does not work; the default profile is not picked up:
bucket /mnt/mountpoint fuse.geesefs _netdev,allow_other,--uid=1000,--gid=1001,--file-mode=0775,--dir-mode=0775,--shared-config=/home/user/.aws/credentials 0 0

geesefs drops credentials when bucket is public

geesefs drops credentials when a YC bucket is public:

# geesefs -f  --debug_s3 --debug --iam <redacted> <redacted>
2022/05/21 21:16:27.491339 s3.INFO Successfully acquired IAM Token
2022/05/21 21:16:27.524579 s3.DEBUG HEAD https://storage.yandexcloud.net/<redacted> = 200 []
2022/05/21 21:16:27.524613 s3.DEBUG X-Amz-Request-Id = [XXXXXX]
2022/05/21 21:16:27.524643 s3.DEBUG Server = [nginx]
2022/05/21 21:16:27.524658 s3.DEBUG Date = [Sat, 21 May 2022 21:16:27 GMT]
2022/05/21 21:16:27.524672 s3.DEBUG Content-Type = [application/xml]
2022/05/21 21:16:27.524686 s3.INFO anonymous bucket detected

Probable cause, in the geesefs source:

if len(s.config.Profile) == 0 && os.Getenv("AWS_ACCESS_KEY_ID") == "" {
	s.awsConfig.Credentials = credentials.AnonymousCredentials
	s3Log.Infof("anonymous bucket detected")
}

Current workaround: set --profile 1

--use-set-content-type works only partially and unpredictably

Enabled --use-set-content-type when mounting the bucket (the bucket is from Yandex Cloud). Bucket versioning is enabled. Static website hosting is enabled. GeeseFS version 0.30.5. Then I uploaded files into it using rsync -r --rsync-path="sudo rsync" ~/Downloads/cc-images/ <host>:/mnt/ccimages

One file is served with "application/octet-stream" and browsers can't display it correctly, but another one has its MIME type set correctly to image/svg+xml.

The fstab entry is like this:

/mnt/ccimages fuse.geesefs _netdev,allow_other,--file-mode=0666,--dir-mode=0777,--iam,--uid=33,--gid=33,--use-content-type 0 0

external cache invalidation

Hi, there are several issues in our environment related to geesefs's default 1-minute directory cache timeout.

One is file uploading: a user uploads a file via one replica and then lands on another replica, where they have to check whether the file was successfully uploaded. At the moment we delay uploads by one minute, but that is not an optimal approach.

Another issue is the user's local cache. There is a local cache for the remote filesystem provided by geesefs. We handle an event (with our services) when a new file arrives, but geesefs does not learn about that file in time. The user checks whether the remote filesystem has new items, it reports that it doesn't, and the client caches the directory contents for quite a long time :(

What do you think about providing a mechanism for an external application to invalidate the directory cache, to make new items appear sooner while still providing fast subsequent listings? My ideas are as follows:

  1. I think it is possible to use IPC. I've never implemented anything with this protocol, but processes can surely communicate, and a client could request that geesefs invalidate its internal cache for a folder's contents.
  2. By placing a magic file into a folder, e.g. .invalidate.me or .cache.forget, that would be handled by geesefs.
  3. We have already set up an S3 trigger in Yandex to handle changes in S3 buckets; what if that trigger sent messages to SQS message queues that geesefs would then handle? I imagine the queue URL would be passed to geesefs as a parameter.
