
s3fs version 1.9.4 prints the warning log "Not enough local storage to cache write request till multipart upload can start". What will be affected? #2444

Open
loongyiyao opened this issue Apr 13, 2024 · 4 comments


Additional Information

Version of s3fs being used (s3fs --version)

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)

Kernel information (uname -r)

GNU/Linux Distribution, if applicable (cat /etc/os-release)

How to run s3fs, if applicable

/usr/bin/s3fs fangzhenyun /dev/mount1 -o url=https://video.ge.com:9000 -o endpoint=cn-east-1 -o sigv2 -o passwd_file=/home/zxsrtn/.passwd-s3fs -o use_path_request_style -o allow_other -o umask=0 -o use_cache=/dev/cache1 -o del_cache -o ensure_diskfree=12288 -o enable_noobj_cache -o parallel_count=20 -o multipart_size=52 -o dbglevel=warn -o logfile=/home/output.log

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

Details about issue

2024-04-13T01:19:54.389Z [WAN] fdcache_entity.cpp:WriteMixMultipart(2307): Not enough local storage to cache write request till multipart upload can start: [path=/vcd001/280/230/28023006/video/10-001328023006-1-20240412212748782-00001-20240413032810.mpg.rtp/119.dat][physical_fd=1036][offset=9407996][size=127492]
2024-04-13T01:19:54.389Z [WAN] s3fs.cpp:s3fs_write(2958): failed to write file(/vcd001/280/230/28023006/video/10-001328023006-1-20240412212748782-00001-20240413032810.mpg.rtp/119.dat). result=-28
2024-04-13T01:19:54.810Z [WAN] fdcache_entity.cpp:WriteMixMultipart(2307): Not enough local storage to cache write request till multipart upload can start: [path=/vcd001/280/231/28023135/video/10-001328023135-1-20240412212740722-00001-20240413032818.mpg.rtp/119.dat][physical_fd=230][offset=11246738][size=127854]
2024-04-13T01:19:54.810Z [WAN] s3fs.cpp:s3fs_write(2958): failed to write file(/vcd001/280/231/28023135/video/10-001328023135-1-20240412212740722-00001-20240413032818.mpg.rtp/119.dat). result=-28
2024-04-13T01:27:29.073Z [ERR] curl.cpp:RequestPerform(2544): HEAD HTTP response code 400, returning EPERM.
2024-04-13T01:37:07.311Z [WAN] fdcache_entity.cpp:WriteMixMultipart(2307): Not enough local storage to cache write request till multipart upload can start: [path=/vcd001/280/228/28022809/video/10-001328022809-1-20240412212736882-00002-20240413092815.mpg.rtp/3.dat][physical_fd=417][offset=1828178][size=129710]
2024-04-13T01:37:07.311Z [WAN] s3fs.cpp:s3fs_write(2958): failed to write file(/vcd001/280/228/28022809/video/10-001328022809-1-20240412212736882-00002-20240413092815.mpg.rtp/3.dat). result=-28

ggtakec (Member) commented Apr 14, 2024

@loongyiyao
In order to upload, the cache file (and work file) partition must have free space equal to the size of the file you are trying to upload (more precisely, the size of the area being updated).
Since you have specified ensure_diskfree=12288 (12 GB, specified in MB), the space that can be used for uploading is <partition free space> - <12 GB>.
Since s3fs does not automatically delete cached files, you may need to delete cache files using an external process.
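
As a rough illustration, a minimal shell sketch of this arithmetic, assuming the cache directory from the command line above (/dev/cache1), GNU df, and the 12288 MB ensure_diskfree value from this setup:

# Usable space for a pending upload = free space on the cache partition
# minus the reserved ensure_diskfree amount (12288 MB = 12 GB in this setup).
FREE_MB=$(df -BM --output=avail /dev/cache1 | tail -n 1 | tr -dc '0-9')
USABLE_MB=$((FREE_MB - 12288))
echo "roughly ${USABLE_MB} MB available for caching write requests"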

And, there is something a little concerning about your log.
The directory path to create the cache file is specified as use_cache=/dev/cache1, but in the error log it is /vcd001/280/231/....
Do you know anything about these directory paths?

loongyiyao (Author) commented Apr 15, 2024

@loongyiyao In order to upload, the cache file (and work file) partition must have free space equal to the size of the file you are trying to upload (more precisely, the size of the area being updated). Since you have specified ensure_diskfree=12288 (12 GB, specified in MB), the space that can be used for uploading is <partition free space> - <12 GB>. Since s3fs does not automatically delete cached files, you may need to delete cache files using an external process.
@ggtakec Hi, will deleting the cache cause data loss? How should the cache be deleted?

And, there is something a little concerning about your log. The directory path to create the cache file is specified as use_cache=/dev/cache1, but in the error log it is /vcd001/280/231/.... Do you know anything about these directory paths?
@ggtakec Hi, /dev/cache1 is the cache directory; the real directory I mount is /dev/mount, so /dev/cache1 is not printed in the s3fs logs. Another question: I have found a memory leak when using HTTPS with v1.9.4.

ggtakec (Member) commented Apr 15, 2024

@loongyiyao
Please let me check a few things.

/dev/cache1 is the cache directory,

Does this mean the directory is mounted using cachefs or something? (I'm not familiar with this.)
The path specified in s3fs's use_cache option is assumed to be a normal directory path; if it is a path mounted with cachefs, I don't think we have confirmed its behavior.

will deleting the cache cause data loss? how should the cache be deleted

When deleting a file that exists under the cache directory, the file must not be open.
As long as it is not open, deleting it should not cause a problem.
(It may be difficult to verify that a file is not being accessed.)
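
As a rough illustration, a minimal shell sketch of such an external cleanup, assuming s3fs's usual <cache_dir>/<bucket> layout under /dev/cache1 (the bucket name fangzhenyun comes from the command line above) and an arbitrary one-hour age threshold:

# Delete cache files not modified for 60 minutes, but only if no process
# currently has them open (fuser -s exits non-zero when nothing has the file open).
CACHE_DIR=/dev/cache1/fangzhenyun
find "$CACHE_DIR" -type f -mmin +60 | while read -r f; do
    if ! fuser -s "$f" 2>/dev/null; then
        rm -f -- "$f"
    fi
done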

Another question: I have found a memory leak when using HTTPS with v1.9.4.

A memory leak is a serious problem, so please explain it in detail.
If possible, it would be helpful if you could create a separate issue just for the memory leak.
(Is that what you have already reported in #2441?)

loongyiyao (Author) commented Apr 16, 2024

A memory leak is a serious problem, so please explain it in detail. If possible, it would be helpful if you could create a separate issue just for the memory leak. (Is that what you have already reported in #2441?)

@ggtakec Yes! I have found a memory leak. The memory occupied by s3fs grew from tens of megabytes to a few gigabytes within half a month.

1. First memory leak
0 0x7fc798f9b348
1 0x7fc798f9da68 fuse_fs_fgetattr
2 0x7fc798fa0e43 fuse_fs_create
3 0x7fc798fa722d fuse_reply_iov
4 0x7fc798fa8b6b fuse_reply_iov
5 0x7fc798fa5401 fuse_session_loop
6 0x7fc79719ee65 start_thread pthread_create.c
7 0x7fc796ec78ad __clone

2. Second memory leak

0 0x44aaf5 __gnu_cxx::new_allocator<std::_Rb_tree_node<std::pair<std::string const, stat_cache_entry> > >::allocate(unsigned long, void const*) /usr/include/c++/4.8.2/ext/new_allocator.h:104
1 0x448229 std::map<std::string, stat_cache_entry, std::less<std::string>, std::allocator<std::pair<std::string const, stat_cache_entry> > >::operator[](std::string const&) /usr/include/c++/4.8.2/bits/stl_map.h:465
2 0x410824 get_object_attribute(char const*, stat*, std::map<std::string, std::string, header_nocase_cmp, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*, bool) /root/s3fs-fuse-1.94/src/s3fs.cpp:689
3 0x411521 check_object_access(char const*, int, stat*) /root/s3fs-fuse-1.94/src/s3fs.cpp:763
4 0x411c3d s3fs_getattr(char const*, stat*) /root/s3fs-fuse-1.94/src/s3fs.cpp:1021
5 0x7fc798f9d9c8 fuse_fs_fgetattr
6 0x7fc798f9dbfd fuse_fs_fgetattr
7 0x7fc798fa8b6b fuse_reply_iov
8 0x7fc798fa5401 fuse_session_loop
9 0x7fc79719ee65 start_thread pthread_create.c
10 0x7fc796ec78ad __clone

3. Third memory leak
12 0x7fc798d74555 ossl_connect_step1 openssl.c
13 0x7fc798d75143 ossl_connect_common openssl.c
14 0x7fc798d75dc6 Curl_ssl_connect_nonblocking
15 0x7fc798d2d362 https_connecting http.c
16 0x7fc798d2eb53 Curl_http_connect
17 0x7fc798d4b51b multi_runsingle multi.c
18 0x7fc798d4c473 curl_multi_perform
19 0x7fc798d44c1b curl_easy_perform
20 0x4323d5 S3fsCurl::RequestPerform(bool) /root/s3fs-fuse-1.94/src/curl.cpp:2493
21 0x43735a S3fsCurl::HeadRequest(char const*, std::map<std::string, std::string, header_nocase_cmp, std::allocator<std::pair<std::string const, std::string> > >&) /root/s3fs-fuse-1.94/src/curl.cpp:3269
22 0x41078c get_object_attribute(char const*, stat*, std::map<std::string, std::string, header_nocase_cmp, std::allocator<std::pair<std::string const, std::string> > >*, bool, bool*, bool) /root/s3fs-fuse-1.94/src/s3fs.cpp:582
23 0x411521 check_object_access(char const*, int, stat*) /root/s3fs-fuse-1.94/src/s3fs.cpp:763
24 0x411797 check_parent_object_access(char const*, int) /root/s3fs-fuse-1.94/src/s3fs.cpp:874
25 0x419fc5 s3fs_flush(char const*, fuse_file_info*) /root/s3fs-fuse-1.94/src/s3fs.cpp:3013
26 0x7fc798fa1447 fuse_fs_lock
27 0x7fc798fa16d0 fuse_fs_lock
28 0x7fc798fa7d06 fuse_reply_iov
29 0x7fc798fa8b6b fuse_reply_iov
30 0x7fc798fa5401 fuse_session_loop
31 0x7fc79719ee65 start_thread pthread_create.c
