I am using s3fs to mount an S3 bucket ('experiment-input-bucket') onto an EC2 instance via ClearML's AWS Autoscaler, specifically within the Autoscaler's init script. Everything runs fine with the plain mount, but when I add a use_cache argument to the FSTAB_ENTRY, the first run still works as expected while every subsequent run fails with this mount error:
docker: Error response from daemon: invalid mount config for type "bind": stat /mounted-bucket: transport endpoint is not connected.
See 'docker run --help'.
indicating that the s3fs mount has become disconnected on the EC2 instance.
Does anyone know how the s3fs cache could be causing this behavior? I'm almost certain the cache is the culprit, because when I omit use_cache from the FSTAB_ENTRY the experiment runs successfully multiple times in a row. For added context, the cache directory I pass is /dev/sda1, a 500 GB gp3 EBS volume attached to the autoscaled machine.
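A representative sketch of the kind of init-script mount being described (the bucket and mount-point names are taken from this report; the iam_role=auto credential option, the remaining mount options, and the cache path are assumptions, not the literal script):

```shell
#!/bin/bash
# Sketch of an Autoscaler init-script s3fs mount via an fstab entry.
# Option list and cache path are assumptions, not the literal script.
set -euo pipefail

BUCKET=experiment-input-bucket
MOUNT_POINT=/mounted-bucket
CACHE_DIR=/tmp/s3fs-cache    # use_cache expects a directory path

mkdir -p "$MOUNT_POINT" "$CACHE_DIR"

# fstab-style entry; use_cache is the argument under discussion.
FSTAB_ENTRY="s3fs#${BUCKET} ${MOUNT_POINT} fuse _netdev,allow_other,iam_role=auto,use_cache=${CACHE_DIR} 0 0"
echo "$FSTAB_ENTRY" >> /etc/fstab
mount -a
```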
Specify a local directory path for use_cache, such as /tmp/cache, rather than a device node such as /dev/sda1. If you want the cache on the EBS volume, mount the volume at a directory first and point use_cache at that directory.
You can also obtain detailed information by setting the dbglevel option, which makes s3fs emit detailed error logs.
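A sketch of the suggested change, reusing the bucket and mount point from the report; the iam_role=auto option and cache path are assumptions about this setup. It also lazily unmounts a stale mount point first, since "transport endpoint is not connected" indicates the previous FUSE mount died:

```shell
#!/bin/bash
MOUNT_POINT=/mounted-bucket
CACHE_DIR=/tmp/s3fs-cache   # a local directory, not a device node like /dev/sda1

# stat fails on a dead FUSE mount ("Transport endpoint is not connected"),
# so detect that and lazily unmount before remounting.
if ! stat "$MOUNT_POINT" >/dev/null 2>&1; then
    fusermount -u "$MOUNT_POINT" 2>/dev/null || umount -l "$MOUNT_POINT" 2>/dev/null || true
fi

mkdir -p "$MOUNT_POINT" "$CACHE_DIR"

# dbglevel=info raises s3fs's log verbosity; messages go to syslog by default.
s3fs experiment-input-bucket "$MOUNT_POINT" \
    -o iam_role=auto \
    -o use_cache="$CACHE_DIR" \
    -o dbglevel=info
```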
Additional Information
- Version of s3fs being used (s3fs --version): V1.90
- Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, or dpkg -s fuse): no fuse package installed, but s3fs still works
- Kernel information (uname -r): 6.2.0-1017-aws
- GNU/Linux distribution, if applicable (cat /etc/os-release): Ubuntu 22.04.3 LTS
- How to run s3fs, if applicable
- s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)