Wrong documentation or wrong lock handling? #571
Additionally, the error message from the logs: `mapproxycontainer.1.e74chc81xvvc@myserver2 | fatal error in wmts for /wmts/1.0.0/myplan/default/zg/15/26/78.png?`
An strace of the file accesses of both containers when the error occurs is attached.
Hi, my understanding is that the documentation is correct. MapProxy does support file locking, which coordinates multiple processes on the same server so that the same source image is not requested more than once at a given time. However, this mechanism doesn't extend to network filesystems, due to their poor support for lock files.
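To illustrate what per-host coordination via lock files looks like, here is a minimal sketch using POSIX `flock`. This is an illustration only: MapProxy's actual locking implementation may differ, and the lock path is copied from the example earlier in this thread.

```python
import fcntl
import os

# Example lock path; MapProxy generates names like
# <tile_lock_dir>/<hash>-<x>-<y>-<z>.lck
lock_path = "/tmp/tile_locks/f84e8ee12c1996390bdd0094eee34f31-0-5-11.lck"
os.makedirs(os.path.dirname(lock_path), exist_ok=True)

with open(lock_path, "w") as lock_file:
    # Blocks until no other process on this host holds the lock.
    fcntl.flock(lock_file, fcntl.LOCK_EX)
    try:
        pass  # fetch the source image and write the tile here
    finally:
        fcntl.flock(lock_file, fcntl.LOCK_UN)
```

The point of the docs' caveat is that `flock`-style advisory locks are reliable on a local filesystem but often not honored, or not honored consistently, across NFS and similar mounts.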
Hi, thanks for your feedback. Gluster is not a network filesystem like NFS or CIFS; it is POSIX compliant and supports locks, so for a Gluster setup with multiple MapProxy servers, support for shared locks would make sense. It's also actually needed for a high-availability setup. I think it should be quite easy to add such a feature.
Hi,
We are currently working on a migration of our NFS-based setup to a new glusterfs-based setup. While testing MapProxy on the new Gluster storage with a FUSE mount, we found problems with locking. After checking the documentation at https://mapproxy.org/docs/latest/deployment.html#load-balancing-and-high-availablity, my understanding is that MapProxy does file locking, and I see files generated like /tilecache/cache_data/tile_locks/f84e8ee12c1996390bdd0094eee34f31-0-5-11.lck; however, the locks seem to be ignored.
The setup is like this:
node1 -> gluster mounted and the gluster dir bind mounted in mapproxy container1
node2 -> gluster mounted and the gluster dir bind mounted in mapproxy container2
Now, from time to time, one of the containers returns a 500 error, and it seems to be lock related. When running a single container, everything works as expected.
Are the locks shared for both instances, or does the second container maybe not care about the other's lock file?
From the docs I found
"Since file locking doesn’t work well on most network filesystems you are likely to get errors when MapProxy writes these files on network filesystems. You should configure MapProxy to write all lock files on a local filesystem to prevent this. See [globals.cache.lock_dir](https://mapproxy.org/docs/latest/configuration.html#lock-dir) and [globals.cache.tile_lock_dir](https://mapproxy.org/docs/latest/configuration.html#tile-lock-dir)."
But the above makes no sense to me: writing locks to a local filesystem would disable shared locking between the nodes, which could lead to write conflicts when both containers render the same tile.
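For reference, the workaround the docs describe would look roughly like this in `mapproxy.yaml` (the directory paths are placeholders; `lock_dir` and `tile_lock_dir` are the documented option names):

```yaml
globals:
  cache:
    # Keep lock files on a local filesystem, as the docs suggest.
    # Note: this only coordinates processes on the same node, not
    # across nodes sharing the tile cache.
    lock_dir: /var/lock/mapproxy/locks
    tile_lock_dir: /var/lock/mapproxy/tile_locks
```

This avoids the network-filesystem locking errors, but, as noted above, at the cost of cross-node coordination.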
Any help would be much appreciated.
regards
M.