Wrong documentation or wrong lock handling? #571

Open
thetuxinator opened this issue Mar 15, 2022 · 4 comments

Comments

@thetuxinator

Hi,

We are currently migrating our NFS-based setup to a new GlusterFS-based setup. While testing MapProxy on the new gluster storage with a FUSE mount, we ran into locking problems. After checking the documentation at https://mapproxy.org/docs/latest/deployment.html#load-balancing-and-high-availablity, my understanding is that MapProxy does file locking, and I do see files being generated like /tilecache/cache_data/tile_locks/f84e8ee12c1996390bdd0094eee34f31-0-5-11.lck, but the locks seem to be ignored.

The setup is like this:

node1 -> gluster volume mounted, and the gluster dir bind-mounted into mapproxy container1
node2 -> gluster volume mounted, and the gluster dir bind-mounted into mapproxy container2

Now, from time to time one of the containers returns a 500 error, and it seems to be lock related. When running a single container, everything works as expected.

Are the locks shared between both instances, or does the second container simply ignore the other's lock file?

From the docs I found "Since file locking doesn’t work well on most network filesystems you are likely to get errors when MapProxy writes these files on network filesystems. You should configure MapProxy to write all lock files on a local filesystem to prevent this. See [globals.cache.lock_dir](https://mapproxy.org/docs/latest/configuration.html#lock-dir) and [globals.cache.tile_lock_dir](https://mapproxy.org/docs/latest/configuration.html#tile-lock-dir)."

But that advice makes no sense to me: it would disable shared locking between the nodes, which could lead to write conflicts when both containers render the same tile.
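For reference, the configuration those docs recommend would look roughly like this in the mapproxy.yaml (the paths are just examples, not our actual config):

```yaml
globals:
  cache:
    # node-local directories, deliberately NOT on the gluster mount
    lock_dir: /var/run/mapproxy/locks
    tile_lock_dir: /var/run/mapproxy/tile_locks
```

With that, each container would only lock against itself, which is exactly what worries me for concurrent writes.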

Any help would be much appreciated.

regards

M.

@thetuxinator
Author

Additionally, the error message from the logs:

mapproxycontainer.1.e74chc81xvvc@myserver2 | fatal error in wmts for /wmts/1.0.0/myplan/default/zg/15/26/78.png?
mapproxycontainer.1.e74chc81xvvc@myserver2 | Traceback (most recent call last):
mapproxycontainer.1.e74chc81xvvc@myserver2 | File "/usr/local/lib/python3.9/dist-packages/mapproxy/wsgiapp.py", line 141, in call
mapproxycontainer.1.e74chc81xvvc@myserver2 | resp = self.handlers[handler_name].handle(req)
mapproxycontainer.1.e74chc81xvvc@myserver2 | File "/usr/local/lib/python3.9/dist-packages/mapproxy/service/base.py", line 30, in handle
mapproxycontainer.1.e74chc81xvvc@myserver2 | return handler(parsed_req)
mapproxycontainer.1.e74chc81xvvc@myserver2 | File "/usr/local/lib/python3.9/dist-packages/mapproxy/service/wmts.py", line 102, in tile
mapproxycontainer.1.e74chc81xvvc@myserver2 | tile = tile_layer.render(request, coverage=limited_to, decorate_img=decorate_img)
mapproxycontainer.1.e74chc81xvvc@myserver2 | File "/usr/local/lib/python3.9/dist-packages/mapproxy/service/tile.py", line 312, in render
mapproxycontainer.1.e74chc81xvvc@myserver2 | tile = self.tile_manager.load_tile_coord(tile_coord,
mapproxycontainer.1.e74chc81xvvc@myserver2 | File "/usr/local/lib/python3.9/dist-packages/mapproxy/cache/tile.py", line 118, in load_tile_coord
mapproxycontainer.1.e74chc81xvvc@myserver2 | return self.load_tile_coords(
mapproxycontainer.1.e74chc81xvvc@myserver2 | File "/usr/local/lib/python3.9/dist-packages/mapproxy/cache/tile.py", line 142, in load_tile_coords
mapproxycontainer.1.e74chc81xvvc@myserver2 | tiles = self._load_tile_coords(
mapproxycontainer.1.e74chc81xvvc@myserver2 | File "/usr/local/lib/python3.9/dist-packages/mapproxy/cache/tile.py", line 174, in _load_tile_coords
mapproxycontainer.1.e74chc81xvvc@myserver2 | created_tiles = creator.create_tiles(uncached_tiles)
mapproxycontainer.1.e74chc81xvvc@myserver2 | File "/usr/local/lib/python3.9/dist-packages/mapproxy/cache/tile.py", line 353, in create_tiles
mapproxycontainer.1.e74chc81xvvc@myserver2 | created_tiles = self._create_meta_tiles(meta_tiles)
mapproxycontainer.1.e74chc81xvvc@myserver2 | File "/usr/local/lib/python3.9/dist-packages/mapproxy/cache/tile.py", line 452, in _create_meta_tiles
mapproxycontainer.1.e74chc81xvvc@myserver2 | created_tiles.extend(self._create_meta_tile(meta_tile))
mapproxycontainer.1.e74chc81xvvc@myserver2 | File "/usr/local/lib/python3.9/dist-packages/mapproxy/cache/tile.py", line 464, in _create_meta_tile
mapproxycontainer.1.e74chc81xvvc@myserver2 | with self.tile_mgr.lock(main_tile):
mapproxycontainer.1.e74chc81xvvc@myserver2 | File "/usr/local/lib/python3.9/dist-packages/mapproxy/util/lock.py", line 46, in enter
mapproxycontainer.1.e74chc81xvvc@myserver2 | self.lock()
mapproxycontainer.1.e74chc81xvvc@myserver2 | File "/usr/local/lib/python3.9/dist-packages/mapproxy/util/lock.py", line 69, in lock
mapproxycontainer.1.e74chc81xvvc@myserver2 | self._lock = self._try_lock()
mapproxycontainer.1.e74chc81xvvc@myserver2 | File "/usr/local/lib/python3.9/dist-packages/mapproxy/util/lock.py", line 60, in _try_lock
mapproxycontainer.1.e74chc81xvvc@myserver2 | return LockFile(self.lock_file)
mapproxycontainer.1.e74chc81xvvc@myserver2 | File "/usr/local/lib/python3.9/dist-packages/mapproxy/util/ext/lockfile.py", line 118, in init
mapproxycontainer.1.e74chc81xvvc@myserver2 | fp = open(path, 'w+')
mapproxycontainer.1.e74chc81xvvc@myserver2 | OSError: [Errno 116] Stale file handle: '/tilecache/./cache_data/tile_locks/f84e8ee12c1996390bdd0094eee34f31-75-25-15.lck'

@thetuxinator
Author

thetuxinator commented Mar 15, 2022

Strace output from both containers when the error occurs is attached:
mapproxycontainer1.log
mapproxycontainer2.log

@walkermatt
Contributor

Hi, my understanding is that the documentation is correct.

MapProxy does support file locking, which coordinates multiple processes on the same server so that the same source image isn't requested more than once at a time. This mechanism doesn't, however, extend to network filesystems, because lock files are poorly supported there.
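At its core the lock is just a file that gets opened and then locked with fcntl; roughly like this sketch (simplified, the real implementation lives in mapproxy/util/ext/lockfile.py):

```python
import fcntl

class LockFile:
    # Simplified sketch of MapProxy-style non-blocking exclusive file
    # locking; not the actual code from mapproxy/util/ext/lockfile.py.
    def __init__(self, path):
        # On a FUSE/network mount even this open() can fail with
        # ESTALE ("Stale file handle") when another node removed the
        # lock file after this node cached its directory entry.
        self._fp = open(path, 'w+')
        try:
            fcntl.flock(self._fp.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            self._fp.close()
            raise

    def close(self):
        fcntl.flock(self._fp.fileno(), fcntl.LOCK_UN)
        self._fp.close()
```

The create/lock/unlink cycle combined with client-side caching is what tends to break on network filesystems, even ones that otherwise honor POSIX locks; the ESTALE in your traceback is a typical symptom.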

@thetuxinator
Author


Hi, thanks for your feedback.

Gluster is POSIX compliant and not a network filesystem in the sense of NFS or CIFS, so it does support locks. For a gluster setup with multiple MapProxy servers, shared lock support would therefore make sense; it's actually needed for a high-availability setup. I think it should be quite easy to add such a feature.
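A quick way to verify that locks really are shared across the nodes is a small test directly on the gluster mount (hypothetical script, the path is just an example):

```python
# Run on node1 first, then on node2 while node1 still holds the lock.
# If gluster propagates POSIX advisory locks between the nodes, the
# second run exits immediately with the "held elsewhere" message.
import fcntl
import sys
import time

path = '/tilecache/locktest.lck'  # example path on the shared mount

with open(path, 'w') as fp:
    try:
        fcntl.lockf(fp, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:  # EAGAIN or EACCES, depending on the platform
        sys.exit('lock is held elsewhere - locking is shared')
    print('lock acquired, holding for 30 seconds...')
    time.sleep(30)
```

In our setup the lock files do get created on the gluster mount, which is why I suspect the problem is in how the lock files are handled rather than in gluster's lock support itself.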
