
job lock #591

Open · daiagou opened this issue Jun 1, 2023 · 0 comments

daiagou commented Jun 1, 2023

There are two issues here that can leave a job locked:

  1. The lock is acquired before the service runs and is only released when the service completes. If execution exits abnormally partway through (such as a server restart), the lock stays held until it expires, and the default timeout is one day.
    Our current workaround for this is to lower the `EXPIRE_LOCK_TIME` value in the job table so that a job cannot stay locked for too long (see the first sketch below).
  2. Hazelcast can also leave a job locked. The default job startup delay is 120s, but when Hazelcast is integrated, a job can be triggered through the distributed service, bypassing that delay. The instance may not be fully initialized by the time the job executes, which raises an exception, and that exception path never releases the lock (see the second sketch below).
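
For the first failure mode, here is a minimal sketch of expiry-aware lock acquisition over a JDBC lock table. The `job_lock` table, its column names, and the one-hour lease are assumptions for illustration only; the point is the pattern, a single compare-and-set UPDATE that also reclaims expired locks, which is what shortening `EXPIRE_LOCK_TIME` achieves:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;
import java.time.Duration;
import java.time.Instant;

public class JobLockDao {

    // Shortened lease (hypothetical value): a stale lock left by a crashed
    // node can be taken over after one hour instead of the default one day.
    private static final Duration EXPIRE_LOCK_TIME = Duration.ofHours(1);

    /**
     * Tries to acquire the lock for one job. The single UPDATE succeeds only
     * when the row is unlocked or the previous lock has already expired, so
     * a lock orphaned by an abnormal exit is reclaimed automatically.
     */
    public boolean tryAcquire(Connection conn, String jobName, String owner)
            throws SQLException {
        String sql = "UPDATE job_lock SET locked_by = ?, locked_at = ? "
                   + "WHERE job_name = ? AND (locked_by IS NULL OR locked_at < ?)";
        Instant now = Instant.now();
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, owner);
            ps.setTimestamp(2, Timestamp.from(now));
            ps.setString(3, jobName);
            ps.setTimestamp(4, Timestamp.from(now.minus(EXPIRE_LOCK_TIME)));
            return ps.executeUpdate() == 1; // exactly one row changed => lock won
        }
    }
}
```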

The current temporary solution is to set the `local_only` attribute of each job to true.
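
For the second failure mode, independent of the `local_only` workaround, here is a sketch of what a fix could look like: refuse distributed triggers until the instance is initialized, and release the lock in a `finally` block so an exception can no longer leave the job locked. The `ready` flag and the `JobLock` interface are hypothetical stand-ins for the project's own initialization state and lock release:

```java
public class SafeJobRunner {

    // Hypothetical lock abstraction, just enough for the sketch to compile.
    public interface JobLock {
        boolean tryAcquire(String jobName);
        void release(String jobName);
    }

    private volatile boolean ready = false; // flipped once startup completes

    public void markReady() {
        ready = true;
    }

    public void run(String jobName, Runnable job, JobLock lock) {
        // Guard: a distributed (Hazelcast) trigger can arrive before the
        // 120s local startup delay has elapsed; refuse to run until the
        // instance is fully initialized.
        if (!ready) {
            return;
        }
        if (!lock.tryAcquire(jobName)) {
            return; // another node holds a live lock
        }
        try {
            job.run();
        } finally {
            // Released on success *and* on exception, so a failed run can
            // no longer leave the job permanently locked.
            lock.release(jobName);
        }
    }
}
```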
