[ERROR] WorkerImpl Could not close Jedis connection on server restart #20

Open · edwardotis opened this issue Apr 27, 2017 · 5 comments

Using v1.2.1, when I restart my app deployed to Tomcat, the logs emit the following Redis errors for each WorkerImpl:

Apr 27 01:24:56  myenv_i-06c3e45d653a46807 catalina.out:  27-Apr-2017 08:24:56.560 INFO [main] org.apache.catalina.core.StandardServer.await A valid shutdown command was received via the shutdown port. Stopping the Server instance.
Apr 27 01:24:56 myenv_i-06c3e45d653a46807 catalina.out:  27-Apr-2017 08:24:56.561 INFO [main] org.apache.coyote.AbstractProtocol.pause Pausing ProtocolHandler ["http-nio-8080"]
Apr 27 01:24:56 myenv_i-06c3e45d653a46807 catalina.out:  27-Apr-2017 08:24:56.563 INFO [main] org.apache.coyote.AbstractProtocol.pause Pausing ProtocolHandler ["ajp-nio-8009"]
Apr 27 01:24:56 myenv_i-06c3e45d653a46807 catalina.out:  27-Apr-2017 08:24:56.563 INFO [main] org.apache.catalina.core.StandardService.stopInternal Stopping service Catalina

2017-04-27 08:24:56,810 [INFO ] JesqueDelayedJobThreadService Stopping the jesque delayed job thread
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  2017-04-27 08:24:56,945 [INFO ] JesqueDelayedJobThreadService Stopping jesque delayed job thread because thread state changed to Stopped
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  2017-04-27 08:24:56,958 [ERROR] WorkerImpl Could not close Jedis connection
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  redis.clients.jedis.exceptions.JedisException: Could not return the resource to the pool
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at redis.clients.util.Pool.returnBrokenResourceObject(Pool.java:90)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at redis.clients.jedis.JedisPool.returnBrokenResource(JedisPool.java:111)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at redis.clients.jedis.JedisPool.returnResource(JedisPool.java:126)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at redis.clients.jedis.JedisPool.returnResource(JedisPool.java:12)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at redis.clients.jedis.Jedis.close(Jedis.java:3314)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at grails.plugins.jesque.WorkerImpl.returnJedis(WorkerImpl.groovy:733)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at grails.plugins.jesque.WorkerImpl.withJedis(WorkerImpl.groovy:715)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at grails.plugins.jesque.WorkerImpl.run(WorkerImpl.groovy:141)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at java.lang.Thread.run(Thread.java:745)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  Caused by: java.lang.IllegalStateException: Invalidated object not currently part of this pool
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at org.apache.commons.pool2.impl.GenericObjectPool.invalidateObject(GenericObjectPool.java:640)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at redis.clients.util.Pool.returnBrokenResourceObject(Pool.java:88)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	... 8 common frames omitted
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  Exception in thread "Worker-0 Jesque-2.1.1:STOPPING" redis.clients.jedis.exceptions.JedisException: Could not return the resource to the pool
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at redis.clients.util.Pool.returnBrokenResourceObject(Pool.java:90)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at redis.clients.jedis.JedisPool.returnBrokenResource(JedisPool.java:111)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at redis.clients.jedis.JedisPool.returnResource(JedisPool.java:126)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at redis.clients.jedis.JedisPool.returnResource(JedisPool.java:12)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at redis.clients.jedis.Jedis.close(Jedis.java:3314)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at grails.plugins.jesque.WorkerImpl.returnJedis(WorkerImpl.groovy:733)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at grails.plugins.jesque.WorkerImpl.withJedis(WorkerImpl.groovy:715)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at grails.plugins.jesque.WorkerImpl.run(WorkerImpl.groovy:141)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at java.lang.Thread.run(Thread.java:745)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  Caused by: java.lang.IllegalStateException: Invalidated object not currently part of this pool
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at org.apache.commons.pool2.impl.GenericObjectPool.invalidateObject(GenericObjectPool.java:640)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	at redis.clients.util.Pool.returnBrokenResourceObject(Pool.java:88)
Apr 27 01:24:57 myenv_i-06c3e45d653a46807 catalina.out:  	... 8 more

And another error appears for each worker:


Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:  27-Apr-2017 08:25:02.564 WARNING [localhost-startStop-2] org.apache.catalina.loader.WebappClassLoaderBase.clearReferencesThreads The web application [ROOT] appears to have started a thread named [Thread-28] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   java.net.SocketInputStream.socketRead0(Native Method)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   java.net.SocketInputStream.read(SocketInputStream.java:171)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   java.net.SocketInputStream.read(SocketInputStream.java:141)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   java.net.SocketInputStream.read(SocketInputStream.java:127)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   redis.clients.util.RedisInputStream.ensureFill(RedisInputStream.java:195)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   redis.clients.util.RedisInputStream.readByte(RedisInputStream.java:40)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   redis.clients.jedis.Protocol.process(Protocol.java:141)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   redis.clients.jedis.Protocol.read(Protocol.java:205)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:297)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   redis.clients.jedis.Connection.getRawObjectMultiBulkReply(Connection.java:242)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   redis.clients.jedis.JedisPubSub.process(JedisPubSub.java:108)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   redis.clients.jedis.JedisPubSub.proceed(JedisPubSub.java:102)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   redis.clients.jedis.Jedis.subscribe(Jedis.java:2591)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   grails.plugins.jesque.AdminImpl.run(AdminImpl.java:166)
Apr 27 01:25:02 myenv_i-06c3e45d653a46807 catalina.out:   java.lang.Thread.run(Thread.java:745)

Using:
Grails Jesque v1.2.1
Grails 3.1.16
Tomcat 8.0.41
Linux

@peh (Contributor) commented Apr 27, 2017

That "issue" has been around forever now. The reason is that the redis connections are already closed before the workers are actually stopped. i will check whether we can hook into the shutdown process earlier to stop workers but this will keep happening if there is a long running job running, while the app is being stopped. There is simply no way of gracefully killing that thread while it is still working on something.

@edwardotis (Author)

Thanks, @peh.
FYI, my queue was empty, so the workers were not actively performing tasks.

@peh (Contributor) commented Apr 27, 2017

Yeah, that happens from time to time when the timing is bad, but it is harmless and can also happen to other connections that are not handled in the "main thread".

@edwardotis (Author)

Gotcha, OK, so it sounds like a low-priority issue.

@bp-FLN (Contributor) commented Jan 18, 2019

This has been fixed by #28.
