Describe the bug
Tasks get stuck in “RUNNING” status, and this causes other currently running executions to queue up behind them. Once the stuck execution is manually terminated, we see a spike in the number of completed tasks and the service goes back to normal. In the logs we observe a high volume of errors (1-3k every 1-2 minutes).
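For reference, the manual termination mentioned above can be done through the Conductor Java client. The snippet below is only a minimal sketch assuming the standard conductor-client `WorkflowClient`; the server URL and workflow ID are placeholders, not values from our environment:

```java
import com.netflix.conductor.client.http.WorkflowClient;

public class TerminateStuckWorkflow {
    public static void main(String[] args) {
        // Placeholder server URL; point this at the actual Conductor API root.
        WorkflowClient client = new WorkflowClient();
        client.setRootURI("http://conductor-server:8080/api/");

        // Placeholder ID of the workflow whose task is stuck in RUNNING.
        String workflowId = "<stuck-workflow-id>";

        // Terminating the workflow is what lets the queued executions drain
        // and the completed-task count spike back to normal.
        client.terminateWorkflow(workflowId, "Task stuck in RUNNING, terminated manually");
    }
}
```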
Details
Conductor version: 3.14.0
Persistence implementation: Postgres
Queue implementation: Dynoqueues
Lock: Redis
It occurred in several types of workflows and tasks.
To Reproduce
We were unable to replicate the bug so far.
Expected behavior
Task to finish and continue with the next task in the workflow.
Screenshots
This is an HTTP task that shouldn’t have lasted more than a couple of minutes but ran for 26 minutes before we manually terminated it:
Additional context
When this issue happens, we’ve observed an influx of 1-3k+ error messages every 1-2 minutes. Below is a sample of these errors:
Jan 22 18:26:47.960  i-0abadd3b65ed5a1a2  rp-meli-orkes

```
Exception in thread "sweeper-thread-17" java.lang.StackOverflowError
    at com.fasterxml.jackson.databind.JsonMappingException.prependPath(JsonMappingException.java:455)
    at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:790)
    at com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:178)
    at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider._serialize(DefaultSerializerProvider.java:479)
    at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:318)
    at com.fasterxml.jackson.databind.ObjectMapper._writeValueAndClose(ObjectMapper.java:4719)
    at com.fasterxml.jackson.databind.ObjectMapper.writeValueAsString(ObjectMapper.java:3964)
    at com.netflix.conductor.redis.dao.BaseDynoDAO.toJson(BaseDynoDAO.java:70)
    at com.netflix.conductor.redis.dao.RedisExecutionDAO.updateTask(RedisExecutionDAO.java:254)
    at io.orkes.conductor.dao.archive.ArchivedExecutionDAO.lambda$updateTask$1(ArchivedExecutionDAO.java:88)
    at io.micrometer.core.instrument.composite.CompositeTimer.record(CompositeTimer.java:79)
    at io.orkes.conductor.dao.archive.ArchivedExecutionDAO.updateTask(ArchivedExecutionDAO.java:86)
    at com.netflix.conductor.core.dal.ExecutionDAOFacade.updateTask(ExecutionDAOFacade.java:505)
    at java.base/java.lang.Iterable.forEach(Iterable.java:75)
    at com.netflix.conductor.core.dal.ExecutionDAOFacade.updateTasks(ExecutionDAOFacade.java:530)
    at com.netflix.conductor.core.execution.WorkflowExecutor.decide(WorkflowExecutor.java:1078)
    at com.netflix.conductor.core.execution.WorkflowExecutor.decide(WorkflowExecutor.java:1082)
    at com.netflix.conductor.core.execution.WorkflowExecutor.decide(WorkflowExecutor.java:1082)
    at com.netflix.conductor.core.execution.WorkflowExecutor.decide(WorkflowExecutor.java:1082)
```
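The trace suggests the overflow happens while Jackson serializes a task payload inside BaseDynoDAO.toJson on the sweeper thread. The snippet below is a minimal sketch, not taken from our workflows: the Parent/Child classes are purely illustrative and only show how a self-referential object graph makes ObjectMapper.writeValueAsString recurse until the stack is exhausted, matching the BeanSerializer and prependPath frames above.

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class RecursiveSerializationSketch {

    // Illustrative classes only; they stand in for whatever object graph
    // ends up in a task's input/output data.
    static class Parent { public Child child; }
    static class Child  { public Parent parent; }

    public static void main(String[] args) throws Exception {
        Parent parent = new Parent();
        Child child = new Child();
        parent.child = child;
        child.parent = parent;  // indirect cycle: Parent -> Child -> Parent -> ...

        // Jackson follows the cycle and recurses once per hop. Depending on how
        // much stack is left, this surfaces either as
        // "JsonMappingException: Infinite recursion (StackOverflowError)" or,
        // as in the logs above, as a raw StackOverflowError on the calling thread.
        new ObjectMapper().writeValueAsString(parent);
    }
}
```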