
Logging middleware doesn't give support for enqueueing logs #473

Open
eprothro opened this issue Sep 22, 2022 · 5 comments


@eprothro

Enqueueing a retry is logged at info level.

# in lib/exq/redis/job_queue.ex
def retry_job(redis, namespace, job, retry_count, error) do
...
Logger.info("Queueing job #{job.jid} to retry in #{offset} seconds")
...
end

For one application we'd prefer they be at debug level and with a different format that includes more information. Would adding an after_enqueued hook be an appropriate addition to the middleware behavior? Or is a lighter weight approach preferred?

@ananthakumaran
Collaborator

For one application we'd prefer they be at debug level and with a different format that includes more information

Have you tried compile-time purging? It's hard to make everyone agree on what the correct log level should be. From my perspective, the above info is quite useful for debugging, so I don't think we can disable it by default.
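For reference, compile-time purging is configured via the Logger application. A minimal sketch, assuming the application wants to drop all of Exq's debug/info log calls (note the purge happens at compile time, so the exq dependency must be recompiled for this to take effect):

```elixir
# config/config.exs
import Config

config :logger,
  compile_time_purge_matching: [
    # Remove Logger calls below :warn made from the :exq application
    [application: :exq, level_lower_than: :warn]
  ]
```

The trade-off is that purging is all-or-nothing per match pattern; it can silence the retry log line, but it cannot re-emit it at debug level or with a different format.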

with a different format that includes more information. Would adding an after_enqueued hook be an appropriate addition to the middleware behavior? Or is a lighter weight approach preferred?

I would prefer a telemetry event for these cases, if necessary. The after_failed hook actually exposes most of the info. Let me know what info is missing there.

@eprothro
Author

Thank you for the rapid response. I agree in general; my point was that whether that is debug or info level is application dependent.

I agree the telemetry approach is better than another middleware stage, but I don't see how to use the current telemetry or middleware for this, since theoretically the retry could be enqueued but after_failed never called. Is this too pedantic?

@ananthakumaran
Collaborator

ananthakumaran commented Sep 22, 2022

but don't see how to use the current telemetry

I meant that a new telemetry event could be added if the after_failed hook is not suitable for your case. Let me know what information you want to log.
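For illustration, a handler for such an event might look like the following sketch. The `[:exq, :job, :retry]` event name and the `offset` measurement are assumptions for this example; Exq does not emit this event today:

```elixir
require Logger

:telemetry.attach(
  "myapp-exq-retry-logger",
  # Hypothetical event name -- not currently emitted by Exq
  [:exq, :job, :retry],
  fn _event, measurements, metadata, _config ->
    # The application chooses its own level and format here
    Logger.debug(
      "Queueing job #{metadata.job.jid} to retry in #{measurements.offset} seconds"
    )
  end,
  nil
)
```

This would let each application decide the log level and format, which is the crux of the disagreement above.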

since theoretically the retry could be enqueued but then after_failed not be called. Is this too pedantic?

after_failed will be called after any job failure. Exq.Redis.JobQueue.dead?/1 can be used to figure out whether the job is dead (meaning no further retries) or not. Though some info, like when the retry is scheduled (the offset in seconds), might not be available at the middleware level.
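A middleware along those lines could look like this sketch. The callback names follow Exq.Middleware.Behaviour, and the `pipeline.assigns.job` access is an assumption about the pipeline shape that may need adjusting:

```elixir
defmodule MyApp.RetryLogMiddleware do
  @behaviour Exq.Middleware.Behaviour
  require Logger

  def before_work(pipeline), do: pipeline
  def after_processed_work(pipeline), do: pipeline

  def after_failed_work(pipeline) do
    job = pipeline.assigns.job

    if Exq.Redis.JobQueue.dead?(job) do
      Logger.warning("Max retries on job #{job.jid} exceeded")
    else
      # The exact retry offset is not available here (see above)
      Logger.debug("Job #{job.jid} failed; a retry will be scheduled")
    end

    pipeline
  end
end
```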

If you mean that it could fail after logging, I think it's hard to handle those kinds of cases :) since the log is going to stdout and the enqueue is going to Redis, and I don't think we can make them atomic :)

my point was that whether that is debug or info level is application dependent.

Agreed; I don't see any good solution other than configuring the log level for each case, which is quite cumbersome.

@eprothro
Author

eprothro commented Oct 3, 2022

Sorry for the delay. My preference would be a new telemetry event.

  1. We'd like to log the delay for the retry. Due to fuzzing in our backoff, we can't know this from after_failed.
  2. It still makes me itch to log "enqueued for retry" in after_failed, as we'd be assuming that this happened, will happen, or is happening based on our expectation of the implementation somewhere else.

A telemetry event feels much more correct for this case, and likewise for Logger.info("Max retries on job #{job.jid} exceeded"). Job Retry and Job Death both feel like first-class "events" of Exq.

@aaronrenner

It might be nice if there were metadata available in the [:exq, :job, :stop] event indicating whether the job is being retried.
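For example, if Exq added a retry flag to that event's metadata, an application could branch on it in a handler. The `:retried?` key and the `metadata.job` field are hypothetical here, illustrating the suggestion rather than Exq's current payload:

```elixir
require Logger

:telemetry.attach(
  "myapp-exq-stop-logger",
  [:exq, :job, :stop],
  fn _event, _measurements, metadata, _config ->
    # :retried? is the hypothetical metadata key suggested above
    if metadata[:retried?] do
      Logger.debug("Job #{metadata.job.jid} failed and will be retried")
    end
  end,
  nil
)
```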
