
Improve timeout calculations with more accurate start time and by including all tasks #1035

Open
kkmuffme opened this issue Feb 24, 2024 · 1 comment
Labels: priority: normal, type: enhancement

Comments


kkmuffme commented Feb 24, 2024

I'd like to suggest a few small improvements and hope to gather some feedback on them:

1. `$this->created_time = microtime( true );` in QueueRunner should be changed to:

```php
$this->created_time = isset( $_SERVER['REQUEST_TIME_FLOAT'] ) && is_float( $_SERVER['REQUEST_TIME_FLOAT'] )
	? $_SERVER['REQUEST_TIME_FLOAT']
	: microtime( true );
```

Since Action Scheduler might only be loaded after a significant amount of time has already passed (especially in wp-admin with the async request; there are plenty of incredibly slow sites out there), using the load time as the start time distorts the calculations.

2. `time_likely_to_be_exceeded` only takes $processed_actions into account. However, this can be off significantly if there is a long gap between the time the request started (made even longer by the change in 1) and the time `run` was called, either because some unrelated code beforehand was slow (see 1) or because there were lots of cleanup tasks that took a while.

I suggest creating an additional property that holds `microtime( true )` taken immediately before the do/while loop in `run` starts, and using only that to calculate the average action time.
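A minimal sketch of what that could look like (the class, property, and helper names below are hypothetical, not existing Action Scheduler API):

```php
<?php
// Hypothetical sketch: track when the processing loop itself starts,
// separately from $created_time, and average only over the loop.
abstract class Sketch_QueueRunner {

	/** @var float Request/runner creation time (see point 1). */
	protected $created_time;

	/** @var float Taken immediately before the do/while loop in run(). */
	protected $loop_start_time;

	public function run() {
		// ... bootstrap and cleanup tasks may run here and take a while ...
		$this->loop_start_time = microtime( true );

		$processed_actions = 0;
		do {
			// ... claim a batch, process its actions, count them ...
		} while ( false ); // real condition: batch available && limits not exceeded
	}

	protected function get_time_per_action( $processed_actions ) {
		// Slow work before the loop no longer inflates the average.
		$loop_elapsed = microtime( true ) - $this->loop_start_time;
		return $processed_actions > 0 ? $loop_elapsed / $processed_actions : 0.0;
	}
}
```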

3. `$estimated_time = $execution_time + ( $time_per_action * 3 );` should be changed to:

```php
$estimated_time = $execution_time + ( $time_per_action * apply_filters( 'action_scheduler_queue_runner_batch_size', 25 ) );
```

This will massively reduce DB load and speed up execution because:

  • At the moment it claims a full batch (= DB writes = bad, because we need to go to the DB master) even when it already knows it won't be able to complete a full batch, only at least 3 actions (instead of 25, or whatever the batch size is filtered to). The bigger the batch size, the worse the problem.
  • Once the batch is claimed, we have to write again to unclaim it, so the claim created 2 unnecessary writes even though we already knew there wouldn't be enough time.
  • The tasks stay blocked and are not immediately freed when the request times out, which delays their execution.

Perhaps even multiply it by 1.5 to be on the safe side (that factor should be filterable via apply_filters too).

EDIT: for 3), where batch_limits_exceeded is called in do_batch, this needs to be handled separately too: there it should use neither the batch size nor the hardcoded 3, but only 1, because we only need to break when the next iteration of the loop would put us over the limit. See the sketch below.
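Putting 3) and the EDIT together, a rough sketch of what `time_likely_to_be_exceeded` could look like (the `$lookahead` parameter and the 1.5 safety factor are additions proposed here, not existing API; only the `action_scheduler_queue_runner_batch_size` filter exists today):

```php
// Sketch of the proposed change; the $lookahead parameter is hypothetical.
protected function time_likely_to_be_exceeded( $processed_actions, $lookahead = null ) {
	$execution_time     = $this->get_execution_time();
	$max_execution_time = $this->get_maximum_execution_time();
	$time_per_action    = $processed_actions > 0 ? $execution_time / $processed_actions : 0;

	if ( null === $lookahead ) {
		// Before claiming a new batch: require headroom for the whole
		// batch, padded by 1.5 to be on the safe side.
		$lookahead = apply_filters( 'action_scheduler_queue_runner_batch_size', 25 ) * 1.5;
	}

	$estimated_time = $execution_time + ( $time_per_action * $lookahead );

	return $estimated_time > $max_execution_time;
}

// Inside do_batch(), between two actions, only the next action matters:
// if ( $this->time_likely_to_be_exceeded( $processed_actions, 1 ) ) {
//     break; // stop before the next action would put us over the limit
// }
```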

@coreymckrill added the type: enhancement label Feb 28, 2024
@lsinger added the priority: normal label Mar 1, 2024
lsinger (Contributor) commented Mar 1, 2024

Thanks @kkmuffme, those sound like plausible improvements. If you want to submit a PR we'd be happy to take a look.

We'd be especially interested in some sort of performance test that shows under which circumstances performance would improve. If you have any ideas for something like that, we'd also be happy to discuss them.
