
[WIP] core: Add generic task queue #1728

Open · ddeboer wants to merge 1 commit into base branch 0.x

Conversation

@ddeboer (Member) commented Jun 18, 2017

Description

  • Client modules can add items to the queue.
  • This queue can replace z_pivot_rsc for both its task queue and its pivot queue.
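A hypothetical usage sketch of the first point (the exact z_queue API is not shown in this excerpt, so the function name, arity, and argument shape below are assumptions for illustration only):

```erlang
%% Hypothetical: a client module adds a callback task to the generic queue.
%% z_queue:enqueue/3 and its arguments are guesses, not the PR's actual API.
-module(mod_example).
-export([schedule_reindex/2]).

schedule_reindex(RscId, Context) ->
    %% Ask the queue to run mod_example:reindex(RscId, Context) later.
    z_queue:enqueue({mod_example, reindex}, [RscId], Context).
```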

Checklist

  • documentation updated
  • tests added
  • no BC breaks

To do

  • Make z_queue:dequeue go through a worker pool (sidejob) to speed up processing, but do so safely through capacity limiting. Create a pool per module/function queue callback to separate different types of load and backpressure, e.g. (1) CPU-intensive (resource pivoting, database calls), (2) memory-intensive, or (3) dependent on external systems (such as an API or document store).
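The worker-pool idea above can be sketched with sidejob. This is a sketch under assumptions: the pool names, limits, and the run_task/2 helper are hypothetical; only sidejob:new_resource/3 and sidejob_supervisor:spawn/2 are actual sidejob calls:

```erlang
%% Sketch: capacity-limited task execution via Basho's sidejob library.
%% One pool per kind of load keeps, e.g., CPU-heavy pivoting from
%% starving tasks that merely wait on an external API.
start_pools() ->
    %% At most 10 concurrent pivot jobs; up to 50 concurrent API calls.
    _ = sidejob:new_resource(z_queue_pivot_pool, sidejob_supervisor, 10),
    _ = sidejob:new_resource(z_queue_api_pool, sidejob_supervisor, 50),
    ok.

dispatch(Pool, Task, Context) ->
    case sidejob_supervisor:spawn(Pool, fun() -> run_task(Task, Context) end) of
        {ok, _Pid} ->
            ok;
        {error, overload} ->
            %% Pool is at capacity: leave the task queued and retry later.
            {error, overload}
    end.

%% Hypothetical task runner: apply the queued {Module, Function} callback.
run_task({M, F}, Context) ->
    erlang:apply(M, F, [Context]).
```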

@mention-bot

@ddeboer, thanks for your PR! By analyzing the history of the files in this pull request, we identified @mworrell, @arjan and @ArthurClemens to be potential reviewers.

@ddeboer ddeboer requested a review from mworrell June 18, 2017 19:33
ddeboer added a commit to driebit/mod_elasticsearch that referenced this pull request Jun 19, 2017
ok = z_db:create_table(
    ?TABLE,
    [
        #column_def{name = id, type = "serial", is_nullable = false, primary_key = true},
@ddeboer (Member, Author) commented:
Make that a bigserial

@mworrell (Member) commented on the same lines:
Use 'bigserial'. Every new task increments the id, and a system will easily gather millions of tasks a month.
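The suggested change, applied to the column definition from the diff above:

```erlang
%% 64-bit auto-incrementing primary key instead of the 32-bit "serial".
#column_def{name = id, type = "bigserial", is_nullable = false, primary_key = true}
```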


due(Seconds) when is_integer(Seconds) ->
    calendar:gregorian_seconds_to_datetime(
        calendar:datetime_to_gregorian_seconds(calendar:universal_time() + Seconds)
@mworrell (Member) commented:

I guess this should be (parentheses):

calendar:datetime_to_gregorian_seconds(calendar:universal_time()) + Seconds
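With that fix applied, the whole clause would read (closing parentheses added here to complete the excerpt):

```erlang
%% Return the UTC datetime Seconds seconds from now.
due(Seconds) when is_integer(Seconds) ->
    calendar:gregorian_seconds_to_datetime(
        calendar:datetime_to_gregorian_seconds(calendar:universal_time()) + Seconds
    ).
```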

undefined ->
    undefined;
Task ->
    batch(task(Task), Context)
@mworrell (Member) commented:
I think we could do this with a single query, where we join the task-queue with itself?
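One way to fold the lookup and the claim into a single statement is a correlated subquery with UPDATE ... RETURNING rather than a literal self-join. A sketch under assumptions: the table and column names (task_queue, due, started) are guessed from the surrounding diff, and z_db:assoc_row/3 is assumed as the query helper:

```erlang
%% Sketch: atomically claim the next due task in one database round trip.
%% Table/column names are hypothetical, not taken from the PR.
claim_next(Context) ->
    z_db:assoc_row("
        update task_queue
        set started = now()
        where id = (
            select id
            from task_queue
            where due <= now()
              and started is null
            order by due asc
            limit 1
        )
        returning *
    ", [], Context).
```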


task(Props) ->
    Callback = list_to_tuple(
        [list_to_atom(binary_to_list(L)) || L <- binary:split(proplists:get_value(callback, Props), <<":">>)]
@mworrell (Member) commented:
Nowadays we also have binary_to_atom(B, utf8)
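Using that, the callback parsing above could shrink to something like this (the helper name is hypothetical):

```erlang
%% Parse <<"mod_foo:handle_task">> into {mod_foo, handle_task} without
%% the binary -> list -> atom round trip.
parse_callback(CallbackBin) when is_binary(CallbackBin) ->
    list_to_tuple(
        [binary_to_atom(Part, utf8) || Part <- binary:split(CallbackBin, <<":">>)]).
```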

@mworrell (Member) commented:

Saw that a long time ago I started a review, but apparently never clicked on "submit" ...

3 participants