
Tasks

Async

Use async from your code to quickly offload tasks to the Cluster:

from django_q.tasks import async, result

# create the task
async('math.copysign', 2, -2)

# or with import and storing the id
from math import copysign

task_id = async(copysign, 2, -2)

# get the result
task_result = result(task_id)

# result returns None if the task has not been executed yet
# you can wait for it
task_result = result(task_id, 200)

# but in most cases you will want to use a hook:

async('math.modf', 2.5, hook='hooks.print_result')

# hooks.py
def print_result(task):
    print(task.result)

async can take the following optional keyword arguments:

hook

The function to call after the task has been executed. This function gets passed the complete Task object as its argument.

group

A group label. See Groups below for the group functions.

save

Overrides the result backend's save setting for this task.

timeout

Overrides the cluster's timeout setting for this task.

sync

Simulates a task execution synchronously. Useful for testing. Can also be forced globally via the sync configuration option.

broker

A broker instance, in case you want to control your own connections.

q_options

None of the option keywords get passed on to the task function. Alternatively, you can put them in a single keyword dict named q_options. This enables you to use these keyword names for your own function call:

# Async options in a dict

opts = {'hook': 'hooks.print_result',
        'group': 'math',
        'timeout': 30}

async('math.modf', 2.5, q_options=opts)

Please note that this will override any other option keywords.

Note

For tasks to be processed you will need to have a worker cluster running in the background using python manage.py qcluster, or configure Django Q to run in synchronous mode for testing using the sync option.
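As a minimal sketch, a cluster configuration in your Django settings could look like this, assuming the Redis broker with default connection values (the project name and worker count here are arbitrary examples):

# settings.py
Q_CLUSTER = {
    'name': 'myproject',
    'workers': 4,
    'redis': {
        'host': '127.0.0.1',
        'port': 6379,
        'db': 0
    }
}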

Groups

You can group together results by passing async the optional group keyword:

# result group example
from django_q.tasks import async, result_group

for i in range(4):
    async('math.modf', i, group='modf')

# after the tasks have finished you can get the group results
result = result_group('modf')
print(result)
[(0.0, 0.0), (0.0, 1.0), (0.0, 2.0), (0.0, 3.0)]
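Group results keep accumulating across runs. As a sketch, you can clear the group with delete_group before queuing a new batch:

from django_q.tasks import async, delete_group

# remove previous results for this group before queuing a new batch
delete_group('modf')
for i in range(4):
    async('math.modf', i, group='modf')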

Take care not to limit your results database too much, and call delete_group before each run as shown above, unless you want your results to keep adding up. Instead of result_group you can also use fetch_group to return a queryset of Task objects:

# fetch group example
from django_q.tasks import fetch_group, count_group, result_group

# count the number of failures
failure_count = count_group('modf', failures=True)

# only use the successes
results = fetch_group('modf')
if failure_count:
    results = results.exclude(success=False)
results = [task.result for task in results]

# this is the same as
results = fetch_group('modf', failures=False)
results = [task.result for task in results]

# and the same as
results = result_group('modf') # filters failures by default

Getting results by using result_group is of course much faster than using fetch_group, but it doesn't offer the benefits of Django's queryset functions.
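For example, a small sketch of filtering and ordering the group with queryset methods; success and stopped are fields on the Task model:

from django_q.tasks import fetch_group

# fetch_group returns a Django queryset of Task objects,
# so the usual queryset methods apply
tasks = fetch_group('modf')
latest = tasks.filter(success=True).order_by('-stopped').first()
if latest:
    print(latest.result)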

Note

Calling QuerySet.values for the result on Django 1.7 or lower will return a list of encoded results. If you can't upgrade to Django 1.8, use a list comprehension or an iterator to return decoded results.
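For instance, a short sketch of the decoded alternative:

from django_q.tasks import fetch_group

# on Django 1.7 or lower, fetch_group('modf').values('result') returns
# encoded results; a list comprehension returns them decoded
results = [task.result for task in fetch_group('modf')]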

You can also access group functions from a task result instance:

from django_q.tasks import fetch

task = fetch('winter-speaker-alpha-ceiling')
if task.group_count() > 100:
    print(task.group_result())
    task.group_delete()
    print('Deleted group {}'.format(task.group))

Synchronous testing

async can be instructed to execute a task immediately by setting the optional keyword sync=True. The task will then be injected straight into a worker and the result saved by a monitor instance:

from django_q.tasks import async, fetch

# create a synchronous task
task_id = async('my.buggy.code', sync=True)

# the task will then be available immediately
task = fetch(task_id)

# and can be examined
if not task.success:
    print('An error occurred: {}'.format(task.result))
An error occurred: ImportError("No module named 'my'",)

Note that async will block until the task is executed and saved. This feature bypasses the broker and is intended for debugging and development. Instead of setting sync on each individual async call, you can also configure sync as a global override.
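As a sketch, the global override is the sync configuration option (hard-coded here; in practice you would typically only enable it for tests):

# settings.py
Q_CLUSTER = {
    'name': 'myproject',
    'sync': True  # execute all async calls inline, without a cluster
}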

Connection pooling

Django Q tries to pass broker instances around its parts as much as possible to save you from running out of connections. However, if you are making many individual calls to async, it can help to set up a single broker instance to reuse:

# broker connection economy example
from django_q.tasks import async
from django_q.brokers import get_broker

broker = get_broker()
for i in range(50):
    async('math.modf', 2.5, broker=broker)

Tip

If you are using django-redis, you can configure Django Q to use its connection pool.
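A minimal sketch, assuming a django-redis cache named 'default' is already configured; the django_redis option points Django Q at that cache's connection pool:

# settings.py
Q_CLUSTER = {
    'name': 'myproject',
    'workers': 4,
    'django_redis': 'default'
}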

Reference

param object func

The task function to execute.

param tuple args

The arguments for the task function.

param object hook

Optional function to call after execution.

param str group

An optional group identifier.

param int timeout

Overrides the global cluster timeout.

param bool save

Overrides the global save setting for this task.

param bool sync

If set to True, async will simulate the task execution synchronously.

param broker

Optional broker instance, in case you want to control your own connections.

param dict q_options

Options dict; overrides the other option keywords.

param dict kwargs

Keyword arguments for the task function.

returns

The uuid of the task.

rtype

str