Releases: Koed00/django-q

v0.5.3

19 Aug 11:08
  • adds catch_up configuration setting for missed schedules
  • updated to test with Django 1.7.10 and 1.8.4

If your cluster has not run for a while, the default behavior for the scheduler is to play catch up with the schedules and keep executing them until they are up to date.

With version 0.5.3 you can change this by setting the catch_up configuration option to False.
The scheduler will then skip execution of scheduled events in the past. Instead, those missed tasks will run only once and normal scheduling resumes.
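
A minimal sketch of the relevant cluster configuration; the 'name' value is an illustrative placeholder:

# settings.py
Q_CLUSTER = {
    'name': 'myproject',
    'catch_up': False  # skip schedules that were missed while the cluster was down
}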

v0.5.2

13 Aug 18:02
Compare
Choose a tag to compare
  • Adds a sync configuration option. When set to True, this will force all async() calls to run with `sync=True`,
    effectively making all operations synchronous. Useful for testing.
  • Adds a queue_size function, which will tell you how many tasks are currently held in the broker queue.
    It does not count any tasks being processed by workers.
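
As a rough sketch, assuming the import path used by later releases for queue_size, the two additions could be used like this:

# settings.py
Q_CLUSTER = {
    'name': 'myproject',
    'sync': True  # run every async() call immediately and synchronously
}

# somewhere in your code; the import path is an assumption based on later versions
from django_q.tasks import queue_size

print(queue_size())  # tasks waiting in the broker queue, excluding tasks already at a worker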

v0.5.1

12 Aug 12:14

Adds the 'qinfo' management command, which provides a quick overview of all your clusters.

$ python manage.py qinfo


Task rate and average execution time are based on executed tasks in the last 24 hours.
If you have a SAVE_LIMIT defined, this will influence the numbers.

v0.5.0

06 Aug 08:54

Adds minute schedules

You now have the option to add a Minutes type schedule with a variable minutes parameter:

from django_q.tasks import schedule
from django_q.models import Schedule

schedule('math.hypot',
         3, 4,
         schedule_type=Schedule.MINUTES,
         minutes=5)

The minutes field is also available in the Admin and is ignored by everything but the Minutes schedule type.

_Warning_
Please run migrations with python manage.py migrate after this update to add the new minutes field to the database.

v0.4.6

04 Aug 20:02
  • Circumvents problems with OS X's multiprocessing implementation
  • Offers an alternative for platforms that do not support cpu count.

OS X does not implement the size function of multiprocessing queues. There is no real fix for this, other than a custom solution, so for now the monitor on OS X will not show a Task Queue or a Result Queue count.

Some platforms do not support Python's cpu count correctly. In those cases you can set the WORKERS configuration manually or you can install psutil as an alternative cpu count provider.
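
A minimal sketch of pinning the worker count in the cluster configuration; the 'name' and the count are illustrative:

# settings.py
Q_CLUSTER = {
    'name': 'myproject',
    'workers': 4  # set explicitly when automatic cpu count detection is unreliable
}

Alternatively, installing psutil (pip install psutil) lets it serve as the cpu count provider.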

Credits to @sebasmagri for the OS X testing

v0.4.5

01 Aug 14:17
  • Sets the pickle protocol to the highest available on your platform.
  • Unpacking now takes place in the Pusher instead of the Workers.
  • Fixes save_limit + 1 bug

Notes

Django Q now uses the highest available pickle protocol on your platform. This should increase performance considerably when sending large objects as arguments to your functions. It also expands the types of objects that can be successfully pickled.

The decompress-unsign-unpickle phase of the task package now takes place in the Pusher process instead of the workers. This has the advantage that all of the cluster processes can access the task's information during its life cycle in the cluster, and we will be using this in future enhancements. Since the pusher usually pushes faster than the cluster can consume, this should also improve performance a little, though probably not enough to notice for most.

v0.4.4

27 Jul 16:17
  • adds group filter to the admin views
  • the monitor's task queue (TQ) count indicates when queue_limit has been reached
  • closes old database connections on worker and monitor spawn

In some environments the workers and monitors would re-use stale db connections, causing problems.
Closing old connections on spawn will hopefully prevent this.

v0.4.3

24 Jul 11:46
  • adds queue_limit configuration option
  • package signing is now salted by name

Warning: make sure your queues are empty and your clusters have stopped before deploying this release. It changes the way packages are signed, so any tasks that were created with a previous version will be discarded as invalid by workers of the new version.

The new queue_limit option limits the number of tasks a single cluster will hold in memory. It does not limit the number of tasks you can queue to Redis with async(). This setting can be useful to balance the workload and memory consumption of a cluster, or to manage data loss in case of a cluster failure.
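
A sketch of a configuration that caps the in-memory queue; the 'name' and the specific numbers are illustrative:

# settings.py
Q_CLUSTER = {
    'name': 'myproject',
    'workers': 4,
    'queue_limit': 50  # the cluster keeps at most 50 tasks in memory at a time
}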

v0.4.2

22 Jul 15:34
  • fixes timeout issues
  • minor performance tweaks

v0.4.1

21 Jul 11:18
  • adds save override for tasks
  • adds an optional q_options dict instead of keywords

You can now instruct async() to override the global save settings for a particular task. This can be useful, for example, when you have a task that generates many sub-tasks but you only want to see the main task in your result database. In that case you set `save=False` when you async the sub-tasks.

For convenience, async() now also accepts all the option keywords as a single dict named q_options.
This has the benefit of freeing up keywords like 'save', 'group' etc. and being a little tidier.
Note that when you use the `q_options` dict, all other keyword arguments get passed on to the task function and are no longer interpreted as options.

from django_q.tasks import async

opts = {'save': False, 'group': 'Indexer'}
async('tasks.index', 'www', q_options=opts)

This also enables schedules to use async options when you add the q_options dict to the schedule's keywords.
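
As a sketch of that last point, a daily schedule could forward the same options through its keywords; the function path and option values reuse the example above:

from django_q.tasks import schedule
from django_q.models import Schedule

# the q_options dict is stored with the schedule and handed to async() when it fires
schedule('tasks.index',
         'www',
         schedule_type=Schedule.DAILY,
         q_options={'save': False, 'group': 'Indexer'})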