Merge pull request #49 from Koed00/dev
v0.5.3
Koed00 committed Aug 19, 2015
2 parents 97a3fc9 + 9fbf859 commit 2011efb
Showing 11 changed files with 211 additions and 194 deletions.
4 changes: 2 additions & 2 deletions .travis.yml
@@ -8,8 +8,8 @@ python:
- "3.4"

env:
- DJANGO=1.8.3
- DJANGO=1.7.9
- DJANGO=1.8.4
- DJANGO=1.7.10

install:
- pip install -q django==$DJANGO
2 changes: 1 addition & 1 deletion README.rst
@@ -32,7 +32,7 @@ Requirements
- `Arrow <https://github.com/crsmithdev/arrow>`__
- `Blessed <https://github.com/jquast/blessed>`__

Tested with: Python 2.7 & 3.4. Django 1.7.9 & 1.8.3
Tested with: Python 2.7 & 3.4. Django 1.7.10 & 1.8.4


Installation
2 changes: 1 addition & 1 deletion django_q/__init__.py
@@ -9,6 +9,6 @@
from .cluster import Cluster
from .monitor import Stat

VERSION = (0, 5, 2)
VERSION = (0, 5, 3)

default_app_config = 'django_q.apps.DjangoQConfig'
2 changes: 1 addition & 1 deletion django_q/conf.py
@@ -78,7 +78,7 @@ class Conf(object):
SYNC = conf.get('sync', False)

# If set to False the scheduler won't execute tasks in the past.
# Instead it will reschedule the next run in the future. Defaults to True.
# Instead it will run once and reschedule the next run in the future. Defaults to True.
CATCH_UP = conf.get('catch_up', True)

# Use the secret key for package signing
6 changes: 3 additions & 3 deletions docs/cluster.rst
@@ -42,14 +42,14 @@ Stopping the cluster with ctrl-c or either the ``SIGTERM`` and ``SIGKILL`` signals
16:44:14 [Q] INFO Process-1:9 stopped monitoring results
16:44:15 [Q] INFO Q Cluster-31781 has stopped.

The number of workers, optional timeouts, recycles and cpu_affinity can be controlled via the :ref:`configuration` settings.
The number of workers, optional timeouts, recycles and cpu_affinity can be controlled via the :doc:`configure` settings.

Multiple Clusters
-----------------
You can have multiple clusters on multiple machines, working on the same queue as long as:

- They connect to the same Redis server or Redis cluster.
- They use the same cluster name. See :ref:`configuration`
- They use the same cluster name. See :doc:`configure`
- They share the same ``SECRET_KEY`` for Django.

Using a Procfile
@@ -80,7 +80,7 @@ An example :file:`circus.ini` ::


Note that we only start one process. It is not a good idea to run multiple instances of the cluster in the same environment since this does nothing to increase performance and in all likelihood will diminish it.
Control your cluster using the ``workers``, ``recycle`` and ``timeout`` settings in your :ref:`configuration`
Control your cluster using the ``workers``, ``recycle`` and ``timeout`` settings in your :doc:`configure`

Architecture
------------
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -72,7 +72,7 @@
# The short X.Y version.
version = '0.5'
# The full version, including alpha/beta/rc tags.
release = '0.5.2'
release = '0.5.3'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
188 changes: 188 additions & 0 deletions docs/configure.rst
@@ -0,0 +1,188 @@
Configuration
-------------
.. py:currentmodule:: django_q

Configuration is handled via the ``Q_CLUSTER`` dictionary in your :file:`settings.py`.

.. code:: python

    # settings.py example
    Q_CLUSTER = {
        'name': 'myproject',
        'workers': 8,
        'recycle': 500,
        'timeout': 60,
        'compress': True,
        'save_limit': 250,
        'queue_limit': 500,
        'cpu_affinity': 1,
        'label': 'Django Q',
        'redis': {
            'host': '127.0.0.1',
            'port': 6379,
            'db': 0, }
    }

All configuration settings are optional:

name
~~~~

Used to differentiate between projects using the same Redis server. Defaults to ``'default'``.
This is useful when several projects share a single Redis instance.

.. note::
    Tasks are encrypted. When a worker encounters a task it cannot decrypt, the task will be discarded.
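
For example, two projects sharing one Redis server can be kept apart by giving each cluster its own name (the names below are made up for illustration)::

    # project A settings.py
    Q_CLUSTER = {'name': 'project_a'}

    # project B settings.py
    Q_CLUSTER = {'name': 'project_b'}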

workers
~~~~~~~

The number of workers to use in the cluster. Defaults to the CPU count of the current host, but can be set to a custom number. [#f1]_
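
If you would rather derive the worker count from the machine it runs on, a minimal sketch (the one-core margin is only an assumption, to leave headroom for the cluster's auxiliary processes)::

    # settings.py - sketch: derive the worker count at startup
    import multiprocessing

    Q_CLUSTER = {
        'name': 'myproject',
        # leave one core free; adjust to taste
        'workers': max(multiprocessing.cpu_count() - 1, 1),
    }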

recycle
~~~~~~~

The number of tasks a worker will process before recycling. Useful for releasing memory resources on a regular basis. Defaults to ``500``.

.. _timeout:

timeout
~~~~~~~

The number of seconds a worker is allowed to spend on a task before it's terminated. Defaults to ``None``, meaning it will never time out.
Set this to something that makes sense for your project. Can be overridden for individual tasks.

compress
~~~~~~~~

Compresses task packages to Redis. Useful for large payloads, but can add overhead when used with many small packages.
Defaults to ``False``.

.. _save_limit:

save_limit
~~~~~~~~~~

Limits the number of successful tasks saved to Django.

- Set to ``0`` for unlimited.
- Set to ``-1`` for no success storage at all.
- Defaults to ``250``.
- Failures are always saved.
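
For example, a deployment that only cares about failed tasks could turn off success storage entirely (a minimal sketch)::

    Q_CLUSTER = {
        'name': 'myproject',
        'save_limit': -1  # keep no successful tasks; failures are still saved
    }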

.. _sync:

sync
~~~~

When set to ``True``, this configuration option forces all :func:`async` calls to be run with ``sync=True``, effectively making everything synchronous. Useful for testing. Defaults to ``False``.
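
For example, a dedicated test settings module can force synchronous execution cluster-wide; the module name and import below are hypothetical::

    # test_settings.py - run every task inline during tests
    from myproject.settings import *  # assumes a base settings module

    Q_CLUSTER = {
        'name': 'myproject',
        'sync': True,  # every async() call now behaves as if sync=True was passed
    }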

.. _queue_limit:

queue_limit
~~~~~~~~~~~

This does not limit the total number of tasks that can be queued on Redis, but rather how many tasks a single cluster keeps in memory at any time.
Setting this to a reasonable number can help balance the workload and the memory overhead of each individual cluster.
It can also be used to limit data loss in case of a cluster failure.
Defaults to ``None``, meaning no limit.
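
As an illustration (the numbers are arbitrary), a cluster with 4 workers and a ``queue_limit`` of 50 holds at most 50 tasks in its local memory at any time; roughly speaking, only those in-memory tasks are at risk if the cluster dies unexpectedly, while the rest of the queue stays on Redis::

    Q_CLUSTER = {
        'name': 'myproject',
        'workers': 4,
        'queue_limit': 50
    }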

label
~~~~~

The label used for the Django Admin page. Defaults to ``'Django Q'``.

.. _catch_up:

catch_up
~~~~~~~~
The default behavior for schedules that didn't run while a cluster was down is to play catch up and execute all the missed time slots until things are back on schedule.
You can override this behavior by setting ``catch_up`` to ``False``. This will make those schedules run only once when the cluster starts, after which normal scheduling resumes.
Defaults to ``True``.
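
For example, to have missed schedules run just once after downtime instead of replaying every missed slot::

    Q_CLUSTER = {
        'name': 'myproject',
        'catch_up': False
    }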

redis
~~~~~

Connection settings for Redis. Defaults::

redis: {
'host': 'localhost',
'port': 6379,
'db': 0,
'password': None,
'socket_timeout': None,
'charset': 'utf-8',
'errors': 'strict',
'unix_socket_path': None
}

For more information on these settings, please refer to the `Redis-py <https://github.com/andymccurdy/redis-py>`__ documentation.
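
For instance, pointing the cluster at a password-protected Redis instance on another host only requires overriding the relevant keys (the hostname and password below are placeholders)::

    Q_CLUSTER = {
        'name': 'myproject',
        'redis': {
            'host': 'redis.example.com',
            'port': 6379,
            'db': 0,
            'password': 'supersecret'
        }
    }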

.. _django_redis:

django_redis
~~~~~~~~~~~~

If you are already using `django-redis <https://github.com/niwinz/django-redis>`__ for your caching, you can take advantage of its excellent connection backend by supplying the name
of the cache connection you want to use::

    # example django-redis connection
    Q_CLUSTER = {
        'name': 'DJRedis',
        'workers': 4,
        'timeout': 90,
        'django_redis': 'default'
    }



.. tip::
    Django Q uses your ``SECRET_KEY`` to encrypt task packages and prevent task crossover, so make sure it is set in your Django settings.

cpu_affinity
~~~~~~~~~~~~

Sets the number of processors each worker can use. This does not affect auxiliary processes like the sentinel or monitor and is only useful for tweaking the performance of very high-traffic clusters.
The affinity number has to be higher than zero and less than the total number of processors to have any effect. Defaults to using all processors::

# processor affinity example.

4 processors, 4 workers, cpu_affinity: 1

worker 1 cpu [0]
worker 2 cpu [1]
worker 3 cpu [2]
worker 4 cpu [3]

4 processors, 4 workers, cpu_affinity: 2

worker 1 cpu [0, 1]
worker 2 cpu [2, 3]
worker 3 cpu [0, 1]
worker 4 cpu [2, 3]

8 processors, 8 workers, cpu_affinity: 3

worker 1 cpu [0, 1, 2]
worker 2 cpu [3, 4, 5]
worker 3 cpu [6, 7, 0]
worker 4 cpu [1, 2, 3]
worker 5 cpu [4, 5, 6]
worker 6 cpu [7, 0, 1]
worker 7 cpu [2, 3, 4]
worker 8 cpu [5, 6, 7]


In some cases, setting the cpu affinity for your workers can lead to performance improvements, especially if the load is high and consists of many repeating small tasks.
Start with an affinity of 1 and work your way up. You will have to experiment with what works best for you.
As a rule of thumb: cpu_affinity 1 favors repetitive, short-running tasks, while no affinity benefits longer-running tasks.

.. note::

The ``cpu_affinity`` setting requires the optional :ref:`psutil <psutil>` module.

.. py:module:: django_q
.. rubric:: Footnotes

.. [#f1] Uses :func:`multiprocessing.cpu_count()` which can fail on some platforms. If so, please set the worker count in the configuration manually or install :ref:`psutil <psutil>` to provide an alternative CPU count method.
3 changes: 2 additions & 1 deletion docs/index.rst
@@ -24,14 +24,15 @@ Features
- Python 2 and 3


Django Q is tested with: Python 2.7 & 3.4. Django 1.7.9 & 1.8.3
Django Q is tested with: Python 2.7 & 3.4. Django 1.7.10 & 1.8.4

Contents:

.. toctree::
:maxdepth: 2

Installation <install>
Configuration <configure>
Tasks <tasks>
Schedules <schedules>
Cluster <cluster>
