This repository has been archived by the owner on Apr 30, 2024. It is now read-only.

Releases: Workiva/furious

v1.6.5

06 Jul 14:35
af4d2a5

This patch release includes the following changes:

Miscellaneous

  • #189 Move imports inside of csrf_check

Notes created on Wednesday, July 06 02:35 PM UTC

v1.6.4

05 Jul 20:43
179af8c

This patch release includes the following changes:

New Features and Improvements

  • #188 RED-5108 - Check taskqueue IP
    • RED-5108 Fix furious RCE

Notes created on Tuesday, July 05 08:43 PM UTC

Minor Release 1.6.3

05 Jul 15:14
dd483d8

Testing release to PyPI

Minor Release 1.6.2

28 Jun 19:08
1705ad1

Testing PyPI deploy

Minor Release 1.6.1

23 Jun 20:34
243fa31

Updating deployment process

Minor Release 1.6.0

26 May 20:35

#169 Update README for defaults decorator location

#173 Async subclasses can now decorate their target functions

  • There are a couple of reasons you'd want to leverage this:

    1. Instead of decorating every target function's definition, just use the
      appropriate Async type.
    2. When you only want the decorated function for tasks.

    Examples of the latter, where you'd want extra functionality for the Task but
    not when using the target function as a normal local procedure call, would be:

      • logging out extra context for the task
      • top-level try/except handling for the task
      • a retry loop, instead of just kicking off a new task when a transient error happens
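A minimal sketch of the idea, using stand-in names rather than the real furious API (`LoggingAsync`, `with_task_logging`, and `execute_as_task` are all hypothetical): the decorator is applied only on the task-execution path, so calling the target directly as a local procedure stays undecorated.

```python
import functools
import logging

def with_task_logging(func):
    """Hypothetical decorator: log extra context around task execution."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("running task target %s", func.__name__)
        return func(*args, **kwargs)
    return wrapper

class LoggingAsync:
    """Stand-in for an Async subclass that decorates its target.

    Not the furious API; it only sketches the point above: decoration
    happens at task-execution time, so local calls skip the wrapper.
    """
    def __init__(self, target):
        self._target = target

    def execute_as_task(self):
        # Decoration happens here, only for the task path.
        return with_task_logging(self._target)()

def compute():
    return 42

local_result = compute()                          # plain local call, no wrapper
task_result = LoggingAsync(compute).execute_as_task()
```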

#174 Some README & test cleanups.

  • Correct some imports and small mistakes in some examples in the README.
    Found out after the README changes that Travis CI was failing because of some typos in the tests. Added a commit for that too.

#175 Trigger error handler on DeadlineExceededError

  • We will now trigger the failure handler from any exception that inherits from BaseException (such as google.appengine.runtime.DeadlineExceededError) instead of Exception.
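A small illustration of the distinction, with a stand-in exception class (the real `DeadlineExceededError` lives in `google.appengine.runtime` and is not importable here):

```python
class DeadlineExceededError(BaseException):
    """Stand-in for google.appengine.runtime.DeadlineExceededError,
    which inherits from BaseException rather than Exception."""

def run_with_failure_handler(target, on_failure):
    # Catching BaseException (not Exception) means DeadlineExceededError
    # also triggers the failure handler before propagating.
    try:
        return target()
    except BaseException as exc:
        on_failure(exc)
        raise

handled = []

def slow_task():
    raise DeadlineExceededError("over the request deadline")

try:
    run_with_failure_handler(slow_task, handled.append)
except BaseException:
    pass  # an `except Exception` clause would never have reached the handler
```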

Minor Release 1.5.0

12 Oct 16:28

#166 Update processors.py

  • This seems of little use and is in every non-completion furious task. Minor but could just be removed.

#167 Update examples to have the correct furious handler

  • The examples should work now. I'm not sure why the queue base-URL change affected the handling of the tasks, but after backing up to the commit prior to that one the URL handling was fine. I made this small tweak to get things working. I suspected that the * in the app.yaml routing before the change was being handled differently, but after a lot of searching I was not able to find what was actually handling the furious calls (to my chagrin). I was hacking on some completion work and couldn't get the examples to run, so I carved this off to get it reviewed first.

#170 encode / decode _process_results option

  • Currently, the _process_results option is not correctly encoded/decoded with the rest of the Async options; passing in a callable results in a JSON error. This change handles _process_results exactly like _context_checker is currently handled.
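A rough sketch of why a raw callable breaks JSON encoding and how storing its dotted path avoids it. The helper names here are illustrative, not furious's actual functions:

```python
import json

def encode_callable_option(option):
    """Store a callable as its dotted path so the options stay JSON-safe."""
    if callable(option):
        return "%s.%s" % (option.__module__, option.__name__)
    return option

def path_to_reference(path):
    """Resolve a dotted path back to the callable (simplified)."""
    module_name, _, attr = path.rpartition(".")
    module = __import__(module_name, fromlist=[attr])
    return getattr(module, attr)

# A raw callable cannot be JSON-serialized, but its dotted path can be.
encoded = encode_callable_option(json.dumps)
round_tripped = path_to_reference(encoded)
```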

Minor Release 1.4.0

02 Jun 19:41

#162 insert_tasks_ignore_duplicates - New _insert_tasks_ignore_duplicate_names

  • Gracefully handle DuplicateTaskNameError exceptions raised by inserts.

    1. Let me know if you guys would prefer to just modify the existing _insert_tasks function, and add another option.
    2. Should we add an option that selects _insert_tasks_ignore_duplicate_names automatically, which shields the implementation from the user?

    Right now a typical usage would be:

      with context.new(batch_size=100,
                       insert_tasks=insert_tasks_ignore_duplicate_names) as ctx:
        ctx.add(...) 
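A simplified sketch of the ignore-duplicates behaviour, with a stand-in exception class and insert callable, and without the batch-splitting recursion the real _insert_tasks performs:

```python
class DuplicateTaskNameError(Exception):
    """Stand-in for the task queue's DuplicateTaskNameError."""

def insert_tasks_ignore_duplicate_names(tasks, queue, insert):
    """Insert each task, skipping ones whose name was already used.

    `insert` stands in for the task queue's add call; the real furious
    function also splits large batches the way _insert_tasks does.
    """
    inserted = 0
    for task in tasks:
        try:
            insert(task, queue)
            inserted += 1
        except DuplicateTaskNameError:
            # Another insert already used this task name; drop it quietly.
            pass
    return inserted

seen = set()

def add(task, queue):
    if task in seen:
        raise DuplicateTaskNameError(task)
    seen.add(task)

count = insert_tasks_ignore_duplicate_names(["a", "b", "a"], "default", add)
```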
    

#163 Add info needed to add Furious to the public pypi

  • Since Furious is open source we should be good people and add our lib to pypi so
    others can easily use it via pip/easy_install and not be required to build from
    source.

    I've added the required info to setup.py and setup.cfg. Wasn't much. Also added
    the generated pypi files to the .gitignore so we don't push those up.

#164 Mounting furious on /_queue/async instead of /_ah/queue/async

  • This includes a new furious version that moves furious URIs from /_ah/queue/async to /_queue/async. All the yaml files now map both the old and new URIs to the furious router with the following regex: url: /_(ah/queue|queue)/async.*. Testing needs to ensure that tasks in flight with the old URIs are able to drain successfully after a new deploy.
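The combined route can be sanity-checked with Python's re module; this only exercises the regex quoted in the notes, not the actual app.yaml handler wiring:

```python
import re

# The combined route from the release notes: it matches both the legacy
# and the new mount points, so in-flight tasks can still drain after a deploy.
route = re.compile(r"/_(ah/queue|queue)/async.*")

legacy = route.match("/_ah/queue/async/run-task")  # old URI, still routed
current = route.match("/_queue/async/run-task")    # new URI
other = route.match("/some/other/path")            # not a furious URI
```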

Minor Release 1.3.0

21 Jan 16:18

#158 Fix the error callback example

  • This fixes the example to show how you can assign an async to be executed if there is an unhandled exception in your own async. Due to changes in how we handle results, this example no longer worked and was throwing its own exceptions while trying to access properties that no longer existed. This fixes that.

I think the expected behavior of the error callback example is to re-raise the exception that was found in the other task.

#159 add an extra_task_info async option which is logged before task execution

  • If an extra_task_info option is present on the Async, it will be logged immediately before task execution, with the rest of the task info.

#160 Transient error retry on start

  • Re-opened PR against Workiva/furious

    Original PR here: markshaule-wf#1
    Async Changes:

      • Async.start() now sleeps before attempting to re-add the task on a TransientError.
      • Added option retry_transient_errors to override the retry behaviour in Async.start(). False can be specified to just re-raise the TransientError and not attempt a retry.

    Context Changes:

      • Context _insert_tasks now re-raises TransientError if the retry option has been set to False.
      • Renamed parameter 'retry_errors' to 'retry_transient_errors' in the _insert_tasks function.
      • _insert_tasks - the retry_transient_errors parameter is now passed on to recursive calls correctly.

    Notes on compatibility:
    Since I've renamed the parameter for _insert_tasks, that could potentially break someone's custom implementation of that function.
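A rough sketch of the retry behaviour described above, using stand-in names (`insert_with_retry` and the exception class are illustrative; the real logic lives in Async.start() and _insert_tasks):

```python
import time

class TransientError(Exception):
    """Stand-in for the task queue's TransientError."""

def insert_with_retry(insert, task, retry_transient_errors=True, delay=0.0):
    """Illustrative sketch: sleep and retry once, or re-raise on opt-out."""
    try:
        return insert(task)
    except TransientError:
        if not retry_transient_errors:
            raise  # option set to False: re-raise instead of retrying
        time.sleep(delay)  # sleep briefly before re-adding the task
        return insert(task)  # a second failure propagates to the caller

attempts = []

def flaky(task):
    attempts.append(task)
    if len(attempts) == 1:
        raise TransientError("first insert failed")
    return "enqueued"

result = insert_with_retry(flaky, "task-1")

def always_fails(task):
    raise TransientError("still down")

opted_out_raised = False
try:
    insert_with_retry(always_fails, "task-2", retry_transient_errors=False)
except TransientError:
    opted_out_raised = True
```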

Minor Release 1.2.0

24 Oct 17:00

#147 Add parent id and request id to Async

  • Add a parent id and request id to the Async object. The parent id will be passed
    down to "children" asyncs. This will allow us to track the chain/graph of asyncs
    without developer intervention.

#149 Small doc-related improvements

  • Bump docutils requirement to 0.12; 0.10 is not available on PyPI.
    Correct and expand .pth file instructions, and improve doc building instructions while I'm at it.

#155 transient_error_retry - Retry transient errors when attempting to insert_tasks, and re-raise if the reinsert fails

  • Instead of silently failing on a TransientError on task inserts, we check the options to see if we should retry the failure (after a delay). If the retry fails, we re-raise.

    Currently, the default option is to retry. I can't think of a scenario where the caller would not want the errors retried, or at least an error triggered.
    As a side effect, this may break custom implementations of _insert_tasks since we have added a parameter.

#156 Only reinsert tasks which were not enqueued

  • Addresses issue #154.

    If a list of more than one Task is given, a raised exception does not
    guarantee that no tasks were added to the queue (unless transactional is set
    to True). To determine which tasks were successfully added when an exception
    is raised, check the Task.was_enqueued property.
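A minimal sketch of the reinsert filter, with a stand-in Task class (the real `was_enqueued` property comes from the App Engine taskqueue API):

```python
class FakeTask:
    """Stand-in for taskqueue.Task, exposing only was_enqueued."""
    def __init__(self, name, was_enqueued=False):
        self.name = name
        self.was_enqueued = was_enqueued

def tasks_to_reinsert(tasks):
    # Only retry the tasks the failed bulk add did not actually enqueue.
    return [t for t in tasks if not t.was_enqueued]

batch = [FakeTask("a", True), FakeTask("b", False), FakeTask("c", True)]
retry_batch = tasks_to_reinsert(batch)
```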

#157 Inherit Queue for completion check

  • Addresses concerns over completion checks and cleanup tasks occurring in the default queue in BigSky.

    Three configurable options were added:

      • cleanupqueue - where cleanup tasks for completion markers should be run
      • cleanupdelay - how long cleanup tasks should be delayed
      • defaultqueue - the default queue for completion and cleanup tasks (if no cleanupqueue is defined)

    Completion checks will now inherit which queue they will run in from the tasks that kick them off. There is a new example of how this works added to the examples folder called context_inherit.

    This pattern of inheritance allows greater freedom to shard asyncs across different queues and balance their performance (completion performance included).
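A sketch of the fallback order these options imply (`completion_queue` is a hypothetical helper, not furious's API):

```python
def completion_queue(task_queue, config):
    """Hypothetical helper sketching the queue-inheritance rules.

    config keys mirror the options named in the notes: cleanupqueue,
    cleanupdelay, defaultqueue.
    """
    # Completion checks inherit the queue of the task that kicked them off.
    check_queue = task_queue
    # Cleanup tasks use cleanupqueue, falling back to defaultqueue.
    cleanup_queue = config.get("cleanupqueue") or config.get("defaultqueue")
    return check_queue, cleanup_queue

inherited = completion_queue("shard-7", {"defaultqueue": "default"})
```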