
Allow simulation output to be cached #1404

Open · MichaelClerx opened this issue Oct 12, 2021 · 2 comments

MichaelClerx (Member) commented:
There could be use cases for caching at several levels, e.g.

  • CachingForwardModel(model): checks times and parameters against a (limited-size?) dict and returns cached results if possible. Not sure when you'd use this.
  • CachingSingle/MultiSeriesProblem(problem): checks parameters against a (limited-size?) dict and returns cached results if possible. Useful for "Handling outputs from the same model with different likelihoods / score functions" #1403.
  • CachingLikelihood and CachingError: check parameters against a (limited-size?) dict and return cached results if possible. Useful if we expect methods to test the same parameter sets multiple times (do we?).

I've written them as wrapper classes here, as that's the minimal-effort solution for developers (no changes to the underlying classes); a rough sketch of the first option is shown below. But we could also consider updating the base classes with some reusable caching code and making all derived classes use it.
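As a rough illustration (not code from this issue), the first option could look something like the sketch below. The class name CachingForwardModel comes from the list above; the cache-size limit, the FIFO eviction, and the tuple-based keys are assumptions, and the simulate / n_parameters method names follow the usual pints.ForwardModel interface.

import collections

import pints


class CachingForwardModel(pints.ForwardModel):
    """
    Sketch: wraps a ``pints.ForwardModel`` and caches ``simulate()`` results
    in a limited-size dict, keyed on (parameters, times).
    """

    def __init__(self, model, max_size=32):
        self._model = model
        self._max_size = max_size
        # OrderedDict used as a simple FIFO cache
        self._cache = collections.OrderedDict()

    def n_parameters(self):
        return self._model.n_parameters()

    def simulate(self, parameters, times):
        # Numpy arrays are not hashable, so convert to tuples for the key
        key = (tuple(parameters), tuple(times))
        if key not in self._cache:
            if len(self._cache) >= self._max_size:
                # Discard the oldest entry to bound memory use
                self._cache.popitem(last=False)
            self._cache[key] = self._model.simulate(parameters, times)
        return self._cache[key]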

MichaelClerx (Member, Author) commented on Aug 13, 2022:

We might only need it for Error and LogPdf?

Here's an example for an error. It uses the functools.lru_cache decorator.

The only complication is that, because lru_cache needs to place x in a dict, x needs to be hashable. NumPy arrays are not hashable, so the methods below convert x to a tuple first.

import functools

import pints


class CachingError(pints.ErrorMeasure):
    """
    Wraps around another error measure and provides a ``sensitivities()``
    method for use with e.g. ``scipy``.

    All calls to the error and to :meth:`sensitivities` are mapped to
    :meth:`evaluateS1`. To reduce redundant calculations, up to 32 results of
    calling ``evaluateS1`` will be kept in cache.

    Note: Using this wrapper for methods that don't require sensitivities will
    result in very poor performance.
    """

    def __init__(self, error):
        self._e = error

    def __call__(self, x):
        # evaluateS1 returns (error, sensitivities); return the error only
        return self._both(tuple(x))[0]

    @functools.lru_cache(maxsize=32)
    def _both(self, x):
        # x must be hashable here (a tuple), so that lru_cache can store it
        return self._e.evaluateS1(x)

    def evaluateS1(self, x):
        return self._both(tuple(x))

    def n_parameters(self):
        return self._e.n_parameters()

    def sensitivities(self, x):
        # Return only the sensitivities part of the cached result
        return self._both(tuple(x))[1]
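A hypothetical usage sketch (error and x0 are placeholders: any pints.ErrorMeasure implementing evaluateS1 and a matching parameter vector):

cached = CachingError(error)

e = cached(x0)                 # error value, computed via evaluateS1 and cached
g = cached.sensitivities(x0)   # sensitivities, served from the same cache entry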

MichaelClerx (Member, Author) commented:

Just noticed that this is not required for scipy: you should set jac=True instead and have it call evaluateS1 :D
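For reference, a minimal sketch of that approach (error and x0 are placeholders, not from this issue): with jac=True, scipy.optimize.minimize expects the objective to return a (value, gradient) tuple, which is exactly what evaluateS1 provides, so no caching wrapper is needed.

from scipy.optimize import minimize

# Assuming `error` is a pints.ErrorMeasure implementing evaluateS1 and
# `x0` is an initial parameter vector. With jac=True, scipy treats the
# objective as returning (value, gradient), so evaluateS1 can be passed in
# directly.
result = minimize(error.evaluateS1, x0, jac=True, method='L-BFGS-B')
print(result.x)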
