
Allow user to disable unnecessary model evaluations after #2202 #2215

Open
torfjelde opened this issue May 6, 2024 · 5 comments
Comments

@torfjelde (Member)
After #2202 we will perform one additional evaluation per MCMC iteration than before.

In most use cases encountered in practice, e.g. when using NUTS, this extra evaluation is negligible compared to the inference itself, but in some cases it can be a significant increase in computational cost.

Unfortunately, the extra evaluation is necessary to support "all" possible models and to avoid subtle bugs that cause incorrect results in some cases, e.g. see the referenced issues in #2202. But IMO we should also provide the user a way to turn this off, as it is not required for most models encountered in practice.

One simple way of doing this is to let the user tell Turing.jl that "hey, this model is 'static' (in some sense), so perform optimizations where you can". At the moment, this would have to be done manually by the user, e.g. by setting a variable attached to the Model itself telling us that it has static support. That is not ideal, but I don't really see a way around it in the near future (long-term we can definitely do better, e.g. analyze the IR of the model).
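A minimal sketch of what such a user-set flag could look like. All names here (`StaticSupportModel`, `mark_static`, `has_static_support`) are illustrative assumptions, not Turing.jl's actual API: the idea is just a trait that samplers could query before deciding whether to perform the extra model evaluation.

```julia
# Hypothetical sketch; these names are NOT Turing.jl's actual API.
# Wrap a model to declare that its support is static, i.e. the set of
# random variables does not change between MCMC iterations.
struct StaticSupportModel{M}
    model::M
end

# Hypothetical user-facing helper: the user opts in explicitly.
mark_static(model) = StaticSupportModel(model)

# Trait that a sampler could check to skip the redundant evaluation.
has_static_support(::StaticSupportModel) = true
has_static_support(model) = false
```

The appeal of a model-level trait is that samplers can dispatch on it directly, rather than threading a keyword argument through every call site.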

I'm of the opinion that adding something like this now is worth it, because I know I would personally be annoyed if I'm forced to perform redundant operations when my model does not require it, without an easy way to turn it off. But I know there are other opinions floating around regarding this.

Thoughts? @devmotion @yebai @sunxd3

@yebai (Member)

yebai commented May 6, 2024

Can we simply pass an additional argument to the sample function, e.g. sample(;is_static=true)?

@yebai (Member)

yebai commented May 6, 2024

For HMC, this is per MCMC proposal instead of per leapfrog iteration, right?

@torfjelde (Member, Author)

Can we simply pass an additional argument to the sample function, e.g. sample(;is_static=true)?

Technically possible, but this would be very invasive, as it requires propagating kwargs through so much of the Turing.jl code base. I started out doing exactly that and then gave up because it got so unwieldy.

@sunxd3 (Collaborator)

sunxd3 commented May 6, 2024

I was checking out the has_static_support stuff, but it has been removed. Tor, are you suggesting a similar mechanism?

@torfjelde (Member, Author)

Exactly :)
