Reenable PriorContext for Optimization #2165
base: master
Conversation
AFAICT support for
I see. It was working before, though. Do you think it makes sense to support it for some of the user use cases?
IMO if we're allowing both
I had one minor style change to suggest.
I also got some extra peace of mind by checking, in the test, that

```julia
true_prior_logpdf = -Distributions.logpdf(Uniform(0, 2), a[1])
@test Turing.OptimLogDensity(m1, prictx)(a) ≈ true_prior_logpdf
```

You could incorporate a more explicit check like that.
Both of these are optional, I'd be okay with the code as is. I'll leave approving for @torfjelde since there might be context (pun unintended) to this that I lack.
```julia
DynamicPPL.tilde_observe(::OptimizationContext{<:PriorContext}, right, left, vi) =
    (0, vi)

DynamicPPL.dot_tilde_observe(::OptimizationContext{<:PriorContext}, right, left, vi) =
    (0, vi)
```
Suggested change:

```diff
-DynamicPPL.tilde_observe(::OptimizationContext{<:PriorContext}, right, left, vi) =
-    (0, vi)
-DynamicPPL.dot_tilde_observe(::OptimizationContext{<:PriorContext}, right, left, vi) =
-    (0, vi)
+DynamicPPL.tilde_observe(ctx::OptimizationContext{<:PriorContext}, args...) =
+    DynamicPPL.tilde_observe(ctx.context, args...)
+DynamicPPL.dot_tilde_observe(ctx::OptimizationContext{<:PriorContext}, args...) =
+    DynamicPPL.dot_tilde_observe(ctx.context, args...)
```
I think falling back on the PriorContext implementation is a bit neater. The result is the same.
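The equivalence of the two variants comes down to Julia's multiple dispatch. Here is a minimal self-contained sketch with toy stand-ins for the DynamicPPL types (the real `OptimizationContext`, `PriorContext`, and `tilde_observe` live in Turing/DynamicPPL and have richer signatures): because the wrapped `PriorContext` already ignores observe statements and contributes zero log-probability, delegating to it gives the same `(0, vi)` result as hard-coding it.

```julia
# Toy sketch, NOT Turing's actual definitions: illustrates why delegating
# to the wrapped context is equivalent to hard-coding (0, vi).
abstract type AbstractContext end
struct PriorContext <: AbstractContext end
struct OptimizationContext{C<:AbstractContext} <: AbstractContext
    context::C
end

# A PriorContext ignores observations: zero log-probability contribution.
tilde_observe(::PriorContext, right, left, vi) = (0, vi)

# Delegation variant: fall back on the wrapped context's implementation.
tilde_observe(ctx::OptimizationContext{<:PriorContext}, args...) =
    tilde_observe(ctx.context, args...)

ctx = OptimizationContext(PriorContext())
logp, vi = tilde_observe(ctx, nothing, nothing, Dict())
# logp is 0, and vi is passed through unchanged.
```

The delegation variant also stays correct automatically if the wrapped context's behavior ever changes, rather than duplicating its logic.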
This PR reenables support for PriorContext in optimization, which was disabled by #2022.
While PriorContext is not directly supported by the current optim_problem() interface (only MLE and MAP are supported, although adding a MaxPrior mode would be possible), I think it is useful to support it in Turing.OptimLogDensity().
In my code I use it for multi-objective Pareto optimization (prior + log-likelihood).
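The multi-objective use case can be sketched generically. The two closures below are hand-written stand-ins for the negative log-prior and negative log-likelihood objectives one would build with `Turing.OptimLogDensity` under PriorContext and LikelihoodContext; the model is made up (a ~ Uniform(0, 2), x ~ Normal(a, 1), one observation x = 1.5):

```julia
# Hand-written stand-ins for OptimLogDensity objectives (hypothetical model).
neg_logprior(a) = (0 <= a[1] <= 2) ? log(2.0) : Inf      # -logpdf(Uniform(0, 2), a[1])
neg_loglik(a)   = 0.5 * (1.5 - a[1])^2 + 0.5 * log(2pi)  # -logpdf(Normal(a[1], 1), 1.5)

# Weighted-sum scalarization: sweeping w over (0, 1) traces candidate
# Pareto-optimal trade-offs between the prior and likelihood objectives.
scalarized(a, w) = w * neg_logprior(a) + (1 - w) * neg_loglik(a)
```

With this PR, `neg_logprior` could instead be obtained directly from `Turing.OptimLogDensity` with a PriorContext, as in the test snippet above, rather than rederived by hand.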