Reenable PriorContext for Optimization #2165

Open · wants to merge 1 commit into base: master

Conversation

@alyst commented Feb 3, 2024

This PR re-enables support for PriorContext in optimization, which was disabled by #2022.
While PriorContext is not directly supported by the current optim_problem() interface (only MLE/MAP are supported, although adding MaxPrior would be possible), I think it is useful to support it for Turing.OptimLogDensity().
In my code I use it for multi-objective Pareto optimization (prior + log-likelihood).
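
As a rough sketch of that use case (not part of this PR): evaluate the negative log-prior and negative log-likelihood as two separate objectives for the same parameter vector. The model below is hypothetical, and the Turing.OptimizationContext / Turing.OptimLogDensity constructors are assumptions based on the test snippet quoted later in this thread.

```julia
using Turing, Distributions, DynamicPPL

# Hypothetical model: one bounded parameter and a single observation.
@model function demo(x)
    a ~ Uniform(0, 2)
    x ~ Normal(a, 1)
end

m = demo(1.5)

# Assumed constructors; OptimLogDensity returns the negative log density,
# since optimizers minimize.
prior_obj = Turing.OptimLogDensity(m, Turing.OptimizationContext(DynamicPPL.PriorContext()))
llh_obj   = Turing.OptimLogDensity(m, Turing.OptimizationContext(DynamicPPL.LikelihoodContext()))

θ = [1.0]
neg_logprior = prior_obj(θ)   # ≈ -logpdf(Uniform(0, 2), θ[1])
neg_loglik   = llh_obj(θ)     # ≈ -logpdf(Normal(θ[1], 1), 1.5)

# (neg_logprior, neg_loglik) can then be handed to a multi-objective (Pareto)
# optimizer as two objectives.
```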

@devmotion (Member)

> This PR re-enables support for PriorContext in optimization, which was disabled by #2022.

AFAICT support for PriorContext has never existed and has never been tested. The check in the PR was only introduced to communicate this more clearly and provide a better error message.

@alyst (Author) commented Feb 4, 2024

> AFAICT support for PriorContext has never existed and has never been tested.

I see. It was working before, though. Do you think it makes sense to support it for some user use cases?

@torfjelde (Member)

IMO, if we're allowing both DefaultContext and LikelihoodContext, allowing PriorContext also seems reasonable (i.e. supporting all the leaf contexts).

@mhauru left a comment


I had one minor style change to suggest.

I also got some extra peace of mind by checking, in the test, that

    true_prior_logpdf = -Distributions.logpdf(Uniform(0, 2), a[1])
    @test Turing.OptimLogDensity(m1, prictx)(a) ≈ true_prior_logpdf

You could incorporate a more explicit check like that.
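
For context, the fragment above relies on m1, prictx, and a defined elsewhere in the test. A self-contained sketch of such a check might look as follows; the model, the evaluation point, and the Turing.OptimizationContext constructor are assumptions, not part of the PR.

```julia
using Turing, Distributions, DynamicPPL, Test

# Hypothetical stand-ins for m1, prictx, and a from the surrounding test.
@model function m1_example(x)
    a ~ Uniform(0, 2)
    x ~ Normal(a, 1)
end

m1 = m1_example(0.5)
prictx = Turing.OptimizationContext(DynamicPPL.PriorContext())  # assumed constructor
a = [1.2]

# OptimLogDensity returns the negative log density, hence the minus sign;
# under PriorContext the observation of x should contribute nothing.
true_prior_logpdf = -Distributions.logpdf(Uniform(0, 2), a[1])
@test Turing.OptimLogDensity(m1, prictx)(a) ≈ true_prior_logpdf
```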

Both of these are optional; I'd be okay with the code as is. I'll leave approving for @torfjelde, since there might be context (pun unintended) to this that I lack.

Comment on lines +86 to +90
DynamicPPL.tilde_observe(::OptimizationContext{<:PriorContext}, right, left, vi) =
    (0, vi)

DynamicPPL.dot_tilde_observe(::OptimizationContext{<:PriorContext}, right, left, vi) =
    (0, vi)

Suggested change

    - DynamicPPL.tilde_observe(::OptimizationContext{<:PriorContext}, right, left, vi) =
    -     (0, vi)
    - DynamicPPL.dot_tilde_observe(::OptimizationContext{<:PriorContext}, right, left, vi) =
    -     (0, vi)
    + DynamicPPL.tilde_observe(ctx::OptimizationContext{<:PriorContext}, args...) =
    +     DynamicPPL.tilde_observe(ctx.context, args...)
    + DynamicPPL.dot_tilde_observe(ctx::OptimizationContext{<:PriorContext}, args...) =
    +     DynamicPPL.dot_tilde_observe(ctx.context, args...)

I think falling back on the PriorContext implementation is a bit neater. Result is the same.
