
Ultranest hangs upon completion #93

Open

tomkimpson opened this issue Apr 15, 2023 · 1 comment

@tomkimpson
  • UltraNest version: 3.5.7
  • Python version: 3.9
  • Operating System: macOS 12.3.1

Description

I am testing out UltraNest, trying to do inference on a simple two-parameter likelihood.

UltraNest converges well; I can see from the live-point display that the live points have converged very close to the true values, e.g.


Mono-modal Volume: ~exp(-61.48)   Expected Volume: exp(-62.36) Quality: ok

phi0 : +0.0000031416|                            +0.1999999999  *  +0.2000000001                                                                                                                                                                |+1.0000000000
omega: +0.000000010000046051808|                                                                                  +0.000000499999999999999  *  +0.000000500000000000001                                                                                    |+0.000000999995394840417

(true values are 0.20 for phi0 and 5e-7 for omega).

At this point, rather than wrapping up, the run just hangs and never completes.

What I Did

A pseudo-example is as follows:

import numpy as np
import ultranest


def prior_transform(quantile_cube):
    """Map a point from the unit hypercube to the physical parameter space."""
    transformed_parameters = np.empty_like(quantile_cube)

    # first parameter: a uniform distribution on [0, pi]
    transformed_parameters[0] = 0.0 + np.pi * quantile_cube[0]

    # second parameter: a log-uniform distribution on [1e-8, 1e-6]
    transformed_parameters[1] = 10**(-8 + 2 * quantile_cube[1])

    return transformed_parameters


def my_likelihood(params):
    # guessed_parameters is a dictionary defined globally
    p_copy = guessed_parameters.copy()

    phi_var, omega_var = params
    p_copy["phi0_gw"] = phi_var
    p_copy["omega_gw"] = omega_var

    # KF.likelihood is a method of the class `KF`; it accepts a parameter
    # dictionary and returns a likelihood
    ll = KF.likelihood(p_copy)
    return ll


param_names = ["phi0", "omega"]

print("Running sampler")
sampler = ultranest.ReactiveNestedSampler(param_names, my_likelihood, prior_transform)
result = sampler.run()
sampler.print_results()

I suspect this could be due to the likelihood surface being extremely narrow at the true parameter values. In this case, is there a way to force UltraNest to complete gracefully, for example by relaxing the termination criteria as sketched below?
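
For example, would relaxing the stopping criteria and capping the number of likelihood calls be the intended way to do this? A rough sketch of what I have in mind, assuming frac_remain and max_ncalls are the relevant run() keywords:

# Sketch: looser termination so the run stops even if the remaining-evidence
# tolerance is never reached (the numbers are only illustrative).
result = sampler.run(
    frac_remain=0.5,     # stop once at most half of the integral remains
    max_ncalls=400000,   # hard cap on the number of likelihood evaluations
)
sampler.print_results()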

@JohannesBuchner
Owner

Looks like you are probably hitting the limits of floating-point precision. Between +0.1999999999 and +0.2000000001 there are not many representable values, so the proposal procedure may have difficulty finding new, unique points with a higher likelihood. You could try reparametrizing the problem. Is it possible that the likelihood is a bit unrealistic (too informative)?
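
For example, one possible reparametrization (an untested sketch; phi0_ref, omega_ref and the half-widths are placeholders you would choose yourself, not UltraNest settings) is to sample small offsets from reference values, since doubles are spaced much more finely near zero than near 0.2 or 5e-7:

import numpy as np

# Untested sketch: sample offsets delta_phi0, delta_omega instead of phi0 and
# omega themselves. The offsets live near zero, where floating-point spacing
# is much finer, so distinct proposals stay distinguishable.
phi0_ref, omega_ref = 0.2, 5e-7       # placeholder reference values
dphi0_max, domega_max = 1e-8, 1e-19   # placeholder half-widths of the offset priors

def prior_transform_offsets(u):
    p = np.empty_like(u)
    p[0] = dphi0_max * (2 * u[0] - 1)    # delta_phi0, uniform in [-dphi0_max, dphi0_max]
    p[1] = domega_max * (2 * u[1] - 1)   # delta_omega, uniform in [-domega_max, domega_max]
    return p

Whether this helps depends on how KF.likelihood uses the parameters: if it can only take absolute phi0 and omega values, reconstructing phi0_ref + delta_phi0 in double precision brings back the original resolution limit.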
