No Noise #555

Closed · tinop99 opened this issue Mar 15, 2023 · 13 comments

tinop99 commented Mar 15, 2023

Hi, we would like to perform parameter estimation for an ODE model without considering any stochastic effects.
Is it possible to define a PEtab problem without a noise parameter, or to set the noise to zero in some way?

dilpath (Member) commented Mar 16, 2023

Hi, could you please define the objective function that you'd like to use?

If least squares is fine, you can simply set the following for all observables: noiseDistribution=normal and noiseFormula=1. A simulator that uses the negative log-likelihood (e.g. AMICI) will then produce a least squares objective.
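For illustration, here is a minimal sketch of what such observables-table entries could look like, written out with pandas; the observable IDs and formulas are hypothetical placeholders:

import pandas as pd

# Hypothetical observables table that yields a least-squares objective:
# constant unit noise, normally distributed, for every observable.
observable_df = pd.DataFrame({
    "observableId": ["obs_a", "obs_b"],               # placeholder IDs
    "observableFormula": ["species_a", "species_b"],  # placeholder model expressions
    "noiseFormula": [1, 1],                           # constant noise of 1
    "noiseDistribution": ["normal", "normal"],
})
observable_df.to_csv("observables.tsv", sep="\t", index=False)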

tinop99 (Author) commented Mar 16, 2023

Hi again, thanks for your quick response! We're using pyPESTO (and AMICI) to estimate some parameters of our model, which we imported as a PEtab problem. As the objective function we are simply using pyPESTO's create_objective() function, which creates an AMICI objective. We are not entirely sure which exact objective function this yields (we were a bit confused by the pyPESTO / AMICI documentation, since we are new to optimization).
We tried your advice, but now all estimated parameters end up at their lower bound values. Do you have any idea why that could be?
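For reference, a rough sketch of the setup described above; "problem.yaml" stands in for our actual PEtab YAML file:

import pypesto.petab

# Import the PEtab problem and build the AMICI-based objective.
importer = pypesto.petab.PetabImporter.from_yaml("problem.yaml")  # placeholder path
objective = importer.create_objective()      # AMICI objective (negative log-likelihood)
pypesto_problem = importer.create_problem()  # pyPESTO problem with bounds and scales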

dilpath (Member) commented Mar 16, 2023

There's documentation on the objective function in the PEtab [1] and AMICI [2] docs.

As for why the parameters are estimated to be at the lower bound, there could be many reasons, so maybe the best thing to do is to have a positive control, i.e. set up a PEtab problem for which you know what the parameters should be (see the sketch after the links below). pyPESTO has a guide on how to do this [3]. If optimization of the synthetic problem also fails, there may be a bug in your scripts. If the synthetic problem works, the issue is probably in your PEtab problem.

[1] https://petab.readthedocs.io/en/stable/documentation_data_format.html#noise-distributions
[2] https://amici.readthedocs.io/en/latest/generated/amici.import_utils.html#amici.import_utils.noise_distribution_to_cost_function
[3] https://github.com/ICB-DCM/pyPESTO/blob/main/doc/example/synthetic_data.ipynb
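In rough outline, the positive control could look like this (a sketch in the spirit of [3]; "problem.yaml" is a placeholder, and this assumes amici.petab_objective.rdatas_to_measurement_df is available in your AMICI version):

import amici.petab_objective
import pypesto.petab

importer = pypesto.petab.PetabImporter.from_yaml("problem.yaml")  # placeholder path
pypesto_problem = importer.create_problem()

# Simulate at known "true" parameter values; here, the nominal values of the
# free parameters on their parameter scale.
x_true = importer.petab_problem.x_nominal_free_scaled
ret = pypesto_problem.objective(x_true, return_dict=True)

# Convert the simulations into a synthetic measurement table with the same layout
# as the original, then rebuild the problem from that table and check whether
# optimization recovers x_true.
synthetic_df = amici.petab_objective.rdatas_to_measurement_df(
    ret["rdatas"], importer.create_model(), importer.petab_problem.measurement_df
)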

tinop99 (Author) commented Mar 23, 2023

Hi, the positive control worked to some extent. Building the AMICI model where (only) the 4 varied parameters are estimated works perfectly (all 4 parameter values are recovered successfully). However, if we want to estimate more than those 4 parameters, the problem occurs again (all estimated parameters are estimated to their lower bound). Increasing maxSteps helps in some cases but not in others. Setting 'model.requireSensitivitiesForAllParameters()' also helps sometimes. We use 100 starts for the optimization and have tried various parameter combinations and settings. The error message reads "AMICI forward simulation failed at t = 0: AMICI failed to integrate the forward problem", which seems to be the problem.

dilpath (Member) commented Mar 23, 2023

However, if we want to estimate more than those 4 parameters, the problem occurs again (all estimated parameters are estimated to their lower bound).

Presumably, objective_function(synthetic_parameters) is the global optimum (i.e. the minimum of the negative log-likelihood). Could you verify that objective_function(synthetic_parameters) < objective_function(lower_bounds)? If so, you could initialize optimization at the synthetic parameters; if optimization is working properly, the optimizer should not move away from them.
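For example (a sketch; this assumes synthetic_parameters is given on the objective's parameter scale and ordered like the free parameters):

fval_true = pypesto_problem.objective(synthetic_parameters)
fval_lb = pypesto_problem.objective(pypesto_problem.lb)
print(fval_true, fval_lb)  # expect fval_true < fval_lb if the synthetic parameters are the optimum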

Increasing maxSteps helps in some cases but not in others.

If you are seeing this often, you could try relaxing the simulation tolerances. For example, with pyPESTO's AMICI objective:

pypesto_problem.objective.amici_solver.setAbsoluteTolerance(1e-10)
pypesto_problem.objective.amici_solver.setRelativeTolerance(1e-8)

Setting 'model.requireSensitivitiesForAllParameters()' also helps sometimes.

This might indicate that one sensitivity method works better for your problem than another; for example, if you are using the forward method, try switching to the adjoint method:

import amici  # needed for the SensitivityMethod enum

pypesto_problem.objective.amici_solver.setSensitivityMethod(amici.SensitivityMethod.adjoint)

The error message reads "AMICI forward simulation failed at t = 0: AMICI failed to integrate the forward problem", which seems to be the problem.

There could be many causes of this, could you provide the full log?

tinop99 (Author) commented Mar 23, 2023

Thanks a lot! We'll try what you've described.
Attached is a log from a recent synthetic data run with the described problem.
Petab.o6676198.txt

dilpath (Member) commented Mar 23, 2023

Some of these errors shouldn't be ignored, e.g.

2023-03-23 12:11:50.664 - amici.swig_wrappers - WARNING - [model1_data1][AMICI:NaN] AMICI encountered a NaN value for x0_rdata[0] (R)
2023-03-23 12:11:50.664 - amici.swig_wrappers - WARNING - [model1_data1][AMICI:NaN] AMICI encountered a NaN value for p[0] (N)
2023-03-23 12:11:50.664 - amici.swig_wrappers - WARNING - [model1_data1][AMICI:NaN] AMICI encountered a NaN value for x0[0] (R)

For example,

AMICI encountered a NaN value for p[0]

says that somehow you supplied a NaN as a parameter value.

One way to debug this is to get a parameter vector that causes this issue, then see where the NaN is coming from. Since you're seeing this during optimization, you could take the initial vectors from optimization and try simulating some of them. Here's how you can get a list of those points:

start_x0_list = [start.x0 for start in pypesto_result.optimize_result.list]

and here's how to simulate one of them:

x = start_x0_list[13]
result = pypesto_problem.objective(x[pypesto_problem.x_free_indices], sensi_orders=(0,1), return_dict=True)
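You could then inspect the returned dictionary to see where the NaN originates (a sketch, assuming return_dict exposes 'fval', 'grad', and the AMICI ReturnData objects under 'rdatas'):

print(result["fval"], result["grad"])
rdata = result["rdatas"][0]
print(rdata.status)  # AMICI status code; nonzero indicates a failed simulation
print(rdata.x0)      # initial state vector; NaNs here point at the initial conditions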

tinop99 (Author) commented Mar 29, 2023

Hello again! When simulating x0 and x (the lower-bound estimate), both return fval=inf and a NaN gradient array:

x0 = [4.43052460e+03 4.74989700e-02 1.76712096e+03 2.69725482e+03 9.80273433e-01 9.54331653e-04 4.52053080e+03 3.58057466e+03 1.86169367e+03 … 2.50011384e+03 1.25290362e+03 3.24606955e+03 3.48927058e+03 1.52702484e+03 1.00000000e+01 1.00000000e+00 1.00000000e+00]
{'fval': inf, 'grad': array([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]), 'rdatas': [<ReturnDataView(...)>]}

2023-03-29 14:39:55.249 - amici.swig_wrappers - WARNING - [model1_data1][AMICI:NaN] AMICI encountered a NaN value for sxdot[0] (0)
2023-03-29 14:39:55.249 - amici.swig_wrappers - DEBUG - [model1_data1][CVODES:CVode:OTHER] AMICI ERROR: in module CVODES in function CVode : The sensitivity right-hand side routine failed at the first call.
2023-03-29 14:39:55.249 - amici.swig_wrappers - ERROR - [model1_data1][FORWARD_FAILURE] AMICI forward simulation failed at t = 0: AMICI failed to integrate the forward problem

x = [5.00000000e+01 4.74989700e-02 1.00000000e-08 1.00000000e-08 9.80273433e-01 9.54331653e-04 5.00000000e+03 1.00000000e-08 1.00000000e-08 … 5.00000000e+03 5.00000000e+03 5.00000000e+03 5.00000000e+03 1.00000000e-08 1.00000000e+01 1.00000000e+00 1.00000000e+00]
{'fval': inf, 'grad': array([nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]), 'rdatas': [<ReturnDataView(...)>]}
I'm really confused as to why fval=inf here, even though print(optimize.result[0]) gives fval=28320.509 and fval0=64060137386671.375 (output.txt), and the fitted curve can still be visualized. Switching the sensitivity method or setting different tolerances didn't help either.
However, when narrowing the parameter bounds, some parameters were estimated away from the lower bounds, but still not close to the true values under which the synthetic dataset was created. Do you maybe have an idea of how to debug this further?

dilpath (Member) commented Mar 30, 2023

Thanks! Are you able to share the scripts for optimization and the subsequent simulation?

Since a single optimizer start in your log takes only about 19 seconds, I would suggest not saving and then reloading the result, to avoid this error

WARNING: You are loading a problem.
This problem is not to be used without a separately created objective.

by simulating immediately after optimization completes, in the same script. If that doesn't work, we could have a short Zoom call sometime next week, if that suits you, since then it would be easier for me to help debug.

tinop99 (Author) commented Mar 30, 2023

Hi, we tried simulating directly after optimization but got the same result. Strangely enough, if we simulate directly after optimizing in the working case, the simulation also results in fval=inf (however, if we load the results from a file, this error doesn't appear in the working cases). I'll attach our whole model setup with the synthetic dataset (model.tar.gz).
Thank you very much for the kind offer of a zoom call! We will gladly take you up on that offer if we cannot resolve the issue.

dilpath (Member) commented Apr 1, 2023

Could you try optimizing the parameters on log10 scale? You can simply specify this in the PEtab parameters table (parameterScale column). This appears to resolve many instances of such issues.
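For illustration, a minimal sketch of such a parameters table, written out with pandas; the parameter IDs and values are hypothetical placeholders (in PEtab, bounds and nominal values stay on linear scale):

import pandas as pd

parameter_df = pd.DataFrame({
    "parameterId": ["k_1", "k_2"],         # placeholder IDs
    "parameterScale": ["log10", "log10"],  # estimate on log10 scale
    "lowerBound": [1e-8, 1e-8],            # linear-scale bounds
    "upperBound": [5e3, 5e3],
    "nominalValue": [1.0, 1.0],
    "estimate": [1, 1],
})
parameter_df.to_csv("parameters.tsv", sep="\t", index=False)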

tinop99 (Author) commented Apr 4, 2023

That seems to resolve the lower-bound problem, thanks! The estimation now recovers some but not all of the true parameter values, and the NaN-value error messages still appear in the output. Maybe our parameter space is just too big, or we need a lot more starts.

dilpath (Member) commented Apr 6, 2023

Nice! Yes, this may just be an issue of non-identifiability given the data, i.e. if you were to do a practical identifiability analysis (e.g. naively with profile likelihoods or MCMC sampling in pyPESTO), you might see that the missing true parameters are still covered by the resulting intervals, in which case more starts won't help.
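A rough sketch of computing profile likelihoods with pyPESTO (assuming pypesto_problem and an existing multi-start result object):

import pypesto.optimize
import pypesto.profile

optimizer = pypesto.optimize.ScipyOptimizer()
result = pypesto.profile.parameter_profile(
    problem=pypesto_problem,
    result=result,
    optimizer=optimizer,
)
# Flat or very wide profiles for a parameter suggest it is not identifiable from the data.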

dweindl closed this as completed May 23, 2024