ERROR: nlopt failure #33
Which algorithm are you using? At some point, you will run into roundoff-error limitations, although in that case NLopt should ideally throw a "roundoff-limited" exception.
LD_LBFGS. I've never encountered a "roundoff-limited" exception, only "nlopt failure".
Do you have a simple test case that illustrates this?
I don't, unfortunately. I realize that makes it pretty difficult to diagnose the problem without one. The problem only appears for fairly complicated optimization problems. If I can recreate it without a lot of optimization variables, I'll post the code here.
See also stevengj/nlopt#28 (a vaguely similar issue with SLSQP, which I'm also having difficulty reproducing).
Using NLopt (the LD_LBFGS solver in Julia), I am receiving an nlopt failure error. I suspect the options are not set properly. How can I check that the option parameters I pass are actually being set correctly? I ask because I am setting fairly large values, as below.
But what I see is that, while the objective is fixed up to 6 digits, the solution up to 9 digits, and the gradient is small (on the order of 1e-7 to 1e-15), the optimization does not stop until it throws an error,
which gives no clue as to why the failure happens.
If the options were incorrect it would throw a different exception. The BFGS failures are a bit harder to diagnose (in part because the Fortran BFGS code I am calling is rather opaque), but they usually stem from your gradient not being accurate enough. Do you have an analytical gradient? If so, have you checked it against a finite-difference approximation?
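A gradient check like the one suggested above can be sketched in a few lines. This is an illustrative example, not code from this thread: `f` and `grad_f` here are a stand-in 2D Rosenbrock objective, and `check_gradient` compares each analytic component against a central-difference estimate.

```python
# Sketch: verify an analytic gradient against a central-difference
# approximation before handing it to NLopt. Replace f/grad_f with
# your own objective and gradient.

def f(x):
    # Stand-in objective: 2D Rosenbrock
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def grad_f(x):
    return [
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ]

def check_gradient(f, grad_f, x, h=1e-6, tol=1e-5):
    """Compare grad_f(x) componentwise to a central difference of f."""
    analytic = grad_f(x)
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        numeric = (f(xp) - f(xm)) / (2.0 * h)
        rel = abs(analytic[i] - numeric) / max(1.0, abs(numeric))
        if rel > tol:
            return False, i  # mismatched component
    return True, None

ok, bad_index = check_gradient(f, grad_f, [0.7, -1.3])
print(ok)  # → True
```

Checking at several points (including points far from the optimum, where curvature is large) catches sign errors and missing terms that a single test point can miss.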
Yes, analytical gradients are used, and I have checked their correctness against Calculus.gradient.
I'm getting this problem too in my complicated optimization. I'm using LBFGS and analytic gradients. I've never seen a roundoff error, only "nlopt failure -1". It goes away if I relax ftol_abs. Sorry I don't have a simple test case.
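The connection between a tight ftol_abs and these failures can be illustrated without NLopt itself. NLopt's absolute f-tolerance test stops when the change in objective value between iterations drops below ftol_abs; near the minimum, successive values differ only by roundoff noise (around eps times |f|), so a tolerance below that noise floor can never be satisfied and the line search fails first. A minimal sketch of that test (my own illustration, not NLopt's actual code):

```python
def ftol_satisfied(f_old, f_new, ftol_abs):
    """Mimic an absolute f-tolerance stopping test: stop when the
    change in objective value between iterations is below ftol_abs."""
    return abs(f_new - f_old) < ftol_abs

# Near a minimum, successive objective values differ only by roundoff
# noise on the order of machine epsilon times |f|.
f_old = 1.2345678901234567
noise = abs(f_old) * 2.0 ** -52   # roughly one ulp for this value
f_new = f_old + noise

print(ftol_satisfied(f_old, f_new, 1e-8))   # → True: loose tolerance triggers
print(ftol_satisfied(f_old, f_new, 1e-16))  # → False: below the noise floor
```

This also suggests why relaxing ftol_abs makes the error go away, and why a "reasonable" tolerance like 1e-8 can still be too tight if the objective's magnitude is large (noise scales with |f|).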
I am using NLopt on the G1 CEC test problem, and I use the lower bounds (which are all in the feasible region) as starting points. However, the error
appears. After running several times it may not appear, but then it appears again. This makes the code unpredictable, so I wonder if it's due to a bug in the nlopt library. Please advise me on how to get past it. Thanks
@reem21090 Can you provide a minimal working example of your code?
Thank you, it's working well now. But I am still suspicious of this unpredictable behavior.
Hi,
For me, I've gotten this error a lot when the initial points are too close to the minimum. So I'll artificially set them somewhere else (e.g. by the
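The workaround described above (moving the starting point away from a suspected minimum) can be done by adding a small random offset and clipping back into the box bounds. A minimal sketch, with hypothetical bounds and scale:

```python
import random

def perturb_start(x0, lb, ub, scale=1e-2, seed=0):
    """Nudge a starting point away from a suspected minimum by a small
    random offset (a fraction `scale` of each box width), clipped back
    into the bounds [lb, ub]."""
    rng = random.Random(seed)
    out = []
    for xi, lo, hi in zip(x0, lb, ub):
        step = scale * (hi - lo) * rng.uniform(-1.0, 1.0)
        out.append(min(hi, max(lo, xi + step)))
    return out

x0 = [1.0, 1.0]                      # suspiciously close to the optimum
lb, ub = [-5.0, -5.0], [5.0, 5.0]    # hypothetical box bounds
print(perturb_start(x0, lb, ub))
```

A fixed seed keeps runs reproducible; if the failure really is sensitive to the start point, trying a few different seeds is a cheap diagnostic.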
I often encounter "ERROR: nlopt failure" when I run NLopt with ftol_abs set to less than 1e-6. Is that behavior to be expected? I'd like to set ftol_abs to about 1e-8. The derivatives I pass to NLopt are all analytic, and I've verified them in test cases by comparing them to results from numeric differentiation. Are these errors likely due to numerical instability, i.e. slight incompatibilities between the objective function's values and its derivatives? Or are there other common causes of nlopt failures? Are there any standard ways to work around this type of error?
Thanks.