
tracing #16

Open
HarlanH opened this issue Jun 5, 2014 · 10 comments

Comments

HarlanH commented Jun 5, 2014

Hi, this package works well for something I'm working on! But I wonder if it'd be possible to get support for tracing, à la the store_trace/show_trace options in Optim.jl. I can sort of fake it by adding print statements to my evaluation function, but that's obviously not a great solution.

Skimming through the NLopt docs, I'm not actually sure if this is possible, but if it is, it'd be a great-to-have. Thanks!

stevengj (Member) commented Jun 5, 2014

The philosophy of NLopt has always been that this sort of thing is best implemented by just adding a line or two to your objective function. (Either to print things or to store them in a global or curried parameter.) What could NLopt do for you that this couldn't?

HarlanH (Author) commented Jun 5, 2014

Well, the package could make it as easy as just adding an option, but yes, it wouldn't be that difficult to write by hand. I'll do that for now. But I do think it's a good feature that most other optimization packages that I've seen support.

stevengj (Member) commented Jun 5, 2014

I'm still struggling to see what is to be gained by supporting this. How is something like:

trace_vals!(opt, true)
....
vals = get_trace_vals(opt)

so much easier than just adding:

push!(vals, val)

to your objective function, where vals is a global or a parameter?
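For concreteness, the "parameter" approach can avoid globals entirely by capturing the trace vector in a closure. This is a minimal sketch assuming the NLopt.jl `Opt`/`min_objective!`/`optimize` API; the objective here is an illustrative quadratic, and the names `traced_objective`, `f`, and `vals` are just examples:

```julia
using NLopt

# Build an objective that records every evaluation in `vals`,
# captured by a closure instead of a global.
function traced_objective()
    vals = Float64[]
    function f(x::Vector, grad::Vector)
        if length(grad) > 0
            grad .= 2 .* (x .- 1)    # gradient of the example objective
        end
        val = sum(abs2, x .- 1)      # example objective: ||x - 1||^2
        push!(vals, val)             # the "line or two" of tracing
        return val
    end
    return f, vals
end

f, vals = traced_objective()
opt = Opt(:LD_MMA, 2)
xtol_rel!(opt, 1e-6)
min_objective!(opt, f)
minf, minx, ret = optimize(opt, [0.0, 0.0])
# `vals` now holds the objective value at every evaluation
```

Turning tracing on or off is then a matter of which objective you pass to `min_objective!`, with no changes to the solver interface itself.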

HarlanH (Author) commented Jun 5, 2014

It's the "global or parameter" thing and the need to change (or wrap) my objective function. Globals are problematic for a variety of reasons, of course, and parameters require writing a bit of extra code in each case. And it's trickier to turn tracing on or off.

If the goal is to make it easy for people to write an objective function and optimize it while being able to watch the performance of the algorithm, having NLopt manage the process is going to be easier than requiring users to write the boilerplate.

Has every other optimization system under the sun, including Optim.jl, made a mistake somehow? I don't understand where you're coming from.


mlubin (Member) commented Aug 1, 2014

Another use case for this is if you're calling NLopt from JuMP. In that case, JuMP generates the objective function automatically, and the user doesn't have the ability to drop in arbitrary print statements.

stevengj (Member) commented Aug 2, 2014

@mlubin, but in that case shouldn't tracing be added to the JuMP interface?

mlubin (Member) commented Aug 2, 2014

NLopt is the only solver at this point that doesn't have some sort of informational printouts. Also, this output typically includes relevant algorithmic information like primal/dual infeasibilities and complementarity in interior-point methods, so that's not something we could do from JuMP in a generic way.

stevengj (Member) commented Aug 2, 2014

I'm still confused; if you are using the JuMP interface, how would you hook into an NLopt-specific method for printouts? Shouldn't there be a solver-independent JuMP interface to request "informational printouts", tracing, etc.?

mlubin (Member) commented Aug 2, 2014

Currently, anything solver-specific is controlled by passing options to the MathProgBase solver object, so in this case it would be something like NLoptSolver(algorithm=:LD_MMA, verbose=true). It's hairier than it seems to map options at the JuMP level in a generic way, but it's an open issue: jump-dev/JuMP.jl#91.

stevengj (Member) commented Aug 5, 2014

Since JuMP generates the objective function for you, and you can't insert tracing yourself, it seems like this should be a JuMP issue. You'd be better off adding options to JuMP to generate an objective function that does printouts, stores a trace of the objective-function values, etcetera, in a common way across backends, rather than having each backend do something completely different.
