Rewritten finite difference logging #283

Open · daringli wants to merge 16 commits into master
Conversation

daringli (Contributor) commented:

Rewrites part of MPIFiniteDifference and FiniteDifference to:

  • Support Jacobian calculations for both scipy.optimize.minimize and scipy.optimize.least_squares. The former needs df/dx_j and the latter dR_i/dx_j, where R_i are the components of the residual vector (a toy sketch of the two conventions follows this list).
  • Separate the logging of the calculated Jacobian from the calculation itself. The logging is now done by a decorator applied to (MPI)FiniteDifference.jac. This lets us log details about the solver that FiniteDifference may not know about.
  • Support several options for logging the Jacobian: the current format (the default), df/dx_j, and dR_i/dx_j (the latter only for least_squares).
  • Make it possible to use finite-difference derivatives with simsopt.solve.serial.serial_solve, and update that function to work with the latest version of simsopt.
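To make the two Jacobian conventions concrete, here is a minimal, self-contained sketch. The toy residuals, objective, and the fd_jac helper are illustrative only, not part of the simsopt API:

```python
import numpy as np
from scipy.optimize import least_squares, minimize

# Toy residual vector R(x) and scalar objective f(x) = sum_i R_i(x)^2.
def residuals(x):
    return np.array([x[0] - 1.0, 2.0 * x[1], x[0] * x[1]])

def objective(x):
    return float(np.sum(residuals(x) ** 2))

def fd_jac(fun, x, eps=1e-7):
    """Forward-difference Jacobian: one row per output, one column per DOF."""
    f0 = np.atleast_1d(fun(x))
    jac = np.empty((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        jac[:, j] = (np.atleast_1d(fun(xp)) - f0) / eps
    return jac

x0 = np.array([2.0, 3.0])
# least_squares expects dR_i/dx_j with shape (n_residuals, n_dofs):
least_squares(residuals, x0, jac=lambda x: fd_jac(residuals, x))
# minimize expects the gradient df/dx_j with shape (n_dofs,), hence the flatten:
minimize(objective, x0, jac=lambda x: fd_jac(objective, x).flatten())
```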

The current format of the Jacobian log only records the points at which the objective was evaluated to compute the Jacobian, not the actual function values at those points. For more expensive objectives (such as those used for direct turbulence optimization), re-evaluating the Jacobian from these points is non-trivial, so the output is not as useful as it could be.

A concern I have is that this pull request currently introduces some "switch-case based programming": the (MPI)FiniteDifference objects and the finite_difference_jac_decorator use if-clauses to adjust their outputs, partly based on the needs of the different solvers.
A different approach would be to embrace that MPIFiniteDifference and FiniteDifference are only meant for use with scipy.optimize.least_squares, and to create a derived class FiniteDifference_minimize that overrides the jac method to flatten the output for scipy.optimize.minimize (sketched below). The logging could then again be done in the FiniteDifference objects themselves, since they would be specific to each solver and could thus print hard-coded solver info. Users of other solvers would then create derived FiniteDifference classes with different solver information. Input and ideas are appreciated.
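For concreteness, a minimal sketch of that derived-class alternative, assuming the base class's jac(x=None) returns the (n_outputs, n_dofs) least-squares Jacobian; the solver_info attribute is a hypothetical placeholder:

```python
from simsopt._core.finite_difference import FiniteDifference


class FiniteDifference_minimize(FiniteDifference):
    """Hypothetical derived class for use with scipy.optimize.minimize."""

    solver_info = "scipy.optimize.minimize"  # hard-coded solver info for the log

    def jac(self, x=None):
        # For a scalar objective the base Jacobian has shape (1, n_dofs);
        # flatten it to the 1D gradient that scipy.optimize.minimize expects.
        return super().jac(x).flatten()
```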

Stefan Buller and others added 14 commits November 3, 2022 17:42
This new objective wraps another objective, but uses precalculated values when evaluating points close to the precalculated points.
The intended usage is to quickly redo parts of an optimization that was interrupted due to, for example, time limits.
… a least squares problem or generic.

Updated the "general" solver in serial.
No longer useful for resetting the least squares optimizer, but that wasn't a good default due to the thousands of residuals in the QS objective.
…licate work and because calling fd.fn() in the logger resulted in the execution freezing on cobra.
daringli marked this pull request as draft January 11, 2023 22:05
… DOFs if the underlying optimizable got updated.
daringli marked this pull request as ready for review January 11, 2023 22:36
codecov bot commented Jan 12, 2023

Codecov Report

Base: 91.52% // Head: 91.28% // Decreases project coverage by 0.23% ⚠️

Coverage data is based on head (f52f8b8) compared to base (2553989).
Patch coverage: 75.83% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #283      +/-   ##
==========================================
- Coverage   91.52%   91.28%   -0.24%     
==========================================
  Files          61       61              
  Lines        9249     9328      +79     
==========================================
+ Hits         8465     8515      +50     
- Misses        784      813      +29     
Flag        Coverage Δ
unittests   91.28% <75.83%> (-0.24%) ⬇️

Flags with carried forward coverage won't be shown.

Impacted Files                              Coverage Δ
src/simsopt/field/magneticfieldclasses.py   96.01% <ø> (ø)
src/simsopt/solve/serial.py                 59.32% <23.07%> (-0.86%) ⬇️
src/simsopt/_core/finite_difference.py      86.62% <80.30%> (-7.24%) ⬇️
src/simsopt/solve/mpi.py                    93.16% <100.00%> (+0.17%) ⬆️



landreman (Contributor) left a comment:

It feels suboptimal to expose finite_difference_jac_decorator and to have to call it explicitly in the solvers. I would think a simpler interface would be to move the verbose keyword to the __init__ method of FiniteDifference and MPIFiniteDifference.
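Something like the following, where the verbose keyword is the proposal (not the current API) and prob is assumed to be an existing least-squares problem:

```python
from scipy.optimize import least_squares
from simsopt._core.finite_difference import FiniteDifference

# prob is assumed to be a simsopt LeastSquaresProblem defined elsewhere;
# verbose here is the suggested constructor keyword, set once up front.
fd = FiniteDifference(prob.residuals, abs_step=1e-7, verbose=True)
result = least_squares(prob.residuals, prob.x, jac=fd.jac)
```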

It also feels suboptimal that the logging functionality is spread between finite_difference_jac_decorator, FiniteDifference, and MPIFiniteDifference. For example, both FiniteDifference and MPIFiniteDifference have an init_log() function (it would be nice to eliminate this redundancy), whereas much of the actual printing to the log file is in finite_difference_jac_decorator. It would be ideal to encapsulate all the logging in one class.

Would it make sense for the jacobian logging functionality to go in a new class, call it FiniteDifferenceLog for now? This way all the variables related to logging like log_file, new_log_file, and log_header_written would be stored by the FiniteDifferenceLog object. One option would be for FiniteDifference and MPIFiniteDifference to each have a FiniteDifferenceLog instance as an attribute. Or it could be the other way around: a FiniteDifferenceLog object could have a fd attribute which is either a FiniteDifference or MPIFiniteDifference instance. I kind of like this last approach.
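A rough sketch of that last approach; the constructor arguments and the log format below are illustrative only, and fd is assumed to expose a jac() method:

```python
import numpy as np


class FiniteDifferenceLog:
    """Owns all logging state (log_file, log_header_written, ...) and wraps a
    FiniteDifference or MPIFiniteDifference instance as its fd attribute."""

    def __init__(self, fd, log_file="jac_log.txt", solver_info=""):
        self.fd = fd
        self.log_file = log_file
        self.solver_info = solver_info
        self.log_header_written = False

    def jac(self, x=None):
        jac = self.fd.jac(x)  # delegate the actual computation
        with open(self.log_file, "a") as f:
            if not self.log_header_written:
                f.write(f"# solver: {self.solver_info}\n")
                self.log_header_written = True
            for row in np.atleast_2d(jac):
                f.write(", ".join(f"{v:.6e}" for v in row) + "\n")
        return jac
```

A solver would then be handed FiniteDifferenceLog(fd, solver_info=...).jac in place of fd.jac.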

BTW, "decorator" has a specific meaning in Python, and finite_difference_jac_decorator is not a decorator in that sense, so I would use a different name.

It would be good to have test coverage of the various cases, e.g. the different options for verbose.

Is the idea of the flatten_out variable to handle non-least-squares problems? All the new if self.flatten_out statements seem suboptimal: it would be best to have a single "logic" instead of two parallel "logics". I feel it would be more elegant to handle scalar objectives by having a grad() function in FiniteDifference and MPIFiniteDifference that returns the gradient instead of the Jacobian, so the non-least-squares algorithm would call grad instead of jac.
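A sketch of that suggestion, assuming the scalar-objective Jacobian comes back with shape (1, n_dofs):

```python
# Added to both FiniteDifference and MPIFiniteDifference:
def grad(self, x=None):
    """Return the gradient df/dx_j as a 1D array for non-least-squares use."""
    return self.jac(x).flatten()

# Non-least-squares algorithms would then pass fd.grad instead of fd.jac, e.g.
#   minimize(objective, x0, jac=fd.grad)
```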
