sawdown

Flexibly customizable mathematical optimization


sawdown is an optimization framework designed for constrained optimization and integer programs. Current features include:

  1. Unconstrained optimization
  2. Linearly-constrained optimization, with both equality and inequality constraints
  3. A mixed-integer program solver via branch-and-bound

API

Configuration

Experimental: Symbolic derivatives

sawdown comes with built-in symbolic derivation. You define a function containing the formulation of the objective, then pass it to .objective_symbolic(), like so:

import numpy as np

import sawdown as sd
from sawdown import ops

a = np.arange(15).reshape((3, 5))
b = np.ones((3,), dtype=float)

def _objective(x, y):
    # Least-squares-style cost on x: sum of squares of (a @ x + b).
    x_cost = ops.matmul(a, x) + b[:, None]
    x_cost = ops.sum(ops.square(x_cost))
    # A small quadratic penalty on y.
    y_cost = 0.2 * ops.sum(ops.square(y))
    return x_cost + y_cost

optimizer = sd.FirstOrderOptimizer().objective_symbolic(_objective, var_dims=(5, 4)) \
    .fixed_initializer(np.ones((9, ), dtype=float)) \
    .steepest_descent().decayed_steplength(decay_steps=20, initial_steplength=1e-3) \
    .stop_after(100).stop_small_steps().epsilon(1e-5).stream_diary('stdout')

solution = optimizer.optimize()

# The solution (`solution.x`) will be a 9-dimensional vector, as specified by `var_dims=(5, 4)` and
# the way `_objective(x, y)` takes `x` and `y` as its arguments.

In its current state, this feature is limited: it supports very few operations and has less-than-ideal performance. You can, of course, compute derivatives with another symbolic or automatic differentiation library and plug the result in via .objective_instance(), which is more efficient.
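
A rough sketch of that route is below, using central finite differences as a stand-in for an external derivative library. Note that the object passed to .objective_instance(), and the method names objective() and deriv_variables(), are assumptions made for illustration, not documented sawdown API.

import numpy as np

import sawdown as sd

class FiniteDiffObjective:
    # Hypothetical adapter: the method names `objective` and
    # `deriv_variables` are assumptions about what `.objective_instance()`
    # expects, not documented sawdown API.
    def __init__(self, f, h=1e-6):
        self._f = f
        self._h = h

    def objective(self, x):
        return self._f(x)

    def deriv_variables(self, x):
        # Central finite differences in place of symbolic derivatives.
        grad = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = self._h
            grad[i] = (self._f(x + e) - self._f(x - e)) / (2.0 * self._h)
        return grad

optimizer = sd.FirstOrderOptimizer() \
    .objective_instance(FiniteDiffObjective(lambda x: np.sum(np.square(x)))) \
    .fixed_initializer(np.ones((9,), dtype=float)) \
    .steepest_descent().decayed_steplength(decay_steps=20, initial_steplength=1e-3) \
    .stop_after(100).stop_small_steps().epsilon(1e-5).stream_diary('stdout')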

The idea behind symbolic derivatives in sawdown is to support optimizers that utilize second-order derivatives, but once that happens (although it is not clear when), it will likely be done in a different way.

Remarks

Precision

Most numerical optimization algorithms depend on floating-point operations, which have a limited level of precision. In sawdown, you can control the precision via the .config() function:

import sawdown

r = sawdown.FirstOrderOptimizer().config(epsilon=1e-24)

In general, a larger epsilon lets the algorithm stop sooner, but makes the solution less exact. With epsilon = 1e-24, the solution, if any, is roughly precise up to 1e-12. The rationale can be seen in sawdown.stoppers.
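
A rough illustration of that square-root relationship, assuming (as sawdown.stoppers suggests) that the stopping criteria compare squared quantities, such as the squared step norm, against epsilon:

import numpy as np

epsilon = 1e-24

# If a stopper tests ||dx||**2 < epsilon, the last accepted step has a norm
# of at most sqrt(epsilon), so the solution is located to about 1e-12.
print(np.sqrt(epsilon))  # 1e-12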

This is different from the actual machine precision which, in Python, is determined by the numpy dtype you work with. In general, if the machine precision is about 1e-48 (which is not the smallest it can be), the smallest sensible value of epsilon in sawdown is its square root, 1e-24.
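
If in doubt, you can inspect the precision characteristics of the numpy dtype you are using (a general numpy fact, not sawdown-specific):

import numpy as np

# Machine epsilon and the smallest positive normal number for float64.
print(np.finfo(np.float64).eps)   # ~2.22e-16
print(np.finfo(np.float64).tiny)  # ~2.23e-308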

For most practical problems, epsilon = 1e-7 is small enough in sawdown.

Initialization

For linear constraints, sawdown can pick the initial point automatically: if your problem has at least one linear constraint, you don't need to specify an initializer, as in the sketch below.
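
The sketch reuses the builder calls from the example above; the constraint-declaration method name linear_inequality_constraints() is a hypothetical placeholder, not confirmed API.

import numpy as np

import sawdown as sd
from sawdown import ops

# Constrain x to the half-space a @ x <= b.
a = np.ones((1, 5))
b = np.array([1.0])

# `linear_inequality_constraints` is a hypothetical method name used for
# illustration. With a linear constraint declared, no .fixed_initializer()
# call is needed: sawdown initializes from the constraints.
optimizer = sd.FirstOrderOptimizer() \
    .objective_symbolic(lambda x: ops.sum(ops.square(x)), var_dims=(5,)) \
    .linear_inequality_constraints(a, b) \
    .steepest_descent().decayed_steplength(decay_steps=20, initial_steplength=1e-3) \
    .stop_after(100).stop_small_steps().epsilon(1e-5).stream_diary('stdout')

solution = optimizer.optimize()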

Why sawdown

Below is a photo of a sâu đo (/ʂəw ɗɔ/), Vietnamese for the inchworm. It moves similarly to how most numerical optimization algorithms work: one step at a time, taking the shortest most-effective step.

[photo of a sâu đo]

With a little play on words, it becomes sawdown. I later realized it would have been better called showdoor, but changes are costly.
