Faster Predict-and-Optimize (FPO) with Davis-Yin Splitting (DYS)

In many practical settings, a combinatorial problem must be repeatedly solved with similar, but distinct, parameters. Yet the parameters $w$ are not directly observed; only contextual data $d$ that correlates with $w$ is available. It is tempting to use a neural network to predict $w$ given $d$, but training such a model requires reconciling the discrete nature of combinatorial optimization with the gradient-based frameworks used to train neural networks. One approach to overcoming this issue is to consider a continuous relaxation of the combinatorial problem. While such existing approaches have proven highly effective on small problems (10-100 variables), they do not scale well to large problems. In this work, we show how recent results in operator splitting can be used to design such a system that is easy to train and scales effortlessly to problems with thousands of variables.
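
For intuition, the sketch below (not the repository's code) shows the kind of forward pass such an approach can use: a Davis-Yin three-operator splitting iteration that approximately solves a regularized relaxation, minimizing w^T x + (gamma/2)||x||^2 over {x : Ax = b, x >= 0}. The problem data (A, b, w), the regularization weight gamma, and the step size alpha are illustrative choices only, not values from the paper.

import numpy as np

def dys_solve(w, A, b, gamma=0.1, alpha=1.0, iters=200):
    """Approximately solve  min_x  w^T x + (gamma/2)||x||^2
       subject to  Ax = b, x >= 0,  via Davis-Yin splitting."""
    # Split the objective as f = indicator{x >= 0}, g = indicator{Ax = b},
    # and h(x) = w^T x + (gamma/2)||x||^2 (smooth).
    AAt_inv = np.linalg.inv(A @ A.T)                      # used by the affine projection
    proj_affine = lambda x: x - A.T @ (AAt_inv @ (A @ x - b))
    prox_nonneg = lambda x: np.maximum(x, 0.0)            # prox of indicator{x >= 0}
    grad_h = lambda x: w + gamma * x                      # gradient of the smooth term

    z = np.zeros(A.shape[1])
    for _ in range(iters):
        x_f = prox_nonneg(z)                                    # first prox step
        x_g = proj_affine(2.0 * x_f - z - alpha * grad_h(x_f))  # second prox step
        z = z + (x_g - x_f)                                     # Davis-Yin update
    return prox_nonneg(z)

# Toy usage: 4 variables, one budget constraint (entries sum to 1).
A = np.ones((1, 4))
b = np.array([1.0])
w = np.array([3.0, 1.0, 2.0, 4.0])       # "predicted" costs; the cheapest item is index 1
print(np.round(dys_solve(w, A, b), 3))   # approximately [0. 1. 0. 0.]

Both steps of the iteration are cheap (a clamp and an affine projection), which is what allows this kind of relaxation-based layer to scale to problems with thousands of variables; during training, gradients with respect to the predicted cost $w$ are propagated through the layer as described in the paper.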

Installation

See src/requirements.txt for standard required packages. In addition, the code in this repository requires the PyEPO package.
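
For example, assuming a standard pip workflow (check the PyEPO repository for its current installation instructions):

pip install -r src/requirements.txt
pip install pyepo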

Quick Start

After installation, type

python -m src.knapsack.train --model_type DYS --num_item 20 --num-knapsack 2

to train our DYS model on a sample knapsack task. To reproduce, for example, our shortest path experiment, type

bash shortest_path_experiment.sh

Publication

Learning to Optimize Integer Linear Programs with Davis-Yin Splitting (arXiv:2301.13395)

@article{mckenzie2023faster,
  title={Faster Predict-and-Optimize with Davis-Yin Splitting},
  author={McKenzie, Daniel and Wu Fung, Samy and Heaton, Howard},
  journal={arXiv preprint arXiv:2301.13395},
  year={2023}
}

See the documentation site for more details.
