Code, based on the PyTorch framework, for reproducing the experiments in Continuous vs. Discrete Optimization of Deep Neural Networks.
Tested with Python 3.9.2. Install the required packages with:
pip install -r requirements.txt
- To enable CUDA (GPU) support and speed up running time, install PyTorch version 1.8.1 with CUDA compatible with your system from here.
The following command runs an experiment and plots the resulting graph (in this example, with the fully connected linear model):
python experiment_runner.py \
--experiment "fully_connected_linear" \
--epochs 10000 \
--learning_rate 0.001
- The "experiment" run argument should refer to one of the following models: "fully_connected_linear", "fully_connected_relu", "conv_subsample" or "conv_maxpool".
For the example above we get the following plot:
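The paper compares discrete optimization (gradient descent) against its continuous-time limit (gradient flow). The repository's experiments do this for deep networks; the snippet below is only a minimal, self-contained sketch of the same idea on a toy quadratic objective, where gradient flow is approximated by Euler integration with a much finer step size (all names and values here are illustrative, not part of the repository's API):

```python
import numpy as np

# Toy quadratic objective f(w) = 0.5 * w^T A w, with gradient A w.
A = np.diag([1.0, 10.0])
w0 = np.array([1.0, 1.0])

def loss(w):
    return 0.5 * w @ A @ w

def gradient_descent(w, eta, steps):
    # Discrete optimization: steps of size eta.
    for _ in range(steps):
        w = w - eta * (A @ w)
    return w

def gradient_flow(w, t, dt=1e-4):
    # Continuous-time limit, approximated by Euler integration
    # with a step size dt much smaller than eta.
    for _ in range(int(t / dt)):
        w = w - dt * (A @ w)
    return w

eta, steps = 0.01, 1000
w_gd = gradient_descent(w0, eta, steps)
w_gf = gradient_flow(w0, t=eta * steps)  # match total "time" eta * steps
print("GD loss:", loss(w_gd), "GF loss:", loss(w_gf))
```

On this well-conditioned toy problem the two trajectories end close together; the paper's experiments quantify how well this correspondence holds for actual deep network training.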
To cite the paper, you can use:
@inproceedings{elkabetz2021continuous,
  title={Continuous vs. Discrete Optimization of Deep Neural Networks},
  author={Elkabetz, Omer and Cohen, Nadav},
  booktitle={Advances in Neural Information Processing Systems},
  year={2021}
}