Continuous vs. Discrete Optimization of Deep Neural Networks

PyTorch code for reproducing the experiments in the paper Continuous vs. Discrete Optimization of Deep Neural Networks (NeurIPS 2021).

Install Requirements

Tested with Python 3.9.2.

pip install -r requirements.txt
  • To enable CUDA (GPU) acceleration and speed up running time, install PyTorch 1.8.1 with a CUDA build compatible with your system, following the official PyTorch installation instructions.
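After installation, you can confirm that the CUDA build is active with a short check (a minimal snippet; it uses only standard PyTorch APIs):

import torch

# Prints the installed PyTorch version (a CUDA build carries a suffix such as +cu111)
print(torch.__version__)
# True only if a CUDA-enabled build of PyTorch finds a usable GPU
print(torch.cuda.is_available())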

Running the experiments

The following command runs an experiment and plots the resulting graph (in this example, the fully connected linear model):

python experiment_runner.py \
--experiment "fully_connected_linear" \
--epochs 10000 \
--learning_rate 0.001
  • The --experiment argument must be one of the following models: "fully_connected_linear", "fully_connected_relu", "conv_subsample", or "conv_maxpool". A sketch for running all four in sequence is shown below.
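As a convenience, all four experiments could be launched back to back with a small driver script such as the following sketch. It relies only on the command-line interface shown above; the epoch count and learning rate are copied from the example and may need tuning per model:

import subprocess

MODELS = [
    "fully_connected_linear",
    "fully_connected_relu",
    "conv_subsample",
    "conv_maxpool",
]

for model in MODELS:
    # Invoke experiment_runner.py exactly as in the example command above
    subprocess.run(
        [
            "python", "experiment_runner.py",
            "--experiment", model,
            "--epochs", "10000",
            "--learning_rate", "0.001",
        ],
        check=True,  # abort the loop if any experiment fails
    )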

Example of plot

For the example above we get the following plot:

[plot image: results for the fully_connected_linear experiment]

Citation

To cite the paper, you can use:

@inproceedings{elkabetz2021continuous,
  title={Continuous vs. Discrete Optimization of Deep Neural Networks},
  author={Elkabetz, Omer and Cohen, Nadav},
  booktitle={Advances in Neural Information Processing Systems},
  year={2021}
}
