
adaptive-inertia-adai

The PyTorch Implementation of Adaptive Inertia Methods.

Adaptive Inertia Optimization was proposed in our work:

Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum.

This work was accepted as an ICML 2022 Oral (acceptance rate ~2%).

In this work, we design a novel adaptive optimization method named Adaptive Inertia (Adai), which uses parameter-wise inertia (the momentum hyperparameter as a vector) to accelerate saddle-point escaping and provably selects flat minima as well as SGD does. Adai combines the advantage of Adam in saddle-point escaping with the advantage of SGD in minima selection.
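
For intuition, below is a minimal NumPy sketch of the parameter-wise inertia idea: the momentum coefficient becomes a vector that is close to 1 (high inertia) where the normalized second moment of the gradient is small, i.e. along flat directions. The function name, the clipping, and the exact bias correction are illustrative assumptions for this sketch; see adai_optim.py and the paper for the actual algorithm.

import numpy as np

def adai_like_step(param, grad, m, v, t, lr=1.0, beta0=0.1, beta2=0.99, eps=1e-3):
    # Illustrative update with parameter-wise inertia (NOT the exact Adai implementation).
    # Second-moment estimate with bias correction, as in Adam.
    v = beta2 * v + (1.0 - beta2) * grad ** 2
    v_hat = v / (1.0 - beta2 ** t)
    # Normalize by the mean so the inertia vector is scale-free.
    v_bar = v_hat.mean()
    # Parameter-wise momentum: close to 1 (high inertia) where v_hat is small (flat directions).
    beta1 = np.clip(1.0 - beta0 * v_hat / (v_bar + 1e-12), 0.0, 1.0 - eps)
    # Momentum buffer with element-wise coefficients; the step itself is SGD-like
    # (no adaptive learning rate).
    m = beta1 * m + (1.0 - beta1) * grad
    param = param - lr * m
    return param, m, v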

Our experiments demonstrate that Adai can significantly outperform SGD and existing Adam variants on various DNNs where flat minima are desired. We especially recommend Adai for training CNNs.

The environment is as below:

Python 3.7.3

PyTorch >= 1.4.0

Usage

You may use it as a standard PyTorch optimizer.

import adai_optim

# net and lr are your model and base learning rate.
# decoupled=False uses L2 regularization (Adai); decoupled=True uses decoupled weight decay (AdaiW).
Adai = adai_optim.Adai(net.parameters(), lr=lr, betas=(0.1, 0.99), eps=1e-03, weight_decay=5e-4, decoupled=False)
AdaiW = adai_optim.Adai(net.parameters(), lr=lr, betas=(0.1, 0.99), eps=1e-03, weight_decay=5e-4, decoupled=True)
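
After construction, the optimizer is driven exactly like any built-in PyTorch optimizer. A minimal training-step sketch (net, criterion, and train_loader are placeholders for your own model, loss, and data loader):

optimizer = Adai  # or AdaiW

for inputs, targets in train_loader:
    optimizer.zero_grad()
    loss = criterion(net(inputs), targets)
    loss.backward()
    optimizer.step()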

Hyperparameters

The recommended learning rate of Adai is equal to the usual choice for SGD (without Momentum), or 10 times the usual choice for SGD with Momentum (beta=0.9).

The recommended weight decay of Adai is equal to the choice for SGD and SGD with Momentum, usually 1e-4 or 5e-4 for CNNs.

AdaiW adopts decoupled weight decay instead of L2 regularization. Thus, the optimal weight decay of AdaiW depends on the learning rate choice.

In principle, the optimal hyperparameter choice of Adai should be close to the optimal hyperparameter choice of SGD (without Momentum).
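
As a concrete example of the mapping above: if a CNN baseline is trained with SGD with Momentum at lr=0.1 and weight_decay=5e-4, the recommendations translate roughly to the following. The numbers are illustrative assumptions for this sketch, not tuned values.

import torch
import adai_optim

# SGD with Momentum baseline (illustrative values).
sgd = torch.optim.SGD(net.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

# Adai: about 10x the SGD-with-Momentum learning rate, same weight decay.
adai = adai_optim.Adai(net.parameters(), lr=1.0, betas=(0.1, 0.99), eps=1e-03, weight_decay=5e-4, decoupled=False)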

The recommended hyperparameters for Transformers are not available yet. In our recent experiments on Transformers, the original Adai often works better than SGD but worse than Adam. Some Adai variants with stronger adaptivity may be required for training Transformers.

AdaiV2

AdaiV2 is a novel optimizer, a generalized variant of the original Adai in our paper. Adai is a special case of AdaiV2 with dampening=1.

If we let dampening<1, AdaiV2 shows some adaptive-moment behavior. This adaptive-moment behavior is achieved by $\mathbb{E}[m] = \mathbb{E}[g] \cdot (1 - \beta_{1})^{\mathrm{dampening} - 1}$ instead of an adaptive learning rate. The adaptive factor $(1 - \beta_{1})^{\mathrm{dampening} - 1}$ is large along flat directions.
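
To make the role of dampening explicit, consider the two regimes of this factor (a direct reading of the formula above, with $\beta_{1}$ denoting the parameter-wise inertia, which is close to 1 along flat directions):

$$
(1-\beta_{1})^{\mathrm{dampening}-1} =
\begin{cases}
1, & \mathrm{dampening}=1 \quad \text{(original Adai: no moment rescaling)},\\
(1-\beta_{1})^{-(1-\mathrm{dampening})} \gg 1 \ \text{as } \beta_{1}\to 1, & \mathrm{dampening}<1 \quad \text{(moments amplified along flat directions)}.
\end{cases}
$$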

We notice that, in some tasks (e.g., Transformers), Adam is still powerful. AdaiV2 provides an easy way to fuse the two adaptive optimization mechanisms together.

We add the dampening hyperparameter to Adai. Setting dampening<1 employs adaptive moments and adaptive inertia at the same time.

Note that AdaiV2 is in the testing phase. We may continue to upgrade it.

Theoretical Comparison

|                  | SGD    | Adaptive Learning Rate | Adaptive Inertia |
|------------------|--------|------------------------|------------------|
| Saddle-Escaping  | Slow ✗ | Fast ✓                 | Fast ✓           |
| Minima Selection | Flat ✓ | Sharp ✗                | Flat ✓           |

Test performance

| Dataset   | Model       | AdaiW      | Adai       | SGD M      | Adam       | AMSGrad    | AdamW      | AdaBound   | Padam      | Yogi       | RAdam      |
|-----------|-------------|------------|------------|------------|------------|------------|------------|------------|------------|------------|------------|
| CIFAR-10  | ResNet18    | 4.59±0.16  | 4.74±0.14  | 5.01±0.03  | 6.53±0.03  | 6.16±0.18  | 5.08±0.07  | 5.65±0.08  | 5.12±0.04  | 5.87±0.12  | 6.01±0.10  |
| CIFAR-10  | VGG16       | 5.81±0.07  | 6.00±0.09  | 6.42±0.02  | 7.31±0.25  | 7.14±0.14  | 6.48±0.13  | 6.76±0.12  | 6.15±0.06  | 6.90±0.22  | 6.56±0.04  |
| CIFAR-100 | ResNet34    | 21.05±0.10 | 20.79±0.22 | 21.52±0.37 | 27.16±0.55 | 25.53±0.19 | 22.99±0.40 | 22.87±0.13 | 22.72±0.10 | 23.57±0.12 | 24.41±0.40 |
| CIFAR-100 | DenseNet121 | 19.44±0.21 | 19.59±0.38 | 19.81±0.33 | 25.11±0.15 | 24.43±0.09 | 21.55±0.14 | 22.69±0.15 | 21.10±0.23 | 22.15±0.36 | 22.27±0.22 |
| CIFAR-100 | GoogLeNet   | 20.50±0.25 | 20.55±0.32 | 21.21±0.29 | 26.12±0.33 | 25.53±0.17 | 21.29±0.17 | 23.18±0.31 | 21.82±0.17 | 24.24±0.16 | 22.23±0.15 |

Citing

If you use Adai or other Adai variants in your work, please cite Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum.

@InProceedings{xie2022adaptive,
  title =     {Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum},
  author =    {Xie, Zeke and Wang, Xinrui and Zhang, Huishuai and Sato, Issei and Sugiyama, Masashi},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages =     {24430--24459},
  year =      {2022},
  volume =    {162},
  series =    {Proceedings of Machine Learning Research}
}
