
Configs for Image Classification (cifar10) #16

Open
keroro824 opened this issue Jan 28, 2021 · 10 comments

@keroro824

Thanks for the great work!
I have a question regarding the hyperparams for training CIFAR-10. I used the setting in this repo and replaced several hyperparams (e.g. num_layers, num_heads, etc.) with the ones reported in the paper, but the best test accuracy I got was 0.36:

import ml_collections

NUM_EPOCHS = 200
TRAIN_EXAMPLES = 45000
VALID_EXAMPLES = 10000


def get_config():
  """Get the default hyperparameter configuration."""
  config = ml_collections.ConfigDict()
  config.batch_size = 256
  config.eval_frequency = TRAIN_EXAMPLES // config.batch_size
  config.num_train_steps = (TRAIN_EXAMPLES // config.batch_size) * NUM_EPOCHS
  config.num_eval_steps = VALID_EXAMPLES // config.batch_size
  config.weight_decay = 0.
  config.grad_clip_norm = None

  config.save_checkpoints = True
  config.restore_checkpoints = False
  config.checkpoint_freq = (TRAIN_EXAMPLES //
                            config.batch_size) * NUM_EPOCHS // 2
  config.random_seed = 0

  # With 0.01 from the paper, the loss does not go down.
  config.learning_rate = .0005
  config.factors = 'constant * linear_warmup * cosine_decay'
  config.warmup = (TRAIN_EXAMPLES // config.batch_size) * 1
  config.steps_per_cycle = (TRAIN_EXAMPLES // config.batch_size) * NUM_EPOCHS

  # Model params.
  config.model = ml_collections.ConfigDict()
  config.model.emb_dim = 32
  config.model.num_heads = 4
  config.model.num_layers = 3
  config.model.qkv_dim = 64
  config.model.mlp_dim = 128
  config.model.dropout_rate = 0.3
  config.model.attention_dropout_rate = 0.2
  config.model.classifier_pool = 'CLS'
  config.model.learn_pos_emb = True

  config.trial = 0  # Dummy for repeated runs.
  return config

Could you point out which params I should adjust to match the reported accuracy? (This is for full attention.)

@vanzytay vanzytay assigned vanzytay and unassigned vanzytay Jan 28, 2021
@vanzytay
Collaborator

@MostafaDehghani for clarity on image tasks.

Note: we might take a while to reply due to the upcoming deadlines. Thanks

@keroro824
Author

@vanzytay Thanks for the quick response!
@MostafaDehghani Could you help me check this when you have time and let me know any possible settings I could try? (We're also working toward an upcoming deadline :)) Thanks!

@MostafaDehghani
Collaborator

Thank you @keroro824 for the question.

So if I understood correctly, you are looking for the configs for the vanilla Transformer to reproduce its results on the CIFAR-10 dataset in LRA. For that, you can use the following as the model hparams:

  # model params
  config.model = ml_collections.ConfigDict()
  config.model.emb_dim = 128
  config.model.num_heads = 8
  config.model.num_layers = 1
  config.model.qkv_dim = 64
  config.model.mlp_dim = 128
  config.model.dropout_rate = 0.3
  config.model.attention_dropout_rate = 0.2
  config.model.classifier_pool = 'CLS'
  config.model.learn_pos_emb = True

We are planning to release the code for all models and the best-performing configurations as soon as possible. In the meantime, please let us know if you have any questions :)
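For reference, here is a minimal sketch of how these model hparams could be combined with the base training config posted at the top of this thread. The import path is illustrative only, not necessarily the repo's exact module layout:

import ml_collections

# Hypothetical import path, for illustration only.
from lra_benchmarks.image.configs.cifar10 import base_cifar10_config


def get_config():
  """Vanilla Transformer on CIFAR-10: base schedule plus the suggested model hparams."""
  config = base_cifar10_config.get_config()

  # Model hparams suggested in this comment; everything else (learning rate,
  # schedule, batch size, ...) is inherited from the base config.
  config.model = ml_collections.ConfigDict()
  config.model.emb_dim = 128
  config.model.num_heads = 8
  config.model.num_layers = 1
  config.model.qkv_dim = 64
  config.model.mlp_dim = 128
  config.model.dropout_rate = 0.3
  config.model.attention_dropout_rate = 0.2
  config.model.classifier_pool = 'CLS'
  config.model.learn_pos_emb = True
  return config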

@keroro824
Author

@MostafaDehghani Thank you !!! I can replicate it now!

@MostafaDehghani
Collaborator

No problem at all! Perfect!
... and good luck with the deadline :)

@alexmathfb

The above comment states 1 layer and leaves the learning rate unspecified. This means the learning rate will be 0.0005, inherited from base_cifar10_config.

The arXiv paper states: 3 layers, learning rate 0.01.

The OpenReview paper states: 3 layers, learning rate 0.01.

Notably, the config file still contains nothing.

Currently, the code in this repository is inconsistent with the published articles. Do you plan on fixing these inconsistencies? Or did you abandon this project?

@vanzytay
Collaborator

IIRC the config files take precedence over the paper hparams. We will update the README here to state this.

@vanzytay vanzytay reopened this Aug 27, 2021
@MostafaDehghani
Collaborator

MostafaDehghani commented Aug 27, 2021

The best results in the paper are all reproducible from the code in the repo. Have you tried the configs that are shared here? Many people reproduced the results without any issue after our last update.

LRA is a living benchmark. We tried our best to tune the hyperparameters of each model we had in the paper, and some of the authors of those models reached out to help us find better hyperparameters. The codebase has the most up-to-date version of those, and it can be used for reproducing the results.

Notably, the config file still contains nothing.

If you read the code carefully, you can see that the config file you are referring to inherits from the base config!

@alexmathfb

alexmathfb commented Aug 27, 2021

IIRC the config files take precedence over the paper hparams. We will update the README here to state this.

This was not clear to me, I apologize for the misunderstanding.

If you read the code carefully, you can see that the config file you are referring to is inheriting from the base config!

I meant that the file was empty, so the learning rate was inherited as 0.0005 from the base config file, while the article reported a learning rate of 0.01. I was under the impression that the article hyperparams would be used, but as vanzytay clarified, this is not the case.
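Concretely, that inheritance pattern looks roughly like this (a minimal sketch with an illustrative import path, not the repo's exact file):

# A config that overrides nothing simply returns the base config, so
# learning_rate stays at the base value of 0.0005 rather than the 0.01
# reported in the paper.
from lra_benchmarks.image.configs.cifar10 import base_cifar10_config  # hypothetical path


def get_config():
  """Transformer CIFAR-10 config with no overrides on top of the base."""
  return base_cifar10_config.get_config()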

@vanzytay
Collaborator

It is a good reminder to us that an update of the paper is due, asking researchers to defer to the codebase to reproduce the results.

In our 2nd update, we ran all the CIFAR results again to make sure they were reproducible. So the code configs should be good. Do give it a try and let us know if you run into any other issues. Thanks!
