Add argparse parametrization for the finetuning script #7

Open
gsarti opened this issue Mar 25, 2020 · 0 comments
Labels
enhancement, good first issue, help wanted

Comments

gsarti (Owner) commented Mar 25, 2020

Similar to what is currently available in download_model.py, add argparse parametrization to finetune_nli.py with the following parameters (a sketch of the resulting parser follows the list):

  • model_name, default 'models/scibert', type str

  • batch_size, default 64, type int

  • model_save_path, default 'models/scibert_nli', type str

  • num_epochs, default 2, type int

  • warmup_steps, default None, not required

  • do_mean_pooling with action='store_true'

  • do_cls_pooling with action='store_true'

  • do_max_pooling with action='store_true'
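
A minimal sketch of what the parser could look like, assuming the flag names mirror the parameter names above (help texts are illustrative, not taken from the repository):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--model_name', type=str, default='models/scibert',
                    help='Name or path of the model to finetune.')
parser.add_argument('--batch_size', type=int, default=64,
                    help='Training batch size.')
parser.add_argument('--model_save_path', type=str, default='models/scibert_nli',
                    help='Directory where the finetuned model is saved.')
parser.add_argument('--num_epochs', type=int, default=2,
                    help='Number of finetuning epochs.')
parser.add_argument('--warmup_steps', type=int, default=None, required=False,
                    help='Warmup steps; if unset, 10%% of training steps is used.')
parser.add_argument('--do_mean_pooling', action='store_true',
                    help='Use mean pooling over token embeddings.')
parser.add_argument('--do_cls_pooling', action='store_true',
                    help='Use the [CLS] token embedding as the sentence representation.')
parser.add_argument('--do_max_pooling', action='store_true',
                    help='Use max pooling over token embeddings.')
args = parser.parse_args()
```

This would allow invocations such as `python finetune_nli.py --batch_size 32 --do_cls_pooling`.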

Then:

  • Add a check that at most one pooling flag is set (raise an AttributeError if more than one is). If none is specified, use the mean pooling strategy.

  • Check whether the warmup_steps parameter is set before defaulting it to 10% of the training steps: if it is, keep the user-defined value. Both checks are sketched below.
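
A sketch of both checks as helper functions, under the flag names assumed above; `num_training_steps` is a hypothetical stand-in for however the script computes the total number of training steps:

```python
def validate_pooling(args):
    # Raise if more than one pooling flag is set; fall back to mean pooling if none is.
    flags = [args.do_mean_pooling, args.do_cls_pooling, args.do_max_pooling]
    if sum(flags) > 1:
        raise AttributeError('Only one pooling strategy can be selected at a time.')
    if not any(flags):
        args.do_mean_pooling = True


def resolve_warmup_steps(warmup_steps, num_training_steps):
    # Keep a user-defined warmup_steps value; otherwise default to 10% of training steps.
    if warmup_steps is not None:
        return warmup_steps
    return int(0.1 * num_training_steps)
```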

gsarti added the enhancement, help wanted, and good first issue labels on Mar 25, 2020