Releases: OpenNMT/OpenNMT-tf

OpenNMT-tf 2.10.1

04 Jun 13:33

Fixes and improvements

  • Fix error when running RNN models with onmt-main
  • Add some missing functions in the online API documentation

OpenNMT-tf 2.10.0

28 May 15:05

New features

  • Update to TensorFlow 2.2 and TensorFlow Addons 0.10
  • Inputters can override the keep_for_training method to customize data filtering during training
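For illustration, a minimal pure-Python sketch of the kind of predicate a keep_for_training override might implement. The class and the length threshold here are hypothetical stand-ins; the real OpenNMT-tf hook belongs to an Inputter subclass and operates on TensorFlow feature tensors.

```python
class FilteringInputter:
    """Toy stand-in for an OpenNMT-tf Inputter subclass (hypothetical)."""

    def __init__(self, maximum_length=50):
        self.maximum_length = maximum_length

    def keep_for_training(self, features):
        # Drop empty examples and examples longer than the limit.
        length = features.get("length", 0)
        return 0 < length <= self.maximum_length


inputter = FilteringInputter(maximum_length=3)
print(inputter.keep_for_training({"length": 2}))  # kept
print(inputter.keep_for_training({"length": 5}))  # filtered out
```

Returning False for an example excludes it from the training dataset, which is how custom filtering rules plug into the pipeline.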

Fixes and improvements

  • Fix serving function for language models
  • Fix possible crash when using the prf evaluator
  • Reduce training startup time by optimizing how the training data size is computed
  • Add more logs during inference to indicate that progress is being made and the process is not "stuck"
  • Update SacreBLEU to 1.4.9

OpenNMT-tf 2.9.3

06 May 08:43

Fixes and improvements

  • Fix type error when online tokenization is called on an empty line

OpenNMT-tf 2.9.2

22 Apr 09:04

Fixes and improvements

  • Pin sacrebleu package to version 1.4.4 to fix installation issue on Windows
  • Clarify error when the training dataset is empty

OpenNMT-tf 2.9.1

14 Apr 08:44

Fixes and improvements

  • Fix error in onmt-main when using run types other than train

OpenNMT-tf 2.9.0

07 Apr 16:18

New features

  • Horovod support with the training flag --horovod
  • New external scorers:
    • wer: Word Error Rate
    • ter: Translation Edit Rate
    • prf: Precision, Recall, and F-Measure
  • Evaluation parameter max_exports_to_keep to limit the number of exported models
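To make concrete what the wer scorer measures, here is a self-contained sketch of Word Error Rate computed via word-level edit distance. This is the standard definition, not OpenNMT-tf's implementation.

```python
def word_error_rate(reference, hypothesis):
    """Standard WER: (substitutions + insertions + deletions) / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance between the two word sequences.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(
                dist[i - 1][j] + 1,          # deletion
                dist[i][j - 1] + 1,          # insertion
                dist[i - 1][j - 1] + cost,   # substitution (or match)
            )
    return dist[len(ref)][len(hyp)] / max(len(ref), 1)


print(word_error_rate("the cat sat", "the cat sat"))  # 0.0
print(word_error_rate("the cat sat", "the dog sat"))  # one substitution over 3 words
```

Lower is better: a perfect hypothesis scores 0.0, and each word-level edit adds 1/len(reference) to the score.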

Fixes and improvements

  • Do not report "target words/s" when training language models

OpenNMT-tf 2.8.1

24 Mar 12:45

Fixes and improvements

  • Disable dropout in layers that are frozen by the freeze_layers parameter
  • Fix batch size autotuning that ignored the mixed precision flag
  • Fix sparse gradients that were unnecessarily converted to dense gradients in mixed precision training
  • Only compute the gradients global norm every save_summary_steps steps to save some computation during training
  • Simplify some ops in the inference graph
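The global-norm change above amounts to skipping the reduction on non-summary steps. A toy sketch of the idea; the function names and flat-list "gradients" are illustrative only, not the OpenNMT-tf code.

```python
import math


def global_norm(gradients):
    """Global norm across all gradient tensors (flat lists here)."""
    return math.sqrt(sum(g * g for grads in gradients for g in grads))


def maybe_compute_norm(step, gradients, save_summary_steps=100):
    # Only pay for the norm reduction on steps where a summary is written.
    if step % save_summary_steps == 0:
        return global_norm(gradients)
    return None


print(maybe_compute_norm(200, [[3.0], [4.0]]))  # 5.0
print(maybe_compute_norm(201, [[3.0], [4.0]]))  # None (skipped)
```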

OpenNMT-tf 2.8.0

02 Mar 15:15

New features

  • Allow setting different number of encoder/decoder layers in Transformer constructor

Fixes and improvements

  • Fix decoder initialization with multi source encoder
  • Fix command line parsing when --config is used before the run type
  • Log more information on length mismatch when parsing alignments
  • Log number of model parameters when starting the training

OpenNMT-tf 2.7.0

14 Feb 13:11

New features

  • Enable CTranslate2 export for TransformerBaseRelative and TransformerBigRelative models
  • Update TensorFlow Addons to 0.8

Fixes and improvements

  • Log the number of frozen weights when using the parameter freeze_layers
  • More helpful error messages when layer names configured in freeze_layers are incorrect
  • Improve beam search efficiency by avoiding some unnecessary state reordering
  • Add usage examples for symbols in opennmt.data

OpenNMT-tf 2.6.0

28 Jan 08:40

New features

  • Multiple training files can be configured in train_features_file/train_labels_file, and train_files_weights optionally assigns a weight to each file (see the Data section in the documentation)
  • Support exporting compatible models to CTranslate2 format (see export_format option)
  • moving_average_decay training parameter to enable exponential moving average of the model variables
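A configuration sketch combining these options. The file names and values are placeholders, and the exact YAML layout is an assumption based on the Data section referenced above.

```yaml
data:
  # Weighted multi-file training: weights are relative sampling probabilities.
  train_features_file:
    - corpus_a.src
    - corpus_b.src
  train_labels_file:
    - corpus_a.tgt
    - corpus_b.tgt
  train_files_weights:
    - 0.8
    - 0.2

train:
  # Exponential moving average of the model variables.
  moving_average_decay: 0.9999
```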

Fixes and improvements

  • Fix error when starting a language model training
  • Use tf.keras.layers.LayerNormalization instead of custom implementation for improved efficiency
  • Fix possible duplicate call to checkpoint saving at the end of the training
  • Ignore BLEU evaluation warning when run on tokenized data (which also caused duplicated logs for the rest of the training)
  • Improve accuracy of reported prediction time by ignoring the initial graph construction
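To make the LayerNormalization swap concrete, here is a small pure-Python sketch of the computation that layer performs over the last axis: normalize to zero mean and unit variance, then apply a learned scale (gamma) and offset (beta). Illustrative only; the Keras layer works on tensors and learns gamma and beta.

```python
import math


def layer_norm(values, gamma=1.0, beta=0.0, epsilon=1e-6):
    """Layer normalization over a single vector (the last axis)."""
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    inv_std = 1.0 / math.sqrt(variance + epsilon)
    return [gamma * (v - mean) * inv_std + beta for v in values]


normalized = layer_norm([1.0, 2.0, 3.0])
print([round(v, 3) for v in normalized])  # symmetric around 0
```

Using the built-in Keras layer instead of a custom implementation lets TensorFlow dispatch to fused kernels, which is where the efficiency gain comes from.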