Releases: facebookresearch/fairseq

v0.12.2

27 Jun 19:32
v0.12.2 release

v0.12.1

13 Jun 15:07
v0.12.1 release

v0.12.0

10 Jun 14:45
v0.12.0 release

v0.10.2

05 Jan 20:26

Bug fixes:

  • fix register_model_architecture for the Transformer language model (#3097); see the sketch below
  • fix logging to use stdout instead of stderr (#3052)
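
A minimal sketch of the API touched by the first fix above: registering a custom Transformer LM architecture with register_model_architecture. The architecture name "my_transformer_lm" and the overridden hyperparameters are illustrative only, not part of fairseq.

```python
from fairseq.models import register_model_architecture
from fairseq.models.transformer_lm import base_lm_architecture


# "my_transformer_lm" is a hypothetical architecture name used only for illustration.
@register_model_architecture("transformer_lm", "my_transformer_lm")
def my_transformer_lm(args):
    # Only fill in values the user did not override on the command line,
    # then fall back to the stock transformer_lm defaults.
    args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512)
    args.decoder_layers = getattr(args, "decoder_layers", 6)
    base_lm_architecture(args)
```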

v0.10.1

21 Nov 20:52

This minor release includes fixes for torch.distributed.launch, --user-dir and a few smaller bugs. We also include prebuilt wheels for common platforms.

v0.10.0

12 Nov 14:22

It's been a long time since our last release (0.9.0) nearly a year ago! There have been numerous changes and new features added since then, which we've tried to summarize below. While this release carries the same major version as our previous release (0.x.x), if you have code that relies on 0.9.0, it is likely you'll need to adapt it before updating to 0.10.0.

Looking forward, this will also be the last significant release with the 0.x.x numbering. The next release will be 1.0.0 and will include a major migration to the Hydra configuration system, with an eye towards modularizing fairseq to be more usable as a library.

Changelog:

New papers:

Major new features:

  • TorchScript support for Transformer and SequenceGenerator (PyTorch 1.6+ only)
  • Model parallel training support (see Megatron-11b)
  • TPU support via --tpu and --bf16 options (7751229)
  • Added VizSeq (a visual analysis toolkit for evaluating fairseq models)
  • Migrated to Python logging (fb76dac); see the sketch after this list
  • Added “SlowMo” distributed training backend (0dac0ff)
  • Added Optimizer State Sharding (ZeRO) (5d7ed6a)
  • Added several features to improve speech recognition support in fairseq: CTC criterion, external ASR decoder support (currently only wav2letter decoder) with KenLM and fairseq language model fusion
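
Since fairseq now routes its messages through the standard Python logging module, downstream code can control its verbosity like any other library. A small sketch, assuming fairseq's module-level loggers live under the "fairseq" namespace:

```python
import logging

# Configure logging for the whole application.
logging.basicConfig(level=logging.INFO)

# Example: silence fairseq's INFO-level progress messages while keeping warnings and errors.
logging.getLogger("fairseq").setLevel(logging.WARNING)
```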

Minor features:

  • Added --patience for early stopping
  • Added --shorten-method=[none|truncate|random_crop] to language modeling (and other) tasks
  • Added --eval-bleu for computing BLEU scores during training (60fbf64)
  • Added support for training huggingface models (e.g. hf_gpt2) (2728f9b)
  • Added FusedLAMB optimizer (--optimizer=lamb) (f75411a)
  • Added LSTM-based language model (lstm_lm) (9f4256e)
  • Added dummy tasks and models for benchmarking (91f0534; a541b19)
  • Added tutorial and pretrained models for paraphrasing (630701e)
  • Support quantization for Transformer (6379573)
  • Support multi-GPU validation in fairseq-validate (2f7e3f3)
  • Support batched inference in the hub interface (3b53962); see the sketch after this list
  • Support for language model fusion in standard beam search (5379461)
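
A hedged sketch of batched inference through the hub interface. The WMT'19 checkpoint name follows the fairseq examples; downloading it requires network access plus the sacremoses and fastBPE packages for the moses/fastbpe options.

```python
import torch

en2de = torch.hub.load(
    "pytorch/fairseq",
    "transformer.wmt19.en-de.single_model",
    tokenizer="moses",
    bpe="fastbpe",
)

# translate() accepts either a single string or a list of sentences;
# with a list it returns one translation per input.
print(en2de.translate(["Hello world!", "How are you?"], beam=5))
```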

Breaking changes:

  • Updated requirements to Python 3.6+ and PyTorch 1.5+
  • --max-sentences renamed to --batch-size
  • Main entry point scripts (eval_lm.py, generate.py, etc.) moved from the root directory into the fairseq_cli module
  • Changed format for generation output; H- now corresponds to tokenized system outputs and newly added D- lines correspond to detokenized outputs (f353913)
  • We now log the stats from the log-interval (displayed as train_inner) instead of a rolling average over each epoch.
  • SequenceGenerator/Scorer no longer prints alignments by default; re-enable with --print-alignment
  • Print base 2 scores in generation scripts (660d69f)
  • Incremental decoding interface changed to use FairseqIncrementalState (4e48c4a; 88185fc)
  • Refactor namespaces in Criterions to support library usage (introduce LegacyFairseqCriterion for BC) (46b773a)
  • Deprecate the FairseqCriterion::aggregate_logging_outputs interface; use FairseqCriterion::reduce_metrics instead (8679339); see the sketch after this list
  • Moved fairseq.meters to fairseq.logging.meters and added new metrics aggregation module (fairseq.logging.metrics) (1e324a5; f8b795f)
  • Reset mid-epoch stats every log-interval steps (244835d)
  • Ignore duplicate entries in dictionary files (dict.txt) and support manual overwrite with #fairseq:overwrite option (dd1298e; 937535d)
  • Use 1-based indexing for epochs everywhere (aa79bb9)
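
A sketch of the new criterion-side metrics interface described above, loosely following fairseq's built-in cross-entropy criterion. The criterion name is hypothetical and forward() is elided; only the aggregation hooks are shown.

```python
import math

from fairseq import metrics  # re-exported from fairseq.logging
from fairseq.criterions import FairseqCriterion, register_criterion


@register_criterion("my_cross_entropy")  # hypothetical name, for illustration only
class MyCrossEntropyCriterion(FairseqCriterion):
    # forward(model, sample) is omitted; as before, it should return
    # (loss, sample_size, logging_output).

    @staticmethod
    def reduce_metrics(logging_outputs) -> None:
        # Replaces the deprecated aggregate_logging_outputs interface.
        loss_sum = sum(log.get("loss", 0) for log in logging_outputs)
        sample_size = sum(log.get("sample_size", 0) for log in logging_outputs)
        # Report the loss in base 2, matching the generation scripts.
        metrics.log_scalar("loss", loss_sum / sample_size / math.log(2), sample_size, round=3)

    @staticmethod
    def logging_outputs_can_be_summed() -> bool:
        # Replaces --fast-stat-sync: stats may be summed across workers before reduction.
        return True
```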

Minor interface changes:

  • Added FairseqTask::begin_epoch hook (122fc1d); see the sketch after this list
  • FairseqTask::build_generator interface changed (cd2555a)
  • Change RobertaModel base class to FairseqEncoder (307df56)
  • Expose FairseqOptimizer.param_groups property (8340b2d)
  • Deprecate --fast-stat-sync and replace with FairseqCriterion::logging_outputs_can_be_summed interface (fe6c2ed)
  • --raw-text and --lazy-load are fully deprecated; use --dataset-impl instead
  • Mixture of expert tasks moved to examples/ (8845dcf)
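
A sketch of the new per-epoch hook on a custom task. The task name "translation_with_hook" is hypothetical; everything else follows the interface changes listed above (note the 1-based epoch numbering).

```python
from fairseq.tasks import register_task
from fairseq.tasks.translation import TranslationTask


@register_task("translation_with_hook")  # hypothetical task name, for illustration only
class TranslationWithHook(TranslationTask):
    def begin_epoch(self, epoch, model):
        # Called before the start of each epoch; epochs are numbered from 1.
        super().begin_epoch(epoch, model)
        print(f"starting epoch {epoch}")
```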

Performance improvements:

  • Use cross entropy from apex for improved memory efficiency (5065077)
  • Added buffered dataloading (--data-buffer-size) (4115317)

v0.9.0

04 Dec 14:31

Possibly breaking changes:

  • Set global numpy seed (4a7cd58)
  • Split in_proj_weight into separate k, v, q projections in MultiheadAttention (fdf4c3e)
  • TransformerEncoder returns namedtuples instead of dict (27568a7)

New features:

  • Add --fast-stat-sync option (e1ba32a)
  • Add --empty-cache-freq option (315c463)
  • Support criterions with parameters (ba5f829)

New papers:

  • Simple and Effective Noisy Channel Modeling for Neural Machine Translation (49177c9)
  • Levenshtein Transformer (86857a5, ...)
  • Cross+Self-Attention for Transformer Models (4ac2c5f)
  • Jointly Learning to Align and Translate with Transformer Models (1c66792)
  • Reducing Transformer Depth on Demand with Structured Dropout (dabbef4)
  • Unsupervised Cross-lingual Representation Learning at Scale (XLM-RoBERTa) (e23e5ea)
  • BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension (a92bcda)
  • CamemBERT: a French BERT (b31849a)

Speed improvements:

  • Add CUDA kernels for LightConv and DynamicConv (f840564)
  • Cythonization of various dataloading components (4fc3953, ...)
  • Don't project mask tokens for MLM training (718677e)

v0.8.0

14 Aug 12:16

Changelog:

  • Relicensed under MIT license
  • Add RoBERTa
  • Add wav2vec
  • Add WMT'19 models
  • Add initial ASR code
  • Changed torch.hub interface (generate renamed to translate); see the sketch below
  • Add --tokenizer and --bpe
  • f812e52: Renamed data.transforms -> data.encoders
  • 654affc: New Dataset API (optional)
  • 47fd985: Deprecate old Masked LM components
  • 5f78106: Set mmap as default dataset format and infer format automatically
  • Misc fixes for sampling
  • Misc fixes to support PyTorch 1.2
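
A hedged sketch of the updated torch.hub interface, combining a few of the items above: a WMT'19 model loaded with the new --tokenizer/--bpe options and the renamed translate() method. The checkpoint name follows the fairseq examples; downloading it requires network access plus the sacremoses and fastBPE packages.

```python
import torch

en2de = torch.hub.load(
    "pytorch/fairseq",
    "transformer.wmt19.en-de.single_model",
    tokenizer="moses",   # applies the moses tokenizer before/after translation
    bpe="fastbpe",       # applies the model's fastBPE codes
)

# The hub method was renamed: previously en2de.generate(...), now:
print(en2de.translate("Hello world!"))
```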

v0.7.2

19 Jul 13:41

No major API changes since the last release. Cutting a new release since we'll be merging significant (possibly breaking) changes to logging, data loading and the masked LM implementation soon.

v0.7.1

20 Jun 15:24

Changelog:

  • 9462a81: Enhanced MMapIndexedDataset: less memory, higher speed
  • 392fce8: Add code for wav2vec paper