
Releases: pytorch/ignite

New metrics, handlers and moved contrib handlers/metrics to ignite.handlers/metrics

01 Apr 10:35

Major changes

  • Added new metrics: CosineSimilarity, Entropy, PearsonCorrelation. Big thanks to @kzkadc!
  • Moved ignite.contrib.metrics and ignite.contrib.handlers to ignite.metrics and ignite.handlers. Thanks to @leej3!
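
Example (a minimal sketch attaching one of the new metrics under its new import path; the engine and data below are toy placeholders):

import torch
from ignite.engine import Engine
from ignite.metrics import CosineSimilarity

# Toy evaluation step: each batch is already a (y_pred, y) pair
def eval_step(engine, batch):
    return batch

evaluator = Engine(eval_step)
CosineSimilarity().attach(evaluator, "cosine_sim")

y_pred, y = torch.randn(8, 4), torch.randn(8, 4)
state = evaluator.run([(y_pred, y)])
print(state.metrics["cosine_sim"])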

What's Changed

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project

New Contributors

Thank you!

Full Changelog

v0.4.13...v0.5.0.post1

PyTorch-Ignite 0.4.13 - Release Notes

19 Oct 08:22

What's Changed

New Contributors

Full Changelog: v0.4.12...v0.4.13

Bug fixes, new features and housekeeping

01 May 08:23

PyTorch-Ignite 0.4.12 - Release Notes

New Features

Engine and Events

  • Added model_transform to create_supervised_evaluator so that users can transform the model output into the actual prediction (y_pred) (#2896)
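
Example (a minimal sketch, assuming a toy model whose forward returns a dict; names are illustrative):

import torch.nn as nn
from ignite.engine import create_supervised_evaluator
from ignite.metrics import Accuracy

class DictOutputModel(nn.Module):
    # toy model whose forward returns a dict instead of a tensor
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return {"logits": self.fc(x)}

evaluator = create_supervised_evaluator(
    DictOutputModel(),
    metrics={"accuracy": Accuracy()},
    # extract the actual prediction (y_pred) from the model output
    model_transform=lambda output: output["logits"],
)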

Metrics and handlers

  • Updated the NeptuneLogger (#2881)
  • Improved ClearMLLogger: accessing attributes of the logger now retrieves those of the underlying ClearML task, and a get_task method was added (#2898)
  • Added score_sign to add_early_stopping_by_val_score and gen_save_best_models_by_val_score to support both error-like and accuracy-like scores (#2898)
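
Example (a minimal sketch with placeholder engines; score_sign=-1.0 treats the metric as error-like, i.e. lower is better):

from ignite.engine import Engine
from ignite.contrib.engines import common

trainer = Engine(lambda e, b: None)    # placeholder
evaluator = Engine(lambda e, b: None)  # placeholder

# stop training if the validation loss has not improved for 5 evaluations
common.add_early_stopping_by_val_score(
    patience=5,
    evaluator=evaluator,
    trainer=trainer,
    metric_name="loss",
    score_sign=-1.0,
)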

Bug Fixes

  • Fixed error on importing Events in Python 3.11 (#2907)
  • Fixed an inefficiency in SSIM metric (#2914)
  • Fixed NeptuneSaver (#2900, #2902)

Housekeeping (docs, CI, examples, tests, etc)

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@AlexanderChaptykov, @DeepC004, @Hummer12007, @divij-pawar, @guptaaryan16, @kshitij12345, @moienr, @normandy7, @sadra-barikbin, @sallycaoyu, @twolodzko, @vfdev-5

New Contributors

Full Changelog: v0.4.11...v0.4.12

New features, bug fixes and housekeeping

18 Feb 01:34

PyTorch-Ignite 0.4.11 - Release Notes

New Features

Engine and Events

  • Added before and after event filters (#2727)
  • Can mix every and before/after event filters (#2860)
  • The once event filter can now accept a sequence of ints (#2858):
# "once" event filter
@engine.on(Events.ITERATION_STARTED(once=[50, 60]))
def call_once(engine):
    # do something on the 50th and 60th iterations
    ...

# "before" and "after" event filters
@engine.on(Events.EPOCH_STARTED(after=10, before=30))
def call_after_and_before(engine):
    # do something on epochs 11 to 29
    ...

# mixing "every" with "before" / "after" event filters
@engine.on(Events.EPOCH_STARTED(every=5, after=8, before=25))
def call_after_and_before_every(engine):
    # do something on epochs 9, 14, 19 and 24
    ...
  • Improved deterministic engine (#2756)
  • Grad accumulation should not affect the value of the loss (#2737)
  • Added model_transform to create_supervised_trainer (#2848)

Distributed module

  • Updated idist.all_gather to take group arg (#2715)
  • Updated idist.all_reduce to take group arg (#2712)
  • Added idist.new_group method (#2711)
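
Example (a minimal sketch with the gloo backend; the sub-group here simply contains both spawned ranks):

import torch
import ignite.distributed as idist

def training(local_rank):
    # create a new process group and reduce only across its ranks
    group = idist.new_group([0, 1])
    t = torch.tensor([idist.get_rank()])
    t = idist.all_reduce(t, group=group)
    print(f"rank {idist.get_rank()}: {t}")

if __name__ == "__main__":
    idist.spawn("gloo", training, args=(), nproc_per_node=2)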

Metrics and handlers

  • Updated LRFinder to support more than one parameter (#2704)
  • Added get_param method to ParamGroupScheduler (#2720)
  • Updated PolyaxonLogger (#2776)
  • Dropped TrainsLogger and TrainsSaver and removed the backward-compatibility code (#2742)
  • Refactored PSNR and SSIM (#2797)
  • [BC-breaking] Aligned SSIM output with PSNR output, both give tensors (#2794)
  • Added distributed support to RocCurve (#2802)
  • Refactored EpochMetric and made it idempotent (#2800)

Bug fixes

  • Fixed device issue in SSIM metric tests and updated PSNR (#2796)
  • Fixed LRScheduler issue and fixed CI (#2780)
  • Fixed the code to raise ModuleNotFoundError instead of RuntimeError (#2750)
  • Fixed sync_all_reduce to cover update->compute->update case (#2803)

Housekeeping (docs, CI, examples, tests, etc)

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@DeepC004, @JakubDz2208, @Moh-Yakoub, @RishiKumarRay, @abhi-glitchhg, @crj1998, @guptaaryan16, @louis-she, @pacificdragon, @puhuk, @sadra-barikbin, @sallycaoyu, @soma2000-lang, @theory-in-progress, @vfdev-5, @ydcjeff

New Contributors

New features, bug fixes and housekeeping

05 Sep 11:15

PyTorch-Ignite 0.4.10 - Release Notes

New Features

Engine

  • Added Engine interrupt/continue feature (#2699, #2682)

Example:

from ignite.engine import Engine, Events

data = range(10)
max_epochs = 3

def check_input_data(e, b):
    print(f"Epoch {e.state.epoch}, Iter {e.state.iteration} | data={b}")
    i = (e.state.iteration - 1) % len(data)
    assert b == data[i]

engine = Engine(check_input_data)

@engine.on(Events.ITERATION_COMPLETED(every=11))
def call_interrupt():
    engine.interrupt()

print("Start engine run with interruptions:")
state = engine.run(data, max_epochs=max_epochs)
print("1 Engine run is interrupted at ", state.epoch, state.iteration)
state = engine.run(data, max_epochs=max_epochs)
print("2 Engine run is interrupted at ", state.epoch, state.iteration)
state = engine.run(data, max_epochs=max_epochs)
print("3 Engine ended the run at ", state.epoch, state.iteration)
Output:
Start engine run with interruptions:
Epoch 1, Iter 1 | data=0
Epoch 1, Iter 2 | data=1
Epoch 1, Iter 3 | data=2
Epoch 1, Iter 4 | data=3
Epoch 1, Iter 5 | data=4
Epoch 1, Iter 6 | data=5
Epoch 1, Iter 7 | data=6
Epoch 1, Iter 8 | data=7
Epoch 1, Iter 9 | data=8
Epoch 1, Iter 10 | data=9
Epoch 2, Iter 11 | data=0
1 Engine run is interrupted at  2 11
Epoch 2, Iter 12 | data=1
Epoch 2, Iter 13 | data=2
Epoch 2, Iter 14 | data=3
Epoch 2, Iter 15 | data=4
Epoch 2, Iter 16 | data=5
Epoch 2, Iter 17 | data=6
Epoch 2, Iter 18 | data=7
Epoch 2, Iter 19 | data=8
Epoch 2, Iter 20 | data=9
Epoch 3, Iter 21 | data=0
Epoch 3, Iter 22 | data=1
2 Engine run is interrupted at  3 22
Epoch 3, Iter 23 | data=2
Epoch 3, Iter 24 | data=3
Epoch 3, Iter 25 | data=4
Epoch 3, Iter 26 | data=5
Epoch 3, Iter 27 | data=6
Epoch 3, Iter 28 | data=7
Epoch 3, Iter 29 | data=8
Epoch 3, Iter 30 | data=9
3 Engine ended the run at  3 30
  • Deprecated and replaced Events.default_event_filter with None (#2644)
  • [BC-breaking] Rewrote Engine's terminate and terminate_epoch logic (#2645)
  • Improved the "time taken" log message to show milliseconds (#2650)

Metrics and handlers

  • Added ZeRO built-in support to Checkpoint in a distributed configuration (#2658, #2642)
  • Added save_on_rank argument to DiskSaver and Checkpoint (#2641); see the sketch after this list
  • Added a handle_buffers option for EMAHandler (#2592)
  • Improved Precision and Recall metrics (#2573)
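
Example (a minimal sketch of save_on_rank; the path is hypothetical, and in a real distributed run the handler would be attached to a trainer event):

import torch.nn as nn
from ignite.handlers import Checkpoint, DiskSaver

model = nn.Linear(4, 2)

# write checkpoints from rank 1 instead of the default rank 0
handler = Checkpoint(
    {"model": model},
    DiskSaver("/tmp/checkpoints", create_dir=True),
    n_saved=2,
    save_on_rank=1,
)
# trainer.add_event_handler(Events.EPOCH_COMPLETED, handler)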

Bug fixes

  • Median metrics (e.g., median absolute error) now use an np.median-compatible torch median implementation (#2681)
  • Fixed issues when removing handlers on filtered events (#2690)
  • A few minor fixes in Engine and Event (#2680)
  • [BC-breaking] Fixed Engine.terminate() behaviour when resumed (#2678)

Housekeeping (docs, CI, examples, tests, etc)

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@BowmanChow, @daniellepintz, @haochunchang, @kamalojasv181, @puhuk, @sadra-barikbin, @sandylaker, @sdesrozis, @vfdev-5

Features, bug fixes and housekeeping

04 May 20:24

PyTorch-Ignite 0.4.9 - Release Notes

New Features

  • Added whitelist argument to log only desired weights/grads with experiment tracking system handlers: #2550, #2523
  • Added ReduceLROnPlateauScheduler parameter scheduler (see the sketch after this list): #2449
  • Added filename components in Checkpoint: #2498
  • Added missing args to ModelCheckpoint, parity with Checkpoint: #2486
  • [BC-breaking] LRScheduler is now attachable to Events.ITERATION_STARTED: #2496
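
Example (a hedged sketch with placeholder engines; the keyword arguments are assumed to follow torch's ReduceLROnPlateau):

import torch.nn as nn
from torch.optim import SGD
from ignite.engine import Engine, Events
from ignite.handlers import ReduceLROnPlateauScheduler

model = nn.Linear(4, 2)
optimizer = SGD(model.parameters(), lr=0.1)
trainer = Engine(lambda e, b: None)    # placeholder
evaluator = Engine(lambda e, b: None)  # placeholder

# halve the LR when the validation metric stops improving
scheduler = ReduceLROnPlateauScheduler(
    optimizer, metric_name="accuracy", mode="max",
    factor=0.5, patience=2, trainer=trainer,
)
evaluator.add_event_handler(Events.COMPLETED, scheduler)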

Bug fixes

  • Fixed the placement of zero_grad in create_supervised_trainer, which resulted in zero-gradient logs: #2560, #2559, #2555, #2547
  • Fixed bug in Checkpoint when loading a single non-nn.Module object: #2487
  • Removed warning in DDP if Metric.reset/update are not decorated: #2549
  • [BC-breaking] Fixed SSIM metric implementation and issue with variable batch inputs: #2564, #2563
    • compute method now returns float instead of torch.Tensor

Housekeeping (docs, CI, examples, tests, etc)

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@Davidportlouis, @DevPranjal, @Ishan-Kumar2, @KevinMusgrave, @Moh-Yakoub, @asmayer, @divo12, @gorarakelyan, @jreese, @leotac, @nishantb06, @nmcguire101, @sadra-barikbin, @sayantan1410, @sdesrozis, @vfdev-5, @yuta0821

Mostly bug fixes

17 Jan 15:27

PyTorch-Ignite 0.4.8 - Release Notes

New Features

  • Added support for passing data=None to Engine.run (#2369)
  • Checkpoint.load_objects can now accept a str path and load the checkpoint internally (#2305)
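
Example (a minimal sketch; the checkpoint path is hypothetical):

import torch.nn as nn
from ignite.handlers import Checkpoint

model = nn.Linear(4, 2)

# pass a path string; the checkpoint file is loaded internally
Checkpoint.load_objects(
    to_load={"model": model},
    checkpoint="/tmp/checkpoints/checkpoint_100.pt",
)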

Bug fixes

  • Fixed issue with DeterministicEngine.state_dict() (#2412)
  • Fixed EMAHandler warm-up behaviour (#2333)
  • Fixed _compute_nproc_per_node in case of bad dist configuration (#2288)
  • Fixed state parameter scheduler to work with EMAHandler (#2326)
  • Fixed a bug on StateParamScheduler.attach method (#2316)
  • Fixed ClearMLLogger to retrieve current task before trying to create a new one (#2344)
  • Added a checkpoint hashing utility: #2272, #2283, #2273
  • Fixed config check issue with multi-node spawn method (#2424)

Housekeeping (docs, CI, examples, tests, etc)

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@Abo7atm, @DevPranjal, @Eunjnnn, @FarehaNousheen, @H4dr1en, @Ishan-Kumar2, @KickItLikeShika, @Priyansi, @bibhabasumohapatra, @fco-dv, @louis-she, @sandylaker, @sdesrozis, @trsvchn, @vfdev-5, @ydcjeff

State parameter scheduler, loggers improvements and bug fixes

13 Oct 09:09

PyTorch-Ignite 0.4.7 - Release Notes

New Features

  • Enabled LRFinder to run multiple epochs (#2200)
  • save_handler now automatically creates a DiskSaver when a path is passed (#2198)
  • Improved Checkpoint to use score_name as metric's key (#2146)
  • Added State parameter scheduler (#2090)
  • Added state attributes for loggers (tqdm, Polyaxon, MLFlow, WandB, Neptune, Tensorboard, Visdom, ClearML) (#2162, #2161, #2160, #2154, #2153, #2152, #2151, #2148, #2140, #2137)
  • Added gradient accumulation to supervised training step functions (#2223); see the sketch after this list
  • Added automatic Jupyter environment detection (#2188)
  • Added an additional argument to auto_optim to allow gradient accumulation (#2169)
  • Added micro averaging for Bleu Score (#2179)
  • Expanded BLEU, ROUGE to be calculated on batch input (#2259, #2180)
  • Moved BasicTimeProfiler, HandlersTimeProfiler, ParamScheduler, LRFinder to core (#2136, #2135, #2132)
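
Example (a minimal sketch of gradient accumulation in the factory trainer; the model, optimizer and loss are toy placeholders):

import torch.nn as nn
from torch.optim import SGD
from ignite.engine import create_supervised_trainer

model = nn.Linear(10, 2)
optimizer = SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# accumulate gradients over 4 iterations before each optimizer step
trainer = create_supervised_trainer(
    model, optimizer, criterion, gradient_accumulation_steps=4,
)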

Bug fixes

  • Fixed docstring examples with huge bottom padding (#2225)
  • Fixed NCCL warning caused by barrier if using idist (#2257, #2254)
  • Fixed hostname list expansion (#2208, #2204)
  • Fixed TCP error with PyTorch v1.9.1 (#2211)

Housekeeping (docs, CI, examples, tests, etc)

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@Chandan-h-509, @Ishan-Kumar2, @KickItLikeShika, @Priyansi, @fco-dv, @gucifer, @kennethleungty, @logankilpatrick, @mfoglio, @sandylaker, @sdesrozis, @theory-in-progress, @toxa23, @trsvchn, @vfdev-5, @ydcjeff

FID/IS metrics for GANs, EMA handler and bug fixes

02 Aug 17:02

PyTorch-Ignite 0.4.6 - Release Notes

New Features

  • Added start_lr option to FastaiLRFinder (#2111)
  • Added a model EMA handler (#2098, #2102); see the sketch after this list
  • Improved SLURM support: added hostlist expansion without using scontrol (#2092)
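
Example (a minimal sketch with a placeholder trainer; the momentum value is illustrative):

import torch.nn as nn
from ignite.engine import Engine, Events
from ignite.handlers import EMAHandler

model = nn.Linear(4, 2)
trainer = Engine(lambda e, b: None)  # placeholder

# maintain an exponential moving average of the model weights
ema_handler = EMAHandler(model, momentum=0.0002)
ema_handler.attach(trainer, name="ema_momentum", event=Events.ITERATION_COMPLETED)
# the averaged model is available as ema_handler.ema_model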

Metrics

Bug fixes

  • Modified auto_dataloader to not wrap user provided DistributedSampler (#2119)
  • Raise error in DistributedProxySampler when sampler is already a DistributedSampler (#2120)
  • Improved LRFinder error message (#2127)
  • Added py.typed for type checkers (#2095)

Housekeeping

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@01-vyom, @KickItLikeShika, @gucifer, @sandylaker, @schuhschuh, @sdesrozis, @trsvchn, @vfdev-5, @ydcjeff

New metrics, extended DDP support and bug fixes

24 Jun 22:50

PyTorch-Ignite 0.4.5 - Release Notes

New Features

Metrics

  • Added BLEU metric (#1834)
  • Added ROUGE metric (#1772)
  • Added MultiLabelConfusionMatrix metric (#1613)
  • Added Cohen Kappa metric (#1690)
  • Extended sync_all_reduce API (#1823)
  • Made EpochMetric more generic by extending the list of valid types (#1748)
  • Fixed issue with metric's output device (#2062)
  • Added support for list of tensors as metric input (#2055)
  • Implemented Jaccard Index shortcut for metrics (#1682)
  • Updated Loss metric to use required_output_keys (#2027)
  • Added classification report metric (#1887)
  • Added output detach for Canberra metric (#1820)
  • Improved ROC AUC (#1762)
  • Improved AveragePrecision metric and tests (#1756)
  • Uniform handling of metric types across all loggers (#2021)
  • More DDP support for multiple contrib metrics (#1891, #1869, #1865, #1850, #1830, #1829, #1806, #1805, #1803)

Engine

  • Added native torch.cuda.amp and apex automatic mixed precision support to create_supervised_trainer and create_supervised_evaluator (#1714, #1589); see the sketch after this list
  • Updated state.batch/state.output lifespan in Engine (#1919)
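
Example (a minimal sketch using native AMP; assumes a CUDA device, with a toy model and loss):

import torch.nn as nn
from torch.optim import SGD
from ignite.engine import create_supervised_trainer

model = nn.Linear(10, 2).to("cuda")
optimizer = SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# autocast via torch.cuda.amp with gradient scaling enabled
trainer = create_supervised_trainer(
    model, optimizer, criterion,
    device="cuda", amp_mode="amp", scaler=True,
)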

Distributed module

  • Handled IterableDataset with auto_dataloader (#2028)
  • Enabled gpu support for gloo backend (#2016)
  • Added safe_mode for idist broadcast (#1839)
  • Improved idist to support different init_methods (#1767)

Other improvements

  • Added LR finder improvements, moved to core (#2045, #1998, #1996, #1987, #1981, #1961, #1951, #1930)
  • Moved param handler to core (#1988)
  • Added an option to store EpochOutputStore data on engine.state, moved to core (#1982, #1974)
  • Set seed for xla in ignite.utils.manual_seed (#1970)
  • Fixed the multi-label, non-averaged configuration of Precision/Recall for DDP (#1646)
  • Updated PolyaxonLogger to handle v1 and v0 (#1625)
  • Added *args, **kwargs to the BaseLogger.attach method (#2034)
  • Enabled metric ordering on ProgressBar (#1937)
  • Updated wandb logger (#1896)
  • Fixed type hint for ProgressBar (#2079)

Bug fixes

  • [BC-breaking] Improved loggers to keep configuration (#1945)
  • Fixed warnings in CI (#2023)
  • Fixed Precision for all zero predictions (#2017)
  • Renamed the default logger (#2006)
  • Fixed Accumulation metric with Nvidia/Apex (#1978)
  • Updated code to raise an error if SLURM is used with torch dist launcher (#1976)
  • Updated nltk-smooth2 for BLEU metric (#1911)
  • Added full read permissions to saved file (#1876, #1880)
  • Fixed a bug with horovod _do_manual_all_reduce (#1848)
  • Fixed small bug in "Finetuning EfficientNet-B0 on CIFAR100" tutorial (#2073)
  • Fixed f-string in mnist_save_resume_engine.py example (#2077)
  • Fixed an issue where RNG states were accidentally on CUDA for DeterministicEngine (#2081)

Housekeeping

A lot of PRs

Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

@01-vyom, @Devanshu24, @Juddd, @KickItLikeShika, @Moh-Yakoub, @Muktan, @OBITORASU, @Priyansi, @afzal442, @ahmedo42, @aksg87, @aniezurawski, @cozek, @devrimcavusoglu, @fco-dv, @gucifer, @log-layer, @mouradmourafiq, @radekosmulski, @sahilg06, @sdesrozis, @sparkingdark, @thomasjpfan, @touqir14, @trsvchn, @vfdev-5, @ydcjeff