Releases · fastai/fastai
v2.7.2
v2.7.1
v2.7.0
Breaking changes
- Distributed training now uses Hugging Face Accelerate, rather than fastai's own launcher
- Distributed training is now supported in a notebook; see the distributed training tutorial for details
New Features
- `resize_images` creates folder structure at `dest` when `recurse=True` (#3692)
- Integrate nested callable and getcallable (#3691), thanks to @muellerzr
- Workaround for PyTorch subclass performance bug (#3682)
- Torch 1.12.0 compatibility (#3659), thanks to @josiahls
- Integrate Accelerate into fastai (#3646), thanks to @muellerzr
- New Callback event, before and after backward (#3644), thanks to @muellerzr
- Let optimizer use built torch opt (#3642), thanks to @muellerzr
- Support PyTorch Dataloaders with `DistributedDL` (#3637), thanks to @tmabraham
- Add `channels_last` cb (#3634), thanks to @tcapelle
- Support all timm kwargs (#3631)
- Send `self.loss_func` to device if it is an instance of `nn.Module` (#3395), thanks to @arampacha
Bugs Squashed
- Solve hanging `load_model` and let LRFind be run in a distributed setup (#3689), thanks to @muellerzr
- PyTorch subclass functions fail if no positional args (#3687)
- Workaround for performance bug in PyTorch with subclassed tensors (#3683), thanks to @warner-benjamin
- Fix `Tokenizer.get_lengths` (#3667), thanks to @karotchykau
- `load_learner` with `cpu=False` doesn't respect the current CUDA device if the model was exported on another; fixes #3656 (#3657), thanks to @ohmeow
- [Bugfix] Fix smoothloss on distributed (#3643), thanks to @muellerzr
- WandbCallback Error: "Tensors must be CUDA and dense" on distributed training (#3291)
- Vision tutorial failed at `learner.fine_tune(1)` (#3283)
v2.6.3
v2.6.2
v2.6.1
v2.6.0
New Features
- Add support for Ross Wightman's PyTorch Image Models (timm) library (#3624)
- Rename `cnn_learner` to `vision_learner` since we now support models other than CNNs too (#3625)
Bugs Squashed
- Fix AccumMetric name.setter (#3621), thanks to @warner-benjamin
- Fix Classification Interpretation (#3563), thanks to @warner-benjamin
v2.5.6
v2.5.5
v2.5.4
New Features
- Support py3.10 annotations (#3601)
Bugs Squashed
- Fix pin_memory=True breaking (batch) Transforms (#3606), thanks to @johan12345
- Add Python 3.9 to `setup.py` for PyPI (#3604), thanks to @nzw0301
- Removes add_vert from get_grid calls (#3593), thanks to @kevinbird15
- Making `loss_not_reduced` work with DiceLoss (#3583), thanks to @hiromis
- Fix bug in URLs.path() in 04_data.external (#3582), thanks to @malligaraj
- Custom name for metrics (#3573), thanks to @bdsaglam
- Update import for show_install (#3568), thanks to @fr1ll
- Fix Classification Interpretation (#3563), thanks to @warner-benjamin
- Updates Interpretation class to be memory efficient (#3558), thanks to @warner-benjamin
- Learner.show_results uses passed dataloader via dl_idx or dl arguments (#3554), thanks to @warner-benjamin
- Fix learn.export pickle error with MixedPrecision Callback (#3544), thanks to @warner-benjamin
- Fix concurrent LRFinder instances overwriting each other by using tempfile (#3528), thanks to @warner-benjamin
- Fix _get_shapes to work with dictionaries (#3520), thanks to @ohmeow
- Fix torch version checks, remove clip_grad_norm check (#3518), thanks to @warner-benjamin
- Fix nested tensors predictions compatibility with fp16 (#3516), thanks to @tcapelle
- Learning rate passed via OptimWrapper not updated in Learner (#3337)
- Different results after running `lr_find()` at different times (#3295)
- `lr_find()` may fail if run in parallel from the same directory (#3240)