Releases: mwalmsley/zoobot
v2.0
What's Changed
- New pretrained architectures: ConvNeXT, EfficientNetV2, MaxViT, and more. Each in several sizes. Available on HuggingFace.
- Reworked finetuning procedure. All these architectures are finetuneable through a common method.
- Reworked finetuning options. Batch norm finetuning removed. Cosine schedule option added.
- Reworked finetuning saving/loading. Auto-downloads encoder from HuggingFace.
- Now supports regression finetuning (as well as multi-class and binary). See `pytorch/examples/finetuning`
- Updated `timm` to 0.9.10, allowing the latest model architectures. Previously downloaded checkpoints may not load correctly!
- (internal until published) GZ Evo v2 now includes Cosmic Dawn (HSC H2O). Significant performance improvement on HSC finetuning. Also now includes GZ UKIDSS (dragged from our archives).
- Updated `pytorch` to 2.1.0
- Added support for webdatasets (only recommended for large-scale distributed training)
- Improved per-question logging when training from scratch
- Added option to compile encoder for max speed (not recommended for finetuning, only for pretraining).
- Deprecates TensorFlow. The ML research community has largely shifted to PyTorch and newer frameworks like JAX.
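One of the reworked finetuning options above is a cosine learning-rate schedule. As a rough illustration of what such a schedule does (a stdlib sketch only; the function name and defaults here are illustrative, not Zoobot's actual API):

```python
import math

def cosine_lr(step, total_steps, base_lr=1e-4, min_lr=0.0):
    """Half-cosine decay from base_lr at step 0 down to min_lr at total_steps."""
    progress = min(step / max(1, total_steps), 1.0)
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

# The learning rate starts at base_lr and decays smoothly to min_lr
schedule = [cosine_lr(s, 10) for s in range(11)]
```

In practice this smooth decay tends to be gentler on pretrained weights than step drops, which is why it is a common choice for finetuning.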
Full Changelog: v1.0.5...v2.0
v1.0.5 - Finetuning Improvements, MaxViT
What's Changed
Major improvements to finetuning.
- Now supports finetuning `resnet` and `maxvit-tiny`! These architectures were previously available only as static pretrained models.
- You now specify `num_blocks` instead of `num_layers`, reflecting the block structure of each network (effnet, resnet50, maxvit)
- Now you can optionally keep batchnorm layers always trainable
Small changes:
- Update dependency to use `galaxy-datasets==0.15.0`
- Replace multi-class example with new galaxy-mnist dataset
- Fixes #108
The next release will update `timm` to 0.9ish. PyTorch 2.1.0 already works, and the next release will reflect that too.
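The `num_blocks` change above can be pictured as slicing an encoder by its top-level blocks rather than counting individual layers. A minimal illustrative sketch (not Zoobot's actual code; block names are hypothetical):

```python
def split_trainable(block_names, num_blocks):
    """Freeze everything except the last `num_blocks` top-level blocks."""
    cut = len(block_names) - num_blocks
    frozen, trainable = block_names[:cut], block_names[cut:]
    return frozen, trainable

# e.g. a ResNet50-style encoder: a stem followed by four stages
frozen, trainable = split_trainable(
    ["stem", "layer1", "layer2", "layer3", "layer4"], num_blocks=2
)
```

Counting blocks this way gives the same meaning to `num_blocks=2` across architectures whose blocks contain very different numbers of layers.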
Full Changelog: v1.0.4...v1.0.5
v1.0.4 - Color and multi-class support
This minor release adds pretrained color models (i.e. with 3 channels) and also adds support for multi-class finetuning.
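Multi-class finetuning replaces a binary output head with an N-way softmax trained under cross-entropy. A stdlib sketch of that loss (illustrative only, not Zoobot code):

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities over N classes."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, true_class):
    """Negative log-probability assigned to the true class."""
    return -math.log(softmax(logits)[true_class])

probs = softmax([2.0, 0.5, -1.0])  # probabilities over, say, 3 galaxy classes
loss = cross_entropy([2.0, 0.5, -1.0], 0)  # small when the model is confidently right
```

The binary case is just N=2, which is why a single finetuning method can cover both.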
What's Changed
- sync by @mwalmsley in #92
- 1.0.3 version bump by @mwalmsley in #99
- Rebuild docs by @mwalmsley in #103
- Add color weights, trivial QoL changes by @mwalmsley in #104
- Add multiclass example by @mwalmsley in #105
- Add multiclass support by @mwalmsley in #106
Full Changelog: v1.0.3...v1.0.4
v1.0.3 - JOSS paper release
Minor release marking Zoobot's state on acceptance of the JOSS paper.
What's Changed
- Finetune v1 onto docs by @mwalmsley in #85
- Dev by @mwalmsley in #87
- sync by @mwalmsley in #93
- Delete requirements.txt by @mwalmsley in #96
- Bring paper fix from docs to main by @mwalmsley in #97
- Highlight Colab notebook by @mwalmsley in #98
Full Changelog: 1.0.2...v1.0.3
v1.0.2 - Lightning v2 and galaxy-datasets updates
- Lightning v2 now correctly uses early stopping
- galaxy-datasets version bumped to correctly ignore alpha channel of png images
What's Changed
- Hotfix - add early stopping back, update docs by @mwalmsley in #91
Full Changelog: v1.0.1...1.0.2
v1.0.1 - Lightning v2.0 support
Small incremental release adjusting the LightningModule hooks used in order to support Lightning v2. Lightning v2.0.0 (currently latest) is now the minimum required version.
What's Changed
- Update with Maja's changes by @mwalmsley in #89
- Support Lightning v2.0.0 by @mwalmsley in #90
Full Changelog: v1.0.0...v1.0.1
v1 release
This release completely rewrites the API and documentation based on user feedback during the one-year beta period.
Major changes include:
- New API for finetuning
- Shift to PyTorch/Lightning
- Support for all `timm` models
- Docs refocused on finetuning
- Pretrained model library added
- Data loading refactored to albumentations and mwalmsley/galaxy-datasets
Documentation is here. Installation and quickstart are on the README.
Thank you all for the help!
What's Changed
- Update refactor with latest changes by @mwalmsley in #15
- Dockerize zoobot for pytorch and tensorflow versions by @camallen in #14
- Refactor pytorch datasets by @mwalmsley in #16
- missed a rename, thanks cam by @mwalmsley in #17
- allow multiple catalog paths to be passed by @camallen in #19
- allow install of git based packages (pytorch_galaxy_datasets) by @camallen in #20
- Bring old dev branch up-to-date by @mwalmsley in #23
- Update finetuning_advanced example by @mwalmsley in #24
- PL logging by @mwalmsley in #22
- Improve data files for docker by @camallen in #21
- allow checkpointing setup to be customized by @camallen in #26
- Include Cam's latest features on generic dev by @mwalmsley in #27
- improve docker setup by @camallen in #28
- Trivial updates by @mwalmsley in #29
- Refactor PyTorch for explicit Lightning/Wandb hyperparameters by @mwalmsley in #30
- Adding Zoobot to pypi by @mwalmsley in #31
- add gh action CI system by @camallen in #34
- add gh action to publish package to pypi by @camallen in #33
- remove travis CI integration by @camallen in #35
- Deprecate TFRecords by @mwalmsley in #32
- Fix doc build, tweak readme by @mwalmsley in #40
- Trivial typing change left behind by @mwalmsley in #55
- Create CODE_OF_CONDUCT.md by @camallen in #46
- Add PyTorch Finetuning Capability, Examples by @mwalmsley in #59
- Benchmarks by @mwalmsley in #58
- remove extra _non-star label for artifact task by @camallen in #60
- allow more params to finetuning via config object by @mwalmsley in #63
- add wandb logging, freeze batchnorm by default by @mwalmsley in #62
- Make sure TF and Torch versions have similar performance by @mwalmsley in #64
- Big messy PR of misc. improvements by @mwalmsley in #65
- use correct torchvision 0.13.1 package by @camallen in #69
- Update with previous commits by @mwalmsley in #72
- use latest galaxy-datasets package by @camallen in #75
- add timm package for pytorch cuda by @camallen in #77
- Docs by @mwalmsley in #78
- Merge pull request #78 from mwalmsley/docs by @mwalmsley in #79
- Finetune v1 by @mwalmsley in #80
- Joss by @mwalmsley in #81
- Syncing branches by @mwalmsley in #83
- Update finetune-v1 with docs changes by @mwalmsley in #84
- Finetune v1 onto dev by @mwalmsley in #86
- Add tensorboard writer callback to pytorch by @maja-jablonska in #54
- V1 release by @mwalmsley in #88
New Contributors
- @maja-jablonska made their first contribution in #54
Full Changelog: v0.0.3...v1.0.0
Docs pre-release
Improved documentation and refactored train API (pytorch).
Awaiting results from several segmentation experiments ahead of public release (inc pytorch version).
v0.0.2
v0.0.1
Initial release.
This had enough documentation and code to replicate the DECaLS model and make predictions. You may have stumbled into a few minor missing arguments and similar typos, because I made some last-minute changes without updating the docs, but everything worked with a little stack tracing.