Preparation for release 0.11.0 (#807)
We are happy to announce the new skorch 0.11 release:

Two basic but very useful features have been added to our collection of callbacks. First, by setting `load_best=True` on the  [`Checkpoint` callback](https://skorch.readthedocs.io/en/latest/callbacks.html#skorch.callbacks.Checkpoint), the snapshot of the network with the best score will be loaded automatically when training ends. Second, we added a callback [`InputShapeSetter`](https://skorch.readthedocs.io/en/latest/callbacks.html#skorch.callbacks.InputShapeSetter) that automatically adjusts your input layer to have the size of your input data (useful e.g. when that size is not known beforehand).
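Here is a minimal sketch of how the two callbacks can be combined; `MyModule` is just a made-up example module whose `__init__` accepts an `input_dim` argument:

```python
import numpy as np
import torch
from skorch import NeuralNetClassifier
from skorch.callbacks import Checkpoint, InputShapeSetter

class MyModule(torch.nn.Module):
    # made-up module; InputShapeSetter will set `input_dim` from the data
    def __init__(self, input_dim=1):
        super().__init__()
        self.lin = torch.nn.Linear(input_dim, 2)

    def forward(self, X):
        return torch.nn.functional.softmax(self.lin(X), dim=-1)

net = NeuralNetClassifier(
    MyModule,
    max_epochs=10,
    callbacks=[
        Checkpoint(load_best=True),  # reload the best snapshot when training ends
        InputShapeSetter(),          # sets module__input_dim from the training data
    ],
)

X = np.random.rand(100, 20).astype(np.float32)
y = np.random.randint(0, 2, size=100)
net.fit(X, y)
```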

When it comes to integrations, the new [`MlflowLogger`](https://skorch.readthedocs.io/en/latest/callbacks.html#skorch.callbacks.MlflowLogger) callback makes it possible to log results to [MLflow](https://mlflow.org/) automatically. Thanks to a contributor, some regressions in `net.history` have been fixed, and it now runs faster, too.
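A possible setup could look like this (a sketch assuming `mlflow` is installed and reusing the made-up `MyModule`, `X`, and `y` from above; the exact run handling is up to you):

```python
import mlflow
from skorch import NeuralNetClassifier
from skorch.callbacks import MlflowLogger

net = NeuralNetClassifier(
    MyModule,
    max_epochs=10,
    callbacks=[MlflowLogger()],  # logs the metrics recorded in net.history to MLflow
)

with mlflow.start_run():
    net.fit(X, y)
```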

On top of that, skorch now offers a new module, `skorch.probabilistic`. It contains new classes to work with **Gaussian Processes** using the familiar skorch API. This is made possible by the fantastic [GPyTorch](https://github.com/cornellius-gp/gpytorch) library, which skorch uses under the hood. If you want to get started with Gaussian Processes in skorch, check out the [documentation](https://skorch.readthedocs.io/en/latest/user/probabilistic.html) and this [notebook](https://nbviewer.org/github/skorch-dev/skorch/blob/master/notebooks/Gaussian_Processes.ipynb). Since we're still learning, the API may change in the future, so please be aware of that.
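To give a rough idea, here is a sketch adapted from the pattern shown in the documentation; treat the docs as authoritative, since exact signatures and defaults may differ:

```python
import gpytorch
import numpy as np
import torch
from skorch.probabilistic import ExactGPRegressor

class RbfModule(gpytorch.models.ExactGP):
    def __init__(self, likelihood):
        # train_inputs/train_targets are filled in by skorch during fitting
        super().__init__(train_inputs=None, train_targets=None, likelihood=likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.RBFKernel()

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

gpr = ExactGPRegressor(RbfModule, optimizer=torch.optim.Adam, lr=0.1, max_epochs=20)

# toy data just to make the sketch runnable
X_train = np.random.rand(50, 3).astype(np.float32)
y_train = np.random.rand(50).astype(np.float32)
gpr.fit(X_train, y_train)
y_pred = gpr.predict(X_train)
```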

Moreover, we introduced some changes to make skorch more customizable. First of all, we changed the signature of some methods so that they no longer assume that the dataset always returns exactly two values. This way, it's easier to work with custom datasets that return, e.g., three values. Normal users should not notice any difference, but if you often create custom nets, take a look at the [migration guide](https://skorch.readthedocs.io/en/latest/user/FAQ.html#migration-from-0-10-to-0-11).
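As an illustration, this is roughly what a custom override looks like now, assuming a dataset that yields three values per batch (the third being some extra value that we simply ignore here):

```python
from skorch import NeuralNet

class MyNet(NeuralNet):
    def train_step_single(self, batch, **fit_params):
        # since 0.11, the whole batch is passed in instead of X and y
        Xi, yi, extra = batch
        # delegate to the default implementation with a regular (X, y) pair
        return super().train_step_single((Xi, yi), **fit_params)
```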

And finally, we made a change to how custom modules, criteria, and optimizers are handled. They are now "first class citizens" in skorch land, which means that if you add a second module to your custom net, it is treated exactly the same as the normal module: skorch takes care of moving it to CUDA if needed and of switching it to train or eval mode. This way, customizing your network architecture with skorch is easier than ever. Check the [docs](https://skorch.readthedocs.io/en/latest/user/customization.html#initialization-and-custom-modules) for more details.
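For illustration, a sketch of what registering a second module could look like, following the pattern described in the linked docs (`SecondModule` is made up):

```python
import torch
from skorch import NeuralNet

class SecondModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(10, 10)

    def forward(self, X):
        return self.lin(X)

class MyNet(NeuralNet):
    def initialize_module(self):
        super().initialize_module()
        # attributes set during initialization whose names end in an underscore
        # are treated like `module_`: device placement, train/eval switching,
        # and their parameters are passed to the optimizer
        self.module2_ = SecondModule()
        return self
```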

Since these are some big changes, it's possible that you encounter issues. If that's the case, please check our [issues](https://github.com/skorch-dev/skorch/issues) page or open a new issue.

As always, this release was made possible by outside contributors. Many thanks to:

- Autumnii
- Cebtenzzre
- Charles Cabergs
- Immanuel Bayer
- Jake Gardner
- Matthias Pfenninger
- Prabhat Kumar Sahu 

Find below the list of all changes:

Added

- Added `load_best` attribute to `Checkpoint` callback to automatically load state of the best result at the end of training
- Added a `get_all_learnable_params` method to retrieve the named parameters of all PyTorch modules defined on the net, including of criteria if applicable (a small usage sketch follows this list)
- Added `MlflowLogger` callback for logging to Mlflow (#769)
- Added `InputShapeSetter` callback for automatically setting the input dimension of the PyTorch module
- Added a new module to support Gaussian Processes through [GPyTorch](https://gpytorch.ai/). To learn more about it, read the [GP documentation](https://skorch.readthedocs.io/en/latest/user/probabilistic.html) or take a look at the [GP notebook](https://nbviewer.jupyter.org/github/skorch-dev/skorch/blob/master/notebooks/Gaussian_Processes.ipynb). This feature is experimental, i.e. the API could be changed in the future in a backwards incompatible way (#782)
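As referenced above, a small usage sketch for `get_all_learnable_params`, assuming `net` is an instantiated skorch net and that the method yields `(name, parameter)` pairs much like `named_parameters()`:

```python
net.initialize()
# parameters of module_, criterion_ (if learnable), and any custom modules
params = dict(net.get_all_learnable_params())
print(sorted(params))
```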

Changed

- Changed the signature of `validation_step`, `train_step_single`, `train_step`, `evaluation_step`, `on_batch_begin`, and `on_batch_end` such that instead of receiving `X` and `y`, they receive the whole batch; this makes it easier to deal with datasets that don't strictly return an `(X, y)` tuple, which is true for quite a few PyTorch datasets; please refer to the [migration guide](https://skorch.readthedocs.io/en/latest/user/FAQ.html#migration-from-0-10-to-0-11) if you encounter problems (#699)
- Checking of arguments to `NeuralNet` now happens during `.initialize()`, not during `__init__`, to avoid raising false positives for not-yet-known module or optimizer attributes
- Modules, criteria, and optimizers that are added to a net by the user are now first class: skorch takes care of setting train/eval mode, moving to the indicated device, and updating all learnable parameters during training (check the [docs](https://skorch.readthedocs.io/en/latest/user/customization.html#initialization-and-custom-modules) for more details, #751)
- `CVSplit` is renamed to `ValidSplit` to avoid confusion (#752)

Fixed

- Fixed a few bugs in the `net.history` implementation (#776)
- Fixed a bug in `TrainEndCheckpoint` that prevented it from being unpickled (#773)
BenjaminBossan committed Oct 31, 2021
1 parent c625ccf · commit baf0580
Showing 8 changed files with 96 additions and 96 deletions.
CHANGES.md (15 changes: 12 additions & 3 deletions)
@@ -9,17 +9,25 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Added

+### Changed
+
+### Fixed
+
+## [0.11.0] - 2021-10-11
+
+### Added
+
- Added `load_best` attribute to `Checkpoint` callback to automatically load state of the best result at the end of training
- Added a `get_all_learnable_params` method to retrieve the named parameters of all PyTorch modules defined on the net, including of criteria if applicable
- Added `MlflowLogger` callback for logging to Mlflow (#769)
- Added `InputShapeSetter` callback for automatically setting the input dimension of the PyTorch module
-- Added a new module to support Gaussian Processes through [GPyTorch](https://gpytorch.ai/). To learn more about it, read the [GP documentation](https://skorch.readthedocs.io/en/latest/user/probabilistic.html) or take a look at the [GP notebook](https://nbviewer.jupyter.org/github/skorch-dev/skorch/blob/master/notebooks/Gaussian_Processes.ipynb). This feature is experimental, i.e. the API could be changed in the future in a backwards incompatible way.
+- Added a new module to support Gaussian Processes through [GPyTorch](https://gpytorch.ai/). To learn more about it, read the [GP documentation](https://skorch.readthedocs.io/en/latest/user/probabilistic.html) or take a look at the [GP notebook](https://nbviewer.jupyter.org/github/skorch-dev/skorch/blob/master/notebooks/Gaussian_Processes.ipynb). This feature is experimental, i.e. the API could be changed in the future in a backwards incompatible way (#782)

### Changed

-- Changed the signature of `validation_step`, `train_step_single`, `train_step`, `evaluation_step`, `on_batch_begin`, and `on_batch_end` such that instead of receiving `X` and `y`, they receive the whole batch; this makes it easier to deal with datasets that don't strictly return an `(X, y)` tuple, which is true for quite a few PyTorch datasets; please refer to the [migration guide](https://skorch.readthedocs.io/en/latest/user/FAQ.html#migration-from-0-9-to-0-10) if you encounter problems
+- Changed the signature of `validation_step`, `train_step_single`, `train_step`, `evaluation_step`, `on_batch_begin`, and `on_batch_end` such that instead of receiving `X` and `y`, they receive the whole batch; this makes it easier to deal with datasets that don't strictly return an `(X, y)` tuple, which is true for quite a few PyTorch datasets; please refer to the [migration guide](https://skorch.readthedocs.io/en/latest/user/FAQ.html#migration-from-0-10-to-0-11) if you encounter problems (#699)
- Checking of arguments to `NeuralNet` is now during `.initialize()`, not during `__init__`, to avoid raising false positives for yet unknown module or optimizer attributes
-- Modules, criteria, and optimizers that are added to a net by the user are now first class: skorch takes care of setting train/eval mode, moving to the indicated device, and updating all learnable parameters during training (check the [docs](https://skorch.readthedocs.io/en/latest/user/customization.html#initialization-and-custom-modules) for more details)
+- Modules, criteria, and optimizers that are added to a net by the user are now first class: skorch takes care of setting train/eval mode, moving to the indicated device, and updating all learnable parameters during training (check the [docs](https://skorch.readthedocs.io/en/latest/user/customization.html#initialization-and-custom-modules) for more details, #751)
- `CVSplit` is renamed to `ValidSplit` to avoid confusion (#752)

### Fixed
@@ -253,3 +261,4 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
[0.8.0]: https://github.com/skorch-dev/skorch/compare/v0.7.0...v0.8.0
[0.9.0]: https://github.com/skorch-dev/skorch/compare/v0.8.0...v0.9.0
[0.10.0]: https://github.com/skorch-dev/skorch/compare/v0.9.0...v0.10.0
+[0.11.0]: https://github.com/skorch-dev/skorch/compare/v0.10.0...v0.11.0
VERSION (2 changes: 1 addition & 1 deletion)
@@ -1 +1 @@
-0.10.1dev
+0.11.0
docs/user/FAQ.rst (6 changes: 3 additions & 3 deletions)
@@ -408,10 +408,10 @@ the **greatest** score.
Migration guide
---------------

-Migration from 0.9 to 0.10
-^^^^^^^^^^^^^^^^^^^^^^^^^^
+Migration from 0.10 to 0.11
+^^^^^^^^^^^^^^^^^^^^^^^^^^^

-With skorch 0.10, we pushed the tuple unpacking of values returned by
+With skorch 0.11, we pushed the tuple unpacking of values returned by
the iterator to methods lower down the call chain. This way, it is
much easier to work with iterators that don't return exactly two
values, as per the convention.
skorch/callbacks/training.py (1 change: 1 addition & 0 deletions)
@@ -775,6 +775,7 @@ class InputShapeSetter(Callback):
methods.
Basic usage:
>>> class MyModule(torch.nn.Module):
... def __init__(self, input_dim=1):
... super().__init__()
skorch/dataset.py (8 changes: 3 additions & 5 deletions)
@@ -252,12 +252,10 @@ def __init__(
"but ValidSplit got {}".format(cv))

if not self._is_float(cv) and random_state is not None:
-# TODO: raise a ValueError instead of a warning
-warnings.warn(
+raise ValueError(
"Setting a random_state has no effect since cv is not a float. "
-"This will raise an error in a future. You should leave "
-"random_state to its default (None), or set cv to a float value.",
-FutureWarning
+"You should leave random_state to its default (None), or set cv "
+"to a float value.",
)

self.cv = cv
skorch/tests/callbacks/test_all.py (11 changes: 8 additions & 3 deletions)
@@ -47,6 +47,11 @@ def test_set_params_with_unknown_key_raises(self, base_cls):
with pytest.raises(ValueError) as exc:
base_cls().set_params(foo=123)

-# TODO: check error message more precisely, depending on what
-# the intended message shouldb e from sklearn side
-assert exc.value.args[0].startswith('Invalid parameter foo for')
+msg_start = (
+"Invalid parameter foo for estimator <skorch.callbacks.base.Callback")
+msg_end = (
+"Check the list of available parameters with "
+"`estimator.get_params().keys()`.")
+msg = exc.value.args[0]
+assert msg.startswith(msg_start)
+assert msg.endswith(msg_end)
skorch/tests/callbacks/test_scoring.py (14 changes: 7 additions & 7 deletions)
@@ -401,7 +401,7 @@ class MySkorchDataset(skorch.dataset.Dataset):
pass

rawsplit = lambda ds: (ds, ds)
-validsplit = ValidSplit(2, random_state=0)
+valid_split = ValidSplit(2)

def split_ignore_y(ds, y):
return rawsplit(ds)
@@ -416,12 +416,12 @@ def split_ignore_y(ds, y):
((MySkorchDataset(*data), None), rawsplit, MySkorchDataset, True),

# Test a split that splits datasets using torch Subset
-(data, validsplit, np.ndarray, False),
-(data, validsplit, Subset, True),
-((MyTorchDataset(*data), None), validsplit, Subset, False),
-((MyTorchDataset(*data), None), validsplit, Subset, True),
-((MySkorchDataset(*data), None), validsplit, np.ndarray, False),
-((MySkorchDataset(*data), None), validsplit, Subset, True),
+(data, valid_split, np.ndarray, False),
+(data, valid_split, Subset, True),
+((MyTorchDataset(*data), None), valid_split, Subset, False),
+((MyTorchDataset(*data), None), valid_split, Subset, True),
+((MySkorchDataset(*data), None), valid_split, np.ndarray, False),
+((MySkorchDataset(*data), None), valid_split, Subset, True),
]

for input_data, train_split, expected_type, caching in table:
