Releases · jdb78/pytorch-forecasting
Update to pytorch 2.0
Breaking Changes
- Upgraded to PyTorch 2.0 and Lightning 2.0. This brings a couple of changes, such as the configuration of trainers; see the Lightning upgrade guide. For PyTorch Forecasting, this particularly matters if you are developing your own models: the class method `epoch_end` has been renamed to `on_epoch_end`, `model.summarize()` must be replaced with `ModelSummary(model, max_depth=-1)`, and `Tuner(trainer)` is now its own class, so `trainer.tuner` needs replacing (see the migration sketch after this list). (#1280)
- Changed the `predict()` interface to return a named tuple - see the tutorials.
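A minimal migration sketch, assuming a plain LightningModule stands in for your model (the `TinyModel` below is hypothetical; only the renamed hooks and the `ModelSummary`/`Tuner` utilities come from the release note):

```python
import torch
import lightning.pytorch as pl
from lightning.pytorch.tuner import Tuner
from lightning.pytorch.utilities.model_summary import ModelSummary


class TinyModel(pl.LightningModule):
    """Hypothetical stand-in for a model you maintain."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Linear(8, 1)

    # lightning < 2.0 used `*_epoch_end(self, outputs)` hooks; in 2.0 they
    # become `on_*_epoch_end(self)` - mirroring pytorch-forecasting's
    # `epoch_end` -> `on_epoch_end` rename
    def on_train_epoch_end(self):
        pass

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())


model = TinyModel()
trainer = pl.Trainer(max_epochs=1)

# `model.summarize()` was removed - use the ModelSummary utility instead
print(ModelSummary(model, max_depth=-1))

# `trainer.tuner` was removed - Tuner is now its own class
tuner = Tuner(trainer)  # e.g. tuner.lr_find(model, train_dataloaders=...)
```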
Changes
- The predict method now uses the lightning predict functionality and allows writing results to disk (#1280); see the sketch below.
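A rough sketch of the new named-tuple interface; `model` and `val_dataloader` are assumed to exist, and the field names follow the tutorials:

```python
# `model` is a trained pytorch-forecasting model, `val_dataloader` a
# dataloader built via TimeSeriesDataSet.to_dataloader() (both assumed)
predictions = model.predict(
    val_dataloader,
    return_x=True,      # also return the network inputs
    return_index=True,  # also return the dataset index
)

predictions.output  # the forecasts themselves
predictions.x       # inputs (present because return_x=True)
predictions.index   # dataset index (present because return_index=True)
```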
Fixed
- Fixed robust scaler when quantiles are 0.0 and 1.0, i.e. minimum and maximum (#1142)
Poetry update
Multivariate networks
Added
- DeepVar network (#923)
- Enable quantile loss for N-HiTS (#926)
- MQF2 loss (multivariate quantile loss) (#949; see the sketch after this list)
- Non-causal attention for TFT (#949)
- Tweedie loss (#949)
- ImplicitQuantileNetworkDistributionLoss (#995)
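For example, the quantile and MQF2 losses above plug into the usual `from_dataset` pattern - a sketch assuming an existing `TimeSeriesDataSet` named `training`:

```python
from pytorch_forecasting import NHiTS
from pytorch_forecasting.metrics import MQF2DistributionLoss, QuantileLoss

# quantile loss is now supported by N-HiTS (#926)
net = NHiTS.from_dataset(training, loss=QuantileLoss())

# multivariate quantile loss (#949); needs the optional mqf2 extras installed
net = NHiTS.from_dataset(
    training,
    loss=MQF2DistributionLoss(prediction_length=training.max_prediction_length),
)
```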
Fixed
- Fix learning scale schedule (#912)
- Fix TFT list/tuple issue at interpretation (#924)
- Allowed encoder length down to zero for EncoderNormalizer if transformation is not needed (#949)
- Fix Aggregation and CompositeMetric resets (#949)
Changed
- Dropping Python 3.6 support, adding 3.10 support (#479)
- Refactored dataloader sampling - moved samplers to pytorch_forecasting.data.samplers module (#479; see the sketch after this list)
- Changed transformation format for Encoders to dict from tuple (#949)
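After the move, the sampler imports from the new module; the `batch_sampler="synchronized"` shortcut below follows the library's documented usage, with `training` assumed to be an existing `TimeSeriesDataSet`:

```python
from pytorch_forecasting.data.samplers import TimeSynchronizedBatchSampler

# or let the dataset wire the synchronized sampler up itself
train_dataloader = training.to_dataloader(
    train=True, batch_size=64, batch_sampler="synchronized"
)
```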
Contributors
- jdb78
Bugfixes
Adding N-HiTS network (N-BEATS successor)
Added
- Added new `N-HiTS` network that has consistently beaten `N-BEATS` (#890)
- Allow using torchmetrics as loss metrics (#776)
- Enable fitting `EncoderNormalizer()` with limited data history using the `max_length` argument (#782; see the sketch after this list)
- More flexible `MultiEmbedding()` with convenience `output_size` and `input_size` properties (#829)
- Fix concatenation of attention (#902)
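A brief sketch of the two conveniences above (the categorical names and embedding sizes are made-up values):

```python
from pytorch_forecasting.data import EncoderNormalizer
from pytorch_forecasting.models.nn import MultiEmbedding

# fit the normalizer on at most the last 500 encoder time steps (#782)
normalizer = EncoderNormalizer(max_length=500)

# cardinality -> embedding dimension per categorical (made-up numbers)
embedding = MultiEmbedding(
    embedding_sizes={"weekday": (7, 2), "month": (12, 3)},
    x_categoricals=["weekday", "month"],
)
embedding.output_size  # convenience properties added in #829
embedding.input_size
```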
Fixed
- Fix pip install via github (#798)
Contributors
- jdb78
- christy
- lukemerrick
- Seon82
Maintenance Release (26/09/2021)
Added
- Use target name instead of target number for logging metrics (#588)
- Optimizer can be initialized by passing string, class or function (#602)
- Add support for multiple outputs in Baseline model (#603)
- Added Optuna pruner as optional parameter in `TemporalFusionTransformer.optimize_hyperparameters` (#619; see the sketch after this list)
- Dropping support for Python 3.6 and starting support for Python 3.9 (#639)
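A sketch of the pruner and optimizer options, assuming `training`, `train_dataloader`, and `val_dataloader` already exist; keyword names besides `pruner` and `optimizer` follow common usage and may differ by version:

```python
import optuna.pruners
from pytorch_forecasting import TemporalFusionTransformer
from pytorch_forecasting.models.temporal_fusion_transformer.tuning import (
    optimize_hyperparameters,
)

# the optimizer may now be given as a string, class, or function (#602)
tft = TemporalFusionTransformer.from_dataset(training, optimizer="adam")

# Optuna pruner passed through the new optional parameter (#619)
study = optimize_hyperparameters(
    train_dataloader,
    val_dataloader,
    model_path="optuna_tft",
    n_trials=20,
    pruner=optuna.pruners.SuccessiveHalvingPruner(),
)
```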
Fixed
- Initialization of TemporalFusionTransformer with multiple targets but loss for only one target (#550)
- Added missing transformation of prediction for MLP (#602)
- Fixed logging hyperparameters (#688)
- Ensure MultiNormalizer fit state is detected (#681)
- Fix infinite loop in TimeDistributedEmbeddingBag (#672)
Contributors
- jdb78
- TKlerx
- chefPony
- eavae
- L0Z1K
Simplified API
Breaking changes
- Removed `dropout_categoricals` parameter from `TimeSeriesDataSet`. Use `categorical_encoders=dict(<variable_name>=NaNLabelEncoder(add_nan=True))` instead (#518)
- Renamed parameter `allow_missings` for `TimeSeriesDataSet` to `allow_missing_timesteps` (#518)
- Transparent handling of transformations. Forward methods should now call two new methods (#518):
  - `transform_output` to explicitly rescale the network outputs into the de-normalized space
  - `to_network_output` to create a dict-like named tuple. This allows tracing the modules with PyTorch's JIT. Only `prediction` is still required, which is the main network output.

  Example:

  ```python
  def forward(self, x):
      normalized_prediction = self.module(x)
      prediction = self.transform_output(
          prediction=normalized_prediction, target_scale=x["target_scale"]
      )
      return self.to_network_output(prediction=prediction)
  ```
Added
- Improved validation of input parameters of TimeSeriesDataSet (#518)
Fixed
Generic distribution loss(es)
Added
- Allow lists for multiple losses and normalizers (#405)
- Warn if normalization is with scale `< 1e-7` (#429)
- Allow usage of distribution losses in all settings (#434; see the sketch after this list)
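A sketch of a distribution loss in use; `DeepAR` and the `training` dataset are illustrative assumptions, not part of this release note:

```python
from pytorch_forecasting import DeepAR
from pytorch_forecasting.metrics import NormalDistributionLoss

# `training` is assumed to be an existing TimeSeriesDataSet
net = DeepAR.from_dataset(training, loss=NormalDistributionLoss())
```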
Fixed
- Fix issue when predicting and data is on different devices (#402)
- Fix non-iterable output (#404)
- Fix problem with moving data to CPU for multiple targets (#434)
Contributors
- jdb78
- domplexity
Simple models
Added
- Adding a filter functionality to the timeseries dataset (#329; see the sketch after this list)
- Add simple models such as LSTM, GRU and an MLP on the decoder (#380)
- Allow usage of any torch optimizer such as SGD (#380)
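A sketch of both conveniences, assuming `dataset` is an existing `TimeSeriesDataSet`; the filter condition is made up, and the lambda receives the dataset's index dataframe:

```python
from pytorch_forecasting import RecurrentNetwork

# keep only samples whose first predicted step lies after time index 100
subset = dataset.filter(lambda index: index.time_idx_first_prediction > 100)

# any torch optimizer can be used, here plain SGD via its string name;
# accepted spellings may vary by version, passing the class may also work
model = RecurrentNetwork.from_dataset(dataset, optimizer="sgd")
```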
Fixed
- Moving predictions to CPU to avoid running out of memory (#329)
- Correct determination of `output_size` for multi-target forecasting with the TemporalFusionTransformer (#328)
- Tqdm autonotebook fix to work outside of Jupyter (#338)
- Fix issue with yaml serialization for TensorboardLogger (#379)
Contributors
- jdb78
- JakeForsey
- vakker