Releases: lmnt-com/haste

Haste 0.5.0-rc0

05 Jul 00:52
Pre-release (tag: v0.5.0-rc0)

Bump version to 0.5.0-rc0 in preparation for PyPI release.

Haste 0.4.0

13 Apr 19:16

Added

  • New layer-normalized GRU layer (LayerNormGRU).
  • New IndRNN layer.
  • CPU support for all PyTorch layers.
  • Support for building the PyTorch API on Windows.
  • state argument on PyTorch layers to specify the initial state.
  • Weight transforms in the TensorFlow API (see docs for details).
  • get_weights method to extract weights from RNN layers (TensorFlow).
  • to_native_weights and from_native_weights in the PyTorch API for LSTM and GRU layers (see the sketch after this list).
  • Validation tests to check for correctness.
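
A minimal sketch of the new PyTorch-facing pieces (the state argument and the native-weight converters), as mentioned above. The keyword name, the state tuple's shape, and the return layout of to_native_weights are assumptions inferred from these notes, not verbatim API documentation.

```python
import torch
import haste_pytorch as haste

lstm = haste.LSTM(input_size=128, hidden_size=256)

x = torch.rand(25, 4, 128)    # [time, batch, input_size]
h0 = torch.zeros(1, 4, 256)   # assumed initial-state shape: [1, batch, hidden_size]
c0 = torch.zeros(1, 4, 256)

# New in 0.4.0: supply an initial state instead of starting from zeros.
y, state = lstm(x, state=(h0, c0))

# New in 0.4.0: export weights in a torch.nn.LSTM-compatible layout
# (exact return structure assumed; see the project docs).
native_weights = lstm.to_native_weights()
```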

Changed

  • Performance improvements to the GRU layer.
  • BREAKING CHANGE: PyTorch layers are now created on the CPU by default instead of the GPU (see the sketch after this list).
  • BREAKING CHANGE: h must not be transposed before passing it to gru::BackwardPass::Iterate.
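
Because layers now start on the CPU, GPU execution must be requested explicitly. A minimal sketch; the device handling is standard torch.nn.Module behavior, and nothing haste-specific is assumed beyond the constructor:

```python
import haste_pytorch as haste

gru = haste.GRU(input_size=128, hidden_size=256)  # 0.4.0: created on the CPU by default

# Opt back in to the GPU explicitly, as with any torch.nn.Module.
gru = gru.cuda()
```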

Fixed

  • Multi-GPU training failure in TensorFlow caused by invalid sharing of a cublasHandle_t.

Haste 0.3.0

09 Mar 23:14

Added

  • PyTorch support.
  • New layer-normalized LSTM layer (LayerNormLSTM); see the sketch after this list.
  • New fused layer normalization layer.
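
A minimal sketch of the newly added PyTorch API using the LayerNormLSTM layer from this release. The constructor arguments and return convention are assumptions based on these notes; note that at 0.3.0 the layers required a GPU (CPU support arrived in 0.4.0).

```python
import torch
import haste_pytorch as haste

# Layer-normalized LSTM, new in 0.3.0. GPU-only at this point;
# CPU support for the PyTorch layers arrived in 0.4.0.
rnn = haste.LayerNormLSTM(input_size=64, hidden_size=128).cuda()

x = torch.rand(50, 8, 64).cuda()  # [time, batch, input_size]
y, state = rnn(x)
```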

Fixed

  • Occasional uninitialized memory use in the TensorFlow LSTM implementation.

Haste 0.2.0

24 Feb 19:43

This release focuses on LSTM performance.

Added

  • New time-fused LSTM API (lstm::ForwardPass::Run, lstm::BackwardPass::Run), which processes an entire sequence in one call rather than one Iterate call per timestep.
  • Benchmarking code to evaluate the performance of an implementation.

Changed

  • Performance improvements to the existing iterative LSTM API.
  • BREAKING CHANGE: h must not be transposed before passing it to lstm::BackwardPass::Iterate.
  • BREAKING CHANGE: dv no longer needs to be allocated; v must be passed to lstm::BackwardPass::Iterate instead.

Haste 0.1.0

29 Jan 18:57

Initial release.