
Releases: keras-team/keras

Keras Release 2.6.0 RC3

04 Aug 20:53
Pre-release

Keras Release 2.6.0 RC3 fixes a security issue with loading Keras models via YAML, which could allow arbitrary code execution.

Keras Release 2.6.0 RC2

26 Jul 21:09
Pre-release

Keras 2.6.0 RC2 is a minor bug-fix release.

  1. Fix TextVectorization layer with output_sequence_length on unknown input shapes.
  2. Output int64 by default from Discretization layer.
  3. Fix serialization of Hashing layer.
  4. Add more explicit error message for instance type checking of optimizer.
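The first fix concerns output_sequence_length, which guarantees that every output sequence is padded or truncated to a fixed length even when the input shape is unknown. A plain-Python sketch of that behavior (a hypothetical helper for illustration, not the Keras implementation):

```python
# Hypothetical helper sketching what output_sequence_length guarantees:
# every token sequence comes out at a fixed length, padded (with 0) or
# truncated as needed, regardless of the input's shape.

def set_sequence_length(tokens, output_sequence_length, pad=0):
    tokens = tokens[:output_sequence_length]  # truncate long inputs
    # pad short inputs up to the requested length
    return tokens + [pad] * (output_sequence_length - len(tokens))

print(set_sequence_length([3, 1, 4, 1, 5], 3))  # [3, 1, 4]
print(set_sequence_length([3, 1], 4))           # [3, 1, 0, 0]
```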

Keras Release 2.6.0 RC1

26 Jul 21:03
Pre-release

Keras 2.6.0 RC1 is a minor bug-fix release.

  1. Pin the Protobuf version to 3.9.2, which is the same as the version used by TensorFlow.

Keras Release 2.6.0 RC0

26 Jul 20:51
Pre-release

Keras 2.6.0 is the first release of the TensorFlow implementation of Keras in the present repository.

The code under tensorflow/python/keras is considered legacy and will be removed in a future release (TF 2.7 or later). If you import tensorflow.python.keras, please update your code to use the public tf.keras instead.

The API endpoints for tf.keras stay unchanged, but are now backed by the keras pip package. All Keras-related PRs and issues should now be directed to the keras-team/keras GitHub repository.

For detailed release notes about tf.keras behavior changes, please refer to the TensorFlow release notes.

Keras 2.4.0

17 Jun 22:22
b5cb82c

As previously announced, we have discontinued multi-backend Keras to refocus exclusively on the TensorFlow implementation of Keras.

In the future, we will develop the TensorFlow implementation of Keras in the present repo, at keras-team/keras. For the time being, it is being developed in tensorflow/tensorflow and distributed as tensorflow.keras. At that point, the keras package on PyPI will be the same as tf.keras.

This release (2.4.0) simply redirects all APIs in the standalone keras package to point to tf.keras. This helps address user confusion regarding differences and incompatibilities between tf.keras and the standalone keras package. There is now only one Keras: tf.keras.

  • Note that this release may be breaking for some workflows when going from Keras 2.3.1 to 2.4.0. Test before upgrading.
  • Note that we still recommend that you import Keras as from tensorflow import keras, rather than import keras, for the time being.

Keras 2.3.1

07 Oct 20:06

Keras 2.3.1 is a minor bug-fix release. In particular, it fixes an issue with using Keras models across multiple threads.

Changes

  • Bug fixes
  • Documentation fixes
  • No API changes
  • No breaking changes

Keras 2.3.0

17 Sep 17:09

Keras 2.3.0 is the first release of multi-backend Keras that supports TensorFlow 2.0. It maintains compatibility with TensorFlow 1.14, 1.13, as well as Theano and CNTK.

This release brings the API in sync with the tf.keras API as of TensorFlow 2.0. Note, however, that it does not support most TensorFlow 2.0 features, in particular eager execution. If you need these features, use tf.keras.

This is also the last major release of multi-backend Keras. Going forward, we recommend that users consider switching their Keras code to tf.keras in TensorFlow 2.0. It implements the same Keras 2.3.0 API (so switching should be as easy as changing the Keras import statements), but it has many advantages for TensorFlow users, such as support for eager execution, distribution, TPU training, and generally far better integration between low-level TensorFlow and high-level concepts like Layer and Model. It is also better maintained.

Development will focus on tf.keras going forward. We will keep maintaining multi-backend Keras over the next 6 months, but we will only be merging bug fixes. API changes will not be ported.

API changes

  • Add size(x) to backend API.
  • add_metric method added to Layer / Model (used in a similar way as add_loss, but for metrics), as well as the metrics property.
  • Variables set as attributes of a Layer are now tracked in layer.weights (including layer.trainable_weights or layer.non_trainable_weights as appropriate).
  • Layers set as attributes of a Layer are now tracked (so the weights/metrics/losses/etc of a sublayer are tracked by parent layers). This behavior already existed for Model specifically and is now extended to all Layer subclasses.
  • Introduce class-based losses (inheriting from Loss base class). This enables losses to be parameterized via constructor arguments. Loss classes added:
    • MeanSquaredError
    • MeanAbsoluteError
    • MeanAbsolutePercentageError
    • MeanSquaredLogarithmicError
    • BinaryCrossentropy
    • CategoricalCrossentropy
    • SparseCategoricalCrossentropy
    • Hinge
    • SquaredHinge
    • CategoricalHinge
    • Poisson
    • LogCosh
    • KLDivergence
    • Huber
  • Introduce class-based metrics (inheriting from Metric base class). This enables metrics to be stateful (e.g. as required to support AUC) and to be parameterized via constructor arguments. Metric classes added:
    • Accuracy
    • MeanSquaredError
    • Hinge
    • CategoricalHinge
    • SquaredHinge
    • FalsePositives
    • TruePositives
    • FalseNegatives
    • TrueNegatives
    • BinaryAccuracy
    • CategoricalAccuracy
    • TopKCategoricalAccuracy
    • LogCoshError
    • Poisson
    • KLDivergence
    • CosineSimilarity
    • MeanAbsoluteError
    • MeanAbsolutePercentageError
    • MeanSquaredLogarithmicError
    • RootMeanSquaredError
    • BinaryCrossentropy
    • CategoricalCrossentropy
    • Precision
    • Recall
    • AUC
    • SparseCategoricalAccuracy
    • SparseTopKCategoricalAccuracy
    • SparseCategoricalCrossentropy
  • Add reset_metrics argument to train_on_batch and test_on_batch. Set this to False to maintain metric state across different batches when writing lower-level training/evaluation loops. If True, the metric value reported as output of the method call will be the value for the current batch only.
  • Add model.reset_metrics() method to Model. Use this at the start of an epoch to clear metric state when writing lower-level training/evaluation loops.
  • Rename lr to learning_rate for all optimizers.
  • Deprecate argument decay for all optimizers. For learning rate decay, use LearningRateSchedule objects in tf.keras.
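The class-based losses and stateful metrics introduced above can be sketched in plain Python. This is an illustrative scalar version only, with no Keras dependency; the class and method names mirror the API described in the notes, but the real Keras classes operate on tensors.

```python
# Plain-Python sketch of the class-based Loss and stateful Metric
# patterns introduced in 2.3.0 (illustrative only, not the real code).

class Loss:
    """Base class: subclasses implement call(y_true, y_pred)."""
    def __call__(self, y_true, y_pred):
        return self.call(y_true, y_pred)

class Huber(Loss):
    """Parameterized via a constructor argument, as described above."""
    def __init__(self, delta=1.0):
        self.delta = delta

    def call(self, y_true, y_pred):
        total = 0.0
        for t, p in zip(y_true, y_pred):
            err = abs(t - p)
            if err <= self.delta:
                total += 0.5 * err ** 2          # quadratic region
            else:
                total += self.delta * (err - 0.5 * self.delta)  # linear region
        return total / len(y_true)

class BinaryAccuracy:
    """Stateful metric: the update_state / result / reset_states protocol."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.reset_states()

    def update_state(self, y_true, y_pred):
        for t, p in zip(y_true, y_pred):
            self.correct += int((p > self.threshold) == bool(t))
            self.total += 1

    def result(self):
        return self.correct / self.total

    def reset_states(self):
        self.correct = 0
        self.total = 0

loss = Huber(delta=1.0)
print(loss([0.0], [2.0]))             # 1.5

acc = BinaryAccuracy()
acc.update_state([1, 0], [0.9, 0.8])  # 1 of 2 correct
acc.update_state([1, 1], [0.7, 0.6])  # state accumulates: 3 of 4
print(acc.result())                   # 0.75
acc.reset_states()                    # what model.reset_metrics() does per metric
```

Accumulating across the two update_state calls, rather than reporting each batch in isolation, is the same behavior that reset_metrics=False gives in train_on_batch and test_on_batch.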

Breaking changes

  • TensorBoard callback:
    • batch_size argument is deprecated (ignored) when used with TF 2.0
    • write_grads is deprecated (ignored) when used with TF 2.0
    • embeddings_freq, embeddings_layer_names, embeddings_metadata, embeddings_data are deprecated (ignored) when used with TF 2.0
  • Change loss aggregation mechanism to sum over batch size. This may change reported loss values if you were using sample weighting or class weighting. You can achieve the old behavior by making sure your sample weights sum to 1 for each batch.
  • Metrics and losses are now reported under the exact name specified by the user (e.g. if you pass metrics=['acc'], your metric will be reported under the string "acc", not "accuracy"; conversely, metrics=['accuracy'] will be reported under the string "accuracy").
  • Change default recurrent activation to sigmoid (from hard_sigmoid) in all RNN layers.

Keras 2.2.5

22 Aug 16:43

Keras 2.2.5 is the last release of Keras that implements the 2.2.* API. It is the last release to only support TensorFlow 1 (as well as Theano and CNTK).

The next release will be 2.3.0, which makes significant API changes and adds support for TensorFlow 2.0. The 2.3.0 release will be the last major release of multi-backend Keras. Multi-backend Keras is superseded by tf.keras.

At this time, we recommend that Keras users who use multi-backend Keras with the TensorFlow backend switch to tf.keras in TensorFlow 2.0. tf.keras is better maintained and has better integration with TensorFlow features.

API Changes

  • Add new Applications: ResNet101, ResNet152, ResNet50V2, ResNet101V2, ResNet152V2.
  • Callbacks: enable callbacks to be passed in evaluate and predict.
    • Add callbacks argument (list of callback instances) in evaluate and predict.
    • Add callback methods on_train_batch_begin, on_train_batch_end, on_test_batch_begin, on_test_batch_end, on_predict_batch_begin, on_predict_batch_end, as well as on_test_begin, on_test_end, on_predict_begin, on_predict_end. Methods on_batch_begin and on_batch_end are now aliases for on_train_batch_begin and on_train_batch_end.
  • Allow file pointers in save_model and load_model (in place of the filepath)
  • Add name argument in Sequential constructor
  • Add validation_freq argument in fit, controlling the frequency of validation (e.g. setting validation_freq=3 would run validation every 3 epochs)
  • Allow Python generators (or Keras Sequence objects) to be passed in fit, evaluate, and predict, instead of having to use *_generator methods.
    • Add generator-related arguments max_queue_size, workers, use_multiprocessing to these methods.
  • Add dilation_rate argument in layer DepthwiseConv2D.
  • MaxNorm constraint: rename argument m to max_value.
  • Add dtype argument in base layer (default dtype for layer's weights).
  • Add Google Cloud Storage support for model.save_weights and model.load_weights.
  • Add JSON-serialization to the Tokenizer class.
  • Add H5Dict and model_to_dot to utils.
  • Allow default Keras path to be specified at startup via environment variable KERAS_HOME.
  • Add arguments expand_nested, dpi to plot_model.
  • Add update_sub, stack, cumsum, cumprod, foldl, foldr to the CNTK backend.
  • Add merge_repeated argument to ctc_decode in the TensorFlow backend.
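The new per-batch callback hooks follow a simple dispatch pattern. The plain-Python sketch below mimics how a training loop would invoke them; it is illustrative only (real Keras callbacks also receive a logs dict carrying metric values, and fit/evaluate/predict do the dispatching internally).

```python
# Plain-Python sketch of the per-batch callback hooks added in 2.2.5
# (illustrative; not the Keras implementation).

class Callback:
    def on_train_batch_begin(self, batch, logs=None): pass
    def on_train_batch_end(self, batch, logs=None): pass
    def on_test_begin(self, logs=None): pass
    def on_test_end(self, logs=None): pass

class BatchLogger(Callback):
    """Records each hook invocation, in order."""
    def __init__(self):
        self.events = []

    def on_train_batch_begin(self, batch, logs=None):
        self.events.append(("begin", batch))

    def on_train_batch_end(self, batch, logs=None):
        self.events.append(("end", batch))

def run_training(callbacks, num_batches):
    """Mimics how a training loop dispatches the hooks for each batch."""
    for batch in range(num_batches):
        for cb in callbacks:
            cb.on_train_batch_begin(batch)
        # ... the actual train step would go here ...
        for cb in callbacks:
            cb.on_train_batch_end(batch)

logger = BatchLogger()
run_training([logger], num_batches=2)
print(logger.events)
# [('begin', 0), ('end', 0), ('begin', 1), ('end', 1)]
```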

Thanks to the 89 committers who contributed code to this release!

Keras 2.2.4

03 Oct 20:58

This is a bugfix release, addressing two issues:

  • Ability to save a model when a file with the same name already exists.
  • Issue with loading legacy config files for the Sequential model.

See here for the changelog since 2.2.2.

Keras 2.2.3

01 Oct 23:39

Areas of improvement

  • API completeness & usability improvements
  • Bug fixes
  • Documentation improvements

API changes

  • Keras models can now be safely pickled.
  • Consolidate the functionality of the activation layers ThresholdedReLU and LeakyReLU into the ReLU layer.
  • As a result, the ReLU layer now takes new arguments negative_slope and threshold, and the relu function in the backend takes a new threshold argument.
  • Add update_freq argument in TensorBoard callback, controlling how often to write TensorBoard logs.
  • Add the exponential function to keras.activations.
  • Add data_format argument in all 4 Pooling1D layers.
  • Add interpolation argument in UpSampling2D layer and in resize_images backend function, supporting modes "nearest" (previous behavior, and new default) and "bilinear" (new).
  • Add dilation_rate argument in Conv2DTranspose layer and in conv2d_transpose backend function.
  • The LearningRateScheduler now receives the lr key as part of the logs argument in on_epoch_end (current value of the learning rate).
  • Make GlobalAveragePooling1D layer support masking.
  • The filepath argument of save_model and model.save() can now be a h5py.Group instance.
  • Add argument restore_best_weights to EarlyStopping callback (optionally reverts to the weights that obtained the highest monitored score value).
  • Add dtype argument to keras.utils.to_categorical.
  • Support run_options and run_metadata as optional session arguments in model.compile() for the TensorFlow backend.
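The consolidated ReLU semantics can be sketched as a scalar function. This is an illustrative plain-Python version only; argument names follow the ReLU layer (the backend relu function names the slope argument alpha), and the real implementation is tensor-based.

```python
# Scalar sketch of the consolidated ReLU semantics from 2.2.3
# (illustrative only; the real backend function operates on tensors).

def relu(x, negative_slope=0.0, threshold=0.0, max_value=None):
    if max_value is not None and x >= max_value:
        return max_value                      # clip at max_value
    if x >= threshold:
        return x                              # identity region
    return negative_slope * (x - threshold)   # leaky region below threshold

print(relu(2.0))                                      # 2.0 (standard ReLU)
print(relu(-1.0, negative_slope=0.1))                 # -0.1 (LeakyReLU behavior)
print(relu(5.0, max_value=3.0))                       # 3.0
print(relu(0.5, negative_slope=0.2, threshold=1.0))   # -0.1
```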

Breaking changes

  • Modify the return value of Sequential.get_config(). Previously, the return value was a list of the config dictionaries of the layers of the model. Now, the return value is a dictionary with keys layers, name, and an optional key build_input_shape. The old config is equivalent to new_config['layers']. This makes the output of get_config consistent across all model classes.
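The shape change can be illustrated with plain dicts (the layer configs below are hypothetical, for illustration only):

```python
# Sketch of the Sequential.get_config() return-value change described above.
# Hypothetical layer configs; real configs carry many more fields.

old_config = [
    {"class_name": "Dense", "config": {"units": 4}},
    {"class_name": "Dense", "config": {"units": 1}},
]

new_config = {
    "name": "sequential_1",
    "layers": old_config,  # the old list-of-dicts config lives under "layers"
    # "build_input_shape" is an optional third key, present once built
}

assert new_config["layers"] == old_config   # old config == new_config['layers']
print(sorted(new_config.keys()))            # ['layers', 'name']
```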

Credits

Thanks to our 38 contributors whose commits are featured in this release:

@BertrandDechoux, @ChrisGll, @Dref360, @JamesHinshelwood, @MarcoAndreaBuchmann, @ageron, @alfasst, @blue-atom, @chasebrignac, @cshubhamrao, @danFromTelAviv, @datumbox, @farizrahman4u, @fchollet, @fuzzythecat, @gabrieldemarmiesse, @hadifar, @heytitle, @hsgkim, @jankrepl, @joelthchao, @knightXun, @kouml, @linjinjin123, @lvapeab, @nikoladze, @ozabluda, @qlzh727, @roywei, @rvinas, @sriyogesh94, @tacaswell, @taehoonlee, @tedyu, @xuhdev, @yanboliang, @yongzx, @yuanxiaosc