Releases: keras-team/keras
Keras Release 2.6.0 RC3
Keras 2.6.0 RC3 fixes a security issue when loading Keras models via YAML, which could allow arbitrary code execution.
Keras Release 2.6.0 RC2
Keras 2.6.0 RC2 is a minor bug-fix release.
- Fix TextVectorization layer with output_sequence_length on unknown input shapes.
- Output int64 by default from Discretization layer.
- Fix serialization of Hashing layer.
- Add a more explicit error message for optimizer instance type checking.
Keras Release 2.6.0 RC1
Keras 2.6.0 RC1 is a minor bug-fix release.
- Pin the protobuf version to 3.9.2, the same version used by TensorFlow.
Keras Release 2.6.0 RC0
Keras 2.6.0 is the first release of the TensorFlow implementation of Keras in the present repo. The code under `tensorflow/python/keras` is considered legacy and will be removed in future releases (TF 2.7 or later). Any user who imports `tensorflow.python.keras` should update their code to the public `tf.keras` instead.
The API endpoints for `tf.keras` stay unchanged, but are now backed by the `keras` PIP package. All Keras-related PRs and issues should now be directed to the GitHub repository keras-team/keras.
For detailed release notes about `tf.keras` behavior changes, please see the TensorFlow release notes.
Keras 2.4.0
As previously announced, we have discontinued multi-backend Keras to refocus exclusively on the TensorFlow implementation of Keras.
In the future, we will develop the TensorFlow implementation of Keras in the present repo, at keras-team/keras. For the time being, it is being developed in tensorflow/tensorflow and distributed as `tensorflow.keras`. In that future, the `keras` package on PyPI will be the same as `tf.keras`.
This release (2.4.0) simply redirects all APIs in the standalone `keras` package to point to `tf.keras`. This helps address user confusion regarding differences and incompatibilities between `tf.keras` and the standalone `keras` package. There is now only one Keras: `tf.keras`.
- Note that this release may be breaking for some workflows when going from Keras 2.3.1 to 2.4.0. Test before upgrading.
- Note that we still recommend that you import Keras as `from tensorflow import keras`, rather than `import keras`, for the time being.
Keras 2.3.1
Keras 2.3.1 is a minor bug-fix release. In particular, it fixes an issue with using Keras models across multiple threads.
Changes
- Bug fixes
- Documentation fixes
- No API changes
- No breaking changes
Keras 2.3.0
Keras 2.3.0 is the first release of multi-backend Keras that supports TensorFlow 2.0. It maintains compatibility with TensorFlow 1.14 and 1.13, as well as with Theano and CNTK.
This release brings the API in sync with the tf.keras API as of TensorFlow 2.0. However, note that it does not support most TensorFlow 2.0 features, in particular eager execution. If you need these features, use `tf.keras`.
This is also the last major release of multi-backend Keras. Going forward, we recommend that users consider switching their Keras code to tf.keras in TensorFlow 2.0. It implements the same Keras 2.3.0 API (so switching should be as easy as changing the Keras import statements), but it has many advantages for TensorFlow users, such as support for eager execution, distribution, TPU training, and generally far better integration between low-level TensorFlow and high-level concepts like Layer and Model. It is also better maintained.
Development will focus on tf.keras going forward. We will keep maintaining multi-backend Keras over the next 6 months, but we will only be merging bug fixes. API changes will not be ported.
API changes
- Add `size(x)` to backend API.
- `add_metric` method added to Layer / Model (used in a similar way as `add_loss`, but for metrics), as well as the `metrics` property.
- Variables set as attributes of a Layer are now tracked in `layer.weights` (including `layer.trainable_weights` or `layer.non_trainable_weights` as appropriate).
- Layers set as attributes of a Layer are now tracked (so the weights/metrics/losses/etc. of a sublayer are tracked by parent layers). This behavior already existed for Model specifically and is now extended to all Layer subclasses.
- Introduce class-based losses (inheriting from the `Loss` base class). This enables losses to be parameterized via constructor arguments. Loss classes added: `MeanSquaredError`, `MeanAbsoluteError`, `MeanAbsolutePercentageError`, `MeanSquaredLogarithmicError`, `BinaryCrossentropy`, `CategoricalCrossentropy`, `SparseCategoricalCrossentropy`, `Hinge`, `SquaredHinge`, `CategoricalHinge`, `Poisson`, `LogCosh`, `KLDivergence`, `Huber`.
- Introduce class-based metrics (inheriting from the `Metric` base class). This enables metrics to be stateful (e.g. required to support AUC) and to be parameterized via constructor arguments. Metric classes added: `Accuracy`, `MeanSquaredError`, `Hinge`, `CategoricalHinge`, `SquaredHinge`, `FalsePositives`, `TruePositives`, `FalseNegatives`, `TrueNegatives`, `BinaryAccuracy`, `CategoricalAccuracy`, `TopKCategoricalAccuracy`, `LogCoshError`, `Poisson`, `KLDivergence`, `CosineSimilarity`, `MeanAbsoluteError`, `MeanAbsolutePercentageError`, `MeanSquaredLogarithmicError`, `RootMeanSquaredError`, `BinaryCrossentropy`, `CategoricalCrossentropy`, `Precision`, `Recall`, `AUC`, `SparseCategoricalAccuracy`, `SparseTopKCategoricalAccuracy`, `SparseCategoricalCrossentropy`.
- Add `reset_metrics` argument to `train_on_batch` and `test_on_batch`. Set this to `False` to maintain metric state across different batches when writing lower-level training/evaluation loops. If `True` (the default), the metric value reported as output of the method call will be the value for the current batch only.
- Add `model.reset_metrics()` method to Model. Use this at the start of an epoch to clear metric state when writing lower-level training/evaluation loops.
- Rename `lr` to `learning_rate` for all optimizers.
- Deprecate argument `decay` for all optimizers. For learning rate decay, use `LearningRateSchedule` objects in `tf.keras`.
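The stateful-metric pattern introduced above can be illustrated with a short pure-Python sketch. The `update_state()` / `result()` / `reset_states()` method names follow the class-based Metric API; the class itself is a hypothetical stand-in, not the Keras implementation:

```python
class RunningMeanAbsoluteError:
    """Minimal stateful metric: state accumulates across batches,
    mirroring the update_state()/result()/reset_states() contract
    of the class-based Metric API."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update_state(self, y_true, y_pred):
        # Accumulate absolute errors instead of recomputing per batch.
        for t, p in zip(y_true, y_pred):
            self.total += abs(t - p)
            self.count += 1

    def result(self):
        return self.total / self.count if self.count else 0.0

    def reset_states(self):
        self.total, self.count = 0.0, 0


metric = RunningMeanAbsoluteError()
metric.update_state([1.0, 2.0], [1.5, 2.0])  # batch 1
metric.update_state([3.0], [1.0])            # batch 2
running = metric.result()                    # mean over all 3 samples: 2.5 / 3
metric.reset_states()                        # e.g. at the start of a new epoch
```

This is also the behavior the `reset_metrics` argument controls: passing `reset_metrics=False` to `train_on_batch` is analogous to calling `update_state` repeatedly without `reset_states` in between.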
Breaking changes
- TensorBoard callback:
  - `batch_size` argument is deprecated (ignored) when used with TF 2.0.
  - `write_grads` is deprecated (ignored) when used with TF 2.0.
  - `embeddings_freq`, `embeddings_layer_names`, `embeddings_metadata`, `embeddings_data` are deprecated (ignored) when used with TF 2.0.
- Change loss aggregation mechanism to sum over batch size. This may change reported loss values if you were using sample weighting or class weighting. You can achieve the old behavior by making sure your sample weights sum to 1 for each batch.
- Metrics and losses are now reported under the exact name specified by the user (e.g. if you pass `metrics=['acc']`, your metric will be reported under the string "acc", not "accuracy"; inversely, `metrics=['accuracy']` will be reported under the string "accuracy").
- Change default recurrent activation to `sigmoid` (from `hard_sigmoid`) in all RNN layers.
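The last change is easy to visualize numerically. Below is a small sketch comparing the two recurrent activations; the formulas follow the standard Keras definitions (`hard_sigmoid(x)` clips `0.2 * x + 0.5` to `[0, 1]`):

```python
import math


def sigmoid(x):
    # New default recurrent activation for all RNN layers in 2.3.0.
    return 1.0 / (1.0 + math.exp(-x))


def hard_sigmoid(x):
    # Old default: a cheap piecewise-linear approximation of sigmoid.
    return max(0.0, min(1.0, 0.2 * x + 0.5))


# The two agree at 0 but diverge toward the saturation regions, which is
# why weights trained under one default can behave slightly differently
# when run under the other.
samples = {x: (sigmoid(x), hard_sigmoid(x)) for x in (-3.0, 0.0, 1.0, 3.0)}
```

A practical consequence: loading pre-2.3.0 RNN weights into a layer using the new default will not reproduce the old numerics exactly; if that matters, pass `recurrent_activation='hard_sigmoid'` explicitly.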
Keras 2.2.5
Keras 2.2.5 is the last release of Keras that implements the 2.2.* API. It is the last release to only support TensorFlow 1 (as well as Theano and CNTK).
The next release will be 2.3.0, which makes significant API changes and adds support for TensorFlow 2.0. The 2.3.0 release will be the last major release of multi-backend Keras. Multi-backend Keras is superseded by `tf.keras`.
At this time, we recommend that Keras users who use multi-backend Keras with the TensorFlow backend switch to `tf.keras` in TensorFlow 2.0. `tf.keras` is better maintained and has better integration with TensorFlow features.
API Changes
- Add new Applications: `ResNet101`, `ResNet152`, `ResNet50V2`, `ResNet101V2`, `ResNet152V2`.
- Callbacks: enable callbacks to be passed in `evaluate` and `predict`.
  - Add `callbacks` argument (list of callback instances) in `evaluate` and `predict`.
  - Add callback methods `on_train_batch_begin`, `on_train_batch_end`, `on_test_batch_begin`, `on_test_batch_end`, `on_predict_batch_begin`, `on_predict_batch_end`, as well as `on_test_begin`, `on_test_end`, `on_predict_begin`, `on_predict_end`. Methods `on_batch_begin` and `on_batch_end` are now aliases for `on_train_batch_begin` and `on_train_batch_end`.
- Allow file pointers in `save_model` and `load_model` (in place of the filepath).
- Add `name` argument in Sequential constructor.
- Add `validation_freq` argument in `fit`, controlling the frequency of validation (e.g. setting `validation_freq=3` would run validation every 3 epochs).
- Allow Python generators (or Keras `Sequence` objects) to be passed in `fit`, `evaluate`, and `predict`, instead of having to use `*_generator` methods.
  - Add generator-related arguments `max_queue_size`, `workers`, `use_multiprocessing` to these methods.
- Add `dilation_rate` argument in layer `DepthwiseConv2D`.
- MaxNorm constraint: rename argument `m` to `max_value`.
- Add `dtype` argument in base layer (default dtype for the layer's weights).
- Add Google Cloud Storage support for `model.save_weights` and `model.load_weights`.
- Add JSON-serialization to the `Tokenizer` class.
- Add `H5Dict` and `model_to_dot` to utils.
- Allow the default Keras path to be specified at startup via the environment variable `KERAS_HOME`.
- Add arguments `expand_nested`, `dpi` to `plot_model`.
- Add `update_sub`, `stack`, `cumsum`, `cumprod`, `foldl`, `foldr` to CNTK backend.
- Add `merge_repeated` argument to `ctc_decode` in TensorFlow backend.
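The order in which the new batch-level hooks fire can be sketched with a toy driver loop. `LoggingCallback` and `toy_evaluate` below are hypothetical stand-ins used only to show the call sequence; the real Keras callback methods also receive a `logs` dict:

```python
class LoggingCallback:
    """Hypothetical callback recording which hooks fire, in order."""

    def __init__(self):
        self.events = []

    def on_test_begin(self):
        self.events.append("test_begin")

    def on_test_batch_begin(self, batch):
        self.events.append(f"test_batch_begin:{batch}")

    def on_test_batch_end(self, batch):
        self.events.append(f"test_batch_end:{batch}")

    def on_test_end(self):
        self.events.append("test_end")


def toy_evaluate(batches, callbacks):
    # Sketch of how evaluate() could drive the hooks added in 2.2.5:
    # one begin/end pair around the whole run, one pair per batch.
    for cb in callbacks:
        cb.on_test_begin()
    for i, batch in enumerate(batches):
        for cb in callbacks:
            cb.on_test_batch_begin(i)
        # ... compute per-batch metrics on `batch` here ...
        for cb in callbacks:
            cb.on_test_batch_end(i)
    for cb in callbacks:
        cb.on_test_end()


cb = LoggingCallback()
toy_evaluate([[1, 2], [3, 4]], callbacks=[cb])
```

The `on_predict_*` family follows the same shape for `predict`, and `on_train_batch_*` for training loops.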
Thanks to the 89 committers who contributed code to this release!
Keras 2.2.4
This is a bugfix release, addressing two issues:
- Ability to save a model when a file with the same name already exists.
- Issue with loading legacy config files for the `Sequential` model.
See here for the changelog since 2.2.2.
Keras 2.2.3
Areas of improvement
- API completeness & usability improvements
- Bug fixes
- Documentation improvements
API changes
- Keras models can now be safely pickled.
- Consolidate the functionality of the activation layers `ThresholdedReLU` and `LeakyReLU` into the `ReLU` layer.
- As a result, the `ReLU` layer now takes new arguments `negative_slope` and `threshold`, and the `relu` function in the backend takes a new `threshold` argument.
- Add `update_freq` argument in `TensorBoard` callback, controlling how often to write TensorBoard logs.
- Add the `exponential` function to `keras.activations`.
- Add `data_format` argument in all 4 `Pooling1D` layers.
- Add `interpolation` argument in `UpSampling2D` layer and in `resize_images` backend function, supporting modes `"nearest"` (previous behavior, and new default) and `"bilinear"` (new).
- Add `dilation_rate` argument in `Conv2DTranspose` layer and in `conv2d_transpose` backend function.
- The `LearningRateScheduler` now receives the `lr` key as part of the `logs` argument in `on_epoch_end` (current value of the learning rate).
- Make `GlobalAveragePooling1D` layer support masking.
- The `filepath` argument of `save_model` and `model.save()` can now be an `h5py.Group` instance.
- Add argument `restore_best_weights` to `EarlyStopping` callback (optionally reverts to the weights that obtained the highest monitored score value).
- Add `dtype` argument to `keras.utils.to_categorical`.
- Support `run_options` and `run_metadata` as optional session arguments in `model.compile()` for the TensorFlow backend.
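The consolidated `ReLU` semantics can be sketched in a few lines of plain Python for scalar inputs (the real layer and backend function operate on tensors; this follows the documented piecewise definition):

```python
def relu(x, negative_slope=0.0, max_value=None, threshold=0.0):
    """Generalized ReLU, scalar sketch of the consolidated layer:
    values at or above the threshold pass through (optionally
    clipped at max_value); below it, the output leaks with
    negative_slope, measured relative to the threshold."""
    if x >= threshold:
        y = x
    else:
        y = negative_slope * (x - threshold)
    if max_value is not None:
        y = min(y, max_value)
    return y


plain = relu(2.0)                        # ordinary ReLU passthrough
leaky = relu(-2.0, negative_slope=0.1)   # LeakyReLU-style leak
thresholded = relu(0.5, threshold=1.0)   # ThresholdedReLU-style cutoff
clipped = relu(10.0, max_value=6.0)      # capped, as in ReLU6
```

With `negative_slope=0` and `threshold=0` this reduces to the classic ReLU, which is what makes the one-layer consolidation possible.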
Breaking changes
- Modify the return value of `Sequential.get_config()`. Previously, the return value was a list of the config dictionaries of the layers of the model. Now, the return value is a dictionary with keys `layers`, `name`, and an optional key `build_input_shape`. The old config is equivalent to `new_config['layers']`. This makes the output of `get_config` consistent across all model classes.
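Code that consumed the old list-style return value can be made tolerant of both shapes with a small compatibility helper (`layer_configs` is a hypothetical shim, not part of Keras):

```python
def layer_configs(config):
    """Return the list of per-layer config dicts from a Sequential
    config, whether it is the old bare list or the new dict with
    'layers' (and 'name') keys."""
    if isinstance(config, dict):
        return config["layers"]
    return config  # old format: already a list of layer configs


# New-style config (2.2.3+): a dict with 'layers' and 'name' keys.
new_config = {"name": "sequential_1",
              "layers": [{"class_name": "Dense", "config": {"units": 4}}]}
# Old-style config: a bare list of layer config dicts.
old_config = [{"class_name": "Dense", "config": {"units": 4}}]
```

Both shapes yield the same per-layer list, so downstream code that iterates over layer configs keeps working across the version boundary.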
Credits
Thanks to our 38 contributors whose commits are featured in this release:
@BertrandDechoux, @ChrisGll, @Dref360, @JamesHinshelwood, @MarcoAndreaBuchmann, @ageron, @alfasst, @blue-atom, @chasebrignac, @cshubhamrao, @danFromTelAviv, @datumbox, @farizrahman4u, @fchollet, @fuzzythecat, @gabrieldemarmiesse, @hadifar, @heytitle, @hsgkim, @jankrepl, @joelthchao, @knightXun, @kouml, @linjinjin123, @lvapeab, @nikoladze, @ozabluda, @qlzh727, @roywei, @rvinas, @sriyogesh94, @tacaswell, @taehoonlee, @tedyu, @xuhdev, @yanboliang, @yongzx, @yuanxiaosc