Releases: keras-team/keras

Keras 2.0.9

01 Nov 21:01

Areas of improvement

  • RNN improvements:
    • Refactor RNN layers to rely on atomic RNN cells. This makes the creation of custom RNNs very simple and user-friendly, via the RNN base class (see the sketch after this list).
    • Add ability to create new RNN cells by stacking a list of cells, allowing for efficient stacked RNNs.
    • Add CuDNNLSTM and CuDNNGRU layers, backed by NVIDIA's cuDNN library, for fast GPU training & inference.
    • Add RNN Sequence-to-sequence example script.
    • Add constants argument in RNN's call method, making RNN attention easier to implement.
  • Easier multi-GPU data parallelism via keras.utils.multi_gpu_model.
  • Bug fixes & performance improvements (in particular, native support for NCHW data layout in TensorFlow).
  • Documentation improvements and examples improvements.
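
A minimal sketch of the new cell-based API, adapted from the RNN docstring (MinimalRNNCell is an illustrative cell, not a built-in): a cell is simply a layer with a call(inputs, states) method and a state_size attribute, and the RNN base class turns it into a recurrent layer.

    import keras
    from keras import backend as K

    class MinimalRNNCell(keras.layers.Layer):
        def __init__(self, units, **kwargs):
            self.units = units
            self.state_size = units  # required by the RNN base class
            super(MinimalRNNCell, self).__init__(**kwargs)

        def build(self, input_shape):
            self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
                                          initializer='uniform', name='kernel')
            self.recurrent_kernel = self.add_weight(
                shape=(self.units, self.units),
                initializer='uniform', name='recurrent_kernel')
            self.built = True

        def call(self, inputs, states):
            prev_output = states[0]
            output = (K.dot(inputs, self.kernel)
                      + K.dot(prev_output, self.recurrent_kernel))
            return output, [output]

    # Wrap the cell in the RNN base class. Passing a list of cells
    # instead of a single cell yields an efficient stacked RNN.
    x = keras.layers.Input(shape=(None, 5))
    y = keras.layers.RNN(MinimalRNNCell(32))(x)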

API changes

  • Add "fashion mnist" dataset as keras.datasets.fashion_mnist.load_data()
  • Add Minimum merge layer as keras.layers.Minimum (class) and keras.layers.minimum(inputs) (function).
  • Add InceptionResNetV2 to keras.applications.
  • Support bool variables in TensorFlow backend.
  • Add dilation to SeparableConv2D.
  • Add support for dynamic noise_shape in Dropout.
  • Add keras.layers.RNN() base class for batch-level RNNs (used to implement custom RNN layers from a cell class).
  • Add keras.layers.StackedRNNCells() layer wrapper, used to stack a list of RNN cells into a single cell.
  • Add CuDNNLSTM and CuDNNGRU layers.
  • Deprecate implementation=0 for RNN layers.
  • The Keras progbar now reports time taken for each past epoch, and average time per step.
  • Add option to specify the resampling method in keras.preprocessing.image.load_img().
  • Add keras.utils.multi_gpu_model for easy multi-GPU data parallelism (see the example after this list).
  • Add constants argument in RNN's call method, used to pass a list of constant tensors to the underlying RNN cell.
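
A hedged usage sketch of multi_gpu_model, assuming an existing model and Numpy arrays x_train, y_train:

    from keras.utils import multi_gpu_model

    # Replicate the model on 2 GPUs: each batch is split into sub-batches
    # processed in parallel, and the results are merged on CPU.
    parallel_model = multi_gpu_model(model, gpus=2)
    parallel_model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
    parallel_model.fit(x_train, y_train, epochs=10, batch_size=256)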

Breaking changes

  • Implementation change in keras.losses.cosine_proximity results in a different (correct) scaling behavior.
  • Implementation change for samplewise normalization in ImageDataGenerator results in a different normalization behavior.

Credits

Thanks to our 59 contributors whose commits are featured in this release!

@alok, @Danielhiversen, @Dref360, @HelgeS, @JakeBecker, @MPiecuch, @MartinXPN, @RitwikGupta, @TimZaman, @adammenges, @aeftimia, @ahojnnes, @akshaychawla, @alanyee, @aldenks, @andhus, @apbard, @aronj, @bangbangbear, @bchu, @bdwyer2, @bzamecnik, @cclauss, @colllin, @datumbox, @deltheil, @dhaval067, @durana, @ericwu09, @facaiy, @farizrahman4u, @fchollet, @flomlo, @fran6co, @grzesir, @hgaiser, @icyblade, @jsaporta, @julienr, @jussihuotari, @kashif, @lucashu1, @mangerlahn, @myutwo150, @nicolewhite, @noahstier, @nzw0301, @olalonde, @ozabluda, @patrikerdes, @podhrmic, @qin, @raelg, @roatienza, @shadiakiki1986, @smgt, @souptc, @taehoonlee, @y0z

Keras 2.0.8

25 Aug 19:21

The primary purpose of this release is to address an incompatibility between Keras 2.0.7 and the next version of TensorFlow (1.4). TensorFlow 1.4 isn't due for a while, but the sooner the fix is in a PyPI release, the fewer people will be affected when upgrading to the next TensorFlow version once it is released.

No API changes for this release; just a few bug fixes.

Keras 2.0.7

21 Aug 23:31

Areas of improvement

  • Bug fixes.
  • Performance improvements.
  • Documentation improvements.
  • Better support for training models from data tensors in TensorFlow (e.g. Datasets, TFRecords). Add a related example script.
  • Improve TensorBoard UX with better grouping of ops into name scopes.
  • Improve test coverage.

API changes

  • Add clone_model function, enabling the construction of a new model given an existing model to use as a template. Works even in a TensorFlow graph different from that of the original model (see the example after this list).
  • Add target_tensors argument in compile, enabling the use of custom tensors or placeholders as model targets.
  • Add steps_per_epoch argument in fit, enabling the training of a model from data tensors in a way that is consistent with training from Numpy arrays.
  • Similarly, add steps argument in predict and evaluate.
  • Add Subtract merge layer, and associated layer function subtract.
  • Add weighted_metrics argument in compile to specify metric functions meant to take into account sample_weight or class_weight.
  • Make the stop_gradient backend function consistent across backends.
  • Allow dynamic shapes in repeat_elements backend function.
  • Enable stateful RNNs with CNTK.
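
A short sketch of clone_model (model is assumed to be an existing Keras model):

    from keras.models import clone_model

    # Build a new model with the same architecture as the original.
    # The clone starts from freshly initialized weights...
    new_model = clone_model(model)
    # ...which can optionally be copied over from the original.
    new_model.set_weights(model.get_weights())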

Breaking changes

  • The backend methods categorical_crossentropy, sparse_categorical_crossentropy and binary_crossentropy had the order of their positional arguments (y_true, y_pred) inverted. This change does not affect the losses API; it was made to achieve consistency between the losses API and the backend API.
  • Move constraint management to be based on variable attributes. Remove the now-unused constraints attribute on layers and models (not expected to affect any user).

Credits

Thanks to our 47 contributors whose commits are featured in this release!

@5ke, @alok, @Danielhiversen, @Dref360, @NeilRon, @abnerA, @acburigo, @airalcorn2, @angeloskath, @athundt, @brettkoonce, @cclauss, @denfromufa, @enkait, @erg, @ericwu09, @farizrahman4u, @fchollet, @georgwiese, @ghisvail, @gokceneraslan, @hgaiser, @inexxt, @joeyearsley, @jorgecarleitao, @kennyjacob, @keunwoochoi, @krizp, @lukedeo, @milani, @n17r4m, @nicolewhite, @nigeljyng, @nyghtowl, @nzw0301, @rapatel0, @souptc, @srinivasreddy, @staticfloat, @taehoonlee, @td2014, @titu1994, @tleeuwenburg, @udibr, @waleedka, @wassname, @yashk2810

Keras 2.0.6

07 Jul 20:59

Areas of improvement

  • Improve generator methods (predict_generator, fit_generator, evaluate_generator) and add data enqueuing utilities.
  • Bug fixes and performance improvements.
  • New features: new Conv3DTranspose layer, new MobileNet application, self-normalizing networks.

API changes

  • Self-normalizing networks: add selu activation function, AlphaDropout layer, lecun_normal initializer.
  • Data enqueuing: add Sequence, SequenceEnqueuer, GeneratorEnqueuer to utils (see the sketch after this list).
  • Generator methods: rename arguments pickle_safe (replaced with use_multiprocessing) and max_q_size (replaced with max_queue_size).
  • Add MobileNet to the applications module.
  • Add Conv3DTranspose layer.
  • Allow custom print functions for model's summary method (argument print_fn).
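
A minimal sketch of the Sequence API (ArraySequence is a hypothetical subclass; a Sequence must implement __len__ and __getitem__):

    import numpy as np
    from keras.utils import Sequence

    class ArraySequence(Sequence):
        """Hypothetical Sequence serving batches from in-memory arrays."""

        def __init__(self, x, y, batch_size):
            self.x, self.y, self.batch_size = x, y, batch_size

        def __len__(self):
            # Number of batches per epoch.
            return int(np.ceil(len(self.x) / float(self.batch_size)))

        def __getitem__(self, idx):
            batch = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
            return self.x[batch], self.y[batch]

Unlike a plain generator, a Sequence can be safely consumed with use_multiprocessing=True in the generator methods.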

Keras 2.0.5

12 Jun 18:52
  • Add beta CNTK backend.
  • TensorBoard improvements.
  • Documentation improvements.
  • Bug fixes and performance improvements.
  • Improve style transfer example script.

API changes

  • Add return_state constructor argument to RNNs (example after this list).
  • Add skip_compile option to load_model.
  • Add categorical_hinge loss function.
  • Add sparse_top_k_categorical_accuracy metric.
  • Add new options to TensorBoard callback.
  • Add TerminateOnNaN callback.
  • Generalize the Embedding layer to N (>=2) input dimensions.
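
For example, an LSTM built with return_state=True returns its output followed by the final hidden and cell states:

    from keras.layers import Input, LSTM

    inputs = Input(shape=(10, 8))
    outputs, state_h, state_c = LSTM(32, return_state=True)(inputs)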

Keras 2.0.4

29 Apr 23:19
  • Documentation improvements.
  • Docstring improvements.
  • Update some examples scripts (in particular, new deep dream example).
  • Bug fixes and performance improvements.

API changes

  • Add logsumexp and identity to backend.
  • Add logcosh loss.
  • New signature for add_weight in Layer.
  • get_initial_states in Recurrent is now get_initial_state.

Keras 2.0.0

05 May 03:11

Keras 2 release notes

This document details changes, in particular API changes, occurring from Keras 1 to Keras 2.

Training

  • The nb_epoch argument has been renamed epochs everywhere.
  • The methods fit_generator, evaluate_generator and predict_generator now work by drawing a number of batches from a generator (number of training steps), rather than a number of samples.
    • samples_per_epoch was renamed steps_per_epoch in fit_generator.
    • nb_val_samples was renamed validation_steps in fit_generator.
    • val_samples was renamed steps in evaluate_generator and predict_generator.
  • It is now possible to manually add a loss to a model by calling model.add_loss(loss_tensor). The loss is added to the other losses of the model and minimized during training.
  • It is also possible to not apply any loss to a specific model output. If you pass None as the loss argument for an output (e.g. in compile, loss={'output_1': None, 'output_2': 'mse'}), the model will expect no Numpy arrays to be fed for this output when using fit, train_on_batch, or fit_generator. The output values are still returned as usual when using predict (see the sketch after this list).
  • In TensorFlow, models can now be trained using fit if some of their inputs (or even all) are TensorFlow queues or variables, rather than placeholders. See this test for specific examples.
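
A short sketch combining add_loss and a None loss (the model here is illustrative):

    from keras import backend as K
    from keras.layers import Dense, Input
    from keras.models import Model

    inputs = Input(shape=(16,))
    hidden = Dense(32, activation='relu')(inputs)
    out_1 = Dense(1, name='output_1')(hidden)
    out_2 = Dense(1, name='output_2')(hidden)
    model = Model(inputs=inputs, outputs=[out_1, out_2])

    # Manually add an auxiliary loss tensor (here, an activity penalty).
    model.add_loss(0.01 * K.sum(K.square(hidden)))

    # output_1 receives no loss, so fit() expects no target array for it.
    model.compile(optimizer='sgd', loss={'output_1': None, 'output_2': 'mse'})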

Losses & metrics

  • The objectives module has been renamed losses.
  • Several legacy metric functions have been removed, namely matthews_correlation, precision, recall, fbeta_score, fmeasure.
  • Custom metric functions can no longer return a dict; they must return a single tensor.

Models

  • Constructor arguments for Model have been renamed:
    • input -> inputs
    • output -> outputs
  • The Sequential model no longer supports the set_input method.
  • For any model saved with Keras 2.0 or higher, weights trained with backend X will be converted to work with backend Y without any manual conversion step.

Layers

Removals

Deprecated layers MaxoutDense, Highway and TimeDistributedDense have been removed.

Call method

  • All layers that use the learning phase now support a training argument in call (Python boolean or symbolic tensor), allowing you to specify the learning phase on a layer-by-layer basis. E.g. by calling a Dropout instance as dropout(inputs, training=True) you obtain a layer that will always apply dropout, regardless of the current global learning phase (see the example after this list). The training argument defaults to the global Keras learning phase everywhere.
  • The call method of layers can now take arbitrary keyword arguments, e.g. you can define a custom layer with a call signature like call(inputs, alpha=0.5), and then pass an alpha keyword argument when calling the layer (only with the functional API, naturally).
  • __call__ now makes use of TensorFlow name_scope, so that your TensorFlow graphs will look pretty and well-structured in TensorBoard.
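
For example, a minimal sketch of the per-call training argument:

    from keras.layers import Dropout, Input

    x = Input(shape=(10,))
    # This Dropout call always applies dropout,
    # regardless of the global learning phase.
    y = Dropout(0.5)(x, training=True)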

All layers taking a legacy dim_ordering argument

dim_ordering has been renamed data_format. It now takes two values: "channels_first" (formerly "th") and "channels_last" (formerly "tf").

Dense layer

Changed interface:

  • output_dim -> units
  • init -> kernel_initializer
  • added bias_initializer argument
  • W_regularizer -> kernel_regularizer
  • b_regularizer -> bias_regularizer
  • b_constraint -> bias_constraint
  • bias -> use_bias
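
Putting it together, a before/after sketch (the Keras 1 call is shown as a comment):

    from keras.layers import Dense
    from keras.regularizers import l2

    # Keras 1: Dense(64, init='uniform', W_regularizer=l2(0.01), bias=True)
    # Keras 2 equivalent:
    layer = Dense(64, kernel_initializer='uniform',
                  kernel_regularizer=l2(0.01), use_bias=True)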

Dropout, SpatialDropout*D, GaussianDropout

Changed interface:

  • p -> rate

Embedding

Changed interface:

  • init -> embeddings_initializer
  • W_regularizer -> embeddings_regularizer
  • W_constraint -> embeddings_constraint
  • The dropout argument has been removed.

Convolutional layers

  • The AtrousConvolution1D and AtrousConvolution2D layers have been deprecated. Their functionality is instead supported via the dilation_rate argument in Convolution1D and Convolution2D layers.
  • Convolution* layers are renamed Conv*.
  • The Deconvolution2D layer is renamed Conv2DTranspose.
  • The Conv2DTranspose layer no longer requires an output_shape argument, making its use much easier.

Interface changes common to all convolutional layers:

  • nb_filter -> filters
  • The kernel dimension arguments are folded into a single tuple argument, kernel_size. E.g. a legacy call Conv2D(10, 3, 3) becomes Conv2D(10, (3, 3))
  • kernel_size can be set to an integer instead of a tuple, e.g. Conv2D(10, 3) is equivalent to Conv2D(10, (3, 3)).
  • subsample -> strides. Can also be set to an integer.
  • border_mode -> padding
  • init -> kernel_initializer
  • added bias_initializer argument
  • W_regularizer -> kernel_regularizer
  • b_regularizer -> bias_regularizer
  • b_constraint -> bias_constraint
  • bias -> use_bias
  • dim_ordering -> data_format
  • In the SeparableConv2D layers, init is split into depthwise_initializer and pointwise_initializer.
  • Added dilation_rate argument in Conv2D and Conv1D.
  • 1D convolution kernels are now saved as a 3D tensor (instead of 4D as before).
  • 2D and 3D convolution kernels are now saved in the format spatial_dims + (input_depth, depth), even with data_format="channels_first".
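
A before/after sketch of a typical conversion (the Keras 1 call is shown as a comment):

    from keras.layers import Conv2D

    # Keras 1: Convolution2D(10, 3, 3, border_mode='same', subsample=(2, 2))
    # Keras 2 equivalent:
    layer = Conv2D(10, (3, 3), padding='same', strides=(2, 2))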

Pooling1D

  • pool_length -> pool_size
  • stride -> strides
  • border_mode -> padding

Pooling2D, 3D

  • border_mode -> padding
  • dim_ordering -> data_format

ZeroPadding layers

The padding argument of the ZeroPadding2D and ZeroPadding3D layers must be a tuple of length 2 and 3 respectively. Entry i specifies how much to pad spatial dimension i: if the entry is an integer, symmetric padding is applied to that dimension; if it is a tuple of two integers, asymmetric padding is applied (see the sketch below).
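
A sketch of both forms:

    from keras.layers import ZeroPadding2D

    # Symmetric: pad each spatial dimension by 1 on both sides.
    symmetric = ZeroPadding2D(padding=(1, 1))
    # Asymmetric: pad height by (1, 1) and width by (2, 3).
    asymmetric = ZeroPadding2D(padding=((1, 1), (2, 3)))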

Upsampling1D

  • length -> size

BatchNormalization

The mode argument of BatchNormalization has been removed; BatchNorm now only supports mode 0 (use batch statistics for feature-wise normalization during training, and use moving statistics for feature-wise normalization during testing).

  • beta_init -> beta_initializer
  • gamma_init -> gamma_initializer
  • added arguments center, scale (booleans, whether to use a beta and gamma respectively)
  • added arguments moving_mean_initializer, moving_variance_initializer
  • added arguments beta_regularizer, gamma_regularizer
  • added arguments beta_constraint, gamma_constraint
  • attribute running_mean is renamed moving_mean
  • attribute running_std is renamed moving_variance (it is in fact a variance with the current implementation).

ConvLSTM2D

Same changes as for convolutional layers and recurrent layers apply.

PReLU

  • init -> alpha_initializer

GaussianNoise

  • sigma -> stddev

Recurrent layers

  • output_dim -> units
  • init -> kernel_initializer
  • inner_init -> recurrent_initializer
  • added argument bias_initializer
  • W_regularizer -> kernel_regularizer
  • b_regularizer -> bias_regularizer
  • added arguments kernel_constraint, recurrent_constraint, bias_constraint
  • dropout_W -> dropout
  • dropout_U -> recurrent_dropout
  • consume_less -> implementation. String values have been replaced with integers: implementation 0 (default), 1 or 2.
  • LSTM only: the argument forget_bias_init has been removed. Instead there is a boolean argument unit_forget_bias, defaulting to True.
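
A before/after sketch for an LSTM (the Keras 1 call is shown as a comment):

    from keras.layers import LSTM

    # Keras 1: LSTM(32, init='glorot_uniform', inner_init='orthogonal',
    #               dropout_W=0.2, dropout_U=0.2)
    # Keras 2 equivalent:
    layer = LSTM(32, kernel_initializer='glorot_uniform',
                 recurrent_initializer='orthogonal',
                 dropout=0.2, recurrent_dropout=0.2)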

Lambda

The Lambda layer now supports a mask argument.

Utilities

Utilities should now be imported from keras.utils rather than from specific submodules (e.g. no more keras.utils.np_utils...).

Backend

random_normal and truncated_normal

  • std -> stddev

Misc

  • In the backend, set_image_dim_ordering and image_dim_ordering are now set_image_data_format and image_data_format.
  • Any arguments (other than nb_epoch) prefixed with nb_ have been renamed to be prefixed with num_ instead. This affects two datasets and one preprocessing utility.