
Releases: tensorflow/tensorflow

TensorFlow 2.8.0-rc1

24 Jan 17:33
244b9d7

Release 2.8.0

Major Features and Improvements

  • tf.lite:

    • Added TFLite builtin op support for the following TF ops:
      • tf.raw_ops.Bucketize op on CPU.
      • tf.where op for data types tf.int32/tf.uint32/tf.int8/tf.uint8/tf.int64.
      • tf.random.normal op for output data type tf.float32 on CPU.
      • tf.random.uniform op for output data type tf.float32 on CPU.
      • tf.random.categorical op for output data type tf.int64 on CPU.
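      A minimal conversion sketch (the function name, shapes, and dtypes below are illustrative, not taken from the release notes) showing a graph that relies on the newly supported tf.where builtin for tf.int32 inputs:

        import tensorflow as tf

        @tf.function(input_signature=[tf.TensorSpec([4], tf.int32)])
        def clip_negatives(x):
          # tf.where on integer dtypes now lowers to a TFLite builtin op.
          return tf.where(x < 0, tf.zeros_like(x), x)

        converter = tf.lite.TFLiteConverter.from_concrete_functions(
            [clip_negatives.get_concrete_function()])
        tflite_model = converter.convert()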
  • tensorflow.experimental.tensorrt:

    • conversion_params is now deprecated inside TrtGraphConverterV2 in favor of direct arguments: max_workspace_size_bytes, precision_mode, minimum_segment_size, maximum_cached_engines, use_calibration and allow_build_at_runtime.
    • Added a new parameter called save_gpu_specific_engines to the .save() function inside TrtGraphConverterV2. When False, the .save() function won't save any TRT engines that have been built. When True (default), the original behavior is preserved.
    • TrtGraphConverterV2 provides a new API called .summary() which outputs a summary of the inference converted by TF-TRT. It shows each TRTEngineOp with the shapes and dtypes of its input(s) and output(s). A detailed version of the summary is also available, which additionally prints all the TensorFlow ops included in each TRTEngineOp.
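      A hedged sketch of the new-style conversion call. The saved-model paths are placeholders, and tf.experimental.tensorrt.Converter is assumed here to be the public alias of TrtGraphConverterV2:

        import tensorflow as tf

        # Direct arguments replace the deprecated conversion_params object.
        converter = tf.experimental.tensorrt.Converter(
            input_saved_model_dir="/tmp/saved_model",  # placeholder path
            precision_mode="FP16",
            max_workspace_size_bytes=1 << 30,
            minimum_segment_size=3,
            maximum_cached_engines=1,
            use_calibration=False,
            allow_build_at_runtime=True)
        converter.convert()
        converter.summary()  # one entry per TRTEngineOp with input/output shapes and dtypes
        # Skip persisting the TRT engines that have been built so far:
        converter.save("/tmp/trt_saved_model", save_gpu_specific_engines=False)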
  • tf.tpu.experimental.embedding:

    • tf.tpu.experimental.embedding.FeatureConfig now takes an additional argument output_shape which can specify the shape of the output activation for the feature.
    • tf.tpu.experimental.embedding.TPUEmbedding now has the same behavior as tf.tpu.experimental.embedding.serving_embedding_lookup, which can take dense and sparse tensors of arbitrary rank. For ragged tensors, the input tensor must still be rank 2, but the activations can now be rank 2 or above by specifying the output shape in the feature config or via the build method.
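      A brief sketch of the new output_shape argument; the vocabulary size, embedding dimension, and output shape are illustrative:

        import tensorflow as tf

        table = tf.tpu.experimental.embedding.TableConfig(
            vocabulary_size=1000, dim=8)
        # output_shape declares the shape of the output activation for this feature.
        feature = tf.tpu.experimental.embedding.FeatureConfig(
            table=table, output_shape=[16, 4])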
  • Add tf.config.experimental.enable_op_determinism, which makes TensorFlow ops run deterministically at the cost of performance. Replaces the TF_DETERMINISTIC_OPS environment variable, which is now deprecated. The "Bug Fixes and Other Changes" section lists more determinism-related changes.
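    A minimal usage sketch; seeding the global RNG alongside it is the usual pattern for reproducible runs:

      import tensorflow as tf

      tf.random.set_seed(1)                           # seed the global TF random generator
      tf.config.experimental.enable_op_determinism()  # ops now run deterministically, at a performance cost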

  • (Since TF 2.7) Add PluggableDevice support to TensorFlow Profiler.

Bug Fixes and Other Changes

  • tf.data:

    • The parallel_batch optimization is now enabled by default unless disabled by users; it parallelizes the copying of batch elements.
    • Added the ability for TensorSliceDataset to identify and handle inputs that are files. This enables creating hermetic SavedModels when using datasets created from files.
  • tf.lite:

    • Adds GPU delegate serialization support to the Java API. This improves initialization time by up to 90% when OpenCL is available.
    • Deprecated Interpreter::SetNumThreads, in favor of InterpreterBuilder::SetNumThreads.
  • tf.keras:

    • Adds tf.compat.v1.keras.utils.get_or_create_layer to aid migration to TF2 by enabling tracking of nested keras models created in TF1-style, when used with the tf.compat.v1.keras.utils.track_tf1_style_variables decorator.
    • Added a tf.keras.layers.experimental.preprocessing.HashedCrossing layer which applies the hashing trick to the concatenation of crossed scalar inputs. This provides a stateless way to try adding feature crosses of integer or string data to a model.
    • Removed keras.layers.experimental.preprocessing.CategoryCrossing. Users should migrate to the HashedCrossing layer or use tf.sparse.cross/tf.ragged.cross directly.
    • Added additional standardize and split modes to TextVectorization:
      • standardize="lower" will lowercase inputs.
      • standardize="string_punctuation" will remove all puncuation.
      • split="character" will split on every unicode character.
    • Added an output_mode argument to the Discretization and Hashing layers with the same semantics as other preprocessing layers. All categorical preprocessing layers now support output_mode.
    • All preprocessing layer output will follow the compute dtype of a tf.keras.mixed_precision.Policy, unless constructed with output_mode="int" in which case output will be tf.int64. The output type of any preprocessing layer can be controlled individually by passing a dtype argument to the layer.
    • tf.random.Generator for keras initializers and all RNG code.
    • Added 3 new APIs to enable/disable/check the usage of tf.random.Generator in the Keras backend, which will become the new backend for all RNG in Keras. We plan to switch on the new code path by default in TF 2.8, and the behavior change will likely cause some breakage on the user side (e.g. if a test checks against a golden number). These 3 APIs allow users to disable the new behavior and switch back to the legacy behavior if they prefer. In the future (e.g. TF 2.10), we expect to remove the legacy code path (stateful random ops) entirely, and these 3 APIs will be removed as well. A usage sketch follows below.
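      The release notes do not name the 3 endpoints here; assuming they live under tf.keras.backend.experimental (an assumption, not stated above), usage would look roughly like:

        import tensorflow as tf

        # Assumed endpoints for the enable/disable/check APIs described above.
        tf.keras.backend.experimental.enable_tf_random_generator()
        if tf.keras.backend.experimental.is_tf_random_generator_enabled():
          initializer = tf.keras.initializers.GlorotUniform(seed=42)
        # Opt back into the legacy stateful-op behavior if golden values break:
        tf.keras.backend.experimental.disable_tf_random_generator()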
    • tf.keras.callbacks.experimental.BackupAndRestore is now available as tf.keras.callbacks.BackupAndRestore. The experimental endpoint is deprecated and will be removed in a future release.
    • tf.keras.experimental.SidecarEvaluator is now available as tf.keras.utils.SidecarEvaluator. The experimental endpoint is deprecated and will be removed in a future release.
    • Metrics update and collection logic in default Model.train_step() is now customizable via overriding Model.compute_metrics().
    • Losses computation logic in default Model.train_step() is now customizable via overriding Model.compute_loss().
    • jit_compile added to Model.compile() on an opt-in basis to compile the model's training step with XLA. Note that jit_compile=True may not necessarily work for all models.
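      A compact sketch combining the two new Model hooks with the opt-in XLA flag; the model body, loss, and optimizer are placeholders:

        import tensorflow as tf

        class MyModel(tf.keras.Model):
          def __init__(self):
            super().__init__()
            self.dense = tf.keras.layers.Dense(1)

          def call(self, inputs):
            return self.dense(inputs)

          def compute_loss(self, x=None, y=None, y_pred=None, sample_weight=None):
            # Customize the loss used by the default train_step().
            return tf.reduce_mean(tf.math.squared_difference(y_pred, y))

          def compute_metrics(self, x, y, y_pred, sample_weight):
            # Reuse (and optionally extend) the default metric update/collection logic.
            return super().compute_metrics(x, y, y_pred, sample_weight)

        model = MyModel()
        model.compile(optimizer="adam", jit_compile=True)  # opt in to XLA-compiled train steps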
  • Deterministic Op Functionality:

    • Fix a regression, introduced in v2.5, in the deterministic selection of cuDNN convolution algorithms. Note that nondeterministic out-of-memory events while selecting algorithms could still lead to nondeterminism, although this is very unlikely. This additional, unlikely source will be eliminated in a later version.
    • Add deterministic GPU implementations of:
      • tf.function(jit_compile=True)'s that use Scatter.
      • (since v2.7) Stateful ops used in tf.data.Dataset
      • (since v2.7) tf.convert_to_tensor when fed with (sparse) tf.IndexedSlices (because it uses tf.math.unsorted_segment_sum)
      • (since v2.7) tf.gather backprop (because tf.convert_to_tensor reduces tf.gather's (sparse) tf.IndexedSlices gradients into its dense params input)
      • (since v2.7) tf.math.segment_mean
      • (since v2.7) tf.math.segment_prod
      • (since v2.7) tf.math.segment_sum
      • (since v2.7) tf.math.unsorted_segment_mean
      • (since v2.7) tf.math.unsorted_segment_prod
      • (since v2.7) tf.math.unsorted_segment_sum
      • (since v2.7) tf.math.unsorted_segment_sqrt_n
      • (since v2.7) tf.nn.ctc_loss (resolved, possibly in prior release, and confirmed with tests)
      • (since v2.7) tf.nn.sparse_softmax_cross_entropy_with_logits
    • (since v2.7) Run tf.scatter_nd and other related scatter functions, such as tf.tensor_scatter_nd_update, on CPU (with significant performance penalty).
    • Add determinism-unimplemented exception-throwing to the following ops. When op-determinism is expected (i.e. after tf.config.experimental.enable_op_determinism has been called), an attempt to use the specified paths through the following ops on a GPU will cause a tf.errors.UnimplementedError (with an understandable message) to be thrown, unless otherwise specified.
      • FakeQuantWithMinMaxVarsGradient and FakeQuantWithMinMaxVarsPerChannelGradient
      • (since v2.7) tf.compat.v1.get_seed if the global random seed has not yet been set (via tf.random.set_seed). Throws RuntimeError from Python or InvalidArgument from C++
      • (since v2.7) tf.compat.v1.nn.fused_batch_norm backprop to offset when is_training=False
      • (since v2.7) tf.image.adjust_contrast forward
      • (since v2.7) tf.image.resize with method=ResizeMethod.NEAREST backprop
      • (since v2.7) tf.linalg.svd
      • (since v2.7) tf.math.bincount
      • (since v2.7) tf.nn.depthwise_conv2d backprop to filter when not using cuDNN convolution
      • (since v2.7) tf.nn.dilation2d gradient
      • (since v2.7) tf.nn.max_pool_with_argmax gradient
      • (since v2.7) tf.raw_ops.DebugNumericSummary and tf.raw_ops.DebugNumericSummaryV2
      • (since v2.7) tf.timestamp. Throws FailedPrecondition
      • (since v2.7) tf.Variable.scatter_add (and other scatter methods, both on ref and resource variables)
      • (since v2.7) The random-number-generating ops in the tf.random module when the global random seed has not yet been set (via tf.random.set_seed). Throws RuntimeError from Python or InvalidArgument from C++

TensorFlow 2.8.0-rc0

22 Dec 20:42
804ef72
Release 2.8.0

Major Features and Improvements

  • tf.lite:
    • Added TFLite builtin op support for the following TF ops:
      • tf.raw_ops.Bucketize op on CPU.
      • tf.where op for data types tf.int32/tf.uint32/tf.int8/tf.uint8/tf.int64.
      • tf.random.normal op for output data type tf.float32 on CPU.
      • tf.random.uniform op for output data type tf.float32 on CPU.
      • tf.random.categorical op for output data type tf.int64 on CPU.
  • tensorflow.experimental.tensorrt:
    • conversion_params is now deprecated inside TrtGraphConverterV2 in favor of direct arguments: max_workspace_size_bytes, precision_mode, minimum_segment_size, maximum_cached_engines, use_calibration and allow_build_at_runtime.
    • Added a new parameter called save_gpu_specific_engines to the .save() function inside TrtGraphConverterV2. When False, the .save() function won't save any TRT engines that have been built. When True (default), the original behavior is preserved.
    • TrtGraphConverterV2 provides a new API called .summary() which outputs a summary of the inference converted by TF-TRT. It shows each TRTEngineOp with the shapes and dtypes of its input(s) and output(s). A detailed version of the summary is also available, which additionally prints all the TensorFlow ops included in each TRTEngineOp.
  • tf.tpu.experimental.embedding:
    • tf.tpu.experimental.embedding.FeatureConfig now takes an additional argument output_shape which can specify the shape of the output activation for the feature.
    • tf.tpu.experimental.embedding.TPUEmbedding now has the same behavior as tf.tpu.experimental.embedding.serving_embedding_lookup, which can take dense and sparse tensors of arbitrary rank. For ragged tensors, the input tensor must still be rank 2, but the activations can now be rank 2 or above by specifying the output shape in the feature config or via the build method.
  • Add tf.config.experimental.enable_op_determinism, which makes TensorFlow ops run deterministically at the cost of performance. Replaces the TF_DETERMINISTIC_OPS environment variable, which is now deprecated.
    • The "Bug Fixes and Other Changes" section lists more determinism-related changes.

Bug Fixes and Other Changes

  • tf.data:

    • The parallel_batch optimization is now enabled by default unless disabled by users; it parallelizes the copying of batch elements.
    • Added the ability for TensorSliceDataset to identify and handle inputs that are files. This enables creating hermetic SavedModels when using datasets created from files.
  • tf.lite:

    • GPU
      • Adds GPU delegate serialization support to the Java API. This improves initialization time by up to 90% when OpenCL is available.
    • Deprecated Interpreter::SetNumThreads, in favor of InterpreterBuilder::SetNumThreads.
  • Adds tf.compat.v1.keras.utils.get_or_create_layer to aid migration to TF2 by enabling tracking of nested keras models created in TF1-style, when used with the tf.compat.v1.keras.utils.track_tf1_style_variables decorator.

  • tf.keras:

    • Preprocessing Layers
      • Added a tf.keras.layers.experimental.preprocessing.HashedCrossing layer which applies the hashing trick to the concatenation of crossed scalar inputs. This provides a stateless way to try adding feature crosses of integer or string data to a model.
      • Removed keras.layers.experimental.preprocessing.CategoryCrossing. Users should migrate to the HashedCrossing layer or use tf.sparse.cross/tf.ragged.cross directly.
      • Added additional standardize and split modes to TextVectorization.
        • standardize="lower" will lowercase inputs.
        • standardize="string_punctuation" will remove all puncuation.
        • split="character" will split on every unicode character.
      • Added an output_mode argument to the Discretization and Hashing layers with the same semantics as other preprocessing layers. All categorical preprocessing layers now support output_mode.
      • All preprocessing layer output will follow the compute dtype of a tf.keras.mixed_precision.Policy, unless constructed with output_mode="int" in which case output will be tf.int64. The output type of any preprocessing layer can be controlled individually by passing a dtype argument to the layer.
    • tf.random.Generator for keras initializers and all RNG code.
      • Added 3 new APIs to enable/disable/check the usage of tf.random.Generator in the Keras backend, which will become the new backend for all RNG in Keras. We plan to switch on the new code path by default in TF 2.8, and the behavior change will likely cause some breakage on the user side (e.g. if a test checks against a golden number). These 3 APIs allow users to disable the new behavior and switch back to the legacy behavior if they prefer. In the future (e.g. TF 2.10), we expect to remove the legacy code path (stateful random ops) entirely, and these 3 APIs will be removed as well.
    • tf.keras.callbacks.experimental.BackupAndRestore is now available as tf.keras.callbacks.BackupAndRestore. The experimental endpoint is deprecated and will be removed in a future release.
    • tf.keras.experimental.SidecarEvaluator is now available as tf.keras.utils.SidecarEvaluator. The experimental endpoint is deprecated and will be removed in a future release.
    • Metrics update and collection logic in default Model.train_step() is now customizable via overriding Model.compute_metrics().
    • Losses computation logic in default Model.train_step() is now customizable via overriding Model.compute_loss().
    • jit_compile added to Model.compile() on an opt-in basis to compile the model's training step with XLA. Note that jit_compile=True may not necessarily work for all models.
  • Deterministic Op Functionality

    • Add deterministic GPU implementations of:
      • tf.function(jit_compile=True)'s that use Scatter.
      • (since v2.7) Stateful ops used in tf.data.Dataset
      • (since v2.7) tf.convert_to_tensor when fed with (sparse) tf.IndexedSlices (because it uses tf.math.unsorted_segment_sum)
      • (since v2.7) tf.gather backprop (because tf.convert_to_tensor reduces tf.gather's (sparse) tf.IndexedSlices gradients into its dense params input)
      • (since v2.7) tf.math.segment_mean
      • (since v2.7) tf.math.segment_prod
      • (since v2.7) tf.math.segment_sum
      • (since v2.7) tf.math.unsorted_segment_mean
      • (since v2.7) tf.math.unsorted_segment_prod
      • (since v2.7) tf.math.unsorted_segment_sum
      • (since v2.7) tf.math.unsorted_segment_sqrt_n
      • (since v2.7) tf.nn.ctc_loss (resolved, possibly in prior release, and confirmed with tests)
      • (since v2.7) tf.nn.sparse_softmax_cross_entropy_with_logits
    • (since v2.7) Run the following ops on CPU (with significant performance penalty):
      • tf.scatter_nd and other related scatter functions, such as tf.tensor_scatter_nd_update
    • Add determinism-unimplemented exception-throwing to the following ops. When op-determinism is expected (i.e. after tf.config.experimental.enable_op_determinism has been called), an attempt to use the specified paths through the following ops on a GPU will cause a tf.errors.UnimplementedError (with an understandable message) to be thrown, unless otherwise specified.
      • FakeQuantWithMinMaxVarsGradient and FakeQuantWithMinMaxVarsPerChannelGradient
      • (since v2.7) tf.compat.v1.get_seed if the global random seed has not yet been set (via tf.random.set_seed). Throws RuntimeError from Python or InvalidArgument from C++
      • (since v2.7) tf.compat.v1.nn.fused_batch_norm backprop to offset when is_training=False
      • (since v2.7) tf.image.adjust_contrast forward
      • (since v2.7) tf.image.resize with method=ResizeMethod.NEAREST backprop
      • (since v2.7) tf.linalg.svd
      • (since v2.7) tf.math.bincount
      • (since v2.7) tf.nn.depthwise_conv2d backprop to filter when not using cuDNN convolution
      • (since v2.7) tf.nn.dilation2d gradient
      • (since v2.7) tf.nn.max_pool_with_argmax gradient
      • (since v2.7) tf.raw_ops.DebugNumericSummary and tf.raw_ops.DebugNumericSummaryV2
      • (since v2.7) tf.timestamp. Throws FailedPrecondition
      • (since v2.7) tf.Variable.scatter_add (and other scatter methods, both on ref and resource variables)
      • (since v2.7) The random-number-generating ops in the tf.random module when the global random seed has not yet been set (via tf.random.set_seed). Throws RuntimeError from Python or InvalidArgument from C++
  • Add tf.config.experimental.enable_op_determinism, which makes TensorFlow ops run deterministically at the cost of performance. This is equivalent to setting the previously-existing TF_DETERMINISTIC_OPS environment variable to 1. The environment variable is now deprecated, so the enable_op_determinism function should be used instead.

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

8bitmp3, Adam Lanicek, ag.ramesh, alesapin, Andrew Goodbody, annasuheyla, Ariel Elkin, Arnab Dutta, Ben Barsdell, bhack, cfRod, Chengji Yao, Christopher Bate, dan, Dan F-M, David Korczynski, DEKHTIARJonathan, dengzhiyuan, Deven Desai, Duncan Riach, Eli Osherovich, Ewout Ter Hoeven, ez2take, Faijul Amin, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, Georgiy Manuilov, Guilherme De Lázari, Guozhong Zhuang, H1Gdev, homuler, Hongxu Jia, Jacky_Yin, jayfurmanek, jgehw, Jhalak Patel, Jinzhe Zeng, Johan Gunnarsson, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, Kevin Cheng, Koan-Sin Tan, Kruglov-Dmitry, Kun Lu, Lemo, Lequn Chen, long.chen, Louis Su...


TensorFlow 2.6.2

04 Nov 15:05
c2363d6

Release 2.6.2

This release just fixes an issue where keras, tensorflow_estimator and tensorboard were missing proper upper bounds, which resulted in broken installs after the Keras 2.7 release for all packages in the TensorFlow ecosystem.

TensorFlow 2.7.0

04 Nov 21:39
c256c07

Release 2.7.0

Breaking Changes

  • tf.keras:

    • The methods Model.fit(), Model.predict(), and Model.evaluate() will no longer uprank input data of shape (batch_size,) to become (batch_size, 1). This enables Model subclasses to process scalar data in their train_step()/test_step()/predict_step() methods.
      Note that this change may break certain subclassed models. You can revert to the previous behavior by adding the upranking yourself in the train_step()/test_step()/predict_step() methods, e.g. if x.shape.rank == 1: x = tf.expand_dims(x, axis=-1) (see the sketch after this list). Functional models as well as Sequential models built with an explicit input shape are not affected.
    • The methods Model.to_yaml() and keras.models.model_from_yaml have been replaced to raise a RuntimeError as they can be abused to cause arbitrary code execution. It is recommended to use JSON serialization instead of YAML, or, a better alternative, serialize to H5.
    • LinearModel and WideDeepModel are moved to the tf.compat.v1.keras.models. namespace (tf.compat.v1.keras.models.LinearModel and tf.compat.v1.keras.models.WideDeepModel), and their experimental endpoints (tf.keras.experimental.models.LinearModel and tf.keras.experimental.models.WideDeepModel) are being deprecated.
    • RNG behavior change for all tf.keras.initializers classes. For any class constructed with a fixed seed, it will no longer generate the same value when invoked multiple times. Instead, it will return a different value each time, but in a deterministic sequence. This change makes the initialization behavior consistent between v1 and v2.
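      As referenced above, a sketch of restoring the old upranking behavior in a subclassed model (the same pattern applies to test_step()/predict_step(); the two-tuple batch layout is an assumption):

        import tensorflow as tf

        class UprankingModel(tf.keras.Sequential):
          def train_step(self, data):
            x, y = data  # assumes (features, labels) batches without sample weights
            # Restore the pre-2.7 behavior of upranking (batch_size,) inputs.
            if x.shape.rank == 1:
              x = tf.expand_dims(x, axis=-1)
            return super().train_step((x, y))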
  • tf.lite:

    • Renamed fields in the SignatureDef table in the schema to maximize parity with the TF SavedModel's Signature concept.
    • Deprecate Makefile builds. Makefile users need to migrate their builds to CMake or Bazel. Please refer to the Build TensorFlow Lite with CMake and Build TensorFlow Lite for ARM boards for the migration.
    • Deprecate tflite::OpResolver::GetDelegates. The list returned by TfLite's BuiltinOpResolver::GetDelegates is now always empty. Instead, we recommend using the new method tflite::OpResolver::GetDelegateCreators in order to achieve lazy initialization of TfLite delegate instances.
  • TF Core:

    • tf.Graph.get_name_scope() now always returns a string, as documented. Previously, when called within name_scope("") or name_scope(None) contexts, it returned None; now it returns the empty string.
    • tensorflow/core/ir/ contains a new MLIR-based Graph dialect that is isomorphic to GraphDef and will be used to replace GraphDef-based (e.g., Grappler) optimizations.
    • Deprecated and removed the attrs() function in shape inference. All attributes should now be queried by name (rather than returned as a range) to enable changing the underlying storage.
    • The following Python symbols were accidentally added in earlier versions of TensorFlow and now are removed. Each symbol has a replacement that should be used instead, but note the replacement's argument names are different.
      • tf.quantize_and_dequantize_v4 (accidentally introduced in TensorFlow 2.4): Use tf.quantization.quantize_and_dequantize_v2 instead.
      • tf.batch_mat_mul_v3 (accidentally introduced in TensorFlow 2.6): Use tf.linalg.matmul instead.
      • tf.sparse_segment_sum_grad (accidentally introduced in TensorFlow 2.6): Use tf.raw_ops.SparseSegmentSumGrad instead. Directly calling this op is typically not necessary, as it is automatically used when computing the gradient of tf.sparse.segment_sum.
    • Renaming of tensorflow::int64 to int64_t in numerous places (the former is an alias for the latter), which could require regenerating selective op registration headers; otherwise execution would fail with an unregistered-kernels error.
  • Modular File System Migration:

    • Support for S3 and HDFS file systems has been migrated to a modular file systems based approach and is now available in https://github.com/tensorflow/io. The tensorflow-io python package should be installed for S3 and HDFS support with tensorflow.

Major Features and Improvements

  • Improvements to the TensorFlow debugging experience:

    • Previously, TensorFlow error stack traces involved many internal frames, which could be challenging to read through, while not being actionable for end users. As of TF 2.7, TensorFlow filters internal frames in most errors that it raises, to keep stack traces short, readable, and focused on what's actionable for end users (their own code).

      This behavior can be disabled by calling tf.debugging.disable_traceback_filtering(), and can be re-enabled via tf.debugging.enable_traceback_filtering(). If you are debugging a TensorFlow-internal issue (e.g. to prepare a TensorFlow PR), make sure to disable traceback filtering. You can check whether this feature is currently enabled by calling tf.debugging.is_traceback_filtering_enabled().

      Note that this feature is only available with Python 3.7 or higher.

    • Improve the informativeness of error messages raised by Keras Layer.__call__(), by adding the full list of argument values passed to the layer in every exception.

  • Introduce the tf.compat.v1.keras.utils.track_tf1_style_variables decorator, which enables using large classes of tf1-style variable_scope, get_variable, and compat.v1.layer-based components from within TF2 models running with TF2 behavior enabled.
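    A minimal sketch of the decorator applied to a tf.Module that wraps tf1-style layers; the scope name, layer, and unit count are illustrative:

      import tensorflow as tf

      class Tf1StyleBlock(tf.Module):
        @tf.compat.v1.keras.utils.track_tf1_style_variables
        def __call__(self, inputs):
          # Variables created by compat.v1 layers / get_variable inside this
          # decorated method are tracked like regular TF2 module variables.
          with tf.compat.v1.variable_scope("block"):
            return tf.compat.v1.layers.dense(inputs, units=8)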

  • tf.data:

    • tf.data service now supports auto-sharding. Users specify the sharding policy with tf.data.experimental.service.ShardingPolicy enum. It can be one of OFF (equivalent to today's "parallel_epochs" mode), DYNAMIC (equivalent to today's "distributed_epoch" mode), or one of the static sharding policies: FILE, DATA, FILE_OR_DATA, or HINT (corresponding to values of tf.data.experimental.AutoShardPolicy).

      Static sharding (auto-sharding) requires the number of tf.data service workers be fixed. Users need to specify the worker addresses in tensorflow.data.experimental.DispatcherConfig.

    • tf.data.experimental.service.register_dataset now accepts optional compression argument.
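      A rough sketch of opting a dataset into the service with an explicit sharding policy, and of registering a dataset with compression; the dispatcher address is a placeholder:

        import tensorflow as tf

        dataset = tf.data.Dataset.range(100)
        dataset = dataset.apply(tf.data.experimental.service.distribute(
            processing_mode=tf.data.experimental.service.ShardingPolicy.DYNAMIC,
            service="grpc://dispatcher:5050"))  # placeholder dispatcher address

        # register_dataset now also accepts an optional compression argument.
        dataset_id = tf.data.experimental.service.register_dataset(
            service="grpc://dispatcher:5050",
            dataset=tf.data.Dataset.range(100),
            compression="AUTO")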

  • Keras:

    • tf.keras.layers.Conv now includes a public convolution_op method. This method can be used to simplify the implementation of Conv subclasses. There are two primary ways to use this new method. The first is to use the method directly in your own call method:
        class StandardizedConv2D(tf.keras.layers.Conv2D):
          def call(self, inputs):
            mean, var = tf.nn.moments(self.kernel, axes=[0, 1, 2], keepdims=True)
            return self.convolution_op(inputs, (self.kernel - mean) / tf.sqrt(var + 1e-10))
      Alternatively, you can override convolution_op:
        class StandardizedConv2D(tf.keras.layers.Conv2D):
          def convolution_op(self, inputs, kernel):
            mean, var = tf.nn.moments(kernel, axes=[0, 1, 2], keepdims=True)
            # Author code uses std + 1e-5
            return super().convolution_op(inputs, (kernel - mean) / tf.sqrt(var + 1e-10))
    • Added merge_state() method to tf.keras.metrics.Metric for use in distributed computations.
    • Added sparse and ragged options to tf.keras.layers.TextVectorization to allow for SparseTensor and RaggedTensor outputs from the layer.
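      A short sketch of the new output options; the vocabulary strings are illustrative:

        import tensorflow as tf

        # ragged=True yields a tf.RaggedTensor of token indices in "int" mode.
        ragged_layer = tf.keras.layers.TextVectorization(output_mode="int", ragged=True)
        ragged_layer.adapt(["a short sentence", "another one"])
        ragged_tokens = ragged_layer(["a short sentence"])

        # sparse=True yields a tf.SparseTensor for bag-of-words style outputs.
        sparse_layer = tf.keras.layers.TextVectorization(output_mode="multi_hot", sparse=True)
        sparse_layer.adapt(["a short sentence", "another one"])
        sparse_out = sparse_layer(["a short sentence"])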
  • distribute.experimental.rpc package:

    • distribute.experimental.rpc package introduces APIs to create a GRPC based server to register tf.function methods and a GRPC client to invoke remote registered methods. RPC APIs are intended for multi-client setups i.e. server and clients are started in separate binaries independently.

    • Example usage to create server:

         server = tf.distribute.experimental.rpc.Server.create("grpc", 
                 "127.0.0.1:1234")
         @tf.function(input_signature=[
           tf.TensorSpec([], tf.int32),
           tf.TensorSpec([], tf.int32)
         ])
         def _remote_multiply(a, b):
           return tf.math.multiply(a, b)
      
         server.register("multiply", _remote_multiply)
    • Example usage to create client:

      client = tf.distribute.experimental.rpc.Client.create("grpc", address)
      a = tf.constant(2, dtype=tf.int32)
      b = tf.constant(3, dtype=tf.int32)
      result = client.multiply(a, b)
  • tf.lite:

    • Add experimental API experimental_from_jax to support conversion from Jax models to TensorFlow Lite.
    • Support uint32 data type for cast op.
    • Add experimental quantization debugger tf.lite.QuantizationDebugger
  • Extension Types

    • Add experimental API to define new Python classes that can be handled by TensorFlow APIs. To create an extension type, simply define a Python class with tf.experimental.ExtensionType as its base, and use type annotations to specify the type for each field. E.g.:
      class MaskedTensor(tf.experimental.ExtensionType):
        values: tf.Tensor
        mask: tf.Tensor
      The tf.ExtensionType base class works similarly to typing.NamedTuple and @dataclasses.dataclass from the standard Python library.
    • Extension types are supported by Keras, tf.data, TF-hub, SavedModel, tf.function, control flow ops, py_function, and distribution strategy.
    • Add "dispatch decorators" that can be used to override the default behavior of TensorFlow ops (such as tf.add or tf.concat) when they are applied to ExtensionType values.
    • The BatchableExtensionType API can be used to define extension types that support APIs that make use of batching, such as tf.data.Dataset and tf.map_fn.

TensorFlow 2.6.1

01 Nov 20:36
3aa40c3

Release 2.6.1

This release introduces several vulnerability fixes:

  • Fixes a code injection issue in saved_model_cli (CVE-2021-41228)
  • Fixes a vulnerability due to use of uninitialized value in TensorFlow (CVE-2021-41225)
  • Fixes a heap OOB in FusedBatchNorm kernels (CVE-2021-41223)
  • Fixes an arbitrary memory read in ImmutableConst (CVE-2021-41227)
  • Fixes a heap OOB in SparseBinCount (CVE-2021-41226)
  • Fixes a heap OOB in SparseFillEmptyRows (CVE-2021-41224)
  • Fixes a segfault due to negative splits in SplitV (CVE-2021-41222)
  • Fixes segfaults and vulnerabilities caused by accesses to invalid memory during shape inference in Cudnn* ops (CVE-2021-41221)
  • Fixes a null pointer exception when Exit node is not preceded by Enter op (CVE-2021-41217)
  • Fixes an integer division by 0 in tf.raw_ops.AllToAll (CVE-2021-41218)
  • Fixes a use after free and a memory leak in CollectiveReduceV2 (CVE-2021-41220)
  • Fixes an undefined behavior via nullptr reference binding in sparse matrix multiplication (CVE-2021-41219)
  • Fixes a heap buffer overflow in Transpose (CVE-2021-41216)
  • Prevents deadlocks arising from mutually recursive tf.function objects (CVE-2021-41213)
  • Fixes a null pointer exception in DeserializeSparse (CVE-2021-41215)
  • Fixes an undefined behavior arising from reference binding to nullptr in tf.ragged.cross (CVE-2021-41214)
  • Fixes a heap OOB read in tf.ragged.cross (CVE-2021-41212)
  • Fixes a heap OOB in shape inference for QuantizeV2 (CVE-2021-41211)
  • Fixes a heap OOB read in all tf.raw_ops.QuantizeAndDequantizeV* ops (CVE-2021-41205)
  • Fixes an FPE in ParallelConcat (CVE-2021-41207)
  • Fixes FPE issues in convolutions with zero size filters (CVE-2021-41209)
  • Fixes a heap OOB read in tf.raw_ops.SparseCountSparseOutput (CVE-2021-41210)
  • Fixes vulnerabilities caused by incomplete validation in boosted trees code (CVE-2021-41208)
  • Fixes vulnerabilities caused by incomplete validation of shapes in multiple TF ops (CVE-2021-41206)
  • Fixes a segfault produced while copying constant resource tensor (CVE-2021-41204)
  • Fixes a vulnerability caused by uninitialized access in EinsumHelper::ParseEquation (CVE-2021-41201)
  • Fixes several vulnerabilities and segfaults caused by missing validation during checkpoint loading (CVE-2021-41203)
  • Fixes an overflow producing a crash in tf.range (CVE-2021-41202)
  • Fixes an overflow producing a crash in tf.image.resize when size is large (CVE-2021-41199)
  • Fixes an overflow producing a crash in tf.tile when tiling tensor is large (CVE-2021-41198)
  • Fixes a vulnerability produced due to incomplete validation in tf.summary.create_file_writer (CVE-2021-41200)
  • Fixes multiple crashes due to overflow and CHECK-fail in ops with large tensor shapes (CVE-2021-41197)
  • Fixes a crash in max_pool3d when size argument is 0 or negative (CVE-2021-41196)
  • Fixes a crash in tf.math.segment_* operations (CVE-2021-41195)
  • Updates curl to 7.78.0 to handle CVE-2021-22922, CVE-2021-22923, CVE-2021-22924, CVE-2021-22925, and CVE-2021-22926.

TensorFlow 2.5.2

01 Nov 20:36
957590e

Release 2.5.2

This release introduces several vulnerability fixes:

  • Fixes a code injection issue in saved_model_cli (CVE-2021-41228)
  • Fixes a vulnerability due to use of uninitialized value in TensorFlow (CVE-2021-41225)
  • Fixes a heap OOB in FusedBatchNorm kernels (CVE-2021-41223)
  • Fixes an arbitrary memory read in ImmutableConst (CVE-2021-41227)
  • Fixes a heap OOB in SparseBinCount (CVE-2021-41226)
  • Fixes a heap OOB in SparseFillEmptyRows (CVE-2021-41224)
  • Fixes a segfault due to negative splits in SplitV (CVE-2021-41222)
  • Fixes segfaults and vulnerabilities caused by accesses to invalid memory during shape inference in Cudnn* ops (CVE-2021-41221)
  • Fixes a null pointer exception when Exit node is not preceded by Enter op (CVE-2021-41217)
  • Fixes an integer division by 0 in tf.raw_ops.AllToAll (CVE-2021-41218)
  • Fixes an undefined behavior via nullptr reference binding in sparse matrix multiplication (CVE-2021-41219)
  • Fixes a heap buffer overflow in Transpose (CVE-2021-41216)
  • Prevents deadlocks arising from mutually recursive tf.function objects (CVE-2021-41213)
  • Fixes a null pointer exception in DeserializeSparse (CVE-2021-41215)
  • Fixes an undefined behavior arising from reference binding to nullptr in tf.ragged.cross (CVE-2021-41214)
  • Fixes a heap OOB read in tf.ragged.cross (CVE-2021-41212)
  • Fixes a heap OOB read in all tf.raw_ops.QuantizeAndDequantizeV* ops (CVE-2021-41205)
  • Fixes an FPE in ParallelConcat (CVE-2021-41207)
  • Fixes FPE issues in convolutions with zero size filters (CVE-2021-41209)
  • Fixes a heap OOB read in tf.raw_ops.SparseCountSparseOutput (CVE-2021-41210)
  • Fixes vulnerabilities caused by incomplete validation in boosted trees code (CVE-2021-41208)
  • Fixes vulnerabilities caused by incomplete validation of shapes in multiple TF ops (CVE-2021-41206)
  • Fixes a segfault produced while copying constant resource tensor (CVE-2021-41204)
  • Fixes a vulnerability caused by uninitialized access in EinsumHelper::ParseEquation (CVE-2021-41201)
  • Fixes several vulnerabilities and segfaults caused by missing validation during checkpoint loading (CVE-2021-41203)
  • Fixes an overflow producing a crash in tf.range (CVE-2021-41202)
  • Fixes an overflow producing a crash in tf.image.resize when size is large (CVE-2021-41199)
  • Fixes an overflow producing a crash in tf.tile when tiling tensor is large (CVE-2021-41198)
  • Fixes a vulnerability produced due to incomplete validation in tf.summary.create_file_writer (CVE-2021-41200)
  • Fixes multiple crashes due to overflow and CHECK-fail in ops with large tensor shapes (CVE-2021-41197)
  • Fixes a crash in max_pool3d when size argument is 0 or negative (CVE-2021-41196)
  • Fixes a crash in tf.math.segment_* operations (CVE-2021-41195)
  • Updates curl to 7.78.0 to handle CVE-2021-22922, CVE-2021-22923, CVE-2021-22924, CVE-2021-22925, and CVE-2021-22926.

TensorFlow 2.4.4

01 Nov 20:36
6491886

Release 2.4.4

NOTE: This is the last release in the 2.4.x line

This release introduces several vulnerability fixes:

  • Fixes a code injection issue in saved_model_cli (CVE-2021-41228)
  • Fixes a vulnerability due to use of uninitialized value in TensorFlow (CVE-2021-41225)
  • Fixes a heap OOB in FusedBatchNorm kernels (CVE-2021-41223)
  • Fixes an arbitrary memory read in ImmutableConst (CVE-2021-41227)
  • Fixes a heap OOB in SparseBinCount (CVE-2021-41226)
  • Fixes a heap OOB in SparseFillEmptyRows (CVE-2021-41224)
  • Fixes a segfault due to negative splits in SplitV (CVE-2021-41222)
  • Fixes segfaults and vulnerabilities caused by accesses to invalid memory during shape inference in Cudnn* ops (CVE-2021-41221)
  • Fixes a null pointer exception when Exit node is not preceded by Enter op (CVE-2021-41217)
  • Fixes an integer division by 0 in tf.raw_ops.AllToAll (CVE-2021-41218)
  • Fixes an undefined behavior via nullptr reference binding in sparse matrix multiplication (CVE-2021-41219)
  • Fixes a heap buffer overflow in Transpose (CVE-2021-41216)
  • Prevents deadlocks arising from mutually recursive tf.function objects (CVE-2021-41213)
  • Fixes a null pointer exception in DeserializeSparse (CVE-2021-41215)
  • Fixes an undefined behavior arising from reference binding to nullptr in tf.ragged.cross (CVE-2021-41214)
  • Fixes a heap OOB read in tf.ragged.cross (CVE-2021-41212)
  • Fixes a heap OOB read in all tf.raw_ops.QuantizeAndDequantizeV* ops (CVE-2021-41205)
  • Fixes an FPE in ParallelConcat (CVE-2021-41207)
  • Fixes FPE issues in convolutions with zero size filters (CVE-2021-41209)
  • Fixes a heap OOB read in tf.raw_ops.SparseCountSparseOutput (CVE-2021-41210)
  • Fixes vulnerabilities caused by incomplete validation in boosted trees code (CVE-2021-41208)
  • Fixes vulnerabilities caused by incomplete validation of shapes in multiple TF ops (CVE-2021-41206)
  • Fixes a segfault produced while copying constant resource tensor (CVE-2021-41204)
  • Fixes a vulnerability caused by uninitialized access in EinsumHelper::ParseEquation (CVE-2021-41201)
  • Fixes several vulnerabilities and segfaults caused by missing validation during checkpoint loading (CVE-2021-41203)
  • Fixes an overflow producing a crash in tf.range (CVE-2021-41202)
  • Fixes an overflow producing a crash in tf.image.resize when size is large (CVE-2021-41199)
  • Fixes an overflow producing a crash in tf.tile when tiling tensor is large (CVE-2021-41198)
  • Fixes a vulnerability produced due to incomplete validation in tf.summary.create_file_writer (CVE-2021-41200)
  • Fixes multiple crashes due to overflow and CHECK-fail in ops with large tensor shapes (CVE-2021-41197)
  • Fixes a crash in max_pool3d when size argument is 0 or negative (CVE-2021-41196)
  • Fixes a crash in tf.math.segment_* operations (CVE-2021-41195)
  • Updates curl to 7.78.0 to handle CVE-2021-22922, CVE-2021-22923, CVE-2021-22924, CVE-2021-22925, and CVE-2021-22926.

TensorFlow 2.7.0-rc1

22 Oct 16:53
ff68385

Release 2.7.0

Breaking Changes

  • tf.keras:

    • The methods Model.fit(), Model.predict(), and Model.evaluate() will no longer uprank input data of shape (batch_size,) to become (batch_size, 1). This enables Model subclasses to process scalar data in their train_step()/test_step()/predict_step() methods.
      Note that this change may break certain subclassed models. You can revert back to the previous behavior by adding upranking yourself in the train_step()/test_step()/predict_step() methods, e.g. if x.shape.rank == 1: x = tf.expand_dims(x, axis=-1). Functional models as well as Sequential models built with an explicit input shape are not affected.
    • The methods Model.to_yaml() and keras.models.model_from_yaml have been replaced to raise a RuntimeError as they can be abused to cause arbitrary code execution. It is recommended to use JSON serialization instead of YAML, or, a better alternative, serialize to H5.
    • LinearModel and WideDeepModel are moved to the tf.compat.v1.keras.models. namespace (tf.compat.v1.keras.models.LinearModel and tf.compat.v1.keras.models.WideDeepModel), and their experimental endpoints (tf.keras.experimental.models.LinearModel and tf.keras.experimental.models.WideDeepModel) are being deprecated.
    • RNG behavior change for all tf.keras.initializers classes. For any class constructed with a fixed seed, it will no longer generate the same value when invoked multiple times. Instead, it will return a different value each time, but in a deterministic sequence. This change makes the initialization behavior consistent between v1 and v2.
  • tf.lite:

    • Renamed fields in the SignatureDef table in the schema to maximize parity with the TF SavedModel's Signature concept.
    • Deprecate Makefile builds. Makefile users need to migrate their builds to CMake or Bazel. Please refer to the Build TensorFlow Lite with CMake and Build TensorFlow Lite for ARM boards for the migration.
    • Deprecate tflite::OpResolver::GetDelegates. The list returned by TfLite's BuiltinOpResolver::GetDelegates is now always empty. Instead, we recommend using the new method tflite::OpResolver::GetDelegateCreators in order to achieve lazy initialization of TfLite delegate instances.
  • TF Core:

    • tf.Graph.get_name_scope() now always returns a string, as documented. Previously, when called within name_scope("") or name_scope(None) contexts, it returned None; now it returns the empty string.
    • tensorflow/core/ir/ contains a new MLIR-based Graph dialect that is isomorphic to GraphDef and will be used to replace GraphDef-based (e.g., Grappler) optimizations.
    • Deprecated and removed the attrs() function in shape inference. All attributes should now be queried by name (rather than returned as a range) to enable changing the underlying storage.
    • The following Python symbols were accidentally added in earlier versions of TensorFlow and now are removed. Each symbol has a replacement that should be used instead, but note the replacement's argument names are different.
      • tf.quantize_and_dequantize_v4 (accidentally introduced in TensorFlow 2.4): Use tf.quantization.quantize_and_dequantize_v2 instead.
      • tf.batch_mat_mul_v3 (accidentally introduced in TensorFlow 2.6): Use tf.linalg.matmul instead.
      • tf.sparse_segment_sum_grad (accidentally introduced in TensorFlow 2.6): Use tf.raw_ops.SparseSegmentSumGrad instead. Directly calling this op is typically not necessary, as it is automatically used when computing the gradient of tf.sparse.segment_sum.
    • Renaming of tensorflow::int64 to int64_t in numerous places (the former is an alias for the latter), which could require regenerating selective op registration headers; otherwise execution would fail with an unregistered-kernels error.
  • Modular File System Migration:

    • Support for S3 and HDFS file systems has been migrated to a modular file systems based approach and is now available in https://github.com/tensorflow/io. The tensorflow-io python package should be installed for S3 and HDFS support with tensorflow.

Major Features and Improvements

  • Improvements to the TensorFlow debugging experience:

    • Previously, TensorFlow error stack traces involved many internal frames, which could be challenging to read through, while not being actionable for end users. As of TF 2.7, TensorFlow filters internal frames in most errors that it raises, to keep stack traces short, readable, and focused on what's actionable for end users (their own code).

      This behavior can be disabled by calling tf.debugging.disable_traceback_filtering(), and can be re-enabled via tf.debugging.enable_traceback_filtering(). If you are debugging a TensorFlow-internal issue (e.g. to prepare a TensorFlow PR), make sure to disable traceback filtering. You can check whether this feature is currently enabled by calling tf.debugging.is_traceback_filtering_enabled().

      Note that this feature is only available with Python 3.7 or higher.

    • Improve the informativeness of error messages raised by Keras Layer.__call__(), by adding the full list of argument values passed to the layer in every exception.

  • Introduce the tf.compat.v1.keras.utils.track_tf1_style_variables decorator, which enables using large classes of tf1-style variable_scope, get_variable, and compat.v1.layer-based components from within TF2 models running with TF2 behavior enabled.

  • tf.data:

    • tf.data service now supports auto-sharding. Users specify the sharding policy with tf.data.experimental.service.ShardingPolicy enum. It can be one of OFF (equivalent to today's "parallel_epochs" mode), DYNAMIC (equivalent to today's "distributed_epoch" mode), or one of the static sharding policies: FILE, DATA, FILE_OR_DATA, or HINT (corresponding to values of tf.data.experimental.AutoShardPolicy).

      Static sharding (auto-sharding) requires the number of tf.data service workers be fixed. Users need to specify the worker addresses in tensorflow.data.experimental.DispatcherConfig.

    • tf.data.experimental.service.register_dataset now accepts optional compression argument.

  • Keras:

    • tf.keras.layers.Conv now includes a public convolution_op method. This method can be used to simplify the implementation of Conv subclasses. There are two primary ways to use this new method. The first is to use the method directly in your own call method:
        class StandardizedConv2D(tf.keras.layers.Conv2D):
          def call(self, inputs):
            mean, var = tf.nn.moments(self.kernel, axes=[0, 1, 2], keepdims=True)
            return self.convolution_op(inputs, (self.kernel - mean) / tf.sqrt(var + 1e-10))
      Alternatively, you can override convolution_op:
        class StandardizedConv2D(tf.keras.layers.Conv2D):
          def convolution_op(self, inputs, kernel):
            mean, var = tf.nn.moments(kernel, axes=[0, 1, 2], keepdims=True)
            # Author code uses std + 1e-5
            return super().convolution_op(inputs, (kernel - mean) / tf.sqrt(var + 1e-10))
    • Added merge_state() method to tf.keras.metrics.Metric for use in distributed computations.
    • Added sparse and ragged options to tf.keras.layers.TextVectorization to allow for SparseTensor and RaggedTensor outputs from the layer.
  • distribute.experimental.rpc package:

    • distribute.experimental.rpc package introduces APIs to create a GRPC based server to register tf.function methods and a GRPC client to invoke remote registered methods. RPC APIs are intended for multi-client setups i.e. server and clients are started in separate binaries independently.

    • Example usage to create server:

         server = tf.distribute.experimental.rpc.Server.create("grpc", 
                 "127.0.0.1:1234")
         @tf.function(input_signature=[
           tf.TensorSpec([], tf.int32),
           tf.TensorSpec([], tf.int32)
         ])
         def _remote_multiply(a, b):
           return tf.math.multiply(a, b)
      
         server.register("multiply", _remote_multiply)
    • Example usage to create client:

      client = tf.distribute.experimental.rpc.Client.create("grpc", address)
      a = tf.constant(2, dtype=tf.int32)
      b = tf.constant(3, dtype=tf.int32)
      result = client.multiply(a, b)
  • tf.lite:

    • Add experimental API experimental_from_jax to support conversion from Jax models to TensorFlow Lite.
    • Support uint32 data type for cast op.
    • Add experimental quantization debugger tf.lite.QuantizationDebugger
  • Extension Types

    • Add experimental API to define new Python classes that can be handled by TensorFlow APIs. To create an extension type, simply define a Python class with tf.experimental.ExtensionType as its base, and use type annotations to specify the type for each field. E.g.:
      class MaskedTensor(tf.experimental.ExtensionType):
        values: tf.Tensor
        mask: tf.Tensor
      The tf.ExtensionType base class works similarly to typing.NamedTuple and @dataclasses.dataclass from the standard Python library.
    • Extension types are supported by Keras, tf.data, TF-hub, SavedModel, tf.function, control flow ops, py_function, and distribution strategy.
    • Add "dispatch decorators" that can be used to override the default behavior of TensorFlow ops (such as tf.add or tf.concat) when they are applied to ExtensionType values.
    • The BatchableExtensionType API can be used to define extension types that support APIs that make use of batching, such as tf.data.Dataset and tf.map_fn.

TensorFlow 2.7.0-rc0

07 Oct 18:26
ce35e5c

Release 2.7.0

Breaking Changes

  • tf.keras:

    • The methods Model.fit(), Model.predict(), and Model.evaluate() will no longer uprank input data of shape (batch_size,) to become (batch_size, 1). This enables Model subclasses to process scalar data in their train_step()/test_step()/predict_step() methods.
      Note that this change may break certain subclassed models. You can revert back to the previous behavior by adding upranking yourself in the train_step()/test_step()/predict_step() methods, e.g. if x.shape.rank == 1: x = tf.expand_dims(x, axis=-1). Functional models as well as Sequential models built with an explicit input shape are not affected.
    • The methods Model.to_yaml() and keras.models.model_from_yaml have been replaced to raise a RuntimeError as they can be abused to cause arbitrary code execution. It is recommended to use JSON serialization instead of YAML, or, a better alternative, serialize to H5.
    • LinearModel and WideDeepModel are moved to the tf.compat.v1.keras.models. namespace (tf.compat.v1.keras.models.LinearModel and tf.compat.v1.keras.models.WideDeepModel), and their experimental endpoints (tf.keras.experimental.models.LinearModel and tf.keras.experimental.models.WideDeepModel) are being deprecated.
    • RNG behavior change for all tf.keras.initializers classes. For any class constructed with a fixed seed, it will no longer generate the same value when invoked multiple times. Instead, it will return a different value each time, but in a deterministic sequence. This change makes the initialization behavior consistent between v1 and v2.
  • tf.lite:

    • Renamed fields in the SignatureDef table in the schema to maximize parity with the TF SavedModel's Signature concept.
    • Deprecate Makefile builds. Makefile users need to migrate their builds to CMake or Bazel. Please refer to the Build TensorFlow Lite with CMake and Build TensorFlow Lite for ARM boards for the migration.
    • Deprecate tflite::OpResolver::GetDelegates. The list returned by TfLite's BuiltinOpResolver::GetDelegates is now always empty. Instead, we recommend using the new method tflite::OpResolver::GetDelegateCreators in order to achieve lazy initialization of TfLite delegate instances.
  • TF Core:

    • tf.Graph.get_name_scope() now always returns a string, as documented. Previously, when called within name_scope("") or name_scope(None) contexts, it returned None; now it returns the empty string.
    • tensorflow/core/ir/ contains a new MLIR-based Graph dialect that is isomorphic to GraphDef and will be used to replace GraphDef-based (e.g., Grappler) optimizations.
    • Deprecated and removed the attrs() function in shape inference. All attributes should now be queried by name (rather than returned as a range) to enable changing the underlying storage.
    • The following Python symbols were accidentally added in earlier versions of TensorFlow and now are removed. Each symbol has a replacement that should be used instead, but note the replacement's argument names are different.
      • tf.quantize_and_dequantize_v4 (accidentally introduced in TensorFlow 2.4): Use tf.quantization.quantize_and_dequantize_v2 instead.
      • tf.batch_mat_mul_v3 (accidentally introduced in TensorFlow 2.6): Use tf.linalg.matmul instead.
      • tf.sparse_segment_sum_grad (accidentally introduced in TensorFlow 2.6): Use tf.raw_ops.SparseSegmentSumGrad instead. Directly calling this op is typically not necessary, as it is automatically used when computing the gradient of tf.sparse.segment_sum.
    • Renaming of tensorflow::int64 to int64_t in numerous places (the former is an alias for the latter), which could require regenerating selective op registration headers; otherwise execution would fail with an unregistered-kernels error.

Major Features and Improvements

  • Improvements to the TensorFlow debugging experience:

    • Previously, TensorFlow error stack traces involved many internal frames, which could be challenging to read through, while not being actionable for end users. As of TF 2.7, TensorFlow filters internal frames in most errors that it raises, to keep stack traces short, readable, and focused on what's actionable for end users (their own code).

      This behavior can be disabled by calling tf.debugging.disable_traceback_filtering(), and can be re-enabled via tf.debugging.enable_traceback_filtering(). If you are debugging a TensorFlow-internal issue (e.g. to prepare a TensorFlow PR), make sure to disable traceback filtering. You can check whether this feature is currently enabled by calling tf.debugging.is_traceback_filtering_enabled().

      Note that this feature is only available with Python 3.7 or higher.

    • Improve the informativeness of error messages raised by Keras Layer.__call__(), by adding the full list of argument values passed to the layer in every exception.

  • Introduce the tf.compat.v1.keras.utils.track_tf1_style_variables decorator, which enables using large classes of tf1-style variable_scope, get_variable, and compat.v1.layer-based components from within TF2 models running with TF2 behavior enabled.

  • tf.data:

    • tf.data service now supports auto-sharding. Users specify the sharding policy with tf.data.experimental.service.ShardingPolicy enum. It can be one of OFF (equivalent to today's "parallel_epochs" mode), DYNAMIC (equivalent to today's "distributed_epoch" mode), or one of the static sharding policies: FILE, DATA, FILE_OR_DATA, or HINT (corresponding to values of tf.data.experimental.AutoShardPolicy).

      Static sharding (auto-sharding) requires the number of tf.data service workers be fixed. Users need to specify the worker addresses in tensorflow.data.experimental.DispatcherConfig.

    • tf.data.experimental.service.register_dataset now accepts optional compression argument.

  • Keras:

  • tf.keras.layers.Conv now includes a public convolution_op method. This method can be used to simplify the implementation of Conv subclasses. There are two primary ways to use this new method. The first is to use the method directly in your own call method:

      class StandardizedConv2D(tf.keras.layers.Conv2D):
        def call(self, inputs):
          mean, var = tf.nn.moments(self.kernel, axes=[0, 1, 2], keepdims=True)
          return self.convolution_op(inputs, (self.kernel - mean) / tf.sqrt(var + 1e-10))

    Alternatively, you can override convolution_op:

        class StandardizedConv2D(tf.keras.layers.Conv2D):
        def convolution_op(self, inputs, kernel):
          mean, var = tf.nn.moments(kernel, axes=[0, 1, 2], keepdims=True)
          # Author code uses std + 1e-5
          return super().convolution_op(inputs, (kernel - mean) / tf.sqrt(var + 1e-10))
  • Added merge_state() method to tf.keras.metrics.Metric for use in distributed computations.

  • Added sparse and ragged options to tf.keras.layers.TextVectorization to allow for SparseTensor and RaggedTensor outputs from the layer.

  • distribute.experimental.rpc package:

    • distribute.experimental.rpc package introduces APIs to create a GRPC based server to register tf.function methods and a GRPC client to invoke remote registered methods. RPC APIs are intended for multi-client setups i.e. server and clients are started in separate binaries independently.

    • Example usage to create server:

         server = tf.distribute.experimental.rpc.Server.create("grpc", 
                 "127.0.0.1:1234")
         @tf.function(input_signature=[
           tf.TensorSpec([], tf.int32),
           tf.TensorSpec([], tf.int32)
         ])
         def _remote_multiply(a, b):
           return tf.math.multiply(a, b)
      
         server.register("multiply", _remote_multiply)
    • Example usage to create client:

      client = tf.distribute.experimental.rpc.Client.create("grpc", address)
      a = tf.constant(2, dtype=tf.int32)
      b = tf.constant(3, dtype=tf.int32)
      result = client.multiply(a, b)
  • tf.lite:

    • Add experimental API experimental_from_jax to support conversion from Jax models to TensorFlow Lite.
    • Support uint32 data type for cast op.
    • Add experimental quantization debugger tf.lite.QuantizationDebugger
  • Extension Types

    • Add experimental API to define new Python classes that can be handled by TensorFlow APIs. To create an extension type, simply define a Python class with tf.experimental.ExtensionType as its base, and use type annotations to specify the type for each field. E.g.:
      class MaskedTensor(tf.experimental.ExtensionType):
        values: tf.Tensor
        mask: tf.Tensor
      The tf.ExtensionType base class works similarly to typing.NamedTuple and @dataclasses.dataclass from the standard Python library.
    • Extension types are supported by Keras, tf.data, TF-hub, SavedModel, tf.function, control flow ops, py_function, and distribution strategy.
    • Add "dispatch decorators" that can be used to override the default behavior of TensorFlow ops (such as tf.add or tf.concat) when they are applied to ExtensionType values.
    • The BatchableExtensionType API can be used to define extension types that support APIs that make use of batching, such as tf.data.Dataset and tf.map_fn.

Bug Fixes and Other Changes

  • TF Core:
    • Random number generation (RNG) system
      • Add argument alg to tf.random.stateless_* functions to explicitly select the RNG algorithm.
      • Add `tf...

TensorFlow 2.4.3

12 Aug 16:04

Release 2.4.3

This release introduces several vulnerability fixes:

  • Fixes a heap out of bounds access in sparse reduction operations (CVE-2021-37635)
  • Fixes a floating point exception in SparseDenseCwiseDiv (CVE-2021-37636)
  • Fixes a null pointer dereference in CompressElement (CVE-2021-37637)
  • Fixes a null pointer dereference in RaggedTensorToTensor (CVE-2021-37638)
  • Fixes a null pointer dereference and a heap OOB read arising from operations restoring tensors (CVE-2021-37639)
  • Fixes an integer division by 0 in sparse reshaping (CVE-2021-37640)
  • Fixes a division by 0 in ResourceScatterDiv (CVE-2021-37642)
  • Fixes a heap OOB in RaggedGather (CVE-2021-37641)
  • Fixes a std::abort raised from TensorListReserve (CVE-2021-37644)
  • Fixes a null pointer dereference in MatrixDiagPartOp (CVE-2021-37643)
  • Fixes an integer overflow due to conversion to unsigned (CVE-2021-37645)
  • Fixes a bad allocation error in StringNGrams caused by integer conversion (CVE-2021-37646)
  • Fixes a null pointer dereference in SparseTensorSliceDataset (CVE-2021-37647)
  • Fixes an incorrect validation of SaveV2 inputs (CVE-2021-37648)
  • Fixes a null pointer dereference in UncompressElement (CVE-2021-37649)
  • Fixes a segfault and a heap buffer overflow in {Experimental,}DatasetToTFRecord (CVE-2021-37650)
  • Fixes a heap buffer overflow in FractionalAvgPoolGrad (CVE-2021-37651)
  • Fixes a use after free in boosted trees creation (CVE-2021-37652)
  • Fixes a division by 0 in ResourceGather (CVE-2021-37653)
  • Fixes a heap OOB and a CHECK fail in ResourceGather (CVE-2021-37654)
  • Fixes a heap OOB in ResourceScatterUpdate (CVE-2021-37655)
  • Fixes an undefined behavior arising from reference binding to nullptr in RaggedTensorToSparse (CVE-2021-37656)
  • Fixes an undefined behavior arising from reference binding to nullptr in MatrixDiagV* ops (CVE-2021-37657)
  • Fixes an undefined behavior arising from reference binding to nullptr in MatrixSetDiagV* ops (CVE-2021-37658)
  • Fixes an undefined behavior arising from reference binding to nullptr and heap OOB in binary cwise ops (CVE-2021-37659)
  • Fixes a division by 0 in inplace operations (CVE-2021-37660)
  • Fixes a crash caused by integer conversion to unsigned (CVE-2021-37661)
  • Fixes an undefined behavior arising from reference binding to nullptr in boosted trees (CVE-2021-37662)
  • Fixes a heap OOB in boosted trees (CVE-2021-37664)
  • Fixes vulnerabilities arising from incomplete validation in QuantizeV2 (CVE-2021-37663)
  • Fixes vulnerabilities arising from incomplete validation in MKL requantization (CVE-2021-37665)
  • Fixes an undefined behavior arising from reference binding to nullptr in RaggedTensorToVariant (CVE-2021-37666)
  • Fixes an undefined behavior arising from reference binding to nullptr in unicode encoding (CVE-2021-37667)
  • Fixes an FPE in tf.raw_ops.UnravelIndex (CVE-2021-37668)
  • Fixes a crash in NMS ops caused by integer conversion to unsigned (CVE-2021-37669)
  • Fixes a heap OOB in UpperBound and LowerBound (CVE-2021-37670)
  • Fixes an undefined behavior arising from reference binding to nullptr in map operations (CVE-2021-37671)
  • Fixes a heap OOB in SdcaOptimizerV2 (CVE-2021-37672)
  • Fixes a CHECK-fail in MapStage (CVE-2021-37673)
  • Fixes a vulnerability arising from incomplete validation in MaxPoolGrad (CVE-2021-37674)
  • Fixes an undefined behavior arising from reference binding to nullptr in shape inference (CVE-2021-37676)
  • Fixes a division by 0 in most convolution operators (CVE-2021-37675)
  • Fixes vulnerabilities arising from missing validation in shape inference for Dequantize (CVE-2021-37677)
  • Fixes an arbitrary code execution due to YAML deserialization (CVE-2021-37678)
  • Fixes a heap OOB in nested tf.map_fn with RaggedTensors (CVE-2021-37679)
  • Fixes a division by zero in TFLite (CVE-2021-37680)
  • Fixes an NPE in TFLite (CVE-2021-37681)
  • Fixes a vulnerability arising from use of uninitialized value in TFLite (CVE-2021-37682)
  • Fixes an FPE in TFLite division operations (CVE-2021-37683)
  • Fixes an FPE in TFLite pooling operations (CVE-2021-37684)
  • Fixes an infinite loop in TFLite (CVE-2021-37686)
  • Fixes a heap OOB in TFLite (CVE-2021-37685)
  • Fixes a heap OOB in TFLite's Gather* implementations (CVE-2021-37687)
  • Fixes an undefined behavior arising from null pointer dereference in TFLite (CVE-2021-37688)
  • Fixes an undefined behavior arising from null pointer dereference in TFLite MLIR optimizations (CVE-2021-37689)
  • Fixes an FPE in LSH in TFLite (CVE-2021-37691)
  • Fixes a segfault on strings tensors with mismatched dimensions, arising in Go code (CVE-2021-37692)
  • Fixes a use after free and a potential segfault in shape inference functions (CVE-2021-37690)
  • Updates curl to 7.77.0 to handle CVE-2021-22876, CVE-2021-22897, CVE-2021-22898, and CVE-2021-22901.