Releases: tensorflow/tensorflow
TensorFlow 2.14.0-rc0
Release 2.14.0
TensorFlow
Breaking Changes
- `tf.Tensor`
  - The class hierarchy for `tf.Tensor` has changed: there are now explicit `EagerTensor` and `SymbolicTensor` classes for eager and `tf.function` execution, respectively. Users who relied on the exact type of a tensor (e.g. `type(t) == tf.Tensor`) will need to update their code to use `isinstance(t, tf.Tensor)`. The `tf.is_symbolic_tensor` helper added in 2.13 may be used when it is necessary to determine whether a value is specifically a symbolic tensor.
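The required code change can be sketched with a plain Python class hierarchy (the class names below are illustrative stand-ins, not the real TensorFlow types):

```python
class Tensor:
    """Stand-in for the tf.Tensor base class."""

class EagerTensor(Tensor):
    """Stand-in for the new eager subclass."""

t = EagerTensor()

# An exact-type check no longer matches the base class:
print(type(t) == Tensor)      # False
# isinstance covers the whole hierarchy and keeps working:
print(isinstance(t, Tensor))  # True
```

The same reasoning applies to `SymbolicTensor` values produced inside `tf.function`, where `tf.is_symbolic_tensor` can distinguish the symbolic case.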
- `tf.compat.v1.Session`
  - `tf.compat.v1.Session.partial_run` and `tf.compat.v1.Session.partial_run_setup` will be deprecated in the next release.
Known Caveats
- `tf.lite`
  - When the converter flag `_experimental_use_buffer_offset` is enabled, additional metadata is automatically excluded from the generated model. The behavior is the same as setting `exclude_conversion_metadata`.
  - If the model is larger than 2GB, the `exclude_conversion_metadata` flag must also be set.
Major Features and Improvements
- Enabled JIT-compiled i64-indexed kernels on GPU for large tensors with more than 2**32 elements.
  - Unary GPU kernels: `Abs`, `Atanh`, `Acos`, `Acosh`, `Asin`, `Asinh`, `Atan`, `Cos`, `Cosh`, `Sin`, `Sinh`, `Tan`, `Tanh`.
  - Binary GPU kernels: `AddV2`, `Sub`, `Div`, `DivNoNan`, `Mul`, `MulNoNan`, `FloorDiv`, `Equal`, `NotEqual`, `Greater`, `GreaterEqual`, `LessEqual`, `Less`.
- `tf.lite`
  - Added experimental support for converting models that may be larger than 2GB before buffer deduplication.
Bug Fixes and Other Changes
- `tf.py_function` and `tf.numpy_function` can now be used as function decorators for clearer code:

  ```python
  @tf.py_function(Tout=tf.float32)
  def my_fun(x):
      print("This always executes eagerly.")
      return x + 1
  ```
- `tf.lite`
  - `Strided_Slice` now supports `UINT32`.
- `tf.config.experimental.enable_tensor_float_32_execution`
  - Disabling TensorFloat-32 execution now causes TPUs to use float32 precision for float32 matmuls and other ops. TPUs have always used bfloat16 precision for certain ops, like matmul, when such ops had float32 inputs. Now, disabling TensorFloat-32 by calling `tf.config.experimental.enable_tensor_float_32_execution(False)` will cause TPUs to use float32 precision for such ops instead of bfloat16.
- `tf.experimental.dtensor`
  - API changes for relayout. Added a new API, `dtensor.relayout_like`, for relayouting a tensor according to the layout of another tensor.
  - Added `dtensor.get_default_mesh`, for retrieving the current default mesh under the dtensor context.
  - `*fft*` ops now support DTensors with any layout. Fixed a bug in `fft2d/fft3d`, `ifft2d/ifft3d`, `rfft2d/rfft3d`, and `irfft2d/irfft3d` for sharded input.
- `tf.experimental.strict_mode`
  - Added a new API, `strict_mode`, which converts all deprecation warnings into runtime errors with instructions on switching to a recommended substitute.
- TensorFlow Debugger (tfdbg) CLI: the ncurses-based CLI for tfdbg v1 was removed.
- TensorFlow now supports C++ RTTI on mobile and Android. To enable this feature, pass the flag `--define=tf_force_rtti=true` to Bazel when building TensorFlow. This may be needed when linking TensorFlow into RTTI-enabled programs, since mixing RTTI and non-RTTI code can cause ABI issues.
- `tf.ones`, `tf.zeros`, `tf.fill`, `tf.ones_like`, and `tf.zeros_like` now take an additional Layout argument that controls the output layout of their results.
- `tf.nest` and `tf.data` now support user-defined classes implementing `__tf_flatten__` and `__tf_unflatten__` methods. See nest_util code examples for an example.
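As a sketch of that protocol (the `MaskedPair` class and its fields are hypothetical, and the exact metadata/components contract should be checked against the nest_util examples mentioned above), a user-defined class might look like:

```python
class MaskedPair:
    """Hypothetical user-defined class exposing the flatten/unflatten protocol."""

    def __init__(self, value, mask):
        self.value = value
        self.mask = mask

    def __tf_flatten__(self):
        # Return (static metadata, flattenable components).
        metadata = ()                        # nothing static in this toy class
        components = (self.value, self.mask)
        return metadata, components

    @classmethod
    def __tf_unflatten__(cls, metadata, components):
        # Rebuild the object from the pieces produced by __tf_flatten__.
        value, mask = components
        return cls(value, mask)

# Round-trip without TensorFlow, exercising the protocol directly:
p = MaskedPair([1.0, 2.0], [True, False])
meta, comps = p.__tf_flatten__()
q = MaskedPair.__tf_unflatten__(meta, comps)
print(q.value, q.mask)  # [1.0, 2.0] [True, False]
```

With such methods defined, `tf.nest` utilities can flatten the object into its component leaves and reconstruct it afterwards.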
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Major Features and Improvements
- `tf.keras`
  - `Model.compile` now supports `steps_per_execution='auto'` as a parameter, allowing automatic tuning of steps per execution during `Model.fit`, `Model.predict`, and `Model.evaluate` for a significant performance boost.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aakar Dwivedi, Adrian Popescu, ag.ramesh, Akhil Goel, Albert Zeyer, Alex Rosen, Alexey Vishnyakov, Andrew Goodbody, angerson, Ashiq Imran, Ayan Moitra, Ben Barsdell, Bhavani Subramanian, Boian Petkantchin, BrianWieder, Chris Mc, cloudhan, Connor Flanagan, Daniel Lang, Daniel Yudelevich, Darya Parygina, David Korczynski, David Svantesson, dingyuqing05, Dragan Mladjenovic, dskkato, Eli Kobrin, Erick Ochoa, Erik Schultheis, Frédéric Bastien, gaikwadrahul8, Gauri1 Deshpande, georgiie, guozhong.zhuang, H. Vetinari, Isaac Cilia Attard, Jake Hall, Jason Furmanek, Jerry Ge, Jinzhe Zeng, JJ, johnnkp, Jonathan Albrecht, jongkweh, justkw, Kanvi Khanna, kikoxia, Koan-Sin Tan, Kun-Lu, Learning-To-Play, ltsai1, Lu Teng, luliyucoordinate, Mahmoud Abuzaina, mdfaijul, Milos Puzovic, Nathan Luehr, Om Thakkar, pateldeev, Peng Sun, Philipp Hack, pjpratik, Poliorcetics, rahulbatra85, rangjiaheng, Renato Arantes, Robert Kalmar, roho, Rylan Justice, Sachin Muradi, samypr100, Saoirse Stewart, Shanbin Ke, Shivam Mishra, shuw, Song Ziming, Stephan Hartmann, Sulav, sushreebarsa, T Coxon, Tai Ly, talyz, Tensorflow Jenkins, Thibaut Goetghebuer-Planchon, Thomas Preud'Homme, tilakrayal, Tirumalesh, Tj Xu, Tom Allsop, Trevor Morris, Varghese, Jojimon, Wen Chen, Yaohui Liu, Yimei Sun, Zhoulong Jiang, Zhoulong, Jiang
TensorFlow 2.13.0
Release 2.13.0
TensorFlow
Breaking Changes
- The LMDB kernels have been changed to return an error. This is in preparation for completely removing them from TensorFlow. The LMDB dependency that these kernels brought to TensorFlow has been dropped, making the build slightly faster and more secure.
Major Features and Improvements
- `tf.lite`
  - Added 16-bit and 64-bit float type support for built-in op `cast`.
  - The Python TF Lite Interpreter bindings now have an option `experimental_disable_delegate_clustering` to turn off delegate clustering.
  - Added int16x8 support for the built-in op `exp`.
  - Added int16x8 support for the built-in op `mirror_pad`.
  - Added int16x8 support for the built-in ops `space_to_batch_nd` and `batch_to_space_nd`.
  - Added 16-bit int type support for built-in ops `less`, `greater_than`, `equal`.
  - Added 8-bit and 16-bit support for `floor_div` and `floor_mod`.
  - Added 16-bit and 32-bit int support for the built-in op `bitcast`.
  - Added 8-bit/16-bit/32-bit int/uint support for the built-in op `bitwise_xor`.
  - Added int16 indices support for built-in ops `gather` and `gather_nd`.
  - Added 8-bit/16-bit/32-bit int/uint support for the built-in op `right_shift`.
  - Added a reference implementation for 16-bit int unquantized `add`.
  - Added a reference implementation for 16-bit int and 32-bit unsigned int unquantized `mul`.
  - `add_op` supports broadcasting up to 6 dimensions.
  - Added 16-bit support for `top_k`.
- `tf.function`
  - `ConcreteFunction` (`tf.types.experimental.ConcreteFunction`), as generated through `get_concrete_function`, now performs holistic input validation similar to calling `tf.function` directly. This can cause breakages where existing calls pass Tensors with the wrong shape or omit certain non-Tensor arguments (including default values).
- `tf.nn`
  - `tf.nn.embedding_lookup_sparse` and `tf.nn.safe_embedding_lookup_sparse` now support ids and weights described by `tf.RaggedTensor`s.
  - Added a new boolean argument `allow_fast_lookup` to `tf.nn.embedding_lookup_sparse` and `tf.nn.safe_embedding_lookup_sparse`, which enables a simplified and typically faster lookup procedure.
- `tf.data`
  - `tf.data.Dataset.zip` now supports Python-style zipping, i.e. `Dataset.zip(a, b, c)`.
  - `tf.data.Dataset.shuffle` now supports `tf.data.UNKNOWN_CARDINALITY` when doing a "full shuffle" using `dataset = dataset.shuffle(dataset.cardinality())`. Remember that a "full shuffle" loads the full dataset into memory so that it can be shuffled, so only use this with small datasets or datasets of small objects (like filenames).
- `tf.math`
  - `tf.nn.top_k` now supports specifying the output index type via the parameter `index_type`. Supported types are `tf.int16`, `tf.int32` (default), and `tf.int64`.
- `tf.SavedModel`
  - Introduced class method `tf.saved_model.experimental.Fingerprint.from_proto(proto)`, which can be used to construct a `Fingerprint` object directly from a protobuf.
  - Introduced member method `tf.saved_model.experimental.Fingerprint.singleprint()`, which provides a convenient way to uniquely identify a SavedModel.
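The new `Dataset.zip` calling convention mentioned above mirrors Python's built-in `zip`, which takes sequences as separate positional arguments rather than as a single nested tuple. In plain Python terms (using lists as stand-ins for datasets):

```python
a, b, c = [1, 2], [10, 20], [100, 200]

# Old style: one nested structure argument,
# analogous to Dataset.zip((a, b, c)).
old_style = list(zip(*[a, b, c]))

# New Python-style: separate positional arguments,
# analogous to Dataset.zip(a, b, c).
new_style = list(zip(a, b, c))

print(new_style)  # [(1, 10, 100), (2, 20, 200)]
```

Both spellings produce the same element-wise pairing; the varargs form simply removes the extra tuple nesting at the call site.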
Bug Fixes and Other Changes
- `tf.Variable`
  - Changed resource variables to inherit from `tf.compat.v2.Variable` instead of `tf.compat.v1.Variable`. Some checks for `isinstance(v, tf.compat.v1.Variable)` that previously returned True may now return False.
- `tf.distribute`
  - Opened an experimental API, `tf.distribute.experimental.coordinator.get_current_worker_index`, for retrieving the worker index from within a worker, when using parameter server training with a custom training loop.
- `tf.experimental.dtensor`
  - Deprecated `dtensor.run_on` in favor of `dtensor.default_mesh`, to correctly indicate that the context does not override the mesh that the ops and functions will run on; it only sets a fallback default mesh.
  - The list of members of `dtensor.Layout` and `dtensor.Mesh` has changed slightly as part of efforts to consolidate the C++ and Python source code with pybind11. Most notably, `dtensor.Layout.serialized_string` is removed.
  - Minor API changes to represent Single Device Layout for non-distributed Tensors inside DTensor functions. Runtime support will be added soon.
- `tf.experimental.ExtensionType`
  - `tf.experimental.ExtensionType` now supports Python `tuple` as the type annotation of its fields.
- `tf.nest`
  - The deprecated API `tf.nest.is_sequence` has now been deleted. Please use `tf.nest.is_nested` instead.
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Breaking Changes
- Removed the Keras scikit-learn API wrappers (`KerasClassifier` and `KerasRegressor`), which had been deprecated in August 2021. We recommend using SciKeras instead.
- The default Keras model saving format is now the Keras v3 format: calling `model.save("xyz.keras")` will no longer create an H5 file; it will create a native Keras model file. This will only be breaking for you if you were manually inspecting or modifying H5 files saved by Keras under a `.keras` extension. If this breaks you, simply add `save_format="h5"` to your `.save()` call to revert to the prior behavior.
- Added a `keras.utils.TimedThread` utility to run a timed thread every x seconds. It can be used to run a threaded function alongside model training or any other snippet of code.
- In the `keras` PyPI package, accessible symbols are now restricted to symbols that are intended to be public. This may affect your code if you were using `import keras` and you used `keras` functions that were not public APIs, but were accessible in earlier versions with direct imports. In those cases, please use the following guidelines:
  - The API may be available in the public Keras API under a different name, so make sure to look for it on keras.io or the TensorFlow docs and switch to the public version.
  - It could also be a simple Python or TF utility that you could easily copy over to your own codebase. In those cases, just make it your own!
  - If you believe it should definitely be a public Keras API, please open a feature request in the Keras GitHub repo.
  - As a workaround, you could import the same private symbol from `keras.src`, but keep in mind the `src` namespace is not stable and those APIs may change or be removed in the future.
Major Features and Improvements
- Added F-Score metrics `tf.keras.metrics.FBetaScore`, `tf.keras.metrics.F1Score`, and `tf.keras.metrics.R2Score`.
- Added activation function `tf.keras.activations.mish`.
- Added the experimental `keras.metrics.experimental.PyMetric` API for metrics that run Python code on the host CPU (compiled outside of the TensorFlow graph). This can be used for integrating metrics from external Python libraries (like sklearn or pycocotools) into Keras as first-class Keras metrics.
- Added the `tf.keras.optimizers.Lion` optimizer.
- Added the `tf.keras.layers.SpectralNormalization` layer wrapper to perform spectral normalization on the weights of a target layer.
- The `SidecarEvaluatorModelExport` callback has been added to Keras as `keras.callbacks.SidecarEvaluatorModelExport`. This callback allows for exporting the best-scoring model as evaluated by a `SidecarEvaluator` evaluator. The evaluator regularly evaluates the model and exports it if the user-defined comparison function determines that it is an improvement.
- Added warmup capabilities to the `tf.keras.optimizers.schedules.CosineDecay` learning rate scheduler. You can now specify an initial and target learning rate, and the scheduler will perform a linear interpolation between the two, after which it will begin a decay phase.
- Added experimental support for an exactly-once visitation guarantee for evaluating Keras models trained with `tf.distribute.ParameterServerStrategy`, via the `exact_evaluation_shards` argument in `Model.fit` and `Model.evaluate`.
- Added `tf.keras.__internal__.KerasTensor`, `tf.keras.__internal__.SparseKerasTensor`, and `tf.keras.__internal__.RaggedKerasTensor` classes. You can use these classes to do instance type checking and type annotations for layer/model inputs and outputs.
- All the `tf.keras.dtensor.experimental.optimizers` classes have been merged into `tf.keras.optimizers`. You can migrate your code to use `tf.keras.optimizers` directly. The API namespace for `tf.keras.dtensor.experimental.optimizers` will be removed in future releases.
- Added support for `class_weight` for 3+ dimensional targets (e.g. image segmentation masks) in `Model.fit`.
- Added a new loss, `keras.losses.CategoricalFocalCrossentropy`.
- Removed `tf.keras.dtensor.experimental.layout_map_scope()`. You can use `tf.keras.dtensor.experimental.LayoutMap.scope()` instead.
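The shape of the warmup-plus-decay schedule described above can be sketched in plain Python. This mirrors the documented behavior (linear warmup to a target rate, then cosine decay) but is not the Keras implementation, and the argument names are illustrative:

```python
import math

def warmup_cosine_decay(step, initial_lr, target_lr, warmup_steps, decay_steps):
    """Linear warmup from initial_lr to target_lr, then cosine decay toward 0."""
    if step < warmup_steps:
        # Linear interpolation between the initial and target learning rates.
        return initial_lr + (target_lr - initial_lr) * step / warmup_steps
    # Cosine decay phase, starting from target_lr.
    progress = min((step - warmup_steps) / decay_steps, 1.0)
    return target_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

# Warmup start, warmup end, and full decay:
print(warmup_cosine_decay(0, 0.0, 1.0, 100, 900))     # 0.0
print(warmup_cosine_decay(100, 0.0, 1.0, 100, 900))   # 1.0
print(warmup_cosine_decay(1000, 0.0, 1.0, 100, 900))  # ~0.0
```

The real scheduler additionally supports an `alpha` floor on the decayed rate; consult the `CosineDecay` API docs for the exact parameters.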
Security
- Fixes an incorrect values rank check in `UpperBound` and `LowerBound` (CVE-2023-33976).
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Pl...
TensorFlow 2.12.1
Release 2.12.1
Bug Fixes and Other Changes
- The `ambe` config is no longer needed to build and test aarch64, and it will be removed in the future. Made `cpu_arm64_pip.sh` and `cpu_arm64_nonpip.sh` more similar for easier future maintenance.
TensorFlow 2.13.0-rc2
Release 2.13.0
The notes for this release candidate are the same as those of the final 2.13.0 release above (with Security listed as N/A).
TensorFlow 2.13.0-rc1
Release 2.13.0
TensorFlow
Breaking Changes
- The LMDB kernels have been changed to return an error. This is in preparation for completely removing them from TensorFlow. The LMDB dependency that these kernels are bringing to TensorFlow has been dropped, thus making the build slightly faster and more secure.
Major Features and Improvements
-
tf.lite
- Added 16-bit and 64-bit float type support for built-in op
cast
. - The Python TF Lite Interpreter bindings now have an option
experimental_disable_delegate_clustering
to turn-off delegate clustering. - Added int16x8 support for the built-in op
exp
- Added int16x8 support for the built-in op
mirror_pad
- Added int16x8 support for the built-in ops
space_to_batch_nd
andbatch_to_space_nd
- Added 16-bit int type support for built-in op
less
,greater_than
,equal
- Added 8-bit and 16-bit support for
floor_div
andfloor_mod
. - Added 16-bit and 32-bit int support for the built-in op
bitcast
. - Added 8-bit/16-bit/32-bit int/uint support for the built-in op
bitwise_xor
- Added int16 indices support for built-in op
gather
andgather_nd
. - Added 8-bit/16-bit/32-bit int/uint support for the built-in op
right_shift
- Added reference implementation for 16-bit int unquantized
add
. - Added reference implementation for 16-bit int and 32-bit unsigned int unquantized
mul
. add_op
supports broadcasting up to 6 dimensions.- Added 16-bit support for
top_k
.
- Added 16-bit and 64-bit float type support for built-in op
-
tf.function
- ConcreteFunction (
tf.types.experimental.ConcreteFunction
) as generated throughget_concrete_function
now performs holistic input validation similar to callingtf.function
directly. This can cause breakages where existing calls pass Tensors with the wrong shape or omit certain non-Tensor arguments (including default values).
- ConcreteFunction (
-
tf.nn
tf.nn.embedding_lookup_sparse
andtf.nn.safe_embedding_lookup_sparse
now support ids and weights described bytf.RaggedTensor
s.- Added a new boolean argument
allow_fast_lookup
totf.nn.embedding_lookup_sparse
andtf.nn.safe_embedding_lookup_sparse
, which enables a simplified and typically faster lookup procedure.
-
tf.data
tf.data.Dataset.zip
now supports Python-style zipping, i.e.Dataset.zip(a, b, c)
.tf.data.Dataset.shuffle
now supportstf.data.UNKNOWN_CARDINALITY
When doing a "full shuffle" usingdataset = dataset.shuffle(dataset.cardinality())
. But remember, a "full shuffle" will load the full dataset into memory so that it can be shuffled, so make sure to only use this with small datasets or datasets of small objects (like filenames).
-
tf.math
tf.nn.top_k
now supports specifying the output index type via parameterindex_type
. Supported types aretf.int16
,tf.int32
(default), andtf.int64
.
-
tf.SavedModel
- Introduced class method
tf.saved_model.experimental.Fingerprint.from_proto(proto)
, which can be used to construct aFingerprint
object directly from a protobuf. - Introduced member method
tf.saved_model.experimental.Fingerprint.singleprint()
, which provides a convenient way to uniquely identify a SavedModel.
- Introduced class method
Bug Fixes and Other Changes
-
tf.Variable
- Changed resource variables to inherit from
tf.compat.v2.Variable
instead oftf.compat.v1.Variable
. Some checks forisinstance(v, tf compat.v1.Variable)
that previously returned True may now return False.
- Changed resource variables to inherit from
-
tf.distribute
- Opened an experimental API,
tf.distribute.experimental.coordinator.get_current_worker_index
, for retrieving the worker index from within a worker, when using parameter server training with a custom training loop.
- Opened an experimental API,
-
tf.experimental.dtensor
- Deprecated
dtensor.run_on
in favor ofdtensor.default_mesh
to correctly indicate that the context does not override the mesh that the ops and functions will run on, it only sets a fallback default mesh. - List of members of
dtensor.Layout
anddtensor.Mesh
have slightly changed as part of efforts to consolidate the C++ and Python source code with pybind11. Most notably,dtensor.Layout.serialized_string
is removed. - Minor API changes to represent Single Device Layout for non-distributed Tensors inside DTensor functions. Runtime support will be added soon.
- Deprecated
-
tf.experimental.ExtensionType
tf.experimental.ExtensionType
now supports Pythontuple
as the type annotation of its fields.
-
tf.nest
- Deprecated API
tf.nest.is_sequence
has now been deleted. Please usetf.nest.is_nested
instead.
- Deprecated API
Keras
Keras is a framework built on top of the TensorFlow. See more details on the Keras website.
Breaking Changes
- Removed the Keras scikit-learn API wrappers (
KerasClassifier
andKerasRegressor
), which had been deprecated in August 2021. We recommend using SciKeras instead. - The default Keras model saving format is now the Keras v3 format: calling
model.save("xyz.keras")
will no longer create a H5 file, it will create a native Keras model file. This will only be breaking for you if you were manually inspecting or modifying H5 files saved by Keras under a.keras
extension. If this breaks you, simply addsave_format="h5"
to your.save()
call to revert back to the prior behavior. - Added
keras.utils.TimedThread
utility to run a timed thread every x seconds. It can be used to run a threaded function alongside model training or any other snippet of code. - In the
keras
PyPI package, accessible symbols are now restricted to symbols that are intended to be public. This may affect your code if you were usingimport keras
and you usedkeras
functions that were not public APIs, but were accessible in earlier versions with direct imports. In those cases, please use the following guideline:
- The API may be available in the public Keras API under a different name, so make sure to look for it on keras.io or TensorFlow docs and switch to the public version.
- It could also be a simple python or TF utility that you could easily copy over to your own codebase. In those case, just make it your own!
- If you believe it should definitely be a public Keras API, please open a feature request in keras GitHub repo.
- As a workaround, you could import the same private symbol keraskeras.src
, but keep in mind thesrc
namespace is not stable and those APIs may change or be removed in the future.
Major Features and Improvements
- Added F-Score metrics
tf.keras.metrics.FBetaScore
,tf.keras.metrics.F1Score
, andtf.keras.metrics.R2Score
. - Added activation function
tf.keras.activations.mish
. - Added experimental
keras.metrics.experimental.PyMetric
API for metrics that run Python code on the host CPU (compiled outside of the TensorFlow graph). This can be used for integrating metrics from external Python libraries (like sklearn or pycocotools) into Keras as first-class Keras metrics. - Added
tf.keras.optimizers.Lion
optimizer. - Added
tf.keras.layers.SpectralNormalization
layer wrapper to perform spectral normalization on the weights of a target layer. - The
SidecarEvaluatorModelExport
callback has been added to Keras askeras.callbacks.SidecarEvaluatorModelExport
. This callback allows for exporting the model the best-scoring model as evaluated by aSidecarEvaluator
evaluator. The evaluator regularly evaluates the model and exports it if the user-defined comparison function determines that it is an improvement. - Added warmup capabilities to
tf.keras.optimizers.schedules.CosineDecay
learning rate scheduler. You can now specify an initial and target learning rate, and our scheduler will perform a linear interpolation between the two, after which it will begin a decay phase. - Added experimental support for an exactly-once visitation guarantee for evaluating Keras models trained with
tf.distribute ParameterServerStrategy
, via theexact_evaluation_shards
argument inModel.fit
andModel.evaluate
. - Added
tf.keras.__internal__.KerasTensor
,tf.keras.__internal__.SparseKerasTensor
, andtf.keras.__internal__.RaggedKerasTensor
classes. You can use these classes to do instance type checking and type annotations for layer/model inputs and outputs. - All the
tf.keras.dtensor.experimental.optimizers
classes have been merged withtf.keras.optimizers
. You can migrate your code to usetf.keras.optimizers
directly. The API namespace fortf.keras.dtensor.experimental.optimizers
will be removed in future releases. - Added support for
class_weight
for 3+ dimensional targets (e.g. image segmentation masks) inModel.fit
. - Added a new loss,
keras.losses.CategoricalFocalCrossentropy
. - Removed
tf.keras.dtensor.experimental.layout_map_scope()
. You can use tf.keras.dtensor.experimental.LayoutMap.scope()
instead.
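The mish activation added above has a simple closed form, x * tanh(softplus(x)). A minimal pure-Python sketch of that formula for illustration (use the Keras activation, tf.keras.activations.mish, in real models):

```python
import math

def softplus(x: float) -> float:
    # Numerically stable softplus: log(1 + exp(x)).
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def mish(x: float) -> float:
    # Mish activation: x * tanh(softplus(x)).
    # Smooth, non-monotonic, ~identity for large positive x.
    return x * math.tanh(softplus(x))
```

For example, mish(0.0) is exactly 0, and mish(10.0) is very close to 10, since tanh(softplus(x)) approaches 1 for large x.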
Security
- N/A
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstantin Beluchenko, Kulin Seth, K...
TensorFlow 2.13.0-rc0
Release 2.13.0
TensorFlow
Breaking Changes
- The LMDB kernels have been changed to return an error. This is in preparation for completely removing them from TensorFlow. The LMDB dependency that these kernels are bringing to TensorFlow has been dropped, thus making the build slightly faster and more secure.
Major Features and Improvements
-
tf.lite
- Add 16-bit and 64-bit float type support for built-in op
cast
. - The Python TF Lite Interpreter bindings now have an option
experimental_disable_delegate_clustering
to turn off delegate clustering. - Add int16x8 support for the built-in op
exp
- Add int16x8 support for the built-in op
mirror_pad
- Add int16x8 support for the built-in ops
space_to_batch_nd
andbatch_to_space_nd
- Add 16-bit int type support for built-in ops
less
,greater_than
,equal
- Add 8-bit and 16-bit support for
floor_div
andfloor_mod
. - Add 16-bit and 32-bit int support for the built-in op
bitcast
. - Add 8-bit/16-bit/32-bit int/uint support for the built-in op
bitwise_xor
- Add int16 indices support for built-in ops
gather
andgather_nd
. - Add 8-bit/16-bit/32-bit int/uint support for the built-in op
right_shift
- Add reference implementation for 16-bit int unquantized
add
. - Add reference implementation for 16-bit int and 32-bit unsigned int unquantized
mul
. - add_op
supports broadcasting up to 6 dimensions. - Add 16-bit support for
top_k
.
-
tf.function
- ConcreteFunction (
tf.types.experimental.ConcreteFunction
) as generated throughget_concrete_function
now performs holistic input validation similar to callingtf.function
directly. This can cause breakages where existing calls pass Tensors with the wrong shape or omit certain non-Tensor arguments (including default values).
-
tf.nn
tf.nn.embedding_lookup_sparse
andtf.nn.safe_embedding_lookup_sparse
now support ids and weights described bytf.RaggedTensor
s.- Added a new boolean argument
allow_fast_lookup
totf.nn.embedding_lookup_sparse
andtf.nn.safe_embedding_lookup_sparse
, which enables a simplified and typically faster lookup procedure.
-
tf.data
tf.data.Dataset.zip
now supports Python-style zipping, i.e.Dataset.zip(a, b, c)
.tf.data.Dataset.shuffle
now supports full shuffling. To specify that data should be fully shuffled, usedataset = dataset.shuffle(dataset.cardinality())
. This will load the full dataset into memory so that it can be shuffled; make sure to only use this with datasets of filenames or other small datasets.
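The full-shuffle note above follows from how Dataset.shuffle works: it draws samples from a fixed-size buffer, and only a buffer covering the whole dataset yields a full permutation. A rough pure-Python sketch of that buffered-shuffle idea (an illustration of the mechanism, not TF's actual implementation):

```python
import random
from typing import Iterable, Iterator, List

def buffered_shuffle(items: Iterable[int], buffer_size: int, seed: int = 0) -> Iterator[int]:
    # Maintain a buffer of up to `buffer_size` elements; once it is
    # full, yield a randomly chosen buffered element for each new one.
    # With buffer_size >= len(items) this degenerates into a full shuffle.
    rng = random.Random(seed)
    buf: List[int] = []
    for item in items:
        buf.append(item)
        if len(buf) > buffer_size:
            buf.pop()  # placeholder to keep type checkers happy; replaced below
            buf.append(item)
            yield buf.pop(rng.randrange(len(buf)))
    rng.shuffle(buf)
    yield from buf
```

With a small buffer the output is only locally shuffled; with `buffer_size` at least the dataset's cardinality, every ordering is reachable, which is exactly what `dataset.shuffle(dataset.cardinality())` requests.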
-
tf.math
tf.nn.top_k
now supports specifying the output index type via parameterindex_type
. Supported types aretf.int16
,tf.int32
(default), andtf.int64
.
-
tf.SavedModel
- Introduce class method
tf.saved_model.experimental.Fingerprint.from_proto(proto)
, which can be used to construct aFingerprint
object directly from a protobuf. - Introduce member method
tf.saved_model.experimental.Fingerprint.singleprint()
, which provides a convenient way to uniquely identify a SavedModel.
Bug Fixes and Other Changes
-
tf.Variable
- Changed resource variables to inherit from
tf.compat.v2.Variable
instead oftf.compat.v1.Variable
. Some checks for isinstance(v, tf.compat.v1.Variable)
that previously returned True may now return False.
-
tf.distribute
- Opened an experimental API,
tf.distribute.experimental.coordinator.get_current_worker_index
, for retrieving the worker index from within a worker, when using parameter server training with a custom training loop.
-
tf.experimental.dtensor
- Deprecated
dtensor.run_on
in favor ofdtensor.default_mesh
to correctly indicate that the context does not override the mesh that the ops and functions will run on; it only sets a fallback default mesh. - The lists of members of dtensor.Layout and dtensor.Mesh have changed slightly as part of efforts to consolidate the C++ and Python source code with pybind11. Most notably, Layout.serialized_string is removed.
- Minor API changes to represent Single Device Layout for non-distributed Tensors inside DTensor functions. Runtime support will be added soon.
-
tf.experimental.ExtensionType
tf.experimental.ExtensionType
now supports Pythontuple
as the type annotation of its fields.
-
tf.nest
- Deprecated API
tf.nest.is_sequence
has now been deleted. Please usetf.nest.is_nested
instead.
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Breaking Changes
-
tf.keras
- Removed the Keras scikit-learn API wrappers (
KerasClassifier
andKerasRegressor
), which had been deprecated in August 2021. We recommend using SciKeras instead. - The default Keras model saving format is now the Keras v3 format: calling
model.save("xyz.keras")
will no longer create an H5 file; it will create a native Keras model file. This will only be breaking for you if you were manually inspecting or modifying H5 files saved by Keras under a .keras
extension. If this breaks you, simply addsave_format="h5"
to your.save()
call to revert back to the prior behavior. - Added
keras.utils.TimedThread
utility to run a timed thread every x seconds. It can be used to run a threaded function alongside model training or any other snippet of code. - In the
keras
PyPI package, accessible symbols are now restricted to symbols that are intended to be public. This may affect your code if you were usingimport keras
and you usedkeras
functions that were not public APIs, but were accessible in earlier versions with direct imports. In those cases, please use the following guideline:- The API may be available in the public Keras API under a different name, so make sure to look for it on keras.io or TensorFlow docs and switch to the public version.
- It could also be a simple Python or TF utility that you could easily copy over to your own codebase. In those cases, just make it your own!
- If you believe it should definitely be a public Keras API, please open a feature request in the Keras GitHub repo.
- As a workaround, you could import the same private symbol from keras.src
, but keep in mind thesrc
namespace is not stable and those APIs may change or be removed in the future.
Major Features and Improvements
-
tf.keras
- Added F-Score metrics
tf.keras.metrics.FBetaScore
,tf.keras.metrics.F1Score
, andtf.keras.metrics.R2Score
. - Added activation function
tf.keras.activations.mish
. - Added experimental
keras.metrics.experimental.PyMetric
API for metrics that run Python code on the host CPU (compiled outside of the TensorFlow graph). This can be used for integrating metrics from external Python libraries (like sklearn or pycocotools) into Keras as first-class Keras metrics. - Added
tf.keras.optimizers.Lion
optimizer. - Added
tf.keras.layers.SpectralNormalization
layer wrapper to perform spectral normalization on the weights of a target layer. - The
SidecarEvaluatorModelExport
callback has been added to Keras askeras.callbacks.SidecarEvaluatorModelExport
. This callback allows for exporting the best-scoring model as evaluated by a SidecarEvaluator
evaluator. The evaluator regularly evaluates the model and exports it if the user-defined comparison function determines that it is an improvement. - Added warmup capabilities to
tf.keras.optimizers.schedules.CosineDecay
learning rate scheduler. You can now specify an initial and target learning rate, and our scheduler will perform a linear interpolation between the two, after which it will begin a decay phase. - Added experimental support for an exactly-once visitation guarantee for evaluating Keras models trained with
tf.distribute ParameterServerStrategy
, via theexact_evaluation_shards
argument inModel.fit
andModel.evaluate
. - Added
tf.keras.__internal__.KerasTensor
,tf.keras.__internal__.SparseKerasTensor
, andtf.keras.__internal__.RaggedKerasTensor
classes. You can use these classes to do instance type checking and type annotations for layer/model inputs and outputs. - All the
tf.keras.dtensor.experimental.optimizers
classes have been merged withtf.keras.optimizers
. You can migrate your code to usetf.keras.optimizers
directly. The API namespace fortf.keras.dtensor.experimental.optimizers
will be removed in future releases. - Added support for
class_weight
for 3+ dimensional targets (e.g. image segmentation masks) inModel.fit
. - Added a new loss,
keras.losses.CategoricalFocalCrossentropy
. - Removed
tf.keras.dtensor.experimental.layout_map_scope()
. You can use tf.keras.dtensor.experimental.LayoutMap.scope()
instead.
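The warmup behavior added to CosineDecay above (linear interpolation from an initial to a target learning rate, then cosine decay) can be sketched in plain Python. The parameter names below mirror the description, not necessarily the exact TF signature:

```python
import math

def warmup_cosine_lr(step, initial_lr, target_lr, warmup_steps, decay_steps, alpha=0.0):
    # Linear warmup: interpolate from initial_lr to target_lr.
    if step < warmup_steps:
        return initial_lr + (target_lr - initial_lr) * (step / warmup_steps)
    # Cosine decay: shrink from target_lr toward alpha * target_lr.
    progress = min((step - warmup_steps) / decay_steps, 1.0)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return target_lr * ((1.0 - alpha) * cosine + alpha)
```

At step 0 the rate equals the initial learning rate, at `warmup_steps` it reaches the target rate, and after a further `decay_steps` it has decayed to `alpha * target_lr`.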
Security
- N/A
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstanti...
TensorFlow 2.12.0
Release 2.12.0
TensorFlow
Breaking Changes
-
Build, Compilation and Packaging
- Removed redundant packages
tensorflow-gpu
andtf-nightly-gpu
. These packages were removed and replaced with packages that direct users to switch totensorflow
ortf-nightly
respectively. Since TensorFlow 2.1, the only difference between these two sets of packages was their names, so there is no loss of functionality or GPU support. See https://pypi.org/project/tensorflow-gpu for more details.
-
tf.function
:tf.function
now uses the Python inspect library directly for parsing the signature of the Python function it is decorated on. This change may break code where the function signature is malformed, but was ignored previously, such as:- Using
functools.wraps
on a function with different signature - Using
functools.partial
with an invalidtf.function
input
tf.function
now enforces input parameter names to be valid Python identifiers. Incompatible names are automatically sanitized similarly to existing SavedModel signature behavior.- Parameterless
tf.function
s are assumed to have an emptyinput_signature
instead of an undefined one even if theinput_signature
is unspecified. tf.types.experimental.TraceType
now requires an additionalplaceholder_value
method to be defined.tf.function
now traces with placeholder values generated by TraceType instead of the value itself.
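The tf.function signature changes above hinge on what Python's inspect library reports. A stdlib-only illustration of the two failure modes listed (functools.wraps with a mismatched wrapper, and functools.partial):

```python
import functools
import inspect

def model_fn(x, training=False):
    return x

def bad_wrapper(fn):
    @functools.wraps(fn)
    def wrapper(*args):  # accepts only positional args, unlike fn
        return fn(*args)
    return wrapper

wrapped = bad_wrapper(model_fn)
# functools.wraps sets __wrapped__, so inspect reports model_fn's
# signature even though wrapper itself cannot accept keyword arguments.
# This is the kind of mismatch the inspect-based parsing now surfaces.
print(inspect.signature(wrapped))  # (x, training=False)

# functools.partial rewrites the visible signature: the bound keyword
# becomes keyword-only. A partial that binds a nonexistent parameter
# is an example of invalid input under the new parsing.
partial_fn = functools.partial(model_fn, training=True)
print(inspect.signature(partial_fn))
```

Code that previously relied on the malformed signature being ignored should be updated so the signature inspect reports matches how the function is actually called.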
-
Experimental APIs
tf.config.experimental.enable_mlir_graph_optimization
andtf.config.experimental.disable_mlir_graph_optimization
were removed.
Major Features and Improvements
-
Support for Python 3.11 has been added.
-
Support for Python 3.7 has been removed. We are not releasing any more patches for Python 3.7.
-
tf.lite
:- Add 16-bit float type support for built-in op
fill
. - Transpose now supports 6D tensors.
- Float LSTM now supports diagonal recurrent tensors: https://arxiv.org/abs/1903.08023
-
tf.experimental.dtensor
:- Coordination service now works with
dtensor.initialize_accelerator_system
, and enabled by default. - Add
tf.experimental.dtensor.is_dtensor
to check if a tensor is a DTensor instance.
-
tf.data
:- Added support for alternative checkpointing protocol which makes it possible to checkpoint the state of the input pipeline without having to store the contents of internal buffers. The new functionality can be enabled through the
experimental_symbolic_checkpoint
option oftf.data.Options()
. - Added a new
rerandomize_each_iteration
argument for thetf.data.Dataset.random()
operation, which controls whether the sequence of generated random numbers should be re-randomized every epoch or not (the default behavior). Ifseed
is set andrerandomize_each_iteration=True
, therandom()
operation will produce a different (deterministic) sequence of numbers every epoch. - Added a new
rerandomize_each_iteration
argument for thetf.data.Dataset.sample_from_datasets()
operation, which controls whether the sequence of generated random numbers used for sampling should be re-randomized every epoch or not. Ifseed
is set andrerandomize_each_iteration=True
, thesample_from_datasets()
operation will use a different (deterministic) sequence of numbers every epoch.
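The per-epoch rerandomization described above can be pictured in plain Python: fold the epoch number into the base seed so each epoch gets a different but still deterministic sequence. This is an illustration of the semantics, not TF's internals:

```python
import random

def epoch_random_stream(seed, epoch, n, rerandomize_each_iteration=True):
    # With rerandomize_each_iteration=True, derive a distinct seed per
    # epoch so every epoch produces a different deterministic sequence;
    # otherwise every epoch replays the same sequence.
    effective_seed = seed * 1_000_003 + epoch if rerandomize_each_iteration else seed
    rng = random.Random(effective_seed)
    return [rng.random() for _ in range(n)]
```

Re-running the same (seed, epoch) pair reproduces the sequence exactly, while bumping the epoch changes it; with `rerandomize_each_iteration=False` all epochs share one sequence.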
-
tf.test
:- Added
tf.test.experimental.sync_devices
, which is useful for accurately measuring performance in benchmarks.
-
tf.experimental.dtensor
:- Added experimental support to ReduceScatter fuse on GPU (NCCL).
Bug Fixes and Other Changes
tf.SavedModel
:- Introduced new class
tf.saved_model.experimental.Fingerprint
that contains the fingerprint of the SavedModel. See the SavedModel Fingerprinting RFC for details. - Introduced API
tf.saved_model.experimental.read_fingerprint(export_dir)
for reading the fingerprint of a SavedModel.
tf.random
- Added non-experimental aliases for
tf.random.split
andtf.random.fold_in
, the experimental endpoints are still available so no code changes are necessary.
tf.experimental.ExtensionType
- Added function
experimental.extension_type.as_dict()
, which converts an instance oftf.experimental.ExtensionType
to adict
representation.
stream_executor
- Top level
stream_executor
directory has been deleted, users should use equivalent headers and targets undercompiler/xla/stream_executor
.
tf.nn
- Added
tf.nn.experimental.general_dropout
, which is similar totf.random.experimental.stateless_dropout
but accepts a custom sampler function.
tf.types.experimental.GenericFunction
- The
experimental_get_compiler_ir
method supports tf.TensorSpec compilation arguments.
tf.config.experimental.mlir_bridge_rollout
- Removed enums
MLIR_BRIDGE_ROLLOUT_SAFE_MODE_ENABLED
andMLIR_BRIDGE_ROLLOUT_SAFE_MODE_FALLBACK_ENABLED
which are no longer used by the tf2xla bridge
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Breaking Changes
tf.keras
:
- Moved all saving-related utilities to a new namespace,
keras.saving
, for example:keras.saving.load_model
,keras.saving.save_model
,keras.saving.custom_object_scope
,keras.saving.get_custom_objects
,keras.saving.register_keras_serializable
,keras.saving.get_registered_name
andkeras.saving.get_registered_object
. The previous API locations (inkeras.utils
andkeras.models
) will be available indefinitely, but we recommend you update your code to point to the new API locations. - Improvements and fixes in Keras loss masking:
- Whether you represent a ragged tensor as a
tf.RaggedTensor
or using Keras masking, the returned loss values should be identical. In previous versions Keras may have silently ignored the mask.
- If you use masked losses with Keras, the loss values may be different in TensorFlow
2.12
compared to previous versions. - In cases where the mask was previously ignored, you will now get an error if you pass a mask with an incompatible shape.
Major Features and Improvements
tf.keras
:
- The new Keras model saving format (
.keras
) is available. You can start using it viamodel.save(f"{fname}.keras", save_format="keras_v3")
. In the future it will become the default for all files with the.keras
extension. This file format targets the Python runtime only and makes it possible to reload Python objects identical to the saved originals. The format supports non-numerical state such as vocabulary files and lookup tables, and it is easy to customize in the case of custom layers with exotic elements of state (e.g. a FIFOQueue). The format does not rely on bytecode or pickling, and is safe by default. Note that as a result, Pythonlambdas
are disallowed at loading time. If you want to uselambdas
, you can passsafe_mode=False
to the loading method (only do this if you trust the source of the model). - Added a
model.export(filepath)
API to create a lightweight SavedModel artifact that can be used for inference (e.g. with TF-Serving). - Added
keras.export.ExportArchive
class for low-level customization of the process of exporting SavedModel artifacts for inference. Both ways of exporting models are based ontf.function
tracing and produce a TF program composed of TF ops. They are meant primarily for environments where the TF runtime is available, but not the Python interpreter, as is typical for production with TF Serving. - Added utility
tf.keras.utils.FeatureSpace
, a one-stop shop for structured data preprocessing and encoding. - Added
tf.SparseTensor
input support totf.keras.layers.Embedding
layer. The layer now accepts a new boolean argumentsparse
. Ifsparse
is set to True, the layer returns a SparseTensor instead of a dense Tensor. Defaults to False. - Added
jit_compile
as a settable property totf.keras.Model
. - Added
synchronized
optional parameter tolayers.BatchNormalization
. - Added deprecation warning to
layers.experimental.SyncBatchNormalization
and suggested using layers.BatchNormalization
withsynchronized=True
instead. - Updated
tf.keras.layers.BatchNormalization
to support masking of the inputs (mask
argument) when computing the mean and variance. - Add
tf.keras.layers.Identity
, a placeholder pass-through layer. - Add
show_trainable
option totf.keras.utils.model_to_dot
to display layer trainable status in model plots. - Add ability to save a
tf.keras.utils.FeatureSpace
object, viafeature_space.save("myfeaturespace.keras")
, and reload it viafeature_space = tf.keras.models.load_model("myfeaturespace.keras")
. - Added utility
tf.keras.utils.to_ordinal
to convert a class vector to an ordinal regression / classification matrix.
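The ordinal encoding produced by a utility like to_ordinal turns class k into a row whose first k entries are 1. A pure-Python sketch of that encoding, assuming the standard ordinal-regression layout with num_classes - 1 columns (the real utility's exact output shape and dtype may differ):

```python
def to_ordinal(labels, num_classes):
    # Standard ordinal encoding: column j is 1 iff the label exceeds j.
    # With num_classes=4, label 2 becomes [1, 1, 0].
    return [[1 if j < y else 0 for j in range(num_classes - 1)] for y in labels]
```

Unlike one-hot encoding, adjacent classes share prefix bits, which lets a model exploit the ordering among classes.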
Bug Fixes and Other Changes
- N/A
Security
- Fixes an FPE in TFLite in conv kernel CVE-2023-27579
- Fixes a double free in Fractional(Max/Avg)Pool CVE-2023-25801
- Fixes a null dereference on ParallelConcat with XLA CVE-2023-25676
- Fixes a segfault in Bincount with XLA CVE-2023-25675
- Fixes an NPE in RandomShuffle with XLA enabled CVE-2023-25674
- Fixes an FPE in TensorListSplit with XLA CVE-2023-25673
- Fixes segment...
TensorFlow 2.11.1
Release 2.11.1
Note: TensorFlow 2.10 was the last TensorFlow release that supported GPU on native Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin.
- Security vulnerability fixes will no longer be patched to this TensorFlow version. The latest TensorFlow version includes the security vulnerability fixes. You can update to the latest version (recommended) or patch security vulnerabilities yourself by following these steps. You can refer to the release notes of the latest TensorFlow version for a list of newly fixed vulnerabilities. If you have any questions, please create a GitHub issue to let us know.
This release also introduces several vulnerability fixes:
- Fixes an FPE in TFLite in conv kernel CVE-2023-27579
- Fixes a double free in Fractional(Max/Avg)Pool CVE-2023-25801
- Fixes a null dereference on ParallelConcat with XLA CVE-2023-25676
- Fixes a segfault in Bincount with XLA CVE-2023-25675
- Fixes an NPE in RandomShuffle with XLA enabled CVE-2023-25674
- Fixes an FPE in TensorListSplit with XLA CVE-2023-25673
- Fixes segmentation fault in tfg-translate CVE-2023-25671
- Fixes an NPE in QuantizedMatMulWithBiasAndDequantize CVE-2023-25670
- Fixes an FPE in AvgPoolGrad with XLA CVE-2023-25669
- Fixes a heap out-of-buffer read vulnerability in the QuantizeAndDequantize operation CVE-2023-25668
- Fixes a segfault when opening multiframe gif CVE-2023-25667
- Fixes an NPE in SparseSparseMaximum CVE-2023-25665
- Fixes an FPE in AudioSpectrogram CVE-2023-25666
- Fixes a heap-buffer-overflow in AvgPoolGrad CVE-2023-25664
- Fixes an NPE in TensorArrayConcatV2 CVE-2023-25663
- Fixes an integer overflow in EditDistance CVE-2023-25662
- Fixes a segfault in
tf.raw_ops.Print
CVE-2023-25660 - Fixes an OOB read in DynamicStitch CVE-2023-25659
- Fixes an OOB read in GRUBlockCellGrad CVE-2023-25658
TensorFlow 2.12.0-rc1
Release 2.12.0
Breaking Changes
-
Build, Compilation and Packaging
- Removal of redundant packages: the
tensorflow-gpu
andtf-nightly-gpu
packages have been effectively removed and replaced with packages that direct users to switch totensorflow
ortf-nightly
respectively. The naming difference was the only difference between the two sets of packages ever since TensorFlow 2.1, so there is no loss of functionality or GPU support. See https://pypi.org/project/tensorflow-gpu for more details.
-
tf.function
:tf.function
now uses the Python inspect library directly for parsing the signature of the Python function it is decorated on.- This can break certain cases that were previously ignored where the signature is malformed, such as:
- Using
functools.wraps
on a function with different signature - Using
functools.partial
with an invalidtf.function
input
tf.function
now enforces input parameter names to be valid Python identifiers. Incompatible names are automatically sanitized similarly to existing SavedModel signature behavior.- Parameterless
tf.function
s are assumed to have an emptyinput_signature
instead of an undefined one even if theinput_signature
is unspecified. tf.types.experimental.TraceType
now requires an additionalplaceholder_value
method to be defined.tf.function
now traces with placeholder values generated by TraceType instead of the value itself.
-
Experimental APIs
tf.config.experimental.enable_mlir_graph_optimization
andtf.config.experimental.disable_mlir_graph_optimization
were removed. -
tf.keras
:- Moved all saving-related utilities to a new namespace,
keras.saving
, i.e.keras.saving.load_model
,keras.saving.save_model
,keras.saving.custom_object_scope
,keras.saving.get_custom_objects
,keras.saving.register_keras_serializable
,keras.saving.get_registered_name
andkeras.saving.get_registered_object
. The previous API locations (inkeras.utils
andkeras.models
) will stay available indefinitely, but we recommend that you update your code to point to the new API locations. - Improvements and fixes in Keras loss masking:
- Whether you represent a ragged tensor as a
tf.RaggedTensor
or using Keras masking, the returned loss values should be identical. In previous versions Keras may have silently ignored the mask. - If you use masked losses with Keras, the loss values may be different in TensorFlow
2.12
compared to previous versions. - In cases where the mask was previously ignored, you will now get an error if you pass a mask with an incompatible shape.
Major Features and Improvements
-
tf.lite
:- Add 16-bit float type support for built-in op
fill
. - Transpose now supports 6D tensors.
- Float LSTM now supports diagonal recurrent tensors: https://arxiv.org/abs/1903.08023
-
tf.keras
:- The new Keras model saving format (
.keras
) is available. You can start using it viamodel.save(f"{fname}.keras", save_format="keras_v3")
. In the future it will become the default for all files with the.keras
extension. This file format targets the Python runtime only and makes it possible to reload Python objects identical to the saved originals. The format supports non-numerical state such as vocabulary files and lookup tables, and it is easy to customize in the case of custom layers with exotic elements of state (e.g. a FIFOQueue). The format does not rely on bytecode or pickling, and is safe by default. Note that as a result, Pythonlambdas
are disallowed at loading time. If you want to uselambdas
, you can passsafe_mode=False
to the loading method (only do this if you trust the source of the model). - Added a
model.export(filepath)
API to create a lightweight SavedModel artifact that can be used for inference (e.g. with TF-Serving). - Added
keras.export.ExportArchive
class for low-level customization of the process of exporting SavedModel artifacts for inference. Both ways of exporting models are based ontf.function
tracing and produce a TF program composed of TF ops. They are meant primarily for environments where the TF runtime is available, but not the Python interpreter, as is typical for production with TF Serving. - Added utility
tf.keras.utils.FeatureSpace
, a one-stop shop for structured data preprocessing and encoding. - Added
tf.SparseTensor
input support totf.keras.layers.Embedding
layer. The layer now accepts a new boolean argumentsparse
. Ifsparse
is set to True, the layer returns a SparseTensor instead of a dense Tensor. Defaults to False. - Added
jit_compile
as a settable property totf.keras.Model
. - Added
synchronized
optional parameter tolayers.BatchNormalization
. - Added deprecation warning to
layers.experimental.SyncBatchNormalization
and suggested using layers.BatchNormalization
withsynchronized=True
instead. - Updated
tf.keras.layers.BatchNormalization
to support masking of the inputs (mask
argument) when computing the mean and variance. - Add
tf.keras.layers.Identity
, a placeholder pass-through layer. - Add
show_trainable
option totf.keras.utils.model_to_dot
to display layer trainable status in model plots. - Add ability to save a
tf.keras.utils.FeatureSpace
object, viafeature_space.save("myfeaturespace.keras")
, and reload it viafeature_space = tf.keras.models.load_model("myfeaturespace.keras")
. - Added utility
tf.keras.utils.to_ordinal
to convert a class vector to an ordinal regression / classification matrix.
-
tf.experimental.dtensor
:- Coordination service now works with
dtensor.initialize_accelerator_system
, and enabled by default. - Add
tf.experimental.dtensor.is_dtensor
to check if a tensor is a DTensor instance.
-
tf.data
:- Added support for alternative checkpointing protocol which makes it possible to checkpoint the state of the input pipeline without having to store the contents of internal buffers. The new functionality can be enabled through the
experimental_symbolic_checkpoint
option oftf.data.Options()
. - Added a new
rerandomize_each_iteration
argument for thetf.data.Dataset.random()
operation, which controls whether the sequence of generated random numbers should be re-randomized every epoch or not (the default behavior). Ifseed
is set andrerandomize_each_iteration=True
, therandom()
operation will produce a different (deterministic) sequence of numbers every epoch. - Added a new
rerandomize_each_iteration
argument for thetf.data.Dataset.sample_from_datasets()
operation, which controls whether the sequence of generated random numbers used for sampling should be re-randomized every epoch or not. Ifseed
is set andrerandomize_each_iteration=True
, thesample_from_datasets()
operation will use a different (deterministic) sequence of numbers every epoch.
-
tf.test
:- Added
tf.test.experimental.sync_devices
, which is useful for accurately measuring performance in benchmarks.
-
tf.experimental.dtensor
:- Added experimental support to ReduceScatter fuse on GPU (NCCL).
Bug Fixes and Other Changes
tf.SavedModel
:- Introduced new class
tf.saved_model.experimental.Fingerprint
that contains the fingerprint of the SavedModel. See the SavedModel Fingerprinting RFC for details. - Introduced API
tf.saved_model.experimental.read_fingerprint(export_dir)
for reading the fingerprint of a SavedModel.
tf.random
- Added non-experimental aliases for
tf.random.split
andtf.random.fold_in
, the experimental endpoints are still available so no code changes are necessary.
tf.experimental.ExtensionType
- Added function
experimental.extension_type.as_dict()
, which converts an instance oftf.experimental.ExtensionType
to adict
representation.
stream_executor
- Top level
stream_executor
directory has been deleted, users should use equivalent headers and targets undercompiler/xla/stream_executor
.
tf.nn
- Added
tf.nn.experimental.general_dropout
, which is similar totf.random.experimental.stateless_dropout
but accepts a custom sampler function.
tf.types.experimental.GenericFunction
- The
experimental_get_compiler_ir
method supports tf.TensorSpec compilation arguments.
tf.config.experimental.mlir_bridge_rollout
- Removed enums
MLIR_BRIDGE_ROLLOUT_SAFE_MODE_ENABLED
andMLIR_BRIDGE_ROLLOUT_SAFE_MODE_FALLBACK_ENABLED
which are no longer used by the tf2xla bridge
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstantin Beluchenko, Kulin Seth, Kun Lu, Kyle Gerard Felker, Leopold Cambier, Lianmin Zheng, linlifan, liuyuanqiang, Lukas Geiger, Luke Hutton, Mahmoud Abuzaina, Manas Mohanty, Mateo Fidabel, Maxiwell S. Garcia, Mayank Raunak, mdfaijul, meatybobby, Meenakshi Venkataraman, Michael Holman, Nathan John Sircombe, Nathan Luehr, n...
TensorFlow 2.12.0-rc0
Release 2.12.0
Breaking Changes
- Build, Compilation and Packaging
  - Removal of redundant packages: the `tensorflow-gpu` and `tf-nightly-gpu` packages have been effectively removed and replaced with packages that direct users to switch to `tensorflow` or `tf-nightly`, respectively. The naming difference was the only difference between the two sets of packages ever since TensorFlow 2.1, so there is no loss of functionality or GPU support. See https://pypi.org/project/tensorflow-gpu for more details.
- `tf.function`:
  - `tf.function` now uses the Python `inspect` library directly to parse the signature of the Python function it decorates. This can break certain previously ignored cases where the signature is malformed, such as:
    - Using `functools.wraps` on a function with a different signature
    - Using `functools.partial` with an invalid `tf.function` input
  - `tf.function` now enforces that input parameter names are valid Python identifiers. Incompatible names are automatically sanitized, similarly to existing SavedModel signature behavior.
  - Parameterless `tf.function`s are assumed to have an empty `input_signature` instead of an undefined one, even if the `input_signature` is unspecified.
  - `tf.types.experimental.TraceType` now requires an additional `placeholder_value` method to be defined.
  - `tf.function` now traces with placeholder values generated by the TraceType instead of the value itself.
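Because `tf.function` now relies on the standard `inspect` library, the `functools.wraps` pitfall above can be reproduced without TensorFlow at all. This stdlib-only sketch shows how `inspect` reports the wrapped function's signature rather than the wrapper's real parameters, which is exactly the kind of mismatch the new signature parsing surfaces:

```python
import functools
import inspect

def original(x, y):
    return x + y

@functools.wraps(original)
def wrapper(*args):
    # The real parameter list is (*args,), but functools.wraps copies
    # original's metadata (including __wrapped__) onto wrapper.
    return original(*args)

# inspect follows __wrapped__, so the reported signature is (x, y),
# which no longer matches wrapper's actual parameters.
print(inspect.signature(wrapper))        # (x, y)
print(wrapper.__code__.co_varnames[:1])  # ('args',) -- the real parameter
```

A decorator whose advertised signature diverges from its real one like this is one of the malformed-signature cases that can now be rejected.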
- Experimental APIs
  - `tf.config.experimental.enable_mlir_graph_optimization` and `tf.config.experimental.disable_mlir_graph_optimization` were removed.
- `tf.keras`:
  - Moved all saving-related utilities to a new namespace, `keras.saving`, i.e. `keras.saving.load_model`, `keras.saving.save_model`, `keras.saving.custom_object_scope`, `keras.saving.get_custom_objects`, `keras.saving.register_keras_serializable`, `keras.saving.get_registered_name` and `keras.saving.get_registered_object`. The previous API locations (in `keras.utils` and `keras.models`) will stay available indefinitely, but we recommend that you update your code to point to the new API locations.
  - Improvements and fixes in Keras loss masking:
    - Whether you represent a ragged tensor as a `tf.RaggedTensor` or use Keras masking, the returned loss values should be identical. In previous versions Keras may have silently ignored the mask.
    - If you use masked losses with Keras, the loss values may differ in TensorFlow 2.12 compared to previous versions.
    - In cases where the mask was previously ignored, you will now get an error if you pass a mask with an incompatible shape.
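The masking fix can be illustrated with a plain-Python sketch (not Keras code; the function name `masked_mean_loss` is hypothetical, for illustration only). A masked mean averages only over unmasked positions, whereas the old silently-ignored-mask behavior amounted to averaging over everything, including padding:

```python
def masked_mean_loss(losses, mask):
    """Average per-element losses over unmasked positions only.

    losses: list of per-element float loss values
    mask:   list of bools, True where the element should count
    """
    kept = [loss for loss, keep in zip(losses, mask) if keep]
    if not kept:
        raise ValueError("mask excludes every element")
    return sum(kept) / len(kept)

losses = [1.0, 3.0, 100.0]   # last element is padding
mask   = [True, True, False]

print(masked_mean_loss(losses, mask))  # 2.0 -- padding excluded
print(sum(losses) / len(losses))       # what silently ignoring the mask gives
```

In 2.12 the masked path is honored consistently, which is why loss values can change relative to previous versions.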
- `tf.SavedModel`:
  - Introduced a new class, `tf.saved_model.experimental.Fingerprint`, that contains the fingerprint of the SavedModel. See the SavedModel Fingerprinting RFC for details.
  - Introduced the API `tf.saved_model.experimental.read_fingerprint(export_dir)` for reading the fingerprint of a SavedModel.
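As a rough, stdlib-only analogy for what a fingerprint buys you (this is emphatically not TensorFlow's algorithm, which is defined in the SavedModel Fingerprinting RFC; `directory_digest` is a hypothetical helper): a deterministic digest over an artifact's files lets you check whether two exported directories contain the same model without comparing them byte by byte yourself.

```python
import hashlib
import os

def directory_digest(export_dir):
    """Deterministic hex digest over a directory's file names and contents.

    A loose illustration of artifact fingerprinting; NOT the SavedModel
    fingerprint algorithm.
    """
    h = hashlib.sha256()
    for root, _, files in sorted(os.walk(export_dir)):
        for name in sorted(files):  # fixed order -> stable digest
            path = os.path.join(root, name)
            h.update(os.path.relpath(path, export_dir).encode())
            with open(path, "rb") as f:
                h.update(f.read())
    return h.hexdigest()
```

The real `read_fingerprint` API returns a structured `Fingerprint` object rather than a single hash, but the stability property (same artifact in, same value out) is the same idea.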
Major Features and Improvements
- `tf.lite`:
  - Added 16-bit float type support for the built-in op `fill`.
  - Transpose now supports 6D tensors.
  - Float LSTM now supports diagonal recurrent tensors: https://arxiv.org/abs/1903.08023
- `tf.keras`:
  - The new Keras model saving format (`.keras`) is available. You can start using it via `model.save(f"{fname}.keras", save_format="keras_v3")`. In the future it will become the default for all files with the `.keras` extension. This file format targets the Python runtime only and makes it possible to reload Python objects identical to the saved originals. The format supports non-numerical state such as vocabulary files and lookup tables, and it is easy to customize in the case of custom layers with exotic elements of state (e.g. a FIFOQueue). The format does not rely on bytecode or pickling, and is safe by default. Note that, as a result, Python `lambdas` are disallowed at loading time. If you want to use `lambdas`, you can pass `safe_mode=False` to the loading method (only do this if you trust the source of the model).
  - Added a `model.export(filepath)` API to create a lightweight SavedModel artifact that can be used for inference (e.g. with TF-Serving).
  - Added the `keras.export.ExportArchive` class for low-level customization of the process of exporting SavedModel artifacts for inference. Both ways of exporting models are based on `tf.function` tracing and produce a TF program composed of TF ops. They are meant primarily for environments where the TF runtime is available but the Python interpreter is not, as is typical for production with TF Serving.
  - Added the utility `tf.keras.utils.FeatureSpace`, a one-stop shop for structured data preprocessing and encoding.
  - Added `tf.SparseTensor` input support to the `tf.keras.layers.Embedding` layer. The layer now accepts a new boolean argument `sparse`. If `sparse` is set to `True`, the layer returns a SparseTensor instead of a dense Tensor. Defaults to `False`.
  - Added `jit_compile` as a settable property on `tf.keras.Model`.
  - Added a `synchronized` optional parameter to `layers.BatchNormalization`.
  - Added a deprecation warning to `layers.experimental.SyncBatchNormalization`, suggesting the use of `layers.BatchNormalization` with `synchronized=True` instead.
  - Updated `tf.keras.layers.BatchNormalization` to support masking of the inputs (`mask` argument) when computing the mean and variance.
  - Added `tf.keras.layers.Identity`, a placeholder pass-through layer.
  - Added a `show_trainable` option to `tf.keras.utils.model_to_dot` to display layer trainable status in model plots.
  - Added the ability to save a `tf.keras.utils.FeatureSpace` object via `feature_space.save("myfeaturespace.keras")`, and reload it via `feature_space = tf.keras.models.load_model("myfeaturespace.keras")`.
  - Added the utility `tf.keras.utils.to_ordinal` to convert a class vector to an ordinal regression/classification matrix.
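The shape of an ordinal encoding like the one `tf.keras.utils.to_ordinal` produces can be sketched in plain Python. This mirrors the common ordinal-regression convention, where column `j` answers "is the label greater than `j`?"; it is an illustration of the encoding, not the Keras source:

```python
def to_ordinal(labels, num_classes):
    """Ordinal encoding sketch: column j is 1 when label > j.

    Produces a (len(labels), num_classes - 1) matrix of 0/1 ints,
    following the usual cumulative ordinal-regression convention.
    """
    return [[1 if y > j else 0 for j in range(num_classes - 1)]
            for y in labels]

# Each row turns "on" one more column as the class index grows,
# which is what lets a model treat classes as ordered.
print(to_ordinal([0, 1, 3], num_classes=4))
# [[0, 0, 0], [1, 0, 0], [1, 1, 1]]
```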
- `tf.experimental.dtensor`:
  - The coordination service now works with `dtensor.initialize_accelerator_system` and is enabled by default.
  - Added `tf.experimental.dtensor.is_dtensor` to check whether a tensor is a DTensor instance.
- `tf.data`:
  - Added support for an alternative checkpointing protocol, which makes it possible to checkpoint the state of the input pipeline without having to store the contents of internal buffers. The new functionality can be enabled through the `experimental_symbolic_checkpoint` option of `tf.data.Options()`.
  - Added a new `rerandomize_each_iteration` argument for the `tf.data.Dataset.random()` operation, which controls whether the sequence of generated random numbers should be re-randomized every epoch or not (the default behavior). If `seed` is set and `rerandomize_each_iteration=True`, the `random()` operation will produce a different (deterministic) sequence of numbers every epoch.
  - Added a new `rerandomize_each_iteration` argument for the `tf.data.Dataset.sample_from_datasets()` operation, which controls whether the sequence of generated random numbers used for sampling should be re-randomized every epoch or not. If `seed` is set and `rerandomize_each_iteration=True`, the `sample_from_datasets()` operation will use a different (deterministic) sequence of numbers every epoch.
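The `rerandomize_each_iteration=True` semantics can be sketched in plain Python (no `tf.data`; `epoch_stream` is a hypothetical helper): deriving a per-epoch stream from the seed and the epoch index gives a sequence that differs across epochs yet stays fully deterministic given `(seed, epoch)`.

```python
import random

def epoch_stream(seed, epoch, n):
    """Deterministic random stream derived from (seed, epoch).

    Sketches rerandomize_each_iteration=True: a fixed seed still yields
    a different, reproducible sequence each epoch. Not tf.data code.
    """
    # Mix the epoch into the seed so every epoch gets its own stream.
    rng = random.Random(seed * 1_000_003 + epoch)
    return [rng.random() for _ in range(n)]

seed = 42
assert epoch_stream(seed, 0, 3) != epoch_stream(seed, 1, 3)  # differs per epoch
assert epoch_stream(seed, 0, 3) == epoch_stream(seed, 0, 3)  # but reproducible
```

With `rerandomize_each_iteration=False` (the default), every epoch would replay the `epoch=0` stream.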
- `tf.test`:
  - Added `tf.test.experimental.sync_devices`, which is useful for accurately measuring performance in benchmarks.
- `tf.experimental.dtensor`:
  - Added experimental support for ReduceScatter fusion on GPU (NCCL).
Bug Fixes and Other Changes
- `tf.random`
  - Added non-experimental aliases for `tf.random.split` and `tf.random.fold_in`; the experimental endpoints are still available, so no code changes are necessary.
- `tf.experimental.ExtensionType`
  - Added function `experimental.extension_type.as_dict()`, which converts an instance of `tf.experimental.ExtensionType` to a `dict` representation.
- `stream_executor`
  - The top-level `stream_executor` directory has been deleted; users should use the equivalent headers and targets under `compiler/xla/stream_executor`.
- `tf.nn`
  - Added `tf.nn.experimental.general_dropout`, which is similar to `tf.random.experimental.stateless_dropout` but accepts a custom sampler function.
- `tf.types.experimental.GenericFunction`
  - The `experimental_get_compiler_ir` method now supports `tf.TensorSpec` compilation arguments.
- `tf.config.experimental.mlir_bridge_rollout`
  - Removed enums `MLIR_BRIDGE_ROLLOUT_SAFE_MODE_ENABLED` and `MLIR_BRIDGE_ROLLOUT_SAFE_MODE_FALLBACK_ENABLED`, which are no longer used by the tf2xla bridge.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstantin Beluchenko, Kulin Seth, Kun Lu, Kyle Gerard Felker, Leopold Cambier, Lianmin Zheng, linlifan, liuyuanqiang, Lukas Geiger, Luke Hutton, Mahmoud Abuzaina, Manas Mohanty, Mateo Fidabel, Maxiwell S. Garcia, Mayank Raunak, mdfaijul, meatybobby, Meenakshi Venkataraman, Michael Holman, Nathan John Sircombe, Nathan Lueh...