
Releases: tensorflow/similarity

V0.18

11 Sep 23:23
4709b26

This is the last version before 1.0 and encompasses all the patches made over the last year to stabilize and improve TensorFlow Similarity.

New key features

  • New in-memory zero-dependency backend as default - TF Similarity now uses an exact KNN search written entirely in TensorFlow operations as the default backend: it is fast below ~1M points, has zero dependencies, and supports realtime clustering [Elie, Owen, Ali]
  • New backend abstraction: Indexing/Search has been rewritten to allow better backend integration [Ali]
  • FAISS support as backend [Ali]
  • Redis support as backend [Ali]
  • New data sampler that supports the full Keras API [Owen]
  • Lifted Structured Loss added [Lorenzobattistela]
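The default in-memory backend described above can be sketched as a brute-force exact KNN written entirely in TensorFlow ops. This is a minimal illustration of the idea, not TF Similarity's actual implementation; the function name and the choice of cosine distance are assumptions.

```python
import tensorflow as tf

def exact_knn_cosine(queries, index, k=3):
    # L2-normalize so that 1 - dot product equals the cosine distance.
    q = tf.math.l2_normalize(queries, axis=1)
    x = tf.math.l2_normalize(index, axis=1)
    # Pairwise cosine distances, shape (num_queries, num_indexed).
    dists = 1.0 - tf.matmul(q, x, transpose_b=True)
    # top_k returns the largest values, so negate distances to get the nearest.
    top = tf.math.top_k(-dists, k=k)
    return -top.values, top.indices
```

Because everything is expressed as TensorFlow ops, the search runs on whatever device the tensors live on and requires no external index library.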

Improvements

  • New serialization that leverages improvements in Keras serialization [Owen]

0.17 - More Loss Functions and Metrics, Improved API, Performance Improvements, and Bug Fixes

19 Mar 19:34
17ec76d

Community contributors

This release would not have been possible without our community contributors.

Added

  • Contrastive model now provides a create_contrastive_model() function that provides default projector and predictor MLP models.
  • Full support for all nmslib parameters in the nmslib search class. Models now correctly serialize and load custom nmslib configurations.
  • Support for MultiShotFileSample to enable passing in a data loader function to read examples from disk.
  • Precision@K to the retrieval metrics. See section 3.2 of https://arxiv.org/pdf/2003.08505.pdf
  • Support for retrieval metrics in the metrics callback.
  • Support for clipping the retrieval metric at R per class. This masks out any values past the R expected neighbors.
  • Added VicReg Loss to contrastive losses.
  • Add support for passing the same examples for both the query and indexed set when calling retrieval_metrics. Added a new param to each retrieval_metric that enables dropping the nearest neighbor. This is useful if the nearest neighbor exists in the indexed examples.
  • New notebooks for transfer learning using the new CLIP embeddings, plus examples for using the retrieval metrics.
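The Precision@K metric added above can be sketched as follows. This is a minimal NumPy illustration of the definition in section 3.2 of the referenced paper, not the library's implementation; the function name and argument layout are hypothetical.

```python
import numpy as np

def precision_at_k(query_labels, neighbor_labels, k):
    # neighbor_labels[i, j] is the label of query i's j-th nearest neighbor,
    # sorted by increasing distance. A neighbor "matches" if it shares the
    # query's label; Precision@K averages the match rate over the top K.
    matches = neighbor_labels[:, :k] == query_labels[:, None]
    return matches.mean()
```

For example, with two queries whose top-2 neighbors contain 2 and 1 correct labels respectively, Precision@2 is (2 + 1) / 4 = 0.75.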

Changed

  • Contrastive model now saves and loads using the standard Keras methods.
  • Removed references to multiple output heads in Contrastive model.
  • Update all typing to use standard collections (see pep 585).
  • Started refactoring tests to improve coverage and use the TF TestCase.
  • Architectures now use syncbatchnorm instead of batchnorm to support distributed training.
  • Refactored image augmentation utilities.

Fixed

  • Contrastive model loading verified to work correctly with TF 2.8 to 2.11
  • SimClr now works correctly with distributed training.
  • Add better support for mixed precision and float16 dtype policies, and cast from local tensor dtypes when possible.
  • Sped up tests and added support for testing Python 3.8 and 3.10 across TF 2.7 and 2.11.
  • Warmup Cosine LR Schedule now correctly reaches the max LR level and the cosine decay begins after reaching the max value.
  • Other small fixes and improvements
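The corrected warmup cosine behavior described above (linear warmup that actually reaches the max LR, followed by cosine decay) can be sketched as a plain function. This is an illustration of the schedule shape, not the library's API; all names and parameters are assumptions.

```python
import math

def warmup_cosine_lr(step, max_lr, warmup_steps, total_steps, min_lr=0.0):
    # Linear warmup: ramps from max_lr/warmup_steps up to exactly max_lr.
    if step < warmup_steps:
        return max_lr * (step + 1) / warmup_steps
    # Cosine decay: starts at max_lr (cos(0) = 1) and ends at min_lr.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

Note that the decay phase begins only after the peak is reached, which is exactly the behavior the fix restores.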

0.16: optimization focus release

27 May 21:10
953d96e

Added

  • Cross-batch memory (XBM). Thanks @chjort
  • VicReg Loss - Improvement of Barlow Twins. Thanks @dewball345
  • Add augmenter function for Barlow Twins. Thanks @dewball345
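The VicReg loss added above combines three terms: invariance (MSE between the two views), variance (a hinge keeping per-dimension std above 1), and covariance (penalizing off-diagonal correlations). A NumPy sketch of the formulation in Bardes et al., with the paper's default coefficients; this is an illustration, not the library's implementation.

```python
import numpy as np

def vicreg_loss(za, zb, sim_w=25.0, var_w=25.0, cov_w=1.0):
    n, d = za.shape
    # Invariance: the two views of each example should embed close together.
    inv = np.mean((za - zb) ** 2)

    def var_term(z):
        # Hinge on the per-dimension std: penalize dimensions that collapse.
        std = np.sqrt(z.var(axis=0) + 1e-4)
        return np.mean(np.maximum(0.0, 1.0 - std))

    def cov_term(z):
        # Penalize off-diagonal covariance to decorrelate dimensions.
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off = cov - np.diag(np.diag(cov))
        return (off ** 2).sum() / d

    return (sim_w * inv
            + var_w * (var_term(za) + var_term(zb))
            + cov_w * (cov_term(za) + cov_term(zb)))
```

With identical, high-variance, decorrelated views, all three terms vanish and the loss is zero.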

Changed

  • Simplified MetricEmbedding layer. Function tracing and serialization are better supported now.
  • Refactor image augmentation modules into separate utils modules to help make them more composable. Thanks @dewball345
  • The GeneralizedMeanPooling layers' default value for P is now 3.0. This better aligns with the value in the paper.
  • EvalCallback now supports split validation callback. Thanks @abhisharsinha
  • Distance and losses refactor. Refactor distances call signature to now accept query and key inputs instead of a single set of embeddings or labels. Thanks @chjort
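The refactored distance call signature described above (separate query and key inputs instead of a single set of embeddings) can be illustrated with a cosine-distance sketch. This is not the library's code; letting key default to query covers the original single-embedding case.

```python
import numpy as np

def cosine_distance(query, key=None):
    # key defaults to query, reproducing the old self-similarity behavior.
    if key is None:
        key = query
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    k = key / np.linalg.norm(key, axis=1, keepdims=True)
    # Pairwise cosine distance, shape (num_queries, num_keys).
    return 1.0 - q @ k.T
```

Separating the two inputs lets losses like XBM compare a query batch against a larger memory bank of keys.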

Fixed

  • Fix TFRecordDatasetSampler to ensure the correct number of examples per class per batch. Deterministic is now set to True and we have updated the docstring to call out the requirements for the tf record files.
  • Removed unneeded tf.function and register_keras_serializable decorators.
  • Refactored the model index attribute to raise a more informative AttributeError if the index does not exist.
  • Freeze all BatchNormalization layers in architectures when loading weights.
  • Fix bug in losses.utils.LogSumExp(). tf.math.log(1 + x) should be tf.math.log(tf.math.exp(-my_max) + x). This is needed to properly account for removing the row-wise max before computing the logsumexp.
  • Fix multisim loss offsets. The tfsim version of multisim uses distances instead of the inner product. However, multisim requires that we "center" the pairwise distances around 0. Here we add a new center param, which we set to 1.0 for cosine distance. Additionally, we also flip the lambda (lmda) param to add the threshold to the values instead of subtracting it. These changes will help improve the pos and neg weighting in the log1psumexp.
  • Fix nmslib save and load. nmslib requires a string path and will only read and write to local files. In order to support writing to a remote file path, we first write to a local tmp dir and then write that to the user provided path using tf.io.gfile.GFile.
  • Fix serialization of SimCLR params in get_config()
  • Other fixes and improvements...
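The LogSumExp fix above follows from the identity log(1 + Σᵢ exp(vᵢ)) = m + log(exp(-m) + Σᵢ exp(vᵢ - m)), where m is the row-wise max: once m is subtracted inside the sum, the constant 1 must become exp(-m), which is what the fix restores. A NumPy sketch of the corrected, numerically stable form (names are illustrative, not the library's):

```python
import numpy as np

def log1psumexp(v):
    # Subtract the row-wise max so the exponentials cannot overflow, then
    # add it back outside the log. Note exp(-m), not 1, inside the log:
    # using log(1 + ...) here was the bug described above.
    m = v.max(axis=-1, keepdims=True)
    stable = m + np.log(np.exp(-m) + np.exp(v - m).sum(axis=-1, keepdims=True))
    return stable.squeeze(-1)
```

For small inputs this matches the naive log(1 + sum(exp(v))) exactly, while for large inputs the naive form overflows and this one stays finite.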

v0.15 Self-Supervised training support

21 Jan 19:28

This release adds support for self-supervised training.

Added

  • Refactored Augmenters to be a class
  • Added SimCLRAugmenter
  • Added SimSiamLoss()
  • Added SimCLR Loss
  • Added ContrastiveModel() for self-supervised training
  • Added encoder_dev() metrics, as suggested in the SimSiam paper, to detect collapsing representations
  • Added visualize_views() to see views side by side
  • Added self-supervised hello world narrated notebook
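The collapse check that the encoder_dev() metrics provide can be sketched as the mean per-dimension std of the L2-normalized embeddings: values near zero indicate the encoder is collapsing to a constant vector, while values near 1/√d are healthy. This is an illustration of the idea from the SimSiam paper, not the library's implementation.

```python
import numpy as np

def embedding_std(z):
    # L2-normalize each embedding, then average the per-dimension std
    # across the batch. Zero means every input maps to the same vector.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return z.std(axis=0).mean()
```

Logging this value during self-supervised training gives an early warning well before the loss itself reveals a degenerate solution.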

Changed

  • Removed the augmentation argument from architectures, as it could lead to issues when saving the model or training on TPU.
  • Removed RandAugment, which is not used directly by the package and causes issues with TF 2.8+

0.14

09 Oct 00:40
9803dc3
Pre-release
  • Added Samplers.* IO notebook detailing how to efficiently sample your data for successful training.
  • Various speed improvements and post initial release bug fixes.

Initial public release

13 Sep 19:03
ff1e289
Pre-release

Initial public release. See the release notes for more details.