
0.1.0

@jeandut jeandut released this 09 Jan 14:57
· 2 commits to main since this release
c658e54

This new release of FLamby welcomes @xavier-owkin and @afilt as new contributors and introduces the following changes, which mainly target the Fed-Camelyon16 benchmark, although some new features have a global impact across datasets.

We:

  • remove the legacy Camelyon16 histolab tiling, as its quality is subpar (it remains accessible from previous releases).
  • leveraging the improved tiling as well as a bug fix (increasing max-tiles to 10,000), re-run the Fed-Camelyon16 benchmark without changing hyper-parameters and obtain superior results:

Updated results:

(screenshot of the updated Fed-Camelyon16 benchmark results table)

Legacy results:

(screenshot of the legacy Fed-Camelyon16 benchmark results table)

In addition, we add experimental support for the recent phikon feature extractor, pretrained with self-supervised learning on histology slides.

Phikon preliminary results
(screenshot of the preliminary Phikon results table)

The complete list of changes included in this release is available below:

Major

Global

  • #296 gets rid of the change of random state occurring when evaluating a model, which made results depend on the chosen evaluation frequency (see this pytorch discussion) (by @xavier-owkin)
  • #298 removes Python 3.7 support, as installing it and its associated libraries on the CI takes over an hour, probably due to the deprecation of regular mirrors (by @xavier-owkin)
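The random-state issue fixed in #296 can be illustrated with a dependency-free sketch (using Python's `random` module as a stand-in for PyTorch's global RNG; the function names below are hypothetical, not FLamby's code). When evaluation inadvertently consumes the global RNG, the training randomness that follows depends on how often you evaluate; snapshotting and restoring the state around evaluation removes that dependence:

```python
import random

def evaluate(n_batches: int) -> None:
    # Evaluation that (inadvertently) draws from the global RNG,
    # e.g. through data shuffling or augmentation.
    for _ in range(n_batches):
        random.random()

def next_training_draw(do_eval: bool) -> float:
    """First post-evaluation training draw, without any isolation."""
    random.seed(42)            # identical starting point
    if do_eval:
        evaluate(n_batches=3)  # perturbs the global state
    return random.random()     # training randomness now differs

def next_training_draw_isolated(do_eval: bool) -> float:
    """Same, but evaluation is wrapped in a snapshot/restore guard."""
    random.seed(42)
    if do_eval:
        state = random.getstate()  # snapshot the global RNG state
        evaluate(n_batches=3)
        random.setstate(state)     # restore: evaluation leaves no trace
    return random.random()

# Without isolation, training results depend on evaluation frequency:
assert next_training_draw(False) != next_training_draw(True)
# With the guard, they no longer do:
assert next_training_draw_isolated(False) == next_training_draw_isolated(True)
```

In PyTorch itself, `torch.random.fork_rng()` provides the same snapshot/restore guarantee as a context manager.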

Fed-Camelyon16

  • FedCamelyon16 tiling is much improved by tweaking histolab's defaults and adding custom padding: #293 (by @xavier-owkin)
  • PyTorch's ImageNet feature-extractor syntax is updated following the deprecation of the pretrained=True syntax: #294 (by @xavier-owkin)
  • #296 also re-runs the FedCamelyon16 benchmark using the improved tiling and fixes a bug, now using 10,000 max tiles instead of 1,000 (by @xavier-owkin)
  • #295 and #297 add experimental support for the phikon feature extractor, yielding a gain of almost 15 AUC points over ImageNet pretraining (FL benchmark parameters are not yet optimized for this new feature extractor) (by @xavier-owkin)