v4.1.0

@Optimox released this 23 Jul 13:34

4.1.0 (2023-07-23)

Bug Fixes

  • 424 allow any np.intX as training target (63a8dba)
  • compute unsupervised loss using numpy (49bd61b)
  • custom loss using inplace operations (423f7c4)
  • disable ansi (60ec6bf)
  • feature importance not dependent on dataloader (5b19091)
  • README patience to 10 (fd2c73a)
  • replace std 0 by the mean or 1 if mean is 0 (ddf02da)
  • try to disable parallel install (c4963ad)
  • typo in pandas error (5ac5583)
  • update gpg key in docker file gpu (709fcb1)
  • upgrade the resource size (fc59ea6)
  • use numpy std with Bessel's correction and test (3adaf4c)
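The two std-related fixes above go together: normalization now uses numpy's sample standard deviation (Bessel's correction, `ddof=1`), and a zero std is replaced by the mean, or by 1 if the mean is also 0, to avoid division by zero. A minimal numpy sketch of that logic (the `safe_std` helper is illustrative, not the library's actual function):

```python
import numpy as np

def safe_std(x):
    """Sample std with Bessel's correction, guarded against zero."""
    std = np.std(x, ddof=1)  # divide by n-1, not n
    if std == 0:
        mean = x.mean()
        std = mean if mean != 0 else 1.0  # fallback so scaling never divides by 0
    return std

x = np.array([2.0, 4.0, 6.0, 8.0])
print(safe_std(x))  # sqrt(20/3) ≈ 2.582
```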

Features

  • add augmentations inside the fit method (6d0485f)
  • add warm_start matching scikit-learn (d725101)
  • added conda install option (ca14b76), closes #346
  • disable tests in docker file gpu to save CI time (233f74e)
  • enable feature grouping for attention mechanism (bcae5f4)
  • enable torch 2.0 by relaxing poetry (bbd7a4e)
  • pretraining matches paper (5adb804)
  • raise error in case cat_dims and cat_idxs are inconsistent (8c3b795)
  • update python (dea62b4)
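Of the features above, feature grouping for the attention mechanism is the most structural: several columns (e.g. the one-hot block of a single category) can share one attention weight. A minimal numpy sketch of the idea, not the library's implementation; the group matrix and its normalization here are illustrative assumptions:

```python
import numpy as np

# Map group-level attention back to feature level via a membership matrix.
n_features = 5
groups = [[0, 1], [2], [3, 4]]  # e.g. columns 0-1 form one one-hot block

group_matrix = np.zeros((len(groups), n_features))
for g, cols in enumerate(groups):
    group_matrix[g, cols] = 1.0 / len(cols)  # spread weight evenly in the group

group_mask = np.array([0.6, 0.3, 0.1])   # attention over 3 groups, sums to 1
feature_mask = group_mask @ group_matrix  # per-feature mask, still sums to 1
print(feature_mask)  # [0.3, 0.3, 0.3, 0.05, 0.05]
```

In pytorch-tabnet this surfaces as a constructor argument listing the grouped column indices; the point of the normalization is that the feature-level mask remains a valid attention distribution.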