Releases: mala-project/mala

v1.2.1 - Minor bugfixes

01 Feb 08:57

This release fixes some minor issues and bugs, and updates some of the meta information. It also serves as a point of reference for an upcoming scientific work.

Change notes:

  • Updated MALA logos
  • Updated Tester class to also give Kohn-Sham energies alongside LDOS calculations
  • Updated CITATION.cff file to reflect new team members and scientific supervisors
  • Fixed bug that would crash models trained with horovod when loaded for inference without horovod
  • Fixed bug that would crash training when using Optuna+MPI for hyperparameter optimization (GPU compute graph usage was not properly adapted to this scenario)
  • Deactivated pytorch profiling by default; it can still be enabled manually

v1.2.0 - GPU and you

28 Sep 13:54

New features

  • Production-ready inference options
    • Full inference (from ionic configuration to observables) on either a single GPU or distributed across multiple CPUs (multi-GPU support still in development)
    • Access to (volumetric) observables within seconds
  • Fast training speeds due to optimal GPU usage
  • Training on large data sets through improved lazy-loading functionalities and data shuffling routines
  • Fast hyperparameter optimization through distributed optimizers (optuna) and training-free surrogate metrics (NASWOT/ACSD)
  • Easy-to-use interface through a single Parameters object for reproducibility and modular design (see the sketch after this list)
  • Internal caching system for intermediate quantities (e.g. DOS, density, band energy) for improved performance
  • Experimental features for advanced users:
    • MinterPy: Polynomial interpolation based descriptors
    • OpenPMD
    • OF-DFT-MD interface to create initial configurations for ML based sampling
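
A minimal sketch of the Parameters-based workflow referenced above, assuming the mala.Parameters attribute names and the save/load_from_file helpers used in current MALA examples; exact names may differ slightly in v1.2.0:

```python
# Sketch only: a single Parameters object configures all MALA components and
# round-trips through JSON, which is what enables reproducible, automated sweeps.
import mala

parameters = mala.Parameters()
parameters.network.layer_activations = ["ReLU"]   # network settings
parameters.running.max_number_epochs = 100        # training settings
parameters.running.learning_rate = 1e-4

# Persist the full configuration to JSON ...
parameters.save("training_run.json")

# ... and restore it later (or on another machine) for an identical setup.
restored_parameters = mala.Parameters.load_from_file("training_run.json")
```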

Change notes:

  • Full (serial) GPU inference added
  • MALA now operates on FP32
  • Added functionality for data shuffling (see the sketch after these change notes)
  • Added functionality for cached lazy loading
  • Improved GPU usage during training
  • Added convenience functions, e.g., for ACSD analysis
  • Fixed several bugs across the code
  • Overhauled documentation
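
A rough sketch of the data shuffling workflow mentioned above, assuming the DataShuffler class with add_snapshot/shuffle_snapshots methods as shown in current MALA examples; signatures in v1.2.0 may differ, and all file names and paths are purely illustrative:

```python
# Sketch only: shuffle gridded training data across snapshots once, up front,
# so that lazily loaded batches are already well mixed during training.
import mala

parameters = mala.Parameters()
data_shuffler = mala.DataShuffler(parameters)

# Each snapshot pairs a descriptor (input) file with an LDOS (output) file.
data_shuffler.add_snapshot("snapshot0.in.npy", "/path/to/data",
                           "snapshot0.out.npy", "/path/to/data")
data_shuffler.add_snapshot("snapshot1.in.npy", "/path/to/data",
                           "snapshot1.out.npy", "/path/to/data")

# Write shuffled snapshots that the (cached) lazy-loading pipeline can consume.
data_shuffler.shuffle_snapshots(complete_save_path="/path/to/shuffled",
                                save_name="snapshot_shuffled_*")
```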

v1.1.0 - (very late) Spring cleaning

18 Oct 07:04

Features

  • Parallel preprocessing, network training and model inference
  • Distributed hyperparameter optimization (Optuna) and distributed training-free network architecture optimization (NASWOT)
  • Reproducibility through single Parameters object, easy interface to JSON for automated sweeps
  • Internal caching system for intermediate quantities (e.g. DOS, density, band energy) for improved performance (see the sketch after this list)
  • Modular design
  • OF-DFT-MD interface to create initial configurations for ML based sampling
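
A minimal sketch of the caching behaviour referenced in the feature list, assuming an LDOS target calculator with a read_from_numpy_file method and cached band_energy/density properties as in current MALA documentation; the exact names available in v1.1.0 may differ:

```python
# Sketch only: a target calculator is tied to its data, and derived quantities
# (density, DOS, band energy, ...) are computed once and cached for reuse.
import mala

parameters = mala.Parameters()
ldos_calculator = mala.LDOS(parameters)
ldos_calculator.read_from_numpy_file("snapshot0.out.npy")  # illustrative path

# First access computes and caches the intermediate quantities ...
density = ldos_calculator.density
# ... subsequent, related accesses reuse the cached intermediates.
band_energy = ldos_calculator.band_energy
```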

Change notes:

  • MALA now consistently operates internally in Angstrom
    • Volumetric data that was created with MALA v1.0.0 can still be used, but unit conversion has to be added to the scripts in question (see the sketch after these change notes)
  • Implemented caching functionality
    • The old post-processing API is still fully functional, but will not use the caching functions; instead, MALA now has a more streamlined API tying calculators to data
  • More flexible data conversion methods
  • Improved Optuna distribution scheme
  • Implemented parallel total energy inference
  • Reduced import time for MALA module
  • Several smaller bugfixes
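
For the legacy v1.0.0 volumetric data mentioned above, the required unit conversion amounts to rescaling Bohr-based quantities to Angstrom-based ones before handing them to MALA. A purely illustrative sketch using plain numpy and the standard Bohr radius; the concrete conversion depends on the units your data was saved in:

```python
# Illustrative only: rescale a volumetric quantity stored per Bohr^3
# (e.g. an electronic density written by a v1.0.0 workflow) to per Angstrom^3,
# since MALA now operates internally in Angstrom.
import numpy as np

BOHR_TO_ANGSTROM = 0.529177210903  # Bohr radius in Angstrom (CODATA)

density_per_bohr3 = np.load("density_v1_0_0.npy")   # hypothetical legacy file
density_per_angstrom3 = density_per_bohr3 / BOHR_TO_ANGSTROM**3
np.save("density_angstrom.npy", density_per_angstrom3)
```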

v1.0.0 - First major release (PyPI version)

27 Apr 08:43

Features

  • Preprocessing of QE data using LAMMPS interface and LDOS parser (parallel via MPI)
  • Networks can be created and trained using pytorch (parallel via horovod)
  • Hyperparameter optimization using optuna, orthogonal array tuning and neural architecture search without training (NASWOT) supported
    • optuna interface supports distributed runs and NASWOT can be run in parallel via MPI
  • Postprocessing using QE total energy module (available as separate repository)
    • Network inference is parallel up to the total energy calculation, which is currently still serial.
  • Reproducibility through single Parameters object, easy interface to JSON for automated sweeps
  • Modular design

Change notes:

  • Full integration of the Sandia ML-DFT code into MALA (network architectures, misc. code still open)
  • Parallelization of routines:
    • Preprocessing (both SNAP calculation and LDOS parsing)
    • Network training (via horovod)
    • Network inference (except for total energy)
  • Technical improvements:
    • Default parameter interface is now JSON based
    • Internal refactoring

v1.0.0 - First major release

12 Apr 09:00

Features

  • Preprocessing of QE data using LAMMPS interface and LDOS parser (parallel via MPI)
  • Networks can be created and trained using pytorch (parallel via horovod)
  • Hyperparameter optimization using optuna, orthogonal array tuning and neural architecture search without training (NASWOT) supported
    • optuna interface supports distributed runs and NASWOT can be run in parallel via MPI
  • Postprocessing using QE total energy module (available as separate repository)
    • Network inference is parallel up to the total energy calculation, which is currently still serial.
  • Reproducibility through single Parameters object, easy interface to JSON for automated sweeps
  • Modular design

Change notes:

  • Full integration of the Sandia ML-DFT code into MALA (network architectures, misc. code still open)
  • Parallelization of routines:
    • Preprocessing (both SNAP calculation and LDOS parsing)
    • Network training (via horovod)
    • Network inference (except for total energy)
  • Technical improvements:
    • Default parameter interface is now JSON based
    • Internal refactoring

v0.2.0 - Regular Update

08 Oct 15:11

Regular update of MALA. This release mostly updates the hyperparameter optimization capabilities of MALA and fixes some minor bugs. Changelog:

  • Fixed installation instructions and the OAT part of the installation
  • Improved and added examples; made LDOS-based examples runnable
  • Replaced direct string concatenation for file interaction with path functions
  • Improved optuna hyperparameter optimization (ensemble objectives, band energy as validation loss, distributed optimization, performance study)
  • Improved OAT and NASWOT implementation
  • Fixed several things regarding documentation and citation
  • Added a check ensuring that QE-MD-generated input files adhere to periodic boundary conditions (PBC)
  • Implemented visualization via TensorBoard
  • Stylistic improvements (fixed import ordering, converted TODOs to issues or resolved them, replaced the unnecessary get_data_repo() function)
  • Added bumpversion
  • Set up a mirror to the casus organization and fixed pipeline deployment issues when working from forks

Test data repository version: v1.1.0

v0.1.0 - Accelerating Finite-Temperature DFT with DNN

07 Jul 11:27

First alpha release of MALA. This code accompanies the publication of the same name (https://doi.org/10.1103/PhysRevB.104.035120).

Features:

  • Preprocessing of QE data using LAMMPS interface and parsers
  • Networks can be created and trained using pytorch
  • Hyperparameter optimization using optuna
    • experimental: orthogonal array tuning and neural architecture search without training supported
  • Postprocessing using QE total energy module (available as separate repository)

Test data repository version: v0.1.0

v0.0.2

08 Jun 12:59
Pre-release

Added code from Sandia National Laboratories and Oak Ridge National Laboratory. Code developments will be merged from this point onward.

v0.0.1

08 Jun 12:58
Pre-release

Current features:

  • Preprocessing of QE data using LAMMPS interface and parsers
  • Networks can be created and trained using pytorch
  • Hyperparameter optimization using optuna
    • experimental: orthogonal array tuning and neural architecture search without training supported
  • Postprocessing using QE total energy module