
Releases: nnaisense/evotorch

0.5.1 (02 Nov 11:09, commit 5c58566)

Fixes

0.5.0 (02 Nov 10:09, commit 02484da)

New Features

  • Allow the user to access the search algorithm's internal optimizer by @engintoklu in #89
  • Make EvoTorch future-proof by @engintoklu in #77
    • Ensure compatibility with PyTorch 2.0 and Brax 0.9
    • Migrate from old Gym interface to Gymnasium
  • Inform the user when device is not set correctly by @engintoklu in #90

Fixes

0.4.1 (08 Mar 10:29, commit e8060ff)

Fixes

Docs

0.4.0 (17 Jan 14:10, commit 5d4bb1e)

New Features

Fixes

  • Fix all the mkdocstrings warnings (#39) (@Higgcz)
  • Fix infinite live reloading of the docs (#36) (@Higgcz)

Docs

  • Update the logging page and add WandbLogger section (#37) (@Higgcz)

0.3.0 (24 Oct 20:03, commit a232d04)

New

Vectorized gym support: Added a new problem class, evotorch.neuroevolution.VecGymNE, for solving vectorized Gym environments. This new problem class can work with Brax environments and can exploit GPU acceleration (#20).

PicklingLogger: Added a new logger, evotorch.logging.PicklingLogger, which periodically pickles and saves the current solution to disk (#20).

Python 3.7 support: The minimum supported Python version was lowered from 3.8 to 3.7, so EvoTorch can now be imported from within a Google Colab notebook (#16).

API Changes

@pass_info decorator: When working with GymNE (or with the newly introduced VecGymNE), a manually defined policy class that wishes to receive environment-related information via keyword arguments must now be decorated with @pass_info, as follows (#27):

from torch import nn
from evotorch.decorators import pass_info

@pass_info
class CustomPolicy(nn.Module):
    def __init__(self, **kwargs):
        # kwargs carries environment-related information
        # (e.g. observation and action dimensions).
        ...
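Conceptually, such a decorator works as an opt-in marker: it tags the policy class so that the problem class knows to pass environment information when instantiating it. The sketch below is purely illustrative; the attribute and function names are hypothetical and do not reflect EvoTorch's actual internals:

```python
def pass_info_marker(cls):
    # Mark the class so that the framework knows it wants env info.
    # The attribute name here is hypothetical.
    cls._expects_env_info = True
    return cls


def make_policy(policy_cls, env_info):
    # A framework would pass env info only to classes that opted in.
    if getattr(policy_cls, "_expects_env_info", False):
        return policy_cls(**env_info)
    return policy_cls()


@pass_info_marker
class InfoAwarePolicy:
    def __init__(self, **kwargs):
        self.env_info = kwargs
```

A plain, undecorated class would simply be constructed without any keyword arguments, which is why existing policies without the decorator no longer receive this information.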

Recurrent policies: When defining a manual recurrent policy (as a subclass of torch.nn.Module) for GymNE or for VecGymNE, the user is now required to define the forward method of the module according to the following signature:

def forward(self, x: torch.Tensor, h: Any = None) -> Tuple[torch.Tensor, Any]:
    ...

Note: The optional argument h is the current state of the network, and the second value of the output tuple is the updated state of the network. A reset() method is not required anymore, and it will be ignored (#20).
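For instance, a minimal recurrent policy obeying this signature might look as follows. This is a sketch only: the layer sizes and the choice of nn.RNNCell are illustrative assumptions, not something EvoTorch requires.

```python
from typing import Any, Tuple

import torch
from torch import nn


class RecurrentPolicy(nn.Module):
    """A minimal recurrent policy following the (x, h) -> (y, h) signature."""

    def __init__(self, obs_dim: int, act_dim: int, hidden_dim: int = 8):
        super().__init__()
        self.rnn_cell = nn.RNNCell(obs_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, act_dim)

    def forward(self, x: torch.Tensor, h: Any = None) -> Tuple[torch.Tensor, Any]:
        # On the first step h is None; nn.RNNCell then starts from zeros.
        h = self.rnn_cell(x, h)
        # Return the action along with the updated hidden state.
        return self.head(h), h
```

The caller threads the returned state back in on the next step, which is why an explicit reset() method is no longer needed: passing h=None is the reset.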

Fixes

Fixed a performance issue caused by the unintended cloning of the entire underlying storage of tensor slices (#21).

Fixed the signature and the docstrings of the overridable method _do_cross_over(...) of the class evotorch.operators.CrossOver (#30).

Docs

Added more example scripts and updated the related README file (#19).

Updated the documentation related to GPU usage with ray (#28).

0.2.0 (31 Aug 18:15, commit 6efb628)

Fixes

Docs

0.1.1 (09 Aug 09:55, commit 3bb5996)

What's changed

  • Re-arrange pip dependencies to make the default installation of EvoTorch runnable in most scenarios
  • Add docs badge and landing page link to the README
  • Fix broken links on PyPI

0.1.0 (08 Aug 21:06, commit 1691060)

We are excited to release the first public version of EvoTorch, an evolutionary computation library created at NNAISENSE.

With EvoTorch, one can solve various optimization problems without worrying about whether the problems at hand are differentiable. Among the problem types solvable with EvoTorch are:

  • Black-box optimization problems (continuous or discrete)
  • Reinforcement learning tasks
  • Supervised learning tasks
  • and more

Various evolutionary computation algorithms are available in EvoTorch:

  • Distribution-based search algorithms:
    • PGPE: Policy Gradients with Parameter-based Exploration.
    • XNES: Exponential Natural Evolution Strategies.
    • SNES: Separable Natural Evolution Strategies.
    • CEM: Cross-Entropy Method.
  • Population-based search algorithms:
    • SteadyStateGA: A fully elitist genetic algorithm implementation. It also supports multiple objectives, in which case it behaves like NSGA-II.
    • CoSyNE: Cooperative Synapse Neuroevolution.

All of the algorithms above are implemented in PyTorch and can therefore benefit from PyTorch's vectorization and GPU capabilities. In addition, with the help of the Ray library, EvoTorch can scale these algorithms further by splitting the workload across:

  • multiple CPUs
  • multiple GPUs
  • multiple computers over a Ray cluster