HydroGym

About this Package

IMPORTANT NOTE: This package has not yet had an official public release, so consider everything here an early beta. In other words, we do not yet guarantee that any of it is working or correct. Use at your own risk.

HydroGym is an open-source library of challenge problems in data-driven modeling and control of fluid dynamics. It is roughly designed as an abstract interface for control of PDEs that is compatible with typical reinforcement learning APIs (in particular Ray/RLlib and OpenAI Gym), along with specific numerical solver implementations for several canonical flow control problems. Currently these "environments" are all implemented using the Firedrake finite element library.

Features

  • Hierarchical: Designed for analysis and controller design, from a high-level black-box interface down to low-level operator access (see the sketch after this list)
    • High-level: hydrogym.env.FlowEnv classes implement the OpenAI gym.Env interface
    • Intermediate: Typical CFD interface with hydrogym.FlowConfig and hydrogym.TransientSolver classes
    • Low-level: Access to linearized operators and sparse scipy or PETSc CSR matrices
  • Modeling and analysis tools: Global stability analysis (via SLEPc) and modal decompositions (via modred)
  • Scalable: Individual environments are parallelized with MPI, and reinforcement learning training scales out with a highly scalable Ray backend.
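
As a rough sketch of moving between these interface levels (the FlowEnv and Cylinder names are taken from this README; the attribute names env.flow and env.solver are assumptions for illustration and may differ from the actual API):

import hydrogym.firedrake as hgym

# High level: gym.Env-style interface
env = hgym.FlowEnv({"flow": hgym.Cylinder})

# Intermediate level: the underlying flow configuration and transient solver
# (attribute names assumed here, not taken from this README)
flow = env.flow
solver = env.solver

# Low-level access (linearized operators, sparse SciPy/PETSc matrices) starts
# from these objects; see the documentation for the supported routines.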

Installation

By design, the core components of HydroGym are independent of the underlying solvers in order to avoid custom or complex third-party library installations. This means that the latest release of HydroGym can be installed directly from PyPI:

pip install hydrogym

BEWARE: The PyPI package currently lags behind the main repository, and we strongly urge users to build HydroGym directly from source. Once the package has stabilized, we will update the PyPI release accordingly.

However, the package assumes that a solver backend is available, so in order to run simulations locally you will need to install the solver backend separately (again, all environments are currently implemented with Firedrake). Alternatively (and this is important for large-scale RL training), the core HydroGym package can (or will soon be able to) launch reinforcement learning training on a Ray cluster without a local Firedrake install. For more information and suggested approaches, see the Installation Docs.
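
For example, a source-based local setup might look like the following (the Firedrake virtual environment path is an assumption based on the default Firedrake install script; adjust it to your own installation):

# Activate the Firedrake virtual environment (path assumed; adjust as needed)
source /path/to/firedrake/bin/activate

# Install HydroGym from a local clone of the repository
git clone https://github.com/dynamicslab/hydrogym.git
cd hydrogym
pip install -e .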

Quickstart Guide

Having installed HydroGym into a virtual environment, experimenting with it is as easy as starting the Python interpreter

python

and then setting up a HydroGym environment instance

import hydrogym.firedrake as hgym
env = hgym.FlowEnv({"flow": hgym.Cylinder}) # Cylinder wake flow configuration
num_steps = 10  # number of control steps to run (choose any value)
for i in range(num_steps):
    action = 0.0   # Put your control law here
    (lift, drag), reward, done, info = env.step(action)
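
As an example of filling in the control law, the sketch below closes the loop with simple proportional feedback on the measured lift (the gain and episode length are arbitrary illustrative values, not a tuned controller):

import hydrogym.firedrake as hgym

env = hgym.FlowEnv({"flow": hgym.Cylinder})  # same configuration as above
num_steps = 100  # arbitrary episode length for this sketch
gain = 0.1       # illustrative proportional gain, not a tuned value

action = 0.0
for i in range(num_steps):
    (lift, drag), reward, done, info = env.step(action)
    action = -gain * lift  # proportional feedback on the lift measurement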

To test that you can run individual environment instances in parallel, run the steady-state Newton solver on the cylinder wake with 4 MPI processes:

cd /path/to/hydrogym/examples/cylinder
mpiexec -np 4 python pd-control.py

For more detail, check out:

  • A quick tour of features in notebooks/overview.ipynb
  • Example codes for various simulation, modeling, and control tasks in examples
  • The documentation on ReadTheDocs

Flow configurations

There are currently a number of flow configurations implemented, the most prominent of which are:

  • Periodic cylinder wake at Re=100
  • Chaotic pinball at Re=130
  • Open cavity at Re=7500
  • Backwards-facing step at Re=600

with visualizations of the flow configurations available in the docs.
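
Each of these is selected through the same FlowEnv interface shown in the quickstart. Only the Cylinder class name is confirmed by this README; the other class names below are assumptions based on the configuration names above and should be checked against the documentation:

import hydrogym.firedrake as hgym

# Cylinder wake, as used throughout this README
env = hgym.FlowEnv({"flow": hgym.Cylinder})

# Other configurations would be selected the same way (class names assumed):
# env = hgym.FlowEnv({"flow": hgym.Pinball})
# env = hgym.FlowEnv({"flow": hgym.Cavity})
# env = hgym.FlowEnv({"flow": hgym.Step})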