
Commit

Merge branch 'development'
zingale committed Feb 1, 2022
2 parents 55ee8fa + 756d00e commit 5bf85dc
Showing 28 changed files with 146 additions and 439 deletions.
2 changes: 1 addition & 1 deletion .gitmodules
@@ -4,4 +4,4 @@

[submodule "Microphysics"]
path = external/Microphysics
url = https://github.com/starkiller-astro/Microphysics.git
url = https://github.com/AMReX-Astro/Microphysics.git
10 changes: 10 additions & 0 deletions CHANGES.md
@@ -1,3 +1,13 @@
# 22.02

* Microphysics has moved from Starkiller-Astro to AMReX-Astro. The
git submodules have been updated accordingly. The old URL should
redirect to the new location, but you are encouraged to change
the submodule URL if you use submodules. From the top-level Castro/
directory this can be done as:

git submodule set-url -- external/Microphysics/ https://github.com/AMReX-Astro/Microphysics.git

# 21.12

* Tiling was added to main loop in MHD algorithm to enable
4 changes: 2 additions & 2 deletions Docs/source/EOSNetwork.rst
@@ -94,7 +94,7 @@ will likely not converge. Usually a prior value of the temperature or
density suffices if it’s available, but if not then use ``T_guess`` or
``small_dens``.
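
As an aside, a minimal sketch of that seeding pattern using the Microphysics
``eos_t`` / ``eos()`` interface (the local variables ``rho``, ``e``, ``T_old``,
and ``X`` below are illustrative placeholders rather than names from the Castro
sources, and ``T_guess`` stands in for the runtime parameter of that name)::

    // assumes the Microphysics EOS headers (e.g. <eos.H>) are included
    eos_t eos_state;
    eos_state.rho = rho;                  // zone density
    eos_state.e   = e;                    // zone specific internal energy
    for (int n = 0; n < NumSpec; ++n) {
        eos_state.xn[n] = X[n];           // mass fractions
    }

    // seed the Newton iteration: prefer a prior temperature if one is
    // available, otherwise fall back to T_guess
    eos_state.T = (T_old > 0.0) ? T_old : T_guess;

    eos(eos_input_re, eos_state);         // eos_state.T now holds the converged temperature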

The `Microphysics <https://github.com/starkiller-astro/Microphysics>`__
The `Microphysics <https://github.com/AMReX-Astro/Microphysics>`__
repository is the collection of microphysics routines that are compatible with the
AMReX-Astro codes. We refer you to the documentation in that repository for how to set it up
and for information on the equations of state provided. That documentation
@@ -235,7 +235,7 @@ In normal operation in Castro the integration occurs over a time interval
of :math:`\Delta t/2`, where :math:`\Delta t` is the hydrodynamics timestep.

If you are interested in using actual nuclear burning networks,
you should download the `Microphysics <https://github.com/starkiller-astro/Microphysics>`__
you should download the `Microphysics <https://github.com/AMReX-Astro/Microphysics>`__
repository. This is a collection of microphysics routines that are compatible with the
AMReX Astro codes. We refer you to the documentation in that repository for how to set it up
and for information on the networks provided. That documentation
2 changes: 1 addition & 1 deletion Docs/source/Introduction.rst
@@ -40,7 +40,7 @@ Castro's major capabilities:
* spectral deferred corrections time integration for coupling hydro
and reactions (see :ref:`ch:sdc`)

* parallelization via MPI + OpenMP or MPI + CUDA
* parallelization via MPI + OpenMP (CPUs), MPI + CUDA (NVIDIA GPUs), or MPI + HIP (AMD GPUs)


Development Model
11 changes: 5 additions & 6 deletions Docs/source/build_system.rst
@@ -61,19 +61,18 @@ These parameters control Fortran support:
Parallelization and GPUs
^^^^^^^^^^^^^^^^^^^^^^^^

.. index:: USE_MPI, USE_OMP, USE_CUDA, USE_ACC
.. index:: USE_MPI, USE_OMP, USE_CUDA, USE_HIP

The following parameters control how work is divided across nodes, cores, and GPUs.

* ``USE_CUDA``: compile with GPU support using CUDA.
* ``USE_MPI``: compile with the MPI library to allow for distributed parallelism.

* ``USE_ACC``: compile with OpenACC. Note: this is a work in
progress and should not be used presently.
* ``USE_OMP``: compile with OpenMP to allow for shared memory parallelism.

* ``USE_CUDA``: compile with NVIDIA GPU support using CUDA.

* ``USE_MPI``: compile with the MPI library to allow for distributed parallelism.
* ``USE_HIP``: compile with AMD GPU support using HIP.

* ``USE_OMP``: compile with OpenMP to allow for shared memory parallelism.



2 changes: 1 addition & 1 deletion Docs/source/conf.py
@@ -79,7 +79,7 @@ def get_version():

# General information about the project.
project = 'Castro'
copyright = '2018-2020, Castro development tem'
copyright = '2018-2022, Castro development team'
author = 'Castro development team'

html_logo = "castro_logo_hot_200.png"
8 changes: 4 additions & 4 deletions Docs/source/getting_started.rst
@@ -5,7 +5,7 @@ Getting Started
.. note::

Castro has two source dependencies: `AMReX <https://github.com/AMReX-Codes/amrex>`_, the adaptive mesh
library, and `StarKiller Microphysics <https://github.com/starkiller-astro/Microphysics>`_, the collection of equations
library, and `Microphysics <https://github.com/AMReX-Astro/Microphysics>`_, the collection of equations
of state, reaction networks, and other microphysics. The
instructions below describe how to get these dependencies automatically
with Castro.
@@ -49,7 +49,7 @@ is installed on your machine—we recommend version 1.7.x or higher.
that all of Castro's dependencies are downloaded. Currently this
requirement is for the AMReX mesh refinement framework, which is
maintained in the AMReX-Codes organization on GitHub, and the
Microphysics repository from the starkiller-astro organization.
Microphysics repository from the AMReX-Astro organization.
AMReX adds the necessary code for the driver code for the simulation,
while Microphysics adds the equations of state, reaction
networks, and other microphysics needed to run Castro.
@@ -95,12 +95,12 @@ is installed on your machine—we recommend version 1.7.x or higher.
To do so, you can clone them from GitHub using::

git clone https://github.com/AMReX-Codes/amrex.git
git clone https://github.com/starkiller-astro/Microphysics.git
git clone https://github.com/AMReX-Astro/Microphysics.git

or via SSH as::

git clone git@github.com:/AMReX-Codes/amrex.git
git clone git@github.com:/starkiller-astro/Microphysics.git
git clone git@github.com:/AMReX-Astro/Microphysics.git

Then, set the ``AMREX_HOME`` environment variable to point to the
``amrex/`` directory, and the ``MICROPHYSICS_HOME`` environment
32 changes: 26 additions & 6 deletions Docs/source/mpi_plus_x.rst
@@ -42,12 +42,6 @@ are used on GPUs as on CPUs.
Almost all of Castro runs on GPUs, with the main exception being
the true SDC solver (``USE_TRUE_SDC = TRUE``).

To enable GPU computing, compile with::

USE_MPI = TRUE
USE_OMP = FALSE
USE_CUDA = TRUE

When using GPUs, almost all of the computing is done on the GPUs. In
the MFIter loops over boxes, the loops put a single zone on each GPU
thread, to take advantage of the massive parallelism. The Microphysics
@@ -58,6 +52,32 @@ Best performance is obtained with bigger boxes, so setting
``amr.max_grid_size = 128`` and ``amr.blocking_factor = 32`` can give
good performance.
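
As an illustration of the one-zone-per-thread pattern described above, here is
a minimal sketch of an MFIter loop whose body is dispatched with
``amrex::ParallelFor`` (the MultiFab ``S`` and the kernel body are illustrative
only)::

    for (amrex::MFIter mfi(S, amrex::TilingIfNotGPU()); mfi.isValid(); ++mfi) {
        const amrex::Box& bx = mfi.tilebox();
        auto s = S.array(mfi);

        // when built with USE_CUDA or USE_HIP, each zone (i,j,k) in bx
        // is handled by its own GPU thread
        amrex::ParallelFor(bx,
        [=] AMREX_GPU_HOST_DEVICE (int i, int j, int k)
        {
            s(i,j,k,0) += amrex::Real(1.0);   // placeholder work on this zone
        });
    }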

NVIDIA GPUs
-----------

With NVIDIA GPUs, we use MPI+CUDA, compiled with GCC and the NVIDIA compilers.
To enable this, compile with::

USE_MPI = TRUE
USE_OMP = FALSE
USE_CUDA = TRUE


AMD GPUs
--------

For AMD GPUs, we use MPI+HIP, compiled with the ROCm compilers.
To enable this, compile with::

USE_MPI = TRUE
USE_OMP = FALSE
USE_HIP = TRUE


.. note::

AMD + HIP support is new and considered experimental.


Working at Supercomputing Centers
=================================
2 changes: 0 additions & 2 deletions Exec/Make.Castro
@@ -251,8 +251,6 @@ ifeq ($(USE_RAD), TRUE)
DEFINES += -DRAD_INTERP

DEFINES += -DNGROUPS=$(NGROUPS)

EXTERN_CORE += $(TOP)/Util/LAPACK
endif

ifeq ($(USE_MAESTRO_INIT), TRUE)
56 changes: 10 additions & 46 deletions Source/driver/Castro.cpp
@@ -20,9 +20,6 @@
#include <AMReX_TagBox.H>
#include <AMReX_FillPatchUtil.H>
#include <AMReX_ParmParse.H>
#ifdef MICROPHYSICS_FORT
#include <extern_parameters_F.H>
#endif

#ifdef RADIATION
#include <Radiation.H>
@@ -48,10 +45,6 @@
#include <extern_parameters.H>
#include <prob_parameters.H>

#ifdef MICROPHYSICS_FORT
#include <microphysics_F.H>
#endif

#include <problem_initialize.H>
#include <problem_initialize_state_data.H>
#ifdef MHD
@@ -197,13 +190,6 @@ Castro::variableCleanUp ()

desc_lst.clear();

#if !defined(NETWORK_HAS_CXX_IMPLEMENTATION)
// Fortran cleaning
#ifdef MICROPHYSICS_FORT
microphysics_finalize();
#endif
#endif

// C++ cleaning
eos_finalize();

@@ -594,30 +580,30 @@ Castro::read_params ()
info.SetMaxLevel(max_level);
}

if (int nval = ppr.countval("value_greater")) {
if (ppr.countval("value_greater")) {
Vector<Real> value;
ppr.getarr("value_greater", value, 0, nval);
ppr.getarr("value_greater", value, 0, ppr.countval("value_greater"));
std::string field;
ppr.get("field_name", field);
error_tags.push_back(AMRErrorTag(value, AMRErrorTag::GREATER, field, info));
}
else if (int nval = ppr.countval("value_less")) {
else if (ppr.countval("value_less")) {
Vector<Real> value;
ppr.getarr("value_less", value, 0, nval);
ppr.getarr("value_less", value, 0, ppr.countval("value_less"));
std::string field;
ppr.get("field_name", field);
error_tags.push_back(AMRErrorTag(value, AMRErrorTag::LESS, field, info));
}
else if (int nval = ppr.countval("gradient")) {
else if (ppr.countval("gradient")) {
Vector<Real> value;
ppr.getarr("gradient", value, 0, nval);
ppr.getarr("gradient", value, 0, ppr.countval("gradient"));
std::string field;
ppr.get("field_name", field);
error_tags.push_back(AMRErrorTag(value, AMRErrorTag::GRAD, field, info));
}
else if (int nval = ppr.countval("relative_gradient")) {
else if (ppr.countval("relative_gradient")) {
Vector<Real> value;
ppr.getarr("relative_gradient", value, 0, nval);
ppr.getarr("relative_gradient", value, 0, ppr.countval("relative_gradient"));
std::string field;
ppr.get("field_name", field);
error_tags.push_back(AMRErrorTag(value, AMRErrorTag::RELGRAD, field, info));
@@ -964,7 +950,6 @@ Castro::initData ()
// Loop over grids, call FORTRAN function to init with data.
//
const Real* dx = geom.CellSize();
const Real* prob_lo = geom.ProbLo();
MultiFab& S_new = get_new_data(State_Type);
Real cur_time = state[State_Type].curTime();

@@ -1089,8 +1074,6 @@ Castro::initData ()
for (MFIter mfi(S_new); mfi.isValid(); ++mfi)
{
const Box& box = mfi.validbox();
const int* lo = box.loVect();
const int* hi = box.hiVect();

auto s = S_new[mfi].array();
auto geomdata = geom.data();
@@ -1396,8 +1379,6 @@ Castro::initData ()
#ifdef GRAVITY
#if (AMREX_SPACEDIM > 1)
if ( (level == 0) && (spherical_star == 1) ) {
const int nc = S_new.nComp();
const int n1d = get_numpts();
int is_new = 1;
make_radial_data(is_new);
}
@@ -2996,7 +2977,9 @@ Castro::normalize_species (MultiFab& S_new, int ng)
[=] AMREX_GPU_HOST_DEVICE (int i, int j, int k)
{
Real rhoX_sum = 0.0_rt;
#ifndef AMREX_USE_GPU
Real rhoInv = 1.0_rt / u(i,j,k,URHO);
#endif

for (int n = 0; n < NumSpec; ++n) {
#ifndef AMREX_USE_GPU
@@ -3400,28 +3383,10 @@ Castro::extern_init ()
std::cout << "reading extern runtime parameters ..." << std::endl;
}

#ifdef MICROPHYSICS_FORT
const int probin_file_length = probin_file.length();
Vector<int> probin_file_name(probin_file_length);

for (int i = 0; i < probin_file_length; i++) {
probin_file_name[i] = probin_file[i];
}

// read them in in Fortran from the probin file
runtime_init(probin_file_name.dataPtr(),&probin_file_length);
#endif

// grab them from Fortran to C++; then read any C++ parameters directly
// from inputs (via ParmParse)
init_extern_parameters();

#ifdef MICROPHYSICS_FORT
// finally, update the Fortran side via ParmParse to override the
// values of any parameters that were set in inputs
update_fortran_extern_after_cxx();
#endif

}

void
@@ -4003,7 +3968,6 @@ Castro::make_radial_data(int is_new)
Real dr = dx[0];

auto problo = geom.ProbLoArray();
auto probhi = geom.ProbHiArray();

MultiFab& S = is_new ? get_new_data(State_Type) : get_old_data(State_Type);
const int nc = S.nComp();
20 changes: 0 additions & 20 deletions Source/driver/Castro_F.H
@@ -10,16 +10,6 @@ extern "C"
#endif


#ifdef RADIATION
void ca_inelastic_sct
(const int i, const int j, const int k,
const amrex::Real temp,
amrex::Real* Erout,
const amrex::Real ks,
amrex::Real& dEr,
const amrex::Real dt);
#endif

#ifdef AUX_UPDATE
void ca_auxupdate
(BL_FORT_FAB_ARG(state_old),
@@ -54,14 +44,4 @@
const int& r_model_start);
#endif

#ifdef MHD
BL_FORT_PROC_DECL(CA_INITMAG,ca_initmag)
(const int& level, const amrex::Real& time,
const int* lo, const int* hi,
const int& nx, BL_FORT_FAB_ARG_3D(magx),
const int& ny, BL_FORT_FAB_ARG_3D(magy),
const int& nz, BL_FORT_FAB_ARG_3D(magz),
const amrex::Real dx[], const amrex::Real xlo[], const amrex::Real xhi[]);
#endif

#endif
2 changes: 0 additions & 2 deletions Source/driver/Castro_advance.cpp
@@ -344,8 +344,6 @@ Castro::initialize_advance(Real time, Real dt, int amr_iteration, int amr_ncycle
for (MFIter mfi(S_old); mfi.isValid(); ++mfi)
{
const Box& box = mfi.validbox();
const int* lo = box.loVect();
const int* hi = box.hiVect();

auto s = S_old[mfi].array();
auto geomdata = geom.data();
7 changes: 0 additions & 7 deletions Source/driver/Castro_io.cpp
@@ -197,8 +197,6 @@ Castro::restart (Amr& papa,
problem_restart(dir);
}

const Real* dx = geom.CellSize();

if ( (grown_factor > 1) && (parent->maxLevel() < 1) )
{
std::cout << "grown_factor is " << grown_factor << std::endl;
@@ -266,10 +264,7 @@ Castro::restart (Amr& papa,
for (MFIter mfi(S_new); mfi.isValid(); ++mfi)
{

const Real* prob_lo = geom.ProbLo();
const Box& bx = mfi.validbox();
const int* lo = bx.loVect();
const int* hi = bx.hiVect();

if (! orig_domain.contains(bx)) {

@@ -296,8 +291,6 @@
#if (AMREX_SPACEDIM > 1)
if ( (level == 0) && (spherical_star == 1) ) {
MultiFab& S_new = get_new_data(State_Type);
const int nc = S_new.nComp();
const int n1d = get_numpts();
int is_new = 1;
make_radial_data(is_new);
}
