Allow different AMR flag methods on each level #262

Open
rjleveque opened this issue Jun 5, 2020 · 5 comments

@rjleveque
Member

For discussion, fleshing out an idea we've discussed briefly in the past...

See also http://www.clawpack.org/refinement.html

Current flagging strategy in amrclaw:

(ignoring adjoint flagging to start)

setrun.py sets values:

amrdata.flag_richardson = boolean (stored as flag_richardson in Fortran)
amrdata.flag_richardson_tol = scalar (stored as tol in Fortran)
amrdata.flag2refine = boolean (stored as flag_gradient in Fortran)
amrdata.flag2refine_tol = scalar (stored as tolsp in Fortran)
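
For concreteness, a typical current setrun.py fragment looks something like
this (the numerical values here are illustrative only):

amrdata.flag_richardson = False      # no Richardson error estimation
amrdata.flag_richardson_tol = 1.0    # ignored when flag_richardson is False
amrdata.flag2refine = True           # flag using flag2refine (undivided gradient by default)
amrdata.flag2refine_tol = 0.05       # tolerance tolsp used by flag2refine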

If it is regrid time on level L, the following is done in flagger.f,
looping over all patches on this level:

  1. set amrflag(i,j) = UNSET for all cells on patch.

  2. call flagregions2 to set some values to DONTFLAG if refinement is
    forbidden, or DOFLAG if refinement is required, as follows:
    if the cell center lies inside one or more regions then the cell
    is marked

             DOFLAG if L < max(minlevel)
             DONTFLAG if L >= max(maxlevel) 
    

    where in both cases the max is taken over the regions containing the
    cell (yes, both are max). The cell remains UNSET if it does not lie
    in any region, or if

     max(minlevel) <= L < max(maxlevel)

    in which case refinement is neither required nor forbidden.

  3. If any cells on this patch are still UNSET and flag_gradient==True, then
    call flag2refine and possibly mark some of these cells as DOFLAG
    if the tolerance tolsp is exceeded by the undivided gradient estimate or
    by the user's criterion in flag2refine.
    (Does not change cells already marked, and does not mark any cells as DONTFLAG.)

  4. If any cells are still UNSET and flag_richardson==True,
    call errest to possibly mark some of these cells as DOFLAG
    if the tolerance tol is exceeded by the Richardson error estimate.
    (Does not change cells already marked, and does not mark any cells as DONTFLAG.)

  5. If any cells are still UNSET, mark them as DONTFLAG.

Then buffer flagged points and cluster them into rectangular level L+1 patches.
(A rough sketch of the per-patch flagging logic follows.)
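
A rough Python transcription of this per-patch logic (not the actual Fortran
in flagger.f; the region objects and the two exceeds_* callables are
stand-ins for flagregions2, flag2refine and errest, and the sentinel values
are chosen arbitrarily here):

UNSET, DONTFLAG, DOFLAG = -1, 0, 1    # sentinel values for the sketch only

def flag_patch(q, cell_centers, regions, level,
               flag_gradient, tolsp, flag_richardson, tol,
               exceeds_flag2refine_tol, exceeds_richardson_tol):
    """Sketch of steps 1-5 for one level-`level` patch.

    cell_centers maps a cell index to its (x, y) center; each region has
    .contains(x, y), .minlevel and .maxlevel; the two exceeds_* callables
    stand in for flag2refine and the Richardson error estimator."""
    amrflag = {ij: UNSET for ij in cell_centers}               # step 1

    for ij, (x, y) in cell_centers.items():                    # step 2 (flagregions2)
        containing = [r for r in regions if r.contains(x, y)]
        if not containing:
            continue                                           # not in any region: stays UNSET
        if level < max(r.minlevel for r in containing):
            amrflag[ij] = DOFLAG                               # refinement required
        elif level >= max(r.maxlevel for r in containing):
            amrflag[ij] = DONTFLAG                             # refinement forbidden

    for ij in cell_centers:                                    # steps 3 and 4
        if amrflag[ij] != UNSET:
            continue
        if flag_gradient and exceeds_flag2refine_tol(q, ij, tolsp):
            amrflag[ij] = DOFLAG
        elif flag_richardson and exceeds_richardson_tol(q, ij, tol):
            amrflag[ij] = DOFLAG

    for ij in cell_centers:                                    # step 5
        if amrflag[ij] == UNSET:
            amrflag[ij] = DONTFLAG
    return amrflag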

If adjoint_flagging == True then the above is still done, but now:

  • flag2refine computes the inner product of q with adjoint snapshots over
    some time range and compares the max amplitude of this inner product to
    tolsp to mark DOFLAG,
  • errest computes the inner product of the error estimate with adjoint
    snapshots and uses tol as the tolerance to mark DOFLAG.

Proposed modification (ignoring adjoint):

setrun.py sets values:

amrdata.flag_richardson = boolean or array 
amrdata.flag_richardson_tol = scalar or array
amrdata.flag2refine = boolean or array
amrdata.flag2refine_tol = scalar or array

In each case, a scalar value means that value applies to all levels L,
while an array such as

amrdata.flag_richardson = [True, True, False]

would mean use Richardson flagging on levels L = 1 and 2 but not on level 3
or above, and the tolerance could also vary by level.

The above algorithm would be unchanged except that on level L the values of
flag_richardson, richardson_tol, etc. would be the ones specified for that
level. This should be a very minor change.
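
The per-level lookup could be handled by a small helper on the Python side,
e.g. (hypothetical function name, shown only to illustrate the intent):

def value_for_level(param, level):
    """Return the value of a scalar-or-list setrun parameter on AMR level `level` (1-based).

    A scalar applies to every level; a list shorter than the number of levels
    is treated as if padded with its last entry, so [True, True, False] means
    False on level 3 and on all finer levels."""
    if isinstance(param, (list, tuple)):
        return param[min(level, len(param)) - 1]
    return param

For example, value_for_level([True, True, False], 5) returns False, matching
the "level 3 or above" convention described above.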

Note that flag_richardson and flag2refine can each be True or False
independently of the other. If both are True, then a cell left UNSET by the
regions is first checked by flag2refine, and if it remains UNSET it is then
checked by flag_richardson. So it will end up flagged if either criterion
says it should be flagged (unless the regions already determined it as
DOFLAG or DONTFLAG).

Adjoint flagging:

Currently if adjoint_flagging = True then flag2refine and/or flag_richardson
can be applied, with the tolerance applied to the adjoint inner product
rather than directly to q or the error estimate.

This could be extended to arrays as above by letting adjoint_flagging be an
array and applying the current approach on each level based on
adjoint_flagging[L].

A more general approach would be to allow adjoint_flagging in addition to one
or both of flag2refine (based on gradient of q) and flag_richardson (error
estimates of q). Ideally we would allow two new possibilities: adjoint
flagging based on the magnitude of the inner product of q with the adjoint,
and adjoint flagging based on the inner product of the error estimate with
the adjoint.
This does not seem worth fully implementing at this time because doing both
Richardson flagging on q and on the adjoint simultaneously would require more
refactoring and probably would not be used.

But I do suggest we introduce:

amrdata.flag_adjoint = boolean or array
amrdata.flag_adjoint_tol = scalar or array
amrdata.flag_adjoint_richardson = boolean or array
amrdata.flag_adjoint_richardson_tol = scalar or array

for more clarity about what is being done in setrun.py, and with the
requirement that flag_adjoint_richardson and flag_richardson cannot both
be True on the same level.
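
A consistency check along these lines could then go in the setrun data
validation; a sketch only, assuming the proposed attribute names above and
the hypothetical value_for_level helper from earlier:

def check_adjoint_flags(amrdata, amr_levels_max):
    """Raise an error if Richardson flagging on q and on the adjoint are both
    requested on the same level, which the proposal above disallows."""
    for level in range(1, amr_levels_max + 1):
        if (value_for_level(amrdata.flag_richardson, level) and
                value_for_level(amrdata.flag_adjoint_richardson, level)):
            raise ValueError("flag_richardson and flag_adjoint_richardson "
                             "cannot both be True on level %d" % level)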

In particular, allow both flag2refine and flag_adjoint to be True on the same
level, and remove the code that computes the adjoint inner product from
flag2refine since it does not belong there anyway -- the original idea was
that flag2refine.f90 should be a simple routine for gradient flagging that
the user could easily modify to implement some other desired flagging based
directly on q.
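
For reference, the kind of criterion the default flag2refine encapsulates is
just an undivided-difference check on q; a Python sketch of that idea for a
single cell and a single solution component (the real routine is
flag2refine.f90 and loops over the whole patch and all components):

def undivided_difference_exceeds(q, i, j, tolsp):
    """True if the largest undivided difference of q[0,:,:] between the
    neighbors of cell (i,j) exceeds tolsp (only the first component is
    checked here, for brevity)."""
    dq = max(abs(q[0, i+1, j] - q[0, i-1, j]),
             abs(q[0, i, j+1] - q[0, i, j-1]))
    return dq > tolsp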

This set of modifications would in particular allow setting, e.g.

amrdata.flag_adjoint = [True, True, False]
amrdata.flag_adjoint_tol = 0.001
amrdata.flag2refine = [False, False, True]
amrdata.flag2refine_tol = 0.1

in order to use adjoint flagging only on levels L = 1 and 2, and flagging based
on q on all finer levels. This is a common use case in GeoClaw where we
want to use adjoint flagging to track waves over the ocean on levels up to
L+1=3, but not on the finer grids near the coastal location of interest, where
wave_tolerance flagging is both much cheaper to apply and also properly
identifies onshore locations that are flooded (whereas the adjoint of
the linearized ocean at rest is identically 0 onshore).

@rjleveque
Member Author

@mjberger and I also just discussed the possibility of specifying that the finest level(s) should be frozen beyond some time, after which re-gridding is never done for these level(s). This would be useful e.g. in GeoClaw simulations where the finest levels cover a fixed coastal region at very fine resolution, and frequent re-gridding is both expensive and unnecessary. This may take a bit of work to implement but should be possible.

@rjleveque
Member Author

See clawpack/geoclaw#454 for a related suggestion in GeoClaw, eliminating the automatic creation of regions corresponding to each topo or dtopo file.

@rjleveque
Member Author

Another suggestion: I think in setrun.py, in addition to e.g.

amrdata.amr_levels_max = 3

we should also have something like:

amrdata.amr_levels_max_outside_flagregions = 2

as the default maximum refinement level in portions of the computational domain not covered by any flagregion.

This would be equivalent to specifying a flagregion with spatial extent covering the entire domain and with minlevel = 1, maxlevel = 2 (in the case shown above), but without the user having to set up that flagregion explicitly. This would be convenient and also perhaps clearer to the user. One way to get the equivalent effect with the current tools is sketched below.
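
With the current tools the equivalent effect requires adding a domain-wide
region by hand in setrun.py, e.g. with the classic regions list (a sketch
only; a new-style flagregion as in clawpack/geoclaw#454 could be used the
same way):

# allow at most 2 levels everywhere by default; other (flag)regions can still
# raise maxlevel locally.  Format: [minlevel, maxlevel, t1, t2, x1, x2, y1, y2]
rundata.regiondata.regions.append(
    [1, 2, 0., 1e9,
     clawdata.lower[0], clawdata.upper[0],
     clawdata.lower[1], clawdata.upper[1]])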

Currently the default is that any level up to amr_levels_max is allowed for any cell that is not in any flagregion (whether the cell is flagged or not then depends on other criteria such as flag2refine or Richardson).

Sometimes this is what one wants, but often not. In particular, if the finest levels are only intended to be used in a small part of the domain (e.g. one coastal site for a tsunami simulation) then there might be a flagregion that forces refinement to this level in a small region, but another flagregion would have to be introduced to forbid refinement to this level elsewhere, regardless of the size of the wave.

@mandli
Member

mandli commented Jun 10, 2020

For the former suggestion, were you thinking of specifying a time at which a level would be frozen?

@rjleveque
Member Author

@mandli: Yes, it would be good to be able to specify a time at which each level is frozen. Perhaps we could allow this to also vary with level, with default time infinity, and with the requirement that frozen_time[L+1] <= frozen_time[L] since if a level is frozen then all finer levels must also be frozen.

For the tsunami application I mentioned, we often have a finest level with minlevel==maxlevel but that doesn't turn on until some time t1 just before we expect the tsunami to arrive (based on coarse exploratory runs), and so it would be natural to freeze at time t1.

However, one potentially tricky issue is that the regridding that introduces this finest grid will not in general happen exactly at time t1, but only at the next regridding time after t1. So it has to do this regridding to introduce the fine level in the first place before it is frozen, and hence it's not enough to check if t > frozen_time when deciding whether to regrid this level, at least not if the user sets frozen_time = t1. If the user is aware of this they could specify a frozen_time somewhat larger than t1, if they know how much larger it needs to be (to guarantee that at least one regridding happened between t1 and this time), or we could try to automate this by keeping track of whether it has already regridded after frozen_time and only freezing it after the first regridding following frozen_time.

Or, since in general there would probably only be one such time for each run (or very few at least), we could force the code to adjust the time step if needed to hit that time exactly (as we do for output times), and then force a regridding at exactly that time and freeze afterwards.

None of these is a perfect solution, but any of them could be workable.
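
One way to do the bookkeeping for the automated option above (freeze a level
only after the first regridding that follows its frozen_time), sketched in
Python; the actual logic would live in the Fortran regridding driver, and
regridded_after_freeze is a hypothetical per-level flag:

import math

regridded_after_freeze = {}   # level -> bool (hypothetical bookkeeping)

def should_regrid(level, t, frozen_time):
    """Decide whether to regrid `level` at time t given a per-level frozen_time dict.

    Levels regrid as usual until frozen_time[level] (default: never frozen).
    One more regridding is allowed after that time, so the frozen fine grids
    actually get created, and then the level stays frozen for the rest of the run."""
    tfreeze = frozen_time.get(level, math.inf)
    if t <= tfreeze:
        return True
    if not regridded_after_freeze.get(level, False):
        regridded_after_freeze[level] = True    # first regridding past tfreeze
        return True
    return False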
