Storage cache refinements #808

Open · wants to merge 5 commits into base: master
opm/models/discretization/common/fvbasediscretization.hh (30 changes: 28 additions & 2 deletions)

@@ -878,6 +878,31 @@ public:
        return storageCache_[timeIdx][globalIdx];
    }

+   /*!
+    * \brief Shift the cached storage terms one or more positions back in the history.
+    *
+    * This method should only be called by the time discretization.
+    *
+    * \param numSlots The number of time step slots by which the
+    *                 cached storage terms should be shifted.
+    */
+   void shiftStorageCache(unsigned numSlots = 1)
+   {
+       if (!enableStorageCache()) {
+           return;
+       }
+
+       if (simulator_.problem().recycleFirstIterationStorage()) {
+           return;
+       }
+
+       assert(numSlots > 0);
+
+       for (unsigned timeIdx = 0; timeIdx < historySize - numSlots; ++timeIdx) {
+           storageCache_[timeIdx + numSlots] = storageCache_[timeIdx];
+       }
+   }
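The shifting pattern in shiftStorageCache() can be exercised in isolation. Below is a minimal, self-contained sketch; StorageCacheDemo and its members are hypothetical stand-ins for the model's storageCache_ array, assuming historySize == 2 as with the implicit Euler time discretization.

```cpp
#include <array>
#include <cassert>
#include <vector>

// Hypothetical stand-in for the per-DOF storage cache: one vector of
// cached storage values per history slot (assumes historySize == 2).
constexpr unsigned historySize = 2;

struct StorageCacheDemo {
    std::array<std::vector<double>, historySize> storageCache_;

    // Same shifting pattern as shiftStorageCache(): the entry for time
    // index 0 (current time) becomes the entry for time index 1.
    void shift(unsigned numSlots = 1)
    {
        assert(numSlots > 0);
        for (unsigned timeIdx = 0; timeIdx < historySize - numSlots; ++timeIdx) {
            storageCache_[timeIdx + numSlots] = storageCache_[timeIdx];
        }
    }
};
```

After shift(), slot 1 holds what slot 0 held, mirroring how the current-time cache becomes the previous-time cache at the end of a timestep.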

/*!
* \brief Set an entry of the cache for the storage term.
*
@@ -1472,9 +1497,10 @@ public:
// make the current solution the previous one.
solution(/*timeIdx=*/1) = solution(/*timeIdx=*/0);

-       // shift the intensive quantities cache by one position in the
-       // history
+       // shift the intensive quantities and storage caches by one
+       // position in the history (if active and required)
        asImp_().shiftIntensiveQuantityCache(/*numSlots=*/1);
+       asImp_().shiftStorageCache(/*numSlots=*/1);
}

opm/models/discretization/common/fvbaselocalresidual.hh (60 changes: 28 additions & 32 deletions)

@@ -535,40 +535,36 @@ protected:
        if (elemCtx.enableStorageCache()) {
            const auto& model = elemCtx.model();
            unsigned globalDofIdx = elemCtx.globalSpaceIndex(dofIdx, /*timeIdx=*/0);
-           if (model.newtonMethod().numIterations() == 0 &&
-               !elemCtx.haveStashedIntensiveQuantities())
-           {
-               if (!elemCtx.problem().recycleFirstIterationStorage()) {
-                   // we re-calculate the storage term for the solution of the
-                   // previous time step from scratch instead of using the one of
-                   // the first iteration of the current time step.
-                   tmp2 = 0.0;
-                   elemCtx.updatePrimaryIntensiveQuantities(/*timeIdx=*/1);
-                   asImp_().computeStorage(tmp2, elemCtx, dofIdx, /*timeIdx=*/1);
-               }
-               else {
-                   // if the storage term is cached and we're in the first iteration
-                   // of the time step, use the storage term of the first iteration
-                   // as the one for the solution of the last time step (this assumes
-                   // that the initial guess for the solution at the end of the time
-                   // step is the same as the solution at the beginning of the time
-                   // step. This is usually true, but some fancy preprocessing
-                   // scheme might invalidate that assumption.)
-                   for (unsigned eqIdx = 0; eqIdx < numEq; ++eqIdx)
-                       tmp2[eqIdx] = Toolbox::value(tmp[eqIdx]);
-               }
-
-               Valgrind::CheckDefined(tmp2);
-
-               model.updateCachedStorage(globalDofIdx, /*timeIdx=*/1, tmp2);
-           }
-           else {
-               // if the storage term is cached and we're not looking at the first
-               // iteration of the time step, we take the cached data.
-               tmp2 = model.cachedStorage(globalDofIdx, /*timeIdx=*/1);
-               Valgrind::CheckDefined(tmp2);
-           }
+           for (unsigned eqIdx = 0; eqIdx < numEq; ++eqIdx) {
+               tmp2[eqIdx] = Toolbox::value(tmp[eqIdx]);
+           }
+           if (!elemCtx.haveStashedIntensiveQuantities()) {
[Review thread on the added `if (!elemCtx.haveStashedIntensiveQuantities()) {` line]

Member:
Huh, interesting. I'd not come across that function before. Its documentation is a little hard to comprehend though, since it states that the function

    returns true if NO intensive quantities are stashed

Is that accurate?

Member (Author):
I have also not really come across it. The mechanism is used to stash away data for a cell before perturbing the element context for finite difference derivative evaluations, and then restoring it afterwards, so its removal from the logic made such cases fail.

The doc is wrong...

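The stash/restore mechanism discussed in the thread above can be sketched roughly as follows. All names here (ElementContextDemo, intQuants) are hypothetical; the real element context stashes full intensive-quantity objects rather than a scalar.

```cpp
#include <cassert>

// Rough sketch of the stash/restore pattern: before perturbing a cell's
// primary variables for a finite-difference derivative, the unperturbed
// quantities are stashed; afterwards they are restored instead of being
// recomputed. A scalar stands in for the intensive quantities object.
struct ElementContextDemo {
    double intQuants = 0.0;
    double stashed_ = 0.0;
    bool haveStash_ = false;

    void stashIntensiveQuantities()
    { stashed_ = intQuants; haveStash_ = true; }

    // returns true if intensive quantities ARE stashed
    bool haveStashedIntensiveQuantities() const
    { return haveStash_; }

    void restoreIntensiveQuantities()
    { intQuants = stashed_; haveStash_ = false; }
};
```

This also illustrates why the cache update is guarded by !haveStashedIntensiveQuantities(): while a perturbation is in flight, the context does not hold the unperturbed solution, so caching from it would be wrong.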
+               if (elemCtx.problem().recycleFirstIterationStorage()) {
+                   if (model.newtonMethod().numIterations() == 0) {
+                       // if the storage term is cached and we're in the first iteration
+                       // of the time step, use the storage term of the first iteration
+                       // as the one for the solution of the last time step (this assumes
+                       // that the initial guess for the solution at the end of the time
+                       // step is the same as the solution at the beginning of the time
+                       // step. This is usually true, but some fancy preprocessing
+                       // scheme might invalidate that assumption.)
+                       Valgrind::CheckDefined(tmp2);
+                       model.updateCachedStorage(globalDofIdx, /*timeIdx=*/1, tmp2);
+                   }
+               } else {
+                   model.updateCachedStorage(globalDofIdx, /*timeIdx=*/0, tmp2);
+                   // If we are at the very first timestep, there has never been a
+                   // storage cache shift, and we must also set the timeIdx 1 cache here.
+                   if (elemCtx.simulator().timeStepIndex() == 0 && model.newtonMethod().numIterations() == 0) {
+                       model.updateCachedStorage(globalDofIdx, /*timeIdx=*/1, tmp2);
+                   }
+               }
+           }
+           // if the storage term at the beginning of the time step is cached
+           // from the last time step or we're not looking at the first
+           // iteration of the time step, we take the cached data.
+           tmp2 = model.cachedStorage(globalDofIdx, /*timeIdx=*/1);
+           Valgrind::CheckDefined(tmp2);
        }
        else {
            // if the mass storage at the beginning of the time step is not cached,
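For context on why tmp2 is fetched from the cache at all: it feeds the implicit Euler accumulation term of the residual. A minimal sketch of that contribution for one equation of one degree of freedom (the function name and plain-double signature are illustrative, not the opm-models API):

```cpp
#include <cassert>

// Implicit Euler accumulation contribution:
// (storage(t^{n+1}) - storage(t^n)) / dt.
// storageOld is exactly what cachedStorage(globalDofIdx, /*timeIdx=*/1)
// avoids recomputing from the previous time step's solution.
double accumulationTerm(double storageNew, double storageOld, double dt)
{
    return (storageNew - storageOld) / dt;
}
```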
opm/models/discretization/common/tpfalinearizer.hh (25 changes: 13 additions & 12 deletions)

@@ -608,21 +608,22 @@ private:
        setResAndJacobi(res, bMat, adres);
        // Either use cached storage term, or compute it on the fly.
        if (model_().enableStorageCache()) {
-           // The cached storage for timeIdx 0 (current time) is not
-           // used, but after storage cache is shifted at the end of the
-           // timestep, it will become cached storage for timeIdx 1.
-           model_().updateCachedStorage(globI, /*timeIdx=*/0, res);
-           if (model_().newtonMethod().numIterations() == 0) {
-               // Need to update the storage cache.
-               if (problem_().recycleFirstIterationStorage()) {
+           if (problem_().recycleFirstIterationStorage()) {
+               if (model_().newtonMethod().numIterations() == 0) {
                    // Assumes nothing has changed in the system which
                    // affects masses calculated from primary variables.
                    model_().updateCachedStorage(globI, /*timeIdx=*/1, res);
                } else {
                    Dune::FieldVector<Scalar, numEq> tmp;
                    IntensiveQuantities intQuantOld = model_().intensiveQuantities(globI, 1);
                    LocalResidual::computeStorage(tmp, intQuantOld);
                    model_().updateCachedStorage(globI, /*timeIdx=*/1, tmp);
                }
+           } else {
+               // The cached storage for timeIdx 0 (current time) is
+               // not used at this step, but after storage cache is
+               // shifted at the end of the timestep, it will become
+               // cached storage for timeIdx 1.
+               model_().updateCachedStorage(globI, /*timeIdx=*/0, res);
+               // If we are at the very first timestep, there has never been a
+               // storage cache shift, and we must also set the timeIdx 1 cache here.
+               if (simulator_().timeStepIndex() == 0 && model_().newtonMethod().numIterations() == 0) {
+                   model_().updateCachedStorage(globI, /*timeIdx=*/1, res);
+               }
            }
            res -= model_().cachedStorage(globI, 1);
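The branching above boils down to a small policy for which cache slots get written in each Newton iteration. A hedged restatement of that policy (struct and function names are hypothetical, for illustration only):

```cpp
#include <cassert>

// Which storage-cache slots the linearizer writes, per Newton iteration.
// Slot 0 holds current-time storage (it becomes slot 1 after the
// end-of-step shift); slot 1 holds previous-time storage.
struct CacheUpdatePlan {
    bool writeSlot0;         // cache this iteration's storage at timeIdx 0
    bool writeSlot1FromRes;  // seed timeIdx 1 from this iteration's storage
    bool recomputeSlot1;     // recompute old-time storage from intensive quantities
};

CacheUpdatePlan planCacheUpdate(bool recycleFirstIterationStorage,
                                int newtonIteration,
                                int timeStepIndex)
{
    if (recycleFirstIterationStorage) {
        if (newtonIteration == 0) {
            // reuse the first iteration's storage as the old-time storage
            return {false, true, false};
        }
        return {false, false, true};
    }
    // Without recycling, slot 0 is always refreshed; slot 1 only needs
    // seeding on the very first iteration of the very first timestep,
    // before any shift has ever happened.
    return {true, timeStepIndex == 0 && newtonIteration == 0, false};
}
```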