
Create Generalized Laplace Operator #414

Draft · wants to merge 3 commits into base: master

Conversation

@bugrahantemur (Contributor) commented Apr 28, 2023

This PR introduces a new Laplace operator implementation that can be used by any module in ExaDG.

The requirements that this new operator fulfills were listed in #357. A document with the math behind this operator is also linked in that issue.

In short, the operator is templated for scalar and vector-valued problems, as well as for coefficients that either couple or do not couple the components of the gradient.

For a proof-of-concept, this PR also removes the diffusive operator in the convection-diffusion module and replaces it with the new generalized Laplace operator. Beyond this, the PR touches as few places as possible to keep it manageable for review.


Here are some details about the implementation:

  • Operators that use the implemented kernel (currently the operator itself and the combined operator in the convection-diffusion module) are responsible for implementing their own cell-loop/face-loop functions to decide how to calculate coefficients, how to store them (e.g. using cell-based face loops or not), and how and when to update them.

  • do_boundary_integral_cell_based() is introduced in the base class OperatorBase, fully analogous to the already present do_face_int_integral_cell_based(), as these two functions need to access the cell-based face coefficients, unlike their non-cell-based counterparts.

  • fill_matrix_free_data() in IncNS::SpatialOperatorBase is made virtual and overridden in IncNS::OperatorCoupled, since the generalized Laplace operator requires additional flags to be set if it is used as a preconditioner.


Note: The convection-diffusion module is currently the only place where the operator is used. However, the applications that test this module do not use variable coefficients as the diffusivity is assumed constant, i.e. the variable coefficient functionality is disabled for optimization. During development, these tests were also successfully run with variable coefficients, but this is not part of the PR to keep changes to a minimum.

Closes #357

@bugrahantemur bugrahantemur marked this pull request as ready for review May 2, 2023 10:51
@bugrahantemur (Contributor, Author)

I'd appreciate a review if you can @nfehn @peterrum :)

@nfehn (Member) commented May 3, 2023

Yes, I have it on my agenda, but I am currently struggling to get ExaDG running on my new machine, get tests passing, etc.

@nfehn (Member) commented May 4, 2023

@bugrahantemur Now I think I got a first overview of this PR. I very much appreciate the effort that you made so far.

In my opinion, the current design has the following problems:

  • we have duplicated code between the GeneralizedLaplace::Operator and the CombinedOperator (all the functions calculating the variable coefficient). The code duplication is somewhat substantial, so I believe we really have to avoid this.
  • the implementation seems to be somewhat limited to variable coefficient fields provided via Functions. Extending this to other types of models will probably require changes at several places in the code, because of the code duplication item above.

These problems are kind of solved in the IncNS module, where we have variable viscosity models in the form of turbulence models and generalized Newtonian models (see the recent PR #381). The solution is to use the design principle "composition", i.e. a class that operates on / manipulates the variable coefficient data structure living in Kernel. It is important to not realize this functionality inside the Operator, because this limits flexibility and causes code duplication. In principle, we want to be able to use variable coefficient fields based on an arbitrary underlying function or model, which might depend on other dof-vectors or originate from any "multiphysics" context. This modularity can in my opinion be achieved in an elegant way by composition, see the class IncNS::SpatialOperatorBase with member variables TurbulenceModel and GeneralizedNewtonianModel. The viscous Kernel or Operator, however, should not know where the variable viscosity comes from (analytical function vs. a "complex" model). In principle, one could add another model like AnalyticalViscosityModel derived from ViscosityModelBase and implementing a variable viscosity field provided by the user via a dealii::Function. Note also that in order to be able to combine different models (assuming they are "additive"), the variable viscosity models in the IncNS module implement an add_viscosity() (not a calculate_viscosity()) logic.

If not(coefficient_is_variable), we need to make sure in my opinion that the VariableCoefficient data structure storing coefficients for all q-points is not used, but is instead by-passed using the constant coefficient that is a member variable of OperatorData. Otherwise, I fear that we make the standard case with constant coefficient too slow in terms of computational efficiency, increasing the data transfer from main memory significantly by reading q-point data unnecessarily in the constant coefficient case. This logic with an "optimized code path" for the constant coefficient case should be handled by Kernel (owning the VariableCoefficient data structure) according to the single-responsibility principle. In this sense, the VariableCoefficient object is only a feature of the Kernel that is only used if coefficient_is_variable.

@bugrahantemur (Contributor, Author) commented May 8, 2023

@nfehn Your proposed approach sounds quite good, and a separate object implementing the coefficient calculation can be done easily, but I think one important point may have slipped your attention: the operators inside the multigrid preconditioners.

Namely, the preconditioners operate on a number of actual operators, like the IncNS::MomentumOperator or the ConvDiff::CombinedOperator which exist simultaneously on multiple levels. These operators each create and own their own kernel objects (this works fine as long as every needed information is present in the kernel). If the kernel has mesh-dependent physics, like calculating the penalty parameter in the viscous or the diffusive kernels, update_after_grid_motion is called on the operator, see:

MultigridPreconditioner<dim, Number>::update_operators_after_grid_motion()
{
  for(unsigned int level = this->coarse_level; level <= this->fine_level; ++level)
  {
    get_operator(level)->update_after_grid_motion();
  }
}

which in turn might do something like:

MomentumOperator<dim, Number>::update_after_grid_motion()
{
  if(operator_data.viscous_problem)
    viscous_kernel->calculate_penalty_parameter(this->get_matrix_free(),
                                                this->get_data().dof_index);
}

However, this cannot include updating the coefficients if the operator or the kernel is not supposed to own the coefficient calculation object. Was this clear? Otherwise we can maybe discuss in person.

@bugrahantemur (Contributor, Author)

I pushed a new design trying to fit into our constraints. Here are some points:

  • I implemented a new generic AnalyticalTensorCoefficientFunctionModel which is templated for the kernels. This object can be used to evaluate a dealii::Function and set the values at the quadrature points for any kernel that supports the operations set_coefficients_cell() etc.

  • In the kernel operator I added an enum to select different coefficient models. Right now only ConstantFunction and VariableFunction are supported. This was the only way I found which allows the kernel data to stay portable between different instances of the kernel, which is very important due to the MultiGrid implementation (see my comment from above). Other models, e.g. computing the coefficients from a solution vector, may add the desired dof_index to the kernel data (analogous to the convective kernel design) and implement these models independently from the kernel.

This design allowed an elegant way of avoiding code duplication, keeping the FE operators clean (as @nfehn highlighted), and also keeping the kernels as small as possible in terms of implemented algorithms but also as detailed as necessary in the current framework.

@bugrahantemur bugrahantemur requested a review from nfehn May 11, 2023 08:43
@nfehn (Member) commented May 26, 2023

However, this cannot include updating the coefficients if the operator or the kernel is not supposed to own the coefficient calculation object. Was this clear? Otherwise we can maybe discuss in person.

As a general answer to this question: You are right that multigrid has its own design challenges. In ExaDG, the general/current strategy is to ask the fine-level operator for the data, and then interpolate the data to coarser multigrid levels. We do this currently e.g. to interpolate the velocity vector (IncNS) or displacement vector (Structure) to coarser levels. These vectors originate from the linearization inside the Newton solver. Generally, I would argue that one does not even have the information to recompute the variable coefficient on the coarser levels as it is done on the fine level. The variable coefficient might depend on the solution vector, which is only known on the fine level and not on the coarser levels. If I take the example of a turbulence SGS model: Interpolating the fine-level turbulent viscosity to coarser levels is something different than interpolating the velocity to coarser levels and recomputing the turbulent viscosity on the coarser levels (using the viscosity model). However, I think the former is what one wants to have for multigrid?
In the Structure module, the operator owns the MaterialHandler, which could be seen as similar to the variable coefficient model. However, I think this operator is not used in a "combined" fashion with other operators. Since we want to write a generalized Laplace operator, I believe we cannot follow the design chosen in the Structure module.

So in my opinion the procedure to enable multigrid should be:

  1. Ask the fine-level operator for the data
  2. Interpolate fine-level data to coarser levels e.g. in MyModule::MultigridPreconditioner::update() (this is an open problem in ExaDG, where implementations still have to be developed; questions: q-point vs dof-data layout for interpolation; which type of interpolation for structure-preservation)
  3. Inject the coarse data into the coarse-level operators/kernels.
  4. Ready for multigrid

The philosophy for our geometric multigrid algorithm implementation in ExaDG is:

The fine-level operator must be able to provide all the data to construct the coarse-level multigrid operators.

The general design of the PDE operator, however, should not be driven by the multigrid preconditioner. To solve the current problem with an analytical variable coefficient model regarding multigrid: I see no harm if the multigrid preconditioner for a certain module depends on the spatial operator of this module, which is the case for most multigrid preconditioners in ExaDG. In your case and to avoid the problem of interpolating fine-level data to coarse levels, one could easily pass the analytical variable coefficient model to this particular multigrid preconditioner during initialization. Just as the variable coefficient model is "module-dependent", the multigrid preconditioners may also be module-dependent. I see this as the burden/price of geometric multigrid.

Comment on lines 41 to 53
template<typename T, int coefficient_rank, int dim, typename Number>
static inline DEAL_II_ALWAYS_INLINE //
  T
  coeff_mult(dealii::Tensor<coefficient_rank, dim, Number> const & coeff, T const & x)
{
  if constexpr(coefficient_rank == 4)
    return dealii::double_contract<2, 0, 3, 1>(coeff, x);
  else
    return coeff * x;
}
Member:

"Multiplication of what with coefficients of different ranks". The word "different" is easy to misunderstand. I would write "... from the left with a coefficient tensor of rank `coefficient_rank`".

In my opinion, the variable names T, coeff_mult, x are too short. Also, 2, 0, 3, 1 is somewhat hard to understand.

Contributor (Author):

I'll change the sentence, rename the variables and put a comment about the double contraction👍

Member:

Let's also make the function clearer by using full names.

Comment on lines 69 to 100

//!
CoefficientModel coefficient_model;

//!
std::shared_ptr<dealii::Function<dim>> coefficient_function;
Member:

See comment #414 (comment)

Comment on lines 78 to 80
* Generalized Laplace kernel. This class is responsible for providing the core pieces of
* information required to evaluate the Laplace operator, i.e. the fluxes, numerical parameters
* (like the internal penalty parameter), and the coefficients.
Member:

"interior penalty" should be explained for this generalized Laplace operator. Does the operator support both continuous and discontinuous Galerkin discretizations?

Contributor (Author):

I've done the implementation with only DG in my head. So I'm not sure if there are any intricacies apart from the interior penalty that I missed about continuous Galerkin. What do you think?

* this is equal to 1.
* @tparam coupling_coefficient Boolean switch to differentiate if the coefficient type couples
* the solution vector components or not.
* - `false` always results in a scalar coefficient,
Member:

I do not fully understand this. What about a 1-component solution field (gradient has dim components) with a diagonal 2-tensor as variable coefficient? The result of coeff_mult() would again be a vector with dim components. Would one call this a "coupling"?

Contributor (Author):

See answer #414 (comment)

Member:

Would the current implementation support a diagonal 2-tensor as variable coefficient? Please add some documentation if not already done.

@@ -235,6 +237,147 @@ class VariableCoefficients
bool store_cell_based_face_data{false};
};

template<typename kernel_type>
Member:

I would like to avoid this dependency of the analytical coefficient model on the kernel. The connection is actually the VariableCoefficients class and not the kernel in my opinion, just as you placed this functionality in the file variable_coefficients.h. I would not template VariableCoefficients here, but let the analytical model explicitly use the class VariableCoefficients.

Comment on lines 311 to 317
AnalyticalTensorCoefficientFunctionModel<This> coefficient_model{};
coefficient_model.calculate_coefficients(
  matrix_free,
  quad_index,
  data.calculate_and_store_cell_based_face_coefficients,
  this,
  data.coefficient_function);
Member:

This code shows that kernel and the analytical model are strongly coupled from an implementation point of view (see arguments This as template and this; comment https://github.com/exadg/exadg/pull/414/files?diff=unified&w=0#r1206535671), while the general goal should be to minimize dependencies. I think the design would improve if kernel does not depend on the analytical model (and the other way round), however, both the kernel and the analytical model may depend on VariableCoefficients (as the low-level data structure).

Comment on lines 763 to 770
static constexpr unsigned int value_rank = (n_components > 1) ? 1 : 0;
static constexpr unsigned int coefficient_rank =
(coupling_coefficient) ? ((n_components > 1) ? 4 : 2) : 0;

using value_type = dealii::Tensor<value_rank, dim, scalar>;
using gradient_type = dealii::Tensor<value_rank + 1, dim, scalar>;

using coefficient_type = dealii::Tensor<coefficient_rank, dim, scalar>;
Member:

can one ask the kernel for these types (to avoid this duplication of code)?

Contributor (Author):

I guess I could define them in the namespace as alias templates, so outside the classes instead of inside them. This is maybe even better. Not sure if one could ask the kernel, probably would work as well :)

Member:

see comment #414 (comment)

Comment on lines 747 to 753
* @tparam n_components Number of components of the solution (vector) field. Solution is scalar if
* this is equal to 1.
* @tparam coupling_coefficient Boolean switch to differentiate if the coefficient type couples
* the solution vector components or not.
* - `false` always results in a scalar coefficient,
* - `true` results in a 2-rank tensor coefficient if `n_components = 1` and in a 4-rank tensor
* coefficient if `n_components > 1`.
Member:

We should find a way to make this information unique, e.g. by referring to the kernel.

@nfehn nfehn added the new feature New feature is implemented or requested label Jun 1, 2023
Comment on lines 322 to 323
AnalyticalTensorCoefficientFunctionEvaluator<dim, Number, CoefficientType::rank>{
  matrix_free, quad_index, data.coefficient_function, variable_coefficients}();
Member:

Syntax: I find this syntax very complicated. Writing analytical_coefficient_function.evaluate(variable_coefficients) would be clearer / easier to understand (without need for documentation). Extracting that this line of code modifies variable_coefficients took me several minutes. Hence, this style of programming requires much more documentation in my opinion. I doubt this additional effort in documentation justifies the one line of code saved by this programming style.

Design: More importantly, this design does not solve my main criticism from the last review, namely that GeneralizedLaplace::Kernel should in my opinion not depend on AnalyticalTensorCoefficientFunctionEvaluator.

Example: When adding a new type to CoefficientModel, the kernel/operator has to be recompiled. I consider this an unnecessary dependency.

If we do not want to provide access to Kernel::variable_coefficients via a public getter returning a reference to VariableCoefficients (in order to fill the variable coefficients from "outside"), a lambda function with argument VariableCoefficients & could be passed to the kernel during construction/initialization, which allows to compute the coefficient field.

What to do if one wants to realize an analytical coefficient function that is time dependent? I believe one has to touch again Kernel with the current design, but this is unnecessary. Would you adjust the constructor of AnalyticalTensorCoefficientFunctionEvaluator and make double time a member variable of that class? This is probably not a good design / unnecessary, because one rather wants to call the evaluate() function suggested above with a second argument time.

if(data.coefficient_model == CoefficientModel::ConstantFunction)
{
  constant_coefficient =
    FunctionEvaluator<CoefficientType::rank, dim, Number>::value(*data.coefficient_function,
Member:

For CoefficientModel::Constant, why should one provide a dealii::Function at all? Then, the dereferencing * will fail.

This highlights another disadvantage of the currently chosen "framework" design. It appears odd to add a separate member constant_coefficient to KernelData if this data structure already contains std::shared_ptr<dealii::Function<dim>> coefficient_function;. So to avoid such a redundancy, one forces "everyone" to use the dealii::Function interface.

However, the kernel does not require such information! Only the classes using the kernel or operator (which might originate from vastly different application scenarios) need this information.

Comment on lines 36 to 59
// constant expression parameters
template<int n_components>
constexpr unsigned int value_rank = (n_components > 1) ? 1 : 0;

template<int n_components, bool coefficient_is_scalar>
constexpr unsigned int coefficient_rank = (coefficient_is_scalar) ? 0 :
((n_components > 1) ? 4 : 2);

// alias templates
template<typename Number>
using scalar = dealii::VectorizedArray<Number>;

template<int dim, typename Number>
using vector = dealii::Tensor<1, dim, scalar<Number>>;

template<int dim, typename Number, int n_components>
using value_type = dealii::Tensor<value_rank<n_components>, dim, scalar<Number>>;

template<int dim, typename Number, int n_components>
using gradient_type = dealii::Tensor<value_rank<n_components> + 1, dim, scalar<Number>>;

template<int dim, typename Number, int n_components, bool coefficient_is_scalar>
using coefficient_type =
dealii::Tensor<coefficient_rank<n_components, coefficient_is_scalar>, dim, scalar<Number>>;
Member:

I am not a fan of such global definitions. To give an example: Why should one need to know outside the implementation of kernel/operator that the implementation is based on dealii::VectorizedArray?


calculate_penalty_parameter(matrix_free, dof_index);

coefficient_is_variable = data.coefficient_model != CoefficientModel::ConstantFunction;
Member:

Can be considered error-prone when adding new options to CoefficientModel. Again, I see the origin of the problem in an unnecessary dependency.

Comment on lines 255 to 256
* Calculates and returns the `flux` on the faces which should be weighted with the shape
* function values.
Member:

It is not clear to me why the documentation of these functions should explain "weighted by ...". Implementation of the weak form is in my opinion the responsibility of the Operator.

Contributor (Author):

I wrote the documentation like this to make it clear where the name of the function comes from. Because that is our motivation to name this function get_value_flux(). The caller shall multiply the return value with the shape function values. The naming of the function is not obvious from its body, which for example also works with gradients here. I just wanted to prevent any confusion caused by the function name.


GradientType const average_gradient = 0.5 * (gradient_m + gradient_p);

return -coeff_mult(coefficient, (average_gradient - tau * jump_tensor)) * normal;
Member:

where does the minus sign next to return come from?

Contributor (Author):

That's because I didn't want to implement the operator as the negative Laplace operator. But of course I can change it to be the negative Laplace operator as in other parts of ExaDG.

Comment on lines 320 to 321
value_flux_m += value_flux;
value_flux_p += -value_flux; // - sign since n⁺ = -n⁻
Member:

I do not like that the present PR makes such subtle sign changes.

Comment on lines 40 to 42
template<int n_components, bool coefficient_is_scalar>
constexpr unsigned int coefficient_rank = (coefficient_is_scalar) ? 0 :
((n_components > 1) ? 4 : 2);
Member:

I think explanations/motivation for these numbers would be helpful here.

@bugrahantemur (Contributor, Author)

As of now, there is a new version of the implementation for the Generalized Laplace operator.

In the new implementation,

  • the kernel and the coefficient calculation are completely decoupled,
  • the type aliases are put in the kernel, the operator inherits these from the kernel,
  • the kernel holds a variant (an actual std::variant) of either a constant coefficient or a VariableCoefficients object,
  • there are Get***Coefficient visitors for the variant for efficient access to the coefficients from the variant in a namespace called VarCoeffUtils,
  • in the same namespace, an AnalyticalTensorCoefficientFunctionEvaluator is implemented, which can fill a VariableCoefficients object based on a dealii::Function,
  • the ConvDiff::Operator implements initialize_diffusivity() and calculate_diffusivity() to manipulate the coefficients in the diffusive kernel,
  • these two functions implement the logic for different diffusivity calculation models (a module enum parameter),
    • Constant, AnalyticalTimeFunction, AnalyticalTimeAndSpaceFunction, only the last one requires VariableCoefficients
  • the Multigrid preconditioner of ConvDiff gets a constant diffusivity and runs with this, even in the case of variable diffusivity,
  • the pressure convection diffusion operator in IncNS can now actually work with variable viscosities.
    • as a side effect, it was necessary to make the viscosity a tensor of rank 0 in the viscous kernel to realize this.

@bugrahantemur (Contributor, Author)

I'm wondering if std::visit to get coefficients on std::variant<VariableCoefficients<Coeff>, Coeff> in the innermost loop is going to ruin performance as it won't be inlined...

@@ -50,12 +51,14 @@ MultigridPreconditioner<dim, Number>::initialize(
PDEOperator const & pde_operator,
MultigridOperatorType const & mg_operator_type,
bool const mesh_is_moving,
Number const mg_diffusivity,
Member:

I think one should ask the pde_operator for the diffusivity instead of extending the interface.

Comment on lines +250 to +251
auto & diffusivity = pde_operator_level->get_diffusivity();
diffusivity = mg_diffusivity;
Member:

Why return a non-const reference and then overwrite the value? I think I would prefer a set-function, as we usually do in ExaDG.

Comment on lines +839 to +843
// update the diffusivity coefficients which depend on the quadrature point coordinates
// and may depend on the time, e.g. in case of an analytical coefficient function.
// However, the ALE interface does not support passing in the time here, so the coefficients
// are calculated with the simulation start time.
calculate_diffusivity(this->param.start_time);
Member:

The problem is that this code is not placed correctly in update_after_grid_motion(). As a user, I cannot expect that for a time-dependent problem on a non-moving mesh, the calculation of a time-dependent diffusivity is done in a function that explicitly relates to grid-motion (ALE) problems.


diffusivity = VariableDiffusivity{};

std::get<VariableDiffusivity>(diffusivity)
Member:

what is get doing here? Could you explain and/or add a comment?

@nfehn (Member) commented Jul 25, 2023

Let me raise a general question: I see the novelty of the present PR in the introduction of a generalized Laplace operator. However, it does not really seem to fit optimally that this new implementation is then used in the present PR to replace the diffusive operator in the ConvDiff module. So far, the ConvDiff module does not support variable diffusivities, while we have variable diffusivities/viscosities in the IncNS module, which is therefore much more suitable as an application example for a generalized, variable coefficient Laplace operator. As a consequence, the present PR has to introduce new functionality for the ConvDiff module (functions like calculate_viscosity() in the spatial operator, changes to the multigrid preconditioner), even though there seems to be no motivation for such functionality currently from an application perspective. As a consequence, I believe the present PR mixes too many topics, which makes code review unnecessarily complex and somewhat hinders the progress in the sense of merging new functionality step by step. So I want to raise the question whether it would be better to first introduce the generalized Laplace operator before using it in a concrete module in ExaDG.

@nfehn (Member) commented Jul 25, 2023

Related to the comment #357 (comment), I believe the present PR needs to decide whether to support for now variable coefficients for the generalized Laplace operator or not. I mainly see the following two options:

  • If not, using the new generalized Laplace operator for the diffusive operator in ConvDiff already in the present PR is reasonable since this operator does not yet support variable coefficients. Then, no redundancies are introduced by the present PR. The present PR replaces ConvDiff::DiffusiveOperator by a more general implementation. The present PR does, however, not extend the functionality of the ConvDiff module.
  • If yes, the implementation of the generalized Laplace operator introduced here probably needs to care more about the DG methodology for the generalized Laplace operator, and this should probably be introduced stand-alone, i.e. the present PR should really focus on such aspects and not on changes in the ConvDiff module. This would imply that we have for now some redundancy with IncNS::ViscousOperator, but this is acceptable since we have the plan to replace this functionality later by the new variant. If no one wants to do this in the future, one would simply remove the new generalized Laplace operator again from ExaDG, which is easy since it is not used so far. Of course, this is not intended, but I just want to describe the general procedure to avoid code duplications in ExaDG over a longer period. In any case, I think this is better than introducing another variable coefficient operator in ConvDiff actually not needed right now.

Comment on lines +67 to +72
if(this->param.preconditioner_pressure_block ==
   SchurComplementPreconditioner::PressureConvectionDiffusion)
{
  matrix_free_data.append_mapping_flags(
    GeneralizedLaplace::Kernel<dim, Number>::get_mapping_flags(true, true));
}
Member

Could you explain why this is needed? Why didn't we need it before, or has it been a bug before?

Comment on lines +783 to +790
// Set diffusivity
auto & diffusivity = pressure_conv_diff_operator->get_diffusivity();

// If the viscosity is variable, copy the reference from the viscous kernel
if(this->param.viscosity_is_variable())
  diffusivity = this->viscous_kernel->get_viscosity_coefficients();
else // just set the constant viscosity
  diffusivity = this->param.viscosity;
Member

I find it unintuitive that one cannot see from this code whether just a reference or the whole data is copied. I mentioned somewhere else that getter functions returning non-const references are, in my opinion, also more difficult to understand.

Shouldn't such code look like this:

if(this->viscous_kernel->viscosity_is_variable())
  pressure_conv_diff_operator->set_diffusivity(this->viscous_kernel->get_variable_viscosity());
else
  pressure_conv_diff_operator->set_diffusivity(this->viscous_kernel->get_const_viscosity());

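To make the suggested setter-based interface concrete, here is a minimal, self-contained sketch (all names are illustrative, not ExaDG code): the operator stores either a constant coefficient by value or a pointer to shared variable-coefficient storage, and overloaded setters make the copy-value vs. share-reference distinction explicit at the call site.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of an operator exposing overloaded setters instead of
// a non-const getter. Not the actual ExaDG class.
struct PressureConvDiffOperatorSketch
{
  // Constant-viscosity case: copy the scalar value.
  void set_diffusivity(double const constant_viscosity)
  {
    is_variable    = false;
    constant_value = constant_viscosity;
  }

  // Variable-viscosity case: store a pointer to the kernel's coefficient
  // storage, so no data is copied and later updates are seen automatically.
  void set_diffusivity(std::vector<double> const & variable_viscosity)
  {
    is_variable     = true;
    variable_values = &variable_viscosity;
  }

  double coefficient(unsigned int const q) const
  {
    return is_variable ? (*variable_values)[q] : constant_value;
  }

  bool                        is_variable     = false;
  double                      constant_value  = 0.0;
  std::vector<double> const * variable_values = nullptr;
};
```

Overload resolution picks the intended variant from the argument type, so a reader of the call site immediately sees whether a value is copied or a reference is shared.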
Comment on lines +246 to +391
};

private:
  /**
   * Performs cell and quadrature point loops to calculate and store the variable coefficients.
   */
  void
  cell_loop_set_coefficients(dealii::MatrixFree<dim, Number> const & matrix_free,
                             VectorType &,
                             VectorType const &,
                             Range const & cell_range)
  {
    IntegratorCell integrator(matrix_free, {}, quad_index);

    for(unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
    {
      integrator.reinit(cell);

      for(unsigned int q = 0; q < integrator.n_q_points; ++q)
      {
        auto const coefficient =
          FunctionEvaluator<coefficient_rank, dim, Number>::value(coefficient_function,
                                                                  integrator.quadrature_point(q),
                                                                  time);

        variable_coefficients.set_coefficient_cell(cell, q, coefficient);
      }
    }
  }

  /**
   * Performs face and quadrature point loops to calculate and store the variable coefficients.
   */
  void
  face_loop_set_coefficients(dealii::MatrixFree<dim, Number> const & matrix_free,
                             VectorType &,
                             VectorType const &,
                             Range const & face_range)
  {
    IntegratorFace integrator(matrix_free, true, {}, quad_index);

    for(unsigned int face = face_range.first; face < face_range.second; ++face)
    {
      integrator.reinit(face);

      for(unsigned int q = 0; q < integrator.n_q_points; ++q)
      {
        auto const coefficient =
          FunctionEvaluator<coefficient_rank, dim, Number>::value(coefficient_function,
                                                                  integrator.quadrature_point(q),
                                                                  time);

        variable_coefficients.set_coefficient_face(face, q, coefficient);
      }
    }
  }

  /**
   * Performs cell-based face and quadrature point loops to calculate and store the variable
   * coefficients.
   */
  void
  cell_based_loop_set_coefficients(dealii::MatrixFree<dim, Number> const & matrix_free,
                                   VectorType &,
                                   VectorType const &,
                                   Range const & cell_range)
  {
    IntegratorFace integrator(matrix_free, true, {}, quad_index);

    unsigned int const n_faces_per_cell =
      matrix_free.get_dof_handler().get_triangulation().get_reference_cells()[0].n_faces();

    for(unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
    {
      for(unsigned int face = 0; face < n_faces_per_cell; ++face)
      {
        integrator.reinit(cell, face);

        unsigned int const cell_based_face_index = integrator.get_current_cell_index();

        for(unsigned int q = 0; q < integrator.n_q_points; ++q)
        {
          auto const coefficient =
            FunctionEvaluator<coefficient_rank, dim, Number>::value(coefficient_function,
                                                                    integrator.quadrature_point(q),
                                                                    time);

          variable_coefficients.set_coefficient_face_cell_based(cell_based_face_index,
                                                                q,
                                                                coefficient);
        }
      }
    }
  }

  dealii::MatrixFree<dim, Number> const & matrix_free;
  unsigned int const                      quad_index;
  dealii::Function<dim> &                 coefficient_function;
  double const                            time;
  VariableCoefficients<CoefficientType> & variable_coefficients;
};
Member

This should not be realized inside variable_coefficients.h.

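For context on the unnamed VectorType parameters in the loop functions above: dealii::MatrixFree loop interfaces expect callbacks with the fixed signature (matrix_free, dst, src, range), so coefficient initialization has to accept the vector arguments even though it ignores them. A minimal, self-contained mock of this pattern (illustrative stand-ins, not deal.II or ExaDG code):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Illustrative stand-ins for the deal.II types.
using VectorType = std::vector<double>;
using Range      = std::pair<unsigned int, unsigned int>;

struct MatrixFreeMock
{
  unsigned int n_cells = 4;

  // Mimics the cell_loop interface: every callback must accept
  // (matrix_free, dst, src, range), even if it only needs the range.
  template<typename Callback>
  void cell_loop(Callback && callback, VectorType & dst, VectorType const & src) const
  {
    callback(*this, dst, src, Range{0, n_cells});
  }
};

// Analogue of cell_loop_set_coefficients: fills coefficient storage and
// ignores the vector arguments, hence the unnamed parameters.
struct CoefficientSetter
{
  std::vector<double> coefficients;

  void cell_loop_set_coefficients(MatrixFreeMock const &,
                                  VectorType &,
                                  VectorType const &,
                                  Range const & cell_range)
  {
    coefficients.resize(cell_range.second);
    for(unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
      coefficients[cell] = 2.0 * cell; // placeholder for FunctionEvaluator::value()
  }
};
```

After routing the member function through the mock's cell_loop with empty dummy vectors, the coefficient storage holds one value per cell; in ExaDG the same pattern runs through dealii::MatrixFree.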
Comment on lines +283 to +319
/**
 * Returns the coefficient for the requested (@p cell index, @p q point index)
 * pair.
 *
 * If the coefficient is not variable, returns the constant coefficient.
 */
CoefficientType
get_coefficient_cell(unsigned int const cell, unsigned int const q) const
{
  return std::visit(VarCoeffUtils::GetCoefficientCell<CoefficientType>{cell, q}, coefficients);
}

/**
 * Returns the coefficient for the requested (@p face index, @p q point index)
 * pair.
 *
 * If the coefficient is not variable, returns the constant coefficient.
 */
CoefficientType
get_coefficient_face(unsigned int const face, unsigned int const q) const
{
  return std::visit(VarCoeffUtils::GetCoefficientFace<CoefficientType>{face, q}, coefficients);
}

/**
 * Returns the coefficient for the requested (@p face index (cell-based), @p q
 * point index) pair.
 *
 * If the coefficient is not variable, returns the constant coefficient.
 */
CoefficientType
get_coefficient_face_cell_based(unsigned int const cell_based_face, unsigned int const q) const
{
  return std::visit(VarCoeffUtils::GetCoefficientFaceCellBased<CoefficientType>{cell_based_face,
                                                                                q},
                    coefficients);
}
Member

To be honest, I do not see the advantages of the variant/visitor design pattern here. In my opinion, one needs to write more lines of code, and the code is more difficult to understand than a very simple if-else logic. Since we are here inside the generalized Laplace operator, there is no concern right now that this if-else logic needs to be duplicated in multiple places. And even in such a case, one could introduce free functions in the future, but only once we need this complexity. So where is the advantage of the present design? Defining two functions operator() with the same name is essentially a complicated way of implementing an if-else logic.

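To make the trade-off concrete, here is a minimal, self-contained sketch (names and the q-point layout are hypothetical, not the ExaDG implementation) of the two alternatives being discussed: a std::variant/std::visit lookup versus the plain if-else logic. Both return the constant coefficient when no variable data is stored.

```cpp
#include <cassert>
#include <variant>
#include <vector>

using CoefficientType = double;

// Variant/visitor version: one callable struct whose operator() overloads
// handle the constant and the variable case.
struct GetCoefficientCell
{
  unsigned int cell;
  unsigned int q;

  CoefficientType operator()(CoefficientType const & constant) const { return constant; }
  CoefficientType operator()(std::vector<CoefficientType> const & table) const
  {
    return table[cell * 2 + q]; // toy layout: 2 q-points per cell
  }
};

CoefficientType
get_coefficient_visit(std::variant<CoefficientType, std::vector<CoefficientType>> const & c,
                      unsigned int const cell, unsigned int const q)
{
  return std::visit(GetCoefficientCell{cell, q}, c);
}

// If-else version: same behavior with a plain flag and branch.
struct SimpleCoefficients
{
  bool is_variable = false;
  CoefficientType constant = 0.0;
  std::vector<CoefficientType> table;

  CoefficientType get(unsigned int const cell, unsigned int const q) const
  {
    if(is_variable)
      return table[cell * 2 + q]; // same toy layout as above
    else
      return constant;
  }
};
```

Both variants dispatch on "constant vs. variable"; the visitor moves that dispatch into overload resolution on the stored alternative, while the if-else keeps it as an explicit branch.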
Comment on lines +402 to +403
unsigned int const cell;
unsigned int const q;
Member

In terms of code design, I am not convinced that it is a good idea for the most complex case (with face, q variables) to dictate the "interface" also for the simplest case of a constant coefficient (not even mentioning that it is quite hard to identify an "interface" from this struct definition). While I do not see many other variants to be realized in the future, I also do not understand the motivation for the present design if we had more variants in the future. If all these variants have different interfaces, the interface of this struct will grow with the number of variants, just as a simple if-else logic would.

@bugrahantemur
Contributor Author

I think this PR is stuck right now between trying to find a use for the generalized Laplace operator and the features it needs to have due to its original motivation, which was to be used in the context of the Darcy equation. I am strongly against merging such a big contribution without using it anywhere. On the other hand, trying to use it with existing modules, as pointed out, then causes problems, for example by introducing features that are not motivated by an application. The interfaces with the modules become a problem as well.

In light of this, I think it is wiser that I further develop this operator together with the Darcy module in my own repository, and use and test it there. Also considering external constraints like development time, I think this is the best way to go. After the operator has been used there successfully with the Darcy module, we can later decide which parts of it we want to have in ExaDG and in what specific form. We can then import parts here step by step. This would be easier for everybody, I guess.

So, I would actually suggest closing this PR for now. What do you think @nfehn ?

@nfehn
Member

nfehn commented Aug 9, 2023

Overall, I think we made progress in the course of this PR and developed ideas that I consider useful, so it would be somewhat sad if we do not finalize things, fully or in parts. I expressed my ideas regarding the current status and content-related aspects here: #414 (comment) and #414 (comment).

Realistically, the developments in ExaDG will go on, and it might happen that the codes then start to diverge more and more. I see ExaDG also as a platform for collaboration and exchange. That might become more difficult when working in separate repositories. My very personal assessment is to continue in ExaDG. I think it will pay off.

I want to emphasize that, in the end, it is totally your own decision how to proceed, because only you know your constraints. I can offer help/advice regarding DG discretizations / variable coefficients (and, potentially, also software design), but due to constraints on my side I cannot invest much time if things are not going to be brought into ExaDG in the near future.

@bugrahantemur
Contributor Author

Yes, we have come a long way through our continuous efforts, and that's great. What I'm talking about is just going to be transient (I'm planning something like 2-3 months maximum). In the meantime, I will keep up with the developments in the main repo, but I consider this a necessary step for the speed of development I need right now (conference mid-September 😅). As soon as I have things working and have some results, I want to put it all into the main repo, just as was our plan all along. Of course, it will all build on the state (of this PR) we have achieved until now.

@peterrum peterrum marked this pull request as draft September 17, 2023 06:42
Labels
new feature New feature is implemented or requested
Development

Successfully merging this pull request may close these issues.

Poisson problem with variable coefficients
2 participants