Create Generalized Laplace Operator #414
Conversation
Yes, I have it on my agenda, but I am currently struggling to get ExaDG running on my new machine, get tests passing, etc.
include/exadg/convection_diffusion/spatial_discretization/operator.cpp
@bugrahantemur Now I think I got a first overview of this PR. I very much appreciate the effort that you have made so far. In my opinion, the current design has the following problems: …

These problems are kind of solved in the …
@nfehn Your proposed approach sounds quite good, and a separate object implementing the coefficient calculation can be done easily, but I think there is an important point which may have slipped your attention, which is the operators inside the multigrid preconditioners. Namely, the preconditioners operate on a number of actual operators, like the …

Lines 234 to 240 in 9aba126

which in turn might do something like:

Lines 132 to 137 in 9aba126

However, this cannot include updating the coefficients if the operator or the kernel is not supposed to own the coefficient calculation object. Was this clear? Otherwise we can maybe discuss in person.
Force-pushed from 9da9103 to 7955cba (compare)
I pushed a new design trying to fit into our constraints. Here are some points:
This design allowed an elegant way of avoiding code duplication, keeping the FE operators clean (as @nfehn highlighted), and also keeping the kernels as small as possible in terms of implemented algorithms, but as detailed as necessary in the current framework.
As a general answer to this question: You are right that multigrid has its own design challenges. In ExaDG, the general/current strategy is to ask the fine-level operator for the data and then interpolate the data to coarser multigrid levels. We do this currently, e.g., to interpolate the velocity vector (IncNS) or displacement vector (Structure) to coarser levels. These vectors originate from the linearization inside the Newton solver. Generally, I would argue that one does not even have the information to recompute the variable coefficient on the coarser levels as it is done on the fine level. The variable coefficient might depend on the solution vector, which is only known on the fine level and not on the coarser levels. If I take the example of a turbulence SGS model: interpolating the fine-level turbulent viscosity to coarser levels is something different than interpolating the velocity to coarser levels and recomputing the turbulent viscosity on the coarser levels (using the viscosity model). However, I think the former is what one wants to have for multigrid. So in my opinion the procedure to enable multigrid should be:
The philosophy for our geometric multigrid algorithm implementation in ExaDG is: The fine-level operator must be able to provide all the data to construct the coarse-level multigrid operators. The general design of the PDE operator, however, should not be driven by the multigrid preconditioner. To solve the current problem with an analytical variable coefficient model regarding multigrid: I see no harm if the multigrid preconditioner for a certain module depends on the spatial operator of this module, which is the case for most multigrid preconditioners in ExaDG. In your case, and to avoid the problem of interpolating fine-level data to coarse levels, one could easily pass the analytical variable coefficient model to this particular multigrid preconditioner during initialization. Just as the variable coefficient model is "module-dependent", the multigrid preconditioners may also be module-dependent. I see this as the burden/price of geometric multigrid.
template<typename T, int coefficient_rank, int dim, typename Number>
static inline DEAL_II_ALWAYS_INLINE //
  T
  coeff_mult(dealii::Tensor<coefficient_rank, dim, Number> const & coeff, T const & x)
{
  if constexpr(coefficient_rank == 4)
    return dealii::double_contract<2, 0, 3, 1>(coeff, x);
  else
    return coeff * x;
}
"Multiplication of what with coefficients of different ranks": the word "different" is easy to misunderstand. I would write "... from the left with a coefficient tensor of rank `coefficient_rank`".

In my opinion, the variable names `T`, `coeff_mult`, `x` are too short. Also, `2, 0, 3, 1` is somewhat hard to understand.
I'll change the sentence, rename the variables, and add a comment about the double contraction 👍
Let's also make the function clearer by using full names.
//!
CoefficientModel coefficient_model;

//!
std::shared_ptr<dealii::Function<dim>> coefficient_function;
See comment #414 (comment)
* Generalized Laplace kernel. This class is responsible for providing the core pieces of
* information required to evaluate the Laplace operator, i.e. the fluxes, numerical parameters
* (like the internal penalty parameter), and the coefficients.
"interior penalty" should be explained for this generalized Laplace operator. Does the operator support both continuous and discontinuous Galerkin discretizations?
I've done the implementation with only DG in mind. So I'm not sure if there are any intricacies apart from the interior penalty that I missed about continuous Galerkin. What do you think?
* this is equal to 1.
* @tparam coupling_coefficient Boolean switch to differentiate if the coefficient type couples
* the solution vector components or not.
* - `false` always results in a scalar coefficient,
I do not fully understand this. What about a 1-component solution field (gradient has dim components) with a diagonal 2-tensor as variable coefficient? The result of `coeff_mult()` would again be a vector with dim components. Would one call this a "coupling"?
See answer #414 (comment)
Would the current implementation support a diagonal 2-tensor as variable coefficient? Please add some documentation if not already done.
@@ -235,6 +237,147 @@ class VariableCoefficients
  bool store_cell_based_face_data{false};
};
template<typename kernel_type> |
I would like to avoid this dependency of the analytical coefficient model on the kernel. The connection is actually the `VariableCoefficients` class and not the kernel in my opinion, just as you placed this functionality in the file variable_coefficients.h. I would not template on the kernel here, but let the analytical model explicitly use the class `VariableCoefficients`.
AnalyticalTensorCoefficientFunctionModel<This> coefficient_model{};
coefficient_model.calculate_coefficients(
  matrix_free,
  quad_index,
  data.calculate_and_store_cell_based_face_coefficients,
  this,
  data.coefficient_function);
This code shows that the kernel and the analytical model are strongly coupled from an implementation point of view (see the arguments `This` as template parameter and `this`; comment https://github.com/exadg/exadg/pull/414/files?diff=unified&w=0#r1206535671), while the general goal should be to minimize dependencies. I think the design would improve if the kernel does not depend on the analytical model (and the other way round); however, both the kernel and the analytical model may depend on `VariableCoefficients` (as the low-level data structure).
static constexpr unsigned int value_rank = (n_components > 1) ? 1 : 0;
static constexpr unsigned int coefficient_rank =
  (coupling_coefficient) ? ((n_components > 1) ? 4 : 2) : 0;

using value_type    = dealii::Tensor<value_rank, dim, scalar>;
using gradient_type = dealii::Tensor<value_rank + 1, dim, scalar>;

using coefficient_type = dealii::Tensor<coefficient_rank, dim, scalar>;
Can one ask the kernel for these types (to avoid this duplication of code)?
I guess I could define them in the namespace as alias templates, so outside the classes instead of inside them. This is maybe even better. Not sure if one could ask the kernel, probably would work as well :)
see comment #414 (comment)
* @tparam n_components Number of components of the solution (vector) field. Solution is scalar if
* this is equal to 1.
* @tparam coupling_coefficient Boolean switch to differentiate if the coefficient type couples
* the solution vector components or not.
* - `false` always results in a scalar coefficient,
* - `true` results in a rank-2 tensor coefficient if `n_components = 1` and in a rank-4 tensor
*   coefficient if `n_components > 1`.
We should find a way to make this information unique, e.g. by referring to the kernel.
Force-pushed from 7955cba to bfbeeee (compare)
AnalyticalTensorCoefficientFunctionEvaluator<dim, Number, CoefficientType::rank>{
  matrix_free, quad_index, data.coefficient_function, variable_coefficients}();
Syntax: I find this syntax very complicated. Writing `analytical_coefficient_function.evaluate(variable_coefficients)` would be clearer / easier to understand (without need for documentation). Extracting that this line of code modifies `variable_coefficients` took me several minutes. Hence, this style of programming requires much more documentation in my opinion. I doubt this additional effort in documentation justifies the one line of code saved by this programming style.

Design: More importantly, this design does not solve my main criticism from the last review, namely that `GeneralizedLaplace::Kernel` should in my opinion not depend on `AnalyticalTensorCoefficientFunctionEvaluator`.

Example: When adding a new type to `CoefficientModel`, the kernel/operator has to be recompiled. I consider this an unnecessary dependency.
If we do not want to provide access to `Kernel::variable_coefficients` via a public getter returning a reference to `VariableCoefficients` (in order to fill the variable coefficients from "outside"), a lambda function with argument `VariableCoefficients &` could be passed to the kernel during construction/initialization, which allows computing the coefficient field.

What to do if one wants to realize an analytical coefficient function that is time-dependent? I believe one has to touch `Kernel` again with the current design, but this is unnecessary. Would you adjust the constructor of `AnalyticalTensorCoefficientFunctionEvaluator` and make `double time` a member variable of that class? This is probably not a good design / unnecessary, because one rather wants to call the `evaluate()` function suggested above with a second argument `time`.
if(data.coefficient_model == CoefficientModel::ConstantFunction)
{
  constant_coefficient =
    FunctionEvaluator<CoefficientType::rank, dim, Number>::value(*data.coefficient_function,
For `CoefficientModel::Constant`, why should one provide a `dealii::Function` at all? Then, the dereferencing `*` will fail.

This highlights another disadvantage of the currently chosen "framework" design. It appears odd to add a separate member `constant_coefficient` to `KernelData` if this data structure already contains `std::shared_ptr<dealii::Function<dim>> coefficient_function;`. So to avoid such a redundancy, one forces "everyone" to use the `dealii::Function` interface. However, the kernel does not require such information! Only the classes using the kernel or operator (which might originate from vastly different application scenarios) need this information.
// constant expression parameters
template<int n_components>
constexpr unsigned int value_rank = (n_components > 1) ? 1 : 0;

template<int n_components, bool coefficient_is_scalar>
constexpr unsigned int coefficient_rank = (coefficient_is_scalar) ? 0 :
                                                                    ((n_components > 1) ? 4 : 2);

// alias templates
template<typename Number>
using scalar = dealii::VectorizedArray<Number>;

template<int dim, typename Number>
using vector = dealii::Tensor<1, dim, scalar<Number>>;

template<int dim, typename Number, int n_components>
using value_type = dealii::Tensor<value_rank<n_components>, dim, scalar<Number>>;

template<int dim, typename Number, int n_components>
using gradient_type = dealii::Tensor<value_rank<n_components> + 1, dim, scalar<Number>>;

template<int dim, typename Number, int n_components, bool coefficient_is_scalar>
using coefficient_type =
  dealii::Tensor<coefficient_rank<n_components, coefficient_is_scalar>, dim, scalar<Number>>;
I am not a fan of such global definitions. To give an example: Why should one need to know outside the implementation of the kernel/operator that the implementation is based on `dealii::VectorizedArray`?
calculate_penalty_parameter(matrix_free, dof_index);

coefficient_is_variable = data.coefficient_model != CoefficientModel::ConstantFunction;
This can be considered error-prone when adding new options to `CoefficientModel`. Again, I see the origin of the problem in an unnecessary dependency.
* Calculates and returns the `flux` on the faces, which should be weighted with the shape
* function values.
It is not clear to me why the documentation of these functions should explain "weighted by ...". Implementation of the weak form is in my opinion the responsibility of the `Operator`.
I wrote the documentation like this to make it clear where the name of the function comes from, because that is our motivation for naming this function `get_value_flux()`: the caller shall multiply the return value with the shape function values. The naming of the function is not obvious from its body, which for example also works with gradients here. I just wanted to prevent any confusion caused by the function name.
GradientType const average_gradient = 0.5 * (gradient_m + gradient_p);

return -coeff_mult(coefficient, (average_gradient - tau * jump_tensor)) * normal;
Where does the minus sign next to `return` come from?
That's because I didn't want to implement the operator as the negative Laplace operator. But of course I can change it to be the negative Laplace operator, as in other parts of ExaDG.
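For reference, the value flux in the snippet under discussion appears to correspond to the following interior-penalty expression (a reading of the code, not authoritative; the definition of the jump and the tensorization `jump_tensor` = jump of `u` with the normal `n` are assumptions):

```latex
\text{value flux}
  \;=\; -\,C\,\bigl(\{\!\{\nabla u\}\!\} \;-\; \tau\,\llbracket u \rrbracket \otimes n\bigr)\cdot n,
\qquad
\{\!\{\nabla u\}\!\} = \tfrac{1}{2}\bigl(\nabla u^{-} + \nabla u^{+}\bigr),
\quad
\llbracket u \rrbracket = u^{-} - u^{+},
```

where `C` is the (possibly tensor-valued) coefficient applied via `coeff_mult` and `τ` the penalty parameter.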
value_flux_m += value_flux;
value_flux_p += -value_flux; // - sign since n⁺ = -n⁻
I do not like that the present PR makes such subtle sign changes.
template<int n_components, bool coefficient_is_scalar>
constexpr unsigned int coefficient_rank = (coefficient_is_scalar) ? 0 :
                                                                    ((n_components > 1) ? 4 : 2);
I think explanations/motivation for these numbers would be helpful here.
Force-pushed from bfbeeee to 8c53b63 (compare), plus removing an unnecessary alias
As of now, there is a new version of the implementation for the generalized Laplace operator. In the new implementation, …

I'm wondering if …
@@ -50,12 +51,14 @@ MultigridPreconditioner<dim, Number>::initialize(
  PDEOperator const &           pde_operator,
  MultigridOperatorType const & mg_operator_type,
  bool const                    mesh_is_moving,
  Number const                  mg_diffusivity,
I think one should ask the `pde_operator` for the diffusivity instead of extending the interface.
auto & diffusivity = pde_operator_level->get_diffusivity();
diffusivity        = mg_diffusivity;
Why return a non-const reference and then overwrite the value? I think I would prefer a set-function here, as we usually do in ExaDG.
// update the diffusivity coefficients, which depend on the quadrature point coordinates
// and may depend on the time, e.g. in case of an analytical coefficient function.
// However, the ALE interface does not support passing in the time here, so the coefficients
// are calculated with the simulation start time.
calculate_diffusivity(this->param.start_time);
The problem is that this code is not placed correctly in `update_after_grid_motion()`. Also, as a user, I cannot expect that for a time-dependent problem on a non-moving mesh, the calculation of a time-dependent diffusivity will be done in a function that explicitly relates to grid-motion (ALE) problems.
diffusivity = VariableDiffusivity{};

std::get<VariableDiffusivity>(diffusivity)
What is `get` doing here? Could you explain and/or add a comment?
Let me raise a general question: I see the novelty of the present PR in the introduction of a generalized Laplace operator. However, it does not really seem to fit optimally that this new implementation is then used in the present PR to replace the diffusive operator in the convection-diffusion module.
Related to the comment #357 (comment), I believe the present PR needs to decide whether to support, for now, variable coefficients for the generalized Laplace operator or not. I mainly see the following two options:
if(this->param.preconditioner_pressure_block ==
   SchurComplementPreconditioner::PressureConvectionDiffusion)
{
  matrix_free_data.append_mapping_flags(
    GeneralizedLaplace::Kernel<dim, Number>::get_mapping_flags(true, true));
}
Could you explain why this is needed? Why didn't we need it before, or has it been a bug before?
// Set diffusivity
auto & diffusivity = pressure_conv_diff_operator->get_diffusivity();

// If the viscosity is variable, copy the reference from the viscous kernel
if(this->param.viscosity_is_variable())
  diffusivity = this->viscous_kernel->get_viscosity_coefficients();
else // just set the constant viscosity
  diffusivity = this->param.viscosity;
I find it unintuitive to see from this code whether just a reference is copied or the whole data. I mentioned somewhere else that getter functions returning non-const references are also more difficult to understand in my opinion.

Shouldn't such code look like this:

if(this->viscous_kernel->viscosity_is_variable())
  pressure_conv_diff_operator->set_diffusivity(this->viscous_kernel->get_variable_viscosity());
else
  pressure_conv_diff_operator->set_diffusivity(this->viscous_kernel->get_const_viscosity());
};

private:
  /**
   * Performs cell and quadrature point loops to calculate and store the variable coefficients.
   */
  void
  cell_loop_set_coefficients(dealii::MatrixFree<dim, Number> const & matrix_free,
                             VectorType &,
                             VectorType const &,
                             Range const & cell_range)
  {
    IntegratorCell integrator(matrix_free, {}, quad_index);

    for(unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
    {
      integrator.reinit(cell);

      for(unsigned int q = 0; q < integrator.n_q_points; ++q)
      {
        auto const coefficient =
          FunctionEvaluator<coefficient_rank, dim, Number>::value(coefficient_function,
                                                                  integrator.quadrature_point(q),
                                                                  time);

        variable_coefficients.set_coefficient_cell(cell, q, coefficient);
      }
    }
  }

  /**
   * Performs face and quadrature point loops to calculate and store the variable coefficients.
   */
  void
  face_loop_set_coefficients(dealii::MatrixFree<dim, Number> const & matrix_free,
                             VectorType &,
                             VectorType const &,
                             Range const & face_range)
  {
    IntegratorFace integrator(matrix_free, true, {}, quad_index);

    for(unsigned int face = face_range.first; face < face_range.second; ++face)
    {
      integrator.reinit(face);

      for(unsigned int q = 0; q < integrator.n_q_points; ++q)
      {
        auto const coefficient =
          FunctionEvaluator<coefficient_rank, dim, Number>::value(coefficient_function,
                                                                  integrator.quadrature_point(q),
                                                                  time);

        variable_coefficients.set_coefficient_face(face, q, coefficient);
      }
    }
  }

  /**
   * Performs cell-based face and quadrature point loops to calculate and store the variable
   * coefficients.
   */
  void
  cell_based_loop_set_coefficients(dealii::MatrixFree<dim, Number> const & matrix_free,
                                   VectorType &,
                                   VectorType const &,
                                   Range const & cell_range)
  {
    IntegratorFace integrator(matrix_free, true, {}, quad_index);

    unsigned int const n_faces_per_cell =
      matrix_free.get_dof_handler().get_triangulation().get_reference_cells()[0].n_faces();

    for(unsigned int cell = cell_range.first; cell < cell_range.second; ++cell)
    {
      for(unsigned int face = 0; face < n_faces_per_cell; ++face)
      {
        integrator.reinit(cell, face);

        unsigned int const cell_based_face_index = integrator.get_current_cell_index();

        for(unsigned int q = 0; q < integrator.n_q_points; ++q)
        {
          auto const coefficient =
            FunctionEvaluator<coefficient_rank, dim, Number>::value(coefficient_function,
                                                                    integrator.quadrature_point(q),
                                                                    time);

          variable_coefficients.set_coefficient_face_cell_based(cell_based_face_index,
                                                                q,
                                                                coefficient);
        }
      }
    }
  }

  dealii::MatrixFree<dim, Number> const & matrix_free;
  unsigned int const                      quad_index;
  dealii::Function<dim> &                 coefficient_function;
  double const                            time;
  VariableCoefficients<CoefficientType> & variable_coefficients;
};
This should not be realized inside `variable_coefficients.h`.
/**
 * Returns the coefficient for the requested (@p cell index, @p q point index) pair.
 *
 * If the coefficient is not variable, returns the constant coefficient.
 */
CoefficientType
get_coefficient_cell(unsigned int const cell, unsigned int const q) const
{
  return std::visit(VarCoeffUtils::GetCoefficientCell<CoefficientType>{cell, q}, coefficients);
}

/**
 * Returns the coefficient for the requested (@p face index, @p q point index) pair.
 *
 * If the coefficient is not variable, returns the constant coefficient.
 */
CoefficientType
get_coefficient_face(unsigned int const face, unsigned int const q) const
{
  return std::visit(VarCoeffUtils::GetCoefficientFace<CoefficientType>{face, q}, coefficients);
}

/**
 * Returns the coefficient for the requested (@p face index (cell-based), @p q point index) pair.
 *
 * If the coefficient is not variable, returns the constant coefficient.
 */
CoefficientType
get_coefficient_face_cell_based(unsigned int const cell_based_face, unsigned int const q) const
{
  return std::visit(VarCoeffUtils::GetCoefficientFaceCellBased<CoefficientType>{cell_based_face,
                                                                                q},
                    coefficients);
}
To be honest, I do not see the advantages of the variant/visitor design pattern here. In my opinion, one needs to write more lines of code, and the code is more difficult to understand than a very simple if-else logic. Since we are here in the generalized Laplace operator, there is no concern right now that this if-else logic needs to be duplicated at multiple places. And even in such a case, one could also introduce free functions in the future, but only once we need this complexity. So where is the advantage of the present design? Defining two functions `operator()` with the same name is essentially a complicated way of implementing an if-else logic.
unsigned int const cell;
unsigned int const q;
In terms of code design, I am not convinced that it is a good idea that the most complex case (with face, q variables) dictates the "interface" also for the simplest case of a constant coefficient (not even mentioning that it is quite hard to identify an "interface" from this struct definition). While I do not see many other variants to be realized in the future, I also do not understand the motivation for the present design if we had more variants in the future. If all these variants have different interfaces, the interface of this struct will grow with the number of variants, just as a simple if-else logic would.
I think this PR is stuck right now between trying to have a use for the generalized Laplace operator and the features it needs to have due to its original motivation, which was to be used in the context of the Darcy equation. I am strongly against merging such a big contribution without using it anywhere. On the other hand, trying to use it with existing modules, as pointed out, then causes problems, for example by introducing features that are not application-motivated. The interfaces with the modules become a problem as well. In light of this, I think it is wiser that I further develop this operator together with the Darcy module in my own repository, and use and test it there. Also concerning external constraints like development time, I think this is the best way to go. After the operator has been used there successfully with the Darcy module, we can later decide which parts of it we want to have in ExaDG and in what specific form. We can then import parts here step by step. This would be easier for everybody, I guess. So, I would actually suggest closing this PR for now. What do you think @nfehn ?
Overall, I think we made progress in the course of this PR and developed ideas that I consider useful, so it would be somewhat sad if we do not finalize things, fully or in parts. I expressed my ideas regarding the current status and content-related aspects here: #414 (comment) and #414 (comment). Realistically, the developments in ExaDG will go on, and it might happen that the codes then start to diverge more and more. I see ExaDG also as a platform for collaboration and exchange. It might be that this will become more difficult when working in separate repositories. My very personal assessment is to continue in ExaDG; I think it will pay off. I want to emphasize that, in the end, it is totally your own decision how to proceed, because only you know your constraints. I can offer help/advice regarding DG discretizations / variable coefficients (and, potentially, also software design), but due to constraints on my side I cannot invest much time if things are not going to be brought into ExaDG in the near future.
Yes, we have come a long way due to our continuous efforts, and that's great. What I'm talking about is just going to be transient (I'm planning something like 2-3 months maximum). In the meantime, I will keep up with the developments in the main repo, but I consider this a necessary step for the speed of development I need right now (conference mid-September 😅). As soon as I have things working and have got some results, I want to put it all into the main repo, just as was our plan all along. Of course, it will all build on the state (of this PR) we have achieved until now.
This PR introduces a new Laplace operator implementation that can be used by any module in ExaDG.
The requirements that this new operator fulfills were listed in #357. A document with the math behind this operator is also linked in that issue.
In short, the operator is templated for scalar and vector-valued problems, as well as for coefficients that do or do not couple the components of the gradient.
For a proof-of-concept, this PR also removes the diffusive operator in the convection-diffusion module and replaces it with the new generalized Laplace operator. Other than this, as few places as possible are touched in this PR to keep it manageable for reviews.
Here are some details about the implementation:
Operators that use the implemented kernel (these are right now the operator itself, and the combined operator in the convection-diffusion module) are responsible for implementing their own cell-loop/face-loop functions to decide how to calculate coefficients, how to store them (like having cell-based face loops or not), and how and when to update them.
- `do_boundary_integral_cell_based()` is introduced in the operator base class `OperatorBase`, fully analogous to the already present `do_face_int_integral_cell_based()`, as these two functions need to access the cell-based face coefficients, unlike their non-cell-based counterparts.
- `fill_matrix_free_data()` in `IncNS::SpatialOperatorBase` is made virtual and overridden in `IncNS::OperatorCoupled`, since the generalized Laplace operator requires setting additional flags if it is used as a preconditioner.

Note: The convection-diffusion module is currently the only place where the operator is used. However, the applications that test this module do not use variable coefficients, as the diffusivity is assumed constant, i.e. the variable coefficient functionality is disabled for optimization. During development, these tests were also successfully run with variable coefficients, but this is not part of the PR to keep changes to a minimum.
Closes #357