
Not divisible block size with nullspace vectors #251

Open
njsyw1997 opened this issue Mar 15, 2023 · 5 comments
@njsyw1997

Hi @ddemidov,
I am using AMGCL to solve a 2D linear elasticity problem. It has a 2 $\times$ 2 block structure and works fine without nullspace vectors, but when I pass the coordinates to the solver it fails with
what(): Matrix size should be divisible by block_size
Do you have any idea what causes this?
Here are the parameters and sample data.

./amgcl/build/examples/solver \
-A ./stiffness.mtx \
-f ./rhs.mtx \
-C ./vecpoints.mtx \
-p precond.coarsening.type="smoothed_aggregation" \
-p precond.coarsening.aggr.eps_strong=0.00 \
-p precond.relax.type="ilu0" \
-p solver.maxiter=1000 \
-p solver.type="cg" \
-b 2

data.zip

@ddemidov
Owner

ddemidov commented Mar 16, 2023

EDIT: this is wrong, see comments below.

Hi,
The matrix in block-valued format (-b2) has size 26049x26049 (52098/2), and aggregation expects a coordinate matrix with 26049/2 rows, which is not a whole number, hence the error.
But in any case, using nullspace vectors with non-scalar-valued backends is not supported, so you would get an error later even if the initial size check passed.

The following works:

./solver \
    -A stiffness.mtx \
    -f rhs.mtx \
    -C vecpoints.mtx \
    precond.coarsening.type=smoothed_aggregation \
    precond.coarsening.aggr.eps_strong=0.00 \
    precond.relax.type=ilu0 \
    solver.maxiter=1000 \
    solver.type=cg
Solver
======
Type:             CG
Unknowns:         52098
Memory footprint: 1.59 M

Preconditioner
==============
Number of levels:    3
Operator complexity: 1.17
Grid complexity:     1.11
Memory footprint:    40.28 M

level     unknowns       nonzeros      memory
---------------------------------------------
    0        52098         721522     35.32 M (85.61%)
    1         5103         104013      4.64 M (12.34%)
    2          564          17280    333.14 K ( 2.05%)

Iterations: 16
Error:      3.72722e-09

[Profile:       0.679 s] (100.00%)
[  reading:     0.524 s] ( 77.20%)
[  setup:       0.064 s] (  9.41%)
[  solve:       0.091 s] ( 13.33%)

@njsyw1997
Author

Thanks for your rapid response! In the nullspace vectors tutorial, you are using a block-valued backend with the as_scalar wrapper. Is that supported by the runtime interface? I found a similar setting in coarsening/runtime.hpp, and I thought it was enabled automatically when switching to a block-valued backend.

as_scalar = (
    block_value_type &&
    c != ruge_stuben &&
    prm.get("nullspace.cols", 0) > 0
    );

@ddemidov
Owner

ddemidov commented Mar 20, 2023

Thanks for pointing that out, I forgot about this feature! This makes my above response completely irrelevant, sorry.

Here is what is really happening:

  • you are solving a 2D problem, which means that three near null-space vectors (the rigid body modes) are constructed from the coordinate matrix.
  • unfortunately, this means that the systems on the lower levels have a 3x3 structure, not 2x2. Indeed, the top-level system yields 1701 aggregates, which results in a 5103x5103 coarse matrix that can no longer be represented with 2x2 blocks.

I am not sure what can be done here. We could, for example, merge a couple of aggregates at the end of the aggregation algorithm so that the total number of aggregates is divisible by 2, but I think the bigger problem is that solving a problem with 3x3 structure using 2x2 blocks would be inefficient.

With this specific example you could start with 3x3 blocks. That works, but seems to be less efficient than using scalar values:

solver -A stiffness.mtx -f rhs.mtx -C vecpoints.mtx -b3 \
    precond.coarsening.type=smoothed_aggregation precond.coarsening.aggr.eps_strong=0.00 \
    precond.relax.type=ilu0 solver.maxiter=1000 solver.type=cg
Solver
======
Type:             CG
Unknowns:         17366
Memory footprint: 1.59 M

Preconditioner
==============
Number of levels:    3
Operator complexity: 1.37
Grid complexity:     1.11
Memory footprint:    48.80 M

level     unknowns       nonzeros      memory
---------------------------------------------
    0        17366         198648     40.15 M (73.22%)
    1         1701          59009      6.79 M (21.75%)
    2          188          13640      1.85 M ( 5.03%)

Iterations: 15
Error:      3.52609e-09

[Profile:       0.874 s] (100.00%)
[  reading:     0.561 s] ( 64.21%)
[  setup:       0.142 s] ( 16.30%)
[  solve:       0.170 s] ( 19.41%)

P.S. this works for 3D problems, as in the tutorial you mentioned, because there we have 6 near-null space vectors, and the coarser systems may still be efficiently represented with 3x3 blocks.

@njsyw1997
Author

I see. Other AMG solvers have an option to set the number of PDEs per grid point (for example, HYPRE_BoomerAMGSetNumFunctions in hypre, and the "PDE equations" parameter in ML/Trilinos). They seem to use the block structure to improve the scalar solver. Can we do a similar thing in AMGCL?

@ddemidov
Owner

When you pass coordinates to form the nullspace, you implicitly set the number of equations. When you are not using nullspace vectors, you can set precond.coarsening.aggr.block_size=2, which uses pointwise aggregation and should be similar to using -b2.
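An untested sketch of such an invocation, in the style of the earlier commands from this thread (note the absence of the -C coordinate file, since no nullspace vectors are used):

```shell
./solver -A stiffness.mtx -f rhs.mtx \
    precond.coarsening.type=smoothed_aggregation \
    precond.coarsening.aggr.block_size=2 \
    precond.relax.type=ilu0 solver.maxiter=1000 solver.type=cg
```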
