ElementTriN1 on different length-scales #1016

Open
HelgeGehring opened this issue Feb 11, 2023 · 9 comments
@HelgeGehring (Contributor)

The ElementTriN1()*ElementTriP1() basis works fine for me as long as the length-scale of the system is ~1; if I change it to the order of ~1e-6, the results become imprecise. This seems to happen especially for solutions which have non-zero values in both subspaces.
I've had a look at the assembly:

from skfem import *

@BilinearForm
def test_form(u_n, u_p, v_n, v_p, w):
    # print the magnitudes of the N1 and P1 trial function values
    print(u_n.max(), u_p.max())
    return u_p * v_p

# unit-scale mesh
mesh = MeshTri().refined(2)
basis = Basis(mesh, ElementTriN1() * ElementTriP1())
test_form.assemble(basis)

print('-' * 80)

# the same mesh scaled down to ~1e-6
mesh = MeshTri().refined(2).scaled((1e-6, 1e-6))
basis = Basis(mesh, ElementTriN1() * ElementTriP1())
test_form.assemble(basis)

The first assembly has values like

0.0 0.816847572980459
3.6336951459609206 0.0

while the second one is like

0.0 0.816847572980459
3633695.1459609214 0.0

Could the problem be that the values differ by orders of magnitude, so that eigs cannot solve the system precisely?

@kinnala (Owner) commented Feb 11, 2023

Perhaps.

Quick comment: Sometimes in solid mechanics you are able to use different length units to solve issues such as this.
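
For example, a minimal sketch of that trick (the characteristic length L below is a hypothetical choice): work in units of L so the mesh coordinates stay O(1) during assembly, and convert dimensional results back afterwards.

from skfem import MeshTri

L = 1e-6  # hypothetical characteristic length, e.g. 1 um

# the physical mesh, and the same mesh expressed in units of L
mesh_physical = MeshTri().refined(2).scaled((L, L))
mesh_unit = mesh_physical.scaled((1. / L, 1. / L))

# assemble and solve on mesh_unit; dimensional quantities are converted
# back at the end, e.g. eigenvalues of a curl-curl/mass pair carry the
# dimension 1/length**2, so lam_physical = lam_unit / L**2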

@HelgeGehring (Contributor, Author)

Thanks!

At the moment I scale all length units (i.e. the size of the mesh and the wavelength) to be on the order of 1; effectively, I just leave out the μ in μm for all values.
While this works, it would simplify the description of more complex problems if I could stick to SI base units for everything, as otherwise I have to scale all the physical parameters in the system, too.
This is of course doable, but it would be nice to understand why the system, which should in principle be scale-invariant, changes that dramatically.

Might there be some minor error in the implementation of ElementTriN1? Is it expected that the values for u_n change a lot while the values for u_p stay constant? And do you know of any literature which addresses such a problem?
Somehow it seems wrong to me that the precision changes when I scale the whole system :/

@kinnala (Owner) commented Feb 11, 2023

Perhaps not an 'error', but there are usually several 'equivalent' options for implementing a finite element. What I mean is that there are different sets of basis functions that span the same polynomial space. But since we use floats and not reals, they are not equivalent in practice.

A simple example is the one-dimensional P2 element. We can use (1 - x, x, (1 - x)x) or we can use the ones listed here:

https://github.com/kinnala/scikit-fem/blob/master/skfem/element/element_line/element_line_p2.py

The former type of basis is used when the polynomial degree changes from element to element and a high degree is needed in some parts of the domain.
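
To spell the P2 example out, a small sketch (plain numpy, on the reference interval [0, 1]) showing that the hierarchical set and a standard nodal set span the same space:

import numpy as np

x = np.linspace(0, 1, 5)

# hierarchical basis: two linear hat functions plus a quadratic bubble
hier = np.array([1 - x, x, (1 - x) * x])

# standard nodal (Lagrange) P2 basis with nodes at 0, 1 and 1/2
nodal = np.array([1 - 3 * x + 2 * x ** 2,
                  -x + 2 * x ** 2,
                  4 * x - 4 * x ** 2])

# each nodal function is an exact linear combination of the
# hierarchical ones, so both sets span the same polynomial space
T = np.array([[1., 0., -2.],
              [0., 1., -2.],
              [0., 0., 4.]])
assert np.allclose(T @ hier, nodal)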

What I'm saying is that there are most likely different 'equivalent' (in the sense of reals but not floats) ways to write down the basis for N1 which could be more robust against such small elements.

Literature is usually rather quiet on such topics, so it might be worthwhile checking other FEM codes and how they write the basis there.

@kinnala (Owner) commented Feb 14, 2023

BTW, did you test with ElementTriN2? Does it lead to the same issue?

@kinnala (Owner) commented Feb 14, 2023

What I'd like to see here is some sort of minimal example of the issue.

@kinnala (Owner) commented Feb 14, 2023

I'll now try to study your first example and see if I can spot anything amiss.

@kinnala (Owner) commented Feb 14, 2023

All of the elements ElementTriRT1, ElementTriBDM1 and ElementTriN1 exhibit the same behaviour. I checked and double-checked all the formulations and they look correct. I cannot find anything that'd be wrong.

While looking for hints on the subject I found this reference, which points out an interesting problem related to these elements that I hadn't heard about: https://tigerprints.clemson.edu/cgi/viewcontent.cgi?article=2867&context=all_theses

Nevertheless, that is not related to the 'large values' you are describing. I think those are simply caused by the 1/det(J) scaling in the Piola transformation: the Jacobian determinant gets smaller for smaller elements, and consequently the basis function values get larger.
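
To make the magnitudes concrete, a back-of-the-envelope check (assuming a uniform scaling $x = s\hat{x}$, so $J = sI$ and $\det J = s^2$ in 2D): the contravariant Piola mapping used for RT/BDM gives

$$\psi(x) = \frac{1}{\det J} J \, \hat\psi(\hat x) = \frac{s}{s^2}\,\hat\psi(\hat x) = \frac{1}{s}\,\hat\psi(\hat x),$$

and the covariant mapping used for N1 gives $\psi = J^{-T}\hat\psi = \hat\psi / s$ as well. So shrinking the mesh by $s = 10^{-6}$ multiplies the reference basis values by $10^6$, which is exactly the factor between 3.6336951459609206 and 3633695.1459609214 in the assembly output above.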

@HelgeGehring (Contributor, Author)

Thanks! Sad to hear that the literature is so quiet on this stuff; it seems quite relevant for actually running such simulations (?)
And thanks for all the checking. I'll have a look at the reference.

I've modified ex64 into what I think is a minimal example:

"""Waveguide cutoff analysis."""

import numpy as np
from skfem import *
from skfem.helpers import *


# three different mesh and element types
mesh_elem = [
    (
        MeshTri.init_tensor(np.linspace(0, 1, 40),
                            np.linspace(0, .5, 20)),
        ElementTriN1() * ElementTriP1(),
    ),
    (
        MeshQuad.init_tensor(np.linspace(0, 1, 40) ** 0.9,
                             np.linspace(0, .5, 20)),
        ElementQuadN1() * ElementQuad1(),
    ),
    (
        MeshTri.init_tensor(np.linspace(0, 1, 20),
                            np.linspace(0, .5, 10)),
        ElementTriN2() * ElementTriP2(),
    ),
]

mesh, elem = mesh_elem[0]
failing = True  # True: assemble the forms as-is; False: rebalance them by the scale

errors = []
scale_factors = np.linspace(-1, 8)

for scale_factor in scale_factors:
    # shrink the mesh from 10x down to 1e-8x of the unit-square size
    scale = 10 ** (-scale_factor)
    mesh_scaled = mesh.scaled([scale] * 2)
    basis = Basis(mesh_scaled, elem)

    epsilon = lambda x: 1. + 0. * x[0]
    # epsilon = lambda x: 3 * (x[1] < 0.25) + 1
    one_over_u_r = 1


    @BilinearForm
    def aform(E, lam, v, mu, w):
        # curl-curl block; with failing=False it is rebalanced by scale**2
        return one_over_u_r * curl(E) * curl(v) * (1 if failing else scale**2)


    @BilinearForm
    def gauge(E, lam, v, mu, w):
        # set div E = 0 using a Lagrange multiplier
        return dot(grad(lam), v) + dot(E, grad(mu))


    @BilinearForm
    def bform(E, lam, v, mu, w):
        # mass block; with failing=True it is rebalanced by 1/scale**2 instead
        return epsilon(w.x) * dot(E, v) / (scale**2 if failing else 1)


    A = aform.assemble(basis)
    B = bform.assemble(basis)
    C = gauge.assemble(basis)

    lams, xs = solve(*condense(A + C, B, D=basis.get_dofs()),
                     solver=solver_eigen_scipy_sym(k=6))


    # compare against analytical eigenvalues
    err1 = np.abs(lams[0] - np.pi ** 2)
    err2 = np.abs(lams[1] - 4. * np.pi ** 2)
    err3 = np.abs(lams[2] - 4. * np.pi ** 2)


    if __name__ == "__main__":
        print('TE10 error: {}'.format(err1))
        print('TE01 error: {}'.format(err2))
        print('TE20 error: {}'.format(err3))

    errors.append((err1, err2, err3))

if __name__ == "__main__":
    import matplotlib.pyplot as plt

    errors = np.array(errors)
    plt.xlabel('scale factor')
    plt.ylabel('error')
    plt.plot(scale_factors, errors)
    plt.show()

    fig, axs = plt.subplots(4, 1)
    for itr in range(4):
        (E, Ebasis), _ = basis.split(xs[:, itr])
        Ei = Ebasis.interpolate(E)
        plotbasis = Ebasis.with_element(ElementDG(ElementTriP2()))
        Emag = plotbasis.project(np.sqrt(dot(Ei, Ei)))
        plotbasis.plot(Emag,
                       colorbar=True,
                       ax=axs[itr],
                       shading='gouraud',
                       nrefs=2)
    plt.show()

It only fails for failing=True; I'd guess that in that case we likewise get matrix entries of different orders of magnitude (?)
It also fails for all the different element types, so I'd guess another implementation won't help?

For this case it's convenient that we can scale both subsystems differently, as the Lagrange multipliers go to zero anyway; in the general case this wouldn't be possible, right?
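
If the scaling of the basis functions is the culprit, the imbalance should be visible directly in the assembled matrices: with failing=True the curl-curl and mass blocks grow like 1/scale**2 while the gauge block stays O(1). A diagnostic one could drop into the loop (a sketch; abs() of a scipy sparse matrix is again sparse, so .max() works):

    # compare the extreme entry magnitudes of the assembled blocks
    print('max |A|:', abs(A).max(),
          'max |B|:', abs(B).max(),
          'max |C|:', abs(C).max())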

@HelgeGehring (Contributor, Author)

Okay, I just realized that I can simply scale both subsystems independently: I multiply every v_z and u_z by that scale and I'm done.
It seems to me that I should aim for each term being (without considering u/v) on the order of 1, counting a derivative as something like 1/scale_factor (?)
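
Concretely, something like the following untested sketch for the gauge form above, with lam and mu playing the role of the scalar subspace:

    @BilinearForm
    def gauge_scaled(E, lam, v, mu, w):
        # multiply the scalar trial/test functions by the mesh scale so
        # that their gradients stay O(1) and the gauge block keeps a
        # magnitude comparable to the rescaled curl-curl and mass blocks
        return dot(grad(lam) * scale, v) + dot(E, grad(mu) * scale)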
