TODO unification of "identical" files in mesher and solver #22

Open · 2 of 3 tasks

martinvandriel opened this issue Oct 17, 2014 · 6 comments

@martinvandriel · Contributor

  • analytic_spheroid_mapping.f90 should be doing the same, but is quite different; needs testing
  • splib.f90 has changed some of the interfaces. Probably it's best to compute everything GLL/GLJ related in the mesher and store it in the meshfile; then the whole splib can be removed from the solver
  • clocks.f90 should also be unified
@martinvandriel · Contributor Author

Maybe it would make sense to unify all the finite element stuff with the kerner, where I rewrote all the routines and tested them rigorously...

@tnissen commented Oct 17, 2014

Unifying with kerner/mesher sounds like a good idea, but storing everything GLL/GLJ in the mesher... do you mean derivatives, mapping, the whole Jacobian etc., or just the very basic elemental stuff? That would significantly increase the size of these files, which is why we've avoided it so far. I think in SPECFEM they store everything too, but the databases are accordingly large.

What do you mean by rigorous testing? Most routines come from Numerical Recipes and/or are otherwise subject to heavy testing as they stand...


@martinvandriel · Contributor Author

splib only computes the gll/glj points and the derivative matrices G0, G1, G2 as far as I can see. That is a total of less than 100 numbers, so it can easily be stored in the mesh file and saves duplication of 700 lines of code.
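For illustration, a minimal sketch of what the mesher side could look like (the array names, the order `npol`, and the file layout are assumptions made up for this sketch, not the actual splib/mesher interfaces):

```fortran
! Hypothetical sketch: store the precomputed basis in the mesh file so
! the solver can read it instead of carrying its own copy of splib.
! Array names and the file layout are illustrative only.
program store_basis_sketch
  implicit none
  integer, parameter :: dp   = selected_real_kind(15)
  integer, parameter :: npol = 4                      ! polynomial order
  real(dp) :: gll_points(0:npol), glj_points(0:npol)  ! collocation points
  real(dp) :: G0(0:npol)                              ! derivative at the axis
  real(dp) :: G1(0:npol,0:npol), G2(0:npol,0:npol)    ! derivative matrices
  integer  :: iunit

  ! ... fill the arrays with the splib routines here ...
  gll_points = 0; glj_points = 0; G0 = 0; G1 = 0; G2 = 0

  open(newunit=iunit, file='meshfile_basis.dat', form='unformatted', &
       access='stream', status='replace')
  write(iunit) npol
  write(iunit) gll_points, glj_points
  write(iunit) G0, G1, G2
  close(iunit)
end program store_basis_sketch
```

For npol = 4 this is 5 + 5 + 5 + 25 + 25 = 65 numbers, consistent with the "less than 100" estimate above.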

With rigorous testing I mean a test-driven approach like we have in the kerner, where the mapping is tested in a very general way (non-public link):

https://github.com/sstaehler/kerner/blob/master/test_finite_elem_mapping.f90

That testing revealed several bugs in the mapping that is used in the solver and mesher, and it was easier to rewrite it from scratch than to fix it. Partly because there were several subtle implicit assumptions, e.g. ds/dxi = 0 at the axis (not true in the current meshes for the inner core), and partly because the math could be simplified a lot. The rewrite is now only half the number of lines of code, even though it also includes the inverse mapping. We have not yet merged it back, because it uses different user interfaces.
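To give a flavor of the round-trip idea behind such a test (the linked test file covers the actual spheroidal mapping; here a trivial affine element stands in for it, so all names and values are placeholders):

```fortran
! Sketch of the forward/inverse round-trip check used in such tests:
! map reference -> physical -> reference and require the identity to
! machine precision. A trivial affine mapping stands in for the real
! (s, z) element mapping.
program test_mapping_sketch
  implicit none
  integer, parameter :: dp = selected_real_kind(15)
  real(dp), parameter :: s0 = 1.0_dp, z0 = 2.0_dp   ! element origin
  real(dp), parameter :: a  = 0.5_dp, b  = 0.25_dp  ! element half-widths
  real(dp) :: xi, eta, s, z, xi2, eta2

  xi = 0.3_dp; eta = -0.7_dp

  ! forward mapping: reference (xi, eta) -> physical (s, z)
  s = s0 + a * xi
  z = z0 + b * eta

  ! inverse mapping: physical (s, z) -> reference (xi, eta)
  xi2  = (s - s0) / a
  eta2 = (z - z0) / b

  if (max(abs(xi2 - xi), abs(eta2 - eta)) > 1.0e-12_dp) then
     print *, 'FAIL: round-trip error in mapping'
  else
     print *, 'OK: forward and inverse mapping are consistent'
  end if
end program test_mapping_sketch
```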

@tnissen commented Oct 17, 2014

OK, sounds good, as long as we don't get into storing literally everything related to the mesh and blowing up the mesh db... that would be overkill, as it is much easier/cheaper to re-create on the fly.


@sstaehler · Contributor

Another question at this point is obviously whether to store the mesh in a NetCDF container, where individual variables could be accessed directly for checking.

This might also (together with an XDMF file) replace the VTK output of the mesher.

Writing and reading the variables from the NetCDF file would be simplified by the wrappers I made for the kerner. That saves you all the trouble of handling dimensions and variable IDs.
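For reference, with the plain netcdf-fortran API (not the kerner wrappers), writing a single mesh variable looks roughly like this; the file, dimension, and variable names are made up for the sketch:

```fortran
! Sketch with the raw netcdf-fortran API; the wrappers would hide the
! def_dim/def_var/put_var bookkeeping shown here. All names are
! placeholders.
program write_mesh_netcdf_sketch
  use netcdf
  implicit none
  integer, parameter :: dp   = selected_real_kind(15)
  integer, parameter :: npol = 4
  real(dp) :: gll_points(0:npol)
  integer  :: ncid, dimid, varid

  gll_points = 0  ! ... fill from splib ...

  call check( nf90_create('meshdb.nc', NF90_NETCDF4, ncid) )
  call check( nf90_def_dim(ncid, 'npol_p1', npol + 1, dimid) )
  call check( nf90_def_var(ncid, 'gll_points', NF90_DOUBLE, dimid, varid) )
  call check( nf90_enddef(ncid) )
  call check( nf90_put_var(ncid, varid, gll_points) )
  call check( nf90_close(ncid) )

contains

  subroutine check(status)
    integer, intent(in) :: status
    if (status /= NF90_NOERR) then
       print *, trim(nf90_strerror(status))
       stop 1
    end if
  end subroutine check

end program write_mesh_netcdf_sketch
```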


@martinvandriel · Contributor Author

Thought about this, too. It would enable visualization on exactly the same data as is used in the computation. It could also be a single file and hence speed up file output by the mesher (which at the moment is the most time-consuming part when going to 10k cores).

I could not yet decide whether this should then enable collective reading in the solver, which might be useful if we consider going beyond 10k cores. That would require some restructuring though, while for non-collective reading it is fine to just produce one group for each rank and essentially keep the current structure. A rough sketch of the collective variant is below.
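The collective variant could look roughly like this with parallel netcdf-fortran (this assumes a NetCDF/HDF5 build with parallel I/O enabled; file and variable names are placeholders). The group-per-rank alternative would instead open the file serially and look up its own group with nf90_inq_ncid:

```fortran
! Sketch: collective read of one shared variable; requires
! netcdf-fortran built with parallel I/O. Placeholder names throughout.
program collective_read_sketch
  use mpi
  use netcdf
  implicit none
  integer, parameter :: dp = selected_real_kind(15)
  integer  :: ncid, varid, ierr, rank
  real(dp) :: gll_points(5)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  ! every rank participates in opening and reading the same file
  ierr = nf90_open('meshdb.nc', ior(NF90_NOWRITE, NF90_MPIIO), ncid, &
                   comm=MPI_COMM_WORLD, info=MPI_INFO_NULL)
  ierr = nf90_inq_varid(ncid, 'gll_points', varid)
  ierr = nf90_var_par_access(ncid, varid, NF90_COLLECTIVE)
  ierr = nf90_get_var(ncid, varid, gll_points)
  ierr = nf90_close(ncid)

  call MPI_Finalize(ierr)
end program collective_read_sketch
```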
