incorrect correction factor in attenuation model #699

Open
puneet336 opened this issue Jul 30, 2020 · 12 comments

@puneet336 commented Jul 30, 2020

I am trying to run the SPECFEM3D model with test case A and I am getting the following errors:

Error: incorrect scale factor:    8.5176614274563551
  
incorrect correction factor in attenuation model

 Error detected, aborting MPI... proc           10
....

In the source code, I was able to see that the error message should be thrown only
if (scale_factor < 0.8d0 .or. scale_factor > 1.2d0).
In my case, all the values reported on stderr are close to 8.51.
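For reference, the check can be located in the unpacked source tree with a quick grep; the src/ path is an assumption about the layout:

grep -rn "incorrect correction factor" src/
grep -rn "scale_factor < 0.8d0" src/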

Here are the run commands:

mpirun -np 96 -ppn 48 bin/xmeshfem3D
mpirun -np 96 -ppn 48 bin/xspecfem3D

Please let me know if I can provide any further information on this issue.

@danielpeter (Contributor)

It's likely due to an unrealistic reference frequency or Q value in your model.

@puneet336 (Author) commented Jul 30, 2020

Thank you for the quick reply.
As I am working with this software for the first time (for benchmarking purposes), I am not sure how to control the reference frequency/Q value.

@danielpeter (Contributor)

OK, you're using the default model s362ani, so it's likely either an issue with your code version or a compiler issue.

Could you try updating your code to the current devel version:

$ git clone --recursive --branch devel https://github.com/geodynamics/specfem3d_globe.git

and see if this fixes it?
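A typical rebuild after switching to the devel checkout could look like the sketch below; the compiler names are only placeholders for whatever MPI/Fortran toolchain was used for the original build:

cd specfem3d_globe
./configure FC=gfortran MPIFC=mpif90
make clean
make all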

@puneet336 (Author) commented Jul 30, 2020

OK, I can try with this branch.
Earlier, I had downloaded SPECFEM3D from here.
Meanwhile, I tried a run with ATTENUATION = .false., and the simulation seems to be working fine:

Estimated total run time in hh:mm:ss =      2 h 42 m 35 s
 We have done    14.5833330     % of that
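For reference, a one-liner like the following sketch would make that attenuation switch, assuming the default DATA/Par_file location (and setting it back to .true. afterwards for production runs):

sed -i 's/^ATTENUATION[[:space:]]*=.*/ATTENUATION                     = .false./' DATA/Par_file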

@puneet336 (Author) commented Jul 30, 2020

I am getting a compilation error with this devel branch:

running xcreate_header_file...

./bin/xcreate_header_file

 creating file OUTPUT_FILES/values_from_mesher.h to compile solver with correct values
STOP an error occurred while reading the parameter file: WRITE_SEISMOGRAMS_BY_MAIN

make: *** [OUTPUT_FILES/values_from_mesher.h] Error 1

@danielpeter (Contributor)

yes, "black lives matter" - please change in your Par_file:

WRITE_SEISMOGRAMS_BY_MASTER = ..

to

WRITE_SEISMOGRAMS_BY_MAIN = ..
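A sketch of the rename as a one-liner, assuming the Par_file sits at the default DATA/Par_file path; the value after "=" is left untouched:

sed -i 's/^WRITE_SEISMOGRAMS_BY_MASTER/WRITE_SEISMOGRAMS_BY_MAIN  /' DATA/Par_file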

@puneet336 (Author) commented Jul 30, 2020

Thank you. Another error which I got:

 creating file OUTPUT_FILES/values_from_mesher.h to compile solver with correct values
STOP an error occurred while reading the parameter file: SAVE_AZIMUTHAL_ANISO_KL_ONLY

Would adding SAVE_AZIMUTHAL_ANISO_KL_ONLY = .false. to the Par_file be fine?

@danielpeter (Contributor)

Correct, the flag SAVE_AZIMUTHAL_ANISO_KL_ONLY is missing in your Par_file and can be set to .false..

Just as a side remark, the flag USE_FAILSAFE_MECHANISM is no longer needed and can be omitted. If it stays in the Par_file, it will be ignored and doesn't affect the setup.
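Both Par_file adjustments can be scripted; a minimal sketch, again assuming the default DATA/Par_file path:

# append the missing flag only if it is not already present
grep -q '^SAVE_AZIMUTHAL_ANISO_KL_ONLY' DATA/Par_file || echo 'SAVE_AZIMUTHAL_ANISO_KL_ONLY    = .false.' >> DATA/Par_file
# optionally drop the obsolete flag (leaving it in place is harmless)
sed -i '/^USE_FAILSAFE_MECHANISM/d' DATA/Par_file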

@puneet336 (Author) commented Jul 30, 2020

I did not see the attenuation error this time.
While running xspecfem3D, the application terminated (exit code 9) with the following in output_solver.txt:

   Attenuation frequency band min/max (Hz):   1.53527525E-03 /   8.63348767E-02
               period band    min/max (s) :   11.5828047     /   651.348999
   Logarithmic center frequency (Hz):   1.15129407E-02
                      period     (s):   86.8587799

   using shear attenuation Q_mu

   ATTENUATION_1D_WITH_3D_STORAGE  :  T
   ATTENUATION_3D                  :  F
 preparing elastic element arrays
   using attenuation: shifting to unrelaxed moduli
   crust/mantle transverse isotropic and isotropic elements
   tiso elements =        76032
   iso elements  =        72000
   inner core isotropic elements
   iso elements  =         4176
 preparing wavefields
   allocating wavefields
   initializing wavefields

I noticed that ~1.3 TB of data was generated during mesh generation. I suspect this issue is related to insufficient memory.

@danielpeter (Contributor)

Okay, so this newer version looks better.

It's more likely an issue with the compiler, not memory or disk space. The original s362ani routines contained some pretty old Fortran 77 statements. This was fine with older compiler versions, but I think newer compilers have become more stringent and no longer initialize variables the same way older versions did. This can lead to issues with those older code segments. It looks like we have that fixed now in the more recent devel version; we will need to release a newer tar-file version then...

@puneet336 (Author) commented Jul 30, 2020

I will try with a different compiler to see if I get the same issue. I also have a few doubts regarding the input I am using to benchmark my system. I plan to skim through the user manual for usage information, but I have a few questions off the top of my head.
My current setup has the following parameters:
case a)

NEX_XI                          = 384
NEX_ETA                         = 384
# number of MPI processors along the two sides of the first chunk
NPROC_XI                        = 4
NPROC_ETA                       = 4

I plan to scale this simulation to more processes (more nodes), but this setup restricts me to launching only 96 processes.
From the input, 384/4 = 96 (for both NEX_ETA/NPROC_ETA and NEX_XI/NPROC_XI). So, in order to scale this simulation to, say, 2000 cores (or 2000 MPI processes), I could think of the following combinations:
case b)

NEX_XI                          = 40000
NEX_ETA                         = 40000
NPROC_XI                        = 20
NPROC_ETA                       = 20

case c)

NEX_XI                          = 80000
NEX_ETA                         = 80000
NPROC_XI                        = 40
NPROC_ETA                       = 40

I have the following doubts:
Q1) With cases b) and c), am I increasing the "problem size" compared to case a)?
Q2) For case a), during the mesh generation phase, the storage requirement was ~1.3 TB (during the solver, 178 GB). Will the storage and memory requirements grow for b) and c) compared to case a)? And is it possible to approximate the requirement?
Q3) Which combination out of b) and c) would be optimal for finishing this quicker (optimization)?
Q4) Is it always the case that NPROC_XI == NPROC_ETA and NEX_XI == NEX_ETA?

Q5) I have seen comments regarding the weak OpenMP support of this application, but I would still like to give it a try. During runs, do I need to change anything else (in the Par_file/input files) apart from setting OMP_NUM_THREADS to the desired value?

My objective is to test the scaling of this input across multiple nodes.
In case this post is not suitable for this thread, please let me know if there is a forum for this software where I can ask a few "noob"ish questions like these.

@danielpeter (Contributor)

An idea could be to look at the scripts used for the benchmarks on Summit (or Titan if you prefer):
https://github.com/SPECFEM/scaling-benchmarks/tree/master/Summit/SPECFEM3D_GLOBE

The scalings, especially weak scaling, are tricky since the mesh topology changes with size. In the past, normalizing the final runtimes by the number of elements has helped level out the loads.
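As an illustration of that normalization, a small sketch; the wall time and element count are placeholders (roughly the 2 h 42 m estimate and the crust/mantle plus inner-core element counts quoted earlier in this thread) and would be replaced by the measured values of each run:

WALLTIME_S=9755   # measured solver wall time in seconds (placeholder)
NELEM=152208      # total number of spectral elements reported by the mesher (placeholder)
awk -v t="$WALLTIME_S" -v n="$NELEM" 'BEGIN { printf "runtime per element: %.3e s\n", t/n }'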

In terms of NPROC and NEX values (NEX_XI must be equal to NEX_ETA), there is a table provided, for example, here:
https://github.com/geodynamics/specfem3d_globe/wiki/03_running_the_mesher

(The table looks better in the PDF version: https://github.com/geodynamics/specfem3d_globe/blob/master/doc/USER_MANUAL/manual_SPECFEM3D_GLOBE.pdf)
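As a quick sanity check before submitting a job, the required MPI size can be read off the Par_file; this sketch assumes the rule from the manual that the total number of processes is NCHUNKS * NPROC_XI * NPROC_ETA (NCHUNKS = 6 for a global mesh):

# multiply NCHUNKS, NPROC_XI and NPROC_ETA as read from the Par_file
awk -F= '/^NCHUNKS|^NPROC_XI|^NPROC_ETA/ { gsub(/[[:space:]]/,"",$2); p = (p=="" ? $2 : p*$2) } END { print "required MPI processes:", p }' DATA/Par_file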

For the questions:
Q1) Yes, the "problem size" is determined by the NEX values. A higher NEX (number of elements per chunk along the surface) increases the problem size. Values > 10,000 actually seem excessive, but try it out if you like.

Q2) Look at the output of ./bin/xcreate_header_file (which can be run locally without any job submission). It will output something like:

size of static arrays for all slices =    1594.6705760000000       MB
..

which is a rough estimate of the runtime memory requirements. For the file disk space, you would probably have to extrapolate from smaller runs.
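For example, just the estimate line can be pulled out of the tool's output (binary name and wording of the line taken from the output above):

./bin/xcreate_header_file | grep "size of static arrays"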

Q3) Roughly the same. In more detail, probably b), due to how the mesh topology will be constructed and changed for higher NEX. But again, NEX > 10,000 is probably an excessive load.

Q4) Yes.

Q5) Just compile the code with OpenMP support:

./configure --enable-openmp ..

along with the needed OpenMP compiler flags, and it will be turned on at runtime. And yes, the number of threads is steered by OMP_NUM_THREADS.
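A minimal sketch of an OpenMP-enabled build and run, assuming GNU compilers; if configure does not add the OpenMP flag by itself, it has to be added to the Fortran flags by hand, as mentioned above. The thread count of 4 is only an example:

./configure --enable-openmp FC=gfortran MPIFC=mpif90
make clean && make all
export OMP_NUM_THREADS=4    # threads per MPI process
mpirun -np 96 -ppn 48 bin/xspecfem3D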
