incorrect correction factor in attenuation model #699
Comments
it’s likely due to an unrealistic reference frequency or Q value of your model.
thank you for the quick reply.
ok, you're using a default model. could you try to update your code to use the current devel version:
and see if this fixes it?
ok, i can try with this branch.
i am getting a compilation error with this devel branch -
yes, "black lives matter" - please change in your Par_file:
to
thank you. another error i got -
SAVE_AZIMUTHAL_ANISO_KL_ONLY = .false.
correct. just as a side remark, the flag
i did not see the attenuation error.
I noticed that during mesh generation, ~1.3T of data was generated. I hope this issue is related to insufficient memory.
okay, so this newer version looks better. it's more likely an issue with the compiler, not the memory or disk space. the original s362ani routines contained some pretty old fortran77 statements. this was fine with older compiler versions. i think newer compilers have become more stringent and won't initialize variables the same way as older versions anymore. this can lead to issues with those older code segments - looks like we have that fixed now in the more recent devel version. will need to release a newer tar-file version then...
i will try with a different compiler to see if i get the same issue. i have a few questions regarding the input i am using to benchmark my system. i plan to skim through the user manual for usage information, but here are a few questions off the top of my head.
i plan to scale this simulation to more processes (/ more nodes), but this setup restricts me to launching only 96 processes.
case c)
i have the following doubts - Q5) I have seen comments regarding the weak openmp support of this application, but i would still like to give it a try. during runs, do i need to change anything else (in the Par_file / input files) apart from setting OMP_NUM_THREADS to the desired value? my objective is to test scaling of this input across multiple nodes.
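As context for the 96-process restriction mentioned above: full-globe SPECFEM3D_GLOBE runs decompose the sphere into 6 cubed-sphere chunks, one mesh slice per MPI process per chunk subdivision, so the total process count is 6 × NPROC_XI × NPROC_ETA. A minimal sketch (the Par_file values NPROC_XI = NPROC_ETA = 4 below are an assumption, chosen because they reproduce the 96-process figure):

```python
def total_mpi_procs(nproc_xi: int, nproc_eta: int, nchunks: int = 6) -> int:
    """Total MPI processes for a run: one process per mesh slice,
    nchunks * NPROC_XI * NPROC_ETA slices in total (nchunks = 6 for
    a full-globe simulation)."""
    return nchunks * nproc_xi * nproc_eta

# assumed current setup: NPROC_XI = NPROC_ETA = 4
print(total_mpi_procs(4, 4))   # -> 96
# scaling up, e.g. NPROC_XI = NPROC_ETA = 8
print(total_mpi_procs(8, 8))   # -> 384
```

To launch more processes, both NPROC values (and a compatible NEX) have to be raised together; the NPROC/NEX table in the user manual lists valid combinations.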
an idea could be to look at the scripts used for the benchmarks on Summit (or Titan if you prefer): scaling tests, especially weak scaling, are tricky since the mesh topology changes with size. in the past, normalizing final runtimes by the number of elements has helped level out the loads. in terms of NPROC and NEX values (NEX_XI must be equal to NEX_ETA), there is a table provided for example here: (well, the table looks better in the pdf version https://github.com/geodynamics/specfem3d_globe/blob/master/doc/USER_MANUAL/manual_SPECFEM3D_GLOBE.pdf) for the questions: Q2) look at the output of
which is a rough estimate of the runtime memory requirements. for the file disk space, you would probably have to extrapolate from smaller runs. Q3) roughly the same. in more detail, probably b) due to how the mesh topology will be constructed and changed for higher NEX. but again, NEX > 10,000 is probably an excessive load. Q4) yes. Q5) just compile the code with OpenMP:
and the needed OpenMP compiler flags, and it will be turned on at runtime. and yes, it is steered by OMP_NUM_THREADS.
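To make that concrete, a hedged build-and-run sketch (the configure flag and compiler names are assumptions for your version and environment; check `./configure --help` for the exact OpenMP option):

```shell
# configure with OpenMP enabled (flag name is an assumption, verify
# against ./configure --help for your checkout)
./configure FC=gfortran MPIFC=mpif90 --enable-openmp
make clean
make all

# hybrid MPI + OpenMP run: OMP_NUM_THREADS sets the thread count per
# MPI rank; no Par_file change is needed beyond the usual NPROC settings
export OMP_NUM_THREADS=4
mpirun -np 96 ./bin/xspecfem3D
```

Note that the total core count is then (MPI ranks) × OMP_NUM_THREADS, so the node allocation has to grow accordingly.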
I am trying to run the specfem3d model with test case A and I am getting the following errors -
in the source code, i was able to see that the error message should be thrown only
if (scale_factor < 0.8d0 .or. scale_factor > 1.2d0) .
in my case, all the reported values on stderr are close to 8.51.
here are the run commands -
Please let me know if i can provide any further information on this issue.
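To illustrate the range check quoted above together with the earlier hint about an unrealistic reference frequency or Q value, here is a hedged sketch using a physical-dispersion correction of the standard 1 + (2/(πQ)) ln(f/f_ref) form. This is an illustrative formula and hypothetical function names, not the actual SPECFEM3D_GLOBE routine:

```python
import math

def dispersion_scale_factor(q_mu: float, f: float, f_ref: float) -> float:
    # illustrative physical-dispersion correction rescaling a modulus
    # from the reference frequency f_ref to the frequency f
    return 1.0 + (2.0 / (math.pi * q_mu)) * math.log(f / f_ref)

def passes_check(scale_factor: float) -> bool:
    # mirrors the Fortran sanity check quoted above:
    # if (scale_factor < 0.8d0 .or. scale_factor > 1.2d0) -> error
    return 0.8 <= scale_factor <= 1.2

# realistic Q and a modest frequency shift keep the factor near 1
print(passes_check(dispersion_scale_factor(600.0, 1.0, 0.5)))    # True
# an unrealistically small Q (or wildly wrong f_ref) pushes it far outside
print(passes_check(dispersion_scale_factor(0.2, 1.0, 0.001)))    # False
```

With reported values around 8.51, the correction term itself would have to be of order 7.5, which in a formula of this kind requires a very small Q or a reference frequency many orders of magnitude off - consistent with the first reply in this thread.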