
SPECFEM3D_Cartesian large memory simulation questions #1633

Open
adunham1 opened this issue Oct 16, 2023 · 3 comments

@adunham1

Hello SPECFEM developers,

I am an avid SPECFEM3D user and am currently working on ground motion simulations for scenario ~M9 earthquakes on the Cascadia Subduction Zone megathrust. The mesh I have created has ~65M elements and contains the Stephenson et al. (2017) Cascadia Community Velocity Model. I have a few questions related to large-memory simulations within SPECFEM3D_Cartesian:

  1. I am using kinematic earthquake sources with 500 m spacing, comprising ~400,000 point sources. The solver takes about 1 hr per run to read all of these sources in. Are there any flags to let the solver read them in parallel, or any other fixes, to reduce this time?

  2. Because there are so many sources, it is nearly impossible to use external source time functions with the current implementation (it would require ~400,000 files). Is there a way to change the source time function for each point source without an external file? If not, would this be a useful addition to SPECFEM3D? I am hoping to test source time functions such as the Brune and Yoffe functions.

  3. Finally, the CVM I am using is very detailed and takes ~5 GB of memory to read in within generate_databases, which limits the number of cores per node I can run my simulations on. Is there any way in SPECFEM3D to handle these very large ASCII files more efficiently, or does anyone have tips for reading in large, detailed velocity models? Is there any push to change the format of these files to something like NetCDF, to make reading the gridded datasets more efficient?
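To illustrate question 1: whatever form a parallel read took, each MPI rank would only need its own contiguous share of the source records. A minimal sketch of such a block partition (this is purely illustrative Python; SPECFEM3D is Fortran and, as far as I know, has no such flag today):

```python
def partition(n_sources, n_ranks, rank):
    """Return the half-open [start, stop) range of source indices a given
    rank would read, spreading the remainder over the first ranks."""
    base, rem = divmod(n_sources, n_ranks)
    start = rank * base + min(rank, rem)
    stop = start + base + (1 if rank < rem else 0)
    return start, stop

# Example: ~400,000 sources split over 256 ranks.
parts = [partition(400_000, 256, r) for r in range(256)]
assert parts[0][0] == 0 and parts[-1][1] == 400_000
# The slices tile the index range with no gaps or overlaps:
assert all(parts[i][1] == parts[i + 1][0] for i in range(255))
```

Each rank would then seek to its own slice of the source file, turning one ~1 hr serial read into 256 much smaller concurrent ones (I/O bandwidth permitting).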
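On question 2, a Brune source time function needs no external file at all, since it is fully defined by a seismic moment and a rise-time parameter. A sketch of the moment-rate form (an illustration of the function itself, not of any existing SPECFEM3D option):

```python
import numpy as np

def brune_moment_rate(t, m0, tau):
    """Brune (1970) moment-rate function m0 * (t / tau**2) * exp(-t / tau).
    It integrates to m0 over [0, inf) and peaks at t = tau."""
    t = np.asarray(t, dtype=float)
    return m0 * (t / tau**2) * np.exp(-t / tau)

# Sanity check: the total released moment should approach m0.
t = np.linspace(0.0, 50.0, 200_001)           # long window; tail is negligible
stf = brune_moment_rate(t, m0=1.0, tau=1.0)
dt = t[1] - t[0]
released = np.sum(0.5 * (stf[1:] + stf[:-1])) * dt   # trapezoidal integral
```

Per-point values of `m0` and `tau` could come from the existing kinematic source description, so no extra files would be needed.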
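On question 3, pending any format change in SPECFEM3D itself, one workaround is to convert the ASCII grid to a flat binary file once and memory-map it afterwards, so each process pages in only the slices it touches rather than holding the whole ~5 GB grid. A sketch with NumPy (file names and the 6-column x, y, z, vp, vs, rho layout are assumptions; adjust to the actual tomography file):

```python
import numpy as np

# Tiny synthetic stand-in for a (much larger) ASCII tomography grid;
# columns assumed to be x, y, z, vp, vs, rho.
rows = np.column_stack([np.arange(12.0)] * 6)
np.savetxt("model.xyz", rows)

# One-time conversion: parse the ASCII grid into a compact binary file.
np.save("model.npy", np.loadtxt("model.xyz"))

# Later reads memory-map the binary file instead of re-parsing ASCII:
model = np.load("model.npy", mmap_mode="r")
vp_column = model[:, 3]          # no full in-memory copy is made
```

Binary floats are also several times smaller than their ASCII representation, which helps both disk footprint and read time.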

Thanks so much for your help and I look forward to discussing these issues!

@homnath

homnath commented Oct 17, 2023 via email

@adunham1
Author

Thanks Hom! It would be great to implement at least the Brune function as an STF option. I know this is something others would want to use as well, so perhaps this is a change the developers could add?

Changing the input format of the tomography files would be very beneficial and would significantly decrease my memory usage for these large runs. Please keep me updated on when this change (as well as the STF option) might be made. I will definitely attend the Wednesday Zoom next month!

@homnath

homnath commented Nov 7, 2023 via email
