-
Update: Unfortunately I wasn't able to duplicate the above behavior. I put

```c
cgsize_t test_points[3] = {1, 2, 3};
int bc_index;  /* cg_boco_write returns the BC index through an int */
cg_boco_write(fn, B, Z, "TestBC", CGNS_ENUMV(BCTypeUserDefined),
              CGNS_ENUMV(PointList), 3, test_points, &bc_index);
```

right before the file was closed, and the `PointList` was written correctly. I'll leave this open in case anyone has an idea, and in case there's an answer to my first question.
-
It would be straightforward to implement a parallel version of `cg_boco_write`. Do you have, for example, a zone that is partitioned among multiple ranks? Do you have an example of how you are calling `cg_boco_write`? The issue with calling the serial version in parallel is that all the ranks need to call it with the same input parameters; for example, you could not have the `boconame` differ between ranks. HDF5 requires file-structure operations (creating a group, dataset, etc.) to be collective, i.e., you can't have one rank create a group the other ranks don't know about, and all ranks must create objects with the same dimensions. If you meet that criterion, then all the ranks will write the same data to the file. If the data differs between ranks, the final contents will be whichever rank wrote last.
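A minimal sketch of that constraint, assuming a file already opened with `cgp_open` and an existing base/zone (the helper function and BC name here are hypothetical):

```c
#include "pcgnslib.h"

/* Every rank calls the serial function with identical arguments:
 * same boconame, same npnts, same point list. Per-rank names would
 * violate HDF5's collective object-creation requirement. */
void write_bc_from_all_ranks(int fn, int B, int Z,
                             cgsize_t npnts, const cgsize_t *pnts)
{
    int bc_index;
    cg_boco_write(fn, B, Z, "Inlet", CGNS_ENUMV(BCTypeUserDefined),
                  CGNS_ENUMV(PointList), npnts, pnts, &bc_index);
}
```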
-
If all the ranks call `cg_boco_write` with the same parameters, it should work, so it might be a bug. Does it work correctly when using one rank? Correct, it would have a similar API to the other `cgp_*` write/read APIs. Remember, if you have a bunch of small BC datasets that each rank contributes to, the performance might be poor: parallel I/O performance is best with large data and the fewest I/O calls.
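For reference, the existing parallel element API splits collective node creation (`cgp_section_write`) from per-rank data writing (`cgp_elements_write_data`). A `cgp_boco_write` could mirror that split; the declarations below are a sketch only, not actual pcgnslib signatures:

```c
/* Hypothetical declarations, modeled on the cgp_section_write /
 * cgp_elements_write_data pattern; not actual pcgnslib API. */

/* Collective: every rank creates the BC node with identical metadata. */
int cgp_boco_write(int fn, int B, int Z, const char *boconame,
                   CGNS_ENUMT(BCType_t) bocotype,
                   CGNS_ENUMT(PointSetType_t) ptset_type,
                   cgsize_t npnts, int *BC);

/* Data phase: each rank writes its own range [start, end]
 * of the PointList. */
int cgp_boco_write_data(int fn, int B, int Z, int BC,
                        cgsize_t start, cgsize_t end,
                        const cgsize_t *pnts);
```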
-
Scott makes a good point. Ultimately, we have two options: (1) a dedicated parallel `cgp_boco_write` where each rank contributes its portion of the point list, or (2) having every rank call the serial `cg_boco_write` with identical data.
On the reader side within PETSc we presumably have the same two options, independent of the choice on the writer side. My guess is that option 2 is better, even though I initially pushed @jrwrigh towards 1, because I take Scott's point that performance-wise we may not see 1 being faster than 2 anyway.
-
`cg_boco_write` cannot do collective (or independent) I/O, as it always writes the entire dataset. If it is called by all the ranks with the same parameters, then only rank 0 will do the actual writing.
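Conceptually, the behavior described looks like the sketch below (a paraphrase of the semantics, not CGNS's actual source):

```c
#include <mpi.h>
#include "pcgnslib.h"

/* The BC node itself is created collectively, but only rank 0 writes
 * the raw PointList data; other ranks' data never reaches the file. */
static void serial_raw_write_semantics(MPI_Comm comm,
                                       cgsize_t npnts, const cgsize_t *pnts)
{
    int rank;
    MPI_Comm_rank(comm, &rank);
    if (rank == 0) {
        /* ... write all npnts entries of pnts into the dataset ... */
    }
}
```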
-
Regarding the CGNS build that worked versus the one that didn't: a few packages have newer versions in the failing build. Regardless, question 2 (the actual bug with `cg_boco_write` on a file opened via `cgp_open`) still stands.
-
Hey @brtnfld
-
Thanks for the response. I am flying a little blind here, as we don't have a reader yet for this file, and I am trying to do all I can to verify correctness. Note that I did use the "known" OpenMPI suppression file, but I can imagine these files may not cover everything. I will see if @cwsmith (who manages PRs on SCOREC/core) knows of any other suppression files for MPI-IO and/or HDF5 which might be needed to get a clean bill of health from valgrind, which is always nice to have.
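For anyone searching later, a typical invocation looks something like this (the suppression-file path and binary name are assumptions; OpenMPI installs `openmpi-valgrind.supp` under its own prefix, so adjust accordingly):

```sh
# Hypothetical prefix and binary name; OpenMPI ships this suppression file.
mpirun -n 4 valgrind \
  --suppressions=/opt/openmpi/share/openmpi/openmpi-valgrind.supp \
  ./cgns_writer
```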
-
In case someone finds this thread and wants to know what came of it, #730 provides this feature. |
-
I'm working on a program that writes out a mesh and boundary conditions to a new CGNS file in parallel. There are dedicated functions for writing node data in parallel, except for `cg_boco_write`.

First, how difficult would it be to write a `cgp_boco_write` function? It seems like the underlying infrastructure would be identical to `cgp_elements_write_data`, but with obvious changes to the node type, etc.

Second, should using `cg_boco_write` on a file that was opened using `cgp_open` work correctly? I'm currently running into issues when trying to do this; specifically, the `PointList` data is not written out (visible in cgnsview).

I can't find any example code that uses the parallel library together with the `cg_boco_*` functions. Most of the parallel library examples do call non-parallel functions (such as `cg_base_write`), but (and possibly crucially) I don't see any calls to non-parallel functions that write out array data to nodes. Is it possible that non-parallel library functions can't write arrays to files opened using `cgp_open`?

Here's a copy of that CGNS file (it's only 32 KB, but GitHub will only take a zipped version of the file): chefOut.cgns.gz

I'll see about creating an MWE of the issue tomorrow.

(CCing interested parties @jedbrown @KennethEJansen)
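For reference, a reproducer in the spirit of the MWE mentioned above might look like the sketch below; the file name, BC name, and the trivial one-cell zone are placeholders, not the actual contents of chefOut.cgns:

```c
#include <mpi.h>
#include "pcgnslib.h"

int main(int argc, char **argv)
{
    int fn, B, Z, bc_index;
    cgsize_t sizes[3] = {4, 1, 0};   /* unstructured zone: nverts, ncells, nbverts */
    cgsize_t pts[2]   = {1, 2};      /* 1-based PointList entries */

    MPI_Init(&argc, &argv);
    cgp_mpi_comm(MPI_COMM_WORLD);

    /* Open in parallel; base/zone creation is collective metadata. */
    cgp_open("mwe.cgns", CG_MODE_WRITE, &fn);
    cg_base_write(fn, "Base", 3, 3, &B);
    cg_zone_write(fn, B, "Zone", sizes, CGNS_ENUMV(Unstructured), &Z);

    /* The step in question: a serial BC write on a cgp_open'd file,
     * with every rank passing identical arguments. */
    cg_boco_write(fn, B, Z, "TestBC", CGNS_ENUMV(BCTypeUserDefined),
                  CGNS_ENUMV(PointList), 2, pts, &bc_index);

    cgp_close(fn);
    MPI_Finalize();
    return 0;
}
```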