Hi there,

I am trying to write out the data in an SPH particle container derived from `NeighborParticleContainer`.
There are two types of particles, bound and fluid, which are distinguished by an int marker named `mk`. I can write them out together using (approach 1):

```cpp
this->WritePlotFile(filename, "SPHParticles", {1, 1, 1, 1}, {1},
                    {"velx", "velz", "rho", "press"}, {"mk"});
```
which works on both CPU and GPU.
I would also like to write out the data separately for bound and fluid particles by passing a lambda function (approach 2), i.e.:

```cpp
//- Bound particles
this->WritePlotFile(filename, "Bound", {1, 1, 1, 1}, {1},
                    {"velx", "velz", "rho", "press"}, {"mk"},
                    [] AMREX_GPU_DEVICE (const auto& p) {
                        return p.id() > 0 && p.idata(AOSInt::mk) > 10;
                    });

//- Fluid particles
this->WritePlotFile(filename, "Fluid", {1, 1, 1, 1}, {1},
                    {"velx", "velz", "rho", "press"}, {"mk"},
                    [] AMREX_GPU_DEVICE (const auto& p) {
                        return p.id() > 0 && p.idata(AOSInt::mk) <= 10;
                    });
```
which works on CPU but behaves strangely on GPU.
Please find the simulation results for outputting together (left) and outputting separately (right).
myvideo.mp4
Please note that on the right, some particles are initially missing, and at later times some spurious particles appear. Interestingly, the bound particles always seem correct.
I also tried adding `amrex::Gpu::streamSynchronize();` before the output of the fluid particles, but it did not change anything.
Any suggestion is appreciated.
Best regards,
Cong
It's possible you've found a bug in one of our IO routines. Could you also try running your executable with the `CUDA_LAUNCH_BLOCKING=1` environment variable set (assuming you're running on NVIDIA), and see if the problem still persists? This has the effect of inserting a sync after every kernel launch, which could help determine whether a synchronization call is missing internally.
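For reference, the variable can be scoped to a single run by assigning it inline on the command line; the child process sees it, subsequent commands do not. (Demonstrated here with `sh -c echo`; in practice the command would be the SPH executable, e.g. `CUDA_LAUNCH_BLOCKING=1 ./main3d.gnu.CUDA.ex inputs`, where the executable name is hypothetical.)

```shell
# Inline assignment scopes CUDA_LAUNCH_BLOCKING to this one invocation.
CUDA_LAUNCH_BLOCKING=1 sh -c 'echo "CUDA_LAUNCH_BLOCKING=$CUDA_LAUNCH_BLOCKING"'
# prints CUDA_LAUNCH_BLOCKING=1
```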
Yes, I am running on an NVIDIA GPU. I have re-run the executable with CUDA_LAUNCH_BLOCKING=1 and the problem persists.
By chance, this time I used a smaller value for `max_grid_size` (8; previously it was perhaps 128?), and now the main issue can be observed at the border of each grid. Also, the bound particles are no longer correct.
myvideo.mp4
Hope this provides extra hints, @atmyers. Thank you.