Summary
I am working on experimental modifications of the VerletSplit class. So far, I have tested the use of run_style verlet/split on a number of example applications, and for one in particular I ran into an issue that appears to be fixed by a small change to the code. I would like to request that the developers responsible for VerletSplit.cpp make a change along the lines shown below in a future release to address this issue.

LAMMPS Version and Platform
LAMMPS (7 Feb 2024). I'm currently working on a local git branch that is not modified from this version.
Steps to Reproduce
I modified my local copy of lammps/examples/SPIN/dipole_spin/in.spin.iron_dipole_pppm in the following manner:

In order to be able to run the example on more than 8 processors with MPI, I changed
region box block 0.0 12.0 0.0 12.0 0.0 12.0
to
region box block 0.0 24.0 0.0 24.0 0.0 24.0
I added the line run_style verlet/split before the run statement.

I built lammps with the additional packages needed to run this application. My build.sh script (run from my lammps/build directory) appears as follows:
(I'm sure not all of these packages are needed for the SPIN application mentioned; a guess at a minimal configuration is sketched below.)
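For anyone reproducing this without my script, my guess at a minimal CMake configuration for this example is the following; I have not verified that it is the exact minimal package set:

# Hypothetical minimal configuration -- my actual build.sh enables more packages.
# SPIN provides the spin styles, KSPACE provides pppm, and REPLICA provides
# run_style verlet/split.
cmake -D BUILD_MPI=on -D PKG_SPIN=on -D PKG_KSPACE=on -D PKG_REPLICA=on ../cmake
cmake --build . -j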
When I run

$ mpirun -n 10 ~/lammps/build/lmp -partition 8 2 -in in.spin.iron_dipole_pppm

(8 ranks in the Rspace partition and 2 in the Kspace partition; verlet/split requires the first to be a multiple of the second), I got memory violation errors, which I traced to VerletSplit::rk_setup. After I replaced the line

MPI_Gatherv(atom->q,n,MPI_DOUBLE,atom->q,qsize,qdisp,MPI_DOUBLE,0,block);
with
if(atom->q_flag) MPI_Gatherv(atom->q,n,MPI_DOUBLE,atom->q,qsize,qdisp,MPI_DOUBLE,0,block);
(i.e., adding the if statement) and rebuilding, the simulation appears to run as expected. The rationale for the change is that when atom->q_flag is false, the per-atom charge array atom->q is not allocated or initialised on either the sending or the receiving side, so the gather reads and writes through invalid pointers.
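For clarity, here is the proposed change as a diff (the path is from my tree, where VerletSplit lives in the REPLICA package; surrounding context omitted):

--- a/src/REPLICA/verlet_split.cpp
+++ b/src/REPLICA/verlet_split.cpp
-  MPI_Gatherv(atom->q,n,MPI_DOUBLE,atom->q,qsize,qdisp,MPI_DOUBLE,0,block);
+  if (atom->q_flag)
+    MPI_Gatherv(atom->q,n,MPI_DOUBLE,atom->q,qsize,qdisp,MPI_DOUBLE,0,block);

Note that skipping a collective behind an if is only safe because atom->q_flag is set by the atom style and therefore has the same value on every rank of the block communicator, so no rank is left waiting inside the gather.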
Further Information, Files, and Links
While the fix provided above appears to work for this example, this ticket is a request that the developers maintaining the VerletSplit class address, in future releases, the more general issue: some of the MPI communications in this class assume per-atom arrays that not every application defines, and the same failure mode may exist in forms other than the one reported here. One possible shape for such a generalisation is sketched below.
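As a purely hypothetical sketch (this helper does not exist in LAMMPS; the name and placement are my invention), the guard could be factored so that every optional per-atom gather in VerletSplit goes through one place:

// Hypothetical helper inside VerletSplit: gather an optional per-atom
// double array only when the atom style actually allocates it. 'flag'
// comes from the atom style, so it is identical on all ranks of 'block'
// and the collective is skipped (or entered) by every rank consistently.
void VerletSplit::gatherv_optional(double *buf, int n, int flag)
{
  if (!flag) return;    // array not allocated on any rank: nothing to gather
  MPI_Gatherv(buf,n,MPI_DOUBLE,buf,qsize,qdisp,MPI_DOUBLE,0,block);
}

// usage in rk_setup(), replacing the unconditional charge gather:
//   gatherv_optional(atom->q, n, atom->q_flag);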