Problem: Running the examples using the parallel version #3

mo7ammedmostafa opened this issue May 31, 2019 · 5 comments

@mo7ammedmostafa commented May 31, 2019

Hello, my name is Kamra.
I came across your code while looking for an MPI-parallel research code that I could use to test new schemes and algorithms.
I downloaded the code and managed to compile it, but there seems to be a problem with the mesh reader when it reads the boundary file for the cavity and pitzDaily examples.
I get the following error:
```
At line 436 of file mesh_geometry_and_topology.f90 (unit = 7, file = 'processor0/constant/polyMesh/boundary')
Fortran runtime error: Bad integer for item 2 in list input

Error termination. Backtrace:
At line 436 of file mesh_geometry_and_topology.f90 (unit = 7, file = 'processor1/constant/polyMesh/boundary')
Fortran runtime error: Bad integer for item 2 in list input

Error termination. Backtrace:
#0 0x7ff209cd131a
#1 0x7ff209cd1ec5
#2 0x7ff209cd268d
#3 0x7ff209e3e924
#4 0x7ff209e41c1a
#5 0x7ff209e430f9
#6 0x55a6efbdfb0b
#7 0x55a6efbd61e8
#8 0x55a6efbb006e
#9 0x7ff209101b96
#10 0x55a6efbb00a9
#11 0xffffffffffffffff
#0 0x7f504480731a
#1 0x7f5044807ec5
#2 0x7f504480868d
#3 0x7f5044974924
#4 0x7f5044977c1a
#5 0x7f50449790f9
#6 0x559a1359eb0b
#7 0x559a135951e8
#8 0x559a1356f06e
#9 0x7f5043c37b96
#10 0x559a1356f0a9
#11 0xffffffffffffffff

Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.

mpiexec detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:

Process name: [[49949,1],1]
Exit code: 2
--------------------------------------------------------------------------
```

So do I just need to modify the boundary file from OpenFOAM format to your native format?

Thanks,
Kamra

@nikola-m (Owner) commented Jun 1, 2019 via email

@mo7ammedmostafa (Author) commented Jun 1, 2019

I am planning to implement a new scheme for high-order flux reconstruction.
Based on a quick read of the code, it looks like the necessary data structures will work, but I wanted to evaluate the parallel efficiency, because most of my test problems are 3D.
I already wrote my own code for incompressible Navier-Stokes on unstructured grids, but its parallel efficiency is terrible and would take a long time to optimize, so I was looking for a code as similar to my own as possible, but with better performance, to use and learn from.

@nikola-m (Owner) commented Jun 3, 2019 via email

@mo7ammedmostafa (Author)

I understand the difference between the two algorithms, but what I am interested in is the parallel efficiency of the implementation and data structure.
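For context, the kind of measurement in question is plain strong scaling: time the solver section on every rank, take the maximum across ranks, and compute efficiency as T(1)/(p·T(p)) over runs with increasing process counts p. A generic sketch of such a timing harness (not code from this repository; the timed section is a placeholder):

```fortran
! Generic strong-scaling timing sketch; not code from this repository.
program timing_sketch
  use mpi
  implicit none
  integer :: ierr, rank, nprocs
  double precision :: t0, t_local, t_max

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  call MPI_Barrier(MPI_COMM_WORLD, ierr)  ! align ranks before timing
  t0 = MPI_Wtime()

  ! ... solver iterations to be timed go here ...

  t_local = MPI_Wtime() - t0

  ! The slowest rank sets the wall-clock time of the parallel run.
  call MPI_Reduce(t_local, t_max, 1, MPI_DOUBLE_PRECISION, MPI_MAX, 0, &
                  MPI_COMM_WORLD, ierr)
  if (rank == 0) print *, 'nprocs =', nprocs, '  wall time [s] =', t_max

  call MPI_Finalize(ierr)
end program timing_sketch
```

Running this for p = 1, 2, 4, ... and tabulating T(1)/(p·T(p)) is usually enough to see whether communication or the data structures become the bottleneck.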

@nikola-m (Owner) commented Jun 15, 2019 via email
