
Memory leak when running long GPU inferences #487

Open
RandomDefaultUser opened this issue Sep 28, 2023 · 0 comments
Labels: bug (Something isn't working), important
RandomDefaultUser (Member) commented:
I have encountered odd behavior when running long series of GPU inferences (over 200 in a row) on the hemera GPUs. After around 200-300 inferences, I get a memory overload error that seems to originate somewhere on the Python side. This looks like a memory leak.

I have tried to identify the problem, but standard profiling and investigation did not give much insight. I suspect it could be related to our LAMMPS interface, but I cannot confirm this yet. I will have to investigate further.
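
A minimal way to narrow this down might be to log host and GPU memory every few iterations of the inference loop and see which counter grows. This is only a sketch assuming a PyTorch/CUDA setup; `run_inference()` is a placeholder for whatever single-inference call is actually being repeated:

```python
import gc
import tracemalloc

import torch


def run_inference():
    """Placeholder for one full GPU inference; swap in the actual call."""
    pass


tracemalloc.start()

for i in range(300):
    run_inference()

    # Release unreferenced Python objects and cached CUDA blocks before
    # reading the counters, so only genuinely retained memory shows up.
    gc.collect()
    torch.cuda.empty_cache()

    if i % 10 == 0:
        host_current, host_peak = tracemalloc.get_traced_memory()
        gpu_allocated = torch.cuda.memory_allocated()
        print(f"iter {i:4d}: host {host_current / 1e6:8.1f} MB "
              f"(peak {host_peak / 1e6:8.1f} MB), "
              f"GPU {gpu_allocated / 1e6:8.1f} MB")
```

A steadily rising host counter with flat GPU usage would point at the Python/LAMMPS side rather than at CUDA allocations, and vice versa.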

RandomDefaultUser added the bug (Something isn't working) and important labels on Sep 28, 2023
RandomDefaultUser self-assigned this on Sep 28, 2023