
Possible memory leak when collecting events for profiling #625

Open
kif opened this issue Oct 5, 2022 · 5 comments

@kif
Contributor

kif commented Oct 5, 2022

Describe the bug
Very large (host) memory consumption has been observed when running an OpenCL application in profiling mode.
Example: processing 10000 4-Mpix images (int32) with ~6 kernels per image on an Nvidia Tesla A40 gets (OOM-)killed on a computer with 200 GB of memory, even though the computer could host all images, uncompressed, in memory.

I used Python's tracemalloc tool on the application without finding a noticeable leak at the Python level, indicating that the leak comes from malloc calls performed outside the scope of Python. I investigated a possible leak coming from HDF5 via h5py, since all data were read and written in this format, but this was not the case.

When profiling is disabled, memory consumption does not exceed a few percent of the total memory.

To Reproduce
Investigated in:
silx-kit/pyFAI#1744

Expected behavior
Some memory growth is expected from keeping the list of all events, but it should not exceed 3.4 MB for 60000 kernels (when stored as 2-field namedtuples).
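For scale, here is a minimal sketch (hypothetical, not the silx code) that stores one 2-field namedtuple per kernel and measures the rough footprint with `sys.getsizeof`, to check the 3.4 MB order-of-magnitude estimate above:

```python
import sys
from collections import namedtuple

# Hypothetical lightweight record: kernel label + duration, instead of
# retaining the full pyopencl.Event object for each launch.
ProfileEntry = namedtuple("ProfileEntry", ["name", "duration_ns"])

entries = [ProfileEntry("kernel", 123456) for _ in range(60000)]

# Rough footprint: the list's pointer array plus one namedtuple per entry
# (the shared interned string "kernel" is not counted per entry).
approx_bytes = sys.getsizeof(entries) + sum(sys.getsizeof(e) for e in entries)
print(f"~{approx_bytes / 1e6:.1f} MB for {len(entries)} entries")
```

On 64-bit CPython this comes out in the low single-digit megabytes, consistent with the expectation that retaining timing records alone cannot explain tens of gigabytes of growth.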

Environment (please complete the following information):

  • OS: Linux Debian11 + Ubuntu 20.04
  • ICD Loader and version: ocl-icd 2.2.14-2
  • ICD and version: Nvidia 470.141.03-1~deb11u1
  • CPU/GPU: Nvidia Titan V & Tesla A40
  • Python version: 3.9 + 3.9
  • PyOpenCL version: 2021.1.2 + 2021.2.13

Additional context
The list of events is handled at https://github.com/silx-kit/silx/blob/master/src/silx/opencl/processing.py#L288

@kif kif added the bug label Oct 5, 2022
@inducer
Owner

inducer commented Oct 5, 2022

Can something like valgrind maybe provide details on where those allocations are taking place?

@kif
Contributor Author

kif commented Oct 5, 2022

Here are the valgrind "massif" profiles for two runs of the program on a limited number of images (2000), with and without profiling activated. Valgrind still suggests looking at h5py rather than PyOpenCL, but toggling the profiling option makes a 16 GB difference in memory consumption.

Without profiling:
[screenshot: massif memory profile, profiling disabled]

With profiling:
[screenshot: massif memory profile, profiling enabled]

@kif
Contributor Author

kif commented Oct 5, 2022

I ran it several more times and it looks like OpenCL profiling prevents the memory from being freed.

@kif
Contributor Author

kif commented Oct 6, 2022

So I tried to collect only the timestamps for each event instead of retaining the complete event objects.
The patch is for now implemented in:
silx-kit/silx#3690

The memory profile now looks like this. One would have expected ten memory releases (since 10 files are processed), but fewer are visible.

[screenshot: massif memory profile with timestamp-only collection]
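The idea behind the patch can be sketched as follows. This is a hypothetical illustration, not the actual silx code, and `FakeEvent` is a stub standing in for a `pyopencl.Event` (whose `.profile.start` and `.profile.end` are only valid once the command has completed): read the timestamps once, keep them in a small tuple, and drop the event reference so the runtime can release whatever it holds per event.

```python
from collections import namedtuple

# Stub standing in for a pyopencl.Event from a queue created with
# properties=cl.command_queue_properties.PROFILING_ENABLE.
class FakeProfile:
    def __init__(self, start, end):
        self.start, self.end = start, end

class FakeEvent:
    def __init__(self, start, end):
        self.profile = FakeProfile(start, end)
    def wait(self):
        pass  # the real call blocks until the kernel has finished

Timing = namedtuple("Timing", ["start_ns", "end_ns"])

timings = []
for i in range(3):
    evt = FakeEvent(start=i * 100, end=i * 100 + 42)  # e.g. evt = kernel(queue, ...)
    evt.wait()  # profiling counters are only valid after completion
    timings.append(Timing(evt.profile.start, evt.profile.end))
    # evt is rebound/discarded on the next iteration: nothing retains the
    # event object itself, only the two integers extracted from it.

durations = [t.end_ns - t.start_ns for t in timings]
print(durations)
```

The key point is that the long-lived list holds plain integers rather than event objects, so any driver-side allocations tied to the event's lifetime can be reclaimed as soon as the event is garbage-collected.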

@kif
Contributor Author

kif commented Oct 18, 2022

I ran into something similar in another project ... but profiling was not involved this time.
https://github.com/kif/multianalyzer/blob/main/multianalyzer/opencl.py
The pattern was similar: read data from an HDF5 file with large chunks and send them to the GPU ...
But once again, I was unable to reproduce the behaviour within a self-contained script.
Calling the finalize method of pyopencl.array helps to free the memory on the CPU.
