
Fix memory leak in base_class.py #1908

Merged
merged 6 commits into DLR-RM:master May 15, 2024

Conversation

@peteole (Contributor) commented Apr 23, 2024

Description

Loading the data return value is not necessary since it is unused. Loading the data causes a memory leak through the ep_info_buffer variable. I found this while loading a PPO learner from storage on a multi-GPU system: the ep_info_buffer is restored to the GPU it was on when the model was saved to disk, instead of the target loading device, and is then never cleaned up.
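For context, the idea behind the change, as a minimal sketch against the public save_util helper rather than the exact base_class.py call site (which may differ slightly):

from stable_baselines3.common.save_util import load_from_zip_file

# With load_data=False the `data` archive entry (which contains ep_info_buffer)
# is never unpickled, so tensors saved from another GPU are not re-materialized.
# Only the parameters and pytorch variables, which set_parameters actually uses,
# are loaded. (Assumption: `data` comes back as None in this case.)
data, params, pytorch_variables = load_from_zip_file(
    "ppo_cartpole.zip", device="cuda:3", load_data=False
)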

Motivation and Context

  • I have raised an issue to propose this change (required for new features and bug fixes)

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)

Checklist

  • I've read the CONTRIBUTION guide (required)
  • I have updated the changelog accordingly (required).
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.
  • I have opened an associated PR on the SB3-Contrib repository (if necessary)
  • I have opened an associated PR on the RL-Zoo3 repository (if necessary)
  • I have reformatted the code using make format (required)
  • I have checked the codestyle using make check-codestyle and make lint (required)
  • I have ensured make pytest and make type both pass. (required)
  • I have checked that the documentation builds using make doc (required)

Note: You can run most of the checks using make commit-checks.

Note: we are using a maximum length of 127 characters per line

peteole and others added 3 commits April 23, 2024 19:23
@araffin self-requested a review May 3, 2024 15:29
@araffin (Member) commented May 6, 2024

Loading the data causes a memory leak through the ep_info_buffer variable.

Do you have a minimal example to reproduce/track this behavior? Also, how big is the leak?

@araffin added the "more information needed" label May 6, 2024
@peteole (Contributor, Author) commented May 7, 2024

Do you have a minimal example to reproduce/track this behavior? Also, how big is the leak?

Hi, I managed to get this working example:

import time

import gymnasium as gym
import torch

from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

# Parallel environments
train = True
vec_env = make_vec_env("CartPole-v1", n_envs=2)

if train:
    model = PPO("MlpPolicy", vec_env, verbose=1, device="cuda:2")
    model.learn(total_timesteps=1)
    # Put a GPU tensor into the episode info buffer to make the leak visible
    model.ep_info_buffer.extend([torch.ones(10000, device="cuda:2")])
    model.save("ppo_cartpole")
    del model

# Load on a different GPU than the one used for training
model = PPO("MlpPolicy", vec_env, verbose=1, device="cuda:3")
model.set_parameters("ppo_cartpole.zip", device="cuda:3")
torch.cuda.empty_cache()

print("watch gpustat now")
time.sleep(10)
print("memory on ep info device:", torch.cuda.memory_allocated("cuda:2"))  # prints 0

Adjust the CUDA devices according to your machine. Run once with train=True, then set it to False to make sure the GPU usage comes from loading rather than training.

Running gpustat shows that cuda:2 is in use with roughly 154 MB, and the same goes for cuda:3. However, torch.cuda.memory_allocated("cuda:2") prints 0. So maybe cuda:2 is being initialized for torch even though it does not have to be?

Anyway, setting load_data=False should be safe since the data return value is unused, and it fixes the problem: with this option, gpustat shows that only cuda:3 is used, as intended.
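For reference, the device behavior can be reproduced without SB3 at all; a minimal sketch (the device ids are placeholders for a multi-GPU machine):

import torch

# Pickled CUDA tensors are restored to the device they were saved from unless a
# map_location is supplied, which also initializes that device's CUDA context.
t = torch.ones(10000, device="cuda:2")
torch.save({"buffer": [t]}, "buffer.pt")

restored = torch.load("buffer.pt")
print(restored["buffer"][0].device)  # cuda:2 -> the original device gets touched

remapped = torch.load("buffer.pt", map_location="cuda:3")
print(remapped["buffer"][0].device)  # cuda:3 -> this load does not touch cuda:2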

@peteole (Contributor, Author) commented May 7, 2024

Here is the output of watch gpustat without the change:

Every 2.0s: gpustat                      myserver: Tue May  7 11:13:36 2024

myserver                   Tue May  7 11:13:36 2024  550.78
[0] NVIDIA GeForce RTX 2080 Ti | 36'C,   0 % |   262 / 11264 MB |
[1] NVIDIA GeForce RTX 2080 Ti | 35'C,   0 % |   262 / 11264 MB |
[2] NVIDIA GeForce RTX 2080 Ti | 31'C,   0 % |   416 / 11264 MB | ole(154M)
[3] NVIDIA GeForce RTX 2080 Ti | 34'C,   0 % |   418 / 11264 MB | ole(156M)
[4] NVIDIA GeForce RTX 2080 Ti | 33'C,   0 % |   262 / 11264 MB |
[5] NVIDIA GeForce RTX 2080 Ti | 33'C,   0 % |   262 / 11264 MB |
[6] NVIDIA GeForce RTX 2080 Ti | 33'C,   0 % |   262 / 11264 MB |
[7] NVIDIA GeForce RTX 2080 Ti | 33'C,   0 % |   262 / 11264 MB |

@peteole (Contributor, Author) commented May 12, 2024

Is there anything I should still do for this to be merged?

@araffin (Member) commented May 13, 2024

Is there anything I should still do for this to be merged?

From your side, nothing for now ;)

What is missing is on my side: I need to take some time to reproduce the problem and check that nothing else would be impacted.

@araffin removed the "more information needed" label May 13, 2024
@araffin (Member) commented May 13, 2024

I took the time to look at your example closely, but I don't understand model.ep_info_buffer.extend([torch.ones(10000, device="cuda:2")]); this buffer is not supposed to contain any torch variables.

Btw, PPO is usually faster when everything is on CPU (when not using a CNN).

Running gpustat shows that cuda:2 is in use with roughly 154 MB, and the same goes for cuda:3. However, torch.cuda.memory_allocated("cuda:2") prints 0. So maybe cuda:2 is being initialized for torch even though it does not have to be?

I'm also not sure I understand this.
What is the expected result: memory allocated on gpu:3 but no memory allocated on gpu:2?

@peteole (Contributor, Author) commented May 14, 2024

I took the time to look at your example closely, but I don't understand model.ep_info_buffer.extend([torch.ones(10000, device="cuda:2")]); this buffer is not supposed to contain any torch variables.

Oh... I am using an internal framework built on Stable-Baselines3. I do not entirely understand what it is doing, but apparently we have a use case for putting torch variables into this buffer, so I did the same to reproduce the issue.

Btw, PPO is usually faster when everything is on CPU (when not using a CNN).

Thanks for the tip!

Running gpustat shows that cuda:2 is in use with roughly 154 MB, and the same goes for cuda:3. However, torch.cuda.memory_allocated("cuda:2") prints 0. So maybe cuda:2 is being initialized for torch even though it does not have to be?

I'm also not sure I understand this. What is the expected result: memory allocated on gpu:3 but no memory allocated on gpu:2?

Exactly! I do not want to touch the GPU that was used for training when doing inference.

From my understanding, there are two issues here:

  1. We should not put torch variables into the episode info buffer in our internal framework (a possible workaround is sketched below).
  2. The episode info buffer is loaded from storage but then never used. Normally this is not a problem, but if you happen to misuse the info buffer with torch variables, a GPU gets activated unexpectedly, allocating about 150 MB of memory.

While it is arguable that 2. is only a problem because of 1., loading the data is superfluous since the data return value is unused; I do not see how it would hurt to merge this PR.
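For completeness, a minimal sketch of a framework-side workaround for point 1, assuming the values appended are per-episode metrics (the helper name is made up for illustration, not SB3 API):

import torch


def to_plain(value):
    """Return a CPU/Python representation of `value` suitable for ep_info_buffer."""
    if isinstance(value, torch.Tensor):
        # .item() for scalars, .tolist() otherwise; either way the result lives on the CPU,
        # so the saved model no longer pins GPU memory through the buffer.
        return value.item() if value.numel() == 1 else value.detach().cpu().tolist()
    return value


# Hypothetical usage: instead of
#   model.ep_info_buffer.extend([torch.ones(10000, device="cuda:2")])
# do
#   model.ep_info_buffer.extend([to_plain(torch.ones(10000, device="cuda:2"))])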

@araffin (Member) left a review:
LGTM, thanks =)

@araffin merged commit 6c00565 into DLR-RM:master May 15, 2024
4 checks passed