
GPU memory grows linearly without torch.cuda.empty_cache() #9259

Open · shaochengyan opened this issue Apr 30, 2024 · 1 comment

shaochengyan commented Apr 30, 2024

🐛 Describe the bug

My edge_update function simply adds an edge feature, and I paused the debugger at the edge_updater call (line 42 of my script, shown below).

[screenshot: code in the debugger, stopped at the edge_updater call]

Next, I will show how the GPU memory grows step by step.

Step 1: check the GPU memory (nvidia-smi)

[screenshot: nvidia-smi output before running the call]

Step 2: run the code below (in VSCode debug mode)

out = self.edge_updater(edge_index, feat1=feat1, feat2=feat2)

GPU memory after the call:

[screenshot: nvidia-smi output after one call]

Step 3: re-run that line three more times and check GPU memory after each run

[screenshots: nvidia-smi output after each of the three re-runs, with memory higher each time]

Step 4: run torch.cuda.empty_cache()

Then the GPU memory goes back to normal!

[screenshot: nvidia-smi output after empty_cache(), memory released]

Could this be caused by the message function?
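For scale, here is a back-of-the-envelope estimate I am adding (it assumes edge_updater materializes feat1_i and feat2_j as dense per-edge tensors, which I have not verified against PyG internals):

# Rough size of one gathered per-edge tensor, using the sizes from the
# reproduction script below (assumption: feat1_i / feat2_j are gathered densely).
N = 5000 * 8             # 40,000 nodes
dim = 128
num_edge = N * 100       # 4,000,000 edges
bytes_per_gather = num_edge * dim * 4   # float32
print(bytes_per_gather / 1024**3)       # ~1.91 GiB each for feat1_i and feat2_j

If the caching allocator keeps those blocks reserved after each call, that would be consistent with nvidia-smi appearing to grow by a few GiB per run.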

Below is my full code. The filename is TEST_001_Efficiency_of_PyG.py, and you can run it with:

python TEST_001_Efficiency_of_PyG.py --is_use_pyg
import argparse
import time
from typing import Any

import torch
from pytorch_memlab import MemReporter, LineProfiler, profile

# PyG
from torch_geometric.nn import MessagePassing, AGNNConv
from torch_geometric.nn.aggr import Aggregation
from torch_geometric.utils import softmax


parser = argparse.ArgumentParser(description="Compare the PyG edge_updater path against a dense baseline.")
parser.add_argument('--is_use_pyg', action='store_true', help='use the PyG MessagePassing path')

config = parser.parse_args()

is_use_pyg = config.is_use_pyg
print(is_use_pyg)


if not is_use_pyg:
    # Dense baseline: compute all pairwise dot products, then gather one per edge.
    def dense_multiply(feat1: torch.Tensor, feat2: torch.Tensor, edge_index: torch.Tensor):
        dense_mult = torch.einsum("mi,pi->mp", feat1, feat2)
        return dense_mult[edge_index[1], edge_index[0]][..., None]

if is_use_pyg:
    class SparseMultiply(MessagePassing):
        def __init__(self, aggr: str | list[str] | Aggregation | None = 'sum', *,
                     aggr_kwargs: dict[str, Any] | None = None,
                     flow: str = "source_to_target", node_dim: int = -2,
                     decomposed_layers: int = 1) -> None:
            super().__init__(aggr, aggr_kwargs=aggr_kwargs, flow=flow,
                             node_dim=node_dim, decomposed_layers=decomposed_layers)

        def forward(self, feat1, feat2, edge_index):
            out = self.edge_updater(edge_index, feat1=feat1, feat2=feat2)
            return out

        def edge_update(self, feat1_i, feat2_j) -> torch.Tensor:
            # Per-edge dot product between target (i) and source (j) features.
            return torch.sum(feat1_i * feat2_j, dim=-1, keepdim=True)

else:
    def SparseMultiply():
        # Placeholder so SparseMultiply().cuda() below still works in the dense path.
        return torch.tensor([0])
        


if __name__=="__main__":
    N = 5000*8
    # dim = 32
    dim = 128
    num_edge = int(N * 100)  
    sm = SparseMultiply().cuda()

    feat = torch.rand(N, dim).cuda()
    edge_index = torch.randint(low=0, high=N, size=(2, num_edge)).cuda()


    # reporter = MemReporter()
    time_all = 0
    if is_use_pyg:
        t1 = time.time()
        for i in range(10):
            out2 = sm.forward(feat, feat, edge_index)
            torch.cuda.empty_cache()
            # print(out2)
        time_all += (time.time() - t1)
    else:
        t1 = time.time()
        for i in range(10):
            out1 = dense_multiply(feat, feat, edge_index)
            # print(out1)
        time_all += (time.time() - t1)
    
    print("TimeAll=", time_all)


"""
python TEST_001_Efficiency_of_PyG.py
python TEST_001_Efficiency_of_PyG.py --is_use_pyg
"""

Versions

For security purposes, please check the contents of collect_env.py before running it.

python3 collect_env.py
Collecting environment information...
PyTorch version: 2.2.2+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-2ubuntu1~20.04) 11.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.28.0
Libc version: glibc-2.31

Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA Graphics Device
Nvidia driver version: 520.61.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i7-13700KF
Stepping: 1
CPU MHz: 3993.227
CPU max MHz: 5400.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 256 KiB
L2 cache: 16 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] pytorch-memlab==0.3.0
[pip3] torch==2.2.2+cu118
[pip3] torch_cluster==1.6.3+pt22cu118
[pip3] torch-geometric==2.6.0
[pip3] torch_scatter==2.1.2+pt22cu118
[pip3] torch_sparse==0.6.18+pt22cu118
[pip3] torch_spline_conv==1.2.2+pt22cu118
[pip3] torchaudio==2.2.2+cu118
[pip3] torchvision==0.17.2+cu118
[pip3] triton==2.2.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] pytorch-memlab 0.3.0 pypi_0 pypi
[conda] torch 2.2.2+cu118 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt22cu118 pypi_0 pypi
[conda] torch-geometric 2.6.0 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt22cu118 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt22cu118 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt22cu118 pypi_0 pypi
[conda] torchaudio 2.2.2+cu118 pypi_0 pypi
[conda] torchvision 0.17.2+cu118 pypi_0 pypi
[conda] triton 2.2.0 pypi_0 pypi

rusty1s (Member) commented May 2, 2024

I am not sure if nvidia-smi is the best way to measure this. I think all memory is correctly freed when running:

out2 = sm.forward(feat, feat, edge_index)
print('----------')
import gc
for obj in gc.get_objects():
    if isinstance(obj, torch.Tensor) and obj.is_cuda:
        print(obj.size())
print('----------')
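A complementary check along the same lines (a sketch using only public torch.cuda APIs, not from the original report): compare what PyTorch has actually allocated against what its caching allocator has reserved from the driver. nvidia-smi only sees the reserved pool, which empty_cache() releases:

out2 = sm.forward(feat, feat, edge_index)
# memory_allocated() counts bytes held by live tensors;
# memory_reserved() counts the caching allocator's pool, which is what
# nvidia-smi reports and what torch.cuda.empty_cache() gives back.
print('allocated:', torch.cuda.memory_allocated() / 1024**3, 'GiB')
print('reserved: ', torch.cuda.memory_reserved() / 1024**3, 'GiB')
torch.cuda.empty_cache()
print('reserved after empty_cache():', torch.cuda.memory_reserved() / 1024**3, 'GiB')

If "allocated" stays flat across repeated calls while "reserved" grows and then drops after empty_cache(), the growth is allocator caching rather than a leak.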
