
[Bug]: WSL Cuda out of Memory when Trying to Load GGUF Model #360

Open
Lirikana opened this issue Mar 26, 2024 · 8 comments
Labels
bug Something isn't working

Comments

@Lirikana

Your current environment

PyTorch version: 2.2.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 2080 Ti
GPU 1: NVIDIA GeForce RTX 2080 Ti
GPU 2: NVIDIA GeForce RTX 2080 Ti
GPU 3: NVIDIA GeForce RTX 2080 Ti
GPU 4: NVIDIA GeForce RTX 2080 Ti

Nvidia driver version: 551.52
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      48 bits physical, 48 bits virtual
Byte Order:                         Little Endian
CPU(s):                             32
On-line CPU(s) list:                0-31
Vendor ID:                          AuthenticAMD
Model name:                         AMD EPYC 7F52 16-Core Processor
CPU family:                         23
Model:                              49
Thread(s) per core:                 2
Core(s) per socket:                 16
Socket(s):                          1
Stepping:                           0
BogoMIPS:                           6986.90
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip rdpid
Virtualization:                     AMD-V
Hypervisor vendor:                  Microsoft
Virtualization type:                full
L1d cache:                          512 KiB (16 instances)
L1i cache:                          512 KiB (16 instances)
L2 cache:                           8 MiB (16 instances)
L3 cache:                           16 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Not affected
Vulnerability Retbleed:             Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.2.0
[pip3] triton==2.2.0
[conda] Could not collect
ROCM Version: Could not collect
Aphrodite Version: 0.5.1
Aphrodite Build Flags:
CUDA Archs: Not Set; ROCm: Disabled

Please note that the system is WSL2 on Windows 11, so env.py is unable to gather the correct CUDA version due to a known bug. The installed CUDA version is 12.1.

🐛 Describe the bug

Trying to run a GGUF model that has been converted to safetensors results in a CUDA out of memory error. This occurs after some of the Ray workers have finished loading the model. Four RTX 2080 Ti 22 GB cards were used, which should provide 88 GB of VRAM in total.

Launch Parameters

python3 -m aphrodite.endpoints.openai.api_server --model miquliz-120b-v2.0.i1-Q4_K_M/ -tp 4 --api-keys 123456 -q gguf --dtype float16 -gmu 0.95  

Error Log

INFO:     Model weights loaded. Memory usage: 17.15 GiB x 4 = 68.60 GiB
(RayWorkerAphrodite pid=232346) INFO:     Model weights loaded. Memory usage: 17.15 GiB x 4 = 68.60 GiB
(RayWorkerAphrodite pid=232501) WARNING:  Custom allreduce is disabled because your platform lacks GPU P2P capability. To silence this warning, specify  [repeated 2x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/ray-logging.html#log-deduplication for more options.)
(RayWorkerAphrodite pid=232501) disable_custom_all_reduce=True explicitly. [repeated 2x across cluster]
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/endpoints/openai/api_server.py", line 563, in <module>
    engine = AsyncAphrodite.from_engine_args(engine_args)
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/engine/async_aphrodite.py", line 676, in from_engine_args
    engine = cls(parallel_config.worker_use_ray,
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/engine/async_aphrodite.py", line 341, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/engine/async_aphrodite.py", line 410, in _init_engine
    return engine_class(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/engine/aphrodite_engine.py", line 118, in __init__
    self._init_cache()
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/engine/aphrodite_engine.py", line 321, in _init_cache
    num_blocks = self._run_workers(
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/engine/aphrodite_engine.py", line 1028, in _run_workers
    driver_worker_output = getattr(self.driver_worker,
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/task_handler/worker.py", line 136, in profile_num_available_blocks
    self.model_runner.profile_run()
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/task_handler/model_runner.py", line 758, in profile_run
    self.execute_model(seqs, kv_caches)
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/task_handler/model_runner.py", line 692, in execute_model
    hidden_states = model_executable(
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/modeling/models/llama.py", line 413, in forward
    hidden_states = self.model(input_ids, positions, kv_caches,
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/modeling/models/llama.py", line 340, in forward
    hidden_states, residual = layer(
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/modeling/models/llama.py", line 298, in forward
    hidden_states = self.mlp(hidden_states)
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/modeling/models/llama.py", line 114, in forward
    x, _ = self.down_proj(x)
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/modeling/layers/linear.py", line 617, in forward
    output_parallel = self.linear_method.apply_weights(
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/modeling/layers/quantization/gguf.py", line 137, in apply_weights
    out = reshaped_x @ weight.T
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 MiB. GPU 0 has a total capacity of 22.00 GiB of which 0 bytes is free. Process 232346 has 17179869184.00 GiB memory in use. Including non-PyTorch memory, this process has 17179869184.00 GiB memory in use. Of the allocated memory 20.96 GiB is allocated by PyTorch, and 137.73 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
(RayWorkerAphrodite pid=232430) INFO:     Model weights loaded. Memory usage: 17.15 GiB x 4 = 68.60 GiB [repeated 2x across cluster]
@Lirikana added the bug label on Mar 26, 2024
@AlpinDale
Member

Try disabling CUDA graphs with --enforce-eager; it should help.
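
For example, appending the flag to your original launch command (just a sketch of flag placement; everything else unchanged):

    python3 -m aphrodite.endpoints.openai.api_server --model miquliz-120b-v2.0.i1-Q4_K_M/ -tp 4 --api-keys 123456 -q gguf --dtype float16 -gmu 0.95 --enforce-eager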

@Lirikana
Author

The same error occurs. It appears that one of the Ray workers is trying to allocate far too much memory.

torch.cuda.OutOfMemoryError: CUDA out of memory. 
Tried to allocate 512.00 MiB. GPU 0 has a total capacity of 22.00 GiB of which 0 bytes is free. 
Process 232346 has 17179869184.00 GiB memory in use. Including non-PyTorch memory, this process has 17179869184.00 GiB memory in use. 
Of the allocated memory 20.96 GiB is allocated by PyTorch, and 137.73 MiB is reserved by PyTorch but unallocated. 
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
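
For what it's worth, the allocator hint from the message would be passed like this (just a sketch; I haven't verified that it changes anything here):

    PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True python3 -m aphrodite.endpoints.openai.api_server --model miquliz-120b-v2.0.i1-Q4_K_M/ -tp 4 --api-keys 123456 -q gguf --dtype float16 -gmu 0.95 --enforce-eager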

@sgsdxzy
Collaborator

sgsdxzy commented Mar 26, 2024

You may need to lower your context length by specifying --max-model-len 4096.
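
For example (a sketch; the exact value may need tuning for your setup):

    python3 -m aphrodite.endpoints.openai.api_server --model miquliz-120b-v2.0.i1-Q4_K_M/ -tp 4 --api-keys 123456 -q gguf --dtype float16 -gmu 0.95 --max-model-len 4096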

@Lirikana
Author

A different error appears after lowering the context length.

*** SIGSEGV received at time=1711454288 on cpu 22 ***
    @     0x7f9340ccc520  1479940200  (unknown)
    @             0x2000  (unknown)  (unknown)
    @                0x2  1528629200  (unknown)
    @     0x7f933f8d2e30  (unknown)  (unknown)
    @ 0xec8348fb89485355  (unknown)  (unknown)
[2024-03-26 19:58:08,630 E 235769 235769] logging.cc:361: *** SIGSEGV received at time=1711454288 on cpu 22 ***
[2024-03-26 19:58:08,632 E 235769 235769] logging.cc:361:     @     0x7f9340ccc520  1479940200  (unknown)
[2024-03-26 19:58:08,634 E 235769 235769] logging.cc:361:     @             0x2000  (unknown)  (unknown)
[2024-03-26 19:58:08,637 E 235769 235769] logging.cc:361:     @                0x2  1528629200  (unknown)
[2024-03-26 19:58:08,638 E 235769 235769] logging.cc:361:     @     0x7f933f8d2e30  (unknown)  (unknown)
[2024-03-26 19:58:08,640 E 235769 235769] logging.cc:361:     @ 0xec8348fb89485355  (unknown)  (unknown)
Fatal Python error: Segmentation fault

Stack (most recent call first):
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/modeling/layers/quantization/gguf.py", line 135 in apply_weights
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/modeling/layers/vocab_parallel_embedding.py", line 170 in forward
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520 in _call_impl
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511 in _wrapped_call_impl
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/modeling/models/llama.py", line 422 in sample
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/task_handler/model_runner.py", line 700 in execute_model
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115 in decorate_context
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/task_handler/model_runner.py", line 758 in profile_run
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115 in decorate_context
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/task_handler/worker.py", line 136 in profile_num_available_blocks
  File "/home/lirikana/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115 in decorate_context
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/engine/aphrodite_engine.py", line 1028 in _run_workers
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/engine/aphrodite_engine.py", line 321 in _init_cache
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/engine/aphrodite_engine.py", line 118 in __init__
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/engine/async_aphrodite.py", line 410 in _init_engine
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/engine/async_aphrodite.py", line 341 in __init__
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/engine/async_aphrodite.py", line 676 in from_engine_args
  File "/home/lirikana/.local/lib/python3.10/site-packages/aphrodite/endpoints/openai/api_server.py", line 563 in <module>
  File "/usr/lib/python3.10/runpy.py", line 86 in _run_code
  File "/usr/lib/python3.10/runpy.py", line 196 in _run_module_as_main

Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, charset_normalizer.md, yaml._yaml, sentencepiece._sentencepiece, psutil._psutil_linux, psutil._psutil_posix, msgpack._cmsgpack, google._upb._message, setproctitle, ray._raylet, cupy_backends.cuda.api._runtime_enum, cupy_backends.cuda.api.runtime, cupy_backends.cuda.stream, cupy_backends.cuda.libs.cublas, cupy_backends.cuda.libs.cusolver, cupy_backends.cuda._softlink, cupy_backends.cuda.libs.cusparse, cupy._util, cupy.cuda.device, fastrlock.rlock, cupy.cuda.memory_hook, cupy.cuda.graph, cupy.cuda.stream, cupy_backends.cuda.api._driver_enum, cupy_backends.cuda.api.driver, cupy.cuda.memory, cupy._core.internal, cupy._core._carray, cupy.cuda.texture, cupy.cuda.function, cupy_backends.cuda.libs.nvrtc, cupy.cuda.jitify, cupy.cuda.pinned_memory, cupy_backends.cuda.libs.curand, cupy_backends.cuda.libs.profiler, cupy.cuda.common, cupy.cuda.cub, cupy_backends.cuda.libs.nvtx, cupy.cuda.thrust, cupy._core._dtype, cupy._core._scalar, cupy._core._accelerator, cupy._core._memory_range, cupy._core._fusion_thread_local, cupy._core._kernel, cupy._core._routines_manipulation, cupy._core._routines_binary, cupy._core._optimize_config, cupy._core._cub_reduction, cupy._core._reduction, cupy._core._routines_math, cupy._core._routines_indexing, cupy._core._routines_linalg, cupy._core._routines_logic, cupy._core._routines_sorting, cupy._core._routines_statistics, cupy._core.dlpack, cupy._core.flags, cupy._core.core, cupy._core._fusion_variable, cupy._core._fusion_trace, cupy._core._fusion_kernel, cupy._core.new_fusion, cupy._core.fusion, cupy._core.raw, cupyx.cusolver, scipy._lib._ccallback_c, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._flinalg, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, cupy.cuda.cufft, cupy.fft._cache, cupy.fft._callback, cupy.random._generator_api, cupy.random._bit_generator, scipy._lib._uarray._uarray, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, cupy.lib._polynomial, cupy_backends.cuda.libs.nccl, regex._regex, numba.core.typeconv._typeconv, numba._helperlib, numba._dynfunc, numba._dispatcher, numba.core.runtime._nrt_python, numba.np.ufunc._internal, numba.experimental.jitclass._box, markupsafe._speedups, scipy.optimize._minpack2, scipy.optimize._group_columns, scipy._lib.messagestream, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, 
scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.linalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.spatial._ckdtree, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.spatial.transform._rotation, scipy.optimize._direct (total: 157)
Segmentation fault

@Lirikana
Author

I'm getting a CUDA OOM; the system has 128 GB of system memory available.

@Lirikana
Author

The error appears to be happening at line 563 in api_server.py:

    engine = AsyncAphrodite.from_engine_args(engine_args)

I'm not sure exactly where it fails, since attempts to insert debug code of the form

logger.debug()

don't yield any messages in the console.
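
One thing I may try next (just a guess on my part) is disabling Ray's log deduplication, which the earlier warning mentions, in case worker-side debug output is being suppressed:

    RAY_DEDUP_LOGS=0 python3 -m aphrodite.endpoints.openai.api_server --model miquliz-120b-v2.0.i1-Q4_K_M/ -tp 4 --api-keys 123456 -q gguf --dtype float16 -gmu 0.95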

@AlpinDale
Member

Sorry, I've been away for a while. Have you tried the Docker image? This is probably a WSL issue. GPU Docker on Windows uses WSL too, but who knows...
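
Roughly something like the following, though the image name, entrypoint, and mount path here are placeholders rather than the exact published image:

    docker run --gpus all -v /path/to/miquliz-120b-v2.0.i1-Q4_K_M:/model <aphrodite-image> python3 -m aphrodite.endpoints.openai.api_server --model /model -tp 4 -q gguf --dtype float16 -gmu 0.95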

@Lirikana
Author

Not sure how to run a Docker image on Windows. Hopefully official Windows support will be added in the future.
