
[Usage]: load-in-4bit does not load after conversion, and it does not seem to use swap well #361

yamosin commented Mar 27, 2024

Your current environment

(aph) omnisaa@WIN-4CNRONV51MG:~$ python env.py
Collecting environment information...
PyTorch version: 2.2.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.14 (main, Mar 21 2024, 16:24:04) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090

Nvidia driver version: 551.86
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   46 bits physical, 48 bits virtual
Byte Order:                      Little Endian
CPU(s):                          12
On-line CPU(s) list:             0-11
Vendor ID:                       GenuineIntel
Model name:                      Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz
CPU family:                      6
Model:                           63
Thread(s) per core:              2
Core(s) per socket:              6
Socket(s):                       1
Stepping:                        2
BogoMIPS:                        4788.90
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt flush_l1d arch_capabilities
Virtualization:                  VT-x
Hypervisor vendor:               Microsoft
Virtualization type:             full
L1d cache:                       192 KiB (6 instances)
L1i cache:                       192 KiB (6 instances)
L2 cache:                        1.5 MiB (6 instances)
L3 cache:                        30 MiB (1 instance)
Vulnerability Itlb multihit:     KVM: Mitigation: VMX disabled
Vulnerability L1tf:              Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds:               Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown:          Mitigation; PTI
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.2.0
[pip3] triton==2.2.0
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] torch                     2.2.0                    pypi_0    pypi
[conda] triton                     2.2.0                    pypi_0    pypi
ROCM Version: Could not collect
Aphrodite Version: 0.5.1
Aphrodite Build Flags:
CUDA Archs: Not Set; ROCm: Disabled

How would you like to use Aphrodite?

I want to run Midnight-Miqu-70B-v1.5.
I use WSL and set 55 GB of RAM and 150 GB of swap for WSL via .wslconfig (a sketch of that file is below).
While the model weights are being read I can see that RAM is being used, but swap only reaches about 11~17 GB before it OOMs and the Ray worker is killed.
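For reference, the .wslconfig limits described above look roughly like this (a sketch with only the memory-related keys; other settings omitted):

[wsl2]
memory=55GB
swap=150GB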

(base) omnisaa@WIN-4CNRONV51MG:~$ free -h --giga
               total        used        free      shared  buff/cache   available
Mem:             55G         50G        371M        222M        4.3G        3.8G
Swap:           153G         12G        141G

Another test used the TinyLlama 1.1B model. Although the log showed that the conversion had completed, it then waited for a long time without loading the model, and VRAM was not allocated as it normally would be (around 22 GB); only about 0.8 GB was used on card 1 and 1.2 GB on card 2.
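While it sits there, per-GPU memory and host swap can be watched from a second terminal with standard tools (nothing Aphrodite-specific), for example:

# refresh every 2 seconds: per-GPU memory use, then host RAM/swap
watch -n 2 'nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv,noheader; free -h'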

(aph) omnisaa@WIN-4CNRONV51MG:~$ python -m aphrodite.endpoints.openai.api_server --model tinyllama1.1b/ -tp 2 --max-model-len 4096 --swap-space 2 -gmu 0.9 --enforce-eager --kv-cache-dtype fp8_e5m2 --load-in-4bit
WARNING:  User-specified max_model_len 4096 is higher than the original 2048. Attempting to use RoPE scaling.
WARNING:  bnb quantization is not fully optimized yet. The speed can be slower than non-quantized models.
INFO:     CUDA_HOME is not found in the environment. Using /usr/local/cuda as CUDA_HOME.
INFO:     Using fp8_e5m2 data type to store kv cache. It reduces the GPU memory footprint and boosts the performance.
But it may cause slight accuracy drop. Currently we only support fp8 without scaling factors and make e5m2 as a default
format.
2024-03-27 02:48:44,571 INFO worker.py:1752 -- Started a local Ray instance.
INFO:     Initializing the Aphrodite Engine (v0.5.1) with the following config:
INFO:     Model = 'tinyllama1.1b/'
INFO:     DataType = torch.bfloat16
INFO:     Model Load Format = auto
INFO:     Number of GPUs = 2
INFO:     Disable Custom All-Reduce = False
INFO:     Quantization Format = bnb
INFO:     Context Length = 4096
INFO:     Enforce Eager Mode = True
INFO:     KV Cache Data Type = fp8_e5m2
INFO:     KV Cache Params Path = None
INFO:     Device = cuda
WARNING:  Custom allreduce is disabled because your platform lacks GPU P2P capability. To silence this warning, specify
disable_custom_all_reduce=True explicitly.
(RayWorkerAphrodite pid=4111) WARNING:  Custom allreduce is disabled because your platform lacks GPU P2P capability. To silence this warning, specify
(RayWorkerAphrodite pid=4111) disable_custom_all_reduce=True explicitly.
INFO:     Memory allocated for converted model: 0.37 GiB
INFO:     Memory reserved for converted model: 0.4 GiB
INFO:     Model weights loaded. Memory usage: 0.37 GiB x 2 = 0.74 GiB