Replies: 1 comment
Hi @ql-xun, without knowing the exact code, it's difficult to give detailed advice. However, here are some thoughts on your questions:
In general, there are already more than 1e9 channel coefficients even for a batch size of 1. This is quite a demanding simulation, and you might need to consider using a system with more memory.
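The claim above can be checked with a quick back-of-the-envelope sketch against the tensor shape reported in the OOM log below (complex64 occupies 8 bytes per element):

```python
import numpy as np

# Shape of the largest peak buffer from the OOM log below:
# [batch, num_rx, num_rx_ant, num_tx, num_tx_ant, num_time_steps, l_max - l_min + 1]
shape = (1, 1, 64, 1, 64, 8772, 37)
n_coeffs = int(np.prod(shape))

print(f"channel coefficients: {n_coeffs:.3e}")                 # ~1.329e+09
print(f"complex64 footprint: {n_coeffs * 8 / 2**30:.2f} GiB")  # ~9.90 GiB, matching Buffer 1
```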
Original question from @ql-xun:
We ran a MIMO demo in the time domain. When we increased the number of MIMO antennas along with the FFT size, while keeping the batch size at 1, we hit an out-of-memory error on the GPU. This raises a couple of questions:
1. Is it possible to reduce GPU memory usage by converting float32 to float16 and storing complex64 values as two separate float tensors? (A sketch of this idea follows the list below.)
2. Is it feasible to split the model to manage memory better? (See the chunking sketch after the error log below.)
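On question 1, here is a minimal sketch of the idea in TensorFlow, using a small hypothetical tensor `h` in place of the actual channel output. Note that `tf.complex` only accepts float32 or float64, so the float16 halves must be cast back up before being recombined:

```python
import tensorflow as tf

# Hypothetical complex64 tensor standing in for the channel coefficients.
h = tf.complex(tf.random.normal([64, 64, 128]),
               tf.random.normal([64, 64, 128]))

# Store the real and imaginary parts separately in float16,
# halving the memory needed to hold the coefficients.
h_re = tf.cast(tf.math.real(h), tf.float16)
h_im = tf.cast(tf.math.imag(h), tf.float16)

# tf.complex only supports float32/float64, so cast back up
# whenever a complex-valued op is needed again.
h_rec = tf.complex(tf.cast(h_re, tf.float32),
                   tf.cast(h_im, tf.float32))
```

Whether float16 precision is acceptable for the channel model would need to be verified, and this only shrinks stored results; intermediate buffers created during the computation itself (such as the Sum reported in the log below) are unaffected.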
Additionally, I have another question: Why does increasing the batch size not effectively reduce the processing time?
Thank you for your patience!
Hardware configuration:
GPU 1: TITAN V, 12 GB memory
CPU: Intel Xeon E5-2620 v4 @ 2.10 GHz (8 cores), 16 GB RAM
Details of the out-of-memory error:
Out of memory while trying to allocate 17768298824 bytes.
BufferAssignment OOM Debugging.
BufferAssignment stats:
parameter allocation: 8.58MiB
constant allocation: 3.85MiB
maybe_live_out allocation: 6.00MiB
preallocated temp allocation: 16.55GiB
preallocated temp fragmentation: 16B (0.00%)
total allocation: 16.57GiB
total fragmentation: 6.86MiB (0.04%)
Peak buffers:
Buffer 1:
Size: 9.90GiB
Operator: op_type="Sum" op_name="Sum_9" source_file="/root/nvme0n1p1/lib/python3.8/dist-packages/sionna/channel/utils.py" source_line=320
XLA Label: fusion
Shape: c64[1,1,64,1,64,8772,37] - [batch size, num_rx, num_rx_ant, num_tx, num_tx_ant, num_time_steps, l_max - l_min + 1]
==========================
Buffer 2:
Size: 6.16GiB
Operator: op_type="SelectV2" op_name="SelectV2" source_file="/root/nvme0n1p1/lib/python3.8/dist-packages/sionna/channel/tr38901/channel_coefficients.py" source_line=1030
XLA Label: fusion
Shape: c64[1,1,1,23,64,64,8772]
==========================
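On question 2, one generic way to "split" the computation is to process the long num_time_steps axis (8772 in the log above) in chunks, so that the large intermediate tensors exist for only one chunk at a time. Below is a minimal sketch under the assumption that the computation is independent along the chunked axis; `apply_channel` is a hypothetical stand-in, and a real time-domain channel with l_max - l_min + 1 taps couples neighboring time steps, so the chunks would need to overlap accordingly:

```python
import tensorflow as tf

def apply_channel(x_chunk):
    # Hypothetical stand-in for the memory-hungry per-chunk computation;
    # replace with the actual processing applied to each slice.
    return x_chunk * tf.complex(2.0, 0.0)

def run_in_chunks(x, chunk_size=512):
    # Slice the last (time) axis so the large intermediate tensors
    # exist for only one chunk at a time, capping peak memory.
    n = x.shape[-1]
    outputs = [apply_channel(x[..., s:s + chunk_size])
               for s in range(0, n, chunk_size)]
    return tf.concat(outputs, axis=-1)

# Example with a long time axis, as in the log above.
x = tf.complex(tf.random.normal([4, 4, 8772]),
               tf.random.normal([4, 4, 8772]))
y = run_in_chunks(x)  # same values as apply_channel(x), lower peak memory
```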