sionnaRT: slow GPU computation times #383
Comments
Hi @bchateli,
Hi @SebastianCa, I just tried the snippet in Google Colab and, like you, I am not able to reproduce the behaviour. It must be related to my setup. I am running Sionna 0.16.1/2 on Windows 10, with Python 3.9.16 and TensorFlow 2.10.1. I tried more recent TF versions but I haven't been able to make them work on GPU.
Hello @bchateli, One idea to identify the source of the slowdown: if you fix all of the inputs (e.g. by fully controlling the randomness, or by setting some hardcoded values) and run the simulation multiple times, do the subsequent runs get faster?
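A minimal sketch of this kind of check (the harness below is illustrative, not from the thread; the workload passed to it stands in for the Sionna simulation, with randomness fixed beforehand, e.g. via `tf.random.set_seed`):

```python
import time

def time_runs(fn, n_runs=3):
    """Call fn() n_runs times and return the wall-clock duration of
    each call. If the first run is much slower than the rest, the
    cost is likely one-time graph tracing/compilation rather than
    per-run compute."""
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        fn()  # e.g. a lambda wrapping the simulation with fixed inputs
        times.append(time.perf_counter() - t0)
    return times

# Illustrative usage with a stand-in workload:
print(time_runs(lambda: sum(range(100_000))))
```

If all runs take roughly the same time, as reported later in the thread, the slowdown is in the per-run execution itself rather than in warm-up.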
Hi @merlinND, Thanks for the idea, I will try and get back to you. I compared my setup with the one in Colab and suspect it might be related to the old TF version I am using (2.10.1 vs 2.15.0 for Colab). However, TF does not maintain GPU support on native Windows past version 2.10, so I'd have to test on WSL.
To get back to your suggestion @merlinND, I ran the simulation (on GPU) 3 times in the same console, and I got almost the same time for each run (within ±1 s).
Hello @bchateli, I am unable to reproduce on my machine either. The GPU runtime is roughly 0.3-0.5 s, and the CPU runtime is ~0.5 s.
Hello @merlinND, Thanks for the reply. Did you run it on Linux or Windows? My hypothesis is that it is related to the old GPU-compatible TF version that I use on Windows. I tried to reproduce it on WSL, but so far, I haven't been able to make the GPU work with TF in WSL.
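One quick way to check whether TensorFlow can see the GPU inside WSL (a minimal sketch; `tf.config.list_physical_devices` is the standard API for this, and the block degrades gracefully if TensorFlow is not importable):

```python
# Check GPU visibility from TensorFlow; an empty list means TF
# will silently fall back to CPU (or, in a mismatched setup, may
# run with degraded device placement).
try:
    import tensorflow as tf
    gpus = tf.config.list_physical_devices("GPU")
    print("TF version:", tf.__version__, "| GPUs visible:", gpus)
except ImportError:
    gpus = []  # TensorFlow not installed in this environment
    print("TensorFlow is not installed")
```

An empty list on WSL usually points at the CUDA/driver toolchain rather than at Sionna itself.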
Hello @bchateli, I ran my test on Linux (Ubuntu 22.04). |
Hi,
This is a follow-up to #283.
While 0.16.1 sped up computations on CPU, there is still a big difference between execution times on CPU and GPU.
For the code below, execution takes around 1 s on CPU while it takes around 80 s on GPU (RTX 3080). These results are on 0.16.1, but similar behaviour has been encountered on 0.16.2.
Any idea on what is causing this? Thanks in advance!