The CPU core is not fully used, as shown in the following screenshot.
The GPU is not fully used either, as shown in the following screenshot.
In fact, when I previously ran sde_gan.py on my laptop with a 2070 Max-Q, the estimated time was about 3:40 to 4 hours; run on a 3900X/4090, it is 2:30 to 3 hours.
The change is not significant, and the resources are not fully used, as shown in the figures above. What should I do if I want it to train faster?
I also tried a larger batch size. Oddly, nothing happened.
Is the bottleneck elsewhere?
One thing you could try is switching to Diffrax. By taking advantage of JAX's JIT compiler, it can sometimes be substantially faster than torchsde (probably due to reduced Python overhead and fewer memory allocations).
In particular, you can find an SDE-GAN example in the documentation here.
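To give a feel for where the speed-up comes from, here is a minimal sketch (not the SDE-GAN example from the docs, just a toy scalar SDE) of solving an SDE with Diffrax under `jax.jit`; the drift, diffusion, solver, and batch size are all illustrative choices:

```python
import jax
import jax.numpy as jnp
import jax.random as jr
import diffrax

# Toy scalar SDE: dy = -y dt + 0.5 dW  (purely illustrative)
drift = lambda t, y, args: -y
diffusion = lambda t, y, args: 0.5

@jax.jit  # the whole solve compiles to a single XLA program
def solve(key, y0):
    t0, t1 = 0.0, 1.0
    bm = diffrax.VirtualBrownianTree(t0, t1, tol=1e-3, shape=(), key=key)
    terms = diffrax.MultiTerm(diffrax.ODETerm(drift),
                              diffrax.ControlTerm(diffusion, bm))
    sol = diffrax.diffeqsolve(terms, diffrax.Euler(), t0, t1, dt0=0.01, y0=y0)
    return sol.ys

# vmap solves the whole batch in parallel, so a larger batch size
# turns into more GPU work rather than more Python-level looping.
keys = jr.split(jr.PRNGKey(0), 256)
y0s = jnp.ones(256)
paths = jax.vmap(solve)(keys, y0s)
```

Because the solve loop runs inside one compiled program rather than stepping through Python, the per-step interpreter overhead that tends to leave the GPU idle largely disappears.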
Other than that, I don't have any strong recommendations. Neural SDEs were a topic we never really finished with. I think maximising computational efficiency (amongst other things) remains an open research question for them.