
How to increase utilization of available computing power? #139

Open
dmayfrank opened this issue Jan 24, 2023 · 1 comment

Comments

@dmayfrank

Hello,
First of all, thank you very much for the very user-friendly package! Great work!

I am currently training a deep reinforcement learning agent that has a differentiable optimization as part of its policy. In principle, this works fine, but training the agent takes a very long time because the available computing resources are not used efficiently. When I use the GPU as the PyTorch device, only around 10% of its capacity is used; for the CPU, it is around 40%. When I run the same code with an agent that does not include the differentiable optimization, utilization is consistently close to 100%. Of course, I tried increasing the batch size, but this does not change anything.

Do you have any ideas what I could do to resolve this issue? I saw that the qpth package (https://github.com/locuslab/qpth) offers batched solving of QPs instead of multiprocessing via pooling, so maybe switching to that package would be an option? However, given the seemingly more active development on cvxpylayers and its user-friendliness, I would like to stick with cvxpylayers if possible.
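For context, here is a minimal sketch (not the cvxpylayers or qpth API, just a hypothetical illustration) of why batched solving helps utilization: for equality-constrained QPs, min ½xᵀQx + pᵀx s.t. Ax = b, the KKT system [[Q, Aᵀ], [A, 0]][x; ν] = [−p; b] is linear, so a whole batch can be dispatched as one batched linear solve instead of a Python loop (or process pool) over individual problems:

```python
import torch

def solve_qp_batch(Q, p, A, b):
    """Solve a batch of equality-constrained QPs via one batched KKT solve.
    Shapes: Q (B, n, n), p (B, n), A (B, m, n), b (B, m) -> x (B, n).
    """
    B, n, _ = Q.shape
    m = A.shape[1]
    zero = torch.zeros(B, m, m, dtype=Q.dtype)
    # Assemble the (B, n+m, n+m) KKT matrices [[Q, A^T], [A, 0]].
    kkt = torch.cat([
        torch.cat([Q, A.transpose(1, 2)], dim=2),
        torch.cat([A, zero], dim=2),
    ], dim=1)
    rhs = torch.cat([-p, b], dim=1).unsqueeze(2)  # (B, n+m, 1)
    sol = torch.linalg.solve(kkt, rhs).squeeze(2)
    return sol[:, :n]  # primal part x; sol[:, n:] would be the multipliers

# One batched call keeps the device busy; looping over the B problems
# in Python (or via a multiprocessing pool) serializes the work.
torch.manual_seed(0)
B, n, m = 64, 8, 3
L = torch.randn(B, n, n)
Q = L @ L.transpose(1, 2) + 1e-2 * torch.eye(n)  # symmetric positive definite
p = torch.randn(B, n)
A = torch.randn(B, m, n)
b = torch.randn(B, m)
x = solve_qp_batch(Q, p, A, b)
print(x.shape)
```

This only covers equality constraints; with inequality constraints the problem is no longer a single linear solve, which is where a batched interior-point solver like the one in qpth comes in.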

Thank you very much for your help!

@gy2256

gy2256 commented Dec 10, 2023

I'm having the exact same issue. Have you tried using qpth?
