
How to get memory usage for "adjoint" and "autograd" method? #138

Open
cyx96 opened this issue May 10, 2022 · 1 comment
Labels
question Further information is requested

Comments


cyx96 commented May 10, 2022

Thanks for this amazing package!

I was trying to measure the memory usage of the adjoint method. As claimed by the authors of the original neural ODE paper, the adjoint method should use less memory than vanilla "autograd" backpropagation. However, the output of torch.cuda.memory_summary() shows an *increase* in GPU memory for the adjoint method compared to autograd. I'm wondering whether I used torch.cuda.memory_summary() incorrectly — I printed it after training finished. If my approach was wrong, what is the correct way to measure memory usage for the "adjoint" and "autograd" methods?
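One likely issue with printing a summary after training is that it reflects cached/current allocations rather than the peak reached during the backward pass, which is where autograd's stored intermediates live. A common pattern (assuming a PyTorch setup; `torch.cuda.reset_peak_memory_stats()` and `torch.cuda.max_memory_allocated()` are the relevant calls there) is to reset the peak counter before the step and read it immediately after. The sketch below illustrates that reset-run-read pattern with the stdlib `tracemalloc` module so it runs without a GPU; `big_intermediates`/`small_intermediates` are hypothetical stand-ins for a training step with and without stored intermediates, not part of any library.

```python
import tracemalloc

def peak_memory_of(fn, *args, **kwargs):
    """Run fn and return (result, peak bytes allocated while it ran).

    CPU-memory sketch using tracemalloc. The analogous GPU pattern in
    PyTorch would be:
        torch.cuda.reset_peak_memory_stats()
        result = fn(*args, **kwargs)
        peak = torch.cuda.max_memory_allocated()
    Either way, the key point is the same: reset the peak counter
    *before* the forward/backward pass and read it right after, rather
    than inspecting a summary once training has finished.
    """
    tracemalloc.start()          # begin tracing (also clears counters)
    result = fn(*args, **kwargs)
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) bytes
    tracemalloc.stop()
    return result, peak

# Hypothetical stand-ins: one callable materializes a large buffer of
# intermediates (like autograd storing activations at every solver
# step), the other streams through without keeping them.
def big_intermediates():
    return sum([0.0] * 1_000_000)        # builds an ~8 MB list first

def small_intermediates():
    return sum(0.0 for _ in range(1_000_000))  # generator, no big buffer

_, peak_big = peak_memory_of(big_intermediates)
_, peak_small = peak_memory_of(small_intermediates)
print(peak_big > peak_small)  # the buffering version peaks higher
```

With the GPU variant, wrap a single forward + `loss.backward()` call for each method and compare the two peaks; comparing summaries printed after training mostly reflects the caching allocator, not transient backward-pass storage.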

@cyx96 cyx96 added the question Further information is requested label May 10, 2022
@joglekara (Contributor)

Hey, did you happen to make progress on this? I'm curious to know, and I can hopefully provide some benchmarks once I get my own problem running as well.
