
Caching for pycuda kernels? #388

Open
daurer opened this issue Feb 15, 2022 · 2 comments
Assignees
Labels
0.9 High-performance for entire framework GPU acceleration

Comments

daurer commented Feb 15, 2022

JIT compilation of all the kernels currently takes up to a minute, which is often longer than the actual reconstruction itself.

  • Investigate where the time is lost (compilation vs. memory allocation)
  • Maybe we can cache the compiled kernels somewhere (e.g. the home folder or the reconstruction folder)
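A disk cache keyed by a hash of the kernel source would let repeated runs skip the JIT step entirely. A minimal sketch (the `compile_fn` callable stands in for the actual PyCUDA compilation call, which is not shown here):

```python
import hashlib
import os
import tempfile

def cached_compile(source, compile_fn, cache_dir=None):
    """Return the compiled binary for `source`, caching it on disk.

    The cache key is a SHA-256 hash of the kernel source, so any change
    to the source triggers a fresh compilation; unchanged kernels are
    loaded from disk and the expensive JIT step is skipped.
    """
    cache_dir = cache_dir or os.path.join(tempfile.gettempdir(), "kernel_cache")
    os.makedirs(cache_dir, exist_ok=True)
    key = hashlib.sha256(source.encode()).hexdigest()
    path = os.path.join(cache_dir, key + ".cubin")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()  # cache hit: no compilation needed
    binary = compile_fn(source)  # cache miss: run the expensive JIT step
    with open(path, "wb") as f:
        f.write(binary)
    return binary
```

Note that PyCUDA's `SourceModule` already does something similar internally via its `cache_dir` argument, so part of the investigation could be whether that built-in cache is being bypassed or cleaned up between runs.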
daurer commented Jan 17, 2023

Use an environment variable to override the default CACHE_DIR with a user-defined location, and skip cleaning up the compiled kernels.
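The override could look roughly like this; the variable name `PTYPY_CACHE_DIR` and the default path are assumptions for illustration, not the actual names in the codebase:

```python
import os
import tempfile

def get_cache_dir(default=None):
    """Return the kernel cache directory.

    A user-defined location set via the (hypothetical) PTYPY_CACHE_DIR
    environment variable takes precedence over the built-in default.
    """
    env = os.environ.get("PTYPY_CACHE_DIR")
    if env:
        return env
    return default or os.path.join(tempfile.gettempdir(), "ptypy_kernel_cache")
```

Skipping the cleanup of this directory at shutdown is what makes the cache persist across runs.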

@daurer daurer self-assigned this Jan 24, 2023
daurer commented Feb 1, 2024

With the CuPy engines, JIT compilation is significantly faster, so this is no longer really a problem going forward if we transition towards mostly using the CuPy engines for HPC.

Maybe we can close this issue?
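Part of why the CuPy engines fare better is that CuPy already persists compiled kernels to an on-disk cache by default (under `~/.cupy/kernel_cache`), so the JIT cost is only paid on the first run. The cache location can also be redirected, e.g. onto a fast local filesystem on an HPC node, via an environment variable set before CuPy is imported:

```python
import os

# CuPy writes compiled kernels to a persistent disk cache; redirect it by
# setting CUPY_CACHE_DIR in the environment before `import cupy` runs.
# The path below is only an example location.
os.environ["CUPY_CACHE_DIR"] = "/tmp/cupy_kernel_cache"
```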

@daurer daurer added 0.9 High-performance for entire framework and removed 0.8 labels Feb 1, 2024