ThunderSVM doesn't utilize GPU. #271

Open
ericlaycock opened this issue Oct 17, 2023 · 0 comments
I trained a ThunderSVM model in Google Colab, tested it there, and it seemed to work quite well. However, when I run it locally, I believe it does not use the GPU.

First, I git cloned this repo, made a build directory, and ran the CMake command with the Visual Studio 2017 generator. The .sln file was generated; I opened it in Visual Studio 2017 and built it successfully. Then, before importing thundersvm in Python, I have to call os.add_dll_directory("C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.1\\bin"). Figuring this out took me about a week of work, because this repo is fairly out of date (e.g. the README installation instructions still recommend Visual Studio 2015) and ThunderSVM seemingly needs out-of-date dependencies.
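For reference, this is roughly the sequence I use before loading the library (the CUDA path is specific to my local v11.1 install and will differ on other machines):

```python
import os

# Register the CUDA runtime DLL directory before importing thundersvm;
# without this, the import cannot find the CUDA DLLs on my machine.
# The path below is specific to a local CUDA 11.1 install.
os.add_dll_directory("C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.1\\bin")

# The import only succeeds after the DLL directory is registered.
from thundersvm import SVC
```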

All in all, I'm able to import ThunderSVM and even load my ThunderSVM model. But when it comes to prediction, it does not use the GPU (which I verified with nvidia-smi). I have no idea why this happens, and I suspect it's an issue with the installation. Please help!
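For context, my prediction step is essentially the sketch below (the model path and test data are placeholders, not my real files); I watch nvidia-smi in a separate terminal while it runs and see no GPU activity:

```python
import os
os.add_dll_directory("C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.1\\bin")

import numpy as np
from thundersvm import SVC

# Load the previously trained model (placeholder path).
model = SVC()
model.load_from_file("thundersvm_model.txt")

# Placeholder test data with the same number of features as the training set.
X_test = np.random.rand(10000, 20)

# nvidia-smi shows no GPU utilization while this call runs.
predictions = model.predict(X_test)
```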
