Latest installation instructions (e.g. TensorFlow, CUDA versions) #52

Closed
generalciao opened this issue Mar 24, 2023 · 5 comments
Labels: question (Further information is requested)

@generalciao

What are the current recommended versions for a local installation under Windows?
Is there a performance advantage to using (installing with) GPU support, if one only uses pretrained models?

The local install documentation for Windows currently states the following (see below).

CPU-only:
conda create -n Cascade python=3.7 tensorflow==2.3 keras==2.3.1 h5py numpy scipy matplotlib seaborn ruamel.yaml spyder

With GPU:
conda create -n Cascade python=3.7 tensorflow-gpu==2.3.0 keras h5py numpy scipy matplotlib seaborn ruamel.yaml spyder

For GPU installs (e.g. for DeepLabCut) there are usually also some CUDA/cudnn version requirements. Is that true here also? Any recommendations?

There are more recent tensorflow versions available (e.g. 2.10), which is part of why I'm asking whether these instructions are up-to-date. I am very new to python, conda, etc. and hope to get the install right from the start, as debugging version conflicts would be beyond my abilities ...

Thank you!

@PTRRupprecht self-assigned this Mar 24, 2023
@PTRRupprecht added the "question (Further information is requested)" label Mar 24, 2023
@PTRRupprecht
Member

Hi @generalciao,

What are the current recommended versions for a local installation under Windows?

The recommended version for a local installation under Windows is "CPU-only" if you use pretrained models only.

Is there a performance advantage to using (installing with) GPU support, if one only uses pretrained models?

CASCADE runs fast enough with any CPU out there, and the speed advantage provided by a GPU is not worth the hassle of installing CUDA/cudnn, unless you have unusual requirements (extremely large datasets, online deconvolution). Installing CUDA/cudnn has become much nicer than it was e.g. 5 years ago, but it can still be annoying.

For GPU installs (e.g. for DeepLabCut) there are usually also some CUDA/cudnn version requirements. Is that true here also? Any recommendations?

If you really want to go for a GPU installation for some reason, then yes. Only specific CUDA/cudnn versions work well together with specific tensorflow versions (and tensorflow is a Python package used by CASCADE as well as by DeepLabCut). In this case I would refer to the instructions given by the DeepLabCut developers (https://github.com/DeepLabCut/DeepLabCut/blob/main/docs/installation.md#gpu-support).
It is important to understand that CUDA/cudnn are system-wide packages. While Python packages like numpy or tensorflow can be installed in different versions in different conda environments on the same computer, this is not so straightforward for CUDA/cudnn.
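
If you do go for a GPU environment, one quick sanity check (just a sketch, to be run inside the activated environment) is whether tensorflow can actually see the GPU; an empty list usually points to a CUDA/cudnn/driver mismatch rather than missing hardware:

# GPU visibility check (sketch; assumes the tensorflow-gpu conda environment is activated)
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("Visible GPUs:", gpus)
if not gpus:
    print("No GPU visible to tensorflow - often a CUDA/cudnn/driver version mismatch.")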

There are more recent tensorflow versions available (e.g. 2.10), which is part of why I'm asking whether these instructions are up-to-date.

Yes, there are more recent tensorflow versions available. I have been using them successfully with CASCADE, but I would recommend sticking to the versions recommended in the readme, because we have tested them and there are no compatibility issues. We noticed in the past that pretrained models trained with newer tensorflow versions cannot always be used with installations based on older tensorflow versions, and we are therefore hesitant to always recommend the newest tensorflow versions.

In short, I would recommend following the instructions for the CPU-only version as indicated in the readme.
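
If you want to double-check afterwards that the environment came out with the versions from the readme, a minimal check (run inside the activated Cascade environment) would be something like:

# Version check (sketch; run inside the activated Cascade environment)
import tensorflow as tf
import keras
import h5py

print("tensorflow:", tf.__version__)  # readme recommends 2.3 / 2.3.0
print("keras:", keras.__version__)    # readme recommends 2.3.1 for the CPU-only install
print("h5py:", h5py.__version__)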

If you run into any problems, we're happy to help - let us know!

@generalciao
Author

Thank you for this helpful reply. Sounds like I can take the easy way out aka CPU-only. One clarification:

unusual requirements (extremely large datasets, online deconvolution)

What counts as "extremely large", roughly speaking?

It is important to understand that CUDA/cudnn are system-wide packages.

Yep, and this is where I get stuck when I come back to DLC after not using it for a while: my GPU wants new drivers, and it's hard to figure out which versions of the drivers, toolkits and libraries might work together - plus I don't want to break any other installs. But in this context, it's worth noting that I recently tried out SLEAP: the install was super easy (one conda line) and it worked out of the box with my GPU. Their docs do not indicate that one must download and install a CUDA toolkit separately. I already had one, but somehow I had the sense that their conda setup may solve the dependency issue more smoothly. I don't know enough to be sure ...

@PTRRupprecht
Member

PTRRupprecht commented Mar 24, 2023

What counts as "extremely large", roughly speaking?

Very generally: anything for which the CPU-based version would take too long. As a very rough estimate (one could argue about the numbers): calcium imaging recordings to be analyzed with number_neurons * number_timepoints >> 100 million/day. But probably even then, motion correction and source extraction will take much longer than deconvolution.
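
As a back-of-the-envelope sketch with made-up example numbers (the 100 million figure is just the rough threshold from above, not a hard limit):

# Back-of-the-envelope estimate with hypothetical example numbers
number_neurons = 100          # hypothetical: neurons per recording
number_timepoints = 100_000   # hypothetical: timepoints per recording
recordings_per_day = 1

datapoints_per_day = number_neurons * number_timepoints * recordings_per_day
print(f"{datapoints_per_day:,} datapoints/day")        # 10,000,000 in this example
print("GPU worth considering:", datapoints_per_day > 100_000_000)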

But in this context, it's worth noting that I recently tried out SLEAP and the install was super easy (1 conda line) and it worked out-of-the-box with my GPU.

Sounds good! However, this can also be a matter of luck and of existing package versions that might create conflicts. I recently installed DeepLabCut on a computer where I had a huge mess of packages, and the CUDA versions just worked without any modifications.
It is by now well known that tensorflow often has some problems here, while its main alternative (pytorch) is a bit smoother. This, however, does not explain your observation, because SLEAP is also based on tensorflow ;-)

@generalciao
Author

number_neurons * number_timepoints >> 100 million/day

Yep, then I think I'll be fine without GPU. number_timepoints can easily reach 100k for me in a session, but I rarely have >1 session per day, and number_neurons is usually <100, so CPU it is for now. I do have a bunch of previously analyzed data that I'd like to re-run through CASCADE, though, so let's see ...

a matter of luck and existing package versions that might create conflicts

I realize this is getting slightly off-topic, but hope you don't mind me sharing this info from the SLEAP installation docs (bold emphasis mine), since it might be of interest for packaging/distributing this tool in the future:

conda create -y -n sleap -c sleap -c nvidia -c conda-forge sleap=1.3.0
This comes with CUDA to enable GPU support. All you need is to have an NVIDIA GPU and updated drivers.
If you already have CUDA installed on your system, this will not conflict with it.

My system-wide CUDA version was 7.11 pre-SLEAP, and that is still what's reported when I run nvidia-smi. Looking at the install log for SLEAP (1.2.9) in the conda prompt, it downloaded and installed these packages: cudatoolkit-11.3.1, cuda-nvcc-11.3.58, cudnn-8.2.1.32, from the nvidia and conda-forge channels. Tensorflow-2.6.3 was also installed (not so relevant). So as far as I can tell, their way of installing support for specific CUDA (etc.) versions without conflicting with the system-wide CUDA seems to have worked as promised, but I haven't tested my system CUDA very carefully.
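
A rough way to see this split (environment-level CUDA vs. system-wide driver) for yourself would be something like the sketch below; it assumes nvidia-smi is on the PATH and that the environment's tensorflow build exposes get_build_info(), which recent tensorflow 2.x versions do, though the exact dictionary keys can vary between builds:

# Sketch: contrast the CUDA/cudnn that the environment's tensorflow was built against
# with the driver-level CUDA version reported system-wide by nvidia-smi
import subprocess
import tensorflow as tf

env_build = tf.sysconfig.get_build_info()   # per-environment (conda-installed) toolkit info
print("Environment CUDA:", env_build.get("cuda_version"))
print("Environment cuDNN:", env_build.get("cudnn_version"))

# nvidia-smi reports the driver and the maximum CUDA version that driver supports;
# this is the system-wide part that a conda environment does not change
print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)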

@PTRRupprecht
Member

Thanks for the comment, that's very interesting. I think I underestimated what can be done within a conda environment - I did not know that the CUDA libraries can be installed specifically for an environment.

To make this work, the GPU drivers probably still need to be compatible with the installed CUDA version, but as the SLEAP authors state on their documentation page, this is likely the case if the drivers are more or less up to date.

Thanks a lot for your comment, I learned something new!
