
Add models to the algorithm templates #297

Open
jmsmkn opened this issue Sep 24, 2020 · 4 comments
Labels
enhancement New feature or request

Comments

jmsmkn commented Sep 24, 2020

Request from Erdi:

it would also be super cool if I could upload a tensorflow/pytorch model directly?

Maybe we can do something with ONNX?

I don't have experience with ONNX, but my experiences with MMdnn haven't been great. I think supporting TensorFlow and PyTorch directly would be fairly easy.

I think the first step would be to add support in evalutils by having a place to drop your model in the templated repo. Then we can see how it works, and integrate it if it's good.

@jmsmkn jmsmkn added the enhancement New feature or request label Sep 24, 2020
@silvandeleemput
Member

@jmsmkn That's a good suggestion. As you say, supporting just PyTorch and TensorFlow models should be easy for now. I could create a quick mockup for PyTorch when I find some time. I guess we have to think carefully about the interface for the template.

jmsmkn commented Sep 24, 2020

It would be really good to know whether ONNX could be used, so that we do not have to maintain support for all of the frameworks: https://onnx.ai/supported-tools.html#buildModel

@silvandeleemput
Member

Ok, let me do some research on ONNX first.

silvandeleemput commented May 13, 2021

We now use ONNX Runtime (CPU only) for bodyct-multiview-nodule-detection and it works fine. Converting model weights from PyTorch to the ONNX format is also very easy to do (and probably very similar for TensorFlow). I haven't tested it on GPUs yet.
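For reference, the PyTorch-to-ONNX conversion mentioned above can be sketched roughly like this. `torch.onnx.export` is the standard entry point; the function name and the specific input/output/axis names below are illustrative, not taken from the linked repository:

```python
def export_to_onnx(model, example_input, out_path="model.onnx"):
    # Local import so this sketch can be read without torch installed.
    import torch

    # Export runs the model once with the example input to trace it,
    # so switch to inference mode first.
    model.eval()
    torch.onnx.export(
        model,
        example_input,
        out_path,
        input_names=["input"],
        output_names=["output"],
        # Mark the batch dimension as dynamic so the exported model
        # accepts variable batch sizes at inference time.
        dynamic_axes={"input": {0: "batch"}},
    )
```

The resulting `.onnx` file can then be loaded with `onnxruntime.InferenceSession` independently of PyTorch.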

The only caveat I found for using ONNX Runtime (CPU mode) on grand-challenge is that you must explicitly specify the CPU affinities, since the runtime has no permission to resolve them automatically there. See the following code, which creates an onnxruntime.InferenceSession from an ONNX model file and accounts for this:
https://github.com/DIAGNijmegen/bodyct-multiview-nodule-detection/blob/7a6fd7e0590eeeeecf4cee2afa032e0bdeeaeeff/packages/onnxruntime_utils.py
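A minimal sketch of such a session factory, assuming `onnxruntime`'s `SessionOptions` thread-count fields; the helper and function names here are illustrative, and the actual implementation is in the linked file:

```python
import os


def available_cpu_count():
    # On Linux, respect the process's CPU affinity mask (which is what
    # a constrained container actually gets); fall back to the total
    # CPU count on platforms without sched_getaffinity.
    try:
        return len(os.sched_getaffinity(0))
    except AttributeError:
        return os.cpu_count() or 1


def create_session(model_path):
    # Local import so the affinity helper above stays usable without
    # onnxruntime installed.
    import onnxruntime

    options = onnxruntime.SessionOptions()
    # Pin the thread-pool sizes explicitly instead of letting the
    # runtime resolve affinities itself, which it is not permitted
    # to do in the grand-challenge sandbox.
    options.intra_op_num_threads = available_cpu_count()
    options.inter_op_num_threads = 1
    return onnxruntime.InferenceSession(model_path, sess_options=options)
```

With this, `create_session("model.onnx")` returns a ready-to-use session whose thread count matches the CPUs actually allotted to the container.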
