Add NPU backend support for val and inference #2109
I am an NPU user. When I used TIMM recently, I found that it does not support NPU natively. It's a pleasure to see that someone has already contributed NPU support to TIMM in #2102, but that work only covers using the NPU during training. This PR extends NPU support to the validate and inference entry points, addressing that limitation.
Specify the device as "npu", and you can then use the NPU as the accelerator during validation and inference. It is tested on the val subset of ImageNet-1K.

Validate Scripts
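The device selection this PR enables can be approximated by a small fallback helper. This is a hypothetical sketch for illustration only, not the PR's actual code; torch_npu is Huawei's Ascend plugin for PyTorch and is assumed to be installed on NPU machines.

```python
def resolve_device(requested: str = "npu") -> str:
    """Pick the device string to pass to the timm scripts via --device.

    Hypothetical illustration: the PR itself simply forwards the
    requested device to torch. Here we fall back to "cpu" when the
    Ascend plugin (torch_npu) is not importable in this environment.
    """
    if requested == "npu":
        try:
            import torch_npu  # noqa: F401 -- Ascend NPU plugin, assumed installed
        except ImportError:
            return "cpu"
    return requested

print(resolve_device("cuda"))  # a non-NPU request passes through unchanged
```

A helper like this lets the same command line run on machines with and without an NPU attached.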
(screenshot of the validate script invocation)
The validation results on the val subset of ImageNet-1K are as follows:

Inference Scripts
(screenshot of the inference script invocation and its results)
Here are some top-5 classification results obtained by running inference with tiny_vit_21m_512. Everything works correctly on the NPU.
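For reference, the top-5 selection that the inference run reports can be sketched in plain Python. This is a minimal illustration of the ranking step only; the class scores below are made up and the function is not part of the PR.

```python
import heapq

def topk(scores, k=5):
    # Return (class_index, score) pairs for the k highest-scoring classes,
    # the same shape of result a top-5 classification report contains.
    return heapq.nlargest(k, enumerate(scores), key=lambda pair: pair[1])

# Toy scores for a 6-class model; indices 3, 5, and 1 score highest.
scores = [0.10, 0.70, 0.05, 0.90, 0.30, 0.80]
print(topk(scores, k=3))  # [(3, 0.9), (5, 0.8), (1, 0.7)]
```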