NonMaxSuppression parameters are not deployed correctly #2291
Comments
Thanks for reporting, that makes sense to me. Let me check.
Hi @cansik, I've looked into this issue. I think we need more detailed information about your request.
@jaegukhyun As I wrote above: "However, the export does not take into account the two parameters …"
Hi @cansik. There was some misunderstanding in the previous guidance. OTX collects configurable hyperparameters in https://github.com/openvinotoolkit/training_extensions/blob/develop/src/otx/algorithms/detection/configs/detection/configuration.yaml. This configuration.yaml file is placed in your workspace when you use the otx build or otx train command. You may be able to change some hyperparameters from model.py or deployment.py rather than configuration.yaml, but that is not recommended, since we plan to hide those files; in the near future users will only be able to access hyperparameters through configuration.yaml. In short, we want users to change hyperparameters via configuration.yaml and template.yaml, and we don't recommend changing them through model.py and the other files.

However, the current configuration.yaml can only change the confidence threshold; iou_threshold and input_size are fixed by model.py and data_pipeline.py. In summary: the confidence threshold can now be changed by the user for model export and OV model inference (you can check the usage of this variable in the PR summary), while input size and IoU threshold have to wait for support in configuration.yaml.
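To make the recommended workflow concrete, here is a hedged sketch of what such a workspace override could look like. The section and key names below are assumptions for illustration only, not copied from the actual configuration.yaml; check the file generated in your workspace for the real schema.

```yaml
# Hypothetical excerpt of a workspace configuration.yaml (names assumed).
# Per the comment above, only the confidence threshold is configurable
# this way today; iou_threshold and input_size are not yet exposed here.
postprocessing:
  confidence_threshold:
    value: 0.35  # applied at export time and during OV model inference
```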
I trained an object detection model based on YOLOX with otx and exported / optimized it as an OpenVINO model. The network includes a `NonMaxSuppression` operator to post-process the detected objects, which is great. The only problem is that the score threshold is now fixed to `0.01` and the NMS IoU threshold to `0.65`, as set in the `test_cfg`. It seems that the `post_processing` settings in `deployment.py` are ignored and only the `test_cfg` settings from `model.py` are applied. Is this behaviour intended?

But since these values should perhaps be adaptive, wouldn't it make sense to expose them as network inputs? Or is there another way to change the thresholds (maybe even at runtime)? I know this is maybe more MMDetection related, but the deployment problem seems to be otx related.