
How to run multiple detection models in single pipeline? #789

Open
divdaisymuffin opened this issue Sep 9, 2021 · 16 comments
@divdaisymuffin

Can I use two or more gvadetect elements? I actually want to use person detection along with face detection. I tried something like the template below, but it didn't work.
{
  "name": "object_detection",
  "version": 2,
  "type": "GStreamer",
  "template": "rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name=\"videoconvert\" ! video/x-raw,format=BGRx ! queue leaky=upstream ! t. ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[person_detection_2020R2][1][network]}\" model-proc=\"{models[person_detection_2020R2][1][proc]}\" name=\"detection1\" threshold=0.50 ! t. ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[face_detection_adas][1][network]}\" model-proc=\"{models[face_detection_adas][1][proc]}\" name=\"detection\" threshold=0.50 ! gvaclassify model=\"{models[age-gender-recognition-retail-0013][1][network]}\" model-proc=\"{models[age-gender-recognition-retail-0013][1][proc]}\" name=\"recognition\" model-instance-id=recognition ! gvametaconvert name=\"metaconvert\" ! queue ! gvametapublish name=\"destination\" ! appsink name=appsink t. ! splitmuxsink max-size-time=60000000000 name=\"splitmuxsink\"",
  "description": "Object Detection Pipeline",
  "parameters": {
    "type": "object",
    "properties": {
      "inference-interval": { "element": "detection", "type": "integer", "minimum": 0, "maximum": 4294967295 },
      "cpu-throughput-streams": { "element": "detection", "type": "string" },
      "n-threads": { "element": "videoconvert", "type": "integer" },
      "nireq": { "element": "detection", "type": "integer", "minimum": 1, "maximum": 64 },
      "recording_prefix": { "type": "string", "default": "recording" }
    }
  }
}

@nnshah1

nnshah1 commented Sep 9, 2021

Yes, that should be possible - I will provide an example. Question: should the results for the detections be combined and sent together (one set per frame), or separated?

@divdaisymuffin

@nnshah1, yeah, actually I can see that the person model is sending data and only its bounding boxes are visible. What I ideally want is: the person gets detected, then the face gets detected, and the face detection output can be given to the recognition models. So yes, can the results be combined and sent together on each frame? Please help with it.

@nnshah1

nnshah1 commented Sep 9, 2021

To clarify - do you want to do face detection only within the person detection region, or to do them independently? That is, do you want faces and people detected separately, or people detection -> face detection (within detected people) -> recognition (within faces)?

@divdaisymuffin

Actually I want both:
1. people detection -> face detection (within detected people) -> recognition (within faces) -> mqtt
2. For another purpose I want:
live stream --> one branch --> person detection --> mqtt
            --> second branch --> face detection --> age-gender recognition --> mqtt

@nnshah1

nnshah1 commented Sep 9, 2021

In the second one - is it sufficient to have live stream --> person detection --> face detection --> age-gender-recognition --> mqtt (i.e. both branches combining into a single mqtt endpoint)?

@divdaisymuffin

Yes, but will it send data if a person is standing with their back visible and no face? In that case, will data still be sent to mqtt?

@nnshah1

nnshah1 commented Sep 9, 2021

yes

@divdaisymuffin

Yeah, then it's great for me.

@tthakkal

tthakkal commented Sep 10, 2021

Please find a template below for each use case.

Person_detect -> face_detect (roi list) -> age_gender_recog -> metaconvert -> metapublish

Gst-launch pipeline :

gst-launch-1.0 uridecodebin uri=<input file> ! gvadetect model=person-detection.xml model-proc=person-detection.json ! gvadetect model=face-detection.xml model-proc=face-detection.json object-class=person inference-region=roi-list ! gvaclassify model=age-gender-recognition-retail-0013.xml model-proc=age-gender-recognition-retail-0013.json ! gvametaconvert ! gvametapublish ! fakesink

Template adjusted based on your example:

"template":"rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name=\"videoconvert\" ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[person_detection_2020R2][1][network]}\" model-proc=\"{models[person_detection_2020R2][1][proc]}\" name=\"detection1\" threshold=0.50 ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[face_detection_adas][1][network]}\" model-proc=\"{models[face_detection_adas][1][proc]}\" name=\"detection\" threshold=0.50 object-class=person inference-region=roi-list ! gvaclassify model=\"{models[age-gender-recognition-retail-0013][1][network]}\" model-proc=\"{models[age-gender-recognition-retail-0013][1][proc]}\" name=\"recognition\" model-instance-id=recognition ! gvametaconvert name=\"metaconvert\" ! queue ! gvametapublish name=\"destination\" ! appsink name=appsink t. ! splitmuxsink max-size-time=60000000000 name=\"splitmuxsink\"
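For reference, the cascade in the first use case can also be sketched programmatically. The helper below is a plain string-composition sketch (not a DL Streamer or VA Serving API), and the model file names are the placeholders from the gst-launch example above; the point it illustrates is that `object-class=person` and `inference-region=roi-list` belong on the second gvadetect so it runs only inside the person ROIs produced by the first:

```python
# Sketch only: compose the chained-detection pipeline string.
# Model paths are hypothetical placeholders, not real files.

def gvadetect(model, model_proc, name, **props):
    """Render one gvadetect element with its properties.

    Keyword names use underscores and are converted to the
    dash-separated GStreamer property names (object_class -> object-class).
    """
    parts = [f"gvadetect model={model} model-proc={model_proc} name={name}"]
    parts += [f"{k.replace('_', '-')}={v}" for k, v in props.items()]
    return " ".join(parts)

def chained_pipeline(src_uri):
    stages = [
        "uridecodebin uri=" + src_uri,
        # First stage: person detection over the whole frame.
        gvadetect("person-detection.xml", "person-detection.json",
                  "detection1", threshold=0.50),
        # Second stage: face detection restricted to person ROIs
        # emitted by the first stage.
        gvadetect("face-detection.xml", "face-detection.json",
                  "detection2", threshold=0.50,
                  object_class="person", inference_region="roi-list"),
        "gvametaconvert",
        "gvametapublish",
        "fakesink",
    ]
    return " ! ".join(stages)

print(chained_pipeline("file:///tmp/input.mp4"))
```

The same helper renders the second (independent detections) use case by simply omitting `object_class` and `inference_region` from the second stage.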

Person_detect -> queue -> face_detect -> queue -> metaconvert -> metapublish

Gst-launch pipeline :

gst-launch-1.0 uridecodebin uri=<input file> ! gvadetect model=person-detection.xml model-proc=person-detection.json ! gvadetect model=face-detection.xml model-proc=face-detection.json ! gvametaconvert ! gvametapublish ! fakesink

Template adjusted based on your example:

"template":"rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name=\"videoconvert\" ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[person_detection_2020R2][1][network]}\" model-proc=\"{models[person_detection_2020R2][1][proc]}\" name=\"detection1\" threshold=0.50 ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[face_detection_adas][1][network]}\" model-proc=\"{models[face_detection_adas][1][proc]}\" name=\"detection\" threshold=0.50 ! gvametaconvert name=\"metaconvert\" ! queue ! gvametapublish name=\"destination\" ! appsink name=appsink t. ! splitmuxsink max-size-time=60000000000 name=\"splitmuxsink\"

@divdaisymuffin

Thanks @nnshah1 and @tthakkal. Let me try these.

@divdaisymuffin

divdaisymuffin commented Jan 13, 2022

@tthakkal @nnshah1 I want to run 2 detection models together: one is a head detection model, and the other is a model that should take the ROI from the first model and run only on the specific ROI passed by the first detection model.

Based on your previous suggestion of using roi-list I have tried it, but it is not working for me.

Please see the pipeline that I am trying to run.

"template":"rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name=\"videoconvert\" ! video/x-raw,format=BGRx ! queue leaky=upstream ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[head_yolov4_tiny_608to416_default_anchors_mask_012_heatmap_INT8][1][network]}\" model-proc=\"{models[head_yolov4_tiny_608to416_default_anchors_mask_012_heatmap_INT8][1][proc]}\" name=\"detection\" threshold=0.40 object-class=person inference-region=roi-list ! gvadetect model=\"{models[age_gender_new_75][1][network]}\" model-proc=\"{models[age_gender_new_75][1][proc]}\" name=\"detection2\" model-instance-id=detection2 ! gvametaconvert name=\"metaconvert\" ! gvametapublish name=\"destination\" ! appsink name=appsink t. ! splitmuxsink max-size-time=60500000000 name=\"splitmuxsink\"",

It gets stuck with the error below:

[Screenshot attached: "Screenshot from 2022-01-13 09-29-22", showing the error output]

@tthakkal

tthakkal commented Jan 13, 2022

object-class and inference-region should be set on the second detection element. Please update and try.

rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name=\"videoconvert\" ! video/x-raw,format=BGRx ! queue leaky=upstream ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[head_yolov4_tiny_608to416_default_anchors_mask_012_heatmap_INT8][1][network]}\" model-proc=\"{models[head_yolov4_tiny_608to416_default_anchors_mask_012_heatmap_INT8][1][proc]}\" name=\"detection\" threshold=0.40 ! gvadetect model=\"{models[age_gender_new_75][1][network]}\" model-proc=\"{models[age_gender_new_75][1][proc]}\" name=\"detection2\" model-instance-id=detection2 object-class=person inference-region=roi-list ! gvametaconvert name=\"metaconvert\" ! gvametapublish name=\"destination\" ! appsink name=appsink t. ! splitmuxsink max-size-time=60500000000 name=\"splitmuxsink\"

@tthakkal

@divdaisymuffin if it is head detection, please set the right object-class based on the label mentioned in the model-proc of the first detection.

@divdaisymuffin

@tthakkal
Tried the shared pipeline as well; the error remains the same, and in the model-proc the class name is "person" only:

{
  "json_schema_version": "2.0.0",
  "input_preproc": [],
  "output_postproc": [
    {
      "converter": "tensor_to_bbox_yolo_v3",
      "iou_threshold": 0.4,
      "classes": 1,
      "anchors": [10.0, 14.0, 23.0, 27.0, 37.0, 58.0, 81.0, 82.0, 135.0, 169.0, 344.0, 319.0],
      "masks": [3, 4, 5, 0, 1, 2],
      "bbox_number_on_cell": 3,
      "cells_number": 13,
      "labels": ["person"]
    }
  ]
}
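A quick sanity check for this situation is to confirm that the value passed to object-class on the second gvadetect actually appears among the labels emitted by the first model's model-proc. A minimal sketch (the model-proc content is embedded inline here for self-containment; in practice you would `json.load` the file):

```python
import json

# Abbreviated inline copy of the first model's model-proc;
# only the fields relevant to the check are kept.
model_proc = json.loads("""
{ "json_schema_version": "2.0.0",
  "output_postproc": [ { "converter": "tensor_to_bbox_yolo_v3",
                         "labels": ["person"] } ] }
""")

# Collect every label the first detection stage can attach to an ROI.
labels = set()
for post in model_proc.get("output_postproc", []):
    labels.update(post.get("labels", []))

object_class = "person"  # value set on the second gvadetect
assert object_class in labels, f"{object_class!r} not produced by first model"
print("object-class matches a label from the first model:", object_class)
```

If the assertion fails, the second detector would never receive any ROIs, which looks the same as the pipeline silently doing nothing.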

@tthakkal

Try with gst-launch by exec'ing into the container and see if it works.

gst-launch-1.0 rtspsrc location=<rtsp source> udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! queue ! decodebin ! videoconvert ! video/x-raw,format=BGRx ! queue leaky=upstream ! gvadetect ie-config=CPU_BIND_THREAD=NO model=<path to head_yolov4_tiny_608to416_default_anchors_mask_012_heatmap_INT8 model xml> model-proc=<path to head_yolov4_tiny_608to416_default_anchors_mask_012_heatmap_INT8 model-proc json> name=detection threshold=0.40 ! gvadetect model=<path to age_gender_new_75 model xml> model-proc=<path to age_gender_new_75 json> name=detection2 model-instance-id=detection2 object-class=person inference-region=roi-list ! gvametaconvert ! gvametapublish ! fakesink

For any further debugging, let's set up a meeting.

@nnshah1

nnshah1 commented Jan 14, 2022

@divdaisymuffin Which version are you using? If the element doesn't support the property it's probably a DL Streamer version mismatch.
