
Changing inference model #547

Open
varunjain3 opened this issue Jul 19, 2020 · 17 comments

@varunjain3

I have been trying to replace the current person-vehicle-bike-detection-crossroad-0078 model with person-vehicle-bike-detection-crossroad-1016, as both of them have a similar use. But I am facing a problem where the output is not showing any bounding boxes or class labels; instead it shows an undefined label with an accuracy percentage where an object is expected to be.
Screenshot from 2020-07-20 01-17-38

Can you please guide me on how to work with the GStreamer pipeline, or point me to proper documentation for other similar model-replacement scenarios?

@xwu2git
Collaborator

xwu2git commented Jul 19, 2020

@nnshah1, can you help?

@varunjain3
Author

Dear @nnshah1, thanks a lot for the help. This worked exactly as we wanted it to.

But I faced another error when trying to replace the person detection model person-detection-retail-0013 with person-detection-retail-0002 in the stadium scenario. Judging from the CPU usage, one could tell that the model was running; in fact, the svcq-counting stats were also showing numbers, yet no bounding boxes appeared on the resulting video. I also checked the layer_name this time; it is the same in both models. Can you help me understand where I am going wrong?

Other than this, could you please explain where the outputs of the inferences are stored? Per the documentation, they should be stored in some rec folder in the analytics container in a .json file, but I am not able to locate it.

Further, how could one add another model in series, e.g. the person-reidentification model, to the svcq pipeline?

@nnshah1

nnshah1 commented Jul 20, 2020

I wasn't able to find a current person-detection-retail-0002; it seems to no longer be supported. Can you send me a pointer to the model? If the model is running, then again I suspect a model-proc related issue.

The inferences are sent from the pipeline to mqtt and then stored in the database here:

def on_message(self, client, userdata, message):

To add person-re-identification model to the pipeline, please see:

https://github.com/OpenVisualCloud/Smart-City-Sample/tree/51ffca882c843c81bd2b382131de27a507633677/analytics/entrance

This pipeline has both person-detection and person-reidentification. Note: it also includes custom logic to count people based on their re-id, which may or may not be useful for your use case.

@varunjain3
Author

@nnshah1 Here is a pointer to the folder for the model https://download.01.org/opencv/2020/openvinotoolkit/2020.4/open_model_zoo/models_bin/3/person-detection-retail-0002/

Since it is present in the latest Open Model Zoo directory, can I assume it is supported, or are there other criteria for a model to be supported?

Can you let me know what else should be changed in the model-proc with respect to this model? As I checked, the layer_name for both models is the same.

As far as I can understand, the on_message function reads the inference_results from some MQTT JSON message in Smart-City-Sample/analytics/mqtt2db/mqtt2db.py.
But I am not able to understand where this .json file is stored and in which container. Can we retrieve the logs of all inferences at the end of a run, or perhaps in between?

Also, where should one make changes to alter the analytics (UI) being displayed? That is, how does the UI retrieve the stored inferences, and from where?

@nnshah1

nnshah1 commented Jul 21, 2020

I'll take a quick look at the model. I believe I was mistaken above, the model is present here:

https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-detection-retail-0002

The on_message callback reads the inference results from the MQTT topic (MQTT is a message broker). The analytics pipeline streams its results to the message broker in JSON format (one frame's results as one message). The results are not stored in a file.
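To illustrate the flow, here is a minimal sketch of parsing one frame's results as they might arrive from the MQTT topic. The field names ("objects", "detection", "bounding_box") are assumptions modeled on typical gvametaconvert output, not necessarily the sample's exact schema; verify them against a working pipeline's messages.

```python
import json

def parse_frame_results(payload):
    """Extract (label, confidence, bounding_box) triples from one
    frame's JSON results. Field names are assumptions modeled on
    typical gvametaconvert output."""
    result = json.loads(payload)
    detections = []
    for obj in result.get("objects", []):
        det = obj.get("detection", {})
        detections.append((det.get("label"),
                           det.get("confidence"),
                           det.get("bounding_box", {})))
    return detections

# Example payload for a single frame (illustrative values)
payload = json.dumps({
    "objects": [{"detection": {"label": "person",
                               "confidence": 0.93,
                               "bounding_box": {"x_min": 0.1, "y_min": 0.2,
                                                "x_max": 0.4, "y_max": 0.9}}}]
})
print(parse_frame_results(payload))
```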

To make changes to the UI, note that the inference results are stored in the database.

To start prototyping, you can modify mqtt2db.py to print the analytics it receives, and modify the results being stored in the database (to get a sense of how the system is working).
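A hedged sketch of such a debug hook follows. The real handler lives in Smart-City-Sample/analytics/mqtt2db/mqtt2db.py; the signature below follows the common paho-mqtt callback convention, and the database insert itself is omitted.

```python
import json

def on_message(client, userdata, message):
    # Debug sketch: decode one frame's results and print them before
    # any database work. Modeled on the mqtt2db.py handler signature;
    # the actual storage call is omitted here.
    result = json.loads(message.payload.decode("utf-8"))
    print(json.dumps(result, indent=2))
    # ... the real handler would index `result` into the database here ...
```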

If you need to change the visualization itself, that is done in

https://github.com/OpenVisualCloud/Smart-City-Sample/tree/51ffca882c843c81bd2b382131de27a507633677/cloud

@xwu2git xwu2git added the BKM label Aug 21, 2020
@Gsarg18

Gsarg18 commented Oct 13, 2020

I am using gvainference for a custom model, and it is able to detect the locations of objects. But I am facing a problem where the output is not showing any bounding boxes on the frame. I have also updated the analytics.js file for our model classes, but it is still not showing any bounding boxes.

@nnshah1

nnshah1 commented Oct 13, 2020

@Gsarg18 you will need to add a JSON message to the frame, converting the detection output to the message format expected by the rest of the solution. Are you attaching JSON metadata?

@Gsarg18

Gsarg18 commented Oct 13, 2020 via email

@nnshah1

nnshah1 commented Oct 13, 2020

If you are running two detectors across the whole frame, that should be possible now. Running a secondary detection (i.e. on top of a bounding box detected by a primary detector) is not currently directly supported, but it is a feature on the roadmap.

If you have GVA::RegionOfInterest and gvametaconvert in the pipeline, can you verify that the JSON data is well formed and as expected?

I would first compare it to a working case just to double-check whether any required fields are missing.

@Gsarg18

Gsarg18 commented Oct 13, 2020 via email

@nnshah1

nnshah1 commented Oct 13, 2020

I believe you will also need to add a detection tensor to the RegionOfInterest.

Another approach would be to add JSON metadata directly (via add_message), creating your own message to match the expected format (and removing gvametaconvert).
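A hedged sketch of that approach in gvapython, assuming the frame object exposes an add_message(str) method as in dlstreamer's Python API. The message field names here are assumptions and should be matched against the messages a working pipeline produces.

```python
import json

def process_frame(frame):
    # Build a message in the shape the rest of the solution expects
    # (field names are assumptions; compare against a working case).
    message = {
        "objects": [{
            "detection": {
                "label": "person",
                "confidence": 0.9,
                "bounding_box": {"x_min": 0.1, "y_min": 0.1,
                                 "x_max": 0.5, "y_max": 0.5},
            }
        }]
    }
    # Attach the JSON directly, bypassing gvametaconvert
    frame.add_message(json.dumps(message))
    return True
```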

@Gsarg18

Gsarg18 commented Oct 13, 2020 via email

@nnshah1

nnshah1 commented Oct 13, 2020

Can you provide more details (pipeline.json, gvapython code, dlstreamer version) and sample output?

If you are using gvainference (yolov3) + gvapython to add the region of interest + gvametaconvert, you should be quite close.

@Gsarg18

Gsarg18 commented Oct 13, 2020 via email

@nnshah1

nnshah1 commented Oct 18, 2020

@Gsarg18

In the 2020.2 version of dlstreamer, for the JSON metadata to be added to the frame correctly, you will need to set a label_id in addition to calling add_region. Note this is not needed in later versions (specifically, tested with 2021.1).

    def process_frame(self, frame):
        # add_region(x, y, w, h, label, confidence)
        region = frame.add_region(0, 0, 100, 100, "BlueMonday", 1.0)
        # 2020.2 requires an explicit label_id for the JSON metadata to be emitted
        region.detection()["label_id"] = 1
        return True

I confirmed this works with the current Smart Cities sample. As you already mentioned, you'll need to add your label to analytics.js as well for the bounding box to be displayed.

@Gsarg18

Gsarg18 commented Oct 19, 2020 via email
