
Can't run against efficientnet-b0: model exceeds allowable size of 10MB #415

Closed
jemata opened this issue Apr 29, 2024 · 16 comments

jemata commented Apr 29, 2024

Running DL Streamer using the Docker option on a Serpent Canyon board with Ubuntu 22.04.

Running the following command against my RealSense camera, with the classification model replaced by the efficientnet-b0 that came with the models download, I get an error that the model exceeds the allowable size of 10 MB...

gst-launch-1.0 v4l2src device=/dev/video4 ! video/x-raw,width=320,height=240,framerate=30/1 ! videoconvert ! gvadetect model=${DETECTION_MODEL} model_proc=${DETECTION_MODEL_PROC} device=CPU ! queue ! gvaclassify model=${OBJECT_CLASSIFICATION_MODEL} model-proc=${OBJECT_CLASSIFICATION_MODEL_PROC} device=CPU object-class=vehicle ! queue ! gvawatermark ! videoconvert ! fpsdisplaysink video-sink=xvimagesink sync=false



brmarkus commented Apr 29, 2024

Can you show the values of the ENV variables used, please?

Looks like the variable OBJECT_CLASSIFICATION_MODEL_PROC doesn't contain the path to the Model-Processing JSON file but the path of the model?
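
A quick way to double-check is to print the variables before launching the pipeline, e.g. (just a sketch, using the variable names from the command above):

    echo "DETECTION_MODEL                  = ${DETECTION_MODEL}"
    echo "DETECTION_MODEL_PROC             = ${DETECTION_MODEL_PROC}"
    echo "OBJECT_CLASSIFICATION_MODEL      = ${OBJECT_CLASSIFICATION_MODEL}"
    echo "OBJECT_CLASSIFICATION_MODEL_PROC = ${OBJECT_CLASSIFICATION_MODEL_PROC}"

The MODEL variables should point at .xml files and the MODEL_PROC variables at .json files.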


jemata commented Apr 29, 2024

Yes, that is correct, it is the path to the model. Where would the .json files be?

dlstreamer@vision-checkout:~/temp/public/efficientnet-b0$ ls
FP16  FP32  efficientnet-b0
dlstreamer@vision-checkout:~/temp/public/efficientnet-b0$ cd FP32/
dlstreamer@vision-checkout:~/temp/public/efficientnet-b0/FP32$ ls
efficientnet-b0.bin  efficientnet-b0.xml
dlstreamer@vision-checkout:~/temp/public/efficientnet-b0/FP32$ pwd
/home/dlstreamer/temp/public/efficientnet-b0/FP32
dlstreamer@vision-checkout:~/temp/public/efficientnet-b0/FP32$ echo $MODEL_PATH

dlstreamer@vision-checkout:~/temp/public/efficientnet-b0/FP32$ echo $MODELS_PATH
/home/dlstreamer/temp/
dlstreamer@vision-checkout:~/temp/public/efficientnet-b0/FP32$ pwd
/home/dlstreamer/temp/public/efficientnet-b0/FP32
dlstreamer@vision-checkout:~/temp/public/efficientnet-b0/FP32$


jemata commented Apr 29, 2024

I can't find the efficientnet one under:

dlstreamer@vision-checkout:~/dlstreamer/samples/gstreamer/model_proc/intel$ pwd
/home/dlstreamer/dlstreamer/samples/gstreamer/model_proc/intel

@brmarkus

Hmm, good question... still searching... I don't see it inside the container either.

Googling, I found something at "https://github.com/dlstreamer/pipeline-zoo-models/blob/main/storage/efficientnet-b0_INT8/efficientnet-b0.json", but I'm not sure it is still correct...

@tjanczak

efficientnet is a simple classification model; it does not require a dedicated model-proc file. Please use the following params:

gvaclassify inference-region=full-frame device=CPU model=<path_to_models>/efficientnet-b0/FP32/efficientnet-b0.xml model-proc=<path_to_model_proc>/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json labels-file=<path_to_labels>/samples/labels/imagenet_2012.txt ! queue ! ...
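
For reference, a minimal sketch of a full pipeline with those params filled in (the efficientnet-b0 path is the FP32 location shown earlier in this thread; the samples directory is assumed to be /home/dlstreamer/dlstreamer as in the default container layout, so adjust the paths as needed):

    # detection stage unchanged; classification runs efficientnet-b0 on the full frame
    gst-launch-1.0 v4l2src device=/dev/video4 ! video/x-raw,width=320,height=240,framerate=30/1 ! videoconvert ! \
      gvadetect model=${DETECTION_MODEL} model-proc=${DETECTION_MODEL_PROC} device=CPU ! queue ! \
      gvaclassify inference-region=full-frame device=CPU \
        model=/home/dlstreamer/temp/public/efficientnet-b0/FP32/efficientnet-b0.xml \
        model-proc=/home/dlstreamer/dlstreamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json \
        labels-file=/home/dlstreamer/dlstreamer/samples/labels/imagenet_2012.txt ! queue ! \
      gvawatermark ! videoconvert ! fpsdisplaysink video-sink=xvimagesink sync=false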

@brmarkus

Oh, right!
From e.g. "https://docs.openvino.ai/2024/omz_models_model_efficientnet_b0.html" the result from classification with "efficientnet-b0" is:

Object classifier according to ImageNet classes

So providing the additional labels file makes perfect sense, thank you @tjanczak!!

@brmarkus

With the additional information I just found "https://dlstreamer.github.io/supported_models.html", which also describes efficientnet-b0:

7 | Classification | efficientnet-b0 | public | tf, openvino | 0.819 | openvino | CPU, GPU |   | CPU | imagenet_2012.txt | model-proc | classification_benchmark_demo

The linked resources for that row are:

efficientnet-b0: https://docs.openvino.ai/latest/omz_models_model_efficientnet_b0.html
imagenet_2012.txt: https://github.com/dlstreamer/dlstreamer/blob/master/samples/labels/imagenet_2012.txt
model-proc: https://github.com/dlstreamer/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json
demo: classification_benchmark_demo
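
To double-check where those two files live inside the DL Streamer container, something like this should work (a sketch; the /home/dlstreamer/dlstreamer location is an assumption based on the default image layout):

    # locate the generic classification model-proc and the ImageNet labels file
    find /home/dlstreamer/dlstreamer/samples -name preproc-aspect-ratio.json
    find /home/dlstreamer/dlstreamer/samples -name imagenet_2012.txt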


jemata commented May 8, 2024

Added the efficientnet classification option to my working yolov8 detection DL Streamer command, and I am getting the following error:

gst-launch-1.0 v4l2src device=/dev/video10 ! video/x-raw,width=320,height=240,framerate=30/1 ! videoconvert ! gvadetect model=${OBJECT_DETECTION_MODEL} model_proc=${OBJECT_DETECTION_MODEL_PROC} device=CPU ! queue ! gvaclassify inference-region=full-frame device=CPU model=${OBJECT_CLASSIFICATION_MODEL} model-proc=/home/dlstreamer/dlstreamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json labels-file=/home/dlstreamer/dlstreamer/samples/labels/imagenet_2012.txt ! queue ! gvawatermark ! videoconvert ! fpsdisplaysink video-sink=xvimagesink sync=false
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Got context from element 'vaapipostproc1': gst.vaapi.Display=context, gst.vaapi.Display=(GstVaapiDisplay)"(GstVaapiDisplayGLX)\ vaapidisplayglx0", gst.vaapi.Display.GObject=(GstObject)"(GstVaapiDisplayGLX)\ vaapidisplayglx0";
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
X Error of failed request: BadShmSeg (invalid shared segment parameter)
Major opcode of failed request: 130 (MIT-SHM)
Minor opcode of failed request: 2 (X_ShmDetach)
Segment id in failed request: 0x2c00004
Serial number of failed request: 67
Current serial number in output stream: 68


jemata commented May 8, 2024

OK, after several tries and disconnecting/reconnecting the RealSense cameras, it is working.

Not sure why I need to unplug/replug the cameras for DL Streamer to work again, as I was getting "device not found"... Has anyone seen this issue?


jemata commented May 8, 2024

What is the correct way to end the stream? I closed the "gst-launch-1.0" window, and now, re-running the same command, I get:

gst-launch-1.0 v4l2src device=/dev/video4 ! video/x-raw,width=320,height=240,framerate=30/1 ! videoconvert ! gvadetect model=${OBJECT_DETECTION_MODEL} model_proc=${OBJECT_DETECTION_MODEL_PROC} device=CPU ! queue ! gvaclassify inference-region=full-frame device=CPU model=${OBJECT_CLASSIFICATION_MODEL} model-proc=/home/dlstreamer/dlstreamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json labels-file=/home/dlstreamer/dlstreamer/samples/labels/imagenet_2012.txt ! queue ! gvawatermark ! videoconvert ! fpsdisplaysink video-sink=xvimagesink sync=false
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Got context from element 'vaapipostproc1': gst.vaapi.Display=context, gst.vaapi.Display=(GstVaapiDisplay)"(GstVaapiDisplayGLX)\ vaapidisplayglx0", gst.vaapi.Display.GObject=(GstObject)"(GstVaapiDisplayGLX)\ vaapidisplayglx0";
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Device '/dev/video4' is busy
Additional debug info:
../sys/v4l2/gstv4l2object.c(4145): gst_v4l2_object_set_format_full (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
Call to S_FMT failed for YUYV @ 320x240: Device or resource busy
Execution ended after 0:00:00.159668979
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Setting pipeline to NULL ...
Additional debug info:
../libs/gst/base/gstbasesrc.c(3132): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
Freeing pipeline ...
(openvino_env) dlstreamer@vision-checkout:~$


brmarkus commented May 9, 2024

To stop the GStreamer pipeline I usually use "Ctrl-C" manually, or send a SIGTERM to the process(es) (and SIGKILL after a timeout).
In addition, I add "-e" to the command line (see e.g. "https://docs.oracle.com/cd/E88353_01/html/E37839/gst-launch-1-0-1.html"):

   -e, --eos-on-shutdown
           Force an EOS event on  sources  before  shutting  the  pipeline
           down.  This is useful to make sure muxers create readable files
           when a muxing pipeline is shut down forcefully via Control-C.

Programmatically, you just stop the pipeline by setting its state (similar to PLAY, PAUSE, STOP); see e.g. "https://gstreamer.freedesktop.org/documentation/application-development/advanced/pipeline-manipulation.html?gi-language=c".
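
For example, a minimal sketch from the command line (pipeline shortened; only the v4l2src source and the sink are shown):

    # -e sends EOS to the sources before shutting the pipeline down
    gst-launch-1.0 -e v4l2src device=/dev/video4 ! videoconvert ! fpsdisplaysink sync=false
    # stop it with Ctrl-C in the same terminal, or from another terminal:
    kill -TERM $(pgrep -f gst-launch-1.0)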


brmarkus commented May 9, 2024

When just closing the "playback render window" you might still have one/some processes running in the background, leaving resources busy. Try with "Ctrl-C" or SIGTERM/SIGKILL.
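
If the render window was closed but the camera is still reported busy, one way to find and stop leftover processes is something like this (a sketch; the process name may differ depending on how the pipeline was started):

    # list any gst-launch processes still holding the camera
    pgrep -af gst-launch-1.0
    # ask them to terminate, then force-kill any that remain after a short wait
    pkill -TERM -f gst-launch-1.0
    sleep 2
    pkill -KILL -f gst-launch-1.0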


jemata commented May 9, 2024

Ctrl-C works and I am able to restart the stream; however, I am noticing issues with the RealSense cameras: with one connected it works OK, but with two or more it gets flaky.


brmarkus commented May 9, 2024

An Intel RealSense consists of multiple cameras to derive depth information; the driver must be complicated, and there might even be special protocols and specific timings for synchronizing the sensors to retrieve depth information?

There is an Intel RealSense specific SDK for using the camera(s) programmatically: https://github.com/IntelRealSense/librealsense/

Maybe v4l2src is not the best choice - does an Intel RealSense really and fully comply with Video4Linux2 (v4l2)?

I see people have implemented RealSense-specific GStreamer plugins, like ...
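
Before going down that route it might be worth checking which /dev/video nodes the RealSense actually exposes and what formats they offer, e.g. with v4l2-ctl from the v4l-utils package (assuming it is available in the container):

    # list all video nodes grouped by physical device (a RealSense exposes several nodes per camera)
    v4l2-ctl --list-devices
    # show the pixel formats and resolutions offered by one node
    v4l2-ctl --device=/dev/video4 --list-formats-ext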


jemata commented May 14, 2024

Closing this issue as I am able to run efficientnet.

jemata closed this as completed May 14, 2024

jemata commented May 14, 2024

Closing as the issue has been resolved.
