Can't run against EfficientNet-b0: model exceeds allowable size of 10MB #415
Comments
Can you show the values of the ENV variables used, please? Looks like the variable ...
Yes, that is correct, it is the path to the model. Where would the .json files be?
I can't find the EfficientNet one under ~/dlstreamer/samples/gstreamer/model_proc/intel
Hmm, good question... still searching. I don't see it inside the container either. Googling, I found something under "https://github.com/dlstreamer/pipeline-zoo-models/blob/main/storage/efficientnet-b0_INT8/efficientnet-b0.json", but I'm not sure it is still correct...
EfficientNet is a simple classification model; it does not require a dedicated model-proc file. Please use the following params:
gvaclassify inference-region=full-frame device=CPU model=<path_to_models>/efficientnet-b0/FP32/efficientnet-b0.xml model-proc=<path_to_model_proc>/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json labels-file=<path_to_labels>/samples/labels/imagenet_2012.txt ! queue ! ...
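Putting those params together, a minimal sketch of a full command, assuming placeholder locations of my own choosing for MODELS_PATH and DLSTREAMER_DIR, with videotestsrc standing in for a real camera:

```shell
# Assemble the classification pipeline description. MODELS_PATH and
# DLSTREAMER_DIR are hypothetical locations -- adjust to your own setup.
MODELS_PATH=/home/dlstreamer/models
DLSTREAMER_DIR=/home/dlstreamer/dlstreamer

PIPELINE="videotestsrc num-buffers=100 ! videoconvert ! \
gvaclassify inference-region=full-frame device=CPU \
model=${MODELS_PATH}/public/efficientnet-b0/FP32/efficientnet-b0.xml \
model-proc=${DLSTREAMER_DIR}/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json \
labels-file=${DLSTREAMER_DIR}/samples/labels/imagenet_2012.txt \
! queue ! gvafpscounter ! fakesink sync=false"

# Print the full command; run it inside the DL Streamer container:
echo "gst-launch-1.0 $PIPELINE"
```

Using gvafpscounter and fakesink avoids needing a display; swap the tail for `gvawatermark ! videoconvert ! xvimagesink` to see the rendered output.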
Oh, right!
So giving the additional labels file makes perfect sense, thank you @tjanczak!
With the additional information I just found "https://dlstreamer.github.io/supported_models.html", which also describes efficientnet-b0:
- Task: Classification
- Model: efficientnet-b0 (https://docs.openvino.ai/latest/omz_models_model_efficientnet_b0.html)
- Source: public
- Frameworks: tf, openvino
- Accuracy: 0.819
- Inference backend: openvino
- Supported devices: CPU, GPU (tested: CPU)
- Labels file: imagenet_2012.txt (https://github.com/dlstreamer/dlstreamer/blob/master/samples/labels/imagenet_2012.txt)
- model-proc: preproc-aspect-ratio.json (https://github.com/dlstreamer/dlstreamer/blob/master/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json)
- Demo: classification_benchmark_demo
I added the EfficientNet classification option to my working YOLOv8 detection DL Streamer command and I am getting the following error:
gst-launch-1.0 v4l2src device=/dev/video10 ! video/x-raw,width=320,height=240,framerate=30/1 ! videoconvert ! gvadetect model=${OBJECT_DETECTION_MODEL} model_proc=${OBJECT_DETECTION_MODEL_PROC} device=CPU ! queue ! gvaclassify inference-region=full-frame device=CPU model=${OBJECT_CLASSIFICATION_MODEL} model-proc=/home/dlstreamer/dlstreamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json labels-file=/home/dlstreamer/dlstreamer/samples/labels/imagenet_2012.txt ! queue ! gvawatermark ! videoconvert ! fpsdisplaysink video-sink=xvimagesink sync=false
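For completeness, the ${...} variables in a command like the one above have to be exported in the shell first. A hypothetical setup — none of these paths come from the thread, so adjust every value to where your models actually live:

```shell
# Hypothetical model locations -- illustrative only, adjust to your setup.
export OBJECT_DETECTION_MODEL=/home/dlstreamer/models/public/yolov8n/FP32/yolov8n.xml
export OBJECT_DETECTION_MODEL_PROC=/home/dlstreamer/models/public/yolov8n/yolov8n.json
export OBJECT_CLASSIFICATION_MODEL=/home/dlstreamer/models/public/efficientnet-b0/FP32/efficientnet-b0.xml
```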
What is the correct way to end the stream? I closed the "gst-launch-1.0" window, and now re-running the same command I get:
gst-launch-1.0 v4l2src device=/dev/video4 ! video/x-raw,width=320,height=240,framerate=30/1 ! videoconvert ! gvadetect model=${OBJECT_DETECTION_MODEL} model_proc=${OBJECT_DETECTION_MODEL_PROC} device=CPU ! queue ! gvaclassify inference-region=full-frame device=CPU model=${OBJECT_CLASSIFICATION_MODEL} model-proc=/home/dlstreamer/dlstreamer/samples/gstreamer/model_proc/public/preproc-aspect-ratio.json labels-file=/home/dlstreamer/dlstreamer/samples/labels/imagenet_2012.txt ! queue ! gvawatermark ! videoconvert ! fpsdisplaysink video-sink=xvimagesink sync=false
To stop the GStreamer pipeline I usually use "Ctrl-C" manually, or send a SIGTERM to the process(es) (and a SIGKILL after a timeout).
Programmatically, you just change the pipeline state (similar to PLAY, PAUSE, STOP); see e.g. "https://gstreamer.freedesktop.org/documentation/application-development/advanced/pipeline-manipulation.html?gi-language=c"
When just closing the playback render window, you might still have one or more processes running in the background, leaving resources busy. Try "Ctrl-C" or SIGTERM/SIGKILL.
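The manual Ctrl-C / SIGTERM advice above can also be scripted. A minimal sketch — the function name and the 5-second escalation timeout are my own choices, not DL Streamer conventions:

```shell
# Gracefully stop a pipeline process by exact process name, escalating
# to SIGKILL if it has not exited after roughly 5 seconds.
stop_pipeline() {
  name="$1"
  # ask politely first
  pkill -TERM -x "$name" 2>/dev/null
  for _ in 1 2 3 4 5; do
    # return as soon as no process with that name remains
    pgrep -x "$name" >/dev/null || return 0
    sleep 1
  done
  # still alive after the timeout: force-kill
  pkill -KILL -x "$name" 2>/dev/null
}

# e.g. stop a running gst-launch pipeline:
stop_pipeline gst-launch-1.0
```

Using `-x` matches the process name exactly, so a stray pattern cannot take down unrelated processes the way a broad `pkill -f` might.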
Ctrl-C works and I am able to restart the stream; however, I am noticing issues with the RealSense cameras: with one connected it works OK, but with two or more it gets flaky.
The Intel RealSense consists of multiple cameras to realize depth information, so the driver must be complicated; there might even be special protocols and specific timings for synchronization between the sensors to retrieve depth information. There is an Intel RealSense-specific SDK for using the camera(s) programmatically: https://github.com/IntelRealSense/librealsense/ I have also seen people implement RealSense-specific GStreamer plugins.
Closing this issue as I am able to run EfficientNet.
Closing as the issue has been resolved.
Running DL Streamer using the Docker option on a Serpent Canyon board with Ubuntu 22.04.
Running the following command against my RealSense camera, with the classification model replaced by the EfficientNet model that came with the models download, I get an error that the model exceeds the allowable size of 10MB:
gst-launch-1.0 v4l2src device=/dev/video4 ! video/x-raw,width=320,height=240,framerate=30/1 ! videoconvert ! gvadetect model=${DETECTION_MODEL} model_proc=${DETECTION_MODEL_PROC} device=CPU ! queue ! gvaclassify model=${OBJECT_CLASSIFICATION_MODEL} model-proc=${OBJECT_CLASSIFICATION_MODEL_PROC} device=CPU object-class=vehicle ! queue ! gvawatermark ! videoconvert ! fpsdisplaysink video-sink=xvimagesink sync=false