
Capture n_frame_c image by opencv #1198

Closed
subzeromot opened this issue Apr 29, 2024 · 23 comments

@subzeromot

I want to capture a frame and save it locally with OpenCV.

frame = np.array(n_frame_c, copy=True, order='C')
frame = cv2.cvtColor(frame, cv2.COLOR_RGBA2BGRA)

But to do that, the frame needs to be converted to RGBA first.
With deepstream-python-apps, I would need to create a caps-filter element and add it to the pipeline:

caps = Gst.ElementFactory.make("capsfilter", "filter1")
caps.set_property('caps', Gst.Caps.from_string("video/x-raw(memory:NVMM),format=RGBA"))
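
For reference, wiring the converter and caps-filter into a deepstream-python-apps style pipeline looks roughly like this (the pipeline, upstream, and downstream element variables are placeholders):

# Convert the NVMM buffers to RGBA ahead of the probe point
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor-rgba")
caps = Gst.ElementFactory.make("capsfilter", "filter1")
caps.set_property('caps',
    Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))
pipeline.add(nvvidconv)
pipeline.add(caps)
# upstream_element -> nvvideoconvert -> capsfilter -> downstream_element
upstream_element.link(nvvidconv)
nvvidconv.link(caps)
caps.link(downstream_element)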

How can I create this caps-filter in DSL?
In DslSourceBintr.cpp, I tried to update m_bufferOutFormat, but it does not seem to work; I still get a crash when calling frame = np.array(n_frame_c, copy=True, order='C').

std::wstring L_bufferOutFormat(DSL_VIDEO_FORMAT_RGBA);
m_bufferOutFormat.assign(L_bufferOutFormat.begin(), L_bufferOutFormat.end());

@rjhowell44
Collaborator

@subzeromot Please see the reference section on Video Buffer Conversion under the Source documentation.

...specifically, dsl_source_video_buffer_out_format_set to set the output format of your source to RGBA.
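
Something like the following (I'm assuming the usual name/format argument order here; see the Source API reference for the exact signature, and the source name is just an example):

# Set the source's buffer-out format to RGBA so the surface can be
# mapped with pyds/OpenCV downstream (source name is illustrative).
retval = dsl_source_video_buffer_out_format_set('rtsp-source-1',
    DSL_VIDEO_FORMAT_RGBA)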

@subzeromot
Author

Thanks, the video conversion seems OK now, but I still can't copy n_frame_c. I have no idea why; I don't see anything in the log file.

@subzeromot
Author

[pipeline graph attached]
This is my pipeline. I put the pph after the on-screen-display (OSD).

@subzeromot
Author

I think I need to do the conversion in GstBin-osd. How can I do that?

@subzeromot
Author

I set process-mode in nvdsosd to CPU, so the output should be RGBA, right? Why does the pipeline still show NV12?

@rjhowell44
Collaborator

@subzeromot ... There is definitely a problem here. What's interesting is that it works fine with just one source,

[pipeline graph attached]

but fails with two sources... even with the same components downstream.

I'll be working on this tomorrow.

@subzeromot
Author

@rjhowell44 Actually, it works with multiple video sources, but an RTSP source does not work, even with just one source.

@subzeromot
Author

[pipeline graph attached]
This is my pipeline. I put the pph after the on-screen-display (OSD).

Why are the elements in the sources bin not linked to each other?

@rjhowell44
Collaborator

Because they are failing to link to the streammuxer. If you have logging on, you will see the error messages.

It's definitely a race condition in the order the components are linked and how the caps are negotiated. The RTSP Source elements are always the last to link because of the dynamic stream (plugin) selection: H264 vs H265.

I can fix this by adding more specific format control downstream.

@rjhowell44
Collaborator

@subzeromot I take that all back. Everything is working fine for me. I had missed changing the format on one of my sources. The following file uses two HTTP URI sources and two RTSP sources. You should be able to run it if you update the RTSP URIs. Just strip off the .txt extension.

4_source_pgie_iou_tracker_tiler_osd_custom_pph_window.py.txt

and I can see from the graph that the format is RGBA throughout the pipeline.
[pipeline graph attached]

I can't see your image above; it looks like you posted it from a private repo. But in your first image above I see 2 sources with an OSD, but no Tiler or Demuxer. The Streammux is batching the two streams from the sources, and the OSD can only handle a single, non-batched input buffer. You will need to tile the streams (see the attached script, and the rough sketch below) or add a Demuxer with two branches, each with their own OSD and Sink.
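
Roughly, the tiled layout looks like this in DSL (component names and dimensions are just examples; the attached script is the authoritative version):

# Create a 2D Tiler to composite the batched streams into a single
# frame that the one OSD can handle (names and sizes are examples).
retval = dsl_tiler_new('tiler', 1280, 720)

# Build the pipeline in stream order:
# sources -> streammux (implicit) -> pgie -> tracker -> tiler -> osd -> sink
retval = dsl_pipeline_new_component_add_many('pipeline',
    ['uri-source-1', 'rtsp-source-1', 'primary-gie', 'iou-tracker',
     'tiler', 'on-screen-display', 'window-sink', None])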

Please send me a log if you have any further issues.

export GST_DEBUG=1,DSL:4

@subzeromot
Author

Why does your graph look so different? I ran your script and it works perfectly. But if I remove the 2 URI sources (uri-source, uri-source-2) and play only the RTSP sources, they still can not convert to RGBA; if I add back one URI source, it works again. Did you use the main branch or another branch to run this script?

@rjhowell44
Collaborator

Sorry @subzeromot, you're correct. I realized I was testing with my v0.30.alpha dev branch. I have not had a chance to retest with the master branch, but I have confirmed that I can run v0.30.alpha with just one or two RTSP sources.

I will test with master tomorrow. Please try with the v0.30.alpha release if you can. I've optimized a couple of components in this branch, but I'll be surprised if that fixed a bug I was unaware of.

@subzeromot
Author

Thanks, let me try with v0.30.alpha.

@rjhowell44
Collaborator

@subzeromot any update on this?

@subzeromot
Author

It still seems not to be working. I ran with v0.30.alpha; here is my pipeline when I run 2 RTSP sources:
[pipeline graph attached]

@rjhowell44
Collaborator

rjhowell44 commented May 3, 2024

@subzeromot Please provide me with a log file so I can see where/why it is failing for you

export GST_DEBUG_FILE=./log.txt
export GST_DEBUG=1,DSL:4

...and I'm unable to expand the image above; it says it's from a private repo.

@subzeromot
Author

GST_DEBUG.log
Here is my log file. The pipeline runs OK, but when I add the script to copy the frame from the buffer, it crashes with Segmentation fault (core dumped):

frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
n_frame_c = pyds.get_nvds_buf_surface(hash(buffer), frame_meta.batch_id)
frame_org = np.array(n_frame_c, copy=True, order='C')
frame_org = cv2.cvtColor(frame_org, cv2.COLOR_RGBA2BGRA)

@subzeromot
Author

pipeline.zip
Let me know if you can download this pipeline image file.

@rjhowell44
Collaborator

@subzeromot this appears to be a different issue, and I believe the problem is caused by this statement:

n_frame_c = pyds.get_nvds_buf_surface(hash(buffer), frame_meta.batch_id)

Please remove the hash() and try with:

n_frame_c = pyds.get_nvds_buf_surface(buffer, frame_meta.batch_id)

I will try and do the same when I get a few moments.

Otherwise, your pipeline looks good.

@subzeromot
Author

I removed hash(), but nothing happened. Anyway, I also tried setting the process mode for the OSD to GPU, and it's still the same...

@rjhowell44
Collaborator

@subzeromot I've been able to get this to work. The key is that the memory type must be changed to DSL_NVBUF_MEM_TYPE_CUDA_UNIFIED. See get-nvds-buf-surface

It is sufficient to do this at the Streammux with the call below, as long as you add your pad-probe-handler before the Tiler.

        retval = dsl_pipeline_streammux_nvbuf_mem_type_set('pipeline',
            DSL_NVBUF_MEM_TYPE_CUDA_UNIFIED)

Here's my updated script.
4_source_pgie_iou_tracker_tiler_osd_custom_pph_window.py.txt

Updated pipeline graph attached; you can see the Tiler component converts the memory back to cuda-device.

The Tiler can be updated as well if for some reason you want the pad-probe-handler after the Tiler.
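
Putting the pieces from this thread together, the frame-capture pad-probe-handler body looks roughly like the sketch below. It assumes the RGBA buffer-out format and CUDA-unified Streammux memory set above, that the handler is added upstream of the Tiler, and that DSL passes the raw buffer pointer to a custom PPH callback returning DSL_PAD_PROBE_OK; treat those details as assumptions and check the attached script for the authoritative version.

import cv2
import numpy as np
import pyds

def capture_frames_pph(buffer, user_data):
    # buffer is already a raw GstBuffer pointer here, so no hash() is needed
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(buffer)
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.glist_get_nvds_frame_meta(l_frame.data)
        # Map the RGBA surface and copy it out of NVMM memory
        n_frame_c = pyds.get_nvds_buf_surface(buffer, frame_meta.batch_id)
        frame = np.array(n_frame_c, copy=True, order='C')
        frame = cv2.cvtColor(frame, cv2.COLOR_RGBA2BGRA)
        cv2.imwrite('frame_{}_{}.png'.format(
            frame_meta.source_id, frame_meta.frame_num), frame)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return DSL_PAD_PROBE_OK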

@rjhowell44
Collaborator

I plan to add this as an example and cover the requirements under a new section in the Overview called "Working with OpenCV".

@rjhowell44
Collaborator

New examples and an Overview section have been added to the v0.30.alpha release, which has now been released.
