Capture n_frame_c image by opencv #1198
@subzeromot Please see the reference section on Video Buffer Conversion under the Source documentation, specifically dsl_source_video_buffer_out_format_set, to set the output format of your source to RGBA.
Thanks, the video conversion seems OK, but I still cannot copy n_frame_c. I have no idea why; I don't see anything in the log file.
I think I need to do the conversion in GstBin-osd. How can I do that?
I set process-mode in nvdsosd to CPU, so the output sink should be RGBA, right? Why does the pipeline still show NV12?
@subzeromot ... There is definitely a problem here. What's interesting is that it works fine with just one source, but fails with two sources... even with the same components downstream. I'll be working on this tomorrow.
@rjhowell44 Actually, it works with multiple video sources, but an RTSP source does not work, even with just one source.
Because they are failing to link to the Streammuxer... if you have logging on you will see the error messages. It's definitely a race condition in the order the components are linked and how the caps are negotiated. The RTSP Source elements are always the last to link because of the dynamic stream (plugin) selection: H264 vs H265. I can fix this by adding more specific format control downstream.
@subzeromot I take that all back. Everything is working fine for me. I had missed changing the format on one of my sources. The following file uses two HTTP URI sources and two RTSP sources... You should be able to run it if you update the RTSP URIs. Just strip off the .txt extension: 4_source_pgie_iou_tracker_tiler_osd_custom_pph_window.py.txt. I can see from the graph that the format is RGBA throughout the pipeline. I can't see your image above; it looks like you posted it from a private repo. But in your first image above I see two sources with an OSD... but no Tiler or Demuxer. The Streammux is batching the two streams from the sources, and the OSD can only handle a single, non-batched input buffer. You will need to Tile the streams (see the script attached) or add a Demuxer with two branches, each with their own OSD and Sink. Please send me a log if you have any further issues.
Why does your graph look so different? I ran your script and it works perfectly. But if I remove the two sources (uri-source, uri-source-2) and play with only the RTSP sources, they still cannot convert to RGBA; if I add back one URI source, it works again. Did you use the main branch or another branch to run this script?
Sorry @subzeromot, you're correct. I realized I was testing with my v0.30.alpha dev branch. I have not had a chance to retest with the Master branch, but I have confirmed that I can run v0.30.alpha with just one or two RTSP sources. I will test with master tomorrow. Please try with the v0.30.alpha release if you can. I've optimized a couple of components in this branch, but I'll be surprised if that fixed a bug I was unaware of.
Thanks, let me try with v0.30.
@subzeromot any update on this?
@subzeromot Please provide me with a log file so I can see where/why it is failing for you
... and I'm unable to expand that image above; it says it's from a private repo.
GST_DEBUG.log
pipeline.zip
@subzeromot this appears to be a different issue, and I believe the problem is caused by this statement: n_frame_c = pyds.get_nvds_buf_surface(hash(buffer), frame_meta.batch_id). Please remove the hash() and try with n_frame_c = pyds.get_nvds_buf_surface(buffer, frame_meta.batch_id). I will try to do the same when I get a few moments. Otherwise, your pipeline looks good.
I removed hash(), but nothing happened. Anyway, I also tried setting the process mode for the OSD to GPU, and it's still the same ...
@subzeromot I've been able to get this to work. The key is that the memory type must be changed to DSL_NVBUF_MEM_TYPE_CUDA_UNIFIED. It is sufficient to do this at the Streammux with the call below, as long as you add your pad-probe-handler before the Tiler.
retval = dsl_pipeline_streammux_nvbuf_mem_type_set('pipeline', DSL_NVBUF_MEM_TYPE_CUDA_UNIFIED)
Here's my updated script. Updated pipeline graph... you can see the Tiler component converts the memory back to cuda-device. The Tiler can be updated as well if for some reason you want the pad-probe-handler after the Tiler.
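With the Streammux memory type set to CUDA_UNIFIED and the source format set to RGBA, the custom pad-probe-handler can copy the surface into NumPy. Below is a minimal sketch of just the array handling: a synthetic array stands in for what pyds.get_nvds_buf_surface returns, and the RGBA-to-BGRA swap is done with plain NumPy indexing in place of cv2.cvtColor, so neither pyds nor cv2 is required to run it.

```python
import numpy as np

def capture_frame(n_frame_c):
    # Deep-copy the surface so the buffer can be returned to the pipeline.
    frame = np.array(n_frame_c, copy=True, order='C')
    # RGBA -> BGRA: swap the R and B channels (what cv2.COLOR_RGBA2BGRA does).
    return frame[..., [2, 1, 0, 3]]

# Synthetic 2x2 solid-red RGBA surface standing in for the mapped buffer.
surface = np.zeros((2, 2, 4), dtype=np.uint8)
surface[..., 0] = 255  # red channel
surface[..., 3] = 255  # alpha

bgra = capture_frame(surface)
print(bgra[0, 0])  # -> [  0   0 255 255]
```

In a real handler, `surface` would be the array returned by pyds.get_nvds_buf_surface(buffer, frame_meta.batch_id), and the copy is what makes it safe to hand the result to OpenCV after the probe returns.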
I plan to add this as an example and cover the requirements in a new section under the Overview, called Working with OpenCV.
New examples and an Overview section have been added to the v0.30.alpha release, which has been released.
I want to capture a frame and save it locally with OpenCV:
frame = np.array(n_frame_c, copy=True, order='C')
frame = cv2.cvtColor(frame, cv2.COLOR_RGBA2BGRA)
But to do that, the frame should be converted into RGBA first.
With deepstream-python-apps, I would need to create a caps-filter element and add it to the pipeline:
How can I create this caps-filter in DSL?
In DslSourceBintr.cpp, I tried to update m_bufferOutFormat, but it does not seem to work; I still get a crash when calling frame = np.array(n_frame_c, copy=True, order='C').
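As an aside, once the copy succeeds the frame does not strictly need cv2 to be written to disk. A hypothetical helper (not part of DSL or pyds) can dump the RGB channels of an RGBA NumPy frame as a binary PPM file for quick inspection:

```python
import numpy as np

def save_ppm(path, rgba):
    """Write the RGB channels of an HxWx4 RGBA uint8 array as a binary PPM (P6)."""
    height, width = rgba.shape[:2]
    with open(path, 'wb') as f:
        f.write(b'P6\n%d %d\n255\n' % (width, height))
        f.write(rgba[..., :3].tobytes())  # drop the alpha channel

# A solid-red 4x3 RGBA frame standing in for a captured n_frame_c copy.
frame = np.zeros((3, 4, 4), dtype=np.uint8)
frame[..., 0] = 255  # red channel
frame[..., 3] = 255  # alpha
save_ppm('frame.ppm', frame)
```

PPM is a trivially simple format that most image viewers open, which makes it handy for verifying the capture path before wiring in cv2.imwrite.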