[HOW-TO] get hardware acceleration #1027
Hi, thanks for the question. You're right that the Arm cores are busier now that we run more of the camera stack on them. Here are some ideas to consider:
I'm afraid I don't know too much about compiling customised versions of FFmpeg, so I'm sorry that I can't help you with that.
Hi and thanks for your answer!
I'll try this for sure! I also read (though not in any recent posts) that V4L2 might also improve CPU usage. Do you know anything about that?
Yes, I know C++ would be more efficient than Python but unfortunately the streaming code is just a small part of a larger Python program.
Yes, you understood it correctly!
This will already be using the V4L2 encoder. If you run just the camera, with no encoding, the CPU usage is still relatively high (well, as a percentage of a single core). There does just seem to be quite a lot of code churning over, as camera and encoder interrupts are all passed up to and then handled by Python.

You're right that you can't access the same camera from different processes, but you can access different cameras. Not sure if that helps you, though. Piping buffers to other processes is likely to be expensive, of course. You can improve that by passing shared memory buffers around, but I think the whole thing would start to get really complicated.

Is it a problem that running two cameras will burn most of a CPU core? I would expect you might have problems getting the video encoder to deal with two streams at just under 60fps as well. Another thought would be to put…
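As a rough illustration of the shared-memory idea above, here is a minimal sketch using Python's `multiprocessing.shared_memory`. The frame shape and the dummy 128-valued frame are illustrative assumptions; in a real program the producer would copy an actual camera frame (e.g. the result of Picamera2's `capture_array()`) into the block instead:

```python
# Sketch: passing a camera-sized frame between processes via shared memory,
# which avoids the cost of piping raw buffers through a pipe.
# FRAME_SHAPE and the constant pixel value are illustrative assumptions.
from multiprocessing import Process, shared_memory

import numpy as np

FRAME_SHAPE = (480, 640, 3)            # hypothetical frame size
FRAME_BYTES = int(np.prod(FRAME_SHAPE))

def producer(shm_name):
    # Stand-in for copying a real camera frame into the shared block.
    shm = shared_memory.SharedMemory(name=shm_name)
    frame = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=shm.buf)
    frame[:] = 128                     # dummy pixel data
    shm.close()

def consumer(shm_name):
    # Stand-in for real processing in another process.
    shm = shared_memory.SharedMemory(name=shm_name)
    frame = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=shm.buf)
    mean = float(frame.mean())
    shm.close()
    return mean

def run_demo():
    shm = shared_memory.SharedMemory(create=True, size=FRAME_BYTES)
    try:
        p = Process(target=producer, args=(shm.name,))
        p.start()
        p.join()
        return consumer(shm.name)
    finally:
        shm.close()
        shm.unlink()
```

The producer writes into the block in a child process and the consumer reads the same memory without any copying through a pipe; synchronisation (e.g. a `multiprocessing.Event` per frame) would still be needed in a real two-camera setup.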
Finally I found some time to try what you suggested!
I'd really like to try using libcamera-vid to stream directly to the network, but I can't make it work...
I usually stream using the following command. However, when I try to run it I get this error (I'm copying the whole output because it might be useful):
A question about streaming directly with libcamera: is it possible to use this command from Python and also grab the frames for other purposes (I know I can import libcamera from Python)?
I don't know too much about RTSP specifically, but I'm not sure that our libav integration is able to support it. In my experience, RTSP can be a bit tricky because it requires a server to deliver SDP descriptions and do some negotiation with the client. @naushir might be able to answer that definitively. Another alternative would be to output your H.264 stream directly to stdout and pipe that into a separate ffmpeg process. This is less elegant, but it's the same thing that Picamera2 does, and you might get some performance benefits from running libcamera-vid instead.
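The stdout-to-ffmpeg pipe suggested above could be wired up from Python with `subprocess`, roughly as follows. The resolution, framerate, and RTSP URL are assumptions to adapt to your setup (the URL shown matches a typical local RTSP server endpoint):

```python
# Sketch: pipe libcamera-vid's H.264 output into a separate ffmpeg process.
# The resolution, framerate, and RTSP URL below are illustrative assumptions.
import subprocess

def camera_cmd(width=1920, height=1080, fps=30):
    # libcamera-vid writes raw H.264 to stdout when given "-o -";
    # --inline repeats the stream headers so clients can join mid-stream.
    return ["libcamera-vid", "-t", "0",
            "--width", str(width), "--height", str(height),
            "--framerate", str(fps),
            "--codec", "h264", "--inline", "-o", "-"]

def ffmpeg_cmd(url="rtsp://localhost:8554/cam"):
    # "-c copy" avoids re-encoding: ffmpeg only remuxes the H.264 stream,
    # so it adds very little CPU load on top of the camera process.
    return ["ffmpeg", "-f", "h264", "-i", "-", "-c", "copy", "-f", "rtsp", url]

def start_pipeline(url="rtsp://localhost:8554/cam"):
    cam = subprocess.Popen(camera_cmd(), stdout=subprocess.PIPE)
    enc = subprocess.Popen(ffmpeg_cmd(url), stdin=cam.stdout)
    cam.stdout.close()  # let ffmpeg own its end of the pipe
    return cam, enc

if __name__ == "__main__":
    cam, enc = start_pipeline()
    enc.wait()
```

Because the encoding happens in the camera's hardware and ffmpeg only copies the stream, this keeps the Python process itself out of the per-frame data path entirely.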
Yes, I know RTSP is a bit tricky, but I use media-mtx as a server to manage the client connections and it works very well! Anyway, I will try to pipe the H.264 stream into ffmpeg, and maybe by compiling it I'll reach my goal! Again, thanks so much for your support!
Hello everyone, I'm trying to get hardware acceleration to reduce CPU usage while using Picamera2 to stream the camera video.
I have a CM4 with two official Raspberry Pi Camera Module 3s.
Streaming a single camera uses around 45% CPU, while streaming with both cameras uses almost 100%.
I'm currently streaming the video this way:
More than a year ago, before Bullseye and using the old legacy camera stack, I was able to stream the camera with almost no CPU usage (with the first Raspberry Pi camera version, though). This was possible by compiling userland and ffmpeg with some options and using ffmpeg to stream the video with almost the same command as the one above.
Now this solution no longer works, and it seems hard to find any topics related to my request.
Thanks in advance for your help!