Support ffmpeg 6 #2071
Well that's unfortunate. We'll have to do version checking to switch between the two pixel format names, and parsing of ffmpeg's version string has been difficult in the past.
How about checking available pix_fmts and picking one? Wouldn't that be more universal? I would assume parsing that would be even harder.
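The "check available pix_fmts" idea could look something like this. This is a sketch, not owncast's actual implementation; the column layout it assumes is the stock `ffmpeg -pix_fmts` table output (flags column, then the format name), and the function only decides between the two names discussed in this thread.

```go
package main

import (
	"fmt"
	"strings"
)

// pickVaapiPixFmt scans the output of `ffmpeg -pix_fmts` and returns
// "vaapi" when the merged format name (ffmpeg >= 5) is listed, falling
// back to the older "vaapi_vld" name otherwise.
func pickVaapiPixFmt(pixFmtsOutput string) string {
	for _, line := range strings.Split(pixFmtsOutput, "\n") {
		fields := strings.Fields(line)
		// Format rows look like: "..H.. vaapi  0  0  0" (flags, name, ...).
		if len(fields) >= 2 && fields[1] == "vaapi" {
			return "vaapi"
		}
	}
	return "vaapi_vld"
}

func main() {
	ffmpeg5 := "..H.. vaapi                  0              0      0"
	ffmpeg4 := "..H.. vaapi_vld              0              0      0"
	fmt.Println(pickVaapiPixFmt(ffmpeg5)) // vaapi
	fmt.Println(pickVaapiPixFmt(ffmpeg4)) // vaapi_vld
}
```

In a real integration, the input would come from running `ffmpeg -pix_fmts` via `os/exec` rather than a canned string.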
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If this was a feature request that others have shown no interest in then it's likely to not get implemented due to lack of interest. If others also want to see this feature then now is the time to say something! Thank you for your contributions.
Due to the limited number of people using dedicated hardware this hasn't been more of an issue, but it's still on the todo list.
An update: As far as I can tell ffmpeg 5 has been working for everyone except those trying to use However, if anybody else wanted to pick this up to test, verify and support the new pixel format it'd be great. Otherwise it'll continue to be pushed off until the larger projects that are in the works are released.
Just wanted to add that upgrading to ffmpeg 6 if you use x264 shouldn't be a problem. I have been running it for a week 24/7 now. The performance of using x264 with ffmpeg 6 on an ARM CPU was also noticeably better than with ffmpeg 4 or ffmpeg 5. I'm seeing about 13% lower CPU with the same settings (three variants on highest quality).
I can take a look at this if the fix isn't required too soon. Being able to detect the ffmpeg capabilities is probably also needed for the other hardware encoding improvements I want to do.
No rush, it's been sitting around as a todo this long :)
Maybe we're overthinking it. Maybe we treat these codec implementations as completely different codecs? Like we could have Vaapi Original and Vaapi New or something and just have the user-facing display name be "Vaapi" for both.
For a short term solution it would probably be the easiest. But I'm a bit worried about the possible number of options this would create in the long term. At least for NVENC I could imagine around 3 different options it could run with (different hardware scalers).
I didn't realize there was an issue with nvenc as well, I thought the only issue we were trying to resolve was the renaming of vaapi with the new versions of ffmpeg. What is the nvenc issue with ffmpeg >5?
I don't think that's going to be possible since failures can happen for so many different reasons.
Ah sorry for being unclear. This is more of an issue with the general hardware acceleration for nvenc and has nothing to do with the new ffmpeg version. It's just that there are two different ways to enable hardware accelerated rescaling with nvenc which might not always be present. In that case you would need to provide different options to the user.
Oh got it, thanks for clarifying. If it's just two versions of nvenc then we could write code to introspect the options available to nvenc and try to parse out the version, kind of like we're doing with codec names, but with codec options instead. But if it's more than two versions of a codec we need to support then I don't think that's a scalable solution.
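The option-introspection idea sketched above could be as simple as scanning the help text ffmpeg prints for an encoder. This assumes the usual `ffmpeg -h encoder=NAME` layout, where each AVOption row starts with `-name`; it is an illustration of the approach, not existing owncast code.

```go
package main

import (
	"fmt"
	"strings"
)

// encoderHasOption reports whether a named option appears in the text
// of `ffmpeg -h encoder=NAME`. Each option row is assumed to start
// with "-<option>" as its first whitespace-separated field.
func encoderHasOption(helpOutput, option string) bool {
	for _, line := range strings.Split(helpOutput, "\n") {
		fields := strings.Fields(line)
		if len(fields) > 0 && strings.TrimPrefix(fields[0], "-") == option {
			return true
		}
	}
	return false
}

func main() {
	sample := "  -preset  <int>  E..V....... Set the encoding preset\n" +
		"  -tune    <int>  E..V....... Set the encoding tuning info"
	fmt.Println(encoderHasOption(sample, "preset")) // true
	fmt.Println(encoderHasOption(sample, "rc"))     // false
}
```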
Hello, what do you think about #3071?
Hi, I'm currently running owncast in an LXC container using Alpine as the base OS, and I'm facing the same issue.
After I removed the presets I also get low memory warnings, however, it's working fine without issues. I'd like to resolve those warnings, but at least it's working.
@mahmed2000 Since you're testing manually, could you share the invocation you're testing with? I'll run the same thing and report back.
Additionally
But it also says that
@gabek Yep, you're right with the compression_level setting. Varying the presets doesn't seem to have any effect on file size, but compression_level is having an effect on Intel (only 1 and 2 seem to actually change anything, though). Still, the default behavior seems to be to ignore it rather than throw an error. The docs just seem all over the place for this. Command (testing with OBS; just modified file outputs, debug-level logs, listening for RTMP, stderr redirected to a file):
@LelieL91 Thanks so much for testing! The more datapoints the better! Continue to watch this thread if you can, I'm sure we'll be making changes and will continue to be testing.
We do have a way to ignore errors that are non-fatal and might be due to buggy codec log messages, or other things out of our control. Not something we should just throw things into right away, but just mentioning that there's a precedent of dealing with "we're getting errors, but it works fine" scenarios. https://github.com/owncast/owncast/blob/develop/core/transcoder/utils.go#L45
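The pattern in question is a substring allowlist over ffmpeg's log output. The real list lives in the linked `core/transcoder/utils.go`; the entries below are illustrative placeholders, not the actual contents of that file.

```go
package main

import (
	"fmt"
	"strings"
)

// errorsToIgnore holds substrings of known non-fatal ffmpeg log lines.
// These example entries are placeholders for illustration only.
var errorsToIgnore = []string{
	"deprecated pixel format used",
	"Spatial AQ is not supported",
}

// isIgnorableError reports whether an ffmpeg log line matches a known
// non-fatal message and can be suppressed instead of surfaced as fatal.
func isIgnorableError(line string) bool {
	for _, ignorable := range errorsToIgnore {
		if strings.Contains(line, ignorable) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isIgnorableError("[swscaler] deprecated pixel format used, make sure you did set range correctly")) // true
	fmt.Println(isIgnorableError("[h264_vaapi] something genuinely fatal")) // false
}
```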
Sure, no problem; as I said before, I'm happy to help with this feature.
If you need something specific, feel free to ask anytime. At the moment, this error is the only "issue", but not a fatal one. I don't know if it's possible, but updating this fork with the current dependencies/commits would be helpful for future testing/merging; the problem with the go module was kind of annoying to track down. Luckily it was only that one for now 🥲
I may have found something online about my error; I'm reporting it here in case it's useful for someone else:
See the link for reference: https://gitlab.freedesktop.org/mesa/mesa/-/issues/3524#note_1989027
@LelieL91 that's definitely a driver issue, what's the output of Also as per the Alpine wiki did you install
@gabek I think it's dependent on AMD vs other devices. AMD follows the format in its section with 0-29, which is more of a sequence of bit flags than an actual scale. For other encoders it maps to some arbitrary scale dependent on the device. Could be 1-7, could be 1-5, could be 1-3. Might map directly onto the presets of, say, h264_qsv for example, which also go from 1-7, or does some custom settings. That does complicate things. I was also testing it wrong, and compression_level actually does seem to change speed up to 7 on my end.
There still has to be an answer, though. We can't start asking people what their hardware is. Maybe the real solution is to not provide a level at all and hope whatever the defaults are will be good enough. Perhaps not the most optimal, but if there isn't an answer to this question, then there's not much that can be done.
Did some digging into compression_level. From the ffmpeg source, the only reference in the context of h264_vaapi (outside of setting the default) is in this function. In short, it checks it against the VAAPI interface for this VAConfigAttribEncQualityRange. The value gets set by vaapi itself, and the actual range would depend on the underlying device and driver, which is why the ffmpeg docs say what they say. What the value means is also up to the device, as with the AMD section. I should be seeing a warning when setting the value arbitrarily high (1000, etc.) but it seems to just be implicitly clamping down to 1-7 somewhere else. Does a value >30 trigger a warning?
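The clamping behavior described above can be sketched as follows. This mirrors the observed behavior only; it is not ffmpeg's actual code, the function name is made up, and the lower bound of 1 is an assumption. The driver-reported maximum stands in for what VAConfigAttribEncQualityRange would return.

```go
package main

import "fmt"

// clampQuality sketches the observed behavior: the driver reports an
// upper bound (via VAConfigAttribEncQualityRange in the real code) and
// out-of-range compression_level values appear to be clamped rather
// than rejected with an error.
func clampQuality(requested, driverMax int) int {
	if requested > driverMax {
		return driverMax // the real code would warn here
	}
	if requested < 1 {
		return 1 // assumed lower bound
	}
	return requested
}

func main() {
	fmt.Println(clampQuality(1000, 7)) // 7
	fmt.Println(clampQuality(3, 7))    // 3
}
```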
That can work, but then the CPU or GPU usage slider is misleading for vaapi. Could go the obs route and require the user to set the flag and value explicitly. See obsproject/obs-studio#7953 (comment).
Unfortunately, I think that's a non-starter. I know most people will have it installed already to get vaapi working in the first place, but that's too much of a dependency.
That sounds like a support nightmare. It would be the most customizable, but in reality nobody is going to understand how to set this value, and it will become my problem. This is the same reason you can't just type in a free-form framerate in the admin, for example.
That's the core of it all for me. Is there a way to give people a relative way to approximately express their intention? I understand a 3 on Intel is different than a 3 on AMD. But is a 3 on Intel faster than a 0 on Intel, and is a 3 on AMD faster than a 0 on AMD? If so, then we can at least give some control, and that's better than none.
If it was AMD only, yes. If it was Intel only, yes. Together, no. A 3 on Intel is faster than a 0. A 3 on AMD is slower than a 0, but faster than a 1, and 2 isn't even valid. Either is incompatible with how the other uses the value and needs handling separately, which is why I linked the OBS comment. Also need to figure out where NVIDIA fits.

Intel

I can

AMD

Take the following with a heavy dose of testing; I don't have AMD hardware to verify on. Just what I understand from the docs.
Would just Otherwise, it's just not possible as is.
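The separate-handling approach argued for above could be sketched as a per-vendor mapping from a neutral slider onto compression_level. Everything here is an assumption drawn from this discussion and would need testing on real hardware: the Intel 1-7 higher-is-faster range, and the decision to leave AMD at its driver default because its values behave like bit flags.

```go
package main

import "fmt"

// Vendor identifies which compression_level convention to use, since
// the same number means different things on different hardware.
type Vendor int

const (
	Intel Vendor = iota
	AMD
)

// compressionLevelFor maps a neutral 0 (slowest) .. 100 (fastest)
// slider onto a vendor-specific compression_level. Placeholder values.
func compressionLevelFor(v Vendor, slider int) int {
	switch v {
	case Intel:
		// Assumed: Intel uses 1..7 with higher meaning faster.
		return 1 + slider*6/100
	case AMD:
		// Assumed: AMD's 0-29 values are flag-like, so the safest
		// choice is to keep the driver default of 0.
		return 0
	}
	return 0
}

func main() {
	fmt.Println(compressionLevelFor(Intel, 100)) // 7
	fmt.Println(compressionLevelFor(Intel, 0))   // 1
	fmt.Println(compressionLevelFor(AMD, 50))    // 0
}
```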
It could be a decent solution. I'm wondering if it's worth keeping track of hardware, since that's not currently a thing we do. I don't know how many people actually take advantage of vaapi. I don't think it's many. Otherwise, I'm with you, it sounds like the default will have to do and we disable the slider. Thanks for all the detailed research into this.
I want to enumerate the open questions I'm aware of here:
Am I missing anything?
And also the This triggers it:
But this doesn't:
I can also confirm that even running through owncast itself, there is no error when running with presets.
I'm seeing So, unfortunately, there appears to be some case(s) where it's not ignorable. Edit: Intel put in a patch a few days ago for this? https://patchwork.ffmpeg.org/project/ffmpeg/patch/20240410030103.520402-2-haihao.xiang@intel.com/ Edit 2: I applied that patch and the memory error completely went away.
This isn't helpful now, but I did put in a pre-order of an AMD Framework laptop last November, that I'm guessing I should get around June. So at that point I should be able to test AMD + vaapi. But that's a long time to wait.
@gabek What I did to test Intel is grabbing a server from the Hetzner Server auction. They charge per hour for bare metal hardware and with the auction servers there is no set-up fee. Grab an AMD that has an integrated GPU and for pennies per hour it's testable.
That's a great suggestion, thank you!
@rmens, do you have the exact CPU and/or distro you used? Tried an i7-6700, doesn't seem to have the problem with scaling. Unfortunately, all the AMD CPUs are X or P series which don't have iGPUs, and neither do the 3900s.
@mahmed2000 I tried an i5-12500 with Debian 12
@rmens can't repro. Still works on an i5-12500 with deb 12. I've tried the stock bookworm v5 ffmpeg, the latest autobuild from this repo: https://github.com/BtbN/FFmpeg-Builds, and manually compiling v6.1.1. I'd try your linked ppa but I can't get it to install. Can you test and see if either the standalone bin or manually compiling still causes the error?
Bumped up from bookworm to trixie since that has 6.1.1, works fine there too.
I'll have an AMD laptop in my hands within the next couple of weeks, so I'll be able to test with those integrated graphics.
Good news. I'm testing on the integrated graphics on my new AMD laptop and it works fine. It's throwing the
As I wrote before, the problem is that there is a fallback to the CPU, so yes, apparently everything seems to be working, but the CPU usage is the same as without GPU passthrough. And that's not only my test; it was also mentioned in the reference above. Can you please check your CPU usage both with and without iGPU use?
Sorry for the confusion. It does look like the CPU usage is the same between the two. I wonder how that even works, does vaapi have CPU encoding support? Does it import libx264? I wouldn't think it would, because why? But that's beside the point. If there's legitimately an issue with vaapi support, then I think we're stuck. Even if we wanted to ship a version of ffmpeg with Owncast, it would have to support all the codecs, including NVENC, and that's closed-source software, so legally I don't believe it would be possible. Not that I'm interested in going that route anyway. We're too small of a project to be maintaining our own build of ffmpeg.
In ffmpeg version 5, the three distinct pixel formats seem to have been merged into one named just `vaapi`. The code currently forces the `vaapi_vld` pixel format, which is no longer present in the latest version of ffmpeg.

Here is a comparison of available pixel formats on ffmpeg 5.0.1 (from alpine 3.16) and 4.4.1 (from alpine 3.15).