Replies: 2 comments 3 replies
-
I didn't write the jpeg writer code, so I don't actually know how it works.
2 replies
-
No, the original author is Jon Olick. https://www.jonolick.com/code.html
I don't think he has the code on Github anywhere, just the source drop.
…-Fabian
On Fri, Sep 24, 2021 at 3:10 PM José Carlos Cazarin Filho < ***@***.***> wrote:
Nvm, I traced it back to the GitHub user jpcy (https://github.com/jpcy)
I sent an e-mail, hope to hear back
Thanks for the help
1 reply
-
Hello all!
In our system we have an ov10626 sensor that outputs data in VYUV422 format. Right now we use the hardware capabilities of our processor to convert that data to RGB, and then hand that data to the stb library (we use stbi_write_jpg_to_func) to encode it as JPEG.
This is a bit redundant, since the stb library converts RGB back to YUV internally before encoding the JPEG.
I was trying to change the library code to do that directly (YUV from the sensor straight to JPEG), but with no success.
Since the data is 4:2:2, for every 128 Y components there are 64 U components and 64 V components.
I believe my code that builds an array of 128 Y, 64 U and 64 V components (from each 256-byte block of the image) is correct.
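Concretely, the unpacking I have in mind looks like this. The V Y U Y byte order per 2-pixel group is my assumption about how the sensor packs the data, and the -128 bias mirrors what the library seems to do when it builds its own YUV floats, so treat both as guesses:

```c
#include <assert.h>
#include <stddef.h>

/* Unpack one 256-byte VYUV422 block into 128 Y, 64 U and 64 V floats.
   Assumes a V0 Y0 U0 Y1 byte order per 2-pixel group; swap the p[...]
   offsets if the sensor packs U first. The -128 bias is my guess at
   matching the zero-centered floats the encoder works with. */
static void unpack_vyuv422(const unsigned char *src, /* 256 bytes  */
                           float *Y,                 /* 128 floats */
                           float *U,                 /* 64 floats  */
                           float *V)                 /* 64 floats  */
{
    for (size_t g = 0; g < 64; ++g) {        /* 64 groups of 4 bytes */
        const unsigned char *p = src + 4 * g;
        V[g]         = (float)p[0] - 128.0f;
        Y[2 * g]     = (float)p[1] - 128.0f;
        U[g]         = (float)p[2] - 128.0f;
        Y[2 * g + 1] = (float)p[3] - 128.0f;
    }
}
```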
The tricky part is how to properly process that data.
In the original code of the library, when it does subsampling, it uses 256 Y components and 64 U and 64 V components, and the calls for the processing part are:
`
`
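From what I can tell, in that subsampled path the 256-float Y buffer is row-major with a stride of 16, and the four 8x8 DUs start at offsets 0, 8, 128 and 136. Here is a little sketch of how I understand those offsets (my reading of the code, so it may be wrong):

```c
#include <assert.h>

/* My understanding: the 16x16 Y buffer is row-major with a row pitch
   ("stride") of 16 floats, and it is encoded as four 8x8 DUs. Each
   DU's start offset is simply where its top-left sample lives in that
   row-major layout. */
enum { MCU_W = 16, DU = 8 };

static int du_offset(int du_row, int du_col)
{
    return du_row * DU * MCU_W + du_col * DU;
}
```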
My questions are: does the stride parameter relate to the size of the float array of components being passed?
In the original code, Y is a 256-element array and the stride used is 16; subU is a 64-element array and the stride is 8. Should the array length always be the square of the stride?
If so, how would it work when the Y array is 128 elements long and the U/V arrays are 64 elements long for each sub-block of the image?
And how should the calls to stbiw__jpg_processDU be made?
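For concreteness, here is how I currently picture a 4:2:2 block: Y would be 16 samples wide by 8 tall (128 floats, two 8x8 DUs side by side), with U and V each 8x8. This sketch is just my guess at the indexing, which is exactly what I would like confirmed or corrected:

```c
#include <assert.h>

/* My guess for a 4:2:2 MCU: Y is 16 wide x 8 tall (two 8x8 DUs side
   by side) stored row-major, so the stride would stay 16 while the
   array holds only 16*8 = 128 floats -- i.e. length = stride * rows,
   not stride * stride. */
enum { Y422_W = 16, Y422_H = 8 };

/* Offset of sample (row, col) inside DU `du` (0 = left, 1 = right). */
static int y422_offset(int du, int row, int col)
{
    return row * Y422_W + du * 8 + col;
}
```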
I have very little experience with image processing, so any help will be greatly appreciated.
Thank you so much!