Adding GPU acceleration to encode_jpeg #8391
base: main
Conversation
Summary: I'm adding GPU support to the existing torchvision.io.encode_jpeg function. If the input tensors are on the GPU, the CUDA version will be used, and the CPU version otherwise. Additionally, I'm adding a new function torchvision.io.encode_jpegs (plural) which uses a fused kernel and may be faster than successive calls to the singular version, which incurs kernel launch overhead for each call. If it's alright, I'll be happy to refactor decode_jpeg to follow this convention in a follow-up PR.
Test Plan:
1. pytest test -vvv
2. ufmt format torchvision
3. flake8 torchvision
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/8391
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures — As of commit a799c53 with merge base 5181a85, the following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Thanks a lot @deekay42. I made another pass and this looks good!
Hi @deekay42,
I work on the video decoder in C++ so @NicolasHug thought that my comments may be useful for this PR.
I hope you find my comments useful, and feel free to push back.
I'm also curious whether you did any benchmarking to see how much speedup we get from hardware decoding or encoding?
ImageReadMode mode,
torch::Device device);

C10_EXPORT std::vector<torch::Tensor> encode_jpeg_cuda(
Out of curiosity, why does this interface take in and return a std::vector instead of a single stacked Tensor? I am asking because in the TorchMedia decoder, our users wanted a single stacked tensor for all the video frames and doing the torch.stack() operation on a python list of tensors was time-consuming.
Just chiming in, these are good questions.
We need to return a vector instead of a stacked tensor because the encoded JPEG bytes of each input image aren't necessarily of the same length, and by definition all the tensors in a stack must have the same shape, so we couldn't fit them in one stacked tensor.
As for why we're using a vector for the input: for a similar reason, the input images may not all be of the same shape. It is possible that users will want to pass a stacked tensor of images as input, though, e.g. if the stack is the output of a (generative?) model. So eventually we should allow the Python API (image.io.decode_jpeg()) to support a stacked tensor of shape NCHW, and possibly convert that internally to a vec of tensors of shape CHW so it can be passed to this C++ function. But that can be left for another PR and I didn't want to bother Dominik with that right now.
Nicolas beat me to it, but yes, the vector is needed because the input and output tensor shapes are heterogeneous.
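To make the point above concrete, here is a minimal, hypothetical sketch in plain Python (no torch): encoded JPEG payloads generally differ in length depending on image content and size, and a stack by definition requires one common shape, so a ragged container like a list/vector is the natural return type. The byte counts below are made up for illustration.

```python
# Hypothetical stand-ins for two encoded JPEG payloads: real encoders
# produce bitstreams whose lengths depend on image content and size.
encoded = [bytes(1234), bytes(987)]

# A stacked tensor would need every element to share one shape; here the
# lengths differ, so only a ragged container (a list/vector) can hold them.
lengths = [len(payload) for payload in encoded]
print(lengths)            # [1234, 987]
print(len(set(lengths)))  # 2 distinct shapes -> cannot stack
```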
#include <c10/cuda/CUDAGuard.h>
#include <nvjpeg.h>

nvjpegHandle_t nvjpeg_handle = nullptr;
Nit: perhaps rename this to g_nvjpeg_handle so it is clear this is a global variable? Same for nvjpeg_handle_creation_flag below.
"The number of channels should be 3, got: ",
image.size(0));

// nvjpeg requires images to be contiguous
Nit: add a citation link if you can.
ImageReadMode mode,
torch::Device device);

C10_EXPORT std::vector<torch::Tensor> encode_jpeg_cuda(
Nit: perhaps the name itself should indicate this is a plurality of images, like maybe encode_jpegs_cuda?
C10_EXPORT std::vector<torch::Tensor> encode_jpeg_cuda(
    const std::vector<torch::Tensor>& images,
    const int64_t quality);
Nit: add a comment about quality. Is higher better or lower? What is the range/min/max here?
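For reference, JPEG quality factors conventionally range from 1 to 100, with higher values meaning better fidelity and a larger bitstream. A docstring plus a range check could look like this hypothetical Python sketch (the name and shape are illustrative, not torchvision's actual code, which would express this as a comment and TORCH_CHECK in C++):

```python
def validate_jpeg_quality(quality: int) -> int:
    """Hypothetical validator: JPEG quality is conventionally an integer
    in [1, 100]; higher means better fidelity and larger encoded output."""
    if not 1 <= quality <= 100:
        raise ValueError(f"quality must be in [1, 100], got {quality}")
    return quality
```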
for (int c = 0; c < channels; c++) {
  target_image.channel[c] = src_image[c].data_ptr<uint8_t>();
  // this is why we need contiguous tensors
Nit: maybe add a CHECK here to make sure the tensor is contiguous?
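The check being asked for boils down to comparing a tensor's actual strides against the C-contiguous strides implied by its shape. Here is a hypothetical pure-Python sketch of that invariant; the C++ code would simply use Tensor::is_contiguous() inside a TORCH_CHECK rather than computing this by hand.

```python
def is_c_contiguous(shape, strides):
    # Hypothetical helper: in a C-contiguous layout, each dimension's
    # stride equals the product of all later dimension sizes.
    expected = []
    running = 1
    for size in reversed(shape):
        expected.append(running)
        running *= size
    return list(strides) == list(reversed(expected))

# A CHW image of shape (3, 4, 5) stored contiguously has strides (20, 5, 1).
print(is_c_contiguous((3, 4, 5), (20, 5, 1)))   # True
print(is_c_contiguous((3, 4, 5), (1, 3, 12)))   # non-contiguous layout: False
```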
}
}

torch::Tensor encode_single_jpeg(
Nit: put this in an anonymous namespace since this function is not public?
}
}

torch::Tensor encode_single_jpeg(
Nit: this declaration can be omitted entirely if you move the implementation of this function above in an anonymous namespace, right?
getStreamState);

// Synchronize the stream to ensure that the encoded image is ready
cudaError_t syncState = cudaStreamSynchronize(stream);
I don't know the answer to this question and I am curious if you know -- is there a way to just do a single streamSynchronize per batch instead of per image? That way we can pipeline some work for some extra speedup when handling a batch of images.
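The control flow the question is pointing at would launch all per-image encodes asynchronously on one stream and synchronize once at the end, rather than once per image. Whether nvjpeg permits deferring the retrieve step this way is exactly the open question; a hypothetical Python sketch of just the shape of that pipeline (with fake enqueue/synchronize callables standing in for stream operations) might look like:

```python
def encode_batch(images, enqueue, synchronize):
    # Hypothetical pipeline: `enqueue` launches asynchronous work for one
    # image; `synchronize` blocks once for the whole stream, so there is
    # one sync per batch instead of one per image.
    tickets = [enqueue(img) for img in images]
    synchronize()
    return tickets

# Simulate with counters to show synchronize runs exactly once per batch.
calls = {"enqueue": 0, "sync": 0}
def fake_enqueue(img):
    calls["enqueue"] += 1
    return img
def fake_sync():
    calls["sync"] += 1

encode_batch(["a", "b", "c"], fake_enqueue, fake_sync)
print(calls)  # {'enqueue': 3, 'sync': 1}
```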
size_t length;
nvjpegStatus_t getStreamState = nvjpegEncodeRetrieveBitstreamDevice(
    nvjpeg_handle, nv_enc_state, NULL, &length, stream);
TORCH_CHECK(
Nit: maybe CHECK that length > 0?