Great work! However, the decoding time is about 12s for a 1080p image. The bottleneck lies in calling Hyperprior.decompress_forward, especially in prior_entropy_model.decompress (~10s).
Is there any plan to optimize the decoding time?
Yes, that definitely needs to be optimized for practical purposes. Do you have a detailed profile of the execution times for the decoding process?
The model architecture should be reasonably well optimized already, assuming you are running on a GPU. Using TorchScript to JIT-compile functions in the forward pass should yield a small improvement.
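As a quick illustration of the TorchScript suggestion, a hot scalar-math function in the forward pass can be scripted like this (the function name and formula are purely illustrative, not this repo's code):

```python
import torch

@torch.jit.script
def gaussian_cdf(x: torch.Tensor, mean: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Standard Gaussian CDF, the kind of elementwise math that shows up when
    # building PMF tables for entropy coding; scripting fuses it into one
    # compiled graph instead of dispatching each op from Python.
    return 0.5 * (1.0 + torch.erf((x - mean) / (scale * 1.4142135623730951)))
```

The gains from scripting are typically modest for convolution-dominated models, since the large ops already run in optimized kernels; it mainly trims Python dispatch overhead around small elementwise operations.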
The bottleneck probably lies in the actual entropy coding/decoding process. The current implementation is a vectorized rANS coder written in numpy, which is relatively slow and also carries a small bit overhead: the vectorized 'heads' must be initialized to some default value, which takes extra bits to store. Rewriting this in a lower-level language (as TF Compression and Fabian Mentzer's torchac do) would improve encoding/decoding times significantly. This is something I'd like to get working eventually if I can find the time.
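For anyone curious what the coder is doing under the hood, here is a minimal scalar rANS sketch. It is illustrative only, not the repo's vectorized numpy implementation: it uses a toy static frequency table and lets Python's arbitrary-precision integers stand in for stream renormalization.

```python
# Toy static model: symbol frequencies summing to the precision M.
FREQ = {"A": 2, "B": 1, "C": 1}
M = sum(FREQ.values())

# Cumulative frequency table: CUM[s] is the start of s's slot range.
CUM = {}
_c = 0
for _s, _f in FREQ.items():
    CUM[_s] = _c
    _c += _f

def encode(symbols):
    """Fold the message into a single integer state (processed in reverse)."""
    x = 0
    for s in reversed(symbols):
        f, c = FREQ[s], CUM[s]
        x = (x // f) * M + c + (x % f)
    return x

def decode(x, n):
    """Pop n symbols back out of the state, recovering the original order."""
    out = []
    for _ in range(n):
        slot = x % M
        # Find the symbol whose slot range contains `slot`.
        s = next(k for k in FREQ if CUM[k] <= slot < CUM[k] + FREQ[k])
        f, c = FREQ[s], CUM[s]
        x = f * (x // M) + slot - c
        out.append(s)
    return "".join(out)

msg = "ABCA"
assert decode(encode(msg), len(msg)) == msg
```

A real implementation renormalizes the state into fixed-width words and, as noted above, vectorizes many interleaved coder states, each of which must start from a default value whose bits end up in the stream.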