In your paper you wrote that the inference time of BasicVSR++ is 0.072 seconds, and I wonder how you obtained this value. That would correspond to 13.9 FPS, and I have never seen BasicVSR++ run that fast. So how did you arrive at only 0.072 seconds for BasicVSR++?
My second question: if this is true, then your model at 0.427s is nearly 6 times slower than the already very slow BasicVSR++.
Is this really the case? 6 times slower than BasicVSR++?
For a fair comparison, we measured the average inference time through 100 independent executions for all compared models. The average runtime of BasicVSR++ was 0.072s, which is consistent with the 77ms claimed in the paper.
As you mentioned, the inference time of our FMA-Net is 0.427s, approximately 6 times slower than BasicVSR++. This is because BasicVSR++ is implemented with fast and lightweight convolution and warping operations only. However, unlike VSR, global feature mapping is required for deblurring, making our model relatively slower.
Also, please note that the inference time may vary depending on the environment (GPU, OS, etc.).
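The authors' exact benchmarking script is not shown in this thread, but the methodology they describe (averaging over 100 independent executions, with results depending on the environment) can be sketched as follows. The function names and the toy workload here are illustrative, not the paper's code; for GPU models you would additionally need to synchronize the device (e.g. `torch.cuda.synchronize()`) before reading the clock, since CUDA kernels are launched asynchronously.

```python
import time

def benchmark(fn, n_runs=100, n_warmup=10):
    """Return the average wall-clock time of fn over n_runs calls.

    A few warmup calls are made first so that one-time costs
    (allocation, JIT/cuDNN autotuning, cache fills) do not inflate
    the measured average.
    """
    for _ in range(n_warmup):
        fn()
    start = time.perf_counter()
    for _ in range(n_runs):
        fn()
    # average seconds per call over the 100 timed runs
    return (time.perf_counter() - start) / n_runs

# toy stand-in for a model's forward pass
avg = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"average inference time: {avg:.6f}s")
```

Averaging over many runs smooths out scheduler jitter, but the absolute numbers remain hardware-specific, which is why the 0.072s figure may not reproduce on a different GPU.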