
Strange inference time? #9

Open
Platorius opened this issue Apr 29, 2024 · 1 comment

Comments

@Platorius

In your paper you write that the inference time of BasicVSR++ is 0.072 seconds, and I wonder how you obtained that value. It would correspond to 13.9 FPS, and I have never seen BasicVSR++ run that fast. So how do you arrive at only 0.072 seconds for BasicVSR++?

And a second question: if this is true, then your model, at 0.427 s, is nearly 6 times slower than the already very slow BasicVSR++.

Is this really the case? Six times slower than BasicVSR++?

@GeunhyukYouk
Collaborator

GeunhyukYouk commented May 1, 2024

For a fair comparison, we measured the average inference time over 100 independent executions for all compared models. The average runtime of BasicVSR++ was 0.072 s, which is consistent with the 77 ms reported in the original BasicVSR++ paper.

[image: measured inference times of the compared models]
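For anyone who wants to reproduce this kind of measurement, below is a minimal sketch of the timing protocol described above (averaging over 100 independent executions with CUDA synchronization). It assumes a generic PyTorch model on a GPU; `model`, `inputs`, and `average_inference_time` are placeholder names for illustration, not the actual FMA-Net or BasicVSR++ code.

```python
# Minimal timing sketch, assuming a PyTorch model and pre-loaded inputs on a CUDA GPU.
# `model` and `inputs` are placeholders, not the actual FMA-Net / BasicVSR++ interfaces.
import time
import torch

def average_inference_time(model, inputs, n_runs=100, n_warmup=10):
    """Return the mean forward-pass time in seconds over n_runs executions."""
    model.eval()
    with torch.no_grad():
        # Warm-up iterations so one-time CUDA initialization does not skew the average.
        for _ in range(n_warmup):
            model(inputs)
        torch.cuda.synchronize()

        total = 0.0
        for _ in range(n_runs):
            start = time.perf_counter()
            model(inputs)
            # Block until all queued GPU work finishes before stopping the clock.
            torch.cuda.synchronize()
            total += time.perf_counter() - start
    return total / n_runs
```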

As you mentioned, the inference time of our FMA-Net is 0.427 s, approximately 6 times slower than BasicVSR++. This is because BasicVSR++ is implemented with only fast, lightweight convolution and warping operations, whereas our model, unlike a pure VSR method, requires global feature mapping for deblurring, which makes it relatively slower.

Also, please keep in mind that inference time may vary depending on the environment (GPU, OS, etc.).
