
Problem while testing real data #13

Open
ung264 opened this issue Nov 29, 2020 · 1 comment

Comments

ung264 commented Nov 29, 2020

Hello @MathGaron ,

First of all, thanks for sharing your great work. The performance looks very promising. I'm trying to reproduce the results from the paper, but I'm getting a large translation error on the real data.

I trained a model for the clock. The training and validation loss curves look fine; the final training and validation MSE losses are both around 0.012. I then adapted the inference script from https://github.com/lvsn/deeptracking and used it to test the images in the directory clock_occlusion_0. Following the evaluation procedure in the paper, I got a large translation error of 23 mm. The rotation error seems fine at 2.7 degrees. I found that the large translation error mainly comes from Tz: the training MSE loss for Tz is 0.044, while the training MSE losses for Tx and Ty are only about 0.003. I'm wondering whether that's normal, and I have a few questions:
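For context, this is roughly how I compute the pose errors (a minimal sketch with my own helper; I'm assuming 4x4 homogeneous pose matrices, which may differ from the actual evaluation script):

```python
import numpy as np

def pose_errors(T_pred, T_gt):
    """Translation error (in the unit of the poses) and rotation error
    (in degrees) between two 4x4 homogeneous pose matrices."""
    # Euclidean distance between the translation components.
    t_err = np.linalg.norm(T_pred[:3, 3] - T_gt[:3, 3])
    # Angle of the relative rotation = geodesic distance on SO(3).
    R_rel = T_pred[:3, :3].T @ T_gt[:3, :3]
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_angle))
    return t_err, r_err

# Example: a prediction off by 23 mm along z (poses in metres).
T_gt = np.eye(4)
T_pred = np.eye(4)
T_pred[2, 3] = 0.023
print(pose_errors(T_pred, T_gt))  # translation error 0.023, rotation error 0.0
```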

  1. Could you share what training and validation MSE losses are reasonable? Is 0.012 good enough for training?
  2. Do you also see a much larger Tz MSE loss than for Tx and Ty?
  3. The network architecture in deeptrack_net.py seems different from the one in the paper: filter_size_1 is 64, not 96. Could that cause the performance gap?
  4. Do you have any ideas about possible reasons for the large Tz error?

Looking forward to your reply and thanks in advance!
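For reference, this is how I log the per-axis loss breakdown mentioned above (a sketch; the component names and their ordering are my assumption, not from the repo):

```python
import numpy as np

def per_component_mse(pred, target, names=("Tx", "Ty", "Tz", "Rx", "Ry", "Rz")):
    """MSE of each pose component over a batch of (N, 6) predictions."""
    err = (np.asarray(pred) - np.asarray(target)) ** 2
    return dict(zip(names, err.mean(axis=0)))

# Example: an error of 0.2 on Tz only gives a Tz MSE of 0.04.
pred = [[0.0, 0.0, 0.2, 0.0, 0.0, 0.0]]
target = [[0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
print(per_component_mse(pred, target))
```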

@chengm0-0


Hi, how did you create your dataset?
