
testing based on random weights also giving the same results as in paper #86

Open
Akudavale opened this issue Jan 18, 2024 · 9 comments


Akudavale commented Jan 18, 2024

Hello all and @mbrossar,
Initially I deleted the iekfnets.p file in temp and set the flags so that only training runs, i.e.
read_data = 0
train_filter = 1
test_filter = 0
results_filter = 0
With these settings the code's save function saved the randomly initialized weights of the model. I did not train the model for even a single epoch before the state_dict() was saved (the saving is already implemented in the code).
Later, when I tested this random-weights model with test_filter = 1 and results_filter = 1, I obtained the same results as published in the paper. How is that possible?

Without training, how can anyone get the state-of-the-art results reported in the paper? There the model was trained for up to 400 epochs (as mentioned in the code). I later also trained the model for 400 epochs to cross-check that nothing changes, and indeed the results are the same with and without training.

I would appreciate it if someone could explain this in detail, or point out what I have misunderstood in the code.
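To rule out a mix-up on my side, I also thought of comparing the checkpoint saved before training with the one saved after training. A minimal sketch of such a check (the before/after file names are hypothetical, and I am assuming temp/iekfnets.p holds a plain state_dict saved with torch.save):

```python
import torch

# Compare two saved checkpoints and report the largest absolute change in any
# parameter; a value near zero would mean training did not move the weights at all.
def max_weight_change(path_before, path_after):
    before = torch.load(path_before, map_location="cpu")
    after = torch.load(path_after, map_location="cpu")
    return max((after[k] - before[k]).abs().max().item() for k in before)

# e.g. copy temp/iekfnets.p to iekfnets_random.p before training, train 400 epochs,
# then: print(max_weight_change("iekfnets_random.p", "temp/iekfnets.p"))
```

A large change here combined with identical test results would suggest the filter, not the network, dominates the output.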

@rendoudoudou

Is the data you are using KITTI? If it is, I guess the results with and without training are almost the same because the initial values of cov_lat and cov_up in the code are very good and tailor-made for KITTI. After I switched to my own dataset, once training reached 3000 epochs the resulting trajectory was clearly different from the untrained one.


Akudavale commented Jan 22, 2024

@rendoudoudou Yes, I am using the KITTI dataset. If I want to test the model after adding more convolution layers, how should I do that?
Also, could you please tell me which dataset you used? If it is a custom dataset, how did you record it?
I am working on a project to improve the model's accuracy, or to analyse the paper, by adding more conv layers in the MesNet class (roughly along the lines of the sketch at the end of this comment).

And apart from cov_lat and cov_up, what other initial values play a major role in the convergence of the model?
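Here is the rough sketch I have in mind; the channel sizes, kernel widths, and output dimension are placeholders I made up, not the values actually used in utils_torch_filter.py:

```python
import torch
import torch.nn as nn

# Illustrative only: a MesNet-like 1D CNN over IMU channels with one extra conv layer.
class MesNetDeeper(nn.Module):
    def __init__(self, in_channels=6, out_dim=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2),   # the extra layer
            nn.ReLU(),
            nn.Conv1d(32, out_dim, kernel_size=5, padding=2),
        )

    def forward(self, u):  # u: (batch, channels, time) window of IMU readings
        return self.cnn(u)

# x = torch.randn(1, 6, 100); print(MesNetDeeper()(x).shape)  # -> (1, 2, 100)
```

Would adding a layer in this way be a reasonable approach to test?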

@rendoudoudou

Sorry, I don't know CNNs very well, so I don't know how to add more convolutional layers. I guess they would be added in the class MesNet in utils_torch_filter.py, but I have not tried it; I am using the convolutional layers provided by the author of the code.
The dataset I used was collected by a colleague, and I regret that I cannot make it public. You could adjust the initial values of cov_lat and cov_up in the code so that they are not optimal; then training may actually have an effect. Or you could collect inertial navigation data yourself for training and testing.
Regarding increasing the number of convolutional layers, I don't know this area, and I'm sorry I can't offer my thoughts.

@rendoudoudou

I think the initial parameters in the class KITTIParameters in main_kitti.py are very important. The parameters provided by the author are tailored to the KITTI dataset. The most important of them are cov_lat and cov_up, because these two parameters are related to the covariance matrix N_n that the author trains the network to adapt.
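Roughly, as I understand it (this is only my sketch of the idea, not the repository's exact code), cov_lat and cov_up set the base covariance of the zero lateral/vertical velocity pseudo-measurement, and the network output would only rescale it:

```python
import numpy as np

# Sketch only: a 2x2 measurement covariance built from cov_lat and cov_up,
# optionally rescaled by a network output per axis.
def measurement_cov(cov_lat, cov_up, net_scale=(1.0, 1.0)):
    return np.diag([cov_lat * net_scale[0], cov_up * net_scale[1]])

# With net_scale = (1, 1) you recover the hand-tuned prior, so well-chosen
# cov_lat / cov_up can already behave much like a trained network on KITTI.
# N_n = measurement_cov(cov_lat=0.2, cov_up=300.0)   # numbers are placeholders
```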


Akudavale commented Jan 22, 2024

@rendoudoudou Thanks for sharing your experience.
When you used your own dataset, which parts of the code did you change, or did you write a custom dataset class? And when recording the IMU data, was it in the same format as the OXTS data in KITTI?

One more question: even if I change the initial values and run the code, at some point the tuning will converge back to the same values as in the paper. Is there any other way I can test the code on the same dataset?

@rendoudoudou

I did not modify the author's code. I just converted my own data to the pickle file format by following the author's read_data function; this conversion does not change the IMU values.
Regarding your other question, I think adjusting the initial values can only make the trained and untrained results on the KITTI dataset differ; it will not give better results. Since the initial values given by the author are already the best, using them should already give the best results for the KITTI data, which is the result in the paper. In addition, you can try different numbers of training epochs and may get different results.
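A minimal sketch of that kind of conversion (the dictionary keys here are illustrative only; the layout you actually need is whatever the author's read_data function produces, so mirror that instead):

```python
import pickle
import numpy as np

# Illustrative only: dump one sequence of your own data to a pickle file.
# The key names below are made up; match the structure produced by read_data.
def dump_sequence(path, t, imu, p_gt, ang_gt):
    data = {
        "t": np.asarray(t),            # timestamps (s)
        "u": np.asarray(imu),          # IMU inputs: gyro + accel, one row per timestamp
        "p_gt": np.asarray(p_gt),      # ground-truth positions
        "ang_gt": np.asarray(ang_gt),  # ground-truth orientations
    }
    with open(path, "wb") as f:
        pickle.dump(data, f)

# dump_sequence("data/my_sequence.p", t, imu, p_gt, ang_gt)
```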

@Akudavale

@rendoudoudou Thank you.
I have also observed that the initial values in main_kitti.py and in utils_numpy_filter.py are completely different. Can you briefly explain why that is, and what the significance of having two different sets of initial values is?

Thank you.

@rendoudoudou

The initial parameters are indeed different in these two places. In my understanding, the initial parameters in utils_numpy_filter.py have no effect and you can ignore them, because the initial parameters in main_kitti.py overwrite those in utils_numpy_filter.py.
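A toy illustration of what I mean by "overwrite" (class and attribute names simplified, numbers are placeholders):

```python
# Stand-in for the defaults that live next to the filter in utils_numpy_filter.py.
class FilterDefaults:
    cov_lat = 1.0
    cov_up = 1.0

# Stand-in for KITTIParameters in main_kitti.py; its values replace the defaults
# before the filter runs, so the ones in utils_numpy_filter.py never take effect.
class KITTIParams(FilterDefaults):
    cov_lat = 0.2     # placeholder numbers
    cov_up = 300.0

print(KITTIParams.cov_lat, KITTIParams.cov_up)  # -> 0.2 300.0
```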

@Akudavale

@rendoudoudou Thank you so much for your insights, they are very helpful.

If I have further questions I will email you. Could you please send a short message to kudavale3105@gmail.com, so that I can reach you if any further doubts come up?

Thank you.
