
Original vs Keras performance #1

Open · rauldiaz opened this issue May 14, 2018 · 3 comments

@rauldiaz
Hi,

I was wondering why the performance of your implementation is lower than that of the original repo. Do you have any intuition about why this happens? I also made my own port of PointNet to Keras a few months ago, and it can't get beyond 82% accuracy on the validation set.

Thanks!

@garyli1019 (Owner)

Hi there, I don't remember exactly what the differences were, but there are many possible factors that could affect the accuracy: the way the training data is fed (as a 2D, 3D, or 4D array, etc.), the batch size, the learning rate decay, the optimizer, and so on. I tried my best to match the original repo, but Keras still has some limitations because of its simplicity (for example, I couldn't save my trained model with Keras's built-in saving feature because of the Lambda layer; I'm not sure whether they have fixed that yet).
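
To make those knobs concrete, here is a minimal sketch of where they show up in a Keras training setup. The 0.001 base rate, the 0.7 decay factor, the 20-epoch step, and the `train_points`/`train_labels` arrays are assumptions loosely based on the original PointNet defaults, not this repo's exact settings:

```python
# Sketch of the training knobs mentioned above: optimizer, batch size,
# and learning-rate decay. Values are assumptions, not this repo's settings.
from keras.optimizers import Adam
from keras.callbacks import LearningRateScheduler

def lr_schedule(epoch):
    # Exponential decay: multiply the base learning rate by 0.7 every 20 epochs.
    return 0.001 * (0.7 ** (epoch // 20))

# model.compile(optimizer=Adam(lr=0.001),
#               loss='categorical_crossentropy',
#               metrics=['accuracy'])
# model.fit(train_points, train_labels,          # e.g. shape (n_samples, n_points, 3)
#           batch_size=32,
#           epochs=100,
#           validation_data=(test_points, test_labels),
#           callbacks=[LearningRateScheduler(lr_schedule)])
```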

@oldercoder

Hi Gary, to save the model I set save_weights_only=True in ModelCheckpoint and called .save_weights on the Model; that works for me. By the way, after the model is saved, how do you run predictions on a new dataset with it?
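
For what it's worth, here is a minimal sketch of that weights-only workflow plus prediction on new data. `build_pointnet_model`, the data arrays, and the file name are placeholders rather than names from this repo:

```python
# Sketch of a weights-only save/restore flow followed by prediction.
import numpy as np
from keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint('pointnet_weights.h5',
                             save_best_only=True,
                             save_weights_only=True)  # skips full-model serialization (sidesteps the Lambda-layer issue)

# model.fit(train_points, train_labels, callbacks=[checkpoint], ...)
# model.save_weights('pointnet_weights.h5')   # or save manually after training

# To predict on a new dataset: rebuild the same architecture, load the
# saved weights, and call predict on points shaped like the training input.
model = build_pointnet_model()                 # placeholder for the repo's model-building code
model.load_weights('pointnet_weights.h5')
preds = model.predict(new_points)              # e.g. new_points has shape (n_samples, n_points, 3)
predicted_classes = np.argmax(preds, axis=-1)
```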

@Sharknado888

Hey guys, I got better results with the same code after normalizing the input data into a unit sphere, as described in the paper.
Thanks, garyli1019, for the code.
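
For reference, a minimal sketch of that normalization: center each point cloud on its centroid and divide by the distance of the furthest point, which is the usual PointNet preprocessing. The function name is my own:

```python
import numpy as np

def normalize_to_unit_sphere(points):
    """Center a point cloud at the origin and scale it to fit inside the unit sphere.

    points: array of shape (n_points, 3).
    """
    centered = points - points.mean(axis=0)
    furthest = np.max(np.linalg.norm(centered, axis=1))
    return centered / furthest
```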
