
KvN accuracy computation #10

Open
achiatti opened this issue Apr 17, 2020 · 0 comments

@achiatti

Thanks for providing these great data and resources.

I have tried to better understand how the KvN accuracy is first computed in your script ./image-matchin/evaluateModel.m and then reused to choose between K-net and N-net in your second script evaluateTwoStage.m.

The specific portion of code I am confused about is:

bestKnownNovelAcc = 0;
bestKnownNovelThreshold = 0;
for threshold = 0:0.01:1.2
    knownNovelAcc = sum((predNnDist > threshold) == ~testIsKnownObj)/length(testIsKnownObj);
    if bestKnownNovelAcc < knownNovelAcc
        bestKnownNovelAcc = knownNovelAcc;
        bestKnownNovelThreshold = threshold;
    end
end
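
For context, my reading of how the selected threshold is then used in evaluateTwoStage.m is roughly the following. This is my own paraphrase, not code from the repository, and the variable names (predictIsNovel, useNNet, useKNet) are assumptions:

% My paraphrase (names assumed): each test image is routed to N-net when
% its nearest-neighbour distance exceeds the tuned threshold, and to
% K-net otherwise.
predictIsNovel = predNnDist > bestKnownNovelThreshold;  % logical routing mask
useNNet = predictIsNovel;    % classify these samples with N-net (novel objects)
useKNet = ~predictIsNovel;   % classify the rest with K-net (known objects)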

My understanding is that the optimal threshold is chosen to maximise an accuracy metric computed against the ground-truth labels of the test set (stored in testIsKnownObj).
Isn't that equivalent to assuming that you already know whether each observed/grasped image in the test set is known or novel, even before predicting its class with either of the two networks? Am I missing something here?
How would one then decide between K-net and N-net (i.e., conclude the so-called "recollection stage" in your paper) without access to the ground-truth labels?
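
To make the question concrete, this is the kind of procedure I would have expected: tune the threshold on a held-out validation split and only use the test labels to score the frozen threshold. A rough sketch; valNnDist and valIsKnownObj are assumed names for a validation split that does not exist in the current code:

% Sketch of what I would have expected (valNnDist / valIsKnownObj are
% assumed names for a held-out validation split, not part of the repo):
bestValAcc = 0;
bestThreshold = 0;
for threshold = 0:0.01:1.2
    valAcc = sum((valNnDist > threshold) == ~valIsKnownObj)/length(valIsKnownObj);
    if bestValAcc < valAcc
        bestValAcc = valAcc;
        bestThreshold = threshold;
    end
end
% The test labels are then only used to evaluate the frozen threshold:
testKnownNovelAcc = sum((predNnDist > bestThreshold) == ~testIsKnownObj)/length(testIsKnownObj);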
