Hi there,
Is there an obvious way to extract the prediction probabilities, rather than just the number of correct labels vs. the number of ground-truth labels? I have checked print_acc.py and eval_prob_adaptive.py but I can't see anything obvious. Any help would be super appreciated.
Thanks.
I am also interested in how to get the prediction probability (i.e., the prediction confidence). From the current eval_prob_adaptive function, it doesn't seem possible to extract it directly.
I tried computing a softmax over the per-class scores, but in my case the resulting probability is close to 1/n for each of the n classes.
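For what it's worth, here is a minimal sketch of what I tried. This is an assumption about how the repo's per-class scores could be turned into probabilities, not code from the repo itself: I take the per-class error estimates (lower = better), negate them, and apply a softmax. The `temperature` parameter is my own addition: because the raw errors differ only slightly across classes, an unscaled softmax comes out near-uniform, which matches the ~1/n behavior I described above.

```python
import numpy as np

def class_probabilities(errors, temperature=1.0):
    """Convert per-class error estimates (lower = better) into a
    softmax distribution over classes.

    `temperature` is a hypothetical rescaling knob: per-class errors
    often differ by only a small amount, so without rescaling the
    softmax output can be close to uniform.
    """
    scores = -np.asarray(errors, dtype=np.float64) / temperature
    scores -= scores.max()  # shift for numerical stability
    exp = np.exp(scores)
    return exp / exp.sum()

# Hypothetical per-class mean errors for a 3-class problem.
errors = [0.412, 0.405, 0.431]
print(class_probabilities(errors))        # near-uniform at temperature=1.0
print(class_probabilities(errors, 0.01))  # much sharper at a small temperature
```

Note that the resulting "probabilities" are only as calibrated as the temperature you pick; a small temperature sharpens the distribution but doesn't make the confidence estimates any more trustworthy.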