
Calculation of AUC metric #22

Open
weilonghu opened this issue Dec 24, 2020 · 0 comments
weilonghu commented Dec 24, 2020

import numpy as np

def ctr_eval(sess, model, data, batch_size):
    start = 0
    auc_list = []
    f1_list = []
    # Walk the test set in fixed-size batches (a trailing partial batch,
    # if any, is dropped) and record each batch's AUC and F1.
    while start + batch_size <= data.shape[0]:
        auc, f1 = model.eval(sess, get_feed_dict(model, data, start, start + batch_size))
        auc_list.append(auc)
        f1_list.append(f1)
        start += batch_size
    # The final metrics are the arithmetic mean of the per-batch values.
    return float(np.mean(auc_list)), float(np.mean(f1_list))

In this function, you compute AUC on each batch separately and report the mean of those per-batch values as the final AUC. But as far as I know, AUC has to be computed from a global ranking of the predictions over the whole test set, so the mean of per-batch AUCs is generally not the same quantity. Can you explain this choice?
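To illustrate the concern, here is a small self-contained sketch (the `auc` helper and the toy data are mine, not from this repository) using the Mann-Whitney formulation of AUC, i.e. the fraction of positive/negative pairs ranked correctly. On this example every batch is perfectly ranked, so the batch average is 1.0, while the global AUC over the same predictions is only 2/3:

```python
def auc(labels, scores):
    """AUC as the probability that a randomly chosen positive example
    outranks a randomly chosen negative one (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]

# Global AUC over the whole "test set": 6 of the 9 pos/neg pairs are
# ordered correctly, so this is 2/3.
global_auc = auc(labels, scores)

# Per-batch AUCs with batch_size = 2, then averaged: each batch happens
# to be perfectly ranked, so the average is 1.0 -- a different number.
batch_aucs = [auc(labels[i:i + 2], scores[i:i + 2]) for i in range(0, 6, 2)]
batch_avg = sum(batch_aucs) / len(batch_aucs)
```

The gap comes from cross-batch pairs (e.g. the positive scored 0.3 versus the negative scored 0.8), which the batch-averaged estimate never compares.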
