Some questions about training Inception-ResNet #11
Hi, the dataset is randomly split into 80% for training and 20% for testing, within each identity.
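For readers who want to reproduce this kind of split, here is a minimal sketch of a random per-identity 80/20 split. The function name `split_by_identity`, the sample format, and the seed are illustrative assumptions, not the repo's actual code:

```python
import random
from collections import defaultdict

def split_by_identity(samples, train_ratio=0.8, seed=0):
    """Randomly split (image_path, identity) pairs so that each identity
    contributes ~train_ratio of its images to the training set."""
    rng = random.Random(seed)
    by_id = defaultdict(list)
    for path, identity in samples:
        by_id[identity].append(path)
    train, test = [], []
    for identity, paths in by_id.items():
        rng.shuffle(paths)                      # random split per identity
        cut = int(len(paths) * train_ratio)     # 80% cutoff
        train += [(p, identity) for p in paths[:cut]]
        test += [(p, identity) for p in paths[cut:]]
    return train, test
```

Every identity then appears in both splits, while each individual image lands in exactly one of them.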
Thanks a lot for your answer. Could you please provide the WebFace dataset used to train Inception-ResNet in the paper?
Have a look at this thread.
Thanks a lot for your answer. Is the image size of the WebFace dataset you used when training Inception-ResNet 112×112? Looking at the exp_setting.sh file, the dataset file name is written as casia-112x112-protected-train.
Yes, we use 112×112 for both training and testing.
Thanks a lot for your answer. The WebFace images I downloaded are 250×250. So I would like to ask: did you preprocess the dataset, or did I download the wrong one?
Apologies, I lost track of where I downloaded the dataset. |
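For reference, here is a sketch of how a 250×250 CASIA-WebFace image could be center-cropped and then resized to the 112×112 network input. `center_crop_box` and the crop size 224 are hypothetical; the paper's actual preprocessing (e.g. alignment-based cropping) may differ:

```python
def center_crop_box(width, height, crop):
    """Return a (left, upper, right, lower) box for a centered square crop,
    in the coordinate convention used by PIL's Image.crop."""
    left = (width - crop) // 2
    upper = (height - crop) // 2
    return (left, upper, left + crop, upper + crop)

# Hypothetical usage with Pillow (not necessarily the repo's pipeline):
#   from PIL import Image
#   img = Image.open("00001.jpg")                   # 250x250 WebFace image
#   img = img.crop(center_crop_box(250, 250, 224))  # drop the outer border
#   img = img.resize((112, 112))                    # network input size
```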
Thanks a lot for your answer. I have one more question: does the script 'Unlearnable-Examples-main/scripts/face/min-min-noise/train_clean.sh' train the Inception-ResNet model on clean face images?
Yes, train_clean.sh is the baseline, trained on the original clean dataset.
Thanks a lot for your answer. If I want to train the Inception-ResNet model on the clean face dataset following your settings, do I need to uncomment the following code? https://github.com/HanxunH/Unlearnable-Examples/blob/main/main.py#L102 (lines L102-L116)
This part evaluates face verification on LFW; enable it depending on whether you need those results.
Hi, thanks for open-sourcing this. I am very interested in your paper and have some questions. When you use the WebFace dataset to train the Inception-ResNet network, how are the training set and test set divided?