
For each unlabeled image, is it input to the different models with different data augmentations? #6

Open
KevinYu17 opened this issue Nov 7, 2023 · 0 comments

Comments

@KevinYu17

I am confused about the contrastive learning part. My current understanding is that for models A and B, the same unlabeled image is transformed with two different data augmentations, A1 and B1, and the resulting views are fed into models A and B respectively for the subsequent similarity comparison. Is that right? If so, how is the L-similarity computed between the differently augmented views? If one view is rotated or flipped, for example, couldn't the loss be high?

If my understanding is wrong, could you explain the correct procedure?
Thanks a lot.
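
For reference, here is a minimal sketch of how I currently picture the two-branch setup. The model names, the augmentation pipelines `aug_a`/`aug_b`, and the use of cosine similarity are my own assumptions for illustration, not taken from this repository's code:

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

# Hypothetical augmentation pipelines A1 and B1 for the two branches.
aug_a = T.Compose([T.RandomResizedCrop(224), T.RandomHorizontalFlip(), T.ToTensor()])
aug_b = T.Compose([T.RandomResizedCrop(224), T.ColorJitter(0.4, 0.4, 0.4), T.ToTensor()])

def similarity_loss(model_a, model_b, image):
    """Feed two augmented views of the same (PIL) image to models A and B,
    then compare their feature vectors rather than the raw pixels."""
    view_a = aug_a(image).unsqueeze(0)   # view for model A, shape (1, C, H, W)
    view_b = aug_b(image).unsqueeze(0)   # view for model B

    feat_a = model_a(view_a)             # (1, D) embedding
    feat_b = model_b(view_b)             # (1, D) embedding

    # Negative cosine similarity between L2-normalised embeddings.
    # Spatial transforms such as rotation or flipping do not enter the loss
    # directly, since the comparison happens in embedding space, not pixel space.
    feat_a = F.normalize(feat_a, dim=1)
    feat_b = F.normalize(feat_b, dim=1)
    return -(feat_a * feat_b).sum(dim=1).mean()
```

Is this roughly the intended procedure, or does the actual loss compare something else?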
