
No effect on AWS Rekognition? #138

Open
pospielov opened this issue May 31, 2021 · 4 comments

Comments

@pospielov

I just downloaded two images, the original and the cloaked version, from your website and uploaded them to AWS Rekognition. The result is 100% similarity.
Did you upload the wrong images? I checked all of them, including the Obama ones, and the files do have different sizes.
[image: Rekognition result]

@tbeckenhauer

tbeckenhauer commented Jul 18, 2021

I see the same effect with my photos. In case I don't get back to this: I realized I had only run this with --mode=low. I'm currently processing with --mode=high, but it's taking a while.

@tbeckenhauer

OK, I processed this myself, and this test was run with --mode=high. The alterations are clearly visible to the eye, yet AWS Rekognition still thinks they are the same person.

Comparison Original to High

@tbeckenhauer

tbeckenhauer commented Jul 20, 2021

On reflection, comparing a before-and-after of the same photo isn't a realistic test. What we need to compare are two different images of the same person, each run through fawkes. I did that, and the results are not good: roughly 99.9%, 99.5%, and 99.3% similarity for low, mid, and high, respectively. I suspect these facial recognition services saw all the publicity around Fawkes and trained their networks to recognize cloaked images. Maybe generative adversarial networks would be a good next step, but I am not an expert on this. Either way, we would need scripts to automate this testing.

Low
Comparison Obama Low
Mid
Comparison Obama Mid
High
Comparison Obama High
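For automating the kind of pairwise test described above, a minimal sketch using boto3's Rekognition `CompareFaces` API could look like the following. This assumes AWS credentials are configured locally; the helper names and file names are illustrative, not from this thread.

```python
def rekognition_similarity(source_path, target_path, region="us-east-1"):
    """Call Rekognition CompareFaces on two local image files and
    return the highest reported similarity score (0-100)."""
    # Import inside the function so the pure helper below can be
    # used without boto3 installed.
    import boto3

    client = boto3.client("rekognition", region_name=region)
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        response = client.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=0,  # report every match, even weak ones
        )
    return max_similarity(response)


def max_similarity(response):
    """Extract the highest similarity score from a CompareFaces
    response dict; 0.0 if Rekognition found no matching faces."""
    matches = response.get("FaceMatches", [])
    return max((m["Similarity"] for m in matches), default=0.0)


if __name__ == "__main__":
    # Hypothetical file names for a cloaked-vs-cloaked comparison.
    score = rekognition_similarity("obama_a_cloaked.png", "obama_b_cloaked.png")
    print(f"Similarity: {score:.1f}%")
```

Running this over the low/mid/high outputs for each image pair would make the comparison repeatable instead of relying on the web console.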

@ghost

ghost commented Oct 22, 2023

https://www.theregister.com/2022/03/15/research_finds_data_poisoning_cant/
This article suggests that the big players have already trained their models to resist data poisoning.
Fawkes is about done for.
