the adversarial noise prevents humans from correctly assessing and reporting flips #364
Comments
@ivantor69 Thank you for the feedback! Sorry to hear that your flip was reported. Could you share a screenshot of the flip? (There is a "Regenerate" button on the Protect images step: if the color shift does not work for an image, you can regenerate it to get acceptable colors. Please always check that the images are still recognizable before submitting the flip.) Adversarial noise combined with color shifting deteriorates the results of image recognition software to some extent. In our tests, people, cats, dogs and cars (the most popular image categories on the Internet) are detected very well by Google Vision, and it is quite complicated to make Google think that a person is, for instance, an animal. Nevertheless, the noise prevents the software from detecting details in the image that could be crucial for solving the flip; in your example the glasses are not detected (perhaps the flip is related to glasses). In addition, objects that don't belong to popular image categories may be detected incorrectly. Please see some examples here.
I sent an email today!
@ivantor69 I see your point. When you look at just one noised image, sometimes you can't say for sure what's on it. But please do not forget that a flip is always a combination of images, and they must be logically connected, not just a bunch of random pictures. So the context of the flip should give you a hint about what is shown in this particular image; otherwise, some people may want to report it. By the way, you posted a good example of how color shifting with noise works: Google detects the content of this image as an animal. So in this particular example our adversarial technique gives misleading results to the AI.
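For readers curious what "color shifting with noise" means in practice, here is a minimal sketch of the general idea. The channel rotation, blend weights, and Gaussian noise level are illustrative assumptions on my part, not Idena's actual protection algorithm:

```python
import numpy as np

def protect_image(img, noise_std=12.0, seed=0):
    """Apply a crude color shift plus random noise to an RGB image array.

    Illustrative sketch only: the channel-rotation "color shift" and the
    additive Gaussian noise are assumptions, not the app's real algorithm.
    """
    rng = np.random.default_rng(seed)
    # Rotate RGB -> BRG as a crude stand-in for a hue shift
    shifted = np.roll(img.astype(np.float32), shift=1, axis=-1)
    # Partial blend keeps the scene recognizable to humans
    blended = 0.6 * img.astype(np.float32) + 0.4 * shifted
    # Additive noise degrades fine details that recognition software relies on
    noisy = blended + rng.normal(0.0, noise_std, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Tiny 2x2 "image" just to exercise the function
img = np.array([[[200, 30, 30], [30, 200, 30]],
                [[30, 30, 200], [128, 128, 128]]], dtype=np.uint8)
out = protect_image(img)
```

The point of the partial blend is exactly the trade-off discussed in this thread: too strong a shift and humans can no longer tell potatoes from lemons; too weak and the noise does little against image recognition software.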
I find that the noise filter makes Idena unusable: it degrades the colors and adds far too much noise. If you don't want to prioritize a better algorithm for the filter, I will roll back to a Windows app version that works without the noise filter, and I won't click "Update app" even if the warning appears.
One more thing: why couldn't you swing (distort) the image instead, so that a bot using image recognition software can't understand it? Isn't that a good idea?
Please understand my frustration: I had a flip with a man eating potatoes, and after the filter the colors shifted to yellow and people thought they were lemons! For that reason they reported me!
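The "swing the image" suggestion above could be read as a geometric warp instead of a color change. A minimal sketch of that idea, using a sinusoidal row shift (the amplitude, period, and row-wise rolling are my illustrative assumptions, not a feature of the Idena app):

```python
import numpy as np

def wave_warp(img, amplitude=3, period=8):
    """Shift each row of an RGB image horizontally by a sinusoidal offset.

    Illustrative sketch only: a geometric distortion that preserves every
    pixel's color (so colors stay human-recognizable) while displacing
    structure that recognition software may rely on.
    """
    h = img.shape[0]
    out = np.empty_like(img)
    for y in range(h):
        offset = int(round(amplitude * np.sin(2 * np.pi * y / period)))
        out[y] = np.roll(img[y], offset, axis=0)  # circular shift along the width
    return out

# Synthetic 4x8 RGB test image
img = np.arange(4 * 8 * 3, dtype=np.uint8).reshape(4, 8, 3)
warped = wave_warp(img)
```

Note the trade-off: because no colors are altered, a warp like this avoids the potatoes-turning-into-lemons problem, but a strong enough warp can make the scene just as hard for humans to read as heavy noise does.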