
kervolution-kernel-convolution-pytorch

We implement the Kervolutional Neural Network (KNN) structure from CVPR 2019 and study how it performs under white-box attacks (e.g. FGSM). We ran a series of experiments to see what effect kervolution has on adversarial robustness.

Adversarial attack

Following Goodfellow's paper Explaining and Harnessing Adversarial Examples, we can build a classic attack, the fast gradient sign method (FGSM), to attack a traditional CNN structure such as LeNet.
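A minimal sketch of FGSM in PyTorch is shown below (not necessarily the exact implementation in this repo); `model`, `images`, `labels`, and `epsilon` are assumed to be a trained classifier, a batch of images in [0, 1], their true labels, and the attack strength.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    """One-step FGSM: perturb inputs along the sign of the input gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    model.zero_grad()
    loss.backward()
    # x_adv = x + epsilon * sign(dL/dx), clipped back to the valid pixel range
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0.0, 1.0).detach()
```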

Kervolution with polynomial kernel under FGSM attack

Non-learnable parameters

Initialize cp and dp with the fixed values listed in the table below, and set cp_require_grad=False so that cp is not updated during training.
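For reference, here is a minimal sketch of a kervolution layer with a polynomial kernel, assuming the kernel K(x, w) = (x·w + cp)^dp from the Kervolution paper: since x·w is just the linear convolution response, the kernel reduces to an elementwise transform of the ordinary convolution output. The `cp_require_grad` flag mirrors the setting mentioned above; this is a sketch, not the repo's exact layer.

```python
import torch
import torch.nn as nn

class PolyKerv2d(nn.Module):
    """Kervolution layer with a polynomial kernel (sketch)."""
    def __init__(self, in_channels, out_channels, kernel_size,
                 cp=1.0, dp=3, cp_require_grad=False, **conv_kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              bias=False, **conv_kwargs)
        # cp is kept fixed when cp_require_grad=False (non-learnable setting)
        self.cp = nn.Parameter(torch.tensor(float(cp)),
                               requires_grad=cp_require_grad)
        self.dp = dp

    def forward(self, x):
        y = self.conv(x)                  # linear term x·w for each patch
        return (y + self.cp) ** self.dp   # polynomial kernel response
```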

| Model | cp  | dp | Epsilon = 0 | Epsilon = 0.05 | Epsilon = 0.07 | Epsilon = 0.1 |
|-------|-----|----|-------------|----------------|----------------|---------------|
| KNN-A | 1   | 5  | 0.9876      | 0.8602         | 0.7540         | 0.5762        |
| KNN-B | 1   | 3  | 0.9877      | 0.9054         | 0.8361         | 0.7036        |
| KNN-C | 1   | 2  | 0.9874      | 0.9128         | 0.8513         | 0.7284        |
| KNN-D | 0.5 | 5  | 0.9890      | 0.8268         | 0.7001         | 0.5142        |
| KNN-E | 0.5 | 3  | 0.9872      | 0.9048         | 0.8425         | 0.7227        |
| KNN-F | 0.5 | 2  | **0.9885**  | **0.9243**     | **0.8765**     | **0.7718**    |
| CNN   | -   | -  | 0.9882      | 0.8948         | 0.8178         | 0.6629        |
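The accuracies above can be collected with a loop like the following sketch, assuming the `fgsm_attack` helper above, a trained `model`, and a hypothetical MNIST `test_loader`.

```python
import torch

def accuracy_under_fgsm(model, test_loader, epsilons=(0.0, 0.05, 0.07, 0.1)):
    """Measure test accuracy at each FGSM attack strength (sketch)."""
    model.eval()
    results = {}
    for eps in epsilons:
        correct, total = 0, 0
        for images, labels in test_loader:
            adv = fgsm_attack(model, images, labels, eps) if eps > 0 else images
            with torch.no_grad():
                preds = model(adv).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
        results[eps] = correct / total
    return results
```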

According to the experiment results, setting cp=0.5 and dp=2 gives the best performance under the FGSM attack (shown in bold in the table above). Failure and success cases are displayed below:
