Attacking Perceptual Similarity Metrics

Abhijay Ghildyal, Feng Liu. In TMLR, 2023. (Featured Certification)

[OpenReview] [Arxiv]

In this study, we systematically examine the robustness of both traditional and learned perceptual similarity metrics to imperceptible adversarial perturbations.

Figure (above): $I_1$ is more similar to $I_{ref}$ than $I_{0}$ according to all perceptual similarity metrics and humans. We attack $I_1$ by adding an imperceptible adversarial perturbation ($\delta$) such that the metric ($f$) flips its original ranking, i.e., in the sample above, $I_0$ becomes more similar to $I_{ref}$ according to $f$.
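
Stated more formally (with lower $f$ meaning more similar): given $f(I_{ref}, I_1) < f(I_{ref}, I_0)$, the attack searches for a small perturbation $\delta$, e.g. $\|\delta\|_\infty \le \epsilon$ in the PGD variant, such that $f(I_{ref}, I_1 + \delta) > f(I_{ref}, I_0)$ while $I_1 + \delta$ remains visually indistinguishable from $I_1$.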


Figure (above): An example of the PGD attack on LPIPS(Alex)
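
For intuition, below is a minimal sketch of such an $\ell_\infty$-bounded PGD attack on LPIPS(Alex). It is not the code in whitebox_attack_pgd.py; the function name pgd_flip, the budget, step size, and iteration count are illustrative, and the images are assumed to be tensors already scaled to the [-1, 1] range LPIPS expects.

import torch
import lpips  # pip install lpips

def pgd_flip(metric, ref, img_attack, img_other, increase=True,
             eps=8/255, alpha=2/255, steps=20):
    """Perturb img_attack within an l_inf ball of radius eps so that the metric's
    ranking of (img_attack, img_other) w.r.t. ref flips. increase=True pushes
    d(ref, img_attack) up (the attack in the figure); increase=False pulls it down."""
    delta = torch.zeros_like(img_attack, requires_grad=True)
    for _ in range(steps):
        d = metric(ref, img_attack + delta).sum()
        d.backward()
        with torch.no_grad():
            step = alpha if increase else -alpha
            delta += step * delta.grad.sign()                 # signed gradient step
            delta.clamp_(-eps, eps)                           # imperceptibility budget
            delta.copy_((img_attack + delta).clamp(-1, 1) - img_attack)  # keep pixels valid
            flipped = (metric(ref, img_attack + delta) > metric(ref, img_other)) == increase
            if flipped.all():                                 # stop once the rank is flipped
                break
        delta.grad.zero_()
    return (img_attack + delta).detach()

metric = lpips.LPIPS(net='alex')  # LPIPS(Alex), the metric attacked in the figure
# adv = pgd_flip(metric, I_ref, I_1, I_0)  # I_ref, I_1, I_0 as in the first figure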

Requirements

Requires Python 3+ and PyTorch 0.4+. For evaluation, please download the data from the links below.

When starting this project, I used the requirements.txt (link) from the LPIPS repository (link). We are grateful to the authors of various perceptual similarity metrics for making their code and data publicly accessible.

Downloads

The transferable adversarial attack samples generated for our benchmark in Table 5 can be downloaded from this Google Drive folder (link). Please unzip transferableAdvSamples.zip into the datasets/ folder.

Alternatively, you can use the following:

cd datasets
gdown 1gA7lD7FtvssQoMQwaaGS_6E3vPkSf66T # get <id> from google drive (see below)
unzip transferableAdvSamples.zip

If the gdown id changes, you can obtain it from the 'shareable with anyone' link for the transferableAdvSamples.zip file in the aforementioned Google Drive folder. The id is a substring of the shareable link, as shown here: https://drive.google.com/file/d/<id>/view?usp=share_link.
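
If you prefer doing the download from Python, a small sketch along these lines should also work (assuming a reasonably recent gdown; drive_id_from_link is a made-up helper, and the id below may go stale as noted above):

import re
import zipfile

import gdown  # pip install gdown

def drive_id_from_link(link):
    # e.g. https://drive.google.com/file/d/<id>/view?usp=share_link  ->  <id>
    return re.search(r"/file/d/([^/]+)", link).group(1)

file_id = "1gA7lD7FtvssQoMQwaaGS_6E3vPkSf66T"
# ...or, if the id has changed, recover it from the shareable link:
# file_id = drive_id_from_link("https://drive.google.com/file/d/<id>/view?usp=share_link")

gdown.download(f"https://drive.google.com/uc?id={file_id}", "transferableAdvSamples.zip", quiet=False)
with zipfile.ZipFile("transferableAdvSamples.zip") as zf:
    zf.extractall("datasets/")  # the benchmark expects the unzipped samples under datasets/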

Download the LPIPS repo (link) outside this folder. Then, download the BAPPS dataset as described here: link.

Benchmark

Use the following commands to benchmark various metrics on the transferable adversarial samples, which were created by attacking LPIPS(Alex) on BAPPS dataset samples via stAdv and PGD.

# L2
CUDA_VISIBLE_DEVICES=0 python transferableAdv_benchmark.py --metric l2 --save l2

# SSIM
CUDA_VISIBLE_DEVICES=0 python transferableAdv_benchmark.py --metric ssim --save ssim

# ST-LPIPS(Alex)
CUDA_VISIBLE_DEVICES=0 python transferableAdv_benchmark.py --metric stlpipsAlex --save stlpipsAlex
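
If you want to evaluate several metrics back to back, a small wrapper like the hypothetical one below (same script and flags as the commands above) is one option:

import os
import subprocess

# Hypothetical convenience loop, not part of the repo: run the benchmark for several metrics.
for metric in ["l2", "ssim", "stlpipsAlex"]:
    subprocess.run(
        ["python", "transferableAdv_benchmark.py", "--metric", metric, "--save", metric],
        env=dict(os.environ, CUDA_VISIBLE_DEVICES="0"),
        check=True,
    )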

The results will be stored in the results/transferableAdv_benchmark/ folder.

Finally, use the IPython notebook results/study_results_transferableAdv_attack.ipynb to calculate the number of flips.
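
Conceptually, a flip is counted whenever the attacked image's distance to the reference rises above that of the other distorted image, reversing the metric's original ranking. A minimal sketch of that check follows; the notebook works from the saved benchmark outputs, so the triplets structure here is purely hypothetical:

import torch

def count_flips(metric, triplets):
    """triplets: iterable of (ref, img0, img1_adv), where the metric originally judged
    the unattacked img1 to be closer to ref than img0 (hypothetical data layout)."""
    flips = 0
    for ref, img0, img1_adv in triplets:
        with torch.no_grad():
            d0 = metric(ref, img0).item()
            d1_adv = metric(ref, img1_adv).item()
        if d1_adv > d0:  # the attack reversed the metric's ranking
            flips += 1
    return flips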

Creating Transferable Adversarial Samples

The following steps were performed to create the transferable adversarial samples for our benchmark.

  1. Create adversarial samples by attacking LPIPS(Alex) via the spatial attack stAdv (a rough illustration of such a flow-based warp is sketched after this list).
CUDA_VISIBLE_DEVICES=0 python create_transferable_stAdv_samples.py
  2. Perform a visual inspection of the samples and weed out those that do not meet our criteria of imperceptibility.

  3. Using the samples selected in step 2, attack LPIPS(Alex) via $\ell_\infty$-bounded PGD with different maximum iteration counts.

CUDA_VISIBLE_DEVICES=0 python create_transferable_PGD_samples.py
  4. Finally, combine the stAdv and PGD attacks by applying PGD to the samples created via stAdv.
CUDA_VISIBLE_DEVICES=0 python create_transferable_stAdvPGD_samples.py
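
Step 1 relies on stAdv, which warps pixel locations instead of adding pixel noise. The sketch below is only a rough, simplified illustration of that idea (an Adam optimizer and a plain squared-flow penalty stand in for stAdv's flow loss and optimizer; the weight, step count, and function name are made up), and it assumes a recent PyTorch for meshgrid's indexing argument; the actual attack is in create_transferable_stAdv_samples.py.

import torch
import torch.nn.functional as F

def spatial_flip(metric, ref, img0, img1, steps=100, lr=0.01):
    """Optimize a small flow field that warps img1 until the metric prefers img0."""
    n, _, h, w = img1.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=img1.device),
        torch.linspace(-1, 1, w, device=img1.device),
        indexing="ij",
    )
    base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)  # identity grid
    flow = torch.zeros_like(base_grid, requires_grad=True)  # per-pixel displacement
    opt = torch.optim.Adam([flow], lr=lr)
    d0 = metric(ref, img0).detach()
    for _ in range(steps):
        warped = F.grid_sample(img1, base_grid + flow, align_corners=True)
        # push d(ref, warped img1) above d(ref, img0) while keeping the warp tiny
        loss = (d0 - metric(ref, warped)).sum() + 10.0 * flow.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return F.grid_sample(img1, base_grid + flow, align_corners=True).detach()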

We hope the above code assists and inspires additional studies that test the robustness of perceptual similarity metrics through more extensive benchmarks, using various datasets and stronger adversarial attacks.

Whitebox PGD attack

To perform the whitebox PGD attack, run the following:

CUDA_VISIBLE_DEVICES=0 python whitebox_attack_pgd.py --metric lpipsAlex --save lpipsAlex --load_size 64

The results are saved in results/whitebox_attack/.

Finally, use the IPython notebook results/study_results_whitebox_attack.ipynb to calculate the number of flips and other statistics.

We provide code to perform the reverse of our attack (see Appendix F), i.e., we attack the less similar of the two distorted images to make it more similar to the reference image.

CUDA_VISIBLE_DEVICES=0 python whitebox_toMakeMoreSimilar_attack_pgd.py --metric lpipsAlex --save lpipsAlex --load_size 64
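
In terms of the hypothetical pgd_flip sketch given earlier (not the repo's script), this reverse attack simply descends on the distance of the less similar image, e.g.:

# Reverse attack (Appendix F), expressed with the earlier pgd_flip sketch: perturb the
# *less* similar image I_0 and pull its distance to I_ref down until the rank flips.
adv0 = pgd_flip(metric, I_ref, I_0, I_1, increase=False)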

To add: code for the FGSM attack, and a benchmark on the PieAPP dataset.

Citation

If you find this repository useful for your research, please use the following to cite our work:

@article{ghildyal2023attackPercepMetrics,
  title={Attacking Perceptual Similarity Metrics},
  author={Abhijay Ghildyal and Feng Liu},
  journal={Transactions on Machine Learning Research},
  year={2023}
}
