inspire-group/advml-traffic-sign
DARTS: Deceiving Autonomous Cars with Toxic Signs

Website: http://adversarial-learning.princeton.edu/darts/

The code in this repository accompanies the paper DARTS: Deceiving Autonomous Cars with Toxic Signs and its earlier extended abstract Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos, a research project of the INSPIRE group in the Department of Electrical Engineering at Princeton University. It is the same code we used to run the experiments, but excludes some of the run scripts as well as the datasets. Please download the dataset in pickle format here, or visit the original websites for the GTSRB and GTSDB datasets.
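A minimal sketch of loading a traffic-sign dataset stored in pickle format. The dictionary keys and array shapes below (`x_train`, `y_train`, 32x32 RGB images) are assumptions for illustration, not the repository's confirmed layout; a small stand-in file is created first so the example is self-contained.

```python
import pickle
import numpy as np

# Build a tiny stand-in dataset so the example runs without the real download.
# The key names and shapes are illustrative assumptions.
dummy = {
    "x_train": np.zeros((10, 32, 32, 3), dtype=np.float32),  # sign images
    "y_train": np.zeros(10, dtype=np.int64),                 # class labels
}
with open("gtsrb_dummy.p", "wb") as f:
    pickle.dump(dummy, f)

# Loading then follows the usual pickle pattern.
with open("gtsrb_dummy.p", "rb") as f:
    data = pickle.load(f)

print(data["x_train"].shape)  # (10, 32, 32, 3)
```

Once the actual dataset file is downloaded, the load step is the same; only the filename and the key names may differ.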

File Organization

The main implementation is in ./lib, which contains:

Data and setup specific to our experiments:

The main code we used to run the experiments is in Run_Robust_Attack.ipynb. It demonstrates our procedure and the usage of the library functions, and includes the code used for most of the experiments, from generating the attacks to evaluating them in both virtual and physical settings.
Examples of previously proposed adversarial examples generation methods are listed in GTSRB.ipynb.
Relevant parameters are set in a separate configuration file called parameters.py.
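The pattern above (a flat module of constants imported by the experiment code) can be sketched as follows. The parameter names here are illustrative and may not match those in the repository's parameters.py; the one grounded value is that GTSRB has 43 traffic-sign classes.

```python
# --- parameters.py-style configuration module (names are illustrative) ---
IMG_SHAPE = (32, 32, 3)   # classifier input size (assumed)
NUM_CLASSES = 43          # GTSRB defines 43 traffic-sign classes
BATCH_SIZE = 128          # training batch size (illustrative)

# Experiment code would then import the constants it needs, e.g.:
#   from parameters import IMG_SHAPE, NUM_CLASSES
print(NUM_CLASSES)  # 43
```

Keeping all tunable values in one module makes it easy to rerun an experiment with different settings without editing the notebooks themselves.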

Contact

Comments and suggestions can be sent to Chawin Sitawarin (chawins@princeton.edu) and Arjun Nitin Bhagoji (abhagoji@princeton.edu).
