
Learning by Cheating and Adversarial Robustness Toolbox

This repo is a custom version of the LearningByCheating autonomous driving agent and the related benchmark suite, integrated with the IBM Adversarial Robustness Toolbox (ART) to inject four attacks on the RGB camera:

• Spatial Transformation (STA)

• HopSkipJump (HSJ)

• Basic Iterative Method (BIM)

• NewtonFool (NF)
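
For reference, these map onto the following classes in ART 1.2.0, where attacks were still exposed directly under art.attacks (later releases moved them to art.attacks.evasion):

```python
# ART 1.2.0 import paths; newer releases moved these classes
# to art.attacks.evasion.
from art.attacks import (
    SpatialTransformation,   # STA: small rotations/translations of the frame
    HopSkipJump,             # HSJ: decision-based, needs only model queries
    BasicIterativeMethod,    # BIM: iterative FGSM under an L-infinity budget
    NewtonFool,              # NF: gradient-based confidence reduction
)
```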

Summary

For my Bachelor's Thesis in Computer Science at the University of Florence, I injected adversarial attacks into the images extracted from the RGB camera before they are fed to the trained Learning By Cheating agent. The objective is to visualize the effects of these attacks on the trained agent.

The injected attacks are provided by the Adversarial Robustness Toolbox (ART) library (version 1.2.0). This repo contains the modified LbC agent that I used to test them. You can see the test results and the videos recorded for the various runs here.
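
In sketch form, the injection point looks like the following. This is a minimal illustration, not the repo's actual code: the stand-in network, frame size, and class count are all assumptions (ART 1.2.0 exposed the PyTorch wrapper as art.classifiers.PyTorchClassifier):

```python
import torch
import torch.nn as nn
from art.classifiers import PyTorchClassifier  # ART 1.2.0 path
from art.attacks import HopSkipJump

# Stand-in for the trained LbC perception network (purely illustrative).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 160 * 384, 4))

# Wrap the network so ART attacks can query it.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-4),
    input_shape=(3, 160, 384),  # C, H, W of one RGB frame (assumed size)
    nb_classes=4,               # placeholder output dimension
    clip_values=(0.0, 1.0),
)

attack = HopSkipJump(classifier)

# One camera frame as a (1, C, H, W) float array in [0, 1];
# the perturbed frame is what the agent actually sees.
rgb = torch.rand(1, 3, 160, 384).numpy()
rgb_adv = attack.generate(x=rgb)
```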

The following table summarizes the results recorded in the regular suite:

| Attack | Completed Runs | Ignored Traffic Lights | Collisions | Timeouts |
| --- | --- | --- | --- | --- |
| Golden runs (no attack) | 12/12 | 0 | 0 | 0 |
| HSJ | 6/12 | 9 | 6 | 0 |
| STA | 7/12 | 0 | 4 | 1 |
| BIM | 0/12 | N/A | 12 | 0 |
| NF | 3/12 | 13 | 9 | 0 |

Installation

The installation guide can be found HERE. The process to install and use the software is essentially the same as for the original; just remember to clone this modified repo instead. If you want to skip compiling, use this script, which I changed so that it installs this version.

Usage

To select which attack is injected during a run, change the string passed in the following line:

self.attack = load_attack(self.adv, 'hopskipjump')

This line can be found in the __init__ method of the ImageAgent class, defined in the image module. You can choose any of the four attacks with the following strings (a sketch of the corresponding dispatcher follows the list):

  • 'hopskipjump' → HopSkipJump
  • 'spatialtransformation' → Spatial Transformation
  • 'newton' → NewtonFool
  • 'bim' → Basic Iterative Method
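
The load_attack helper is essentially a string-to-class dispatcher over the ART attack classes. A minimal sketch of what it might look like, assuming self.adv is the ART-wrapped classifier and using illustrative parameters (the actual configuration lives in the attack module):

```python
from art.attacks import (BasicIterativeMethod, HopSkipJump,
                         NewtonFool, SpatialTransformation)

def load_attack(adv, name):
    """Return a configured ART attack for the given selection string.

    Sketch only: `adv` is assumed to be the agent's ART-wrapped
    classifier, and the parameters below are illustrative rather
    than the repo's actual settings.
    """
    attacks = {
        'hopskipjump': lambda: HopSkipJump(adv, targeted=False, max_iter=50),
        'spatialtransformation': lambda: SpatialTransformation(
            adv, max_translation=10.0, max_rotation=30.0),
        'newton': lambda: NewtonFool(adv, max_iter=50),
        'bim': lambda: BasicIterativeMethod(
            adv, eps=0.1, eps_step=0.01, max_iter=10),
    }
    return attacks[name]()
```

With a dispatcher of this shape, switching attacks is just a matter of changing the string in the line shown above.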

To see the effects of these attacks, you need to run the LbC benchmark agent inside the CARLA simulator. Instructions on how to do this can be found here, in the Running the Carla Server and Running the LbC Agent sections.

Changes

The code changes can be found in the bird_view/models subfolder: I added the attack module and modified parts of the image module.

Reference

Niccolò Piazzesi, "Attacchi verso sistemi di apprendimento in ambito autonomous driving: studio e implementazione in ambienti simulati" ("Attacks on learning systems in autonomous driving: study and implementation in simulated environments"; in Italian), Bachelor Thesis, University of Florence. Supervisor: Andrea Ceccarelli.

Link to the Thesis: http://rcl.dsi.unifi.it/administrator/components/com_jresearch/files/publications/PiazzesiNiccol%C3%B2.pdf
