
CNNs & Adversarial Examples

Introduction

While Convolutional Neural Networks (CNNs) shine in areas like disease detection and self-driving cars, they're susceptible to adversarial examples.

Understanding Adversarial Examples

Adversarial examples are inputs that have been slightly perturbed to deceive AI systems. For instance, an imperceptibly altered image might cause a CNN to misclassify a rhino.
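The repository doesn't name a specific attack here, but a common way to craft such perturbations is the Fast Gradient Sign Method (FGSM), which nudges every pixel a small step in the direction that increases the model's loss. Below is a minimal PyTorch sketch; `model`, `images`, and `labels` are hypothetical placeholders, not names from this project.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Return copies of `images` perturbed to raise the model's loss (FGSM sketch)."""
    # Track gradients with respect to the input pixels, not the weights.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel by epsilon in the sign of its gradient,
    # then clamp back to the valid [0, 1] pixel range.
    perturbed = images + epsilon * images.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

A small epsilon (e.g. 0.03 for images scaled to [0, 1]) typically keeps the change invisible to a human while still flipping the model's prediction.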

Potential Risks

Key sectors face challenges due to this vulnerability:

  • Self-Driving Cars: Misinterpreting traffic signs.
  • Medical Imaging: Incorrect tumor evaluations.
  • Security Systems: Evading facial recognition.

Project Objective

We'll generate adversarial examples designed to fool our trained model and compare its performance on clean versus perturbed inputs, as sketched below. This exploration underscores the need to understand AI weaknesses and to test models rigorously.
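As a rough illustration of that comparison, the sketch below measures accuracy on clean inputs versus FGSM-perturbed inputs. It assumes a trained `model`, a `test_loader`, and the hypothetical `fgsm_attack` helper from the previous section; none of these names come from the repository itself.

```python
import torch

def compare_clean_vs_adversarial(model, test_loader, epsilon=0.03):
    """Print accuracy on clean inputs and on FGSM-perturbed inputs."""
    model.eval()
    clean_correct = adv_correct = total = 0
    for images, labels in test_loader:
        with torch.no_grad():
            clean_correct += (model(images).argmax(dim=1) == labels).sum().item()
        # Crafting the attack needs gradients, so it runs outside no_grad.
        adv_images = fgsm_attack(model, images, labels, epsilon)
        with torch.no_grad():
            adv_correct += (model(adv_images).argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    print(f"Clean accuracy:       {clean_correct / total:.3f}")
    print(f"Adversarial accuracy: {adv_correct / total:.3f}")
```

A large gap between the two numbers is exactly the kind of blind spot the Key Takeaway below describes.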

Key Takeaway: Advanced AI models, despite their prowess, have blind spots. Identifying and mitigating these is crucial.
