# Awesome AI Safety


Figuring out how to make your AI safer? Want to avoid ethical biases, errors, privacy leaks, or robustness issues in your AI models?

This repository contains a curated list of papers and technical articles on AI Quality & Safety 📚

## Table of Contents

You can browse papers by Machine Learning task category, and use hashtags like #robustness to explore AI risk types.

1. [General ML Testing](#general-ml-testing)
2. [Tabular Machine Learning](#tabular-machine-learning)
3. [Natural Language Processing](#natural-language-processing)
4. [Computer Vision](#computer-vision)
5. [Recommendation System](#recommendation-system)
6. [Time Series](#time-series)

## General ML Testing

### AI Incident Databases

## Tabular Machine Learning

## Natural Language Processing

### Large Language Models

## Computer Vision

## Recommendation System

## Time Series

Contributions are welcome 💕