kkm24132/ResponsibleAI
Ethical AI / Responsible AI

Objective: Capture the fundamentals of AI ethics and responsible AI from the standpoint of principles, processes, standards, guidelines, the broader ecosystem, and regulation/risk.

My Articles / Blog References

Categories

| Category | Description |
| --- | --- |
| Risk in deployment | • Bias (e.g., facial recognition failing because the dataset does not reflect the population it serves)<br>• Fairness (does the historical dataset reflect reality?)<br>• Unethical aspects or unfair usage |
| Regulatory aspects | • Region-specific needs (e.g., GDPR) |
| Provide clarity as much as possible | • What happens from Step 1 to Step N<br>• Features used during the feature engineering process<br>• Information on feature importance / top-N features (as applicable) |
| How to approach bias in AI | • Gather more diverse datasets<br>• Include labels from a wider range of judges<br>• Monitor the output of models / experiments / algorithms<br>• Focus on small categories and edge cases<br>• Laws and regulatory protocols may be required to address bias |
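The "monitor the output of models" point above can be sketched in a few lines. This is a minimal illustration of per-subgroup monitoring; the group labels and predictions are hypothetical, and accuracy stands in for whatever metric fits the use case:

```python
# A minimal sketch of monitoring model outputs across subgroups.
# Group names and data below are hypothetical illustrations.
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Compute accuracy separately for each subgroup."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical predictions for two subgroups:
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "B", "B", "B"]
print(per_group_accuracy(y_true, y_pred, groups))
```

A large gap between the per-group scores is one signal that the small categories and edge cases mentioned above deserve a closer look.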
| Category | DOs (AI Should) | DON'Ts (AI Should Not) |
| --- | --- | --- |
| • Principles<br>• Processes/Methods<br>• Standards/Guidelines<br>• Regulation | • Incorporate privacy design principles<br>• Incorporate regulation principles<br>• Be accountable to users for the solutions it generates<br>• Uphold a high standard of scientific excellence in the AI solution<br>• Be accountable to the end users / people using the AI solution | • Create solutions likely to cause overall harm to end users<br>• Have a principal objective of directing injury<br>• Aid surveillance that violates international guidelines |

Principles from the Ethical Institute

The Ethical Institute has recommended the following principles:

  • Human Augmentation
  • Bias Evaluation
  • Explainability by Justification
  • Reproducible Operations
  • Displacement Strategy
  • Practical Accuracy
  • Trust by Privacy
  • Security Risks

Please check here

As per HBR (Harvard Business Review), ethical frameworks for AI aren't enough. Check Here

Machine Learning Roadmap

ML Roadmap

  • Focus on every stage of the ML journey
  • It is critical to detail which tasks are performed at each step
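The stepwise roadmap idea above can be illustrated with a tiny pipeline runner that executes named stages in order and records what each one did. The stage names and transformations here are illustrative assumptions, not the repository's actual roadmap:

```python
# A minimal sketch of running ML-journey stages step by step,
# logging what each stage performed. Stage names are hypothetical.
def run_pipeline(data, stages):
    """Apply each named stage in order, recording what was performed."""
    log = []
    for name, fn in stages:
        data = fn(data)
        log.append(f"{name}: done")
    return data, log

stages = [
    ("ingest", lambda d: d),
    ("clean", lambda d: [x for x in d if x is not None]),
    ("feature_engineering", lambda d: [x * 2 for x in d]),
]
result, log = run_pipeline([1, None, 3], stages)
print(result)  # [2, 6]
print(log)
```

Keeping the stage list explicit makes the "what happens from Step 1 to Step N" question answerable from the log alone.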

References

Why do we need ML Interpretability?

  • Some questions pertain to model bias, fairness, and ethics
  • Do we check the causality of features? Does more data help make better decisions here?
  • Do we have the ability to debug and learn more specifics?
  • Are there regulatory requirements that need to be understood in detail?
  • Do we trust the model's outcomes, and to what extent?
  • Can we define a segregation of critical vs. non-critical domains?
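One concrete way to get the debugging ability asked about above is permutation feature importance: shuffle one feature at a time and measure how much the score drops. This is a stdlib-only sketch; the model, data, and scoring function are hypothetical stand-ins:

```python
# A minimal sketch of permutation feature importance.
# Model, data, and scorer below are hypothetical illustrations.
import random

def permutation_importance(model, X, y, score, n_features, seed=0):
    """Score drop after shuffling each feature column = its importance."""
    rng = random.Random(seed)
    base = score(model, X, y)
    importances = []
    for j in range(n_features):
        Xp = [row[:] for row in X]
        col = [row[j] for row in Xp]
        rng.shuffle(col)
        for row, v in zip(Xp, col):
            row[j] = v
        importances.append(base - score(model, Xp, y))
    return importances

# Hypothetical model: predicts 1 when feature 0 is positive.
model = lambda row: int(row[0] > 0)
accuracy = lambda m, X, y: sum(int(m(r) == t) for r, t in zip(X, y)) / len(y)
X = [[1, 5], [-1, 5], [2, 5], [-2, 5]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, accuracy, n_features=2))
```

Here the second feature is constant, so its importance comes out zero; only the feature the model actually uses registers a score drop.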

Human-Centered Design for AI/Data Science

  • Lex Fridman's lecture on Human-Centered Artificial Intelligence (MIT 6.S093)
  • Stanford Human-centered Artificial Intelligence research
  • Google's People + AI research (PAIR) Guidebook

Bias - Different Types

This research paper describes six different types of bias in AI:

  • Historical Bias
  • Representation Bias
  • Measurement Bias
  • Aggregation Bias
  • Evaluation Bias
  • Deployment Bias
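Representation bias, from the list above, lends itself to a simple check: compare each group's share of the dataset against a reference population. The group names, shares, and threshold interpretation below are illustrative assumptions:

```python
# A minimal sketch of checking for representation bias: dataset share
# minus reference population share, per group. Data is hypothetical.
from collections import Counter

def representation_gaps(sample_groups, population_shares):
    """Return dataset share minus expected population share per group."""
    counts = Counter(sample_groups)
    n = len(sample_groups)
    return {g: counts.get(g, 0) / n - expected
            for g, expected in population_shares.items()}

# Hypothetical: population is 50/50, but the dataset over-represents "A".
gaps = representation_gaps(["A"] * 8 + ["B"] * 2, {"A": 0.5, "B": 0.5})
print(gaps)  # positive gap = over-represented group
```

A materially positive or negative gap for any group is a cue to revisit the "gather more diverse datasets" recommendation earlier in this document.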

Bias in AI

Fairness and Model Explainability CHECKLIST

  • Apply this as a checklist holistically across the CRISP-DM stages
  • Problem Formation
    • Is an algorithm an ethical solution to the problem?
  • Construction of Datasets / Preparation Process
    • Is the training data representative of the different groups, so that diverse data is available for appropriate analysis?
    • Are there biases in labels or features?
    • Does the data need to be modified to mitigate bias?
  • Selection of Algorithms or Methods
    • Do fairness constraints need to be included in the objective function?
  • Training Process
  • Testing Process
    • Has the model been evaluated using relevant fairness metrics?
  • Deployment
    • Is the model deployed on a population for which it was not trained or evaluated?
    • Are there unequal effects across users?
  • Monitoring / HITL
    • Does the model encourage feedback loops that can produce increasingly unfair outcomes?
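The testing step above asks whether the model has been evaluated with relevant fairness metrics. One common choice is the demographic parity difference: the gap in positive-prediction rates between two groups. This is a minimal sketch with hypothetical group labels and predictions:

```python
# A minimal sketch of demographic parity difference: the gap in
# positive-prediction rates between two groups. Data is hypothetical.
def demographic_parity_difference(y_pred, groups, group_a, group_b):
    """Positive rate of group_a minus positive rate of group_b."""
    def positive_rate(g):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        return sum(preds) / len(preds)
    return positive_rate(group_a) - positive_rate(group_b)

# Hypothetical predictions: group "A" approved 75%, group "B" 25%.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(y_pred, groups, "A", "B"))  # 0.5
```

A value near zero indicates similar treatment of the two groups; monitoring this over time also helps catch the feedback-loop concern raised in the monitoring step.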

Checklist Responsible AI

Policy Related Guidance

Key objectives for policy / governance / regulatory frameworks could be as follows:

  • Safeguard consumer interest in an AI solution
  • Serve as a common, global, consistent reference point
  • Foster innovation and more robust solutions

Frameworks:

News / Updates
