hashtaglensman/HumanintheLoop


   HOME    |    ABSTRACT    |    METHODS    |    CITATION    |    DEMO    |    CONTACT-US    

Abstract

Facial attribute classification algorithms frequently manifest demographic biases, disproportionately impacting specific gender and racial groups. Existing bias mitigation techniques often lack generalizability, require demographically annotated training sets, exhibit application-specific limitations, and entail a trade-off between fairness and classification accuracy: achieving fairness often diminishes classification accuracy for the most proficient demographic sub-group. In this paper, we propose a continual learning framework that mitigates bias in facial attribute classification algorithms by integrating a human-machine partnership, particularly during the deployment stage. Our methodology harnesses the expertise of human annotators to label uncertain data samples, which are subsequently used to fine-tune a deep neural network over time. Through iterative refinement of the network's predictions with human guidance, we seek to enhance both the accuracy and the fairness of facial attribute classification. Extensive experimentation on gender and smile attribute classification tasks validates the efficacy of our approach, supported by empirical results from four datasets. The outcomes consistently demonstrate accuracy improvements and reduced bias across both classification tasks.
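The deployment-stage loop described above can be sketched roughly as follows. This is a minimal illustration, assuming a simple confidence gate to flag uncertain samples; the `annotate_fn` callback (standing in for the human annotator) and the `threshold` value are hypothetical, not components specified in the paper.

```python
import torch
import torch.nn.functional as F

def hitl_round(model, batch_x, annotate_fn, optimizer, threshold=0.6):
    """One deployment-stage round: flag low-confidence samples,
    collect human labels for them, and fine-tune the network."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(batch_x), dim=1)
        conf, _ = probs.max(dim=1)
    uncertain = conf < threshold              # simple confidence gate (assumed)
    if uncertain.any():
        x_u = batch_x[uncertain]
        y_u = annotate_fn(x_u)                # human annotator supplies labels
        model.train()
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_u), y_u)
        loss.backward()
        optimizer.step()
    return model
```

Repeating this round over deployment batches yields the iterative human-guided refinement the abstract refers to.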

Contribution

Methods

Gender Classification

| Method | Acceptance Accuracy | Standard Deviation |
| --- | --- | --- |
| Shannon's Entropy | – | – |
| Dirichlet's Uncertainty Estimation | – | – |
| Boundary Proximity Confidence-based Outlier Detection | – | – |
| Ensemble-based Outlier Detection | – | – |
| Multiview Disagreement | – | – |
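As an illustration of the first method in the table, below is a minimal PyTorch sketch of Shannon's-entropy-based selection of uncertain samples; the `threshold` operating point is an assumption, not a value reported in the paper.

```python
import torch
import torch.nn.functional as F

def entropy_uncertain(logits, threshold=0.5):
    """Flag samples whose predictive Shannon entropy exceeds a threshold.
    `threshold` is a hypothetical operating point, not a paper value."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy > threshold
```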

Smile Attribute Classification

We discuss three methods in the paper.

Method 1:

For this method, we define uncertain data as test samples that are misclassified with a high confidence score. To capture this definition, we propose the following: a test sample is considered uncertain when

the feature of the test sample is equidistant from the feature space of the training data, and

the ratio of its confidence scores is high (see the sketch after these conditions).
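A minimal sketch of this rule, assuming per-class training-feature centroids as a stand-in for "the feature space of the training data" and the top-1/top-2 softmax ratio as the confidence ratio; the `dist_tol` and `ratio_thresh` thresholds are hypothetical.

```python
import torch
import torch.nn.functional as F

def method1_uncertain(feats, centroids, logits, dist_tol=0.1, ratio_thresh=3.0):
    """Flag samples that are near-equidistant from the per-class training
    centroids while the classifier remains highly confident."""
    d = torch.cdist(feats, centroids)                 # (N, num_classes) distances
    d_sorted, _ = d.sort(dim=1)
    equidistant = (d_sorted[:, 1] - d_sorted[:, 0]) < dist_tol
    p_sorted, _ = F.softmax(logits, dim=1).sort(dim=1, descending=True)
    high_ratio = (p_sorted[:, 0] / p_sorted[:, 1].clamp_min(1e-12)) > ratio_thresh
    return equidistant & high_ratio
```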

Method 2:

If there is a disagreement between the score obtained from the main classifier model and the combined score from its pruned and quantized variants, the sample is considered uncertain.
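One possible reading of this rule in code, where the combined score is taken as the averaged softmax of the compressed (pruned and quantized) variants; the averaging choice is an assumption.

```python
import torch
import torch.nn.functional as F

def method2_uncertain(main_model, compressed_models, x):
    """Flag samples where the main classifier's prediction disagrees with
    the averaged softmax prediction of its pruned/quantized variants."""
    with torch.no_grad():
        main_pred = main_model(x).argmax(dim=1)
        avg_probs = torch.stack(
            [F.softmax(m(x), dim=1) for m in compressed_models]).mean(dim=0)
        ens_pred = avg_probs.argmax(dim=1)
    return main_pred != ens_pred
```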

Method 3:

A data point is considered uncertain when it lies close to the decision boundary.
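One common way to operationalize boundary proximity for a softmax classifier is a small margin between the top two class probabilities; the sketch below assumes that reading, with a hypothetical `margin` threshold.

```python
import torch
import torch.nn.functional as F

def method3_uncertain(logits, margin=0.1):
    """Flag samples near the decision boundary: the gap between the two
    highest class probabilities is small."""
    p_sorted, _ = F.softmax(logits, dim=1).sort(dim=1, descending=True)
    return (p_sorted[:, 0] - p_sorted[:, 1]) < margin
```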

Citation

Demo

Contact Us

Anoop Krishnan

Ajita Rattani