michaelgira23/debiasing-lms

⚖️ Debiasing Language Models

Official code for Debiasing Pre-Trained Language Models via Efficient Fine-Tuning published in the Second Workshop on Language Technology for Equality, Diversity, Inclusion at ACL 2022.

View Demo | View Presentation

This repository is currently a placeholder. The code will be polished and published soon! In the meantime, you can take a look at the old code.

Dataset

Our fine-tuning dataset consists of the WinoBias and CrowS-Pairs datasets. After cloning the Git submodules for the respective datasets, run:

python dataset/prepare.py

prepare.py combines the datasets from both repositories and splits them into training (80%), cross-validation (10%), and test (10%) sets.
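The split described above can be sketched as follows. This is a minimal illustration of an 80/10/10 shuffle-and-split, not the actual contents of `dataset/prepare.py`; the function name, seed, and fractions here are assumptions for the example.

```python
import random

def split_dataset(examples, train_frac=0.8, val_frac=0.1, seed=0):
    """Shuffle and split examples into train / cross-validation / test sets.

    A hypothetical sketch of the 80/10/10 split; the real prepare.py may
    differ in shuffling, seeding, and file I/O.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible split
    examples = list(examples)
    rng.shuffle(examples)
    n_train = int(len(examples) * train_frac)
    n_val = int(len(examples) * val_frac)
    train = examples[:n_train]
    val = examples[n_train:n_train + n_val]
    test = examples[n_train + n_val:]
    return train, val, test

# Example: split 100 combined sentence pairs 80/10/10
train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # 80 10 10
```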
