VL-Bias/README.md

Counterfactually Measuring and Eliminating Social Bias in Vision-Language Pre-training Models (ACM MM 2022)

Official PyTorch implementation and dataset

Example code is built on ALBEF.

Code is available on GitHub.

VL-Bias Dataset

VL-Bias is available on Google Drive.

The VL-Bias dataset contains 24K images: 13K for the 52 activities and 11K for the 13 occupations.

52 activities

baking biking cleaning cooking crying driving exercising fishing hugging jumping kneeling lifting picking praying riding running sewing shouting skating smiling spying staring studying talking walking waving begging calling climbing coughing drinking eating falling hitting jogging kicking laughing painting pitching reading rowing serving shopping sitting sleeping speaking standing stretching sweeping throwing washing working

13 occupations

athlete chef doctor engineer farmer footballer judge mechanic nurse pilot police runner soldier

Text

We use the four templates described in the table to generate captions. For each template, we collected 24K image-text pairs: 13K for the 52 activities and 11K for the 13 occupations.

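As a rough sketch of how template-based caption generation for counterfactual bias measurement might look (the template strings below are hypothetical placeholders, not the paper's actual four templates, which are given in its table):

```python
# Illustrative caption generation from templates.
# NOTE: these two template strings are hypothetical placeholders;
# the paper's actual four templates are listed in its table.
TEMPLATES = [
    "a photo of a {person} {target}",
    "a {person} is {target}",
]

# A small subset of the 52 activity words from the dataset.
ACTIVITIES = ["cooking", "driving", "sewing"]

def make_captions(person_word, targets, templates):
    """Fill each template with a person word and a target concept
    (an activity or occupation word)."""
    return [t.format(person=person_word, target=target)
            for target in targets
            for t in templates]

# Counterfactual pair: same templates and targets, swapped gender word.
captions_man = make_captions("man", ACTIVITIES, TEMPLATES)
captions_woman = make_captions("woman", ACTIVITIES, TEMPLATES)
```

Comparing a model's scores on such counterfactual caption pairs (identical except for the gender word) is the general idea behind counterfactual bias measurement.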
