Knowledge compilation techniques translate propositional theories into equivalent forms to increase their computational tractability. But how should we best present these propositional theories to a human? We analyze the standard taxonomy of propositional theories for relative interpretability across three model domains: highway driving, emergency triage, and the chopsticks game. We generate decision-making agents that produce logical explanations for their actions and apply knowledge compilation to these explanations. We then evaluate how quickly, accurately, and confidently users comprehend the generated explanations. We find that domain, formula size, and negated logical connectives significantly affect comprehension, while formula properties typically associated with interpretability are not strong predictors of human ability to comprehend the theory.
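The knowledge compilation map's target languages (NNF, CNF, DNF, and others) are semantically equivalent encodings of the same theory, yet they can read very differently to a person. As a minimal sketch of this idea, assuming the sympy Python library and a hypothetical toy formula (not one of the paper's generated explanations):

# A minimal sketch of knowledge compilation targets using sympy's
# propositional logic module. The theory below is a hypothetical
# stand-in, not one of the paper's generated explanations.
from sympy import symbols
from sympy.logic.boolalg import to_nnf, to_cnf, to_dnf

rain, umbrella, wet = symbols('rain umbrella wet')

# Hypothetical theory with a negated connective at the root.
theory = ~((rain & ~umbrella) | ~wet)

# Equivalent forms in three target languages of the knowledge
# compilation map; each presents the same theory differently.
print(to_nnf(theory))  # negations pushed down to the literals
print(to_cnf(theory))  # conjunction of clauses
print(to_dnf(theory))  # disjunction of terms

The three printed forms are logically equivalent, but they differ in size and in where negation appears, which are the kinds of formula properties the study measures against human comprehension.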
For explanation generation, see Explanation_Generator/.
For knowledge compilation language conversions and user test generation, see Human_Interpretable_Logic_Statements/.
See StudyProcedures for materials, including a PDF of all questions presented to study participants, a script for starting the user study, and a follow-up discussion.
@inproceedings{booth19:logic_interpretability,
  title     = {Evaluating the Interpretability of the Knowledge Compilation Map: Communicating Logical Statements Effectively},
  author    = {Serena Booth and Christian Muise and Julie Shah},
  booktitle = {IJCAI},
  year      = {2019},
}
Serena Booth [1], Christian Muise [2,3], Julie Shah [1]
[1] MIT Computer Science and Artificial Intelligence Laboratory
[2] MIT-IBM Watson AI Lab
[3] IBM Research
{serenabooth, julie_a_shah} [at] csail.mit.edu, christian.muise [at] ibm.com