
A Taxonomy of Explanation Types and Need Indicators in Human–Agent Collaborations

Repository

This repository contains a CSV with the papers used for a scoping review of explanation types and need indicators in human–agent interaction, robotics, and human–agent collaborations.

The unlabeled papers, as extracted directly from the academic search engines (Scopus, IEEE, ACM), can be found in the subfolder Unlabeled Papers (all).
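If you want to work with the paper list programmatically, a minimal sketch using pandas is given below. The file name labeled_papers.csv and the inspected columns are assumptions for illustration only; check the actual CSV name and schema in this repository before use.

import pandas as pd

# Load the CSV of reviewed papers (file name assumed; replace with the
# actual CSV shipped in this repository).
papers = pd.read_csv("labeled_papers.csv")

# Inspect the available columns before filtering, since the label columns
# (e.g., explanation type, need indicator) are repository-specific.
print(papers.columns.tolist())
print(papers.head())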

Reference

The paper can be found via its DOI: https://doi.org/10.1007/s12369-024-01148-8.

@article{wachowiak_taxonomy_2024,
  title = "A Taxonomy of Explanation Types and Need Indicators in Human–Agent Collaborations",
  author = "Lennart Wachowiak and Andrew Coles and Gerard Canal and Oya Celiktutan",
  year = "2024",
  month = jun,
  day = "5",
  doi = "10.1007/s12369-024-01148-8",
  language = "English",
  journal = "International Journal of Social Robotics",
  issn = "1875-4805",
  publisher = "Springer",
}

Abstract

In recent years, explanations have become a pressing matter in AI research. This development was caused by the increased use of black-box models and a realization of the importance of trustworthy AI. In particular, explanations are necessary for human–agent interactions to ensure that the user can trust the agent and that collaborations are effective. Human–agent interactions are complex social scenarios involving a user, an autonomous agent, and an environment or task with its own distinct properties. Thus, such interactions require a wide variety of explanations, which are not covered by the methods of a single AI discipline, such as computer vision or natural language processing. In this paper, we map out what types of explanations are important for human–agent interactions, surveying the field via a scoping review. In addition to the typical introspective explanation tackled by explainability researchers, we look at assistive explanations, aiming to support the user with their task. Secondly, we survey what causes the need for an explanation in the first place. We identify a variety of human–agent interaction-specific causes and categorize them by whether they are centered on the agent’s behavior, the user’s mental state, or an external entity. Our overview aims to guide robotics practitioners in designing agents with more comprehensive explanation-related capacities, considering different explanation types and the concrete times when explanations should be given.
