
Psychological Foundations of Explainability and Interpretability in Artificial Intelligence

In this paper, the case is made that interpretability and explainability are distinct requirements for machine learning systems. To make this case, an overview is provided of the literature in experimental psychology pertaining to interpretation (especially of numerical stimuli) and comprehension. Interpretation refers to the ability to contextualize a model’s output in a manner that relates it to the system’s designed functional purpose, and the goals, values, and preferences of end users. In contrast, explanation refers to the ability to accurately describe the mechanism, or implementation, that led to an algorithm’s output, often so that the algorithm can be improved in some way.

Beyond these definitions, the review shows that humans differ from one another in systematic ways that affect the extent to which they prefer to make decisions based on detailed explanations versus less precise interpretations. These individual differences, such as personality traits and skills, are associated with users' abilities to derive meaningful interpretations from precise explanations of model output. This implies that system output should be tailored to different types of users.

  • Author(s):
  • David A. Broniatowski
Format:
  • White Paper
Topics:
Website: Visit Publisher Website
Publisher: National Institute of Standards and Technology (NIST)
Published: April 1, 2021
License: Public Domain
