The Psychology

Explanations Must Vary

The explanation a software engineer can understand is different from the one a lay user can understand. The efficacy of an explanation changes depending on who the user is and how much they know. The problem with many XAI models is that they're designed by people who don't need them. Explanations need to be interpretable, but interpretable to whom? While XAI models are getting better at focusing on the user experience, most cutting-edge interfaces are still developed by software engineers and programmers.

Tim Miller describes this phenomenon as akin to "the inmates running the asylum," a reference to a 2004 book by Alan Cooper about how the tech industry is mismanaged. Miller's article points out that very few studies on XAI actually incorporate psychological or philosophical theories of explanation. The asylum, or the XAI industry, is being run by programmers who don't understand what makes a good explanation.

Explanations Are Social

It's been well documented that humans interact with computers in a social manner (Nass 1994). MLDSS are no different. It's also been noted that there are two reasons people seek explanations: one is to learn something about an abstract process, the other is "to facilitate social interaction…to create a shared meaning" (Miller 2018; Malle). This is where XAI is deeply rooted in the Human-Computer Interaction field. The basis of XAI is examining the relationship between model and user, and adding explanation to strengthen that interaction, building trust and increasing the user's knowledge of the model.

Motivation

The motivation behind XAI matters. If we just want people to trust AI, then making a convincing explanation is more important than making a correct one.

How does one determine if an explanation is correct? Does it matter if the explanation is right? If the goal is utility, or trust, or anything that relies on convincing the user, the success of an explanation often reflects a "correlation between the plausibility of the explanations and the model's performance" (Jacovi and Goldberg 2020) rather than a correct explanation.

Understanding is different from trust, which is different from utility. Just because an engineer understands how a model works doesn't mean they will be able to use its prediction. A doctor might not understand how a model came to its conclusion, but the prediction could still help the doctor reach a diagnosis.

Some say that explanations aren't useful at all, and the model should simply report a confidence level, like a percentage (Zhang et al.). The idea is that a confidence score calibrates the user's trust in the prediction without worrying about things like understanding or interpretability.
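As a rough sketch of what that "just report a confidence" interface might look like (the dataset and classifier below are placeholders for illustration, not anything the cited authors used), the user sees only a predicted class and a percentage, with no explanation attached:

# Minimal sketch of confidence-only reporting: show the prediction and a
# percentage, no explanation. Assumes scikit-learn and its built-in breast
# cancer dataset; any probabilistic classifier could stand in here.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# Surface only the predicted class and the model's probability for that
# class, formatted as a percentage. (Raw predict_proba scores are not
# guaranteed to be well calibrated; a calibration step would be needed
# before the percentage could honestly be called "confidence".)
for probs in model.predict_proba(X_test[:3]):
    print(f"Prediction: class {probs.argmax()} (confidence: {probs.max():.0%})")

Whether a bare percentage like this actually calibrates trust better than an explanation is exactly the empirical question the literature above is debating.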

Some authors (Ryan 2020) claim it's dubious whether AI should be trusted at all. Others suggest that instead of relying on explainable models, we should revert to truly interpretable ones.
