Uses and Interfaces

Uses

One of the most prevalent uses of XAI in the literature is within the medical field (Panigutti; Confalonieri; Mosqueira-Rey; Gaube; van der Velden). Decision Support Systems, known in the industry as Clinical Decision Support Systems, or CDSS, have long been used to assist with the complex diagnostic process (Sutton 2020). This is an especially salient example because it's relevant to the everyday person, regardless of their affiliation with the medical field: would you want your doctor using an MLDSS that they didn't understand?

Other notable examples that have recently come under fire in the media involve notoriously biased MLDSS: hiring algorithms, such as Amazon's (Crawford), and the COMPAS algorithm used to score detainee risk levels (Crawford; Kearns and Roth). MLDSS are increasingly used in HR, as well as in the criminal justice system. There have been several examples of controversial AI-based methods for predicting where crime will happen, and these models are currently being used in several major cities across the US (including here in Portland!).

MLDSS can crop up in any industry, though. One interesting paper looks at jazz musicians playing with an AI drummer (McCormack et al.). Another example is how pilots interact with MLDSS (Zhang et al.), along with analyses of the interactions between drivers and self-driving cars (Tambwekar and Gombolay).

The point is: think of an industry or job, and chances are MLDSS can be used in that position, if they aren't being used already. The problem is that most people who work in fields outside of computer science don't understand how artificial intelligence works, and they probably won't be given that background before being asked to use an MLDSS. Even XAI explanations don't necessarily work as intended, because XAI has its roots in ML development.

While XAI definitely draws on the Expert System for its explanations, the technical aspect of XAI stems from ML model engineers. Attempting to break into the black box was, initially, an exercise in improving model accuracy (Confalonieri; Bhatt 2020; Miller 2017). Interpretability is often as important to engineers as it is to users: it's difficult to determine how to fix a broken model while having no idea what it's actually doing (Molnar). Instead of relying on trial and error to fix a model, engineers use techniques like learned features, saliency maps, and adversarial examples to better understand how DNNs make their decisions (Molnar). These techniques are the foundation for the technical side of XAI explanations, and that is actually an enormous issue in XAI research. Read more about why in The Psychology.
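To make "saliency map" a bit more concrete, here is a minimal sketch of the gradient-based version of the idea. It assumes a trained PyTorch image classifier (model) and a preprocessed input tensor (image); both names are placeholders for illustration, not anything taken from a specific paper above.

```python
# A minimal sketch of a gradient-based saliency map; `model` and `image` are
# assumed placeholders for a trained PyTorch classifier and an input tensor.
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    model.eval()
    image = image.clone().requires_grad_(True)   # track gradients w.r.t. the pixels
    scores = model(image.unsqueeze(0))           # forward pass, shape (1, num_classes)
    top_class = scores.argmax(dim=1).item()      # the class the model predicted
    scores[0, top_class].backward()              # gradient of the top score w.r.t. the input
    # Large gradient magnitudes mark the pixels that most affect the prediction.
    return image.grad.abs().max(dim=0).values    # collapse the color channels
```

The resulting heatmap highlights which pixels the prediction is most sensitive to, which is exactly the kind of engineer-facing debugging aid described above.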


Interfaces

While there are just about as many interfaces for XAI as there are MLDSS to explain, some of the more popular ones are available open source.

LIME

Local Interpretable Model-Agnostic Explanations is a project out of the University of Washington. The video to the right gives a broad overview of the LIME system in layman's terms. LIME was one of the first interfaces in the current wave of XAI. The first paper written about it was published in 2016, and since then it's been cited by over 4,000 other papers.

Like the name suggests, LIME is model-agnostic, which means it's applicable to any model. Its explanations are local, which means they focus on a specific instance of a model, with specific input data. It's also fairly easy to use: it's a Python package that's simple to install. Use the slideshow to the right to examine a LIME explanation of a text processor. The model predicts whether the given text is atheist or Christian, and then provides a breakdown of which words led it to that conclusion.

LIME creates these explanations by perturbing the input and rerunning the model to show which parts of the input, which features, are influencing the prediction. This is how it can be model-agnostic: it relies only on the input and the output, not on anything to do with the guts of the model itself.
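For a sense of what this looks like in practice, here is a minimal sketch of a local text explanation in the spirit of the slideshow example. The 20-newsgroups data, the scikit-learn pipeline, and the specific parameters are illustrative assumptions, not anything prescribed by LIME itself.

```python
# A minimal sketch of a LIME text explanation on an atheism-vs-Christian
# classifier; the dataset and model here are illustrative assumptions.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

categories = ['alt.atheism', 'soc.religion.christian']
train = fetch_20newsgroups(subset='train', categories=categories)

# Any classifier with a predict_proba works; LIME only sees inputs and outputs.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipeline.fit(train.data, train.target)

explainer = LimeTextExplainer(class_names=['atheism', 'christian'])
explanation = explainer.explain_instance(
    train.data[0],              # the specific instance being explained
    pipeline.predict_proba,     # the black-box prediction function
    num_features=6,             # how many words to include in the explanation
)
print(explanation.as_list())    # [(word, weight), ...] for this one prediction
```

Running this prints a short list of (word, weight) pairs for that one document, which is the kind of local, per-instance breakdown shown in the slideshow.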

SHAP

SHapley Additive exPlanations is similar to LIME. It's also open source, released under the MIT license (Lundberg and Lee). It uses Shapley values, a concept from game theory for dividing a total payout among players according to each player's contribution. Similar to LIME, SHAP isolates each of the features, then assigns each one a Shapley value so that, together with a baseline value, the values add up to the model's prediction (Molnar). The SHAP interface provides visualizations so the user can see how each feature's Shapley value impacts the prediction.
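As a rough illustration, here is a minimal sketch of how a SHAP summary plot like the one discussed below might be produced. The California housing data and the random-forest model are assumptions chosen for brevity, not part of the original example.

```python
# A minimal sketch of generating a SHAP summary plot; the dataset and model
# below are illustrative assumptions.
import shap
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor

data = fetch_california_housing(as_frame=True)
X = data.data.sample(500, random_state=0)   # subsample to keep this quick
y = data.target.loc[X.index]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)      # one value per feature, per prediction

# Additivity: the base value plus a row's SHAP values ≈ that row's prediction.
print(explainer.expected_value + shap_values[0].sum(), model.predict(X)[:1])

# Each dot is one feature's Shapley value for one prediction; dots far from
# zero are the features pushing that prediction up or down the most.
shap.summary_plot(shap_values, X)
```

The additivity check in the middle is the "add up to the prediction" property described above, and the final call produces the beeswarm-style summary plot discussed next.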

The SHAP plot to the right is an example of a SHAP summary plot. It looks visually interesting, but it's not very interpretable. Essentially, a feature with an extreme SHAP value, either very high or very low, has a large impact on the model's prediction, while SHAP values closer to zero have a smaller effect. The second image is a SHAP explanation for an image processor, which is a little more interpretable, but not by much. It also doesn't build much off of the bare-bones feature importance graph we looked at before.

In all, SHAP and LIME are just two of many, many interfaces being used in the XAI field. LIME puts a bit more emphasis on being interpretable than SHAP, while SHAP can give more detailed, complex explanations. This is emblematic of the central tension in the XAI field: accuracy or interpretability? Is it more important to give as much detail in the explanation as possible, or to make it as understandable as possible?
