References

Alber, Maximilian, et al. iNNvestigate Neural Networks! arXiv:1808.04260, arXiv, 13 Aug. 2018. arXiv.org, https://doi.org/10.48550/arXiv.1808.04260.

Angelov, Plamen P., et al. "Explainable Artificial Intelligence: An Analytical Review." WIREs Data Mining and Knowledge Discovery, vol. 11, no. 5, 2021, p. e1424. Wiley Online Library, https://doi.org/10.1002/widm.1424.

Angelov, Plamen, and Eduardo Soares. "Towards Explainable Deep Neural Networks (XDNN)." Neural Networks, vol. 130, Oct. 2020, pp. 185–94. DOI.org (Crossref), https://doi.org/10.1016/j.neunet.2020.07.010.

Antoniadi, Anna Markella, et al. "Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review." Applied Sciences, vol. 11, no. 11, Jan. 2021, p. 5088. www.mdpi.com, https://doi.org/10.3390/app11115088.

Antoniou, Grigoris, et al. "Mental Health Diagnosis: A Case for Explainable Artificial Intelligence." International Journal on Artificial Intelligence Tools, vol. 31, no. 03, May 2022, p. 2241003. worldscientific.com (Atypon), https://doi.org/10.1142/S0218213022410032.

Avril, Eugénie. "Providing Different Levels of Accuracy about the Reliability of Automation to a Human Operator: Impact on Human Performance." Ergonomics, vol. 66, no. 2, Feb. 2023, pp. 217–26. Taylor and Francis+NEJM, https://doi.org/10.1080/00140139.2022.2069870.

Bansal, Gagan, Besmira Nushi, et al. "Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, vol. 7, Oct. 2019, pp. 2–11. ojs.aaai.org, https://doi.org/10.1609/hcomp.v7i1.5285.

Bansal, Gagan, Tongshuang Wu, et al. "Does the Whole Exceed Its Parts? The Effect of AI Explanations on Complementary Team Performance." Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, 2021, pp. 1–16. ACM Digital Library, https://doi.org/10.1145/3411764.3445717.

Bhatt, Umang, et al. "Explainable Machine Learning in Deployment." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery, 2020, pp. 648–57. ACM Digital Library, https://doi.org/10.1145/3351095.3375624.

Bond, Raymond R., et al. "Automation Bias in Medicine: The Influence of Automated Diagnoses on Interpreter Accuracy and Uncertainty When Reading Electrocardiograms." Journal of Electrocardiology, vol. 51, no. 6, Supplement, Nov. 2018, pp. S6–11. ScienceDirect, https://doi.org/10.1016/j.jelectrocard.2018.08.007.

Buchanan, Bruce G., and Edward Hance Shortliffe, editors. Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley, 1984.

Cai, Carrie J., et al. "Human-Centered Tools for Coping with Imperfect Algorithms During Medical Decision-Making." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, 2019, pp. 1–14. ACM Digital Library, https://doi.org/10.1145/3290605.3300234.

Confalonieri, Roberto, et al. "A Historical Perspective of Explainable Artificial Intelligence." WIREs Data Mining and Knowledge Discovery, vol. 11, no. 1, 2021, p. e1391. Wiley Online Library, https://doi.org/10.1002/widm.1391.

Covert, Ian, et al. Explaining by Removing: A Unified Framework for Model Explanation. arXiv:2011.14878, arXiv, 21 Nov. 2020. arXiv.org, https://arxiv.org/abs/2011.14878v2.

Du, Mengnan, et al. "Techniques for Interpretable Machine Learning." Communications of the ACM, vol. 63, no. 1, Dec. 2019, pp. 68–77. ACM Digital Library, https://doi.org/10.1145/3359786.

Dvorak, Julia, et al. Explainable AI: A Key Driver for AI Adoption, a Mistaken Concept, or a Practically Irrelevant Feature? 2022.

Ehsan, Upol, and Mark O. Riedl. Social Construction of XAI: Do We Need One Definition to Rule Them All? arXiv:2211.06499, arXiv, 11 Nov. 2022. arXiv.org, https://arxiv.org/abs/2211.06499v1.

Gaube, Susanne, et al. "Non-Task Expert Physicians Benefit from Correct Explainable AI Advice When Reviewing X-Rays." Scientific Reports, vol. 13, no. 1, Jan. 2023, p. 1383. www.nature.com, https://doi.org/10.1038/s41598-023-28633-w.

Ghajargar, Maliheh, Jeffrey Bardzell, Alison Smith Renner, et al. "From 'Explainable AI' to 'Graspable AI.'" Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction, Association for Computing Machinery, 2021, pp. 1–4. ACM Digital Library, https://doi.org/10.1145/3430524.3442704.

Ghajargar, Maliheh, Jeffrey Bardzell, Alison Marie Smith-Renner, et al. "Graspable AI: Physical Forms as Explanation Modality for Explainable AI." Sixteenth International Conference on Tangible, Embedded, and Embodied Interaction, Association for Computing Machinery, 2022, pp. 1–4. ACM Digital Library, https://doi.org/10.1145/3490149.3503666.

Gillath, Omri, et al. "Attachment and Trust in Artificial Intelligence." Computers in Human Behavior, vol. 115, Feb. 2021, p. 106607. DOI.org (Crossref), https://doi.org/10.1016/j.chb.2020.106607.

Groom, Victoria, and Clifford Nass. "Can Robots Be Teammates?: Benchmarks in Human–Robot Teams." Interaction Studies, vol. 8, no. 3, Dec. 2007, pp. 483–500.

Gsenger, Rita, and Toma Strle. "Trust, Automation Bias and Aversion: Algorithmic Decision-Making in the Context of Credit Scoring." Interdisciplinary Description of Complex Systems: INDECS, vol. 19, no. 4, Dec. 2021, pp. 542–60. hrcak.srce.hr, https://doi.org/10.7906/indecs.19.4.7.

Guo, Shunan, et al. "Visualizing Uncertainty and Alternatives in Event Sequence Predictions." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, 2019, pp. 1–12. ACM Digital Library, https://doi.org/10.1145/3290605.3300803.

Guo, Weisi. "Explainable Artificial Intelligence for 6G: Improving Trust between Human and Machine." IEEE Communications Magazine, vol. 58, no. 6, June 2020, pp. 39–45. IEEE Xplore, https://doi.org/10.1109/MCOM.001.2000050.

Haque, AKM Bahalul, et al. "Explainable Artificial Intelligence (XAI) from a User Perspective: A Synthesis of Prior Literature and Problematizing Avenues for Future Research." Technological Forecasting and Social Change, vol. 186, Jan. 2023, p. 122120. ScienceDirect, https://doi.org/10.1016/j.techfore.2022.122120.

Hofeditz, Lennart, et al. "Applying XAI to an AI-Based System for Candidate Management to Mitigate Bias and Discrimination in Hiring." Electronic Markets, vol. 32, no. 4, Dec. 2022, pp. 2207–33. DOI.org (Crossref), https://doi.org/10.1007/s12525-022-00600-9.

Hoffman, Robert R., et al. "Psychology and AI at a Crossroads: How Might Complex Systems Explain Themselves?" The American Journal of Psychology, vol. 135, no. 4, Dec. 2022, pp. 365–78. Silverchair, https://doi.org/10.5406/19398298.135.4.01.

Holzinger, Andreas, et al. "Towards Multi-Modal Causability with Graph Neural Networks Enabling Information Fusion for Explainable AI." Information Fusion, vol. 71, July 2021, pp. 28–37. ScienceDirect, https://doi.org/10.1016/j.inffus.2021.01.008.

Ignatiev, Alexey. "Towards Trustable Explainable AI." Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, International Joint Conferences on Artificial Intelligence Organization, 2020, pp. 5154–58. DOI.org (Crossref), https://doi.org/10.24963/ijcai.2020/726.

Jacovi, Alon, and Yoav Goldberg. "Towards Faithfully Interpretable NLP Systems: How Should We Define and Evaluate Faithfulness?" Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, 2020, pp. 4198–205. ACLWeb, https://doi.org/10.18653/v1/2020.acl-main.386.

Kaur, Harmanpreet, et al. "Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning." Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, 2020, pp. 1–14. ACM Digital Library, https://doi.org/10.1145/3313831.3376219.

Kenny, Eoin M., et al. "Explaining Black-Box Classifiers Using Post-Hoc Explanations-by-Example: The Effect of Explanations and Error-Rates in XAI User Studies." Artificial Intelligence, vol. 294, May 2021, p. 103459. ScienceDirect, https://doi.org/10.1016/j.artint.2021.103459.

Lai, Vivian, et al. Selective Explanations: Leveraging Human Input to Align Explainable AI. arXiv:2301.09656, arXiv, 23 Jan. 2023. arXiv.org, https://arxiv.org/abs/2301.09656v1.

Lai, Vivian, and Chenhao Tan. "On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection." Proceedings of the Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery, 2019, pp. 29–38. ACM Digital Library, https://doi.org/10.1145/3287560.3287590.

Lundberg, Scott M., and Su-In Lee. "A Unified Approach to Interpreting Model Predictions." Proceedings of the 31st International Conference on Neural Information Processing Systems, Curran Associates Inc., 2017, pp. 4768–77.

McCormack, Jon, et al. "In a Silent Way: Communication Between AI and Improvising Musicians Beyond Sound." Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, 2019, pp. 1–11. ACM Digital Library, https://doi.org/10.1145/3290605.3300268.

Miller, Tim, et al. Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences. arXiv:1712.00547, arXiv, 4 Dec. 2017. arXiv.org, https://doi.org/10.48550/arXiv.1712.00547.

---. Explanation in Artificial Intelligence: Insights from the Social Sciences. arXiv:1706.07269, arXiv, 14 Aug. 2018. arXiv.org, https://doi.org/10.48550/arXiv.1706.07269.

Mohseni, Sina, et al. "A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems." ACM Transactions on Interactive Intelligent Systems, vol. 11, no. 3–4, Sept. 2021, pp. 24:1–24:45, https://doi.org/10.1145/3387166.

Mosqueira-Rey, Eduardo, et al. "Human-in-the-Loop Machine Learning: A State of the Art." Artificial Intelligence Review, Aug. 2022. Springer Link, https://doi.org/10.1007/s10462-022-10246-w.

Mucha, Henrik, et al. "Interfaces for Explanations in Human-AI Interaction: Proposing a Design Evaluation Approach." Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, 2021, pp. 1–6. ACM Digital Library, https://doi.org/10.1145/3411763.3451759.

Nourani, Mahsan, et al. "The Effects of Meaningful and Meaningless Explanations on Trust and Perceived System Accuracy in Intelligent Systems." Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, vol. 7, Oct. 2019, pp. 97–105. ojs.aaai.org, https://doi.org/10.1609/hcomp.v7i1.5284.

Paleja, Rohan, et al. The Utility of Explainable AI in Ad Hoc Human-Machine Teaming. arXiv:2209.03943, arXiv, 8 Sept. 2022. arXiv.org, https://doi.org/10.48550/arXiv.2209.03943.

Panigutti, Cecilia, et al. "Understanding the Impact of Explanations on Advice-Taking: A User Study for AI-Based Clinical Decision Support Systems." Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, 2022, pp. 1–9. ACM Digital Library, https://doi.org/10.1145/3491102.3502104.

Ravi, Mudavath, et al. "A Comparative Review of Expert Systems, Recommender Systems, and Explainable AI." 2022 IEEE 7th International Conference for Convergence in Technology (I2CT), 2022, pp. 1–8. IEEE Xplore, https://doi.org/10.1109/I2CT54291.2022.9824265.

Ribeiro, Marco Tulio, et al. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. arXiv:1602.04938, arXiv, 9 Aug. 2016. arXiv.org, https://doi.org/10.48550/arXiv.1602.04938.

Ribera Turró, Mireia, and Agata Lapedriza. Can We Do Better Explanations? A Proposal of User-Centered Explainable AI. 2019.

Rohlfing, Katharina J., et al. "Explanation as a Social Practice: Toward a Conceptual Framework for the Social Design of AI Systems." IEEE Transactions on Cognitive and Developmental Systems, vol. 13, no. 3, Sept. 2021, pp. 717–28. IEEE Xplore, https://doi.org/10.1109/TCDS.2020.3044366.

Ryan, Mark. "In AI We Trust: Ethics, Artificial Intelligence, and Reliability." Science and Engineering Ethics, vol. 26, no. 5, Oct. 2020, pp. 2749–67. Springer Link, https://doi.org/10.1007/s11948-020-00228-y.

Sahoh, Bukhoree, and Anant Choksuriwong. "The Role of Explainable Artificial Intelligence in High-Stakes Decision-Making Systems: A Systematic Review." Journal of Ambient Intelligence and Humanized Computing, Apr. 2023. Springer Link, https://doi.org/10.1007/s12652-023-04594-w.

Schemmer, Max, et al. On the Influence of Explainable AI on Automation Bias. arXiv:2204.08859, arXiv, 19 Apr. 2022. arXiv.org, https://doi.org/10.48550/arXiv.2204.08859.

Schmidt, Philipp, and Felix Biessmann. Quantifying Interpretability and Trust in Machine Learning Systems. arXiv:1901.08558, arXiv, 20 Jan. 2019. arXiv.org, https://doi.org/10.48550/arXiv.1901.08558.

Shneiderman, Ben. "Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy." International Journal of Human–Computer Interaction, vol. 36, no. 6, Apr. 2020, pp. 495–504. Taylor and Francis+NEJM, https://doi.org/10.1080/10447318.2020.1741118.

Spinner, Thilo, et al. "ExplAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning." IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 1, Jan. 2020, pp. 1064–74. IEEE Xplore, https://doi.org/10.1109/TVCG.2019.2934629.

Sutton, Reed T., et al. "An Overview of Clinical Decision Support Systems: Benefits, Risks, and Strategies for Success." Npj Digital Medicine, vol. 3, no. 1, Feb. 2020, pp. 1–10. www.nature.com, https://doi.org/10.1038/s41746-020-0221-y.

Swartout, W., et al. "Explanations in Knowledge Systems: Design for Explainable Expert Systems." IEEE Expert, vol. 6, no. 3, June 1991, pp. 58–64. DOI.org (Crossref), https://doi.org/10.1109/64.87686.

Tambwekar, Pradyumna, and Matthew Gombolay. Towards Reconciling Usability and Usefulness of Explainable AI Methodologies. arXiv:2301.05347, arXiv, 12 Jan. 2023. arXiv.org, https://doi.org/10.48550/arXiv.2301.05347.

Taylor, J. Eric T., and Graham W. Taylor. "Artificial Cognition: How Experimental Psychology Can Help Generate Explainable Artificial Intelligence." Psychonomic Bulletin & Review, vol. 28, no. 2, Apr. 2021, pp. 454–75. Springer Link, https://doi.org/10.3758/s13423-020-01825-5.

van der Velden, Bas H. M., et al. "Explainable Artificial Intelligence (XAI) in Deep Learning-Based Medical Image Analysis." Medical Image Analysis, vol. 79, July 2022, p. 102470. ScienceDirect, https://doi.org/10.1016/j.media.2022.102470.

Vereschak, Oleksandra, et al. "How to Evaluate Trust in AI-Assisted Decision Making? A Survey of Empirical Methodologies." Proceedings of the ACM on Human-Computer Interaction, vol. 5, no. CSCW2, Oct. 2021, pp. 327:1–327:39, https://doi.org/10.1145/3476068.

Woodcock, Claire, et al. "The Impact of Explanations on Layperson Trust in Artificial Intelligence–Driven Symptom Checker Apps: Experimental Study." Journal of Medical Internet Research, vol. 23, no. 11, Nov. 2021, p. e29386. www.jmir.org, https://doi.org/10.2196/29386.

Xin, Doris, et al. "Accelerating Human-in-the-Loop Machine Learning: Challenges and Opportunities." Proceedings of the Second Workshop on Data Management for End-To-End Machine Learning, Association for Computing Machinery, 2018, pp. 1–4. ACM Digital Library, https://doi.org/10.1145/3209889.3209897.

Yang, Qian, et al. "Re-Examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design." Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery, 2020, pp. 1–13. ACM Digital Library, https://doi.org/10.1145/3313831.3376301.

Zhang, Yunfeng, et al. "Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020, pp. 295–305. arXiv.org, https://doi.org/10.1145/3351095.3372852.

Zhang, Zelun Tony, et al. Resilience Through Appropriation: Pilots' View on Complex Decision Support. 2023.

Zhou, Yilun, et al. ExSum: From Local Explanations to Model Understanding. arXiv:2205.00130, arXiv, 29 Apr. 2022. arXiv.org, https://doi.org/10.48550/arXiv.2205.00130.
