Ramsey, A., Kale, A., Kassa, Y., Gandhi, R., & Ricks, B. (2023). Toward Interactive Visualizations for Explaining Machine Learning Models. In Jaziar Radianti, Ioannis Dokas, Nicolas Lalone, & Deepak Khazanchi (Eds.), Proceedings of the 20th International ISCRAM Conference (pp. 837–852). Omaha, USA: University of Nebraska at Omaha.
Abstract: Researchers and end users generally demand more trust and transparency from machine learning (ML) models due to the complexity of their learned rule spaces. The field of eXplainable Artificial Intelligence (XAI) seeks to rectify this problem by developing methods of explaining ML models and the attributes used in making inferences. In the area of structural health monitoring of bridges, machine learning can offer insight into the relation between a bridge's conditions and its environment over time. In this paper, we describe three visualization techniques that explain decision tree (DT) ML models that identify which features of a bridge make it more likely to receive repairs. Each of these visualizations enables interpretation, exploration, and clarification of complex DT models. We outline the development of these visualizations, along with their validation by experts in AI and in bridge design and engineering. This work has inherent benefits in the field of XAI as a direction for future research and as a tool for interactive visual explanation of ML models.