The challenge of imputation in explainable artificial intelligence models
Conference Publication
Ahmad, MA; Eckert, C; Teredesai, A
Published in: CEUR Workshop Proceedings
January 1, 2019
Explainable models in Artificial Intelligence are often employed to ensure transparency and accountability of AI systems. The fidelity of the explanations depends on the algorithms used as well as on the fidelity of the data. Many real-world datasets have missing values that can greatly influence explanation fidelity. The standard way to deal with such scenarios is imputation. Imputation can, however, produce values that correspond to counterfactual settings, and acting on explanations from AI models built on imputed values may lead to unsafe outcomes. In this paper, we explore different settings in which AI models with imputation can be problematic and describe ways to address such scenarios.
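The abstract's core hazard, imputed values that describe a counterfactual record, is easy to see in a small example. The following is a minimal sketch, not code from the paper: it assumes scikit-learn's SimpleImputer and hypothetical clinical features, and shows how mean imputation can produce a record no real patient could have.

```python
# Minimal sketch (hypothetical data, not from the paper): mean imputation
# can produce counterfactual records that an explainer then treats as real.
import numpy as np
from sklearn.impute import SimpleImputer

# Hypothetical clinical records: [age, is_pregnant (0/1), systolic_bp],
# with np.nan marking missing values.
X = np.array([
    [34.0, 1.0, 118.0],
    [61.0, 0.0, np.nan],
    [47.0, np.nan, 135.0],   # pregnancy status missing
])

imputer = SimpleImputer(strategy="mean")
X_imputed = imputer.fit_transform(X)

# The missing binary feature is imputed as 0.5 -- a "patient" that is
# half-pregnant. No observable record looks like this, so an explanation
# computed on this row describes a counterfactual, not the patient.
print(X_imputed[2])  # [ 47.    0.5  126.5]
```

Any downstream attribution method (for example, a LIME- or SHAP-style explainer) run on that row would assign importance to the impossible 0.5 value, which is exactly the unsafe-explanation scenario the abstract describes.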
Published In
CEUR Workshop Proceedings
ISSN
1613-0073
Publication Date
January 1, 2019
Volume
2419
Related Subject Headings
- 4609 Information systems
Citation
APA: Ahmad, M. A., Eckert, C., & Teredesai, A. (2019). The challenge of imputation in explainable artificial intelligence models. In CEUR Workshop Proceedings (Vol. 2419).
Chicago: Ahmad, M. A., C. Eckert, and A. Teredesai. "The challenge of imputation in explainable artificial intelligence models." In CEUR Workshop Proceedings, Vol. 2419, 2019.
ICMJE: Ahmad MA, Eckert C, Teredesai A. The challenge of imputation in explainable artificial intelligence models. In: CEUR Workshop Proceedings. 2019.
MLA: Ahmad, M. A., et al. "The challenge of imputation in explainable artificial intelligence models." CEUR Workshop Proceedings, vol. 2419, 2019.
NLM: Ahmad MA, Eckert C, Teredesai A. The challenge of imputation in explainable artificial intelligence models. CEUR Workshop Proceedings. 2019.