A Human-Centered Explainable AI Framework for Trustworthy Radiological Decision Support Systems
Keywords:
Human-Centered AI; Explainable Artificial Intelligence; Radiological Decision Support; Medical Imaging; Clinical Trust; Ethical AI; Healthcare Systems

Abstract
Artificial intelligence-driven radiological decision support systems have demonstrated impressive diagnostic accuracy, yet their adoption in clinical settings remains limited, largely because of concerns about transparency, interpretability, and trust. Many of these systems function as black boxes, offering predictions without explanations that are meaningful in a clinical context. This article introduces a human-centered explainable artificial intelligence (XAI) framework aimed at strengthening transparency, trust, and accountability in radiological decision support.
The framework combines deep learning-based image analysis with post-hoc explainability techniques to generate visual and feature-level explanations that align with radiologists’ diagnostic reasoning. It prioritizes clinician interaction, ethical accountability, and seamless integration into existing workflows, rather than predictive performance alone. The study brings together experimental insights from the development of explainable models and evaluates the framework from technical, clinical, and social perspectives. The results indicate that treating explainability as a fundamental design principle can significantly enhance clinician confidence, improve decision-making and ethical compliance, and enable safer human–AI collaboration in radiological practice.
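To make the abstract’s description concrete, the sketch below shows one way the visual-explanation component of such a framework might be realized, using Grad-CAM-style saliency maps in PyTorch. The ResNet-18 backbone, target layer, and helper names are illustrative assumptions for this sketch, not the paper’s actual implementation.

```python
# A minimal Grad-CAM sketch of a post-hoc visual explanation for a CNN-based
# image classifier. All model and layer choices here are assumptions; a real
# system would use a radiology-trained network and validated preprocessing.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)  # placeholder backbone, untrained
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()          # feature maps from the layer

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()    # gradients w.r.t. those maps

# Hook the last convolutional block, where spatial detail is still preserved.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """Return an [H, W] heatmap in [0, 1] for one preprocessed [3, H, W] image."""
    logits = model(image.unsqueeze(0))           # [1, num_classes]
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()  # explain the top prediction
    model.zero_grad()
    logits[0, class_idx].backward()

    acts = activations["value"][0]               # [C, h, w]
    grads = gradients["value"][0]                # [C, h, w]
    weights = grads.mean(dim=(1, 2))             # global-average-pool gradients
    cam = F.relu((weights[:, None, None] * acts).sum(dim=0))  # weighted sum
    cam = F.interpolate(cam[None, None], size=image.shape[1:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

In a clinical interface of the kind the framework envisions, the resulting heatmap would be overlaid on the input radiograph so the radiologist can check whether the model attended to clinically plausible regions before acting on its prediction.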
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.