Interpretable Clinical Classification with Kolmogorov-Arnold Networks
Published in arXiv, 2025
Why should a clinician trust an Artificial Intelligence (AI) prediction? Despite the increasing accuracy of machine learning methods in medicine, the lack of transparency continues to hinder their adoption in clinical practice. In this work, we explore Kolmogorov-Arnold Networks (KANs) for clinical classification tasks on tabular data. In contrast to traditional neural networks, KANs are function-based architectures that offer intrinsic interpretability through transparent, symbolic representations. We introduce Logistic-KAN, a flexible generalization of logistic regression, and the Kolmogorov-Arnold Additive Model (KAAM), a simplified additive variant that delivers transparent, symbolic formulas. Unlike "black-box" models that require post-hoc explainability tools, our models support built-in patient-level insights, intuitive visualizations, and nearest-patient retrieval. Across multiple health datasets, our models match or outperform standard baselines, while remaining fully interpretable. These results position KANs as a promising step toward trustworthy AI that clinicians can understand, audit, and act upon. We release the code for reproducibility in \codeurl.
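To illustrate the additive idea behind KAAM, here is a minimal sketch (not the authors' implementation): each tabular feature is passed through its own learnable univariate function, the per-feature outputs are summed, and a sigmoid maps the sum to a probability. With linear univariate functions this reduces to logistic regression; richer functions (splines in real KANs, a cubic polynomial here for brevity) generalize it. All class and variable names below are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class AdditiveKANSketch:
    """Hypothetical sketch of an additive, KAN-style classifier:
    one learnable univariate function per feature (parameterized here
    as a cubic polynomial), summed and passed through a sigmoid."""

    def __init__(self, n_features, degree=3, seed=0):
        rng = np.random.default_rng(seed)
        # coeffs[j, k] = coefficient of x_j**k in the j-th univariate function
        self.coeffs = rng.normal(scale=0.1, size=(n_features, degree + 1))

    def feature_functions(self, X):
        # phi_j(x_j) = sum_k coeffs[j, k] * x_j**k, evaluated per feature;
        # each phi_j can be plotted or read off symbolically, which is the
        # source of the model's interpretability
        powers = np.stack([X**k for k in range(self.coeffs.shape[1])], axis=-1)
        return np.einsum("njk,jk->nj", powers, self.coeffs)

    def predict_proba(self, X):
        # additive structure: the log-odds are a sum of per-feature terms
        return sigmoid(self.feature_functions(X).sum(axis=1))

model = AdditiveKANSketch(n_features=2)
X = np.array([[0.5, -1.0], [1.0, 2.0]])
p = model.predict_proba(X)  # one probability per patient row
```

Because the log-odds decompose feature by feature, the per-patient contribution of each feature is simply its `phi_j(x_j)` term, which supports the patient-level insights described above.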
Recommended citation: Almodóvar, A., Apellániz, P. A., Garrido, A., Fernández-Salvador, F., Zazo, S., & Parras, J. (2025). Interpretable clinical classification with Kolmogorov-Arnold networks. arXiv preprint arXiv:2509.16750. /files/2025-09-20-class-kans.pdf
