Integrating Deep CNN Features with Classical Statistical Regression for Interpretable and Uncertainty-Aware Image Classification
DOI: https://doi.org/10.69968/ijisem.2026v5i1193-197

Keywords: Consumer Attitude, Social Media Advertising, Buying Behaviour, Digital Marketing, Purchase Decision, Consumer Perception, Online Advertising Influence

Abstract
Deep convolutional neural networks (CNNs) achieve state-of-the-art performance in image classification tasks, particularly in medical imaging. However, their limited interpretability and lack of formal statistical inference restrict their adoption in high-stakes clinical and regulatory settings. This study proposes a hybrid two-stage framework that integrates CNN-derived embeddings with classical statistical regression models to produce interpretable prediction systems with valid confidence intervals. Using a publicly available chest X-ray dataset, deep features extracted from a fine-tuned ResNet architecture were incorporated into logistic and mixed-effects regression models. Bootstrap resampling and permutation testing were employed to quantify uncertainty and compare model performance. Results demonstrate that the hybrid framework maintains competitive predictive accuracy while enabling estimation of odds ratios, hypothesis testing, and calibrated probability outputs. This integration bridges modern deep learning with classical statistical methodology and provides a pathway toward transparent and trustworthy AI deployment.
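The second stage of the framework described above can be sketched in a few lines: treat the CNN embeddings as ordinary covariates in a logistic regression, then use nonparametric bootstrap resampling to put a confidence interval on an odds ratio. The sketch below is illustrative only; it simulates a small feature matrix in place of real ResNet embeddings, and the sample sizes, learning rate, and bootstrap count are arbitrary choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for CNN embeddings: in the paper's pipeline these
# would be ResNet features extracted from chest X-rays; here we simulate
# a small feature matrix so the sketch is self-contained.
n, d = 400, 3
X = rng.normal(size=(n, d))
true_beta = np.array([0.8, -0.5, 0.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ true_beta))))

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-descent logistic regression (no intercept, no penalty)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        preds = 1.0 / (1.0 + np.exp(-(X @ beta)))
        beta -= lr * (X.T @ (preds - y)) / len(y)
    return beta

beta_hat = fit_logistic(X, y)

# Nonparametric bootstrap: refit on resampled rows to obtain a percentile
# confidence interval for the odds ratio of the first embedding feature.
B = 200
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = fit_logistic(X[idx], y[idx])[0]
lo, hi = np.exp(np.percentile(boot, [2.5, 97.5]))
print(f"odds ratio for feature 0: {np.exp(beta_hat[0]):.2f} "
      f"(95% bootstrap CI {lo:.2f}-{hi:.2f})")
```

Because the deep features enter a standard generalized linear model, the usual inferential machinery (odds ratios, Wald or bootstrap intervals, permutation tests) applies directly, which is the interpretability payoff the abstract describes.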
License
Copyright (c) 2026 Lalit Kumar Rawat, Anil Kumar

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Re-users must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. This license allows for redistribution, commercial and non-commercial, as long as the original work is properly credited.