XAI-Based Evaluation Model for Navigating Academic Ambiguity

Authors

  • Rakesh Kumar Pathak, Assistant Professor, School of Computer Science, Xavier University, Patna
  • Prakash Upadhyay, Assistant Professor, School of Computer Science, Xavier University, Patna

DOI:

https://doi.org/10.69968/ijisem.2026v5Si153-58

Keywords:

Explainable AI, assessment clarity, rubric interpretation, fairness, educational evaluation

Abstract

Academic evaluation often fails to reflect students' abilities accurately. Ample evidence shows that students who did not score well in their academic examinations have nonetheless done exceptionally well in competitive exams and in later life. Academic evaluation also becomes unclear because instructions are interpreted differently by teachers, raters score inconsistently, and automated grading systems are not transparent. To date, our assessment modules have been based largely on memory-based evaluation rather than ability-based assessment. This paper proposes an Explainable AI (XAI)–based evaluation model to reduce this ambiguity by:

1) Making automated and mixed (human + AI) scoring more transparent,

2) Showing clear explanations for each feature and each student’s score, and

3) Providing measurable data that can help improve rubrics.

We include a synthetic-data experiment to show how an interpretable model and XAI methods can

  1. Highlight the main factors affecting grades and
  2. Detect where strict grading or peer-review differences create ambiguity.
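The two goals above can be illustrated with a minimal sketch. The feature names, weights, and disagreement threshold below are illustrative assumptions, not values from the paper: a transparent linear scorer exposes each rubric feature's additive contribution to a grade, and a simple spread measure over multiple raters' scores flags submissions where peer-review differences create ambiguity.

```python
import statistics

# Hypothetical rubric features and weights for a transparent linear scorer
# (illustrative assumptions, not taken from the paper).
FEATURES = ["content", "structure", "originality", "language"]
WEIGHTS = {"content": 0.4, "structure": 0.25, "originality": 0.2, "language": 0.15}

def explain_score(feature_scores):
    """Return each feature's additive contribution and the final score."""
    contributions = {f: WEIGHTS[f] * feature_scores[f] for f in FEATURES}
    return contributions, sum(contributions.values())

def flag_ambiguous(rater_scores, threshold=1.0):
    """Flag a submission when raters disagree by more than `threshold` points
    (sample standard deviation across raters)."""
    return statistics.stdev(rater_scores) > threshold

# One synthetic student: per-feature scores on a 0-10 scale.
student = {"content": 8, "structure": 6, "originality": 9, "language": 7}
contributions, total = explain_score(student)
print(f"total={total:.2f}", contributions)  # total=7.55

# Three raters scored the same essay; a large spread signals ambiguity.
print(flag_ambiguous([6.0, 8.5, 9.0]))  # True
print(flag_ambiguous([7.0, 7.2, 7.1]))  # False
```

Because every contribution is a plain product of a weight and a feature score, the explanation is exact rather than approximate, which is the kind of per-feature transparency the abstract describes.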

The findings show that XAI can offer practical insights that make academic assessment clearer, fairer, and more trustworthy. The paper concludes with key recommendations and directions for future research.

References

[1] A. Abdul, J. Vermeulen, D. Wang, B. Y. Lim, and M. Kankanhalli, “Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda,” in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18), New York, NY, USA: Association for Computing Machinery, 2018, pp. 1–18, doi: 10.1145/3173574.3174156.

[2] B. Shneiderman, “Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy,” Int J Hum Comput Interact, vol. 36, no. 6, pp. 495–504, 2020, doi: 10.1080/10447318.2020.1741118.

[3] F. Doshi-Velez and B. Kim, “Towards A Rigorous Science of Interpretable Machine Learning,” arXiv: Machine Learning, 2017, [Online]. Available: https://api.semanticscholar.org/CorpusID:11319376

[4] S. Naveed, G. Stevens, and D.-R. Kern, “An Overview of Empirical Evaluation of Explainable AI (XAI): A Comprehensive Guideline to User-Centered Evaluation in XAI,” Preprints, 2024100098, 2024, doi: 10.20944/preprints202410.0098.v1.

[5] T. Herrmann and S. Pfeiffer, “Keeping the organization in the loop: a socio-technical extension of human centered artificial intelligence,” AI Soc, vol. 38, no. 4, pp. 1523–1542, 2023, doi: 10.1007/s00146-022-01391-5.

[6] G. Vilone and L. Longo, “Notions of explainability and evaluation approaches for explainable artificial intelligence,” Information Fusion, vol. 76, pp. 89–106, 2021, doi: 10.1016/j.inffus.2021.05.009.

[7] B. Goodman and S. Flaxman, “European Union regulations on algorithmic decision-making and a ‘right to explanation’,” AI Magazine, vol. 38, no. 3, pp. 50–57, 2017, doi: 10.1609/aimag.v38i3.2741.

[8] D. Danks and A. J. London, “Regulating autonomous systems: beyond standards,” IEEE Intelligent Systems, vol. 32, no. 1, pp. 88–91, 2017, doi: 10.1109/MIS.2017.1.

[9] R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi, “A survey of methods for explaining black box models,” ACM Computing Surveys, vol. 51, no. 5, pp. 1–42, 2019, doi: 10.1145/3236009.

[10] L. Dib and L. Capus, “Classifying XAI Methods to Resolve Conceptual Ambiguity,” Technologies, vol. 13, no. 9, 390, 2025, doi: 10.3390/technologies13090390.

[11] L. Ai, “Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics — Quick Review,” Liner, Sep. 25, 2024. [Online]. Available: https://liner.com/review/navigating-the-maze-of-explainable-ai-a-systematic-approach-to

Published

09-05-2026

Section

Articles

How to Cite

[1]
Rakesh Kumar Pathak and Prakash Upadhyay 2026. XAI-Based Evaluation Model for Navigating Academic Ambiguity. International Journal of Innovations in Science, Engineering And Management. 5, 1 (May 2026), 53–58. DOI:https://doi.org/10.69968/ijisem.2026v5Si153-58.