New UMUAI Paper: Explainable AI and Human Decisions
We are excited to announce that our latest research paper, “Exploring the Impact of Explainable AI and Cognitive Capabilities on Users’ Decisions,” has been officially published in User Modeling and User-Adapted Interaction (Springer).
📄 Read the paper here:
https://link.springer.com/article/10.1007/s11257-025-09438-0
📌 Overview
As AI systems increasingly support or automate high-stakes decisions, a critical challenge is ensuring that people understand, trust, and appropriately rely on AI recommendations. This paper investigates how different explainable AI (XAI) approaches and AI confidence cues influence human decision-making and cognitive effort.
To study this, we conducted an online experiment in which 288 participants interacted with an AI-assisted loan approval system. The study examined how variations in AI explanations and confidence affected:
- Decision accuracy
- Reliance on AI recommendations
- Cognitive load (mental effort during decision-making)
We also explored whether these effects differed depending on participants’ Need for Cognition, a trait describing how much individuals enjoy engaging in effortful thinking.
🧠 Key Findings
🔹 AI Confidence Shapes User Behavior
Participants were more likely to rely on the AI — and experienced lower cognitive load — when the system communicated high confidence in its predictions. This highlights the importance of well-calibrated confidence signals in AI systems.
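For readers less familiar with the term, "well-calibrated" means that stated confidence matches empirical accuracy: among predictions the system reports at 80% confidence, roughly 80% should turn out correct. Below is a minimal sketch of such a reliability check. It is not from the paper; the synthetic data and binning scheme are illustrative assumptions.

```python
import numpy as np

def reliability_check(confidences, correct, n_bins=5):
    """Compare stated confidence with empirical accuracy, bin by bin.

    confidences: array of model confidence scores in [0, 1]
    correct:     boolean array, True where the prediction was right
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences >= lo) & (confidences < hi)
        if mask.sum() == 0:
            continue  # no predictions fell in this confidence bin
        stated = confidences[mask].mean()
        actual = correct[mask].mean()
        print(f"conf {lo:.1f}-{hi:.1f}: stated {stated:.2f}, "
              f"actual {actual:.2f}, n={mask.sum()}")

# Hypothetical data: in a well-calibrated system the two columns stay close.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=1000)
correct = rng.uniform(size=1000) < conf  # accuracy tracks stated confidence
reliability_check(conf, correct)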
🔹 Explanation Type Matters
Not all explanations are equally helpful; the sketch after this list illustrates the two styles we compared:
- Feature-based explanations did not improve decision accuracy.
- Counterfactual explanations, while perceived as harder to understand, increased reliance on the AI and reduced cognitive load when the AI’s predictions were correct.
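To make the distinction concrete, here is a minimal sketch of what the two explanation styles might look like in a loan-approval setting. It is purely illustrative: the feature names, contribution weights, and threshold are hypothetical assumptions, not the stimuli used in the study.

```python
# Hypothetical loan application; fields and values are illustrative only.
applicant = {"income": 42_000, "debt_ratio": 0.48, "credit_history_years": 3}

# Feature-based explanation: report each feature's contribution to the
# AI's decision (signs and magnitudes are made up for illustration).
feature_contributions = {
    "debt_ratio": -0.35,            # high debt ratio pushes toward denial
    "income": +0.20,                # income pushes toward approval
    "credit_history_years": -0.10,  # short history pushes toward denial
}
print("Feature-based explanation:")
for feature, weight in sorted(feature_contributions.items(),
                              key=lambda kv: abs(kv[1]), reverse=True):
    direction = "toward approval" if weight > 0 else "toward denial"
    print(f"  {feature} = {applicant[feature]} pushed the decision {direction}")

# Counterfactual explanation: name a minimal change to the input that
# would flip the AI's decision (the 0.40 threshold is hypothetical).
print("\nCounterfactual explanation:")
print("  The loan was denied. If debt_ratio had been 0.40 or lower,")
print("  it would have been approved.")
```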
🔹 Limited Impact of Need for Cognition
Contrary to expectations, we found no significant differences between users with high vs. low Need for Cognition in terms of accuracy, information prioritization, or cognitive load. This suggests that task complexity may overshadow individual cognitive preferences in AI-supported decisions.
🔍 Implications for Explainable AI Design
Our findings provide several insights for researchers and practitioners designing human-centered AI systems:
- 🧩 Combining explanation styles may be more effective than relying on a single approach.
- ⚖️ AI confidence cues can meaningfully guide user reliance, but must be carefully calibrated to avoid over- or under-reliance.
- 👤 User personality traits may play a smaller role than expected in complex decision-making scenarios.
📚 Read the Full Paper
Exploring the Impact of Explainable AI and Cognitive Capabilities on Users’ Decisions
User Modeling and User-Adapted Interaction (2025)
👉 https://link.springer.com/article/10.1007/s11257-025-09438-0
We hope this work contributes to ongoing discussions around explainable AI, trust, and effective human-AI collaboration. Feel free to reach out if you have questions or are interested in follow-up research!
Authors:
Federico Maria Cau – University of Cagliari
Lucio Davide Spano – University of Cagliari