Confidence Without Competence in AI-Assisted Knowledge Work
Elena Eleftheriou, George Pallis, Marios Constantinides
TL;DR
This paper explores how different AI interaction designs can improve learning and reduce overconfidence in students using LLMs.
Key contributions
- Identified that standard LLM use leads to high perceived understanding but low actual learning.
- Designed Deep3, a system with future-self explanations, contrastive learning, and guided hints.
- Future-self explanations align perceived and actual understanding despite higher cognitive workload.
- Guided hints achieved the largest learning gains without a proportional increase in user frustration.
Why it matters
This research highlights a critical issue: AI tools can foster overconfidence without actual competence. It offers practical interaction designs that can mitigate this, helping students truly learn. The findings are crucial for developing more effective and responsible AI-assisted learning environments.
Original Abstract
Large Language Models (LLMs) are widely used by students, yet their tendency to provide fast and complete answers may discourage reflection and foster overconfidence. We examined how alternative LLM interaction designs support deeper thinking without excessively increasing cognitive burden. We conducted a two-phase mixed-methods study. In Phase 1, interviews with 16 Gen Z students informed the design of Deep3, a web-based system with three interaction modes: a) future-self explanations, b) contrastive learning, and c) guided hints. In Phase 2, we evaluated Deep3 with 85 participants across two learning tasks. We found that a standard single-agent baseline produced high perceived understanding despite the lowest objective learning. In contrast, future-self explanations imposed higher cognitive workload yet yielded the closest alignment between perceived and actual understanding, while guided hints achieved the largest learning gains without a proportional increase in frustration. These findings show that effort, confidence, and learning systematically diverge in LLM-supported work.