ArXiv TLDR

Enabling and Inhibitory Pathways of Students' AI Use Concealment Intention in Higher Education: Evidence from SEM and fsQCA

arXiv: 2604.10978

Yiran Du, Huimin He

cs.HC, cs.AI

TLDR

This study uncovers dual pathways influencing students' AI use concealment in higher education: enabling (fear-driven) and inhibitory (safety-driven).

Key contributions

  • Identifies an "enabling pathway" where stigma, risk, and policy uncertainty increase fear, promoting AI use concealment.
  • Reveals an "inhibitory pathway" where AI self-efficacy, fairness, and social support boost psychological safety, reducing concealment.
  • Uses SEM to confirm direct and mediated relationships and fsQCA to find multiple configurational pathways for concealment.

Why it matters

This paper offers higher education institutions actionable guidance: develop clear AI policies and foster supportive environments that destigmatize appropriate AI use, thereby promoting transparency and reducing student concealment.

Original Abstract

This study investigates students' AI use concealment intention in higher education by integrating the cognition-affect-conation (CAC) framework with a dual-method approach combining structural equation modelling (SEM) and fuzzy-set qualitative comparative analysis (fsQCA). Drawing on data from 1346 university students, the findings reveal two opposing mechanisms shaping concealment intention. The enabling pathway shows that perceived stigma, perceived risk, and perceived policy uncertainty increase fear of negative evaluation, which in turn promotes concealment. In contrast, the inhibitory pathway demonstrates that AI self-efficacy, perceived fairness, and perceived social support enhance psychological safety, thereby reducing concealment intention. SEM results confirm the hypothesised relationships and mediation effects, while fsQCA identifies multiple configurational pathways, highlighting equifinality and the central role of fear of negative evaluation across conditions. The study contributes to the literature by conceptualising concealment as a distinct behavioural outcome and by providing a nuanced explanation that integrates both net-effect and configurational perspectives. Practical implications emphasise the need for clear institutional policies, destigmatisation of appropriate AI use, and the cultivation of supportive learning environments to promote transparency.
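The fsQCA side of the dual-method design rests on two standard operations: calibrating raw survey scores into fuzzy-set membership and scoring how consistently a condition is sufficient for the outcome. The paper does not publish code; below is a minimal, illustrative Python sketch of these two steps using Ragin's direct calibration method and the usual sufficiency-consistency ratio. Function names and the anchor values are assumptions for illustration only.

```python
import math

def calibrate(x, full_non, crossover, full_mem):
    """Direct calibration (Ragin's method): map a raw score x to a
    fuzzy-set membership in [0, 1] via log-odds anchored at three
    qualitative thresholds: full non-membership, crossover (0.5),
    and full membership."""
    if x >= crossover:
        # log-odds scaled so that x == full_mem gives log-odds of +3
        log_odds = 3.0 * (x - crossover) / (full_mem - crossover)
    else:
        # and x == full_non gives log-odds of -3
        log_odds = -3.0 * (crossover - x) / (crossover - full_non)
    return 1.0 / (1.0 + math.exp(-log_odds))

def consistency(condition, outcome):
    """Consistency of 'condition is sufficient for outcome' across
    cases: sum of min(x_i, y_i) divided by sum of x_i."""
    num = sum(min(x, y) for x, y in zip(condition, outcome))
    den = sum(condition)
    return num / den if den else 0.0

# Hypothetical anchors for a 1-9 Likert-style score:
# 1 = full non-membership, 5 = crossover, 9 = full membership.
fear = [calibrate(s, 1, 5, 9) for s in (2, 6, 8, 9)]
conceal = [calibrate(s, 1, 5, 9) for s in (3, 6, 9, 9)]
print(consistency(fear, conceal))
```

Configurations whose consistency exceeds a chosen threshold (commonly around 0.8) are retained as candidate sufficient pathways, which is how fsQCA surfaces the multiple equifinal routes to concealment the abstract describes.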
