Examining EAP Students' AI Disclosure Intention: A Cognition-Affect-Conation Perspective
TLDR
This study shows that psychological safety encourages, and fear of negative evaluation discourages, EAP students' intention to disclose AI use, underscoring the need for clear institutional policies.
Key contributions
- Proposes a model grounded in the cognition-affect-conation framework to examine EAP students' AI disclosure intention.
- Finds that psychological safety positively predicts AI disclosure intention, whereas fear of negative evaluation negatively predicts it.
- Shows that supportive teacher practices and clear guidance foster psychological safety, while policy ambiguity and reputational concerns discourage disclosure.
- Calls for clear institutional policies and supportive pedagogical environments to promote transparent AI use.
Why it matters
As generative AI use in academic writing grows, understanding why students disclose (or conceal) that use is vital for academic integrity. This paper offers actionable insights for universities: clear policies and supportive classroom environments are what make transparent, ethical AI use more likely.
Original Abstract
The growing use of generative artificial intelligence (AI) in academic writing has raised increasing concerns regarding transparency and academic integrity in higher education. This study examines the psychological factors influencing English for Academic Purposes (EAP) students' intention to disclose their use of AI tools. Drawing on the cognition-affect-conation framework, the study proposes a model integrating both enabling and inhibiting factors shaping disclosure intention. A sequential explanatory mixed-methods design was employed. Quantitative data from 324 EAP students at an English-medium instruction university in China were analysed using structural equation modelling, followed by semi-structured interviews with 15 students to further interpret the findings. The quantitative results indicate that psychological safety positively predicts AI disclosure intention, whereas fear of negative evaluation negatively predicts it. The qualitative findings further reveal that supportive teacher practices and clear guidance foster psychological safety, while policy ambiguity and reputational concerns intensify fear of negative evaluation and discourage disclosure. These findings highlight the importance of clear institutional policies and supportive pedagogical environments in promoting transparent AI use.