Human-Computer Interaction
Research on user interfaces, interaction design, accessibility, and UX.
cs.HC · 436 papers

Can providing feedback on gaze and mental-effort synchrony improve pair programming performance?
AI feedback on gaze and mental effort synchrony significantly improves pair programming performance, with proactive timing being most effective.
GazeMind: A Gaze-Guided LLM Agent for Personalized Cognitive Load Assessment
GazeMind is a gaze-guided LLM agent for personalized, interpretable cognitive load assessment on smart glasses, outperforming baselines by over 20%.
Priming, Path-dependence, and Plasticity: Understanding the molding of user-LLM interaction and its implications from (many) chat logs in the wild
This paper analyzes 140K LLM chat logs, revealing that user interaction patterns stabilize rapidly, leading to less exploration despite open input.
Closing the Loop: Unified 3D Scene Generation and Immersive Interaction via LLM-RL Coupling
This paper unifies language-driven 3D scene generation with immersive user interaction using LLMs and RL, enabling adaptive VR experiences.
PersonaTeaming: Supporting Persona-Driven Red-Teaming for Generative AI
PersonaTeaming introduces persona-driven red-teaming, enhancing both automated and human-AI collaborative methods for identifying generative AI risks.
The Capacity to Care: Designing Social Technology for Sustained Engagement With Societal Challenges
This paper explores how social media design hinders sustained engagement with societal challenges and proposes designs for 'sustainable care'.
The Missing Evaluation Axis: What 10,000 Student Submissions Reveal About AI Tutor Effectiveness
This paper introduces a new evaluation framework for AI tutors, focusing on student behavioral responses to feedback using over 10,000 code submissions.
UX in the Age of AI: Rethinking Evaluation Metrics Through a Statistical Lens
This paper introduces ADUX-Stat, a novel statistical framework to evaluate user experience in AI systems, addressing limitations of traditional metrics.
Tailoring Scaffolding to Diagnostic Strategies: Theory-Informed LLM-Based Agents
This paper proposes a KLI-informed hybrid LLM agent that adapts scaffolding based on a learner's diagnostic strategy for improved learning.
To Fuse or to Drop? Dual-Path Learning for Resolving Modality Conflicts in Multimodal Emotion Recognition
DCR is a dual-path framework that intelligently fuses or drops modalities to resolve conflicts in multimodal emotion recognition, improving robustness.
Not All Scaffolds Are Equal: How Initiation Mode Determines EMME Effectiveness in Debugging
This study finds that human-initiated Eye Movement Modeling Examples (EMME) are more effective than automatically triggered ones for novice programmers during debugging.
RTMS: A Real-Time Multimodal Scaffolding System for Improving Debugging in Computing Education
A real-time multimodal system (RTMS) uses cognitive load and stress indicators to provide adaptive feedback, significantly improving debugging for students.
Patterns of Developer Adoption of LLM-Generated Code Refactoring Suggestions
This paper analyzes how developers adopt LLM-generated code refactoring suggestions, finding that most are accepted and that major changes follow five recurring patterns.
Building AI Companions that Prioritise Learning over Performance
This paper proposes AI learning companions designed to prioritize genuine learning and cognitive growth over immediate task performance in education.
OpenWatch: A Multimodal Benchmark for Hand Gesture Recognition on Smartwatches
OpenWatch introduces a multimodal benchmark for smartwatch hand gesture recognition, along with novel methods (MixToken, NormWear-Lora) and key findings.
Gaze4HRI: Zero-shot Benchmarking Gaze Estimation Neural-Networks for Human-Robot Interaction
Gaze4HRI introduces a large-scale benchmark for zero-shot gaze estimation in HRI, revealing current methods' failures and highlighting data diversity as key to robustness.
Cognitive Twins: Investigating Personalized Thinking Model Building and Its Performance Enhancement with Human-in-the-Loop
This paper introduces a Personalized Thinking Model (PTM) for AI education, building cognitive twins from learner journals with LLMs and HITL refinement.
3D Printing of Passively Actuated Self-Folding Robots with Integrated Functional Modules
A 3D printing method creates passively actuated, self-folding robots from conductive PLA nets, integrating electronics and a predictive folding model.
AICoFe: Implementation and Deployment of an AI-Based Collaborative Feedback System for Higher Education
AICoFe is an AI-based system for higher education that improves peer feedback quality using a multi-LLM pipeline and teacher-in-the-loop curation.
AISSA: Implementation and Deployment of an AI-based Student Slides Analysis tool for Academic Presentations
AISSA is an AI-powered web tool using LLMs and analytics to provide scalable, rubric-based feedback on student presentation slides.