ArXiv TLDR

MindMirror: A Local-First Multimodal State-Aware Support System for Digital Workers

2605.11700

Wenqi Luo, Changbo Wang, Yan Wang

cs.HC

TLDR

MindMirror is a local-first multimodal system that monitors digital workers' state through camera, text, and optional speech input, and uses a local LLM to offer personalized support.

Key contributions

  • Local-first multimodal system integrating camera, text, speech, and local LLM for digital worker support.
  • Forms a closed workflow for state checking, manual correction, structured reflection, and suggestion generation.
  • Achieves 94.49% accuracy in emotion recognition, an absolute gain of 34.83 percentage points over a non-fine-tuned baseline (59.66%).
  • User feedback highlights value of local-first design, manual correction, and structured reflection.

Why it matters

Digital workers often experience fatigue and anxiety during prolonged computer-based work. Unlike existing productivity tools that focus mainly on task completion, MindMirror offers a local-first, state-aware system that provides personalized support. It gives users a controllable tool for self-reflection and supportive interaction, with the aim of improving well-being.

Original Abstract

Digital workers often experience fatigue, anxiety, reduced attention, and task blockage during prolonged computer-based work. Existing productivity tools mainly focus on task completion, while general-purpose AI chatbots require users to formulate clear prompts before receiving useful help. This paper presents MindMirror, a local-first multimodal state-aware support system for digital workers. MindMirror integrates camera-based facial expression cues, text input, optional speech interaction, structured blockage reflection, local large language model (LLM)-based response generation, and daily/weekly review reports. The system forms a closed workflow of state checking, manual correction, structured articulation, suggestion generation, and state review. The current prototype follows a local-first design, while optional speech services may rely on third-party APIs when enabled. It is implemented with a Web frontend, Flask backend, an emotion recognition model, an Ollama-hosted Qwen model, Chart.js visualization, and local JSON/LocalStorage records. We evaluate the emotion recognition module on an independent seven-class image-level facial expression benchmark containing 6,767 images. The fine-tuned Hugging Face model improves accuracy from 59.66% to 94.49% over a non-fine-tuned checkpoint baseline, an absolute gain of 34.83 percentage points. We further validate the prototype through endpoint-level reliability tests, voice-interaction latency tests, and a small formative user feedback study with six digital workers. Results suggest that users value the local-first design, manual correction mechanism, and structured reflection workflow. MindMirror is not intended for psychological diagnosis; instead, it serves as a lightweight, user-controllable tool for state reflection and supportive interaction.
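The abstract describes suggestion generation via an Ollama-hosted Qwen model behind a Flask backend. A minimal sketch of that local-first flow is shown below, assuming Ollama's standard `/api/generate` HTTP endpoint on the default port; the model tag, prompt wording, and function names are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of a local-first suggestion request, assuming an Ollama server
# at localhost:11434 hosting a Qwen model. Model tag and prompt wording
# are hypothetical; the paper does not specify them.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "qwen2.5"  # hypothetical model tag

def build_prompt(emotion: str, note: str) -> str:
    """Combine the detected state and the user's blockage note into one prompt."""
    return (
        f"The user currently appears '{emotion}'. "
        f"They describe their blockage as: {note}\n"
        "Offer one short, supportive, non-diagnostic suggestion."
    )

def request_suggestion(emotion: str, note: str) -> str:
    """POST to the local Ollama generate endpoint and return the response text."""
    payload = json.dumps({
        "model": MODEL,
        "prompt": build_prompt(emotion, note),
        "stream": False,  # ask for a single JSON object, not a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the model runs locally, the state description and blockage note never leave the machine, which matches the local-first design the users in the study said they valued.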
