ArXiv TLDR

VIVA Stimuli: A Web-Based Platform for Eye Tracking Stimuli

2604.19397

Suleyman Ozdel, Virmarie Maquiling, Kadir Burak Buldu, Yasmeen Abdrabou, Enkelejda Kasneci

cs.HC

TLDR

VIVA Stimuli is a web platform for standardized eye-tracking stimulus presentation, enhancing reproducibility across diverse research setups.

Key contributions

  • Provides a web-based platform for standardized eye-tracking stimulus presentation.
  • Offers configurable task types such as fixation, smooth pursuit, and cognitive load.
  • Supports any eye-tracking technology, including wearable and screen-based VOG trackers, LFI sensors, and EOG devices.
  • Features a visual editor to export and share protocols for exact stimulus replication.
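To make the export-and-share idea concrete, here is a minimal sketch of what a shareable protocol file could look like. The field names (`tasks`, `duration_ms`, `target`, etc.) are hypothetical illustrations, not VIVA Stimuli's actual schema:

```python
import json

# Hypothetical protocol export: field names are illustrative only,
# not VIVA Stimuli's actual schema.
protocol = {
    "name": "fixation_baseline",
    "tasks": [
        {"type": "fixation", "duration_ms": 2000,
         "target": {"x": 0.5, "y": 0.5}},  # normalized screen coordinates
        {"type": "smooth_pursuit", "duration_ms": 5000,
         "trajectory": "circular", "radius": 0.2},
        {"type": "questionnaire", "items": ["How tired are you? (1-5)"]},
    ],
}

# Serializing the protocol makes it shareable: another lab can load the
# same file and present an identical stimulus sequence.
exported = json.dumps(protocol, indent=2)
restored = json.loads(exported)
assert restored == protocol
```

The key design point is that the stimulus sequence becomes data rather than code, so exact replication does not require the receiving lab to reimplement anything.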

Why it matters

The platform addresses a persistent gap in eye-tracking research: the stimulus layer is largely unstandardized, so identical oculomotor behavior can yield divergent metrics across labs. By simplifying experiment design and making protocols shareable, VIVA Stimuli lets researchers replicate findings more exactly, promoting more reliable and comparable results.

Original Abstract

Reproducibility in eye-tracking research is increasingly important as researchers conduct diverse experiments and seek to validate or replicate findings. However, exact replication remains challenging due to differences in laboratory practices and experimental setups. Inconsistent stimulus presentation can yield divergent metrics from identical oculomotor behavior, yet the stimulus layer remains largely unstandardized. Existing tools often require programming expertise or depend on specific hardware vendors. We introduce VIVA Stimuli, a web-based platform for standardized eye-tracking stimulus presentation. It provides configurable task types, including fixation, smooth pursuit, cognitive load, blink, slippage, content display, and questionnaires within a unified environment. The platform supports any eye-tracking technology, including wearable and screen-based VOG trackers, LFI sensors, and EOG devices. ArUco markers enable synchronization for trackers with scene cameras, while a WebSocket architecture ensures temporal synchronization for those without. A visual experiment flow editor allows protocols to be exported and shared, enabling identical stimulus replication across laboratories.
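The abstract's WebSocket-based temporal synchronization typically rests on estimating the clock offset between the stimulus server and the recording client. The sketch below shows the generic NTP-style midpoint estimate commonly used over a request/response channel; it is an illustration of the principle, not VIVA Stimuli's actual implementation:

```python
# Generic NTP-style clock-offset estimation, a common basis for temporal
# synchronization over a request/response channel such as a WebSocket.
# Illustrative only; not VIVA Stimuli's actual implementation.

def estimate_offset(t_send: float, t_server: float, t_recv: float) -> float:
    """Estimate how far the server clock runs ahead of the client clock.

    t_send:   client clock when the sync request was sent
    t_server: server clock when it stamped the reply
    t_recv:   client clock when the reply arrived
    Assumes roughly symmetric network delay in both directions.
    """
    round_trip = t_recv - t_send
    # The server stamped its reply mid-flight; compare that stamp with the
    # client clock at the midpoint of the round trip.
    return t_server - (t_send + round_trip / 2)

# Example: request sent at 100.0 s (client), server (0.5 s ahead) stamps
# 100.6 s, reply arrives at 100.2 s (client) -> estimated offset 0.5 s.
offset = estimate_offset(100.0, 100.6, 100.2)
```

With such an offset, stimulus-onset timestamps from the server can be mapped onto the eye tracker's timeline even for devices without a scene camera, where the ArUco-marker approach does not apply.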
