ArXiv TLDR

Three Modalities, Two Design Probes, One Prototype, and No Vision: Experience-Based Co-Design of a Multi-modal 3D Data Visualization Tool

arXiv: 2604.09426

Sanchita S. Kamath, Aziz N Zeidieh, Venkatesh Potluri, Sile O'Modhrain, Kenneth Perry + 1 more

cs.HC · cs.AI · cs.IR

TLDR

A co-designed multi-modal 3D data visualization tool makes complex scientific data accessible for blind and low-vision users.

Key contributions

  • Developed an accessible, multi-modal 3D data visualization tool via Experience-Based Co-Design with BLV experts.
  • Prototype features include reference sonification, stereo/volumetric audio, and configurable buffer aggregation.
  • Validated improved analytic accuracy and learnability for tasks like peak finding and gradient tracing.
  • Provides a co-design protocol and concrete design guidance for future accessible 3D visualization systems.
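To make the listed features more concrete, here is a minimal, hypothetical sketch of what "configurable buffer aggregation" and pitch-based sonification could look like. The paper does not publish code, and all function names, parameters, and mappings below are illustrative assumptions, not the authors' implementation: data is first coarsened into averaged buffers, then each value is mapped linearly onto an audible pitch range.

```python
# Hypothetical sketch only — names and mappings are assumptions, not the paper's code.

def aggregate_buffer(row, size):
    """Coarsen a row of surface heights by averaging fixed-size windows."""
    return [sum(row[i:i + size]) / len(row[i:i + size])
            for i in range(0, len(row), size)]

def to_frequency(value, vmin, vmax, f_lo=220.0, f_hi=880.0):
    """Linearly map a data value onto a pitch between f_lo and f_hi (Hz)."""
    if vmax == vmin:
        return f_lo
    t = (value - vmin) / (vmax - vmin)
    return f_lo + t * (f_hi - f_lo)
```

For example, `aggregate_buffer([1, 2, 3, 4], 2)` yields `[1.5, 3.5]`, and `to_frequency(5, 0, 10)` maps the midpoint of the data range to 550 Hz, halfway between the assumed low and high pitches.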

Why it matters

This work addresses the critical gap in accessible 3D data visualization for blind and low-vision individuals in STEM. It provides a validated co-design protocol and prototype, offering a blueprint for creating inclusive tools and advancing accessibility research.

Original Abstract

Three-dimensional (3D) data visualizations, such as surface plots, are vital in STEM fields from biomedical imaging to spectroscopy, yet remain largely inaccessible to blind and low-vision (BLV) people. To address this gap, we conducted an Experience-Based Co-Design with BLV co-designers with expertise in non-visual data representations to create an accessible, multi-modal, web-native visualization tool. Using a multi-phase methodology, our team of five BLV and one non-BLV researcher(s) participated in two iterative sessions, comparing a low-fidelity tactile probe with a high-fidelity digital prototype. This process produced a prototype with empirically grounded features, including reference sonification, stereo and volumetric audio, and configurable buffer aggregation, which our co-designers validated as improving analytic accuracy and learnability. In this study, we target core analytic tasks essential for non-visual 3D data exploration: orientation, landmark and peak finding, comparing local maxima versus global trends, gradient tracing, and identifying occluded or partially hidden features. Our work offers accessibility researchers and developers a co-design protocol for translating tactile knowledge to digital interfaces, concrete design guidance for future systems, and opportunities to extend accessible 3D visualization into embodied data environments.
