Recommending Usability Improvements with Multimodal Large Language Models
Sebastian Lubos, Alexander Felfernig, Damian Garber, Viet-Man Le, Manuel Henrich
TLDR
This paper introduces an MLLM-based automated approach to identify and recommend usability improvements from user screen recordings, ranked by severity.
Key contributions
- Novel MLLM approach automates usability evaluation using screen recordings and app context.
- Identifies usability issues based on Nielsen's heuristics, providing ranked improvement recommendations.
- User study with software engineers validated the practical usefulness of the generated suggestions.
- Provides a low-effort complement to traditional methods, aiding teams without usability experts.
Why it matters
Traditional usability evaluation is costly and depends on scarce expert knowledge. This paper automates it with an MLLM that turns screen recordings of user interactions into severity-ranked improvement recommendations, making UX insights accessible to teams without usability experts and suitable for integration into development workflows.
Original Abstract
Usability describes quality attributes of application user interfaces that determine how effectively users can interact with them. Traditional usability evaluation methods require considerable expertise and resources, which can be challenging, especially for small teams and organizations. Automating usability evaluation could make it more accessible and help to improve the user experience. The recent emergence of powerful multimodal large language models (MLLMs) has opened new opportunities for automating usability evaluation and recommendation of improvements. These models can process visual inputs such as images and videos alongside textual context, which enables the identification of usability issues and the generation of actionable suggestions to resolve these issues. In this paper, we present a novel automated approach that uses limited application context and screen recordings of user interactions as input to an MLLM. The model automatically identifies and describes usability issues based on Nielsen's usability heuristics, and provides corresponding explanations and improvement recommendations. To reduce the developer effort of manual prioritization, the recommendations are ranked by severity. The quality and practical usefulness of the generated recommendations were evaluated based on a user study that involved software engineers as participants. The evaluation focused on the highest-ranked suggestions provided by the model. The results demonstrate the potential of our approach to provide low-effort usability improvement recommendations. This makes it a promising complement to traditional evaluation methods, especially in settings with limited access to usability experts. In this sense, the approach serves as a basis for future integration into development tools to enable automated usability evaluation within software engineering workflows.
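The paper does not publish its prompts or code, but the described pipeline (application context plus screen-recording frames in, Nielsen-heuristic-tagged issues with severity-ranked recommendations out) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the prompt wording, the JSON output schema, and the 1–4 severity scale are all assumptions, and the actual MLLM call is left as a placeholder.

```python
import json

# The ten Nielsen usability heuristics the paper grounds issue detection in.
NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def build_prompt(app_context: str) -> str:
    """Assemble the instruction text sent to the MLLM alongside the
    screen-recording frames (prompt wording is hypothetical)."""
    heuristics = "\n".join(f"- {h}" for h in NIELSEN_HEURISTICS)
    return (
        f"Application context: {app_context}\n"
        "Review the attached screen-recording frames and report usability "
        "issues as a JSON list. Each item must contain: 'heuristic' (one of "
        "the Nielsen heuristics below), 'issue', 'recommendation', and "
        "'severity' (1 = cosmetic ... 4 = catastrophic).\n"
        f"{heuristics}"
    )

def rank_recommendations(mllm_reply: str) -> list[dict]:
    """Parse the model's JSON reply and sort issues by descending severity,
    mirroring the paper's severity-based ranking step."""
    issues = json.loads(mllm_reply)
    return sorted(issues, key=lambda i: i["severity"], reverse=True)
```

A calling application would extract frames from the recording, send `build_prompt(...)` plus the frames to its MLLM of choice, and feed the reply into `rank_recommendations` so developers see the most severe issues first.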