From Chat to Interview: Agentic Requirements Elicitation with an Experience Ontology
Dongming Jin, Zhi Jin, Yaotian Yang, Linyu Li, Zheng Fang, et al.
TLDR
OntoAgent is an LLM-powered agent that uses an experience ontology to conduct structured, efficient, and effective requirements elicitation interviews.
Key contributions
- Introduces OntoAgent, an LLM-powered agent for structured requirements elicitation interviews.
- Leverages an experience ontology to guide systematic and explainable interview processes.
- Ontology is automatically built from domain-specific requirements descriptions.
- Achieves a 33% improvement in elicitation effectiveness (IRE) and a 21% improvement in questioning efficiency (TKQR) over existing baselines.
Why it matters
This paper addresses the limitations of current LLM-based requirements elicitation by introducing a structured, ontology-guided approach. It significantly improves interview effectiveness and efficiency, making the crucial requirements engineering process more robust and less reliant on analyst experience.
Original Abstract
Requirements elicitation interviews are crucial and time-consuming in requirements engineering, but heavily rely on the experience of requirements analysts. Although recent advancements in large language models (LLMs) have created new opportunities to automate this process, existing approaches rely solely on LLMs for free-form chat without taking into account the interview and development experience. That leads to the omission of implicit requirements and redundant questions. Practically, experienced analysts implicitly follow a structured cognitive framework when conducting requirements elicitation. Inspired by this observation, this paper proposes an interview agent named OntoAgent for the elicitation of requirements guided by an experience ontology. OntoAgent automatically analyzes domain-specific requirements descriptions to construct an experience ontology, which organizes requirements concerns into an ontology to support systematic and explainable interviews. During the interview, OntoAgent first performs four operations (i.e., ParseUser, ScoreOnto, ReRankOnto, GatePrune) guided by the ontology to identify the relevant requirement concerns. The selected concern is then combined with the current dialogue context to generate the elicitation question. To validate OntoAgent, we conduct comprehensive quantitative experiments using the widely adopted website application domain. The results show that OntoAgent significantly outperforms existing baselines in both elicitation effectiveness and questioning efficiency, achieving a 33% improvement in IRE and a 21% improvement in TKQR. Ablation studies further validate the contribution of each key design component. In addition, a qualitative user study demonstrates its practical advantages in real-world scenarios. We believe that OntoAgent can also be extended to requirements interview tasks in other domains.
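The abstract describes a four-operation concern-selection loop (ParseUser, ScoreOnto, ReRankOnto, GatePrune) that picks the next requirement concern to ask about. The paper does not publish its implementation, so the sketch below is purely illustrative: the ontology nodes, keyword-overlap scoring, rerank penalty, and pruning threshold are all hypothetical stand-ins for whatever LLM-guided operations OntoAgent actually uses.

```python
# Illustrative sketch of a ParseUser -> ScoreOnto -> ReRankOnto -> GatePrune
# loop. All node names, scores, and thresholds are invented for clarity;
# the real OntoAgent performs these operations with an LLM and a learned
# experience ontology, not keyword matching.

ONTOLOGY = {
    "authentication": {"login", "password", "account", "register"},
    "payment": {"checkout", "card", "invoice", "refund"},
    "search": {"query", "filter", "results", "ranking"},
}

def parse_user(utterance):
    """ParseUser: reduce the user's turn to normalized keyword tokens."""
    return {tok.strip(".,!?").lower() for tok in utterance.split()}

def score_onto(tokens):
    """ScoreOnto: score each concern by keyword overlap with the turn."""
    return {c: len(tokens & kws) for c, kws in ONTOLOGY.items()}

def rerank_onto(scores, covered):
    """ReRankOnto: demote concerns already covered earlier in the interview."""
    return {c: (s - 10 if c in covered else s) for c, s in scores.items()}

def gate_prune(scores, threshold=1):
    """GatePrune: drop concerns below a relevance threshold."""
    return {c: s for c, s in scores.items() if s >= threshold}

def select_concern(utterance, covered=frozenset()):
    """Run the four operations and return the top remaining concern, if any."""
    scores = gate_prune(rerank_onto(score_onto(parse_user(utterance)), covered))
    return max(scores, key=scores.get) if scores else None
```

In this toy version, a turn like "Users should be able to login with a password" selects the `authentication` concern; once a concern is marked as covered, the rerank penalty and pruning gate steer the next question toward an unexplored part of the ontology, which is the mechanism the paper credits for avoiding redundant questions.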