ArXiv TLDR

AI-Assisted Requirements Engineering: An Empirical Evaluation Relative to Expert Judgment

2604.15222

Oz Levy, Ilya Dikman, Natan Levy, Michael Winokur

cs.SE cs.AI

TLDR

This paper empirically evaluates AI tools for requirements quality assessment, finding they support preliminary checks but don't replace expert judgment.

Key contributions

  • Compared AI-assisted requirement evaluation with human expert assessment using INCOSE criteria.
  • AI tools provide consistent, rapid preliminary assessments for syntactic and structural requirement quality.
  • Expert judgment is essential for contextual interpretation, ambiguity resolution, and trade-off reasoning.
  • AI functions as a decision-support mechanism, integrating into RE workflows to enhance consistency.
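To make the "syntactic and structural" scope of AI-style preliminary checks concrete, here is a minimal illustrative sketch (not the paper's tool, and far simpler than an AI-based assessor): a rule-based screen loosely inspired by INCOSE "good requirement" guidance. The term list and heuristics are assumptions chosen for illustration; contextual interpretation and trade-off reasoning, which the study reserves for experts, are exactly what such a check cannot do.

```python
import re

# Hypothetical word list for illustration; a real checklist would be
# derived from INCOSE guidance and project conventions.
AMBIGUOUS_TERMS = {"fast", "user-friendly", "adequate", "as appropriate"}

def preliminary_check(requirement: str) -> list[str]:
    """Return syntactic/structural issues found in a single requirement.

    Purely surface-level: it cannot judge whether the requirement is
    correct, complete, or consistent with the rest of the specification.
    """
    issues = []
    lower = requirement.strip().lower()
    # Structural: a requirement statement conventionally uses "shall".
    if "shall" not in lower:
        issues.append("missing imperative 'shall'")
    # Syntactic: flag vague wording (word-boundary match).
    for term in sorted(AMBIGUOUS_TERMS):
        if re.search(rf"\b{re.escape(term)}\b", lower):
            issues.append(f"ambiguous term: '{term}'")
    # Testability proxy: look for at least one quantified value.
    if not re.search(r"\d", requirement):
        issues.append("no quantified value (testability concern)")
    return issues
```

For example, `preliminary_check("The UI should be fast.")` flags all three issues, while a quantified "shall" statement passes clean; deciding whether "2 seconds" is the *right* threshold remains an expert judgment.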

Why it matters

This study provides empirical evidence on how AI can be effectively integrated into requirements engineering. It clarifies AI's role as a decision-support tool, streamlining preliminary assessments while preserving the critical need for human expertise in complex interpretation. This helps systems engineers leverage AI responsibly.

Original Abstract

Artificial Intelligence is increasingly introduced into systems engineering activities, particularly within requirements engineering, where quality assessment and validation remain heavily dependent on expert judgment. While recent AI tools demonstrate promising capabilities in analyzing and generating requirements, their role within formal systems engineering processes, and their alignment with established INCOSE criteria, remains insufficiently understood. This paper investigates the extent to which AI-based tools can support systems engineers in evaluating requirement quality, without replacing professional expertise. The research adopts a structured systems engineering methodology to compare AI-assisted requirement evaluation with human expert assessment. A controlled study was conducted in which system requirements were evaluated against established INCOSE "good requirement" criteria by both experienced systems engineers and an AI-based assessment tool. The evaluation focused on consistency, completeness, clarity, and testability, examining not only accuracy but also the decision logic underlying each assessment. Results indicate that AI tools can provide consistent and rapid preliminary assessments, particularly for syntactic and structural quality attributes. However, expert judgment remains essential for contextual interpretation, ambiguity resolution, and trade-off reasoning. Rather than positioning AI as a replacement for systems engineers, the findings support its role as a decision-support mechanism within the RE lifecycle. From a systems engineering perspective, this study contributes empirical evidence on how AI can be integrated into RE workflows while preserving traceability, accountability, and engineering consistency.
