ArXiv TLDR

Evaluating LLM-Based Goal Extraction in Requirements Engineering: Prompting Strategies and Their Limitations

arXiv: 2604.22207

Anna Arnaudo, Riccardo Coppola, Maurizio Morisio, Flavio Giobergia, Andrea Bioddo + 2 more

cs.SE · cs.AI · cs.CL

TLDR

This paper evaluates LLM-based goal extraction in Requirements Engineering using engineered prompting strategies and a generation-critic mechanism, reaching 61% accuracy on the final low-level goal identification stage.

Key contributions

  • Proposes a chain of LLMs, fed with engineered prompts, that extracts functional goals in three phases: actor identification, then high- and low-level goal extraction (see the sketch after this list).
  • Introduces a generation-critic feedback loop in which a second 'critic' LLM reviews and refines the extracted goals.
  • Reaches 61% accuracy on low-level goal identification, so the pipeline is best suited to accelerating manual extraction rather than replacing it.
  • The feedback loop with zero-shot prompting outperformed stand-alone few-shot prompting.
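
The chained phases and the generation-critic loop can be pictured with a minimal sketch. This is not the authors' code: it assumes an OpenAI-style chat-completions client, and the prompt wording, model name, "OK" acceptance token, and max_rounds budget are all illustrative placeholders.

```python
# Minimal sketch of the pipeline shape, NOT the authors' implementation.
# Assumptions: an OpenAI-style chat client; prompt wording, model name,
# "OK" acceptance token, and max_rounds are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def complete(prompt: str, model: str = "gpt-4o-mini") -> str:
    """One LLM call; each pipeline phase is one engineered prompt."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def critique(candidate: str, doc: str) -> str:
    """'Critic' LLM: returns corrections, or the token OK to accept."""
    return complete(
        "You review goals extracted from software documentation.\n"
        f"Documentation:\n{doc}\n\nCandidate output:\n{candidate}\n\n"
        "Reply with the single word OK if the output is complete and "
        "faithful; otherwise list concrete corrections."
    )

def generate_with_feedback(task: str, doc: str, max_rounds: int = 3) -> str:
    """Generation-critic loop: generator revises until the critic accepts."""
    candidate = complete(task)
    for _ in range(max_rounds):
        feedback = critique(candidate, doc)
        if feedback.strip() == "OK":
            break
        candidate = complete(
            f"{task}\n\nReviewer feedback:\n{feedback}\n\nRevise your answer."
        )
    return candidate

def extract_goals(doc: str) -> dict:
    """Three chained phases: actors -> high-level -> low-level goals."""
    actors = generate_with_feedback(
        f"List the actors in this software documentation:\n{doc}", doc)
    high = generate_with_feedback(
        f"Given these actors:\n{actors}\nextract high-level functional "
        f"goals from:\n{doc}", doc)
    low = generate_with_feedback(
        f"Refine these high-level goals:\n{high}\ninto low-level goals, "
        f"grounded in:\n{doc}", doc)
    return {"actors": actors, "high_level": high, "low_level": low}
```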

Why it matters

Automating Requirements Engineering tasks like goal extraction is crucial for efficiency. This paper explores LLM capabilities in this domain, offering a practical pipeline. While not a full replacement, it demonstrates LLMs' potential to significantly accelerate complex RE processes.

Original Abstract

Due to the textual and repetitive nature of many Requirements Engineering (RE) artefacts, Large Language Models (LLMs) have proven useful for automating their generation and processing. In this paper, we discuss a possible approach for automating the Goal-Oriented Requirements Engineering (GORE) process by extracting functional goals from software documentation through three phases: actor identification, then high- and low-level goal extraction. To implement these functionalities, we propose a chain of LLMs fed with engineered prompts. We experimented with different variants of in-context learning and measured the similarity between input data and in-context examples to better investigate their impact. Another key element is the generation-critic mechanism, implemented as a feedback loop involving two LLMs. The pipeline achieved 61% accuracy in low-level goal identification, the final stage; these results indicate that the approach is best suited as a tool to accelerate manual extraction rather than as a full replacement. The feedback-loop mechanism with zero-shot prompting outperformed stand-alone few-shot prompting, and an ablation study suggests that performance degrades slightly without the feedback cycle. However, combining the feedback mechanism with few-shot prompting delivered no advantage, suggesting that the primary performance ceiling is the prompting strategy applied to the 'critic' LLM. Alongside refining both the quantity and quality of the few-shot examples, future research will integrate Retrieval-Augmented Generation (RAG) and Chain-of-Thought (CoT) prompting to improve accuracy.
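
The abstract mentions measuring the similarity between the input data and the in-context examples, but does not describe the implementation. One common way to realize this, sketched below, is to rank a pool of candidate few-shot examples by cosine similarity over sentence embeddings; the encoder model, example pool, and k are assumptions, not details from the paper.

```python
# Hedged sketch of few-shot example selection by similarity; the exact
# similarity measure used in the paper is an assumption here. The encoder
# model, candidate pool, and k are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def rank_examples(input_doc: str, pool: list[str], k: int = 3) -> list[str]:
    """Return the k candidate few-shot examples most similar to the input."""
    vecs = encoder.encode([input_doc] + pool)
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # unit vectors
    sims = vecs[1:] @ vecs[0]          # cosine similarity to the input doc
    top = np.argsort(sims)[::-1][:k]   # indices of the k nearest examples
    return [pool[i] for i in top]

def few_shot_prompt(base_prompt: str, examples: list[str]) -> str:
    """Prepend the selected examples to the zero-shot prompt."""
    shots = "\n\n".join(f"Example:\n{e}" for e in examples)
    return f"{shots}\n\n{base_prompt}"
```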
