A Comparative Study of Controlled Text Generation Systems Using Level-Playing-Field Evaluation Principles
TLDR
This paper introduces a standardized 'level-playing-field' evaluation for controlled text generation systems, revealing that many systems perform worse than originally reported.
Key contributions
- Introduced a Level-Playing-Field (LPF) evaluation for controlled text generation (CTG) systems.
- LPF standardizes how system outputs are generated and processed, and applies a shared set of evaluation datasets and methods (see the sketch after this list).
- Re-evaluation via LPF showed that many CTG systems perform worse than originally reported.
- Demonstrated the critical need for standardized, reproducible evaluation in CTG research.
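The paper's own harness is not reproduced here, but to make the LPF idea concrete, below is a minimal Python sketch of what such a harness could look like. Everything in it (CTGSystem, normalize_output, evaluate_lpf, the metric signatures) is a hypothetical illustration, not the authors' code; the point is only that every system's raw output passes through the same post-processing and the same shared metrics on the same shared dataset.

```python
from typing import Callable, Protocol


class CTGSystem(Protocol):
    """Any CTG system under test: maps (prompt, control attribute) to text.
    Hypothetical interface, not the paper's actual API."""
    name: str

    def generate(self, prompt: str, attribute: str) -> str: ...


def normalize_output(text: str) -> str:
    """Standardised post-processing applied identically to every system.
    Hypothetical rules (trim whitespace, drop empty sentence fragments);
    the paper's actual pipeline may differ."""
    sentences = [s.strip() for s in text.strip().split(".") if s.strip()]
    return (". ".join(sentences) + ".") if sentences else ""


def evaluate_lpf(
    systems: list[CTGSystem],
    dataset: list[tuple[str, str]],                    # shared (prompt, attribute) pairs
    metrics: dict[str, Callable[[str, str], float]],   # shared metric name -> scorer
) -> dict[str, dict[str, float]]:
    """Score every system on identical data, processing, and metrics,
    so per-metric averages are directly comparable across systems."""
    results: dict[str, dict[str, float]] = {}
    for system in systems:
        totals = dict.fromkeys(metrics, 0.0)
        for prompt, attribute in dataset:
            output = normalize_output(system.generate(prompt, attribute))
            for name, scorer in metrics.items():
                totals[name] += scorer(output, attribute)
        results[system.name] = {m: s / len(dataset) for m, s in totals.items()}
    return results
```

Under a setup like this, a score gap between two systems cannot be an artifact of differing datasets or output post-processing, which is exactly the source of discrepancy the paper's re-evaluation exposes.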
Why it matters
This paper highlights the urgent need for standardized, reproducible evaluation in controlled text generation. Without such practices, published performance claims may substantially misrepresent true system capabilities, hindering progress.
Original Abstract
Background: Many different approaches to controlled text generation (CTG) have been proposed over recent years, but it is difficult to get a clear picture of which approach performs best, because different datasets and evaluation methods are used in each case to assess the control achieved.

Objectives: Our aim in the work reported in this paper is to develop an approach to evaluation that enables us to comparatively evaluate different CTG systems in a manner that is both informative and fair to the individual systems.

Methods: We use a level-playing-field (LPF) approach to comparative evaluation where we (i) generate and process all system outputs in a standardised way, and (ii) apply a shared set of evaluation methods and datasets, selected based on those currently in use, in order to ensure fair evaluation.

Results: When re-evaluated in this way, performance results for a representative set of current CTG systems differ substantially from originally reported results, in most cases for the worse. This highlights the importance of a shared standardised way of assessing controlled generation.

Conclusions: The discrepancies revealed by LPF evaluation demonstrate the urgent need for standardised, reproducible evaluation practices in CTG. Our results suggest that without such practices, published performance claims may substantially misrepresent true system capabilities.