Revisiting Semantic Role Labeling: Efficient Structured Inference with Dependency-Informed Analysis
Sangpil Youm, Leah Jones, Bonnie J. Dorr
TLDR
This paper introduces a modernized, encoder-based framework for Semantic Role Labeling (SRL) that offers 10x faster inference while maintaining performance.
Key contributions
- Introduces a modernized encoder-based framework for structured Semantic Role Labeling (SRL).
- Achieves 10x faster inference while preserving explicit predicate-argument structure.
- Maintains or improves predictive performance with modern encoder models such as BERT, RoBERTa, and DeBERTa.
- Employs dependency-informed analysis to enhance structural stability and enable multilingual SRL projection.
Why it matters
This paper addresses two gaps: the limited explicit semantic representation in current LLMs, and the obsolescence of widely used SRL tooling (AllenNLP entered maintenance mode in December 2022). It provides a fast, accurate, and structurally robust SRL solution, crucial for applications requiring clear predicate-argument structures, and its dependency-informed analysis also sheds light on LLM behavior.
Original Abstract
Semantic Role Labeling (SRL) provides an explicit representation of predicate-argument structure, capturing linguistically grounded relations such as who did what to whom. While recent NLP progress has been dominated by large language models (LLMs), these systems often rely on implicit semantic representations, often lacking explicit structural constraints and systematic explanatory mechanisms. Traditionally, SRL systems have often relied on AllenNLP; however, the framework entered maintenance mode in December 2022, limiting compatibility with evolving encoder architectures and modern inference requirements. We revisit structured SRL modeling, introducing a modernized encoder-based framework that preserves explicit predicate-argument structure while enabling inference 10 times faster. Using BERT-base, the model attains comparable predictive performance, and RoBERTa and DeBERTa further improve F1 performance within the same framework. We adopt a dependency-informed diagnostic methodology to characterize span-level inconsistencies and conduct a representation-level analysis of LLM behavior under dependency-informed structural signals. Results indicate that dependency cues primarily improve structural stability. Finally, we illustrate how the framework's explicit predicate-argument structure can support multilingual SRL projection as a downstream application.
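To make the "who did what to whom" idea concrete, here is a minimal sketch of the kind of explicit predicate-argument structure an SRL system produces. The sentence, the role labels, and the dictionary layout are illustrative assumptions (PropBank-style ARG0/ARG1/ARG2 notation), not output from the paper's model.

```python
# Hypothetical SRL frame for: "The chef gave the guest a dessert."
# Each role maps a labeled argument span to the predicate "gave".
srl_frame = {
    "predicate": "gave",
    "ARG0": "The chef",    # who did it (agent)
    "ARG1": "a dessert",   # what was given (theme)
    "ARG2": "the guest",   # to whom (recipient)
}

# "Who did what to whom" falls directly out of the labeled roles:
summary = (f"{srl_frame['ARG0']} {srl_frame['predicate']} "
           f"{srl_frame['ARG1']} to {srl_frame['ARG2']}")
print(summary)
```

This explicit structure is what the paper argues LLMs' implicit representations lack, and what the proposed framework preserves while speeding up inference.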