ArXiv TLDR

Revisiting Code Debloating with Ground Truth-based Evaluation

arXiv: 2604.17717

Muhammad Bilal, Moiz Ali, Mohit Kumar, Fareed Zaffar, Fahad Shaon + 2 more

cs.SE

TLDR

A new ground-truth evaluation of code debloating tools reveals that dynamic methods remove code that is still needed, while static methods retain code that is not.

Key contributions

  • Introduces a ground-truth evaluation paradigm for application-level code debloating (a metric sketch follows this list).
  • Analyzes eight state-of-the-art debloaters spanning source-to-source, IR-to-IR, and binary-to-binary transformations.
  • Finds that dynamic-analysis tools falsely remove up to 94% of code that should be retained.
  • Reveals that static-analysis tools exhibit high false retention rates and may even add code via specialized function variants.
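
To make the metrics concrete, here is a minimal sketch of a ground-truth comparison of this kind. The paper's exact definitions are not reproduced in this summary, so the set-based formulation and the names `required` and `kept` are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, assuming ground truth is a set of code elements (e.g.,
# functions) the workload actually needs, and the debloater's output is the
# set of elements it retained. The metric names are illustrative.

def debloat_metrics(required: set[str], kept: set[str]) -> dict[str, float]:
    """Score a debloater's output against ground truth."""
    falsely_removed = required - kept    # needed but deleted
    falsely_retained = kept - required   # unneeded but kept
    return {
        # share of needed code the tool wrongly removed
        "false_removal_rate": len(falsely_removed) / len(required) if required else 0.0,
        # share of retained code that was never needed
        "false_retention_rate": len(falsely_retained) / len(kept) if kept else 0.0,
    }

if __name__ == "__main__":
    required = {"parse", "decode", "render", "log"}
    kept = {"parse", "render"}           # an over-aggressive (dynamic-style) result
    print(debloat_metrics(required, kept))
    # -> {'false_removal_rate': 0.5, 'false_retention_rate': 0.0}
```

On this toy input, the over-aggressive output scores a 50% false removal rate with no false retention, mirroring the dynamic-tool failure mode reported above; a static-style result would tilt the opposite way.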

Why it matters

Current debloating evaluations rely on imperfect proxies, obscuring how well the tools actually perform. This paper introduces a ground-truth methodology and uses it to expose critical flaws in existing dynamic and static debloating tools: the resulting false removals and false retentions can lead to functional errors, robustness failures, and security vulnerabilities.

Original Abstract

Program debloating aims to remove unused code to reduce performance overhead, attack surfaces, and maintenance costs. Over time, debloating has evolved across multiple layers (container, library, and application), each building on the principles of application-level debloating. Despite its central role, application-level debloating continues to rely on imperfect proxies for measuring performance, such as test-case-driven evaluation for correctness, code size for runtime efficiency, and gadget count reduction for estimating security posture. While there is widespread skepticism about using such imperfect proxies, the community still lacks standardized methodologies or benchmarks to assess the true performance of application-level software debloating. This experience paper aims to address the gap. We revisit the foundations of application-level debloating through a ground-truth-based evaluation paradigm. Our analysis of eight state-of-the-art debloaters - Blade, Chisel, Cov, CovA, Lmcas, Trimmer, Occam, and Razor - uncovers insights previously unattainable through traditional evaluations. These tools collectively span the spectrum of source-to-source, IR-to-IR, and binary-to-binary transformation paradigms, characterizing a holistic reassessment across abstraction levels. Our analysis reveals that while dynamic analysis-based tools often remove up to 94% of code that should be retained, static analysis-based approaches exhibit the opposite behavior, showing high false retention rates due to coarse-grained dependency over-approximation. Additionally, static analyses may add code by introducing specialized variants of functions. False retentions and removals not only cause functional incorrectness but may also lead to systematic inconsistency, robustness failures, and exploitable vulnerabilities.
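
The coarse-grained over-approximation the abstract attributes to static tools can be illustrated with a toy reachability analysis. Everything below (the call graph, the function names, and the rule that an unresolved indirect call may target any address-taken function) is an assumed, simplified example, not the analyzed tools' actual algorithms.

```python
# Toy illustration of coarse static reachability: because the analysis cannot
# resolve the indirect call in `dispatch`, it conservatively keeps every
# address-taken function, retaining code the ground truth says is unneeded.

CALL_GRAPH = {                     # direct call edges of a hypothetical program
    "main": ["dispatch"],
    "dispatch": [],                # calls a handler through a function pointer
    "handle_png": [], "handle_jpg": [], "handle_gif": [],
}
ADDRESS_TAKEN = {"handle_png", "handle_jpg", "handle_gif"}

def coarse_reachable(root: str) -> set[str]:
    """Sound-but-imprecise reachability: an unresolved indirect call is
    assumed to target every address-taken function."""
    seen, work = set(), [root]
    while work:
        fn = work.pop()
        if fn in seen:
            continue
        seen.add(fn)
        work += CALL_GRAPH[fn]
        if fn == "dispatch":       # the indirect call site
            work += ADDRESS_TAKEN  # over-approximate its possible targets
    return seen

ground_truth = {"main", "dispatch", "handle_png"}  # workload only opens PNGs
kept = coarse_reachable("main")
print(sorted(kept - ground_truth))  # -> ['handle_gif', 'handle_jpg'] (false retention)
```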
