arXiv TLDR

Carbon-Taxed Transformers: A Green Compression Pipeline for Overgrown Language Models

arXiv: 2604.25903

Ajmain Inqiad Alam, Palash Roy, Chanchal K. Roy, Banani Roy, Kevin A. Schneider

cs.SE, cs.LG

TLDR

Carbon-Taxed Transformers (CTT) is a green compression pipeline that drastically reduces the memory footprint, inference time, and CO2 emissions of LLMs on SE tasks while preserving accuracy.

Key contributions

  • Introduces Carbon-Taxed Transformers (CTT), a multi-architectural compression pipeline for LLMs in software engineering.
  • Achieves up to 49x memory reduction, 3-10x faster inference, and up to 81% lower CO2 emissions.
  • Maintains high accuracy, retaining roughly 98% on code clone detection and 89% on code summarization.
  • Ablation studies confirm that both CTT's pipeline ordering and its individual compression components are essential (a rough sketch of such a staged pipeline follows this list).
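
This summary does not name the individual compression stages CTT chains together, so the sketch below is purely illustrative: it composes three stages commonly used in LLM compression (distillation, pruning, quantization) behind a single `compress` call, with invented reduction factors. The stage names, factors, and interface are assumptions, not the paper's design.

```python
from typing import Callable, List

# A "model" here is just a dict of deployment stats; real stages would
# transform actual weights. All reduction factors below are made up.
Stage = Callable[[dict], dict]

def distill(model: dict) -> dict:
    # Train a smaller student model (assumed ~2x memory reduction).
    return {**model, "mem_gb": model["mem_gb"] / 2, "acc": model["acc"] * 0.99}

def prune(model: dict) -> dict:
    # Drop low-magnitude weights (assumed ~1.5x reduction).
    return {**model, "mem_gb": model["mem_gb"] / 1.5, "acc": model["acc"] * 0.995}

def quantize(model: dict) -> dict:
    # 16-bit -> 4-bit weights (assumed ~4x memory reduction).
    return {**model, "mem_gb": model["mem_gb"] / 4, "acc": model["acc"] * 0.99}

def compress(model: dict, pipeline: List[Stage]) -> dict:
    # Stages apply left to right; the point of the paper's first ablation
    # is that this ordering changes the final accuracy/efficiency trade-off.
    for stage in pipeline:
        model = stage(model)
    return model

base = {"mem_gb": 24.0, "acc": 1.0}
print(compress(base, [distill, prune, quantize]))
# -> {'mem_gb': 2.0, 'acc': 0.9751...}
```

Reordering the list passed to `compress` models exactly the kind of pipeline-ordering experiment the first ablation study examines.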

Why it matters

This paper offers a concrete response to the unsustainable computational cost and environmental impact of large language models. CTT provides a viable path for deploying efficient, green, and still-performant AI in software engineering, promoting responsible AI development.

Original Abstract

The accelerating adoption of Large Language Models (LLMs) in software engineering (SE) has brought with it a silent crisis: unsustainable computational cost. While these models demonstrate remarkable capabilities in different SE tasks, they are unmanageably large, slow to deploy, memory-intensive, and carbon-heavy. This reality threatens not only the scalability and accessibility of AI-powered SE, but also its long-term environmental sustainability. The research challenge is clear: we must go beyond accuracy and address efficiency and environmental cost as first-class design constraints. To meet this challenge, we introduce Carbon-Taxed Transformers (CTT), a systematic, principled multi-architectural compression pipeline whose ordering is inspired by economic carbon taxation. Drawing from the economic concept of carbon pricing, CTT operationalizes a computational carbon tax that penalizes architectural inefficiencies and rewards deployment-ready compression. We evaluate CTT across three core SE tasks: code clone detection, code summarization, and code generation, with models spanning encoder-only, encoder-decoder, and decoder-only architectures. Our results show that CTT delivers substantial inference gains: (1) up to 49x memory reduction; (2) inference time reduction of up to 8-10x for clone detection, up to 3x for summarization, and 4-7x for generation; (3) up to 81% reduction in CO2 emissions; and (4) accuracy retention of around 98% on clone detection, around 89% on summarization, and up to 91% (textual metrics) and 68% (pass@1) for generation. Two ablation studies show that pipeline ordering and individual component contributions are both essential, providing empirical justification for CTT's design and effectiveness. This work establishes a viable path toward responsible AI in SE through aggressive yet performance-preserving compression.
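
The abstract's central idea is a "computational carbon tax" that prices architectural inefficiency into the compression process. The paper's exact formulation is not reproduced in this summary, so the scoring function below is a minimal sketch of one way such a tax could work; the linear penalty, the `tax_rate` default, and the normalized cost inputs are all assumptions.

```python
def carbon_taxed_score(acc_retained: float, mem: float,
                       latency: float, co2: float,
                       tax_rate: float = 0.1) -> float:
    """Reward retained accuracy and tax resource use (illustrative only).

    acc_retained: fraction of the base model's accuracy kept (0..1).
    mem, latency, co2: resource costs, normalized so the base model = 1.0.
    tax_rate: price per unit of combined resource cost, like a carbon price.
    """
    return acc_retained - tax_rate * (mem + latency + co2)

# Base model: full accuracy, full resource bill.
base = carbon_taxed_score(1.00, mem=1.0, latency=1.0, co2=1.0)

# Compressed model, plugging in the paper's headline figures: 49x less
# memory, 10x faster, 81% lower emissions, ~98% accuracy retained.
compressed = carbon_taxed_score(0.98, mem=1 / 49, latency=1 / 10, co2=0.19)

print(f"base: {base:.3f}  compressed: {compressed:.3f}")  # 0.700 vs 0.949
```

Under this kind of objective, a heavily compressed model outscores the original even after a small accuracy drop, because its "tax bill" is far lower; that is the intuition behind penalizing inefficiency and rewarding deployment-ready compression.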
