Evaluating the Environmental Impact of using SLMs and Prompt Engineering for Code Generation
Md Afif Al Mamun, Sayan Nath, Gias Uddin, Novarun Deb
TLDR
This paper studies the environmental impact of prompt engineering on SLM code generation, finding that sustainability often decouples from accuracy, so energy and carbon costs can be reduced without sacrificing performance.
Key contributions
- First systematic study on prompt engineering's environmental impact on SLM code generation.
- Evaluated 6 prompting strategies across 11 SLMs, measuring accuracy, energy, and carbon.
- Sustainability often decouples from accuracy, allowing significant environmental optimizations.
- Chain-of-Thought balances reasoning and energy efficiency; multi-sampling is often inefficient.
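The accuracy metric used throughout the study is Pass@1. As a point of reference, the standard unbiased pass@k estimator from the HumanEval benchmark (which Pass@1 is a special case of) can be sketched as follows; the sample counts in the example are illustrative, not figures from the paper:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (standard HumanEval formulation):
    probability that at least one of k completions drawn from n
    total samples (of which c pass the tests) is correct."""
    if n - c < k:
        return 1.0  # too few failing samples to fill a draw of k
    return 1.0 - comb(n - c, k) / comb(n, k)

# Pass@1 reduces to the fraction of correct samples:
print(pass_at_k(n=10, c=3, k=1))  # 0.3
```

For k=1 the estimator is simply c/n, which is why single-sample Pass@1 evaluation is also the cheapest in energy terms, a point the paper's critique of multi-sampling strategies builds on.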
Why it matters
This paper provides the first quantitative foundation for "green" prompt engineering, enabling developers to make environmentally responsible choices for AI-assisted coding. Understanding these impacts matters because local SLM deployment shifts AI's environmental footprint from centralized data centers to individual developers' machines.
Original Abstract
The shift from cloud-hosted Large Language Models (LLMs) to locally deployed open-source Small Language Models (SLMs) has democratized AI-assisted coding; however, it has also decentralized the environmental footprint of AI. While prompting strategies - such as Chain-of-Thought and ReAct - serve as external mechanisms for optimizing code generation without modifying model parameters, their impact on energy consumption and carbon emissions remains largely invisible to developers. This paper presents the first systematic empirical study investigating how different prompt engineering strategies in SLM-based code generation impact code generation accuracy alongside sustainability factors. We evaluate six prominent prompting strategies across 11 open-source models (ranging from 1B to 34B parameters) using the HumanEval+ and MBPP+ benchmarks. By measuring Pass@1 accuracy alongside energy (kWh), carbon emissions (kgCO2eq), and inference latency, we reveal that sustainability often decouples from accuracy, allowing significant environmental optimizations without sacrificing performance. Our findings indicate that Chain-of-Thought, being a simpler prompting technique, can provide a near-optimal balance between reasoning capability and energy efficiency. Conversely, multi-sampling strategies often incur disproportionate costs for marginal gains. Finally, we identify grid carbon intensity as the dominant factor in deployment-time emissions, highlighting the need for practitioners to consider regional energy profiles. This work provides a quantitative foundation for "green" prompt engineering, enabling developers to align high-performance code generation with ecological responsibility.
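The abstract's final finding, that grid carbon intensity dominates deployment-time emissions, follows from the basic conversion: emissions (kgCO2eq) = energy consumed (kWh) × grid carbon intensity (kgCO2eq/kWh). A minimal sketch, using illustrative intensity values that are assumptions for demonstration, not numbers from the paper:

```python
def emissions_kgco2eq(energy_kwh: float, grid_intensity: float) -> float:
    """Deployment-time emissions: energy consumed (kWh) multiplied by
    the grid's carbon intensity (kgCO2eq per kWh)."""
    return energy_kwh * grid_intensity

# Hypothetical grid intensities (kgCO2eq/kWh), for illustration only.
grids = {"low-carbon grid": 0.05, "coal-heavy grid": 0.80}

# The same 0.5 kWh inference workload emits 16x more on the dirtier grid.
for region, intensity in grids.items():
    print(f"{region}: {emissions_kgco2eq(0.5, intensity):.3f} kgCO2eq")
```

Because the multiplier varies by an order of magnitude or more across regions, where a model runs can outweigh which prompting strategy it uses, which is why the authors urge practitioners to consider regional energy profiles.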