HintPilot: LLM-based Compiler Hint Synthesis for Code Optimization
Hanyun Jiang, Peisen Yao, Kaiyue Li, Tingting Lin, Chengpeng Wang, and one additional author
TLDR
HintPilot uses LLMs to synthesize compiler hints, improving code optimization and achieving significant speedups without introducing semantic errors.
Key contributions
- Synthesizes compiler hints using LLMs to guide traditional compiler optimizations.
- Employs retrieval-augmented synthesis and profiling-guided iterative refinement.
- Achieves up to 6.88x geometric mean speedup over -Ofast on benchmarks.
- Preserves program correctness while optimizing code effectively.
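To make "compiler hints" concrete, here is a minimal sketch of the kind of source-level annotation the approach synthesizes. These particular hints (`__restrict__` and Clang's loop-vectorization pragma) are standard Clang/GCC constructs chosen for illustration; they are not taken from the paper, and HintPilot's actual hint vocabulary may differ.

```cpp
#include <cstddef>

// A hypothetical hint-annotated kernel: the annotations steer the
// optimizer but do not change the program's semantics.
void saxpy(float *__restrict__ y, const float *__restrict__ x,
           float a, std::size_t n) {
    // __restrict__ promises x and y do not alias, unlocking vectorization;
    // the pragma asks Clang's loop vectorizer to vectorize and interleave.
    // (Compilers that don't recognize the pragma simply ignore it.)
    #pragma clang loop vectorize(enable) interleave(enable)
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```

Because hints like these only constrain or inform the optimizer, a wrong hint degrades performance rather than correctness, which is the safety argument for hint synthesis over direct LLM code rewriting.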
Why it matters
Modern compilers struggle to navigate vast optimization spaces, and rewriting source code directly with LLMs can introduce semantic errors. HintPilot offers a safer alternative: using LLMs to generate compiler hints that steer existing compiler optimizations. This approach delivers substantial speedups while preserving correctness, addressing a long-standing challenge in software development.
Original Abstract
Code optimization remains a core objective in software development, yet modern compilers struggle to navigate the enormous optimization spaces. While recent research has looked into employing large language models (LLMs) to optimize source code directly, these techniques can introduce semantic errors and miss fine-grained compiler-level optimization opportunities. We present HintPilot, which bridges LLM-based reasoning with traditional compiler infrastructures by synthesizing compiler hints: annotations that steer compiler behavior. HintPilot employs retrieval-augmented synthesis over compiler documentation and applies profiling-guided iterative refinement to synthesize semantics-preserving and effective hints. On the PolyBench and HumanEval-CPP benchmarks, HintPilot achieves up to a 6.88x geometric mean speedup over -Ofast while preserving program correctness.