ArXiv TLDR

DiffHLS: Differential Learning for High-Level Synthesis QoR Prediction with GNNs and LLM Code Embeddings

arXiv: 2604.09240

Zedong Peng, Zeju Li, Qiang Xu, Jieru Zhao

cs.LG

TLDR

DiffHLS uses differential learning with GNNs and LLM code embeddings to predict High-Level Synthesis Quality-of-Result accurately without running synthesis for each design point.

Key contributions

  • Introduces DiffHLS, a differential learning framework for HLS QoR prediction.
  • Learns from kernel-design pairs, jointly predicting the kernel baseline and the design-induced delta rather than regressing absolute QoR directly (see the sketch after this list).
  • Combines GNNs for IR graph encoding with LLM code embeddings for the delta pathway.
  • Outperforms GNN baselines on PolyBench and demonstrates scalability on ForgeHLS.
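
To make the two-branch design described above concrete, here is a minimal sketch of how such a differential predictor could be wired up in PyTorch. It is an illustration of the idea, not the authors' implementation: the class names, the choice of GCN layers with mean pooling, the hidden size, and treating the LLM code embedding as a precomputed input vector are all assumptions.

```python
# Hypothetical sketch of a DiffHLS-style differential QoR predictor.
# Assumed details (not from the paper): GCN layers, mean pooling, hidden size 128,
# and LLM code embeddings supplied as precomputed tensors.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool


class GraphEncoder(nn.Module):
    """Encodes an IR graph (node features, edges) into a single vector."""

    def __init__(self, in_dim: int, hidden: int = 128):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)

    def forward(self, x, edge_index, batch):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        return global_mean_pool(h, batch)  # [num_graphs, hidden]


class DiffQoRModel(nn.Module):
    """Predicts a kernel baseline and a design-induced delta, then composes them."""

    def __init__(self, node_dim: int, llm_dim: int, hidden: int = 128):
        super().__init__()
        self.kernel_enc = GraphEncoder(node_dim, hidden)   # kernel IR branch
        self.design_enc = GraphEncoder(node_dim, hidden)   # pragma-inserted design branch
        self.baseline_head = nn.Linear(hidden, 1)
        # The delta pathway sees both graph encodings plus the LLM code embedding.
        self.delta_head = nn.Sequential(
            nn.Linear(2 * hidden + llm_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, kernel_graph, design_graph, llm_emb):
        h_k = self.kernel_enc(*kernel_graph)      # kernel graph embedding
        h_d = self.design_enc(*design_graph)      # design graph embedding
        baseline = self.baseline_head(h_k)        # predicted kernel QoR
        delta = self.delta_head(torch.cat([h_k, h_d, llm_emb], dim=-1))
        return baseline, delta, baseline + delta  # composed design prediction
```

Training such a model would supervise both outputs, e.g. a regression loss on the predicted baseline and another on the predicted delta, so that their sum matches the synthesized design's QoR; the exact objective and loss weighting are not given in the summary above.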

Why it matters

Exploring pragma-driven High-Level Synthesis (HLS) optimizations is slow and expensive because every candidate design must be synthesized to evaluate it. DiffHLS predicts Quality-of-Result accurately without those synthesis runs, significantly accelerating design space exploration and making HLS more practical for complex hardware development.

Original Abstract

High-Level Synthesis (HLS) compiles C/C++ into RTL, but exploring pragma-driven optimization choices remains expensive because each design point requires time-consuming synthesis. We propose DiffHLS, a differential learning framework for HLS Quality-of-Result (QoR) prediction that learns from kernel-design pairs: a kernel baseline and a pragma-inserted design variant. DiffHLS encodes kernel and design intermediate-representation graphs with dedicated graph neural network (GNN) branches, and augments the delta pathway with code embeddings from a pretrained code large language model (LLM). Instead of regressing absolute targets directly, we jointly predict the kernel baseline and the design-induced delta, and compose them to obtain the design prediction. On PolyBench, DiffHLS attains lower average MAPE than GNN baselines under four GNN backbones, and LLM code embeddings consistently improve over a GNN-only ablation. We further validate scalability on the ForgeHLS dataset.
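
The "predict the baseline and the delta, then compose" sentence in the abstract can be written out explicitly. The formulation below is one plausible reading of that sentence rather than an equation taken from the paper, and the weight lambda on the delta term is purely illustrative.

```latex
% y_k: kernel-baseline QoR, y_design: design QoR, hats denote predictions.
\hat{y}_{\text{design}} = \hat{y}_{k} + \hat{\Delta},
\qquad
\mathcal{L} = \ell\big(\hat{y}_{k},\, y_{k}\big)
  + \lambda\, \ell\big(\hat{\Delta},\, y_{\text{design}} - y_{k}\big)
```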
