ArXiv TLDR

Secret Stealing Attacks on Local LLM Fine-Tuning through Supply-Chain Model Code Backdoors

🐦 Tweet
2604.27426

Zi Li, Tian Zhou, Wenze Li, Jingyu Hua, Yunlong Mao + 1 more

cs.CRcs.AI

TLDR

A new attack method exploits backdoored model code to steal sensitive secrets from local LLM fine-tuning, bypassing current defenses.

Key contributions

  • Identifies a supply-chain vector: backdoored model code camouflaged as architectural definitions.
  • Introduces a deterministic full-chain memorization mechanism for token-level secret stealing.
  • Achieves attacker-verifiable secret stealing via black-box queries, distinguishing true leakage.
  • Demonstrates over 98% attack success rate, bypassing DP-SGD, semantic, and code auditing.

Why it matters

This paper exposes a critical vulnerability in local LLM fine-tuning, challenging the assumption of privacy for offline training. It shifts the threat model from passive weight poisoning to active execution hijacking. The findings highlight the urgent need for enhanced supply-chain security and new defense mechanisms for LLM development.

Original Abstract

Local fine-tuning datasets routinely contain sensitive secrets such as API keys, personal identifiers, and financial records. Although ''local offline fine-tuning'' is often viewed as a privacy boundary, we reveal that compromised model code is sufficient to steal them. Current passive pretrained-weight poisoning attacks, while effective for natural language, fundamentally fail to capture such sparse high-entropy targets due to their reliance on probabilistic semantic prefixes. To bridge this gap, we identify and exploit a practical but overlooked supply-chain vector -- model code camouflaged as standard architectural definitions -- to realize a paradigm shift from passive weight poisoning to active execution hijacking. We introduce a deterministic full-chain memorization mechanism: it locks onto token-level secrets in dynamic computation flows via online tensor-rule matching, and leverages value-gradient decoupling to stealthily inject attack gradients, overcoming gradient drowning to force model memorization. Furthermore, we achieve, for the first time, attacker-verifiable secret stealing through black-box queries that precisely distinguishes true leakage from hallucination. Experiments demonstrate that our method achieves over 98\% Strict ASR without compromising the primary task, and can effectively bypass defense measures including DP-SGD, semantic auditing, and code auditing.

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.