ArXiv TLDR

LoREnc: Low-Rank Encryption for Securing Foundation Models and LoRA Adapters

arXiv: 2605.13163

Beomjin Ahn, Jungmin Kwon, Chanyong Jung, Jaewook Chung

cs.CR · cs.CV · cs.LG

TLDR

LoREnc is a training-free framework that secures foundation models (FMs) and LoRA adapters against intellectual-property leakage and model recovery attacks with minimal overhead.

Key contributions

  • Secures FMs and LoRA adapters without retraining or original data.
  • Employs spectral truncation and compensation for weight protection.
  • Uses orthogonal reparameterization to obscure adapter fingerprints.
  • Achieves strong model recovery protection with under 1% computational overhead.
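The spectral truncation and compensation idea in the contributions above can be sketched in a few lines. This is a hypothetical NumPy illustration under assumptions I'm making from the abstract (the variable names and rank choice are mine, not from the paper's code): the top-k singular components of a weight matrix are removed from the released model, and the removed information is stored as a low-rank compensation adapter that only authorized users receive.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))   # stand-in for a foundation-model weight matrix

k = 8  # number of dominant singular components to suppress (assumed)
U, s, Vt = np.linalg.svd(W, full_matrices=False)

# Released (protected) weight: dominant low-rank structure removed,
# so unauthorized use yields degraded outputs.
W_protected = (U[:, k:] * s[k:]) @ Vt[k:, :]

# Compensation adapter: rank-k factors holding the suppressed information.
B_comp = U[:, :k] * s[:k]           # shape (64, k)
A_comp = Vt[:k, :]                  # shape (k, 64)

# Authorized users add the compensation back and recover W exactly.
W_recovered = W_protected + B_comp @ A_comp
assert np.allclose(W_recovered, W)
```

Because the compensation term has rank k, it can be shipped and merged exactly like an ordinary LoRA adapter, which is presumably why the scheme incurs so little overhead.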

Why it matters

Existing defenses are impractical because they require retraining or access to the original training data. LoREnc offers a practical, training-free alternative for securing generative AI models and adapters, which matters for protecting intellectual property and preventing model recovery attacks in on-device AI.

Original Abstract

Foundation models and low-rank adapters enable efficient on-device generative AI but raise risks such as intellectual property leakage and model recovery attacks. Existing defenses are often impractical because they require retraining or access to the original dataset. We propose LoREnc, a training-free framework that secures both FMs and adapters via spectral truncation and compensation. LoREnc suppresses dominant low-rank components of FM weights, compensates for the missing information in authorized adapters, and further applies orthogonal reparameterization to obscure structural fingerprints of the protected adapter. Unauthorized users produce structurally collapsed outputs, while authorized users recover exact performance. Experiments demonstrate that LoREnc provides strong protection against model recovery with under 1% computational overhead.
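The orthogonal reparameterization step mentioned in the abstract relies on a simple identity: for a LoRA update ΔW = BA, multiplying B by an orthogonal matrix Q and A by Qᵀ leaves the product unchanged while making the individual factors unrecognizable. A minimal sketch, with all dimensions and names assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 64, 4
B = rng.standard_normal((d, r))     # LoRA "up" factor
A = rng.standard_normal((r, d))     # LoRA "down" factor

# Random r-by-r orthogonal matrix via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.standard_normal((r, r)))

# Reparameterized factors: each differs from the original,
# but the merged delta weight B @ A is exactly preserved.
B_obs = B @ Q
A_obs = Q.T @ A

assert np.allclose(B_obs @ A_obs, B @ A)   # authorized behavior unchanged
assert not np.allclose(B_obs, B)           # structural fingerprint obscured
```

This kind of invariance is what lets the protected adapter hide its structural fingerprint without any retraining: only the factorization changes, not the function the adapter computes.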

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.