ArXiv TLDR

LOFT: Low-Rank Orthogonal Fine-Tuning via Task-Aware Support Selection

arXiv:2605.11872

Lanxin Zhao, Bamdev Mishra, Pratik Jawanpuria, Lequan Lin, Dai Shi + 2 more

cs.LG · stat.ML

TLDR

LOFT is a low-rank orthogonal fine-tuning framework that decouples the adaptation subspace from the transformation applied within it, improving the PEFT efficiency-performance trade-off via task-aware support selection.

Key contributions

  • LOFT explicitly separates the adaptation subspace from the transformation applied within it in orthogonal PEFT.
  • Unifies existing orthogonal PEFT methods under a multiplicative subspace-rotation formulation.
  • Introduces task-aware support selection based on a first-order analysis of the downstream training signal.
  • Improves the efficiency-performance trade-off across diverse tasks using gradient-informed supports.
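The subspace-rotation view in the first two bullets can be sketched concretely. A minimal NumPy illustration is below; the names (`U`, `R`, `Q`) and the Cayley parameterization are chosen here for clarity and are not taken from the paper: `U` is the adaptation subspace (the "support"), `R` is the transformation applied within it, and the resulting `Q` rotates the weight only inside `span(U)` while leaving its orthogonal complement fixed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 64, 4  # weight dimension and adaptation rank (illustrative)

# Support: an orthonormal basis U (n x r) spanning the adaptation subspace.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))

# Transformation: an r x r rotation R, here parameterized by a Cayley
# transform of a skew-symmetric matrix (one common orthogonal parameterization).
A = rng.standard_normal((r, r))
A = A - A.T
R = np.linalg.solve(np.eye(r) + A, np.eye(r) - A)

# Multiplicative subspace rotation: identity outside span(U), R inside it.
Q = np.eye(n) + U @ (R - np.eye(r)) @ U.T

W = rng.standard_normal((n, n))  # stand-in for a pretrained weight
W_adapted = Q @ W                # structure-preserving multiplicative update

# Q is orthogonal, so the adapted weight keeps W's singular values.
assert np.allclose(Q @ Q.T, np.eye(n), atol=1e-8)
assert np.allclose(np.linalg.svd(W_adapted, compute_uv=False),
                   np.linalg.svd(W, compute_uv=False), atol=1e-8)
```

The separation of design choices is visible in the code: swapping out how `U` is chosen changes the support, while swapping out how `R` is parameterized changes the transformation, independently of one another.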

Why it matters

This paper offers a principled way to improve orthogonal PEFT by explicitly separating the adaptation subspace from the transformation applied within it. By introducing task-aware support selection, it improves both efficiency and performance across language, vision, and reasoning tasks, and highlights support selection as an important direction for advancing parameter-efficient fine-tuning.

Original Abstract

Orthogonal parameter-efficient fine-tuning (PEFT) adapts pretrained weights through structure-preserving multiplicative transformations, but existing methods often conflate two distinct design choices: the subspace in which adaptation occurs and the transformation applied within that subspace. This paper introduces LOFT, a low-rank orthogonal fine-tuning framework that explicitly separates these two components. By viewing orthogonal adaptation as a multiplicative subspace rotation, LOFT provides a unified formulation that recovers representative orthogonal PEFT methods, including coordinate-, butterfly-, Householder-, and principal-subspace-based variants. More importantly, this perspective exposes support selection as a central design axis rather than a byproduct of a particular parameterization. We develop a first-order analysis showing that useful adaptation supports should be informed by the downstream training signal, motivating practical task-aware support selection strategies. Across language understanding, visual transfer, mathematical reasoning, and multilingual out-of-distribution adaptation, LOFT recovers principal-subspace orthogonal adaptation while gradient-informed supports improve the efficiency-performance trade-off under matched parameter, memory, and compute budgets. These results suggest that principled support selection is an important direction for improving orthogonal PEFT.
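The abstract motivates choosing supports from the downstream training signal. One plausible reading of "gradient-informed supports", shown purely as an illustrative heuristic rather than the paper's exact criterion, is to take the top singular directions of the task gradient as the adaptation subspace, so the rank-r support captures as much first-order signal energy as possible:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 64, 4  # weight dimension and support rank (illustrative)

# Stand-in for an accumulated downstream gradient dL/dW on a pretrained
# weight; the decaying spectrum mimics gradients concentrated in a few
# directions.
G = (rng.standard_normal((n, n))
     @ np.diag(np.logspace(0, -3, n))
     @ rng.standard_normal((n, n)))

# Gradient-informed support: the top-r left singular vectors of G span the
# subspace where the training signal is strongest (illustrative heuristic).
U_full, s, _ = np.linalg.svd(G)
U = U_full[:, :r]

# Fraction of first-order signal energy captured by the rank-r support.
captured = (s[:r] ** 2).sum() / (s ** 2).sum()
assert U.shape == (n, r)
assert 0.0 < captured <= 1.0
```

Under this reading, a task-agnostic support (e.g. principal directions of the pretrained weight itself) would ignore `G` entirely, which is exactly the distinction the abstract draws between principal-subspace and task-aware selection.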
