
PRIME: Ultra-Low-Rank Principal-Residual Model Merging

Seung-Ho Lee, Kyungsu Lee, Bazarvaani Zuchi, Jeongmin Ahn, Insuk Seo, Donghyeon Jeon, Inho Kang, and Seung-Hoon Na*

Seung-Ho Lee, Kyungsu Lee, Bazarvaani Zuchi, Jeongmin Ahn, Insuk Seo, Donghyeon Jeon, Inho Kang, and Seung-Hoon Na. "PRIME: Ultra-Low-Rank Principal-Residual Model Merging," ACL 2026, 2026.


Abstract

Model merging has emerged as an effective approach for integrating multiple task-specific fine-tuned models into a single unified model without requiring additional data-intensive training. A central challenge in model merging is to reduce task interference while preserving the task-specific capabilities of the original models. In this work, we propose PRIME, an ultra-low-rank principal-residual model merging framework that decomposes task vector merging into two complementary stages. First, ultra-low-rank principal task vector merging retains only a small fraction of singular vectors, effectively reducing task interference while preserving most of the task-specific performance. Second, orthogonal residual task vector merging incorporates the remaining components by projecting them onto the null space of the principal subspace, thereby avoiding interference while recovering additional task-relevant information. Extensive experiments on eight natural language processing tasks demonstrate that PRIME consistently outperforms existing model merging methods, achieving improvements of up to 1.18% on T5 and 1.9% on LLaMA-3.2-3B.
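The two stages described above can be sketched in NumPy. This is a minimal illustration of the general idea (per-matrix SVD truncation for the principal part, then projecting residuals onto the null space of the merged principal subspace), not the paper's actual implementation; the function name, `rank`, and the scaling knobs `alpha`/`beta` are all hypothetical.

```python
import numpy as np

def prime_style_merge(task_vectors, rank=2, alpha=1.0, beta=1.0):
    """Hypothetical sketch of two-stage principal-residual merging for a
    single weight matrix. `rank`, `alpha`, and `beta` are illustrative
    knobs, not hyperparameters from the paper."""
    # Stage 1: keep only the top-`rank` singular directions of each task
    # vector (the ultra-low-rank "principal" component), and merge those.
    principals, residuals = [], []
    for tv in task_vectors:
        U, S, Vt = np.linalg.svd(tv, full_matrices=False)
        principal = U[:, :rank] @ np.diag(S[:rank]) @ Vt[:rank, :]
        principals.append(principal)
        residuals.append(tv - principal)
    merged_principal = sum(principals)

    # Stage 2: project each residual onto the null space of the merged
    # principal's column space before adding it back, so the residuals
    # cannot interfere with the principal subspace.
    Up, Sp, _ = np.linalg.svd(merged_principal, full_matrices=False)
    k = int((Sp > 1e-10).sum())   # numerical rank of the principal part
    basis = Up[:, :k]             # orthonormal basis of its column space
    proj_null = np.eye(merged_principal.shape[0]) - basis @ basis.T
    merged_residual = sum(proj_null @ r for r in residuals)

    return alpha * merged_principal + beta * merged_residual
```

With `rank` equal to the full rank of the matrices, the principal part reconstructs each task vector exactly and the residuals vanish, so the merge reduces to a plain sum of task vectors; shrinking `rank` is what trades interference against retained task information.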