A Causal Language Modeling Detour Improves Encoder Continued Pretraining
Rian Touchent, Eric de la Clergerie
TLDR
A temporary Causal Language Modeling phase followed by a short MLM decay improves encoder continued pretraining, outperforming standard MLM on French and English biomedical benchmarks at identical data and compute.
Key contributions
- Proposes a "CLM detour" for encoder continued pretraining, outperforming standard Masked Language Modeling (MLM).
- Achieves +1.2-2.8pp (French) and +0.3-0.8pp (English) gains on biomedical tasks.
- Finds that CLM's dense supervision primarily reshapes the low transformer layers (0-7); freezing them during the CLM phase eliminates the downstream benefit (see the freezing sketch after this list).
- Releases ModernCamemBERT-bio and ModernBERT-bio, new state-of-the-art biomedical encoders.
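The freezing ablation referenced above is mechanically simple to reproduce. Below is a minimal sketch, not the authors' code: it assumes Hugging Face-style parameter names containing a `layers.<idx>.` segment (verify this on your checkpoint), and the example index ranges are illustrative.

```python
import torch.nn as nn

def freeze_blocks(model: nn.Module, block_ids: range) -> None:
    """Disable gradients for the transformer blocks whose index is in block_ids.

    Assumes parameter names contain a 'layers.<idx>.' segment, as in most
    Hugging Face encoder implementations; verify the naming on your checkpoint.
    """
    targets = {f"layers.{i}." for i in block_ids}
    for name, param in model.named_parameters():
        if any(t in name for t in targets):
            param.requires_grad = False

# Paper's ablation during the CLM phase (index ranges are illustrative):
# freeze_blocks(model, range(0, 8))   # low layers frozen -> downstream gains vanish
# freeze_blocks(model, range(8, 16))  # mid layers frozen -> gains preserved
```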
Why it matters
This paper introduces a simple continued-pretraining recipe, a temporary Causal Language Modeling phase followed by a short MLM decay, that improves encoder adaptation to new domains without extra data or compute over standard MLM. The released ModernCamemBERT-bio and ModernBERT-bio models provide state-of-the-art performance for French and English biomedical NLP.
Original Abstract
When adapting an encoder to a new domain, the standard approach is to continue training with Masked Language Modeling (MLM). We show that temporarily switching to Causal Language Modeling (CLM) followed by a short MLM decay improves downstream performance. On biomedical texts with ModernBERT, this CLM detour outperforms MLM baselines trained on identical data and compute across 8 French and 11 English biomedical tasks, by +1.2-2.8pp and +0.3-0.8pp respectively, depending on model size. We investigate the reasons for these gains. We find that CLM's dense supervision impacts low transformer layers (0-7) far more than MLM does. Freezing low layers during CLM eliminates the downstream benefit; freezing mid layers preserves it. The representational changes persist through the MLM decay phase, even when it matches the CLM phase in length, and they scale with model capacity. We release ModernCamemBERT-bio and ModernBERT-bio as state-of-the-art biomedical encoders in Base and Large sizes.
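For concreteness, here is a minimal, self-contained sketch of the two-phase schedule the abstract describes. It uses a toy PyTorch encoder in place of ModernBERT, and the phase lengths, masking rate, and all hyperparameters are placeholders rather than the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-in for the encoder; only the training schedule matters here.
VOCAB, D_MODEL, N_LAYERS, MASK_ID = 1000, 64, 4, 3

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        block = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.layers = nn.TransformerEncoder(block, N_LAYERS)
        self.lm_head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, ids, causal: bool):
        attn_mask = None
        if causal:
            # CLM phase: left-to-right attention, so every position has a
            # next-token target ("dense supervision").
            attn_mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        return self.lm_head(self.layers(self.embed(ids), mask=attn_mask))

def clm_loss(model, ids):
    logits = model(ids[:, :-1], causal=True)
    return F.cross_entropy(logits.reshape(-1, VOCAB), ids[:, 1:].reshape(-1))

def mlm_loss(model, ids, mask_prob=0.15):
    labels = ids.clone()
    masked = torch.rand(ids.shape) < mask_prob
    labels[~masked] = -100                        # score only masked positions
    logits = model(ids.masked_fill(masked, MASK_ID), causal=False)
    return F.cross_entropy(logits.reshape(-1, VOCAB), labels.reshape(-1),
                           ignore_index=-100)

model = TinyEncoder()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
CLM_STEPS, TOTAL_STEPS = 200, 300                 # illustrative phase lengths
for step in range(TOTAL_STEPS):
    batch = torch.randint(4, VOCAB, (8, 32))      # stand-in for domain text batches
    loss = clm_loss(model, batch) if step < CLM_STEPS else mlm_loss(model, batch)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Per the abstract, the representational changes introduced by the CLM phase persist through the MLM decay even when the decay matches the CLM phase in length, so the final model remains a standard MLM-style encoder.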