LASE: Language-Adversarial Speaker Encoding for Indic Cross-Script Identity Preservation
TLDR
LASE is a language-adversarial speaker encoder that preserves speaker identity across Indic scripts, improving multilingual voice cloning.
Key contributions
- Off-the-shelf speaker encoders fail to preserve identity when the same voice changes script, and the failure is accent-conditional, hitting Indic languages hardest.
- Introduces LASE, a small projection head over frozen WavLM-base-plus trained with a supervised contrastive loss over voice identity and a gradient-reversal language loss, yielding language-agnostic yet speaker-informative embeddings (sketched below).
- LASE's residual cross-script identity gap is consistent with zero on both evaluation corpora, and it amplifies the cross-script-vs-floor margin 2.4-2.7x over both baselines.
- Matches ECAPA-TDNN on cross-script speaker recall in synthetic diarisation (0.788 vs 0.789) with ~100x less training data.
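To make the training objective concrete, here is a minimal PyTorch sketch of a gradient-reversal layer combined with a supervised contrastive loss over voice identity. The head sizes, pooling, lambda value, and equal loss weighting are illustrative assumptions, not the paper's released configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda on backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class LASEHead(nn.Module):
    """Projection head over frozen, pooled WavLM-base-plus features, with a
    4-language classifier behind a gradient-reversal layer (sizes assumed)."""

    def __init__(self, feat_dim=768, emb_dim=256, n_langs=4, lam=1.0):
        super().__init__()
        self.lam = lam
        self.proj = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(), nn.Linear(512, emb_dim)
        )
        self.lang_clf = nn.Linear(emb_dim, n_langs)

    def forward(self, pooled_feats):
        # pooled_feats: (batch, feat_dim) utterance-level WavLM features
        emb = F.normalize(self.proj(pooled_feats), dim=-1)
        lang_logits = self.lang_clf(GradReverse.apply(emb, self.lam))
        return emb, lang_logits


def supcon_loss(emb, speaker_ids, tau=0.07):
    """Supervised contrastive loss over voice identity (Khosla et al., 2020)."""
    n = emb.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=emb.device)
    sim = (emb @ emb.t() / tau).masked_fill(eye, float("-inf"))  # drop self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = (speaker_ids.unsqueeze(0) == speaker_ids.unsqueeze(1)) & ~eye
    n_pos = pos.sum(1).clamp(min=1)                      # avoid div-by-zero
    per_anchor = -(log_prob.masked_fill(~pos, 0.0).sum(1) / n_pos)
    return per_anchor[pos.any(1)].mean()                 # anchors with >=1 positive


if __name__ == "__main__":
    head = LASEHead()
    feats = torch.randn(16, 768)        # stand-in for pooled WavLM features
    spk = torch.randint(0, 8, (16,))    # 8 voices, matching the training setup
    lang = torch.randint(0, 4, (16,))   # English / Hindi / Telugu / Tamil
    emb, lang_logits = head(feats)
    # Contrastive identity loss + adversarial (gradient-reversed) language loss.
    loss = supcon_loss(emb, spk) + F.cross_entropy(lang_logits, lang)
    loss.backward()
    print(float(loss))
```

The gradient-reversal layer is an identity in the forward pass, so the language classifier trains normally; the flipped gradient drives the projection to strip language cues from the embedding while the contrastive loss keeps it speaker-informative.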
Why it matters
Multilingual voice cloning often fails to preserve speaker identity when the same voice is rendered in a different script, producing inconsistent output. LASE addresses this by learning language-agnostic speaker embeddings, enabling consistent, high-quality cross-script voice synthesis in multilingual TTS systems.
Original Abstract
A speaker encoder used in multilingual voice cloning should treat the same speaker identically regardless of which script the audio was uttered in. Off-the-shelf encoders do not, and the failure is accent-conditional. On a 1043-pair Western-accented voice corpus across English, Hindi, Telugu, and Tamil, WavLM-base-plus-sv loses 0.082 absolute cosine similarity when the same voice changes script and ECAPA-TDNN loses 0.105. On a 1369-pair Indian-accented voice corpus, the gap shrinks to 0.006 (WavLM-SV) and 0.044 (ECAPA-TDNN). The leak is largest where it matters most for cross-script TTS: when a system projects a non-Indic-trained voice into Indic scripts. We present LASE (Language-Adversarial Speaker Encoder), a small projection head over frozen WavLM-base-plus trained with two losses: a supervised contrastive loss over voice identity, and a gradient-reversal cross-entropy against a 4-language classifier that pushes the embedding to be language-uninformative while remaining speaker-informative. Trained on 1118 quality-gated cross-script pairs synthesised from 8 commercial multilingual voices, LASE's residual gap is consistent with zero on both corpora (Δ = 0.013 Western, Δ = 0.026 Indian; both bootstrap 95% CIs include zero) and amplifies the cross-script-vs-floor margin 2.4-2.7x over both baselines. An ECAPA+GRL ablation shows the GRL objective improves either backbone but the WavLM choice contributes too. In synthetic multi-speaker diarisation, LASE matches ECAPA-TDNN on cross-script speaker recall (0.788 vs 0.789) with ~100x less training data. We release the r1 checkpoint, both corpora, and the bootstrap recipe.
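The "consistent with zero" claim rests on bootstrap 95% confidence intervals over per-pair cosine similarities. Below is a minimal sketch of one plausible recipe, a percentile bootstrap of the gap Δ; the paper's released recipe may differ, and the resampling unit and interval type here are assumptions.

```python
import numpy as np


def bootstrap_gap_ci(same_script, cross_script, n_boot=10_000, seed=0):
    """Percentile-bootstrap 95% CI for Delta = mean same-script cosine
    similarity minus mean cross-script cosine similarity.
    Resampling pairs independently per condition is an assumption here."""
    rng = np.random.default_rng(seed)
    deltas = np.empty(n_boot)
    for i in range(n_boot):
        s = rng.choice(same_script, size=same_script.size, replace=True)
        c = rng.choice(cross_script, size=cross_script.size, replace=True)
        deltas[i] = s.mean() - c.mean()
    return np.percentile(deltas, [2.5, 97.5])
```

If the resulting interval contains zero, the residual cross-script gap is statistically indistinguishable from zero, which is the reading the abstract reports for LASE on both corpora.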