ArXiv TLDR

PSK at SemEval-2026 Task 9: Multilingual Polarization Detection Using Ensemble Gemma Models with Synthetic Data Augmentation

arXiv:2605.05159

Srikar Kashyap Pulipaka

cs.CL cs.AI cs.LG

TLDR

This paper presents an ensemble of per-language fine-tuned Gemma models with synthetic data augmentation for multilingual polarization detection, ranking 2nd overall at SemEval-2026 Task 9.

Key contributions

  • Fine-tuned Gemma models (12B, 27B) per language using LoRA for multilingual polarization detection.
  • Employed three synthetic data strategies with GPT-4o-mini and multi-stage quality filtering.
  • Achieved 0.811 mean macro-F1 across 22 languages, ranking 2nd overall at SemEval-2026.
  • Gained 2-4% F1 from per-language threshold tuning on the development set without retraining, with further gains from weighted ensembling of 12B and 27B model predictions.
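The paper does not publish its tuning code, but per-language threshold tuning is typically a simple grid sweep over the decision threshold on development-set probabilities. The sketch below is a minimal, dependency-free illustration; the function names and grid are assumptions, not the authors' implementation.

```python
def macro_f1(labels, preds):
    """Macro-F1 for binary labels {0, 1}, averaged over both classes."""
    scores = []
    for cls in (0, 1):
        tp = sum(1 for y, p in zip(labels, preds) if y == cls and p == cls)
        fp = sum(1 for y, p in zip(labels, preds) if y != cls and p == cls)
        fn = sum(1 for y, p in zip(labels, preds) if y == cls and p != cls)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

def tune_threshold(probs, labels, grid=None):
    """Sweep candidate thresholds over dev-set positive-class probabilities
    and return the one that maximizes macro-F1 (no retraining needed)."""
    grid = grid or [i / 100 for i in range(5, 96)]
    best_t, best_f1 = 0.5, -1.0
    for t in grid:
        preds = [1 if p >= t else 0 for p in probs]
        f1 = macro_f1(labels, preds)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```

Because the sweep only post-processes stored probabilities, it can be rerun per language at negligible cost, which is why it yields improvements "without retraining."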

Why it matters

This paper presents a robust approach to multilingual polarization detection that combines per-language fine-tuning, synthetic data augmentation, and ensemble learning. Its 2nd-place finish across 22 languages highlights the effectiveness of fine-tuned LLMs with careful data augmentation, while the sharp test-set drops of alternative architectures underscore the importance of generalization in real-world NLP tasks.
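The abstract describes weighted ensembles of 12B and 27B predictions with per-language strategy selection. A plausible reading, sketched below under assumed names and a 0.5 decision threshold, is to pick the blend weight (including the single-model endpoints w=0 and w=1) that scores best on each language's dev set; the actual selection criterion and weight grid are not specified in the summary.

```python
def accuracy(labels, preds):
    """Fraction of correct predictions (stand-in dev-set score)."""
    return sum(y == p for y, p in zip(labels, preds)) / len(labels)

def blend(p12, p27, w):
    """Weighted average of 12B and 27B positive-class probabilities."""
    return [w * a + (1 - w) * b for a, b in zip(p12, p27)]

def select_weight(p12, p27, labels, candidates=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Per-language strategy selection: keep the blend weight whose
    thresholded predictions score best on the development set."""
    return max(candidates,
               key=lambda w: accuracy(labels,
                                      [int(p >= 0.5) for p in blend(p12, p27, w)]))
```

Selecting the weight per language lets the system fall back to a single model where the other adds noise, while blending where the two models are complementary.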

Original Abstract

We present our system for SemEval-2026 Task 9: Multilingual Polarization Detection, a binary classification task spanning 22 languages. Our approach fine-tunes separate Gemma 3 models (12B and 27B parameters) per language using Low-Rank Adaptation (LoRA), augmented with synthetic data generated by a large language model (LLM). We employ three synthetic data strategies (direct generation, paraphrasing, and contrastive pair creation) using GPT-4o-mini, with a multi-stage quality filtering pipeline including embedding-based deduplication. We find that per-language threshold tuning on the development set yields 2 to 4% F1 improvements without retraining. We also use weighted ensembles of 12B and 27B model predictions with per-language strategy selection. Our final system achieves a mean macro-F1 of 0.811 across all 22 languages, ranking 2nd overall of the participating teams, with 1st place finishes in 3 languages and top-3 in 8 languages. We also find that alternative architectures (XLM-RoBERTa, Qwen3) that showed strong development set performance suffered 30 to 50% F1 drops on the test set, highlighting the importance of generalization.
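The abstract's quality-filtering pipeline includes embedding-based deduplication but gives no implementation details. A common greedy formulation, shown below as a sketch (the 0.9 similarity threshold and function name are assumptions), keeps a synthetic example only if its cosine similarity to every already-kept example stays below a cutoff.

```python
import numpy as np

def dedup_by_embedding(embeddings, threshold=0.9):
    """Greedy near-duplicate filter: return indices of examples whose
    cosine similarity to every previously kept example is < threshold."""
    embs = np.asarray(embeddings, dtype=float)
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)  # unit-normalize rows
    kept = []
    for i, v in enumerate(embs):
        if all(float(v @ embs[j]) < threshold for j in kept):
            kept.append(i)
    return kept
```

In practice the embeddings would come from a multilingual sentence encoder; the greedy pass is quadratic in the worst case but fast enough for per-language synthetic pools.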
