
Crosslingual Generalization through Multitask Finetuning

arXiv:2211.01786

Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman + 14 more

cs.CL · cs.AI · cs.LG

TLDR

This paper shows that multitask finetuning of large multilingual language models on prompted tasks, using both English and machine-translated prompts, enables strong zero-shot crosslingual generalization to many languages, including languages the models never intentionally saw during training.

Key contributions

  • Finetuning multilingual models (BLOOM, mT5) on English tasks with English prompts enables zero-shot task transfer to non-English languages (see the inference sketch after this list).
  • Incorporating multilingual tasks and machine-translated prompts further improves performance on both English and non-English tasks, achieving state-of-the-art zero-shot results.
  • Models can generalize zero-shot to tasks in languages never explicitly seen during finetuning, suggesting learning of language-agnostic, high-level capabilities.
  • Introduction of xP3, a large multilingual multitask dataset with 46 languages and both English and machine-translated prompts, to support crosslingual multitask finetuning.
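
Below is a minimal zero-shot inference sketch, not taken from the paper or its repository; the checkpoint name bigscience/mt0-small and the example prompt are illustrative assumptions, and larger mT0 or BLOOMZ variants would be loaded the same way (BLOOMZ checkpoints are causal decoders and use AutoModelForCausalLM instead).

```python
# Hedged sketch: zero-shot crosslingual prompting with a released mT0 checkpoint
# via Hugging Face transformers. Checkpoint name and prompt are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "bigscience/mt0-small"  # assumed small variant; larger mT0/BLOOMZ checkpoints exist

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# An English prompt applied to non-English input: the zero-shot crosslingual
# setting described in the contributions above.
prompt = "Translate to English: Je t'aime."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```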

Why it matters

This work is important because it advances the ability of large language models to generalize across languages and tasks without requiring task-specific or language-specific finetuning data. By showing that models can leverage English-centric training and machine-translated prompts to perform well on many languages, including unseen ones, it paves the way for more inclusive and scalable multilingual NLP systems. The publicly released datasets and models also provide valuable resources for further research in crosslingual generalization.
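
As a hedged pointer to those released resources, the sketch below streams a slice of the xP3 mixture from the Hugging Face Hub; the repository id bigscience/xP3, the "en" configuration, and the inputs/targets field names are assumptions about how the data is hosted rather than details stated in this summary.

```python
# Hedged sketch: streaming part of the xP3 mixture with Hugging Face datasets.
# Repo id, config name, and field names are assumptions, not confirmed here.
from datasets import load_dataset

xp3_en = load_dataset("bigscience/xP3", "en", split="train", streaming=True)
example = next(iter(xp3_en))
print(example["inputs"])   # prompted input text (assumed field name)
print(example["targets"])  # target completion (assumed field name)
```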

Original Abstract

Multitask prompted finetuning (MTF) has been shown to help large language models generalize to new tasks in a zero-shot setting, but so far explorations of MTF have focused on English data and models. We apply MTF to the pretrained multilingual BLOOM and mT5 model families to produce finetuned variants called BLOOMZ and mT0. We find finetuning large multilingual language models on English tasks with English prompts allows for task generalization to non-English languages that appear only in the pretraining corpus. Finetuning on multilingual tasks with English prompts further improves performance on English and non-English tasks leading to various state-of-the-art zero-shot results. We also investigate finetuning on multilingual tasks with prompts that have been machine-translated from English to match the language of each dataset. We find training on these machine-translated prompts leads to better performance on human-written prompts in the respective languages. Surprisingly, we find models are capable of zero-shot generalization to tasks in languages they have never intentionally seen. We conjecture that the models are learning higher-level capabilities that are both task- and language-agnostic. In addition, we introduce xP3, a composite of supervised datasets in 46 languages with English and machine-translated prompts. Our code, datasets and models are freely available at https://github.com/bigscience-workshop/xmtf.
