ArXiv TLDR

Gemma: Open Models Based on Gemini Research and Technology

2403.08295

Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju + 103 more

cs.CL, cs.AI

TLDR

Gemma is a family of lightweight, open language models that achieve strong performance and safety on academic benchmarks, built using Gemini research and technology.

Key contributions

  • Introduces Gemma models with 2B and 7B parameter sizes, offering pretrained and fine-tuned checkpoints.
  • Outperforms similarly sized open models on 11 out of 18 text-based academic tasks.
  • Provides comprehensive safety and responsibility evaluations alongside detailed model development documentation.

Why it matters

This paper advances open-source large language models by pairing strong benchmark performance with safety evaluations and a responsible release process — an approach that fosters innovation and trust in AI while mitigating the risks associated with frontier models.

Original Abstract

This work introduces Gemma, a family of lightweight, state-of-the-art open models built from the research and technology used to create Gemini models. Gemma models demonstrate strong performance across academic benchmarks for language understanding, reasoning, and safety. We release two sizes of models (2 billion and 7 billion parameters), and provide both pretrained and fine-tuned checkpoints. Gemma outperforms similarly sized open models on 11 out of 18 text-based tasks, and we present comprehensive evaluations of safety and responsibility aspects of the models, alongside a detailed description of model development. We believe the responsible release of LLMs is critical for improving the safety of frontier models, and for enabling the next wave of LLM innovations.
