ArXiv TLDR

Qwen Technical Report

arXiv:2309.16609

Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang + 43 more

cs.CL

TLDR

Qwen is a versatile large language model series featuring base, chat, coding, and math-specialized models that achieve strong performance across diverse tasks, outperforming comparable open-source models and approaching proprietary ones.

Key contributions

  • Introduces the Qwen base models and Qwen-Chat models, the latter fine-tuned with human alignment techniques including RLHF for stronger conversational ability.
  • Develops specialized variants — Code-Qwen, Code-Qwen-Chat, and Math-Qwen-Chat — that excel at coding and mathematical problem-solving.
  • Demonstrates performance that surpasses open-source models and approaches some proprietary models on complex downstream tasks.

Why it matters

The paper presents a comprehensive, scalable LLM framework that advances the state of open-source models in chat-based interaction, coding, and mathematical reasoning, enabling more capable and accessible AI applications across multiple domains.

Original Abstract

Large language models (LLMs) have revolutionized the field of artificial intelligence, enabling natural language processing tasks that were previously thought to be exclusive to humans. In this work, we introduce Qwen, the first installment of our large language model series. Qwen is a comprehensive language model series that encompasses distinct models with varying parameter counts. It includes Qwen, the base pretrained language models, and Qwen-Chat, the chat models finetuned with human alignment techniques. The base language models consistently demonstrate superior performance across a multitude of downstream tasks, and the chat models, particularly those trained using Reinforcement Learning from Human Feedback (RLHF), are highly competitive. The chat models possess advanced tool-use and planning capabilities for creating agent applications, showcasing impressive performance even when compared to bigger models on complex tasks like utilizing a code interpreter. Furthermore, we have developed coding-specialized models, Code-Qwen and Code-Qwen-Chat, as well as mathematics-focused models, Math-Qwen-Chat, which are built upon base language models. These models demonstrate significantly improved performance in comparison with open-source models, and slightly fall behind the proprietary models.
