ML-Bench&Guard: Policy-Grounded Multilingual Safety Benchmark and Guardrail for Large Language Models
Yunhan Zhao, Zhaorun Chen, Xingjun Ma, Yu-Gang Jiang, Bo Li
TLDR
ML-Bench&Guard pairs a policy-grounded, 14-language safety benchmark with a diffusion-LLM (dLLM) guardrail for culturally and legally aligned LLM safety.
Key contributions
- Introduces ML-Bench, a 14-language safety benchmark built from regional regulations for culturally aligned evaluation.
- Develops ML-Guard, a dLLM-based guardrail for multilingual safety judgment and policy-conditioned compliance.
- Offers two ML-Guard variants: a lightweight 1.5B model for fast safe/unsafe checks and a 7B model for policy-conditioned compliance assessment with detailed explanations (see the sketch after this list).
- Demonstrates consistent gains over 11 strong guardrail baselines across six existing multilingual safety benchmarks and the new ML-Bench.
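The two-tier design suggests a simple screening-then-audit pipeline. Below is a minimal sketch of how the two variants might be invoked; the model IDs, prompt templates, and the transformers-style generate() call are all assumptions on our part (the released checkpoints are dLLMs, whose decoding loop may differ from autoregressive generation).

```python
# Hypothetical sketch of the two ML-Guard usage modes described in the paper.
# Model IDs and prompt templates are assumptions, and the transformers-style
# generate() call stands in for whatever diffusion-based decoding the released
# dLLM checkpoints actually use.
from transformers import AutoModelForCausalLM, AutoTokenizer

FAST_ID = "ml-guard-1.5b"  # assumed name: lightweight safe/unsafe checker
FULL_ID = "ml-guard-7b"    # assumed name: policy-conditioned compliance model

def load(model_id):
    tok = AutoTokenizer.from_pretrained(model_id)
    return tok, AutoModelForCausalLM.from_pretrained(model_id)

def run(tok, model, prompt, max_new_tokens):
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:],
                      skip_special_tokens=True).strip()

# Tier 1: fast binary screening with the 1.5B variant.
tok_s, guard_s = load(FAST_ID)
verdict = run(tok_s, guard_s,
              "Classify the user request as 'safe' or 'unsafe'.\n"
              "Request: <user request>\nVerdict:", max_new_tokens=4)

# Tier 2: escalate flagged requests to the 7B variant, conditioned on a
# jurisdiction-specific policy (the regulation-derived rules ML-Bench is
# built from).
if verdict == "unsafe":
    tok_l, guard_l = load(FULL_ID)
    report = run(tok_l, guard_l,
                 "Policy: <jurisdiction-specific rules>\n"
                 "Assess whether the request complies with the policy "
                 "and explain.\nRequest: <user request>\nAssessment:",
                 max_new_tokens=256)
```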
Why it matters
Ensuring LLM safety across diverse regulations and cultures is a critical challenge, and existing multilingual benchmarks, built on generic risk taxonomies and machine translation, fall short of it. By grounding both the benchmark and the guardrail in jurisdiction-specific legal texts, this work provides concrete tools for developing LLMs that are aligned with region-specific legal and cultural requirements.
Original Abstract
As Large Language Models (LLMs) are increasingly deployed in cross-linguistic contexts, ensuring safety in diverse regulatory and cultural environments has become a critical challenge. However, existing multilingual benchmarks largely rely on general risk taxonomies and machine translation, which confines guardrail models to these predefined categories and hinders their ability to align with region-specific regulations and cultural nuances. To bridge these gaps, we introduce ML-Bench, a policy-grounded multilingual safety benchmark covering 14 languages. ML-Bench is constructed directly from regional regulations, where risk categories and fine-grained rules derived from jurisdiction-specific legal texts are directly used to guide the generation of multilingual safety data, enabling culturally and legally aligned evaluation across languages. Building on ML-Bench, we develop ML-Guard, a Diffusion Large Language Model (dLLM)-based guardrail model that supports multilingual safety judgment and policy-conditioned compliance assessment. ML-Guard has two variants, one 1.5B lightweight model for fast 'safe/unsafe' checking and a more capable 7B model for customized compliance checking with detailed explanations. We conduct extensive experiments against 11 strong guardrail baselines across 6 existing multilingual safety benchmarks and our ML-Bench, and show that ML-Guard consistently outperforms prior methods. We hope that ML-Bench and ML-Guard can help advance the development of regulation-aware and culturally aligned multilingual guardrail systems.
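To make "policy-grounded" concrete: each benchmark item ties a multilingual prompt back to a specific jurisdiction's rule. A hypothetical record shape is sketched below; the field names are illustrative assumptions, not the paper's released schema.

```python
# Illustrative ML-Bench-style record; all field names are assumptions.
item = {
    "language": "de",              # one of the 14 covered languages
    "jurisdiction": "EU",          # source of the governing regulation
    "risk_category": "<regulation-derived category>",
    "rule": "<fine-grained rule extracted from the legal text>",
    "prompt": "<multilingual prompt generated under that rule>",
    "label": "unsafe",             # ground-truth safety judgment
}
```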