ArXiv TLDR

An Evaluation of Chat Safety Moderations in Roblox

arXiv: 2605.04491

Priya Kaushik, Sonja Brown, Rakibul Hasan, Sazzadur Rahaman

cs.CY, cs.CR

TLDR

This paper evaluates Roblox's chat moderation and finds that it fails to detect many unsafe messages, such as grooming and harassment, and that users employ a wide range of tactics to evade it.

Key contributions

  • Collected 2 million Roblox chat messages from various games and age groups for analysis.
  • Found numerous unsafe messages (grooming, sexualizing minors, bullying) bypassed moderation.
  • Identified a wide range of techniques users employ to evade the chat moderation system.
  • Used a two-step LLM-assisted approach to categorize unsafe content at scale.

Why it matters

This study reveals critical vulnerabilities in Roblox's chat safety, especially concerning underage users. The findings highlight an urgent need for improved moderation systems and strategies to counter sophisticated evasion techniques, ensuring a safer online environment for children.

Original Abstract

Roblox is among the most popular online gaming platforms, used by hundreds of millions of users every day. A substantial portion of these users are underage and at greater risk: abusive users may use Roblox's real-time chat interface to make initial contact with potential victims. Roblox employs automated chat moderation mechanisms to detect potentially abusive messages; however, to date, their effectiveness has not been independently investigated. Toward this goal, we collected approximately 2 million chat messages from four games across multiple age groups and analyzed them to evaluate the moderation system. These messages were collected from public game servers following ethical and legal norms as well as Roblox's terms of service. We use this corpus to qualitatively study which types of unsafe chats escape the moderation system and how policy-violating users evade it. Given the dataset's scale, it is prohibitively expensive to conduct qualitative content analysis manually. Therefore, we adopt a two-step approach. First, we manually labeled safe and unsafe messages (n=99.8K) and used them as ground truth to evaluate four locally hosted state-of-the-art large language models (LLMs). Next, the best-performing LLM was applied to the entire corpus to identify potentially unsafe messages, which we manually categorized using iterative open and axial coding methods until thematic saturation was reached. Overall, our findings reveal a troublesome reality: numerous unsafe chat messages related to grooming, sexualizing minors, bullying and harassment, violence, self-harm, and sharing sensitive information escaped the current moderation. Our analysis of users whose messages were previously flagged revealed that they continue to send harmful messages by employing a wide range of techniques to evade the moderation system.
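The two-step approach described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the keyword-based "models", word list, and example messages are hypothetical stand-ins for the four locally hosted LLMs and the real chat corpus.

```python
# Sketch of the two-step pipeline: (1) score candidate classifiers against a
# manually labeled ground-truth set, (2) run the best one over the full corpus.

def f1_score(y_true, y_pred):
    """Binary F1 for the 'unsafe' class (label 1), computed from scratch."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def select_best_model(models, labeled_messages):
    """Step 1: pick the classifier with the highest F1 on the ground truth."""
    y_true = [label for _, label in labeled_messages]
    scores = {}
    for name, classify in models.items():
        y_pred = [classify(msg) for msg, _ in labeled_messages]
        scores[name] = f1_score(y_true, y_pred)
    best = max(scores, key=scores.get)
    return best, scores

def flag_corpus(classify, corpus):
    """Step 2: apply the best classifier to the entire corpus."""
    return [msg for msg in corpus if classify(msg) == 1]

# Toy stand-in "models"; a real pipeline would prompt local LLMs here.
UNSAFE_WORDS = {"address", "photo", "secret"}
models = {
    "model_a": lambda m: int(any(w in m.lower() for w in UNSAFE_WORDS)),
    "model_b": lambda m: 0,  # trivially predicts every message as safe
}

# Hypothetical ground truth: (message, label) with 1 = unsafe, 0 = safe.
labeled = [("what's your address", 1), ("gg nice game", 0), ("send a photo", 1)]
best, scores = select_best_model(models, labeled)
flagged = flag_corpus(models[best], ["keep it secret", "lol", "nice build"])
```

In the paper, the messages flagged in step 2 are then manually categorized with iterative open and axial coding; the sketch stops at the automated flagging stage.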
