ArXiv TLDR

FedDetox: Robust Federated SLM Alignment via On-Device Data Sanitization

arXiv: 2604.06833

Shunan Zhu, Jiawei Chen, Yonghao Yu, Hideya Ochiai

cs.CR, cs.LG

TLDR

FedDetox enables robust federated SLM alignment by sanitizing toxic on-device data, preventing unintended poisoning and preserving model safety.

Key contributions

  • Addresses unintended data poisoning in federated learning from unsafe client data.
  • Distills safety-alignment capabilities from large safety-aligned teacher models into lightweight classifiers that run on edge devices.
  • Replaces unsafe samples with refusal templates during FL, transforming them into safety signals.
  • Preserves model safety comparable to centralized baselines without compromising utility.
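The sanitization step in the contributions above can be sketched as a simple filter applied to each client's local preference data before training. This is a minimal illustration, not the paper's implementation: `sanitize_batch`, `is_unsafe`, and the refusal text are hypothetical names standing in for the distilled on-device classifier and the refusal templates the paper describes.

```python
# Hedged sketch: replace responses flagged unsafe with a refusal
# template before local FL training, so potential poisons become
# positive safety signals. Names here are illustrative assumptions.

REFUSAL_TEMPLATE = "I'm sorry, but I can't help with that request."

def sanitize_batch(samples, is_unsafe, refusal=REFUSAL_TEMPLATE):
    """Given (prompt, response) pairs and a predicate backed by the
    distilled safety classifier, swap unsafe responses for a refusal."""
    cleaned = []
    for prompt, response in samples:
        if is_unsafe(prompt, response):
            cleaned.append((prompt, refusal))  # keep prompt, swap response
        else:
            cleaned.append((prompt, response))
    return cleaned
```

Because only the response side is replaced, the prompt distribution on the client is untouched; the model still sees the unsafe prompt, but paired with a refusal target.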

Why it matters

Federated learning is crucial for leveraging private data, but toxic client data can silently degrade the global model's safety alignment. FedDetox provides a practical, on-device sanitization step that preserves safety alignment without compromising privacy or general utility, making FL more viable for sensitive applications.

Original Abstract

As high-quality public data becomes scarce, Federated Learning (FL) provides a vital pathway to leverage valuable private user data while preserving privacy. However, real-world client data often contains toxic or unsafe information. This leads to a critical issue we define as unintended data poisoning, which can severely damage the safety alignment of global models during federated alignment. To address this, we propose FedDetox, a robust framework tailored for Small Language Models (SLMs) on resource-constrained edge devices. We first employ knowledge distillation to transfer sophisticated safety-alignment capabilities from large-scale, safety-aligned teacher models into lightweight student classifiers suitable for resource-constrained edge devices. Specifically, during federated learning for human preference alignment, the edge client identifies unsafe samples at the source and replaces them with refusal templates, effectively transforming potential poisons into positive safety signals. Experiments demonstrate that our approach preserves model safety at a level comparable to centralized baselines without compromising general utility.
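The distillation step the abstract mentions is not specified in detail here; a standard formulation (Hinton-style knowledge distillation) matches the student classifier's temperature-softened distribution to the teacher's via a KL-divergence loss scaled by T². The sketch below assumes that standard recipe; the paper may use a different objective.

```python
# Hedged sketch of a standard knowledge-distillation loss for the
# lightweight safety classifier, in plain Python for clarity.
# Assumption: Hinton-style KL(teacher || student) with T^2 scaling;
# the paper's exact objective may differ.
import math

def softmax(logits, temperature=1.0):
    """Numerically stable temperature-softened softmax."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from student to teacher on softened distributions,
    scaled by T^2 so gradients match the hard-label scale."""
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

The loss is zero when the student exactly reproduces the teacher's logits and grows as the two distributions diverge, which is what drives the transfer of the teacher's safety judgments into the small on-device classifier.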

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.