ArXiv TLDR

Social Bias in LLM-Generated Code: Benchmark and Mitigation

arXiv:2605.00382

Fazle Rabbi, Lin Ling, Song Wang, Jinqiu Yang

cs.SE · cs.AI · cs.SI

TLDR

LLM-generated code exhibits severe social bias. A new Fairness Monitor Agent reduces bias by 65.1% and improves functional correctness, without modifying existing generation pipelines.

Key contributions

  • SocialBias-Bench, a benchmark of 343 real-world coding tasks spanning seven demographic dimensions, reveals severe social bias in LLM-generated code, with Code Bias Scores reaching up to 60.58%.
  • Standard prompt-level interventions (Chain-of-Thought reasoning, fairness persona) amplify bias rather than reduce it, and adding fairness instructions to every agent role in a multi-agent pipeline yields worse outcomes than adding none.
  • Proposes the Fairness Monitor Agent (FMA), a modular component that plugs into any existing code generation pipeline and detects and corrects bias through iterative review.
  • FMA reduces bias by 65.1% and improves functional correctness by 8.17 percentage points (75.80% → 83.97%) compared to a developer agent alone.

Why it matters

As LLMs increasingly generate code for human-centered applications, ensuring demographic fairness is paramount. This research documents severe social bias in current LLM-generated code and shows that common mitigation strategies are not only ineffective but can amplify it. The Fairness Monitor Agent offers a practical, drop-in way to build more equitable and reliable AI systems.

Original Abstract

Large Language Models (LLMs) are increasingly deployed to generate code for human-centered applications where demographic fairness is critical. However, existing evaluations focus almost exclusively on functional correctness, leaving social bias in LLM-generated code largely unexamined. Extending our prior work on Solar, we conduct a comprehensive empirical study using SocialBias-Bench, a benchmark of 343 real-world coding tasks spanning seven demographic dimensions. We evaluate four prominent LLMs and find severe bias across all models, with Code Bias Scores reaching up to 60.58%. We further show that standard prompt-level interventions, such as Chain-of-Thought reasoning and fairness persona assignment, inadvertently amplify bias rather than reduce it. We then investigate whether structured multi-agent software process frameworks can improve fairness, finding that structured pipelines reduce bias when early roles correctly scope what the code should and should not consider. However, adding explicit fairness instructions to all agent roles produces worse outcomes than providing none, suggesting that diffused responsibility goes unaddressed. To address these limitations, we propose the Fairness Monitor Agent (FMA), a modular component that plugs into any existing code generation pipeline without modifying it. FMA analyzes the task description to determine which attributes should be considered or restricted, then detects and corrects violations through an iterative review process, without requiring an executable test suite. Evaluated on all 343 tasks, FMA reduces bias by 65.1% compared to a developer agent alone and improves functional correctness from 75.80% to 83.97%, outperforming all other studied approaches.
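
The abstract outlines how FMA operates: it first scopes which demographic attributes a task may or may not consider, then iteratively flags and requests corrections for violations, all without an executable test suite. Below is a minimal Python sketch of that review loop; the `llm` backend, the `regenerate` hook, the prompt wording, the parsing, and the round limit are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an FMA-style review loop, inferred from the abstract only.
# The `llm` backend, `regenerate` hook, prompts, parsing, and round limit are
# all illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass
from typing import Callable

LLM = Callable[[str], str]   # any text-in/text-out model backend (assumption)
MAX_ROUNDS = 3               # assumed cap on the iterative review

@dataclass
class AttributePolicy:
    considered: list[str]    # demographic attributes the task legitimately needs
    restricted: list[str]    # attributes the generated code must not condition on

def scope_attributes(llm: LLM, task: str) -> AttributePolicy:
    """Step 1: read the task description and decide which demographic
    attributes should be considered and which should be restricted."""
    reply = llm(
        "For the task below, list demographic attributes to CONSIDER and to "
        f"RESTRICT, one per line under each header.\n\n{task}"
    )
    considered, _, restricted = reply.partition("RESTRICT")
    # crude parsing for illustration only
    return AttributePolicy(considered.split(), restricted.split())

def fairness_monitor(llm: LLM, task: str, code: str,
                     regenerate: Callable[[str, str, str], str]) -> str:
    """Steps 2-3: detect violations of the attribute policy and hand them back
    to the unmodified generation pipeline for correction, iteratively.
    No executable test suite is needed; the check is review-based."""
    policy = scope_attributes(llm, task)
    for _ in range(MAX_ROUNDS):
        verdict = llm(
            "Does this code branch on any restricted attribute "
            f"{policy.restricted}? Answer NO_VIOLATION or describe the issue.\n\n{code}"
        )
        if "NO_VIOLATION" in verdict:
            break                                # policy satisfied, stop reviewing
        code = regenerate(task, code, verdict)   # existing pipeline repairs the code
    return code
```

Keeping the monitor as a wrapper around the pipeline, rather than rewriting agent roles, matches the paper's finding that diffusing fairness instructions across all roles backfires.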
