ArXiv TLDR

How Supply Chain Dependencies Complicate Bias Measurement and Accountability Attribution in AI Hiring Applications

arXiv:2604.22679

Gauri Sharma, Maryam Molamohammadi

cs.CY, cs.AI

TLDR

The complex supply chains behind AI hiring systems fragment responsibility, making bias measurement and accountability attribution difficult for regulators and deploying organizations.

Key contributions

  • Bias in AI hiring stems from component interactions, not isolated parts, but proprietary systems impede integrated evaluation.
  • Deploying organizations bear legal responsibility without technical visibility into vendor-supplied algorithms.
  • Proposes multi-layered interventions: system-level audits, vendor guidelines, continuous monitoring, and documentation across dependency chains.
  • Advocates for coordinated action across technical, organizational, and regulatory domains for effective accountability.

Why it matters

This paper is crucial for understanding how AI supply-chain complexity undermines bias measurement and accountability in hiring. It shows that fragmented responsibilities and information asymmetries can produce biased outcomes even when each stakeholder individually appears compliant. The proposed interventions offer a path toward more effective and coordinated AI governance.

Original Abstract

The increasing adoption of AI systems in hiring has raised concerns about algorithmic bias and accountability, prompting regulatory responses including the EU AI Act, NYC Local Law 144, and Colorado's AI Act. While existing research examines bias through technical or regulatory lenses, both perspectives overlook a fundamental challenge: modern AI hiring systems operate within complex supply chains where responsibility fragments across data vendors, model developers, platform providers, and deploying organizations. This paper investigates how these dependency chains complicate bias evaluation and accountability attribution. Drawing on literature review and regulatory analysis, we demonstrate that fragmented responsibilities create two critical problems. First, bias emerges from component interactions rather than isolated elements, yet proprietary configurations prevent integrated evaluation. A resume parser may function without bias independently but contribute to discrimination when integrated with specific ranking algorithms and filtering thresholds. Second, information asymmetries mean deploying organizations bear legal responsibility without technical visibility into vendor-supplied algorithms, while vendors control implementations without meaningful disclosure requirements. Each stakeholder may believe they are compliant; nevertheless, the integrated system may produce biased outcomes. Analysis of implementation ambiguities reveals these challenges in practice. We propose multi-layered interventions including system-level audits, vendor guidelines, continuous monitoring mechanisms, and documentation across dependency chains. Our findings reveal that effective governance requires coordinated action across technical, organizational, and regulatory domains to establish meaningful accountability in distributed development environments.
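The abstract's resume-parser example can be made concrete with a toy calculation. The sketch below is not from the paper: the stage names, group labels, pass rates, and the 0.9 per-component tolerance are hypothetical assumptions, chosen only to show how small, individually acceptable disparities can compound multiplicatively across a vendor-supplied pipeline into an end-to-end disparity that breaches the common four-fifths guideline.

```python
# Illustrative sketch (not from the paper): hypothetical numbers showing how
# small per-component disparities that each pass an isolated audit check can
# compound across a hiring pipeline into an end-to-end disparity that fails
# the four-fifths rule of thumb.

# Per-stage "advance" probabilities for two applicant groups. Each stage,
# audited in isolation, satisfies a 0.9 group-ratio tolerance.
stages = {
    "resume_parser":    {"group_a": 0.95, "group_b": 0.88},  # ratio ~0.93
    "ranking_model":    {"group_a": 0.60, "group_b": 0.55},  # ratio ~0.92
    "filter_threshold": {"group_a": 0.50, "group_b": 0.45},  # ratio 0.90
}

def pipeline_rate(group: str) -> float:
    """End-to-end selection rate, assuming stage effects compose multiplicatively."""
    rate = 1.0
    for probs in stages.values():
        rate *= probs[group]
    return rate

for name, probs in stages.items():
    ratio = probs["group_b"] / probs["group_a"]
    print(f"{name:16s} stage ratio = {ratio:.2f}  (passes a per-component 0.9 check)")

end_to_end = pipeline_rate("group_b") / pipeline_rate("group_a")
print(f"end-to-end selection-rate ratio = {end_to_end:.2f}")
# ~0.76: below the 0.8 (four-fifths) threshold, even though no single
# component looked problematic when evaluated on its own.
```

The point of the sketch is the composition effect the paper emphasizes: an audit scoped to any single component would report compliance, while only a system-level evaluation of the integrated pipeline surfaces the disparity.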
