Why AI Harms Can't Be Fixed One Identity at a Time: What 5300 Incident Reports Reveal About Intersectionality
Edyta Bogucka, Sanja Šćepanović, Daniele Quercia
TLDR
Analyzing 5,300 incident reports, this paper shows that AI harms are intersectional rather than isolated, and argues for a corresponding shift in AI risk assessment.
Key contributions
- Conducted a large-scale analysis of 5,300 AI incident reports to identify 1,513 harmed subjects and their identities.
- Revealed that age and political identity appear in documented AI harms at rates comparable to race and gender, challenging the near-exclusive focus of current assessments.
- Demonstrated that harm is amplified up to three times at specific intersections, such as adolescent girls, lower-class people of color, and upper-class political elites.
Why it matters
The paper empirically shows that AI harms are intersectional rather than isolated, falling disproportionately on specific groups. In doing so, it exposes the inadequacy of risk assessments built around single identity categories and provides a data-driven argument for making intersectionality a core component of harm mitigation.
Original Abstract
AI risk assessment is the primary tool for identifying harms caused by AI systems. These include intersectional harms, which arise from the interaction between identity categories (e.g., class and skin tone) and which do not occur, or occur differently, when those categories are considered separately. Yet existing AI risk assessments are still built around isolated identity categories, and when intersections are considered, they focus almost exclusively on race and gender. Drawing on a large-scale analysis of documented AI incidents, we show that AI harms do not occur one identity category at a time. Using a structured rubric applied with a Large Language Model (LLM), we analyze 5,300 reports from 1,200 documented incidents in the AI Incident Database, the most curated source of incident data. From these reports, we identify 1,513 harmed subjects and their associated identity categories, achieving 98% accuracy. At the level of individual categories, we find that age and political identity appear in documented AI harms at rates comparable to race and gender. At the level of intersecting categories, harm is amplified up to three times at specific intersections: adolescent girls, lower-class people of color, and upper-class political elites. We argue that intersectionality should be a core component of AI risk assessment to more accurately capture how harms are produced and distributed across social groups.
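The abstract reports that harm is "amplified up to three times" at specific intersections. The paper's exact formula is not given here, but a common way to quantify such amplification is the ratio of the observed co-occurrence rate of two identity categories to the rate expected if they occurred independently. The sketch below illustrates that measure; the records, category names, and `amplification` helper are all hypothetical toy examples, not the authors' data or code.

```python
def amplification(records, key_a, val_a, key_b, val_b):
    """Ratio of the observed joint rate of two identity attributes to the
    rate expected under independence. A ratio above 1 means the pair of
    attributes co-occurs in harm reports more often than chance predicts."""
    n = len(records)
    p_a = sum(r.get(key_a) == val_a for r in records) / n
    p_b = sum(r.get(key_b) == val_b for r in records) / n
    p_joint = sum(
        r.get(key_a) == val_a and r.get(key_b) == val_b for r in records
    ) / n
    return p_joint / (p_a * p_b)

# Toy harmed-subject records (illustrative only): each dict lists the
# identity categories attributed to one harmed subject in a report.
incidents = [
    {"age": "adolescent", "gender": "girl"},
    {"age": "adolescent", "gender": "girl"},
    {"age": "adolescent", "gender": "boy"},
    {"age": "adult", "gender": "girl"},
    {"age": "adult", "gender": "boy"},
    {"age": "adult", "gender": "boy"},
]

ratio = amplification(incidents, "age", "adolescent", "gender", "girl")
```

On this toy data the intersection "adolescent girl" co-occurs 1.33x more often than independence would predict; the paper's reported 3x amplification is the analogous statistic at a much larger scale.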