FairEnc: A Fair Vision-Language Model with Fair Vision and Text Encoders for Glaucoma Detection
Mohamed Elhabebe, Ayman El-Baz, Qing Liu
TLDR
FairEnc is a VLM pretraining method that debiases both vision and text encoders for fair glaucoma detection across diverse patient populations.
Key contributions
- Mitigates biases across race, gender, ethnicity, and language in both vision and text modalities.
- Text encoder uses LLM-generated synthetic data and contrastive alignment for demographic-invariant representations.
- Visual encoder employs dual-level fairness with mutual information regularization and adversarial debiasing.
- Demonstrates reduced demographic disparity and strong diagnostic performance on public and private datasets.
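The contrastive alignment objective for the text encoder can be sketched as an InfoNCE-style loss that pulls together the embedding of a clinical note and its LLM-generated counterpart with swapped demographic attributes, while pushing apart non-matching notes in the batch. This is a minimal NumPy sketch under stated assumptions — the function name, temperature value, and implementation details are illustrative and not taken from the FairEnc codebase:

```python
import numpy as np

def contrastive_alignment_loss(z_orig, z_swap, temperature=0.1):
    """InfoNCE-style alignment sketch (illustrative, not the authors' code).

    z_orig[i]: embedding of the i-th original clinical note.
    z_swap[i]: embedding of the same note with demographic attributes
               varied while disease semantics are preserved.
    Both are (batch, dim) arrays; positives lie on the diagonal.
    """
    z_orig = z_orig / np.linalg.norm(z_orig, axis=1, keepdims=True)
    z_swap = z_swap / np.linalg.norm(z_swap, axis=1, keepdims=True)
    logits = z_orig @ z_swap.T / temperature        # (B, B) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Minimizing this drives each note toward its demographic-swapped pair,
    # encouraging representations invariant to the sensitive attributes.
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss makes matched original/swapped pairs the nearest neighbors in embedding space, so the encoder cannot rely on demographic cues to separate them.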
Why it matters
Automated glaucoma detection is crucial for preventing irreversible vision loss, but fairness across patient demographics remains a major challenge. FairEnc addresses this by simultaneously debiasing both the vision and text encoders of a vision-language model across multiple sensitive attributes, supporting more equitable outcomes. Its ability to preserve fairness under distribution shifts makes it promising for real-world clinical deployment.
Original Abstract
Automated glaucoma detection is critical for preventing irreversible vision loss and reducing the burden on healthcare systems. However, ensuring fairness across diverse patient populations remains a significant challenge. In this paper, we propose FairEnc, a fair pretraining method for vision-language models (VLMs) that enables simultaneous debiasing across multiple sensitive attributes. FairEnc jointly mitigates biases in both textual and visual modalities with respect to multiple sensitive attributes, including race, gender, ethnicity, and language. Specifically, for the textual encoder, we leverage a large language model to generate synthetic clinical descriptions with varied sensitive attributes while preserving disease semantics, and employ a contrastive alignment objective to encourage demographic-invariant representations. For the visual encoder, we propose a dual-level fairness strategy that combines mutual information regularization to reduce statistical dependence between learned features and demographic groups, with multi-discriminator adversarial debiasing. Comprehensive experiments on the publicly available Harvard-FairVLMed dataset demonstrate that FairEnc effectively reduces demographic disparity as measured by DPD and DEOdds while achieving strong diagnostic performance under both zero-shot and linear probing evaluations. Additional experiments on the private FairFundus dataset show that FairEnc consistently preserves fairness advantages under cross-domain and cross-modality settings and maintains diagnostic performance within a competitive range. These results highlight FairEnc's ability to generalize fairness under distribution shifts, supporting its potential for more equitable deployment in real-world clinical settings. Our codebase and synthetic clinical notes are available at https://github.com/Mohamed-Elhabebe/FairEnc
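The two fairness metrics named in the abstract — DPD (demographic parity difference) and DEOdds (difference in equalized odds) — follow standard definitions: DPD is the gap in positive-prediction rates across demographic groups, and DEOdds is the worst-case gap in true-positive or false-positive rates. A minimal NumPy sketch of these definitions (function names and implementation are illustrative, not from the FairEnc codebase):

```python
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """DPD: gap between the highest and lowest positive-prediction
    rates across demographic groups. 0 means parity, 1 is the worst case."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, groups):
    """DEOdds: worst-case gap across groups in true-positive rate
    (label 1) and false-positive rate (label 0). Assumes every group
    contains examples of both labels."""
    gaps = []
    for label in (0, 1):
        mask = y_true == label
        rates = [y_pred[mask & (groups == g)].mean() for g in np.unique(groups)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)
```

Lower values of both metrics indicate smaller demographic disparity, which is the direction FairEnc's debiasing objectives push the encoders.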