Differentially Private De-identification of Dutch Clinical Notes: A Comparative Evaluation
Michele Miranda, Xinlan Yan, Nishant Mishra, Rachel Murphy, Ameen Abu-Hanna, et al.
TLDR
This paper compares differential privacy (DP), NER, and LLMs for de-identifying Dutch clinical notes, finding that hybrid LLM+DP methods achieve the best privacy-utility trade-off.
Key contributions
- First comparative study of DP, NER, and LLMs for de-identifying Dutch clinical notes.
- Evaluates standalone and hybrid strategies, including NER or LLM preprocessing before DP.
- Finds DP alone degrades utility, but hybrid methods improve privacy-utility trade-off.
- Highlights LLM-based redaction with DP for significant privacy-utility gains.
Why it matters
Automated de-identification is crucial for using sensitive clinical data while protecting patient privacy. This research provides the first comprehensive comparison of leading techniques, including hybrid approaches, for Dutch clinical notes. It offers valuable insights for developing practical and effective privacy-preserving solutions.
Original Abstract
Protecting patient privacy in clinical narratives is essential for enabling secondary use of healthcare data under regulations such as GDPR and HIPAA. While manual de-identification remains the gold standard, it is costly and slow, motivating automated methods that combine privacy guarantees with high utility. Most automated text de-identification pipelines employ named entity recognition (NER) to identify protected entities for redaction. While methods based on differential privacy (DP) provide formal privacy guarantees, large language models (LLMs) are also increasingly used for text de-identification in the clinical domain. In this work, we present the first comparative study of DP, NER, and LLMs for Dutch clinical text de-identification. We investigate these methods separately as well as hybrid strategies that apply NER or LLM preprocessing prior to DP, and assess performance in terms of privacy leakage and extrinsic evaluation (entity and relation classification). We show that DP mechanisms alone degrade utility substantially, but combining them with linguistic preprocessing, especially LLM-based redaction, significantly improves the privacy-utility trade-off.
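To make the hybrid idea concrete, here is a minimal, self-contained sketch of a "redact, then add DP noise" pipeline. It is *not* the paper's actual method: the entity list, the toy 2-D word embeddings, and the function names are all hypothetical, and the DP step is a simple metric-DP word replacement via the exponential mechanism (substitutes sampled with probability proportional to exp(-epsilon * distance / 2)).

```python
import math
import random

# Toy 2-D "embeddings" for a tiny illustrative vocabulary (hypothetical).
VOCAB = {
    "fever":    (0.9, 0.1),
    "cough":    (0.8, 0.2),
    "headache": (0.7, 0.3),
    "diabetes": (0.1, 0.9),
    "asthma":   (0.2, 0.8),
}

def redact_entities(tokens, entity_set, placeholder="[PII]"):
    """NER-style preprocessing: replace known protected entities
    with a placeholder before any DP noise is applied."""
    return [placeholder if t in entity_set else t for t in tokens]

def dp_replace(token, epsilon, rng):
    """Metric-DP word replacement: sample a substitute from the
    vocabulary with probability proportional to exp(-eps * d / 2),
    where d is Euclidean distance in the toy embedding space."""
    if token not in VOCAB:  # placeholders / unknown words pass through
        return token
    x, y = VOCAB[token]
    words = list(VOCAB)
    weights = [
        math.exp(-epsilon * math.hypot(x - VOCAB[w][0], y - VOCAB[w][1]) / 2)
        for w in words
    ]
    r = rng.random() * sum(weights)
    acc = 0.0
    for w, wt in zip(words, weights):
        acc += wt
        if r <= acc:
            return w
    return words[-1]

def deidentify(text, entity_set, epsilon, seed=0):
    """Hybrid pipeline: redact entities first, then perturb the rest."""
    rng = random.Random(seed)
    tokens = redact_entities(text.split(), entity_set)
    return " ".join(dp_replace(t, epsilon, rng) for t in tokens)

print(deidentify("Jansen reports fever and cough", {"Jansen"}, epsilon=5.0))
```

The intuition the paper's results suggest: redaction removes the high-risk identifiers outright, so the DP mechanism only has to perturb the remaining low-risk content, preserving far more utility than applying DP noise to the raw text.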