Green Shielding: A User-Centric Approach Towards Trustworthy AI
Aaron J. Li, Nicolas Sanchez, Hao Huang, Ruijiang Dong, Jaskaran Bains, and 6 more
TLDR
Green Shielding proposes a user-centric approach to building trustworthy AI by characterizing how benign input variations shift LLM behavior.
Key contributions
- Introduces Green Shielding, a user-centric agenda for robust LLM deployment guidance.
- Proposes the CUE criteria for benchmark design: authentic Context, metrics that capture true Utility, and realistic Elicitation of model behavior.
- Instantiates Green Shielding with HCM-Dx, a medical diagnosis benchmark using patient queries.
- Shows that "neutralization" perturbations, which strip user-level phrasing while preserving clinical content, yield more concise differentials but reduce coverage of safety-critical conditions (see the sketch after this list).
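To make the neutralization idea concrete, here is a minimal toy sketch of what such a perturbation might look like. The phrase patterns, the `neutralize` function, and the regex-based approach are illustrative assumptions for this digest, not the paper's actual perturbation pipeline (which is developed with practicing physicians and preserves clinical content more carefully).

```python
import re

# Illustrative patterns for user-level factors (hedging, emotional
# emphasis, politeness). These lists are assumptions for this sketch only.
USER_LEVEL_PATTERNS = [
    r"\bI(?:'m| am) (?:really |so )?(?:worried|scared|anxious)(?: that)?\b",
    r"\bplease help(?: me)?\b",
    r"\b(?:maybe|I think|I guess|kind of|sort of)\b",
    r"\bthanks(?: in advance)?\b",
]

def neutralize(query: str) -> str:
    """Strip user-level phrasing while keeping the clinical description."""
    out = query
    for pattern in USER_LEVEL_PATTERNS:
        out = re.sub(pattern, "", out, flags=re.IGNORECASE)
    out = re.sub(r"\s{2,}", " ", out)          # collapse leftover whitespace
    out = re.sub(r"\s+([,.!?])", r"\1", out)   # tidy stray punctuation
    return out.strip(" ,.!?")

query = ("I'm really worried that my chest pain when climbing stairs "
         "means something serious, please help!")
print(neutralize(query))
# -> "my chest pain when climbing stairs means something serious"
```

The paper's finding is that exactly this kind of stylistic stripping, applied before querying a model, measurably changes the differential diagnosis lists the model returns.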
Why it matters
This paper addresses a critical gap in LLM deployment by showing that routine, non-adversarial variations in how users phrase queries significantly alter model outputs. It provides a framework and a benchmark for developing evidence-backed user guidance for safer AI interaction, which is especially important in high-stakes domains such as medicine.
Original Abstract
Large language models (LLMs) are increasingly deployed, yet their outputs can be highly sensitive to routine, non-adversarial variation in how users phrase queries, a gap not well addressed by existing red-teaming efforts. We propose Green Shielding, a user-centric agenda for building evidence-backed deployment guidance by characterizing how benign input variation shifts model behavior. We operationalize this agenda through the CUE criteria: benchmarks with authentic Context, reference standards and metrics that capture true Utility, and perturbations that reflect realistic variations in the Elicitation of model behavior. Guided by the PCS framework and developed with practicing physicians, we instantiate Green Shielding in medical diagnosis through HealthCareMagic-Diagnosis (HCM-Dx), a benchmark of patient-authored queries, together with structured reference diagnosis sets and clinically grounded metrics for evaluating differential diagnosis lists. We also study perturbation regimes that capture routine input variation and show that prompt-level factors shift model behavior along clinically meaningful dimensions. Across multiple frontier LLMs, these shifts trace out Pareto-like tradeoffs. In particular, neutralization, which removes common user-level factors while preserving clinical content, increases plausibility and yields more concise, clinician-like differentials, but reduces coverage of highly likely and safety-critical conditions. Together, these results show that interaction choices can systematically shift task-relevant properties of model outputs and support user-facing guidance for safer deployment in high-stakes domains. Although instantiated here in medical diagnosis, the agenda extends naturally to other decision-support settings and agentic AI systems.
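The tradeoff the abstract describes, more concise differentials versus lower coverage of highly likely and safety-critical conditions, suggests simple list-level metrics. Below is a minimal hypothetical sketch of how such scores could be computed for a predicted differential against a structured reference set. The `ReferenceSet` tiers, exact-match scoring, and length-as-conciseness proxy are assumptions for illustration, not HCM-Dx's actual clinically grounded metrics.

```python
from dataclasses import dataclass

@dataclass
class ReferenceSet:
    """Hypothetical structured reference for one patient query:
    conditions tiered by likelihood and safety-criticality."""
    highly_likely: set[str]
    safety_critical: set[str]

def coverage(predicted: list[str], reference: set[str]) -> float:
    """Fraction of reference conditions appearing in the predicted list."""
    if not reference:
        return 1.0
    hits = {p.lower() for p in predicted} & {r.lower() for r in reference}
    return len(hits) / len(reference)

def evaluate(predicted: list[str], ref: ReferenceSet) -> dict[str, float]:
    return {
        "likely_coverage": coverage(predicted, ref.highly_likely),
        "critical_coverage": coverage(predicted, ref.safety_critical),
        "length": float(len(predicted)),  # shorter = more concise
    }

ref = ReferenceSet(
    highly_likely={"GERD", "costochondritis"},
    safety_critical={"acute coronary syndrome", "pulmonary embolism"},
)
original = ["GERD", "costochondritis", "acute coronary syndrome",
            "pulmonary embolism", "anxiety"]
neutralized = ["GERD", "costochondritis"]  # concise, misses critical tier
print(evaluate(original, ref))
print(evaluate(neutralized, ref))
```

Tracing scores like these across perturbation regimes and models is one way the Pareto-like tradeoff curves the paper reports could emerge: the neutralized list above is shorter but its critical coverage drops to zero.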