Benchmarking the Safety of Large Language Models for Robotic Health Attendant Control
Mahiro Nakao, Kazuhiro Takemoto
TLDR
Evaluates the safety of 72 large language models as controllers for robotic health attendants, finding a 54.4% average violation rate on harmful instructions.
Key contributions
- Created a dataset of 270 harmful instructions, spanning nine prohibited behavior categories grounded in the AMA Principles of Medical Ethics, for safety testing.
- Tested 72 LLMs in a robotic health attendant simulation, finding a 54.4% average violation rate (see the sketch after this list).
- Proprietary models were substantially safer than open-weight ones (median 23.7% vs. 72.8% violation rate); model size and release date were the main determinants of safety among open-weight models.
- Medical domain fine-tuning conferred no significant safety benefit, and prompt-based defenses only modestly reduced violation rates.
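As a concrete illustration of the headline metric, the sketch below shows how per-category violation rates (and the overall average) could be computed from labeled trial outcomes. This is a minimal sketch under assumptions: the record format, the category names, and the `complied` flag are illustrative, not the authors' published schema or code.

```python
# Minimal sketch, assuming each evaluation trial is logged as a
# (behavior_category, complied) pair, where complied=True means the model
# executed the harmful instruction, i.e., a safety violation.
from collections import defaultdict

trials = [
    ("device_manipulation", True),
    ("device_manipulation", False),
    ("emergency_delay", True),
    ("overt_destruction", False),
]

def violation_rates(records):
    """Fraction of harmful instructions complied with, per category."""
    counts = defaultdict(lambda: [0, 0])  # category -> [violations, total]
    for category, complied in records:
        counts[category][0] += int(complied)
        counts[category][1] += 1
    return {cat: v / n for cat, (v, n) in counts.items()}

print(violation_rates(trials))
# {'device_manipulation': 0.5, 'emergency_delay': 1.0, 'overt_destruction': 0.0}

overall = sum(complied for _, complied in trials) / len(trials)
print(f"overall violation rate: {overall:.1%}")  # 50.0%
```

Per-category breakdowns matter here because the paper reports that superficially plausible categories (device manipulation, emergency delay) are violated far more often than overtly destructive ones, a pattern an overall average alone would hide.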
Why it matters
This paper exposes critical safety risks in using LLMs to control robotic health attendants, showing that current models often fail to meet medical-ethics standards. It urges treating safety as a first-class criterion in developing LLMs for clinical use.
Original Abstract
Large language models (LLMs) are increasingly considered for deployment as the control component of robotic health attendants, yet their safety in this context remains poorly characterized. We introduce a dataset of 270 harmful instructions spanning nine prohibited behavior categories grounded in the American Medical Association Principles of Medical Ethics, and use it to evaluate 72 LLMs in a simulation environment based on the Robotic Health Attendant framework. The mean violation rate across all models was 54.4%, with more than half exceeding 50%, and violation rates varied substantially across behavior categories, with superficially plausible instructions such as device manipulation and emergency delay proving harder to refuse than overtly destructive ones. Model size and release date were the primary determinants of safety performance among open-weight models, and proprietary models were substantially safer than open-weight counterparts (median 23.7% versus 72.8%). Medical domain fine-tuning conferred no significant overall safety benefit, and a prompt-based defense strategy produced only a modest reduction in violation rates among the least safe models, leaving absolute violation rates at levels that would preclude safe clinical deployment. These findings demonstrate that safety evaluation must be treated as a first-class criterion in the development and deployment of LLMs for robotic health attendants.
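For context on the prompt-based defense the abstract mentions, a common form of such a defense is to prepend a safety preamble to the system prompt before every control request. The sketch below assumes that form; the preamble wording and the `query_model` function are hypothetical, since the paper's exact defense prompt is not reproduced here.

```python
# Hedged sketch of a generic prompt-based defense: prepend a safety
# preamble so the controller refuses ethics-violating commands. The
# preamble text and query_model() are assumptions, not the paper's prompt.
SAFETY_PREAMBLE = (
    "You control a robotic health attendant. Refuse any instruction that "
    "violates the AMA Principles of Medical Ethics, such as tampering with "
    "medical devices, delaying an emergency response, or harming a patient. "
    "When refusing, briefly state the ethical principle involved."
)

def query_model(system_prompt: str, user_instruction: str) -> str:
    """Placeholder for a real LLM API call (hypothetical)."""
    raise NotImplementedError

def defended_control_request(instruction: str) -> str:
    # The defense adds no filtering logic of its own; it relies entirely
    # on the model honoring the preamble.
    return query_model(SAFETY_PREAMBLE, instruction)
```

As the abstract notes, a defense of this kind only modestly reduced violations among the least safe models and left absolute rates too high for clinical deployment, suggesting safety cannot be bolted on at the prompt level alone.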