SafeTune: Search-based Harmfulness Minimisation for Large Language Models
Giordano d'Aloisio, David Williams, Giusy Annunziata, Zhiwei Fei, Antinisca Di Marco, et al.
TLDR
SafeTune is a search-based method that significantly reduces harmfulness and increases relevance in LLM responses through hyperparameter tuning and system prompt engineering.
Key contributions
- Proposes SafeTune, a multi-objective search approach to minimize LLM harmfulness.
- Achieves safety and relevance via hyperparameter tuning and system prompt engineering (a minimal sketch follows this list).
- Significantly reduces harmfulness and boosts relevance for Qwen3.5 0.8B.
- Finds that encouraging greater repetition in responses is the most impactful parameter for reducing harmfulness while increasing relevance.
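
The search procedure itself is not spelled out in this digest, so the sketch below uses a simple random search with Pareto filtering as a stand-in for SafeTune's actual multi-objective algorithm. The candidate system prompts, parameter ranges, and the `evaluate` stub are all hypothetical placeholders; in practice each configuration would be scored by generating responses and measuring harmfulness and relevance.

```python
import random
from typing import NamedTuple


class Config(NamedTuple):
    system_prompt: str
    temperature: float
    repetition_penalty: float


# Hypothetical candidates; the paper's actual prompt pool is not public here.
SYSTEM_PROMPTS = [
    "You are a helpful and harmless assistant.",
    "Refuse requests that could cause harm; otherwise answer concisely.",
]


def evaluate(cfg: Config) -> tuple[float, float]:
    """Placeholder objectives: (harmfulness rate, negated relevance), both minimised.

    A real evaluation would generate responses under cfg and score them with a
    harmfulness classifier and a prompt-response relevance metric.
    """
    harm = random.random()       # stand-in for measured harmfulness rate
    relevance = random.random()  # stand-in for measured relevance
    return harm, -relevance


def dominates(a: tuple[float, float], b: tuple[float, float]) -> bool:
    """True if a is no worse than b on every objective and better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def random_search(n_trials: int = 50):
    """Random search that keeps a Pareto front of non-dominated configurations."""
    front: list[tuple[Config, tuple[float, float]]] = []
    for _ in range(n_trials):
        cfg = Config(
            system_prompt=random.choice(SYSTEM_PROMPTS),
            temperature=random.uniform(0.1, 1.5),
            repetition_penalty=random.uniform(0.8, 1.3),
        )
        score = evaluate(cfg)
        # Keep cfg only if no current front member dominates it, then
        # drop any front members that cfg itself dominates.
        if not any(dominates(s, score) for _, s in front):
            front = [(c, s) for c, s in front if not dominates(score, s)]
            front.append((cfg, score))
    return front


if __name__ == "__main__":
    for cfg, (harm, neg_rel) in random_search():
        print(f"harm={harm:.2f} relevance={-neg_rel:.2f} {cfg}")
```

Keeping only non-dominated configurations yields a Pareto front, from which a developer can pick the safety/relevance trade-off that suits their deployment.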
Why it matters
This paper introduces SafeTune, a novel approach to making LLMs both safer and more relevant. Because it searches over hyperparameters and system prompts rather than updating model weights, it offers developers a practical way to reduce harmfulness without retraining. Its finding on response repetition also provides a concrete lever for future safety research.
Original Abstract
The widespread adoption of Large Language Models (LLMs) raises concerns about the potential harmfulness of their responses. In this paper, we first investigate the harmfulness of responses from four general-purpose LLMs. Next, we propose SafeTune, a multi-objective search-based approach to mitigate harmfulness while increasing response relevance through hyperparameter tuning and system prompt engineering. Our initial evaluation shows that SafeTune significantly reduces the rate of harmful responses generated by Qwen3.5 0.8B and increases prompt-response relevance (both with a large effect size). Among the parameters we explore, we also find that encouraging greater repetition in responses is most impactful in reducing harmfulness while increasing relevance.
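
The abstract's observation that encouraging repetition reduces harmfulness maps naturally onto the repetition penalty exposed by common decoding APIs. Below is a minimal sketch assuming the Hugging Face transformers `generate` API, where `repetition_penalty` values under 1.0 reward repetition and values over 1.0 penalise it; the model identifier is a placeholder, since the exact Hub id for Qwen3.5 0.8B is not given here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id: the paper evaluates Qwen3.5 0.8B, but its exact Hub
# identifier is not given in this digest.
model_id = "your-org/your-0.8b-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "How do I pick a strong password?"
inputs = tokenizer(prompt, return_tensors="pt")

# In transformers, repetition_penalty=1.0 is neutral, values above 1.0
# discourage repetition, and values below 1.0 encourage it; the paper
# reports that encouraging repetition most reduced harmfulness.
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=0.9,  # < 1.0: mildly encourage repetition
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```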