Fine-Tuning Lowers Safety and Disrupts Evaluation Consistency

(2025)

Paper Information
arXiv ID: 2506.17209

Abstract

Fine-tuning a general-purpose large language model (LLM) for a specific domain or task has become a routine procedure for ordinary users. However, fine-tuning is known to remove the safety alignment features of the model, even when the fine-tuning data does not contain any harmful content. We consider this to be a critical failure mode of LLMs due to the widespread uptake of fine-tuning, combined with the benign nature of the "attack". Most well-intentioned developers are likely unaware that they are deploying an LLM with reduced safety. On the other hand, this known vulnerability can be easily exploited by malicious actors intending to bypass safety guardrails. To make any meaningful progress in mitigating this issue, we first need reliable and reproducible safety evaluations. In this work, we investigate how robust a safety benchmark is to trivial variations in the experimental procedure, and the stochastic nature of LLMs. Our initial experiments expose surprising variance in the results of the safety evaluation, even when seemingly inconsequential changes are made to the fine-tuning setup. Our observations have serious implications for how researchers in this field should report results to enable meaningful comparisons in the future.

Summary

This paper investigates the implications of fine-tuning large language models (LLMs) for their safety performance. It emphasizes that fine-tuning can degrade a model's safety guardrails even when the training data is benign. Initial experiments revealed high variance in safety evaluations across minor variations in fine-tuning parameters, indicating that current methodologies for measuring safety may not yield repeatable or reliable results. The paper argues for more robust safety-evaluation methodologies, notes that fine-tuning can itself be treated as an attack on safety that malicious actors could exploit, and discusses the impact of stochastic factors in both training and evaluation.

Methods

This paper employs the following methods (an illustrative setup sketch follows the list):

  • LoRA fine-tuning
  • quantization
  • stochastic decoding
  • temperature sampling
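
A minimal sketch of how these methods fit together, assuming a Hugging Face transformers/peft stack: LoRA fine-tuning on top of a 4-bit quantized base model, followed by stochastic decoding with temperature sampling. The LoRA rank, quantization settings, and decoding temperature below are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative setup: LoRA adapters on a quantized Llama-3.2-1B, then sampling
# with a non-zero temperature (the stochastic decoding the paper highlights).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Llama-3.2-1B"  # one of the models evaluated in the paper

# 4-bit quantization (NF4 settings are assumptions; the summary does not specify them)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)

# LoRA adapters on the attention projections (rank/alpha chosen for illustration)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Stochastic decoding: temperature sampling, one of the variance sources studied
prompt = "Summarize what LoRA fine-tuning changes in a model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, do_sample=True, temperature=0.7, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```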

Models Used

  • Llama-3.2-1B
  • Mistral-7B-v0.3
  • GPT-4o-mini

Datasets

The following datasets were used in this research (a loading sketch follows the list):

  • databricks-dolly-15k
  • Alpaca
  • SORRY-Bench
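
The two benign instruction-tuning sets are available on the Hugging Face Hub under their standard IDs; the sketch below shows one way to load them. SORRY-Bench, the safety benchmark, is distributed by its authors and is omitted here.

```python
# Sketch of loading the two benign fine-tuning datasets from the Hugging Face Hub.
# SORRY-Bench (the safety benchmark) is obtained separately from its authors.
from datasets import load_dataset

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")
alpaca = load_dataset("tatsu-lab/alpaca", split="train")

print(len(dolly), dolly.column_names)    # instruction / context / response / category
print(len(alpaca), alpaca.column_names)  # instruction / input / output / text
```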

Evaluation Metrics

  • harmfulness score
  • toxicity score
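
As a rough illustration of how a benchmark-level harmfulness score can be aggregated, the sketch below assumes binary per-prompt judge verdicts (1 = harmful compliance, 0 = refusal) and reports the compliance rate. The `generate` and `judge` callables are hypothetical stand-ins for the fine-tuned model and a safety judge such as the SORRY-Bench evaluator, not the paper's exact scoring pipeline.

```python
# Hedged sketch: aggregate per-prompt judge verdicts into a harmfulness score.
from typing import Callable, Iterable

def harmfulness_score(prompts: Iterable[str],
                      generate: Callable[[str], str],
                      judge: Callable[[str, str], int]) -> float:
    """Fraction of unsafe prompts whose responses the judge flags as harmful."""
    verdicts = [judge(p, generate(p)) for p in prompts]
    return sum(verdicts) / len(verdicts)

# Toy usage with stub callables; a real run would call the fine-tuned model and
# a safety judge (e.g., the SORRY-Bench evaluator) in their place.
prompts = ["unsafe prompt 1", "unsafe prompt 2"]
respond = lambda p: "I can't help with that."
flag = lambda p, r: 0 if "can't help" in r else 1
print(harmfulness_score(prompts, respond, flag))  # -> 0.0
```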

Results

  • Fine-tuning negatively impacts safety guardrails of LLMs
  • High variance observed in safety evaluations across different random seeds and temperatures (see the repeatability sketch after this list)
  • Fine-tuning on self-generated content shows improved safety compared to human-written datasets
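
The variance finding suggests reporting safety scores as a distribution over repeated runs rather than as a single number. The sketch below illustrates this with a hypothetical `evaluate_safety` stand-in for a full benchmark run, swept over seeds and temperatures.

```python
# Sketch of a repeatability check: re-run the same safety evaluation under
# several random seeds and temperatures, then report mean and spread.
import random
import statistics

def evaluate_safety(seed: int, temperature: float) -> float:
    """Placeholder benchmark run: returns a synthetic harmfulness score in [0, 1]."""
    rng = random.Random(seed)
    return min(1.0, max(0.0, 0.3 + 0.1 * temperature + rng.gauss(0.0, 0.05)))

scores = [
    evaluate_safety(seed, temp)
    for seed in (0, 1, 2)
    for temp in (0.0, 0.7, 1.0)
]

print(f"harmfulness: {statistics.mean(scores):.3f} ± {statistics.stdev(scores):.3f} "
      f"over {len(scores)} runs")
```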

Limitations

The authors identified the following limitations:

  • Limited to small models
  • Focus on English language only
  • Does not explore all possible parameters affecting safety

Technical Requirements

  • Number of GPUs: 1
  • GPU Type: A100
  • Compute Requirements: each model was fine-tuned for five epochs, with a checkpoint saved at the end of each epoch (see the configuration sketch below)
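
A hedged sketch of the reported compute setup, assuming a Hugging Face Trainer-style configuration: five epochs on a single A100, with a checkpoint saved at the end of every epoch. Batch size, learning rate, and the other hyperparameters are illustrative assumptions, not values reported in the paper.

```python
# Illustrative training configuration matching the reported setup; it would be
# passed to a Trainer (or SFTTrainer) together with the model and dataset.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetune-ckpts",
    num_train_epochs=5,              # five epochs, as reported
    save_strategy="epoch",           # one checkpoint per epoch
    per_device_train_batch_size=8,   # assumed; not specified in this summary
    learning_rate=2e-4,              # assumed
    bf16=True,                       # A100 supports bfloat16
    report_to="none",
)
```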
