Localizing Paragraph Memorization in Language Models
Niklas Stoehr, ETH Zurich
Mitchell Gordon, Google
Chiyuan Zhang, Google
Owen Lewis, Google
arXiv
Can we localize the weights and mechanisms used by a language model to memorize and recite entire paragraphs of its training data? In this paper, we show that while memorization is spread across multiple layers and model components, gradients of memorized paragraphs have a distinguishable spatial pattern, being larger in lower model layers than gradients of non-memorized examples. Moreover, the memorized examples can be unlearned by fine-tuning only the high-gradient weights. We localize a low-layer attention head that appears to be especially involved in paragraph memorization.
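The unlearning procedure described above — rank weights by gradient magnitude on a memorized example, then update only the top-ranked ones — can be sketched in miniature. This is an illustrative toy, not the paper's actual setup: the model here is a single linear layer with a squared-error loss, whereas the paper works with a GPT-style transformer; the 5% selection fraction and learning rate are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: one weight matrix, one "memorized" input/target pair.
W = rng.normal(size=(8, 8))   # weights of the toy model
x = rng.normal(size=8)        # input features of the memorized example
y = rng.normal(size=8)        # target continuation (as a vector)

# Forward pass and squared-error loss on the memorized example.
pred = W @ x
loss_before = float(np.sum((pred - y) ** 2))
grad_W = np.outer(2 * (pred - y), x)  # dL/dW for L = ||Wx - y||^2

# Localize: keep only the top 5% of weights by gradient magnitude.
k = int(0.05 * W.size)
thresh = np.sort(np.abs(grad_W).ravel())[-k]
mask = np.abs(grad_W) >= thresh

# Unlearn: gradient *ascent* on the memorized example, restricted to the
# high-gradient weights; all other weights stay frozen.
lr = 0.1
W_unlearned = W + lr * grad_W * mask
loss_after = float(np.sum((W_unlearned @ x - y) ** 2))
```

After the masked ascent step, the loss on the memorized example rises while the overwhelming majority of weights are untouched, which is the sense in which unlearning is "localized" to the high-gradient weights.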