TY - JOUR
T1 - Evaluating the Ripple Effects of Knowledge Editing in Language Models
AU - Cohen, Roi
AU - Biran, Eden
AU - Yoran, Ori
AU - Globerson, Amir
AU - Geva, Mor
N1 - Publisher Copyright:
© 2024 Association for Computational Linguistics.
PY - 2024
Y1 - 2024
AB - Modern language models capture a large body of factual knowledge. However, some facts can be incorrectly induced or become obsolete over time, resulting in factually incorrect generations. This has led to the development of various editing methods that allow updating facts encoded by the model. Evaluation of these methods has primarily focused on testing whether an individual fact has been successfully injected, and if similar predictions for other subjects have not changed. Here we argue that such evaluation is limited, since injecting one fact (e.g., "Jack Depp is the son of Johnny Depp") introduces a "ripple effect" in the form of additional facts that the model needs to update (e.g., "Jack Depp is the sibling of Lily-Rose Depp"). To address this, we propose novel evaluation criteria that consider the implications of an edit on related facts. Using these criteria, we then construct RippleEdits, a diagnostic benchmark of 5K factual edits, capturing various types of ripple effects. We evaluate prominent editing methods on RippleEdits, showing that they fail to introduce consistent changes in the model's knowledge. In addition, we find that a simple in-context editing baseline obtains the best scores on our benchmark, suggesting a promising research direction for model editing.
UR - http://www.scopus.com/inward/record.url?scp=85187484588&partnerID=8YFLogxK
U2 - 10.1162/tacl_a_00644
DO - 10.1162/tacl_a_00644
M3 - Article
AN - SCOPUS:85187484588
SN - 2307-387X
VL - 12
SP - 283
EP - 298
JO - Transactions of the Association for Computational Linguistics
JF - Transactions of the Association for Computational Linguistics
ER -