What happens when we relearn part of what we previously knew? Predictions and constraints for models of long-term memory

    Research output: Contribution to journal › Article

    Abstract

    Part-set relearning studies examine whether relearning a subset of previously learned items impairs or improves memory for other, non-relearned items. Atkins and Murre have examined part-set relearning using multi-layer networks that learn by optimizing performance on a complete set of items. For this paper, four computer models that learn each item additively and separately were tested using the part-set relearning procedure (Hebbian network, CHARM, MINERVA 2, and SAM). Optimization models predict that part-set relearning should improve memory for items not relearned, while additive models make the opposite prediction. This distinction parallels the relative ability of these models to account for interference phenomena. Part-set relearning thus provides another source of evidence for choosing between optimization and additive models of long-term memory. A new study suggests that the predictions of the additive models are broadly supported.
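    The core contrast can be illustrated with a toy sketch (not taken from the article; all names, the dimensionality, and the bipolar encoding are illustrative assumptions). In an additive Hebbian associative memory, each cue-target pair simply adds its outer product to a shared weight matrix, so relearning one pair leaves the stored traces of other pairs arithmetically untouched:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 64  # illustrative vector dimensionality

    def bipolar(size):
        # Random +1/-1 pattern, a common encoding for Hebbian toy models
        return rng.choice([-1.0, 1.0], size=size)

    # Two cue-target pairs; pair B will never be relearned.
    cue_a, tgt_a = bipolar(n), bipolar(n)
    cue_b, tgt_b = bipolar(n), bipolar(n)

    # Additive learning: each studied pair contributes its outer product.
    W = np.outer(tgt_a, cue_a) + np.outer(tgt_b, cue_b)

    def recall(W, cue):
        # Cued recall: project the cue through W and threshold to +1/-1
        return np.sign(W @ cue)

    before = recall(W, cue_b)

    # Part-set relearning: only pair A is studied again.
    W += np.outer(tgt_a, cue_a)

    after = recall(W, cue_b)
    # The trace for pair B is unchanged; only cross-talk from A grows slightly,
    # so recall of the non-relearned item B is not improved by relearning A.
    ```

    An optimization model, by contrast, would retrain its weights against an error signal defined over the whole item set, so strengthening pair A can redistribute weights in a way that also changes performance on pair B.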
    Original language: English
    Pages (from-to): 202-215
    Journal: Psychological Research
    Volume: 65
    DOIs
    Publication status: Published - 2001

