Reaction to Roediger & Karpicke (2006) by Richard Thripp
EXP 6506 Section 0002: Fall 2015 – UCF, Dr. Joseph Schmidt
November 11, 2015 [Week 12]
Roediger and Karpicke (2006) found that students retain more when they take memory tests in lieu of additional studying, which was defined as re-reading a prose passage. Students in the repeated-study conditions predicted they would perform better, but actually did not. This has many implications for student beliefs and teaching practices; for instance, perhaps the practice of giving only 2–3 exams in lecture courses is not ideal for retention (p. 249).
The big question on my mind while reading the article was: what if the students just had poor reading habits? The only thing we know is that in experiment 2, subjects read each passage about 3.5 times per 5-minute study period, a rate of about 190 words per minute (pp. 250–252), which is reasonable. Unfortunately, we do not know how they were reading the passages. Were they reading "actively," highlighting, underlining, and writing notes in the margins? Probably not, given that their reading rate was fairly fast. We should ask ourselves whether these results generalize to people who are good at reading for retention. In experiment 1, subjects were tested by being asked to write down as much as they could remember (p. 250); this resembles active reading, which is a useful tool for comprehension and retention. If subjects had been actively reading during the study periods, performance may have been similar across conditions.
Textbook chapters often give problems or discussion questions at the end of each chapter. Performing these exercises may show a similar benefit to what the authors found. It would be nice if the authors had considered student habits and behavior in their discussion, rather than just focusing on educational practice (pp. 253–254). I know that I usually skip exercises in textbooks; I and other students would probably be better off spending less time reading the chapter and more time doing the exercises.
The authors used undergraduates aged 18–24 from their institution, Washington University, in both experiments. This is basically a convenience sample, which may not generalize to graduate students and others. Amazingly, the authors do not even mention how the students were selected or what programs and backgrounds they came from. They enrolled for "partial fulfilment of course requirements" (pp. 250–251), which could mean they self-selected through a system similar to the University of Central Florida's Psychology Research Participation System, in which psychology undergraduates must participate in graduate students' research as part of their course requirements but are allowed to choose which experiments to join. A volunteer sample may not be representative even of Washington University undergraduates as a whole, let alone the general public.
Using a wholly between-subjects design in experiment 2 may have weakened the statistical power of the experiment (pp. 251–253). We have no information about whether the six groups of 30 students were homogeneous. A within-subjects design with a greater number of reading passages might have worked, and participants could have received more course credit to incentivize the larger time commitment.
Using five minutes, two days, and one week as the testing intervals in experiment 1, and then eliminating the two-day condition in experiment 2, seems tenuous. It is convenient for keeping the study simple, but a condition with an interval of a few hours may have been valuable. In experiment 1, after scoring one third of the recall tests, one of the two raters dropped out (p. 250) and was allegedly not needed because of the high interrater reliability observed. The authors have intriguing results, but it seems like they "cut corners" in several places.
Roediger, H. L., III, & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249–255.