A Review of “Spikes not slots” by Bays (2015) [PowerPoint]

 
Download in Microsoft PowerPoint 2013 format (2.4 MB)
(View PowerPoint in Slideshow mode to see transitions not shown elsewhere.)
Download in PDF format (2.8 MB)

Created by Richard Thripp as an assignment for EXP 6506: Cognition and Learning class at University of Central Florida, to help the class understand this journal article.

Presented: 9/10/2015.

References

Bays, P. M. (2015). Spikes not slots: Noise in neural populations limits working memory. Trends in Cognitive Sciences, 19(8), 431–438. doi:10.1016/j.tics.2015.06.004

Brady, T., Konkle, T., & Alvarez, G. A. (2011). A review of visual memory capacity: Beyond individual items and toward structured representations. Journal of Vision, 11(5), 1–34. doi:10.1167/11.5.4

Luck, S. J., & Vogel, E. K. (2013). Visual working memory capacity: From psychophysics and neurobiology to individual differences. Trends in Cognitive Sciences, 17(8), 391–400. doi:10.1016/j.tics.2013.06.006

Figures were primarily from the Bays (2015) article.

The conceptual figure with colored squares for “continuous resource” versus “discrete slots” was from the Luck & Vogel (2013) article.

The Super Mario 64 screenshot, analog television image, and Windows “blue screen of death” screenshot were found via Google Image Search. Images in this PowerPoint presentation are hyperlinks to the source webpages.

Tags: cognition, continuous, debate, memory, opinion, resource model, slot model, slots, spikes, vision, visual working memory, working memory

A Review of “Iconic Memory Requires Attention” by Persuh, Genzer, & Melara (2012) [PowerPoint]

Download in Microsoft PowerPoint 2013 format (0.6 MB)
(View PowerPoint in Slideshow mode to see transitions not shown elsewhere.)
Download in PDF format (1.3 MB)

Click here to view an animated GIF (98 KB) roughly approximating a condition from the experiments of Persuh, Genzer, & Melara (2012), recreated by Richard Thripp. This animation is not included in the SlideShare, PPTX, or PDF files.

Created by Richard Thripp as an assignment for EXP 6506: Cognition and Learning class at University of Central Florida, to help the class understand this journal article.

Presented: 9/03/2015.

Reference

Persuh, M., Genzer, B., & Melara, R. D. (2012). Iconic memory requires attention. Frontiers in Human Neuroscience, 6(126), 1–8. doi:10.3389/fnhum.2012.00126

Tags: attention, cognition, cognitive psychology, iconic memory, recall, vision

Reaction to “Language comprehension and production” by Clifton, Meyer, Wurm, & Treiman (2013)

Reaction to Clifton, Meyer, Wurm, & Treiman (2013) by Richard Thripp
EXP 6506 Section 0002: Fall 2015 – UCF, Dr. Joseph Schmidt
November 19, 2015 [Week 13]

Clifton, Meyer, Wurm, and Treiman (2013) review many themes in current language research. The idea that language might be cognitively processed probabilistically rather than by rules (p. 539) seems particularly applicable to English; the language has so many oddities and exceptions that rule-based processing is often untenable. While we know from Jean Berko Gleason's research (Berko, 1958) that young children develop rules fairly early, being able to pluralize the nonsense word "wug," it is plausible that a probabilistic approach becomes tenable only with the larger base of linguistic experience developed in later childhood and adolescence. A broader command of the language might let us make lexical, semantic, syntactic, and other decisions based on probabilities rather than rules, even for unfamiliar linguistic elements, drawing on our tacit and explicit memories.

While the authors note that the interactive view of language processing is not falsifiable (Clifton et al., 2013, pp. 539–540), this does not necessarily mean it is wrong. The modular view is easier to model, conceptualize, and theorize about, and is conveniently similar to how computers and other machines operate. The human brain, however, is complex, enigmatic, and far more advanced than computers in many ways. For example, unconscious precursors have been found to precede volitional motor movement by as much as several hundred milliseconds (Shibasaki & Hallett, 2006); by conventional models, one might conclude that parts of the brain travel backward through time, a manifestly ludicrous conclusion. Adhering rigidly to the modular view might be similarly ludicrous. Even "flow" and so-called "a-ha!" moments seem to favor interactive views and preclude modular ones, albeit not with respect to language processing specifically. Unfortunately, the idea that we may process language in any order, at any time, with any available information makes the phenomenon difficult to study.

Giving unnecessary information may be easier than paring our speech down to what is needed (Clifton et al., 2013, p. 536). This is reminiscent of the remark commonly attributed to Mark Twain: "If I had more time I would write a shorter letter." The idea that distillation takes more effort and processing power has a clear parallel in computing; consider Phil Katz's DEFLATE algorithm, used by the Unix gzip utility. Squeezing out higher levels of lossless compression takes substantially more computing power, even though decompressing the result costs little extra. Translating or distilling language, particularly spoken language, is an art, and certainly a lossy process, although the degree of lossiness, and whether it is acceptable, varies between individuals, dyads, and larger groups. Consider that academics are required to write abstracts for their scholarly articles, a process made arduous merely by severe length constraints. If abstracts were lossless, we would never have to read the articles themselves, but such an abstract would probably take longer to read than the article despite being technically shorter. Producing a good abstract, a short letter, or effective utterances conveying only the necessary information is therefore not easy. It requires more thought and effort than spewing out the necessary information amid a sea of garbage, and doing too good a job of compressing the required information can backfire, resulting in confusion or requests for reiteration. I can think of times in my life when I have explained something concisely and elegantly, yet unsuccessfully. Success may paradoxically require both lossiness and repetition, in unexpected places that vary between individuals and subcultures.
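As an illustrative aside (my own example, not anything from the article), the asymmetry is easy to observe with Python's zlib module, which implements DEFLATE: higher compression levels spend more effort squeezing redundancy out of the input, yet the round trip remains perfectly lossless.

```python
import zlib

# Sample text with plenty of redundancy for DEFLATE to exploit.
data = ("Academics are required to write abstracts for their "
        "scholarly articles. " * 500).encode()

sizes = {}
for level in (1, 6, 9):
    compressed = zlib.compress(data, level)
    # Lossless: decompression must restore the input exactly.
    assert zlib.decompress(compressed) == data
    sizes[level] = len(compressed)
    print(f"level {level}: {len(data)} -> {len(compressed)} bytes")

# On redundant input like this, a higher level never yields larger output.
assert sizes[9] <= sizes[1]
```

Level 9 works much harder than level 1 for a modest further reduction in size, which is roughly the abstract-writer's predicament: the last few saved words cost the most effort.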

References

Berko, J. (1958). The child’s learning of English morphology. Word, 14, 150–177.

Clifton, C. J., Meyer, A. S., Wurm, L. H., & Treiman, R. (2013). Language comprehension and production. In A. F. Healy, R. W. Proctor, & I. B. Weiner (Eds.), Handbook of psychology, Vol. 4: Experimental psychology (2nd ed., pp. 523–547). Hoboken, NJ: John Wiley & Sons.

Shibasaki, H., & Hallett, M. (2006). What is the Bereitschaftspotential? Clinical Neurophysiology, 117, 2341–2356. doi:10.1016/j.clinph.2006.04.025

Reaction to “Retrieval practice enhances new learning: The forward effect of testing” by Pastötter & Bäuml (2014)

Reaction to Pastötter & Bäuml (2014) by Richard Thripp
EXP 6506 Section 0002: Fall 2015 – UCF, Dr. Joseph Schmidt
November 11, 2015 [Week 12]

This brief literature review discusses the forward effect of testing, which, to me, implies that the brain actually changes modes, so to speak, when tested, perhaps becoming more focused and less distractible. This carries over, improving retention for material studied immediately after the test, even if no test is given on the subsequent material. The retrieval explanation (p. 2) may be related. To me, the encoding explanation (p. 2) seems similar to the retrieval explanation rather than a distinct counterpoint; the improvements in list segregation following recall testing might be characterized as recoding rather than a retrieval effect. Basically, this would mean that list memories are partly updated upon retrieval, a type of encoding similar to the Unix "diff" command or a rolling backup that copies only the files that have changed. Retrieval would thus enhance memory through the compartmentalization effect the authors describe, and perhaps also through an indexing effect similar to a full-text search engine, which builds an index of words that speeds search at the cost of extra storage and processing. The retrieval process itself may give the brain time to recode memories and compile or improve this index.
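To make the "diff" analogy concrete (a hypothetical sketch of my own, not anything from the article), Python's difflib can express an updated list as just the changed entries, the way a rolling backup stores only modified files rather than re-recording everything:

```python
import difflib

# A remembered list before and after a retrieval that updates one item.
before = ["red square", "blue circle", "green triangle"]
after = ["red square", "blue star", "green triangle"]

# unified_diff emits only changed lines (plus headers and context);
# here we keep just the additions and deletions.
delta = [line for line in difflib.unified_diff(before, after, lineterm="")
         if line.startswith(("-", "+"))
         and not line.startswith(("---", "+++"))]
print(delta)  # ['-blue circle', '+blue star']
```

Storing the two-entry delta rather than the whole list is the economy the recoding account suggests: most of the memory is left untouched, and only the retrieved, changed portion is re-encoded.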

The results from studies of testing before misinformation are intriguing; they seem a plausible explanation for the unreliability of eyewitness accounts (p. 3). Though the authors did not discuss implications, I infer that ensuring students have correct understandings of curriculum materials may be more important than generally known. If students are allowed to encode misinformation, whether due to instructor error, vagueness in course material, or student error, it may be especially persistent if testing immediately follows.

Overall, the forward effect of testing is an exciting phenomenon that might help explain other mysteries and improve educational practice, as researchers continue to study it.

Reference

Pastötter, B., & Bäuml, K.-H. T. (2014). Retrieval practice enhances new learning: The forward effect of testing. Frontiers in Psychology, 5(286), 1–4. doi:10.3389/fpsyg.2014.00286

Reaction to “Test-enhanced learning: Taking memory tests improves long-term retention” by Roediger & Karpicke (2006)

Reaction to Roediger & Karpicke (2006) by Richard Thripp
EXP 6506 Section 0002: Fall 2015 – UCF, Dr. Joseph Schmidt
November 11, 2015 [Week 12]

Roediger and Karpicke (2006) found that students have improved retention when taking memory tests in lieu of studying, which was defined as re-reading a prose passage. Students in the repeated-study conditions predicted they would perform better, but actually did not. This has many implications for student beliefs and teaching practices; perhaps the common practice of giving only 2–3 exams in a lecture course is not ideal for retention (p. 249).

The big question on my mind while reading the article was: what if the students simply had poor reading habits? All we know is that in experiment 2, subjects read each passage about 3.5 times per five-minute study period, a rate of about 190 words per minute (pp. 250–252), which is reasonable. Unfortunately, we do not know how they were reading the passages. Were they reading "actively," highlighting, underlining, and writing notes in the margins? Probably not, given that their reading rate was fairly fast. We should ask whether these results generalize to people who are good at reading for retention. In experiment 1, subjects were tested by being asked to write down as much as they could remember (p. 250); this resembles active reading, which is a useful tool for comprehension and retention. Had they been reading actively during the study periods, performance may have been similar across conditions.

Textbook chapters often give problems or discussion questions at the end of each chapter. Performing these exercises may show a similar benefit to what the authors found. It would be nice if the authors had considered student habits and behavior in their discussion, rather than just focusing on educational practice (pp. 253–254). I know that I usually skip exercises in textbooks; I and other students would probably be better off spending less time reading the chapter and more time doing the exercises.

The authors used undergraduates aged 18–24 from their institution, Washington University, for both their experiments. This is basically a convenience sample, which may not generalize to graduate students and others. Amazingly, the authors do not even mention how the students were selected or what programs and backgrounds they came from. They enrolled for “partial fulfilment of course requirements” (pp. 250–251), which could mean they self-selected in a system similar to University of Central Florida’s Psychology Research Participation System, where psychology undergraduates, as part of their course requirements, must participate in the research of graduate students, but are allowed to choose the experiments they will participate in. A volunteer sample may not be representative of even Washington University undergraduate students as a whole, let alone the general public.

Using a wholly between-subjects design in experiment 2 may have weakened the statistical power of the experiment (pp. 251–253). We have no information about whether each of the six groups of 30 students was comparable. A within-subjects design with a greater number of reading passages might have worked, and participants could have received more course credit to incentivize the larger commitment.

Using five minutes, two days, and one week as the testing intervals in experiment 1, and then eliminating the two-day condition in experiment 2, seems tenuous, though convenient for keeping the study simple. A condition with an interval of a few hours might have been valuable. In experiment 1, after scoring one third of the recall tests, one of the two raters bailed (p. 250), allegedly because the high interrater reliability observed made a second rater unnecessary. The authors have intriguing results, but it seems they "cut corners" in several places.

Reference

Roediger, H. L., III, & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17(3), 249–255.