Category Archives: EXP6506

Reaction to “The what, where, and why of priority maps and their interactions with visual working memory” by Zelinsky & Bisley (2015)

Reaction to Zelinsky & Bisley (2015) by Richard Thripp
EXP 6506 Section 0002: Fall 2015 – UCF, Dr. Joseph Schmidt
September 30, 2015 [Week 6]

Zelinsky and Bisley (2015) have presented a literature review regarding visual working memory and priority maps, reaching the conclusion that a vital relationship exists between these concepts, even though researchers often ignore the connection (p. 159). Further, the authors believe priority maps play an integral role in goal-directed behavior, and propose the common source hypothesis: that visual working memory is the foundation for goal prioritization, which is “propagated to all the effector systems” through tailored priority maps for each system, all reaching toward a common goal such as making a cup of tea (pp. 159–160). Priority maps may keep “interrupts” (distractions) from stealing priority and derailing progress toward the goal.

Zelinsky and Bisley spend a great deal of time talking about the oculomotor system (pp. 156–158). They argue that it and visual working memory provide us with the model for priority maps and that this model generalizes to other visuomotor systems. They discuss evidence of a transformation from retinotopic to motor reference frames as priority maps move from the parietal cortex to the frontal lobe, and predict that a similar transformation will be found for responses in the premotor cortices (pp. 157–158).
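To fix ideas, here is a minimal sketch of how I read the priority-map concept: a topographic array in which bottom-up salience and top-down goal relevance combine, with the peak driving the next movement (here, a saccade target). The array size, the weights, and the kettle example are my own assumptions, not anything specified by Zelinsky and Bisley.

```python
# A rough, hypothetical sketch of a retinotopic priority map; not the authors' model.
import numpy as np

salience = np.random.rand(20, 20)        # bottom-up: how much each location "pops out"
goal_relevance = np.zeros((20, 20))      # top-down: locations relevant to the current goal
goal_relevance[5, 12] = 1.0              # e.g., the remembered location of the kettle

priority = 0.4 * salience + 0.6 * goal_relevance   # hypothetical weighting of the two sources
next_fixation = np.unravel_index(np.argmax(priority), priority.shape)
print(next_fixation)   # the location the oculomotor system would target next
```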

The authors seem to have conflated general working memory (“WM”) with visual working memory (“VWM”)—they only refer to WM in regard to the tea-brewing task (p. 160) and argue for the centrality and singular importance of visual working memory throughout their paper. They reach the perhaps regrettable conclusion that all priority maps must have a topographical representation (p. 156). They give agreeable examples involving arm movements (p. 158), saccades (p. 159), and choosing to run right or left (p. 161), while conveniently leaving out discussion of hearing, smell, taste, and touch. Can we not have an auditory or olfactory priority map? Mechanics might listen for particular sounds to diagnose their machines; humans in general may have priority maps for particular smells and tastes to warn them of spoiled or poisonous food. How are these maps topographical? Just because there is a glut of research on vision and visual working memory does not mean that we should simply extrapolate such findings to other domains without supporting evidence. Perhaps “priority map” is not the best term, since it is admittedly “by definition, organized into a map of some space” (p. 161). Zelinsky and Bisley seem to want to generalize priority maps to all domains of human attention, and yet the analogy is ill-suited to many of them.

Principally, Baddeley and Hitch’s working memory model and its derivatives focus on the senses that are most salient and important to survival: sight (visuo-spatial sketchpad) and hearing (phonological loop). However, congenitally blind subjects have been found to have significantly better tactile working memory than even semi-blind subjects who were equally fluent in Braille (Cohen, Scherzer, Viau, Voss, & Lepore, 2011). Zelinsky and Bisley (2015) do not once discuss blindness, nor the possibility that visual working memory’s dominance is experiential in origin. In their defense, congenitally blind subjects have been found to rely on spatial recognition for Braille reading and to use pathways typically devoted to the visual system (Cohen et al., 2011). Nevertheless, the role of experience should always be considered—the priority maps of sighted, congenitally blind, acquired blind, and semi-sighted individuals may have distinct differences. While it is easy to gloss over blind individuals due to their rarity, there may be much to learn from studying blindness. The authors may have benefited from identifying blindness as an area requiring further research, rather than extrapolating over it.

Cohen et al. (2011) present an intriguing possibility: working memory might have a higher capacity when spread over multiple modalities. Could this allow for several simultaneously operating priority maps? Consider a hunter-gatherer exploring a forest—he or she may have multiple priority maps for sight, hearing, smell, and touch (e.g., wind direction and skin temperature), each contributing to finding food and avoiding danger. It is then apparent that a more fundamental analysis, rather than the technical analysis that Zelinsky and Bisley have provided, may be in order. Not only should experiential sensory history and alternate models be considered, but so should the possible evolutionary origins of priority maps.

References

Cohen, H., Scherzer, P., Viau, R., Voss, P., & Lepore, F. (2011). Working memory for braille is shaped by experience. Communicative & Integrative Biology, 4(2), 227–229. doi:10.4161/cib.4.2.14546

Zelinsky, G. J., & Bisley, J. W. (2015). The what, where, and why of priority maps and their interactions with visual working memory. Annals of the New York Academy of Sciences, 1339, 154–164. doi:10.1111/nyas.12606

Reaction to “Different states in visual working memory” by Olivers, Peters, Houtkamp, & Roelfsema (2011)

Reaction to Olivers, Peters, Houtkamp, & Roelfsema (2011) by Richard Thripp
EXP 6506 Section 0002: Fall 2015 – UCF, Dr. Joseph Schmidt
September 30, 2015 [Week 6]

Olivers, Peters, Houtkamp, and Roelfsema (2011) have presented a review of literature regarding interactions between visual working memory and attentional deployment in search tasks. A major focus of their review is orthogonal coding, where different informational sources are represented by different coding patterns within the same neuronal populations (p. 327). Their review concludes, purportedly on the basis of convergent evidence, that “only one memory representation can serve as a search template, and this representation blocks attentional guidance by other memory representations” (p. 330).

For me, the idea that only one “search template” can be loaded into visual working memory for active processing, while other templates must be held in abeyance, brings two computing analogies to mind. First, there is the Microsoft Windows “Clipboard”—a space where text, images, files, or other data may be held, but only one item or set of items at a time—anything from a single character of text to a massive folder with hundreds of files and subfolders. While the virtually unlimited capacity of the Windows clipboard is not analogous, the idea of having to swap things in and out is, and it becomes particularly salient when you have two types of content that you want to paste into a file in multiple different places, or when you must remember not to accidentally overwrite your clipboard contents. Second, the entire concept reminds me of paging and swap files. Modern computer operating systems exchange information between random access memory or RAM (lower capacity but very fast access) and hard disk drives or solid-state drives (higher capacity but much slower access). In this analogy, the active search template is loaded into RAM for efficient processing, while the accessory item(s) are maintained on the HDD or SSD. Swapping search templates is not trivial—RAM is often 1,000 times faster than conventional hard disk drives. While this latency difference is much greater than the sub-5% latency differences shown in typical experiments (p. 329), it represents a conceptually similar process.
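To make the analogy concrete, here is a minimal sketch of the one-active-template idea in code; the class, the swap counter, and the example templates are my own invention, not anything from Olivers et al.

```python
# Hypothetical sketch: only one search template is "active" at a time, and
# swapping templates demotes the current one to a background store at a cost.
class TemplateStore:
    def __init__(self):
        self.active = None        # the one template that guides attention ("in RAM")
        self.accessory = []       # templates held in the background ("on disk")
        self.swaps = 0            # count of costly template swaps

    def activate(self, template):
        # Activating a new template demotes the current one and incurs a swap.
        if self.active is not None and self.active != template:
            self.accessory.append(self.active)
            self.swaps += 1
        if template in self.accessory:
            self.accessory.remove(template)
        self.active = template

store = TemplateStore()
store.activate("red vertical bar")
store.activate("green circle")     # the red bar is demoted; only the circle guides search
print(store.active, store.accessory, store.swaps)
```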

If the search target is used repeatedly, the search process is offloaded to “less demanding memory representations” and becomes automatized (pp. 328–329), thus freeing up explicit, effortful working memory for a new search template. This is seen in the differences between color search tasks for 1 of 3 colors as compared to 2 of 3 colors—the former is more efficient and neither distractor color captures attention, but the lone distractor color captures attention when looking for 2 of 3 colors (p. 330). The authors ask whether this generalizes to other types of memory, and lament that there is a lack of research in this area (p. 332). There is potential conceptual overlap with other types of memory—for example, one’s name might be an example of an automatized search template with respect to auditory cognition, and might have explanatory power for the cocktail party phenomenon (Wood & Cowan, 1995). Text search may be another area of interest—for example, say you are searching a printed bank statement for two transactions of different amounts. Should you try to load both search templates at once, or should you make two passes over the statement, looking for only one amount on each pass? How will completion time and error rates vary? While text search involves vision, it is also distinct from searching for colors or objects and involves different considerations such as language, words versus numbers, context, and so on. Moreover, implications drawn from visual working memory research might apply in many other areas. At the very least, they can aid in developing research questions.
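Purely as a back-of-the-envelope way of framing the bank-statement question, consider a toy cost model (every parameter below is an assumption I made up, not data from any study): if holding two active templates adds a per-item cost, while a second pass adds a fixed template-swap cost, the better strategy depends on how long the statement is.

```python
# Toy cost model for the bank-statement example; all parameters are hypothetical.
def one_pass_cost(n_items, per_item=1.0, dual_template_penalty=1.2):
    # Single pass with both amounts held as active templates at once.
    return n_items * (per_item + dual_template_penalty)

def two_pass_cost(n_items, per_item=1.0, swap_cost=3.0):
    # Two passes, one amount per pass, plus the cost of swapping templates once.
    return 2 * n_items * per_item + swap_cost

for n in (10, 50, 200):
    # With these made-up costs, one pass wins for short statements and two passes
    # win for long ones; the crossover depends entirely on the assumed costs.
    print(n, one_pass_cost(n), two_pass_cost(n))
```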

References

Olivers, C. N. L., Peters, J., Houtkamp, R., & Roelfsema, P. R. (2011). Different states in visual working memory: When it guides attention and when it does not. Trends in Cognitive Sciences, 15(7), 327–334. doi:10.1016/j.tics.2011.05.004

Wood, N., & Cowan, N. (1995). The cocktail party phenomenon revisited: How frequent are attention shifts to one’s name in an irrelevant auditory channel? Journal of Experimental Psychology: Learning, Memory, and Cognition, 21(1), 255–260.

Reaction to “Visual attention within and around the field of focal attention: A zoom lens model” by Eriksen & St. James (1986)

Reaction to Eriksen & St. James (1986) by Richard Thripp
EXP 6506 Section 0002: Fall 2015 – UCF, Dr. Joseph Schmidt
September 24, 2015 [Week 5]

Eriksen and St. James (1986) believe their experiments support the zoom lens model, which differs from the spotlight model in that it proposes we can vary our attentional distribution on a continuum from a wide field of view down to a fraction of a degree (pp. 226–227). The spotlight model typically posits a discrete, even binary choice between a broadly and a narrowly focused attentional field, with restricted or nonexistent options in between these two poles (p. 226).

As a photographer, I could not help thinking of analogies to camera lenses and digital processing chips while reading this article, particularly since that is the crux of the authors’ analogy. In the discussion of experiment 1, the authors indicate that a 50 ms stimulus onset asynchrony (SOA) does not allow time for the attentional field to “zoom in,” so to speak, so an incompatible noise letter three positions away from the cued area delays the subject’s response. Given 100 ms, however, no delay in reaction time is observed, which may indicate the noise letter is now excluded or “cropped out” (p. 233). This reminds me of the autofocus delay on cameras, which often measures in the hundreds of milliseconds and can prevent the photographer from capturing desired moments.

Regarding the displays in experiment 1 where 3 of 8 letters were cued, reaction times paradoxically increased in the 200-ms SOA condition as compared to the 100-ms or even 50-ms conditions. As an explanation, the authors present the possibility that with 3 of 8 letters cued, attending to the whole display may be nearly as efficient as attending to the cued area; thus, subjects may have elected to attend to the whole field on some displays, increasing reaction time (p. 234). Unfortunately, this is a post-hoc explanation, and the experiments did not collect data to support this possibility. While the authors believe experiment 2 verified this explanation, that experiment also had a very small sample size (n = 6), fewer trials, and an incompatible noise letter that was comparatively ineffective (p. 239). Fortunately, the authors seem to have produced a stronger argument that the cued letters are searched simultaneously rather than serially—specifically, that reaction times across 1, 2, and 3 cued positions increased far less than they should have if the positions were searched serially (p. 234).
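To spell out that serial-versus-parallel logic with a minimal numerical sketch (the 480 ms base and 40 ms per-position step below are invented for illustration, not figures from the paper):

```python
# Toy predictions: a serial model adds a fixed cost per additional cued position,
# while a parallel model predicts roughly flat reaction times across cue counts.
base_rt = 480.0      # ms: hypothetical reaction time with a single cued position
serial_step = 40.0   # ms: hypothetical extra cost per added position under serial search

for n_cued in (1, 2, 3):
    serial_prediction = base_rt + serial_step * (n_cued - 1)   # grows linearly with cue count
    parallel_prediction = base_rt                              # stays roughly constant
    print(n_cued, serial_prediction, parallel_prediction)
```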

In both experiments, multiple cued letters were always adjacent to each other in the circle. It would be interesting to see 2 cued locations not adjacent to each other—would the subject revert to processing the whole display, or somehow divide attention between the non-adjacent cued locations? How would this fit into the zoom lens model? Also: the authors assume that with no precues, all display elements are processed in parallel (pp. 232–233). In experiment 2, they include displays where all 8 letters are precued (p. 237). It would be nice to see if there are any implications of precueing all the positions versus none of them. When all the letters are underlined, does the underlining have any effects on reaction time? In neither experiment were there any conditions that had no cued or precued letters.

The authors’ ANOVA results are highly statistically significant, and they purposely used methodology similar to past research in the hope of allowing compatible comparisons (p. 229). However, I have lingering doubts about unspecified variables. We are given very little detail about the subjects—only that they are right-handed University of Illinois students who self-reported having normal or corrected vision (pp. 229, 237). Who is to say these self-reports were accurate? Why did the authors not bother with a visual acuity test? What were the ages of the participants? Did they have any other visual or attentional problems? The sample sizes of 8 and 6 are fairly small, meaning that even a few non-equivalent participants could have skewed the results. This research was published in 1986 and used a tachistoscope with individually constructed slides with affixed letters, rather than computer displays (pp. 229–230). The care and uniformity with which these slides were constructed is not specified. We are told subjects received reaction time feedback after each trial (p. 230), but not how this feedback was structured or conveyed. Whether the feedback was spoken by the researchers or conveyed in text or graphs may have implications. Further, encouraging participants to keep their error rate below 10% (p. 230) could have been a factor in the unusual reaction time pattern shown in figure 4 (p. 232)—perhaps this pattern is not indicative of parallel processing, but rather of error avoidance? Despite the statistical strength of their results, the authors may be overreaching in the assuredness of their conclusions.

Reference

Eriksen, C., & St. James, J. (1986). Visual attention within and around the field of focal attention: A zoom lens model. Perception & Psychophysics, 40(4), 225–240. doi:10.3758/BF03211502

Reaction to “Driver performance while text messaging using handheld and in-vehicle systems” by Owens, McLaughlin, & Sudweeks (2011)

Reaction to Owens, McLaughlin, & Sudweeks (2011) by Richard Thripp
EXP 6506 Section 0002: Fall 2015 – UCF, Dr. Joseph Schmidt
September 22, 2015 [Week 5]

Owens, McLaughlin, and Sudweeks (2011) conducted what was, to their knowledge, the first controlled, real-world study of text messaging and driving (p. 940). They used a closed, 1.4-mile two-lane road, and the actual trials were conducted on straight uphill and downhill sections (I was surprised that no information was given regarding the steepness of these grades). Participants (n = 20) sent and received text messages by typing on their own mobile phones on some trials, and on other trials used the in-vehicle Ford SYNC system to select from a pre-programmed list of 15 possible text messages (p. 940). Overall, sending text messages was more dangerous than receiving them, mobile phone use was more dangerous than the in-vehicle system, and texting in general appeared more dangerous for older participants.

I question the generalizability of this experimental study. Why not do a naturalistic study, where drivers agree to have their cars equipped with audiovisual and kinematic sensors that monitor their texting habits in real-world situations? An example of such a study, funded by the same agency (the National Surface Transportation Safety Center for Excellence), is Distraction in commercial trucks and buses: Assessing prevalence and risk in conjunction with crashes and near-crashes (2010). Instead, we get an experimental study that is simplistic in implementation and hampered by safety concerns. Owens, McLaughlin, and Sudweeks (2011) conducted their trials with no other vehicles on the roadway, at a maximum speed of 35 miles per hour, on straightaways! This is not like actual texting while driving, which may involve congestion, traffic signals, curves, higher speeds, pedestrians, and more. They proceed to make inferences that texting by hand results in greatly degraded control of the vehicle, based on steering velocities (p. 945); however, the possibility that participants might text more cautiously (with more frequent steering corrections) on an actual roadway with other drivers is not explored. We are expected to believe that the conditions are valid because participants did not know if a single confederate vehicle might enter the roadway again, after passing them in the opposite direction on the first practice lap (p. 942)—quite a stretch, to say the least.

While mental demand, glances, and steering were measured, there was no consideration of velocity, following distance, weather conditions, or a host of other factors. To be fair, the authors did conduct a naturalistic study for about an hour with each participant, immediately prior to the 40-minute study in question, the results of which were released in 2010 (p. 942). However, both studies are of limited depth and were conducted with an “in-vehicle experimenter” present (p. 942), possibly influencing behavior. Considering that the experimenters had a control tower and many cameras and sensors (pp. 941–942), they could have eliminated the in-vehicle experimenter had they wanted to. As a further limit on generalizability, the system used does not even exist in the real world—the actual Ford SYNC system had to be modified by the manufacturer to allow texting while driving, since it typically disables texting at speeds over 3 miles per hour (pp. 940, 945).

This study was conducted in Virginia, where texting while driving is illegal—therefore, the researchers did not screen participants on their texting-while-driving habits (p. 940). Had they conducted the study in a state where texting while driving was “legal,” such as Florida, they could have asked these questions and perhaps gained further insights.

The researchers relied on post-hoc tests to investigate interactions (p. 943), including measuring the baseline duration post hoc (p. 942). Post-hoc analysis should be used with caution and may reveal statistically significant patterns that are of no practical significance. They also assumed normality and homogeneity of variance even though there were deviations in the ANOVA residual plots, and they did not show us the plots (p. 943).
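For what it is worth, checking and reporting these assumptions is not onerous. Here is a minimal sketch of how residual normality and variance homogeneity could be examined; the data, group labels, and numbers below are entirely hypothetical, not the authors’.

```python
# Hypothetical sketch (not the authors' analysis) of checking ANOVA assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = {
    "handheld": rng.normal(2.5, 0.8, 20),    # made-up steering-error scores per condition
    "in_vehicle": rng.normal(1.9, 0.6, 20),
    "baseline": rng.normal(1.4, 0.5, 20),
}

# Residuals: deviation of each observation from its group mean.
residuals = np.concatenate([v - v.mean() for v in groups.values()])

# Normality of residuals (Shapiro-Wilk) and homogeneity of variance (Levene).
print("Shapiro-Wilk:", stats.shapiro(residuals))
print("Levene:", stats.levene(*groups.values()))
```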

Importantly, driving while texting was not measured with respect to where the mobile phone was located—performance might be better if the phone were in a cradle on the dashboard or mounted to the windshield, since participants would not have to look down (away from the roadway) to use their phones. I did not see any mention of this possibility, though interior glances were timed and counted. Further, the information in this article is already somewhat dated: only 6 of 20 participants had touch-screen phones (p. 940), while in 2015 this proportion would be much higher. Ten of 20 participants had archaic numeric keypads, which require far more keypresses than a full QWERTY keypad (whether touch-screen or physical). Many phones now have fairly reliable voice recognition systems, which may be less distracting than typing. It is possible that text messaging could be safer for some drivers than inferred from this study: for example, drivers who use a dashboard cradle and primarily text at red lights. Beneficial factors may even exist, such as reduced speed and increased following distance while texting.

References

Distraction in commercial trucks and buses: Assessing prevalence and risk in conjunction with crashes and near-crashes. (2010). Washington, DC: U.S. Department of Transportation, Federal Motor Carrier Safety Administration, Office of Analysis, Research and Technology. Retrieved from http://ntl.bts.gov/lib/51000/51200/51287/Distraction-in-Commercial-Trucks-and-Buses-report.pdf

Owens, J. M., McLaughlin, S. B., & Sudweeks, J. (2011). Driver performance while text messaging using handheld and in-vehicle systems. Accident Analysis and Prevention, 43, 939–947. doi:10.1016/j.aap.2010.11.019

Note: Per Florida Statute 316.305, texting while driving became illegal on 10/01/2013, but was “legal” at the time of this study (most states already have laws against driving while encumbered, reckless driving, etc., but they typically go unenforced with respect to texting, necessitating the creation of new laws against texting on a state-by-state basis).

Reaction to “A mechanical model for human attention and immediate memory” by Broadbent (1957)

Reaction to Broadbent (1957) by Richard Thripp
EXP 6506 Section 0002: Fall 2015 – UCF, Dr. Joseph Schmidt
September 15, 2015 [Week 4]

Broadbent (1957) presents a model for human attention conceptualized as a Y-shaped tube that receives balls that represent information. A flap divides the Y-connection, and various parallels between what would happen to the actual balls and to human attention and memory are proposed.

Broadbent could instead have used water flowing through a Y-connector as his analogy—the rate or constriction of flow could vary between pipes, for example. There are many analogies that could be used. Whether this is a good analogy is up for debate, but the fact that Broadbent had to attach numerous codicils (pp. 206, 208, 210) and discuss many limitations (p. 213) and conceptual problems with his model suggests it is questionable. His modified model in Figure 2 (p. 210) appears as a circuit, which represents memory as a recurrent process, but it is admittedly unwieldy and difficult—witness the author’s humorous comment that the apparatus would need to be filled with acid to replicate the disappearance of a memory item by dissolving a ball. This model might be more detrimental than useful as a teaching tool if it results in profound, lasting misconceptions. The author admits: “Certain properties of the model are likely to be misleading” (p. 213)—no kidding! I can only imagine that getting this published in 1957 was much easier than it would be now.

We are familiar with the idea of “semantically impoverished” stimuli—that stimuli such as colored boxes and abstract shapes are not as salient as real-world stimuli. When Broadbent clarifies that stimuli can bypass the Y tube “if they convey sufficiently little information” (p. 213), one wonders whether he considered the distinction between semantically rich and semantically impoverished stimuli. Given that he goes on to discuss reflexes and generalize them to “voluntary” reactions, it appears the distinction was (momentarily) lost on him. Broadbent might have been a visionary had he replaced “convey[ing] sufficiently little information” with something like “requiring sufficiently few processing resources.” The quantity of information is not always the most important part—later on the same page, Broadbent notes that decimal digits (base 10) convey far more information than binary digits (base 2), yet do not require much (or any) extra effort for our brains to remember (p. 213). Therefore, the Y tube model is grossly oversimplified—some balls may in fact be bigger than others, and some may require negligible resources.
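The arithmetic behind that point is straightforward; here is a quick back-of-the-envelope calculation (the seven-digit span is merely an illustrative figure of my own, not Broadbent’s):

```python
# Information content per digit: a decimal digit carries log2(10) ≈ 3.32 bits,
# versus 1 bit for a binary digit, yet a span of decimal digits is not
# proportionally harder to remember.
import math

bits_per_binary_digit = math.log2(2)     # 1.0 bit
bits_per_decimal_digit = math.log2(10)   # ~3.32 bits

# A seven-item span of decimal digits carries over three times the information
# of a seven-item span of binary digits.
print(7 * bits_per_binary_digit, 7 * bits_per_decimal_digit)
```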

Broadbent concedes the Y tube analogy is of “obvious absurdity” if one identifies it with the organism, rather than as a mechanical conceptualization for human attention and immediate memory (p. 213). He proposes the model is primarily for people who find the abstract theory “unintelligible”—and indeed, it may help them. However, individuals who have a rudimentary understanding of attention and memory may be better off skipping Broadbent’s paper, given that it may imbue them with gross simplifications, rather than refining their understanding.

Reference

Broadbent, D. E. (1957). A mechanical model for human attention and immediate memory. Psychological Review, 64(3), 205–215. doi:10.1037/h0047313