EME 6646 Assignment on Moral Emotions, Self-Regulation of BOLD Signals, and Monetary Rewards

Assignment 4, Part A: Individual Explanation of Rewards and Emotions
For EME 6646: Learning, Instructional Design, and Cognitive Neuroscience
By Richard Thripp
University of Central Florida
June 8, 2017

Moral Emotions

Using a task in which participants passively viewed morally charged pictures (e.g., starving children and scenes of war), Moll et al. (2002) found that such images activated the amygdala, thalamus, and upper midbrain just as images evoking basic emotions did, but that images evoking “moral emotions” additionally activated the orbital and medial regions of the prefrontal cortex, as well as the superior temporal sulcus region, areas previously known to be important for social perception and behavior. Moll et al. (2002) argue that these functional magnetic resonance imaging (fMRI) results indicate that humans automatically assign moral values to social events, and that this assignment is an important function of human social behavior.

While the prevailing paradigm has traditionally been that moral judgments are guided by reason, neuroimaging evidence has shown that emotion plays a vital role. For example, Greene, Sommerville, Nystrom, Darley, and Cohen (2001) tackled the issue by presenting participants undergoing fMRI with a battery of dilemmas spanning moral–personal (e.g., pushing a bystander off a bridge to stop a trolley that would kill five people), moral–impersonal (e.g., voting in favor of a referendum that would result in many deaths), and non-moral conditions (e.g., whether to stack coupons at the grocery store). Moral–personal dilemmas activated brain regions (i.e., the medial frontal gyrus, posterior cingulate gyrus, and angular gyrus) that were significantly less active in the other conditions. Moreover, reaction time was longer when a participant responded that an action was “morally appropriate” (a dichotomous choice between this and “morally inappropriate”) and this response was emotionally incongruent—for example, when participants said “appropriate” to sending the bystander to his or her death to stop the trolley from killing five people. The authors compared this to the Stroop test, contending that it is a similar phenomenon in that the incongruence requires extra processing time. Overall, emotions can help us understand why a majority will say it is acceptable to flip a switch that diverts the trolley, killing a bystander to save five others, while a majority will say it is unacceptable to push the bystander in front of the trolley to stop it, even though the outcome is the same. Greene et al. (2001) attribute this to the latter scenario being more emotionally salient. While the trolley problem is a philosophical paradox when considered through reason alone, adding emotion resolves it.

If moral dilemmas light up different parts of the brain, and if emotional salience is important to judging whether an issue is morally unacceptable, educators can use this to design instruction that engages moral emotions. For instance, the music industry has long argued that illegally downloading a song is no different than shoplifting the CD from Target. The former might be compared with flipping the trolley switch, while the latter is like pushing the bystander in front of the trolley—far fewer people would shoplift than illegally download a song. Casting academic integrity in a similar light could help promote ethical and prosocial behaviors among students. Marketing research implies that most people are honest only up to a point—they would not be grossly dishonest to get ahead, but if they can profit while continuing to believe they are righteous, they will do so (Mazar, Amir, & Ariely, 2008). In addition to promoting academic honesty, moral emotions can be evoked in instruction through vignettes, case studies, or pointed questions (e.g., “What would you do if you could save five people by harvesting the organs of a brain-dead 22-year-old who is an organ donor but whose family actively protests?”). Integrating these as both individual and group activities may be useful; because group activities invite going along with the group, individual completion might precede group discussion. Sadly, while Walt Disney Studios appeals to our moral emotions and emotions of all forms in its motion pictures, instructors typically leave this engagement opportunity untapped.

Self-Regulation of BOLD Signals and Monetary Rewards

Recently, Sepulveda et al. (2016) combined real-time fMRI neurofeedback (NF) with instructions to increase the blood-oxygen-level-dependent (BOLD) signal (i.e., self-regulation of brain physiology) in a between-groups study with four groups (N = 20, five per group) that received NF only (i.e., contingent feedback), NF plus motor-imagery training, NF plus monetary reward, or NF plus motor-imagery training plus monetary reward. The BOLD signal served as a proxy for “volitional control of supplementary motor area” (Sepulveda et al., 2016, p. 3155)—an ability that can improve “planning and execution of motor activity” (p. 3155) and may be important to self-regulation, learning, academic success, et cetera. Interestingly, while all groups were successful at up-regulating their BOLD signals, monetary reward produced the greatest increase, while motor-imagery training did not even yield a statistically significant enhancement. That is, the participants who were evidently the most motivated to increase their BOLD signals were those who received NF along with an on-screen dollar amount that increased in proportion to their real-time increase in BOLD signal. While the authors were careful to note that monetary rewards—by definition an extrinsic motivator—lose their effectiveness over time and thus should be used as an initial motivator that is withdrawn over time (hopefully giving way to intrinsic motivation), their discussion does not mention that this neuroimaging evidence may be important to the use of monetary rewards for academic and organizational success.
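As a concrete illustration of the reward-contingent feedback described above, here is a minimal sketch in Python of how an on-screen dollar amount might grow in proportion to real-time BOLD up-regulation; the percent-signal-change calculation and the dollars-per-percent scaling factor are assumptions for illustration, not details reported by Sepulveda et al. (2016).

```python
# Minimal sketch (hypothetical values): contingent feedback in which an on-screen
# dollar amount grows in proportion to the real-time increase in BOLD signal
# relative to a resting baseline, loosely modeled on Sepulveda et al. (2016).

def percent_signal_change(current_bold, baseline_bold):
    """Percent change of the current BOLD value over the resting baseline."""
    return 100.0 * (current_bold - baseline_bold) / baseline_bold

def update_reward(total_reward, current_bold, baseline_bold, dollars_per_percent=0.05):
    """Add reward only for positive up-regulation; the scaling factor is made up."""
    change = percent_signal_change(current_bold, baseline_bold)
    if change > 0:
        total_reward += dollars_per_percent * change
    return total_reward

# Simulated regulation block: baseline of 100 (arbitrary units), then samples.
baseline = 100.0
samples = [100.5, 101.2, 99.8, 102.0, 103.1]
reward = 0.0
for bold in samples:
    reward = update_reward(reward, bold, baseline)
    print(f"BOLD {bold:.1f} -> cumulative reward ${reward:.2f}")
```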

Monetary Rewards May Be Ineffective in Academic Settings

In contrast to Sepulveda et al. (2016), Mizuno et al. (2008) found that learning motivated by monetary rewards activated the putamen bilaterally, much as self-reported motivation for learning did, but that the intensity of activity (measured via fMRI BOLD signals) increased with higher levels of motivation for learning and not with larger monetary rewards. This may suggest that, at least in an academic context, greater monetary rewards do not increase motivation. While it did not employ neuroimaging, a study of 300 middle schoolers by Springer, Rosenquist, and Swain (2015) may be relevant. They offered no incentive, $100, or a “certificate of recognition signed by the district superintendent” (p. 453) to students who attended tutoring regularly. While the fMRI findings on monetary reward reviewed above might lead us to believe that the monetary incentive would be effective, in fact it did not differ significantly from the control condition, while the certificate of recognition was a highly effective motivator. Therefore, for academic motivation, financial rewards may be inferior to other forms of extrinsic motivation (e.g., a certificate) or to intrinsic motivation. Nevertheless, they may be a useful tool for the unimaginative instructor, particularly in contexts where a grading scheme cannot be implemented (e.g., some forms of organizational training). In a more typical academic setting, grades and “extra” credit opportunities (which, ironically, are available even to students who achieve far less than 100% on their work) may essentially take the place of what would have been monetary rewards in another setting.

References

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108. http://doi.org/10.1126/science.1062872

Mazar, N., Amir, O., & Ariely, D. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research, 45, 633–644. http://doi.org/10.1509/jmkr.45.6.633

Mizuno, K., Tanaka, M., Ishii, A., Tanabe, H. C., Onoe, H., Sadato, N., & Watanabe, Y. (2008). The neural basis of academic achievement motivation. NeuroImage, 42, 369–378. http://doi.org/10.1016/j.neuroimage.2008.04.253

Moll, J., de Oliveira-Souza, R., Eslinger, P. J., Bramati, I. E., Mourão-Miranda, J., Andreiuolo, P. A., & Pessoa, L. (2002). The neural correlates of moral sensitivity: A functional magnetic resonance imaging investigation of basic and moral emotions. Journal of Neuroscience, 22, 2730–2736.

Sepulveda, P., Sitaram, R., Rana, M., Montalba, C., Tejos, C., & Ruiz, S. (2016). How feedback, motor imagery, and reward influence brain self-regulation using real-time fMRI. Human Brain Mapping, 37, 3153–3171. http://doi.org/10.1002/hbm.23228

Springer, M. G., Rosenquist, B. A., & Swain, W. A. (2015). Monetary and nonmonetary student incentives for tutoring services: A randomized controlled trial. Journal of Research on Educational Effectiveness, 8, 453–474. http://doi.org/10.1080/19345747.2015.1017679

EME 6646 Assignment on Long-Term Potentiation, Learning Strategies, Memory Consolidation, and Sleep

Assignment 3, Part A: Individual Explanation of Learning and Memory
For EME 6646: Learning, Instructional Design, and Cognitive Neuroscience
By Richard Thripp
University of Central Florida
June 2, 2017

Long-Term Potentiation

In their 2014 literature review, Granger and Nicoll lament that long-term potentiation (LTP) has never been precisely defined. They explain that “the broadest definition is a long-lasting enhancement in synaptic strength following a brief high-frequency stimulation” (p. 1), which might be summarized as “neurons wire together if they fire together” (Löwel & Singer, 1992, p. 211). While the debate over whether long-term potentiation occurs pre- or postsynaptically continues, it is perhaps tangential to the educational ramifications. Fundamentally, the discovery of long-term potentiation in the 1970s was critical to our understanding of the mammalian brain (Teyler & DiScenna, 1987), showing us that the brain is not a static object, but that, even in the short term, neural pathways can be strengthened by mental exercise, not unlike physical exercise. This synaptic plasticity is important for many types of declarative memory (Byrne, n.d.), which, for educators, suggests that an important component of learning is repeated activation of the relevant synapses. Consequently, practice of the to-be-acquired skill should be integrated early and throughout a program of study. For example, given our knowledge of LTP, it would be inappropriate to structure a course on driving with 15 weeks of reading a textbook followed by one week of application behind the wheel. In the realm of teacher education, LTP might be accounted for by integrating field experiences early, which may also narrow the theory–practice gap (Coffey, 2010), rather than inefficiently delaying such fieldwork for a senior-year internship.
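The “fire together, wire together” summary can be made concrete with a toy Hebbian weight update; this is only a pedagogical sketch of activity-dependent synaptic strengthening, not a biophysical model of LTP, and the learning rate and activity values are arbitrary assumptions.

```python
# Toy illustration of the Hebbian idea behind LTP ("neurons wire together if they
# fire together"): a synaptic weight is strengthened whenever pre- and postsynaptic
# activity coincide. This is a pedagogical sketch, not a model of LTP mechanisms.

def hebbian_update(weight, pre, post, learning_rate=0.1):
    """Increase the weight in proportion to coincident pre/post activity."""
    return weight + learning_rate * pre * post

weight = 0.2
# Repeated co-activation (1 = firing, 0 = silent) progressively strengthens the synapse.
activity = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]
for pre, post in activity:
    weight = hebbian_update(weight, pre, post)
    print(f"pre={pre}, post={post} -> weight={weight:.2f}")
```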

Learning Strategies

The strategy used during learning is a determinant of which memory system(s) are engaged and, subsequently, the degree of success (Squire, 2004). This was discovered, in part, by inhibiting the hippocampus in rats, which aided them in navigating a maze tailored for non-declarative memory by solidifying the supremacy of the caudate nucleus. In fact, hippocampal lesions allowed the rats to perform better at this task! In humans, one can imagine a renegade researcher invasively inhibiting regions of the anterior cingulate and dorsolateral prefrontal cortices in an attempt to improve performance on the Stroop test. More practically, educators can attempt to evoke efficient learning strategies for the materials at hand. For example, trying to learn a habit while also trying to memorize the requisite steps can result in failure on both counts (Squire, 2004). Therefore, educators might specifically instruct learners to focus on repetition in one trial and memorization in another. If the task is executing a mathematical operation with a series of sub-steps, a textbook author could encourage habit learning by providing several problems with a reference sheet containing the sub-steps in view. Then, to encourage memorization, the reference sheet could be moved to a separate page requiring a page turn, with the learner explicitly directed to test his or her memorization of the requisite steps.

Memory Consolidation and the Essentiality of Sleep

While it is easily observed via EEG oscillations that normal human sleep consists of 90-minute cycles including rapid eye movement (REM) sleep and four stages of non-REM sleep, how memory consolidation occurs during sleep is less clear (Stickgold, 2005). However, that it occurs is abundantly clear, because performance on certain tasks such as finger tapping, rotation adaptation, and visual texture discrimination has been experimentally shown to be enhanced after sleep, whereas the enhancement does not occur when tested after 4 to 12 hours of wakefulness without sleep. Moreover, for some such tasks, improvement is even greater when re-tested 72 hours later rather than 24, suggesting that additional nights of sleep further enhance these forms of procedural learning. Stickgold’s (2005) literature review goes on to summarize what he dubs “converging evidence” (p. 1276) at the molecular, cellular, and higher levels that sleep benefits cell membranes, myelin, and cortical neuronal responsiveness. Further, at least in the zebra finch, songs rehearsed during wakefulness appear to continue to be rehearsed during sleep, based on detection of similar “patterns of neuronal excitation” (Stickgold, 2005, p. 1277). Therefore, strong evidence exists that several forms of procedural learning are consolidated via sleep, although the evidence for declarative memory is less conclusive.

More recently, Tononi and Cirelli (2014) have argued, with molecular, electrophysiological, and structural evidence, for the synaptic homeostasis hypothesis (SHY), which holds that “sleep is the price the brain pays for plasticity” (Tononi & Cirelli, 2014, p. 12). The essentiality of sleep is supported by neuroscience and behavioral evidence, yet students, educators, and other professionals often fail to act on it. Educators might encourage learners to get adequate sleep by directly exposing them to neuroscience research on sleep’s importance and by discouraging cramming or “all-nighters” through adequate instructional scaffolding (e.g., setting draft and format-review deadlines prior to a final submission deadline). Finally, researchers have recognized (e.g., Piffer, Ponzi, Sapienza, Zingales, & Maestripieri, 2014) that some humans are geared toward “morningness” (i.e., “early birds”) while others have a propensity toward “eveningness” (i.e., “night owls”).

As an individual with a lifelong propensity toward eveningness, I am baffled by the culture in America and elsewhere that favors early birds while mocking night owls for their purported laziness. Society is organized to confer privilege upon early birds—children are required to go to school early in the morning, businesses and government offices open early, and a majority of employment opportunities require us to wake up shortly after sunrise. Now that most people have electricity, if we are to facilitate learning, memory consolidation, and human performance in general, why not provide parallel structures and opportunities for night owls? Would it not relieve congestion on Orlando’s roads if, instead of a majority of employees working in the neighborhood of 9 a.m. to 5 p.m., workplaces could be staffed in the evening and overnight so that unused nighttime road capacity might be leveraged? What about tweens and teenagers, who would be better served if school ran from noon to 7 p.m. rather than requiring them to rise before dawn and be tired all day? In higher education, afternoon and night course offerings can cater to night owls while not necessarily being punitive toward early birds. To further improve memory consolidation, housing developments and apartment complexes might be constructed with soundproofing and electrochromic windows that use amorphous metal oxides to block light at the flip of a switch (Llordés, Garcia, Gazquez, & Milliron, 2013), facilitating daytime quiet and darkness for improved sleep, memory consolidation, and learning.

References

Byrne, J. H. (n.d.). Chapter 7: Learning and memory. Neuroscience online: An electronic textbook for the neurosciences. Retrieved from http://neuroscience.uth.tmc.edu/s4/chapter07.html

Coffey, H. (2010). “They taught me”: The benefits of early community-based field experiences in teacher education. Teaching and Teacher Education, 26, 335–342. http://doi.org/10.1016/j.tate.2009.09.014

Granger, A. J., & Nicoll, R. A. (2014). Expression mechanisms underlying long-term potentiation: A postsynaptic view, 10 years on. Philosophical Transactions of the Royal Society B, 369(1633), 1–6. http://doi.org/10.1098/rstb.2013.0136

Llordés, A., Garcia, G., Gazquez, J., & Milliron, D. J. (2013). Tunable near-infrared and visible-light transmittance in nanocrystal-in-glass composites. Nature, 500, 323–326. http://doi.org/10.1038/nature12398

Löwel, S., & Singer, W. (1992). Selection of intrinsic horizontal connections in the visual cortex by correlated neuronal activity. Science, 255, 209–212. http://doi.org/10.1126/science.1372754

Piffer, D., Ponzi, D., Sapienza, P., Zingales, L., & Maestripieri, D. (2014). Morningness–eveningness and intelligence among high-achieving US students: Night owls have higher GMAT scores than early morning types in a top-ranked MBA program. Intelligence, 47, 107–112. http://doi.org/10.1016/j.intell.2014.09.009

Squire, L. R. (2004). Memory systems of the brain: A brief history and current perspective. Neurobiology of Learning and Memory, 82, 171–177. http://doi.org/10.1016/j.nlm.2004.06.005

Stickgold, R. (2005). Sleep-dependent memory consolidation. Nature, 437, 1272–1278. http://doi.org/10.1038/nature04286

Teyler, T. J., & DiScenna, P. (1987). Long-term potentiation. Annual Review of Neuroscience, 10, 131–161. http://doi.org/10.1146/annurev.ne.10.030187.001023

Tononi, G., & Cirelli, C. (2014). Sleep and the price of plasticity: From synaptic and cellular homeostasis to memory consolidation and integration. Neuron, 81, 12–34. http://doi.org/10.1016/j.neuron.2013.12.025

Poster Presentation at ICEL: Fortifying Asynchronous Online Learning With Digitally Delivered In-Person Assessments to Leverage the Testing Effect

I am re-posting this here as a WordPress “post” but the permanent and more convenient “page” URL is thripp.com/epc-poster.

Here is our Fortifying Asynchronous Online Learning With Digitally Delivered In-Person Assessments to Leverage the Testing Effect poster by Richard Thripp, M.A.; Ronald DeMara, Ph.D.; Baiyun Chen, Ph.D.; and Richard Hartshorne, Ph.D., presented on June 2, 2017 at the 12th International Conference on e-Learning (ICEL) hosted at University of Central Florida (UCF).

This poster focuses on innovations implemented at the Evaluation and Proficiency Center at UCF’s College of Engineering and Computer Science, but these strategies could be used at many institutions.

EPC Poster

I am also making the poster available in PDF format. Feel free to share a link to this page with your colleagues.

EME 6646 Assignment on Visual Working Memory Capacity, Cognitive Load Theory, and Hearing Range

Assignment 2: Explain Sense, Perception, Attention, and Control
For EME 6646: Learning, Instructional Design, and Cognitive Neuroscience
By Richard Thripp
University of Central Florida
May 28, 2017

Visual Working Memory Capacity

Visual working memory (VWM) capacity typically refers to the number of visual objects an individual can hold in short-term memory. Twenty years ago, Luck and Vogel (1997) found that VWM capacity was not tied to individual features of visual objects, but rather the objects themselves as an integrated whole. For example, we can remember the color, orientation, and shape (“conjunctions”) of four objects no less easily than if we were tasked with remembering only their color while orientation and shape were held fixed across all four objects. In instructional design, an implication is that we can reliably ask learners to remember several details about a small set of visual objects. For example, in designing an educational game where a player has to remember the characteristics of four keys that open various doors, the player could be required to remember the shape, size, color, and shininess of the required keys when tasked with selecting them out of a pool of thirty keys of which only four are correct. (Assume it is a matching game so that the keys must be remembered only briefly and thus long-term memory and short-term decay are side issues.) However, a game designer would produce nothing but frustration for gamers by changing this task to require remembering only the shapes of 16 different keys, even though in both cases, 16 features are presented. Loading several features onto a single object allows more information to be retained in VWM.
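To make this design rule concrete, here is a minimal sketch assuming the commonly cited limit of roughly four integrated objects; the helper function and the key scenarios are hypothetical illustrations of Luck and Vogel's (1997) finding that objects, not individual features, consume VWM capacity.

```python
# Hedged sketch of the design rule implied by Luck and Vogel (1997): capacity is
# consumed by integrated objects, not by their individual features. The four-object
# limit is the commonly cited estimate; the key-matching scenarios are hypothetical.

VWM_OBJECT_LIMIT = 4  # approximate number of integrated objects held in VWM

def within_vwm_capacity(num_objects, features_per_object):
    """Features 'ride along' with their object; only the object count matters."""
    total_features = num_objects * features_per_object
    fits = num_objects <= VWM_OBJECT_LIMIT
    return fits, total_features

# Four keys, each defined by shape, size, color, and shininess (16 features total).
print(within_vwm_capacity(4, 4))   # (True, 16)  -> manageable
# Sixteen keys defined by shape alone (also 16 features total).
print(within_vwm_capacity(16, 1))  # (False, 16) -> exceeds the object limit
```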

As an aside, individuals can hold only about three to four objects in VWM, although there is a lively debate between researchers such as Luck and Vogel (2013) and Schneegans and Bays (2016) about whether a “slot” model of VWM capacity is more accurate, in which additional items cannot be remembered even if sacrificing fidelity is an option (Luck and Vogel), or whether an analog model is more accurate, in which fidelity may be sacrificed to load additional items into VWM (Bays). While the slot (“quantized”) model has a long history of experimental support, the analog (“continuous”) model has recently been gaining ground, in part due to neuroimaging advances.
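One common way the two positions are formalized (assumed here purely for illustration; the parameter values are arbitrary) is a slot model in which up to K items are stored at fixed precision and the rest are lost, versus a continuous-resource model in which a fixed precision budget is divided among all items:

```python
# Simplified, illustrative formalizations of the slot and resource accounts of VWM.
# K and J_TOTAL are arbitrary example parameters, not estimates from the cited work.

K = 4            # slot model: number of slots
J_TOTAL = 8.0    # resource model: total precision budget (arbitrary units)

def slot_model(set_size, per_item_precision=2.0):
    """Probability an item is in memory, and its fixed precision if it is stored."""
    p_in_memory = min(K / set_size, 1.0)
    return p_in_memory, per_item_precision

def resource_model(set_size):
    """Every item is stored, but precision falls as the budget is spread thinner."""
    return 1.0, J_TOTAL / set_size

for n in (2, 4, 8):
    print(f"set size {n}: slot {slot_model(n)}, resource {resource_model(n)}")
```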

Interestingly, VWM capacity differs significantly between individuals and may be stable and reliable (Xu, Adam, Fang, & Vogel, 2017). In other areas besides vision, individual differences in working memory are similarly important and are positively correlated to cognition and learning, due in part to aiding “planning, comprehension, reasoning, and problem solving” (Cowan, 2014, p. 217). Nonetheless, Cowan (2014) argues that although it may be impossible to increase learners’ working memory capacities, we can adjust our educational presentations accordingly for learners with less working memory capacity.

Cognitive Load Theory

Cognitive load theory (CLT) is arguably of fundamental importance to effective instructional design (Paas, Renkl, & Sweller, 2003). Whereas working memory alone would only allow us to deal with very simple problems, long-term memory is essential to learning complex knowledge and skills. Long-term memory contains schemas that organize information into frameworks that can be leveraged for automaticity and efficient use of working memory. For example, an accomplished sight-reading pianist can play a complex, unfamiliar piano score thanks to an iterative, years-long cycle of practice and schema building that greatly reduces the intrinsic cognitive load of the task. However, the intrinsic cognitive load of this task would simply be overwhelming for a novice, regardless of whether instruction is delivered in an effective manner that minimizes extraneous cognitive load. Intrinsic cognitive load is inherent to what is being learned and cannot be reduced by instructional design, whereas extraneous cognitive load is introduced by ineffective instructional design. Finally, germane cognitive load is the load devoted to acquiring schemas and automating them.

Extraneous and germane cognitive load can be influenced by the instructional designer. Avoiding situations where learners must divide their attention, such as between a presenter’s speech and the words he or she is displaying on projected slides (the split-attention effect), is one example of reducing extraneous cognitive load (for many others, see Mayer & Moreno, 2003). Instructional designers may further aid learning by increasing germane cognitive load through strategies that increase learner motivation and effort (see the “note” on p. 2 of Paas et al., 2003). For tasks with low intrinsic load, instruction may be inefficiently designed without noticeable consequences. Harder tasks, however, particularly those with high levels of element interactivity, have high intrinsic load. Paas et al. (2003) give the example of image-manipulation software, in which individual functions can be learned with low intrinsic cognitive load due to a low level of element interactivity, meaning that each function can be learned in isolation. Putting this knowledge together to successfully edit a digital photograph, however, has high intrinsic load, in part because of high element interactivity—the image-editing functions must be used in concert. Novices would be overwhelmed by intrinsic cognitive load if the functions were taught in concert, so they must be taught in isolation. Other examples include piano performance (new pieces are learned hands separately to reduce intrinsic load), learning to drive, et cetera.

For instructional designers, a primary consequence of CLT and related theoretical work, distilled as the expertise reversal effect (Kalyuga, 2007), is that novices and experts should not receive the same instruction. Instructing a novice on how to use Adobe Photoshop might best be accomplished one function at a time, but an intermediate or expert user may learn new techniques more effectively by editing an actual image, because intrinsic cognitive load has been tamed by prior knowledge and the associated schemas and automations. Teaching an expert about Photoshop functions in a piecewise fashion would impose too little cognitive load to be an efficient use of time, so a holistic approach may be more effective, while the exact opposite may be true for a Photoshop novice. Hence, the expertise reversal effect.

Auditory Range

Human hearing typically operates in the range of 20–20,000 Hz (hertz), but after age 20, men lose, on average, the ability to hear about one hertz per day at the upper end of this range (Gray, n.d.). According to Gray, this means a 50-year-old likely cannot hear sounds over 10 kHz (one kHz is 1,000 Hz). An amusing implication is that teenagers and young adults can use high-pitched ringtones on their phones to be alerted to phone calls or text messages without their teachers or parents knowing (Noise Help, n.d.). From trying the sample tones on the Neuroscience Online and Noise Help websites, I discovered I am able to hear tones at 15 kHz, but not at 17.5 kHz or 20 kHz, which may indicate that my hearing loss is already well underway. Another website (www.ultrasonic-ringtones.com) offers more frequency choices. There, I was able to hear the 15.8 kHz tone comfortably and the 16.7 kHz tone faintly, and I could not hear the 17.7 kHz tone at all. If I set my text-message ringtone to 16.7 kHz, I doubt I could reliably hear it, but I would notice 15.8 kHz more readily.
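A back-of-the-envelope calculation shows that the roughly one-hertz-per-day rule of thumb is consistent with the 10 kHz figure for a 50-year-old; the starting point of 20,000 Hz at age 20 and the use of 365-day years are simplifying assumptions.

```python
# Back-of-the-envelope check of the rule of thumb attributed to Gray (n.d.), assumed
# here as an average loss of about 1 Hz per day after age 20 at the upper end of hearing.

UPPER_LIMIT_AT_20 = 20_000  # Hz, nominal upper limit of young adult hearing
LOSS_PER_DAY = 1            # Hz lost per day after age 20, per the rule of thumb

def estimated_upper_limit(age_years):
    days_past_20 = max(age_years - 20, 0) * 365
    return max(UPPER_LIMIT_AT_20 - LOSS_PER_DAY * days_past_20, 0)

for age in (20, 30, 40, 50):
    print(f"age {age}: ~{estimated_upper_limit(age):,} Hz")
# Age 50 works out to roughly 9,000 Hz, consistent with the claim that a
# 50-year-old likely cannot hear sounds above about 10 kHz.
```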

The fundamental frequency of speech, the lowest frequency transmitted in the speech waveform, is typically within the 85–180 Hz range for males and the 165–255 Hz range for females (Titze, 1994). It seems curious, then, that the most sensitive range of human hearing is around 3,000–4,000 Hz (Gray, n.d.). It may also surprise readers to learn that typical voice applications such as telephones and teleconferencing software transmit only in the neighborhood of 300–3,400 Hz, which completely excludes the fundamental frequencies of human speech! In actuality, however, human speech, like many sounds, encompasses a broad waveform with many overtones, which are tones higher than the fundamental frequency. Therefore, speech sounds natural, if somewhat tinny, with the fundamental frequencies omitted. Nevertheless, frequency restrictions, along with inferior visual cues and other factors, explain why it can be harder to understand a webinar or teleconference broadcast than a face-to-face (F2F) instructional engagement. Instructors and instructional designers should consider the modality of delivery—F2F instruction engages the senses and perception more fully, while distance audiovisual instruction has limitations with respect to auditory frequencies, visual depth, transmission latency, et cetera (Anderson, Beavers, VanDeGrift, & Videon, 2003). Thus, instructors in online or hybrid modalities may need to speak more slowly and clearly than in a purely F2F modality.
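The following sketch illustrates why telephone-band speech remains intelligible despite losing its fundamental: the harmonics of a typical male fundamental (120 Hz is an assumed example value within the range cited above) still fall inside an approximate 300–3,400 Hz passband.

```python
# Illustration of why telephone-band speech (roughly 300-3,400 Hz) remains
# intelligible even though the fundamental is cut off: many harmonics (overtones)
# of a typical fundamental still fall inside the band. The 120 Hz fundamental is
# an assumed example within the male range cited above.

BAND_LOW, BAND_HIGH = 300, 3400  # Hz, approximate telephone passband

def harmonics_in_band(fundamental_hz, max_harmonic=30):
    return [n * fundamental_hz
            for n in range(1, max_harmonic + 1)
            if BAND_LOW <= n * fundamental_hz <= BAND_HIGH]

f0 = 120  # Hz, example male fundamental (itself below the passband)
passed = harmonics_in_band(f0)
print(f"Fundamental {f0} Hz is below the band, but {len(passed)} harmonics pass:")
print(passed)  # 360, 480, ..., 3360 Hz
```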

References

Anderson, R., Beavers, J., VanDeGrift, T., & Videon, F. (2003). Videoconferencing and presentation support for synchronous distance learning. Paper presented at the 33rd ASEE/IEEE Frontiers in Education Conference, Boulder, CO.

Cowan, N. (2014). Working memory underpins cognitive development, learning, and education. Educational Psychology Review, 26, 197–223. http://doi.org/10.1007/s10648-013-9246-y

Gray, L. (n.d.). Chapter 12: Auditory system: Structure and function. Neuroscience online: An electronic textbook for the neurosciences. Retrieved from http://neuroscience.uth.tmc.edu/s2/chapter12.html

Kalyuga, S. (2007). Expertise reversal effect and its implications for learner-tailored instruction. Educational Psychology Review, 19, 509–539. http://doi.org/10.1007/s10648-007-9054-3

Luck, S. J., & Vogel, E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390, 279–281. http://doi.org/10.1038/36846

Luck, S. J., & Vogel, E. K. (2013). Visual working memory capacity: From psychophysics and neurobiology to individual differences. Trends in Cognitive Sciences, 17, 391–400. http://doi.org/10.1016/j.tics.2013.06.006

Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38, 43–52. http://doi.org/10.1207/S15326985EP3801_6

Noise Help. (n.d.). The teen buzz “ultrasonic” ringtones. Retrieved from http://www.noisehelp.com/ultrasonic-ringtones.html

Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38, 1–4. http://doi.org/10.1207/S15326985EP3801_1

Schneegans, S., & Bays, P. M. (2016). No fixed item limit in visuospatial working memory. Cortex, 83, 181–193. http://doi.org/10.1016/j.cortex.2016.07.021

Titze, I. R. (1994). Principles of voice production. Englewood Cliffs, NJ: Prentice Hall.

Xu, Z., Adam, K. C. S., Fang, X., & Vogel, E. K. (2017). The reliability and stability of visual working memory capacity. Behavior Research Methods. Advance online publication. http://doi.org/10.3758/s13428-017-0886-6

EME 6646 “Explain Brain Basics” Assignment

Assignment 1: Explain Brain Basics
For EME 6646: Learning, Instructional Design, and Cognitive Neuroscience
By Richard Thripp
University of Central Florida
May 21, 2017

Magnetoencephalography

Magnetoencephalography (MEG) is a relatively recent type of non-invasive brain scan that detects brain activity via the associated magnetic fields (MEG Community, 2010b; PBS, n.d.). Although strictly speaking it is not an “imaging” technique, it nevertheless provides time-sensitive data about the activity of groups of neurons, and it can be combined with functional magnetic resonance imaging (fMRI) for spatial information (Rees, 2011). MEG is very expensive—not only does one MEG device cost millions of dollars and weigh approximately eight tons (PBS, n.d.), but it must be placed in a room with carefully designed, comprehensive magnetic shielding. Magnetic fields emitted by the brain are so faint that the earth’s magnetic field is 100 million times more powerful (MEG Community, 2010b). Consequently, it is unsurprising that few MEG machines exist—in the entire state of Florida, the only MEG machine is at the Florida Hospital for Children in Orlando (Florida Hospital, n.d.; MEG Community, 2010a).
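An order-of-magnitude check conveys the scale of the shielding problem; the field strengths below are typical textbook figures assumed for illustration (hundreds of femtotesla for evoked MEG signals, tens of microtesla for the geomagnetic field), not values taken from the cited sources.

```python
# Order-of-magnitude comparison of brain and geomagnetic field strengths.
# The specific values are assumed, illustrative figures.

BRAIN_FIELD_T = 500e-15  # ~500 femtotesla, a typical evoked MEG signal
EARTH_FIELD_T = 50e-6    # ~50 microtesla, a typical geomagnetic field strength

ratio = EARTH_FIELD_T / BRAIN_FIELD_T
print(f"Earth's field is roughly {ratio:,.0f} times stronger")  # ~100,000,000
```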

An MEG device principally consists of a helmet with about 300 sensors that use superconducting coils cooled with liquid helium to –452 °F. This array is able to detect signals from the brain with a temporal accuracy of less than 1/1,000 of a second, which was unheard of with prior technologies (MEG Community, 2010b). Thus, it can detect, in real time, both spontaneous brain activity and activity evoked by stimuli such as visual or auditory events. MEG is valuable for both medical treatment (e.g., epilepsy; Florida Hospital, n.d.) and research (e.g., cognition; Freeman, Ahlfors, & Menon, 2009). On its own, it may provide more accurate “source localization” than electroencephalography (EEG), meaning that the source of brain activity can be isolated to within a general region of the brain (MEG Community, 2010b). Nevertheless, EEG has specific uses that make it complementary to MEG (Sharon, Hämäläinen, Tootell, Halgren, & Belliveau, 2007), and in fact, MEG, EEG, and fMRI can be used in concert to give a more accurate spatial and temporal depiction of brain activity, and perhaps even to determine the antecedents of cognition (Freeman et al., 2009), albeit with significant challenges and costs.

Security, Lie Detection, and Privacy

Rees (2011) explains that the desire for neuroimaging to allow humans to “detect covert mental states or deception” (p. 17) is strong. Despite the many problems and limitations associated with current techniques, there is a prevailing assumption that these will be overcome by technical means. While the polygraph is an unreliable approach to lie detection that relies on peripheral physiological measures such as skin conductance rather than neuroimaging, neuroimaging techniques themselves are also quite susceptible to countermeasures—individuals may defeat such attempts at detecting deception with practice or training. While present attempts to deploy neuroimaging and related techniques for lie detection, predicting recidivism, and determining criminal intent lack rigor and validity (Rees, 2011), the privacy implications of deploying such technologies to improve human–computer interaction are plainly evident (Fairclough, 2009). Data about neurophysiological states can be used to make computers more responsive and useful, but they can also be leveraged to spy on or manipulate individual users, as well as to analyze users in aggregate without their consent. Therefore, Fairclough (2009) suggests that users should be given a great deal of control over the information collected and should be required to opt in to such data collection with written consent.

How Much of the Brain Can One Develop Without?

Amazingly, anomalies in brain development can be compensated for by neuroplasticity, to the extent that affected individuals may achieve a semblance of normalcy in adulthood. For example, Herkewitz (2014) summarizes the story of Michelle Mack, who was missing almost half of her brain at birth, yet graduated from high school and is now in her 40s living a satisfying life. Another case, described by Yu, Jiang, Sun, and Zhang (2015), involves a woman who has no cerebellum yet did not discover this until a hospital visit at age 24. While according to her mother she could not speak intelligibly until age 6 or walk until age 7, at her hospital visit she presented no signs of aphasia and only mild to moderately impaired speech, and she is married and gave birth to a daughter without incident. Finally, the case of Trevor Waltrip, a boy born with severe hydranencephaly who developed with a brainstem but essentially no cerebral hemispheres, is highly unusual because he lived to age 12, although blind and unable to speak (Madden, 2014). Typically, children with this condition die shortly after birth. However, although there are many popular news articles about Waltrip’s story online (www.google.com/search?q=Trevor+Judge+Waltrip), it may be dubious because there appear to be no references to it in the academic literature. Nevertheless, many other cases demonstrate the brain’s plasticity, particularly in childhood but also, to a less extreme degree, in adulthood. Therefore, it is clearly inaccurate to characterize the brain as a machine that can only deteriorate—the brain can also adapt to physical damage, and, of potentially greater importance, cognitive performance may be improved or regained through rehabilitation in a manner reminiscent of physical rehabilitation (Doidge, 2009).

References

Doidge, N. (2009). The brain: How it can change, develop and improve [Video file]. Retrieved from http://www.youtube.com/watch?v=tFbm3jL7CDI

Fairclough, S. H. (2009). Fundamentals of physiological computing. Interacting With Computers, 21, 133–145. http://doi.org/10.1016/j.intcom.2008.10.011

Florida Hospital. (n.d.). MEG: Advanced neuroimaging at Florida Hospital for Children. Retrieved from https://www.floridahospital.com/children/neuroscience/epilepsy/MEG

Freeman, W. J., Ahlfors, S. P., & Menon, V. (2009). Combining fMRI with EEG and MEG in order to relate patterns of brain activity to cognition. International Journal of Psychophysiology, 73, 43–52. http://doi.org/10.1016/j.ijpsycho.2008.12.019

Herkewitz, W. (2014). How much of the brain can a person do without? Retrieved from http://www.popularmechanics.com/science/health/a13017/how-much-of-the-brain-can-a-person-do-without-17223085/

Madden, N. (2014, September). Keithville boy born without brain dies at 12. Retrieved from http://www.ksla.com/story/26405843/keithville-boy-born-without-brain-dies-at-12

MEG Community. (2010a). Groups and jobs page. Retrieved from http://megcommunity.org/groups-jobs/groups

MEG Community. (2010b). What is MEG? Retrieved from http://megcommunity.org/what-is-meg

PBS. (n.d.). Scanning the brain: Magnetoencephalography. Retrieved from http://www.pbs.org/wnet/brain/scanning/meg.html

Rees, G. (2011, January). The scope and limits of neural imaging. In C. Blakemore et al. (Eds.), Brain Waves Module 1: Neuroscience, society, and policy (pp. 5–18).

Sharon, D., Hämäläinen, M. S., Tootell, R. B. H., Halgren, E., & Belliveau, J. W. (2007). The advantage of combining MEG and EEG: Comparison to fMRI in focally stimulated visual cortex. NeuroImage, 36, 1225–1235. http://doi.org/10.1016/j.neuroimage.2007.03.066

Yu, F., Jiang, Q.-J., Sun, X.-Y., & Zhang, R.-W. (2015). Letter to the editor: A new case of complete primary cerebellar agenesis: Clinical and imaging findings in a living patient. Brain, 138(6), 1–5. http://doi.org/10.1093/brain/awu239
