All posts by Richard Thripp

UCF student in the Education Ph.D. program. Treasurer of Port Orange Toastmasters. 24-year-old photographer, writer, and pianist.

Educational Attainment and Financial Literacy Questions from the National Financial Capability Study, 2009–2015

Here are my comments and an overview of the questions on the National Financial Capability Study (NFCS) about educational attainment and financial literacy, as they have changed across the three iterations (“waves”) of the survey (2009, 2012, and 2015). The NFCS is a survey administered nationally (by the FINRA Investor Education Foundation) to approximately 500 participants per U.S. state (about 27,000 per iteration in total, because large states and certain ethnicities are over-sampled) every three years. It began in 2009, so there have been only three iterations so far. While the raw data are not nationally representative—obviously, sampling 500 people from Alaska and 500 people from Florida grossly over-represents Alaska by proportion of population—the datasets include weighting variables to account for this at the national, state, and census-area levels. It is disappointing that the NFCS only began oversampling highly populous states in the latest iteration (2015), and only did so for New York, Texas, Illinois, and California (1,000 respondents instead of 500), but this may be because decisions about the NFCS are surprisingly political.
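To show what such weighting accomplishes, here is a minimal sketch in Python with entirely made-up respondent data and a hypothetical weight column (the actual NFCS weight variable names, which differ by wave and level, are not shown here):

```python
# Hypothetical mini-dataset: 1 = answered a literacy question correctly.
# Over-sampled respondents receive small weights; under-sampled, large ones.
responses = [1, 0, 1, 1, 0]
weights = [0.4, 2.1, 1.0, 0.7, 1.8]  # hypothetical national weights

unweighted = sum(responses) / len(responses)
weighted = sum(r * w for r, w in zip(responses, weights)) / sum(weights)

print(f"unweighted: {unweighted:.2f}, weighted: {weighted:.2f}")
```

The raw proportion correct here is 0.60, but the weighted estimate is 0.35, because the two incorrect answers came from respondents whose groups were under-sampled and thus carry larger weights.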

It is disappointing to see the lack of depth in the educational attainment question in the 2009 and 2012 surveys. Only in the latest version (2015) were options for Associate’s and Bachelor’s degrees added, while the vague “college graduate” option was removed. However, there is now no option for trade-school or certificate graduates. Moreover, making comparisons between the surveys is difficult. We can combine the two high-school graduate options in the 2012 and 2015 iterations to compare them with the single 2009 option, but it is somewhat tenuous to compare “some college” (2009 and 2012) with “some college, no degree” (2015), to consolidate “Associate’s degree” and “Bachelor’s degree” (2015) for comparison with “college graduate” (2009 and 2012), or to compare “post graduate education” (2009 and 2012) with “post graduate degree” (2015). These changes between survey iterations are not necessarily trivial. Indeed, the “tracking dataset” provided by FINRA, which includes respondents from all three survey iterations but only the questions asked in all three, omits educational attainment because of this inconsistency.

For the other questions about actual and perceived financial literacy—also known as financial capability (the two terms are fairly synonymous, though “financial literacy” remains more popular despite FINRA and the post-2011 Obama administration relabeling it “financial capability”)—the response options (not shown below, but available at the NFCS website or the Washington Post) remained identical across survey iterations. As someone interested in investigating the relationship between educational attainment and financial literacy (perceived and actual), it is disappointing that “do you think financial education should be taught in schools?” was included only in the 2012 iteration, and that “how strongly do you agree or disagree with the following statements? – I REGULARLY KEEP UP WITH ECONOMIC AND FINANCIAL NEWS” was included only in the 2009 iteration. However, it is understandable that the survey cannot be overly long, and perhaps these questions were judged to be unimportant.

Special note on the question: “Suppose you had $100 in a savings account and the interest rate was 2% per year. After 5 years, how much do you think you would have in the account if you left the money to grow?” The response options are “more than $102,” “exactly $102,” “less than $102,” and “don’t know.” As worded, the question is very easy. In fact, it may have been more interesting to center the options around $110 rather than $102, in which case the question would test understanding of exponentiation (compound interest) rather than simple addition. Nevertheless, in every iteration, 25 to 27% of respondents got this question wrong! And, as with every question, those with higher educational attainment did better. However, on some questions, such as the one on interest rates and bond prices, even postgraduates did shockingly badly—only 46% of postgraduates in the 2015 iteration correctly answered “they will fall,” and overall, a mere 28% of respondents answered correctly, with 38% answering “don’t know.” Although some of its interpretations are spurious and I disagree with using pie charts to represent such data, this blog post by “the Weakonomist,” regarding the 2012 iteration, shows how terrible the public’s financial literacy is. Of course, on the question where respondents are asked to assess their financial knowledge on a 1–7 scale, most think they are geniuses… pretty sad.
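To illustrate why $110 would be the more diagnostic anchor, a quick sketch in Python:

```python
# $100 at 2% per year, left to grow for 5 years
principal = 100.0
rate = 0.02
years = 5

simple = principal * (1 + rate * years)      # simple interest: $110.00
compound = principal * (1 + rate) ** years   # compound interest: ~$110.41

print(f"simple: ${simple:.2f}, compound: ${compound:.2f}")
```

Anyone who merely adds 2% of $100 five times lands exactly on $110, while anyone who understands compounding lands slightly above it; options centered on $110 would therefore separate the two groups, whereas $102 tests little beyond addition.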


EDUCATIONAL ATTAINMENT

 
What was the last year of education that you completed? [2009 codes]

  1. Did not complete high school
  2. High school graduate
  3. Some college
  4. College graduate
  5. Post graduate education

What was the last year of education that you completed? [2012 codes]

  1. Did not complete high school
  2. High school graduate – regular high school diploma
  3. High school graduate – GED or alternative credential
  4. Some college
  5. College graduate
  6. Post graduate education

What was the highest level of education that you completed? [2015 codes]

  1. Did not complete high school
  2. High school graduate – regular high school diploma
  3. High school graduate – GED or alternative credential
  4. Some college, no degree
  5. Associate’s degree
  6. Bachelor’s degree
  7. Post graduate degree

PERCEIVED FINANCIAL LITERACY

 
2009, 2012, 2015: How strongly do you agree or disagree with the following statements? – I AM GOOD AT DEALING WITH DAY-TO-DAY FINANCIAL MATTERS, SUCH AS CHECKING ACCOUNTS, CREDIT AND DEBIT CARDS, AND TRACKING EXPENSES.

2009, 2012, 2015: How strongly do you agree or disagree with the following statements? – I AM PRETTY GOOD AT MATH.

2009 only: How strongly do you agree or disagree with the following statements? – I REGULARLY KEEP UP WITH ECONOMIC AND FINANCIAL NEWS.

2009, 2012, 2015: On a scale from 1 to 7, where 1 means very low and 7 means very high, how would you assess your overall financial knowledge?


ACTUAL FINANCIAL LITERACY

 
2009, 2012, 2015: Suppose you had $100 in a savings account and the interest rate was 2% per year. After 5 years, how much do you think you would have in the account if you left the money to grow?

2009, 2012, 2015: Imagine that the interest rate on your savings account was 1% per year and inflation was 2% per year. After 1 year, how much would you be able to buy with the money in this account?

2009, 2012, 2015: If interest rates rise, what will typically happen to bond prices?

2015 only: Suppose you owe $1,000 on a loan and the interest rate you are charged is 20% per year compounded annually. If you didn’t pay anything off, at this interest rate, how many years would it take for the amount you owe to double?

2009, 2012, 2015: A 15-year mortgage typically requires higher monthly payments than a 30-year mortgage, but the total interest paid over the life of the loan will be less.

2009, 2012, 2015: Buying a single company’s stock usually provides a safer return than a stock mutual fund.
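For the 2015-only doubling question, the answer can be worked out directly; a sketch in Python, with the Rule-of-72 mental shortcut shown for comparison:

```python
import math

rate = 0.20  # 20% per year, compounded annually

# Exact doubling time: solve (1 + r)^n = 2 for n
exact = math.log(2) / math.log(1 + rate)  # ~3.80 years
rule_of_72 = 72 / 20                      # 3.6 years (mental shortcut)

# With annual compounding, the balance first exceeds double
# after 4 full years (1.2, 1.44, 1.728, 2.0736)
years = 0
balance = 1.0
while balance < 2.0:
    balance *= 1 + rate
    years += 1

print(f"exact: {exact:.2f} years; rule of 72: {rule_of_72} years; "
      f"whole years: {years}")
```

So the $1,000 owed doubles in roughly four years, a result most borrowers underestimate.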


MISCELLANEOUS

 
2012 only: Do you think financial education should be taught in schools?

Thoughts on Cognitive Load and the Modality Effect; Self-Regulation and Mindsets

I wrote the following discussion replies for an assignment in IDS 6504: Adult Learning, instructed by Dr. Kay Allen. The first reply is about cognitive load theory and the modality effect; the second is about self-regulation and mindsets.

IDS 6504 Assignment 6: Replies to Others
Richard Thripp
University of Central Florida
March 17, 2017


FIRST REPLY

Richard Thripp, responding to [redacted]

Question: What are strategies that can be implemented to reduce cognitive load?

General Comment: Reducing extraneous cognitive load, that is, cognitive load unrelated to the instructional materials themselves, is a worthy goal. Two of your references might be characterized as the modality effect—that presenting information both visually and auditorily can reduce cognitive load as compared to using only one modality.

Supplement: When considering cognitive load and the modality effect, one should also look at whether the instruction at hand is system-paced or self-paced. Classroom lecturing, as in the Lewis (2016) article you cited, is a classic example of system-paced instruction, because the learner cannot decouple the auditory portion of the presentation from the visual portion—these two modalities are temporally linked. This linkage is beneficial: Ginns’s (2005) meta-analysis found a strong modality effect for system-paced instruction, but a weaker one when instruction is self-paced. In self-paced instruction, the learner consumes instructional materials in one modality while having the option of referring to materials in other modalities. An example is a textbook or learning module with graphics and text, supplemented by an audio or video clip to be accessed separately. For self-paced instruction, multimodal presentation may even be worse than presenting instruction in a single modality, at least according to a study by Tabbers, Martens, and van Merriënboer (2004). This implies that temporal contiguity is essential. Therefore, instructional designers may want to be cautious about providing text-based modules with multimedia supplements. In fact, if we accept the argument of Tabbers et al. (2004), it may be better to require students to watch a video where the temporal contiguity of multimodal information is preserved (i.e., learners hear the audio that accompanies relevant text at the right time, rather than minutes or hours after reading the text in the learning module or textbook), at least with respect to cognitive load theory and the modality effect.

While I have not mentioned the cueing effect, it may be important to the modality effect if cues are linked across modes (e.g., a narrator telling the learner to look at a particular portion of a diagram). However, the cueing effect, quite often, is seen purely in the visual modality, such as highlighting or otherwise visually drawing attention to an area of a figure, graph, table, diagram, or block of text.

As an added comment, what Dr. Allen does in this course with real-time learning sessions is a great example of using system-paced instruction to leverage the modality effect. She does not read from the slides, but auditorily elaborates on the points on the slides with different words. She does not offer the slides for download, nor a text transcript of the spoken portion of the presentation. Ironically, not offering these supplements may actually be preferable to offering them; even learners who miss the real-time session must review a video-recording of it, which ensures that temporal contiguity of the instructional modalities is preserved. If slides and transcripts were offered, learners availing themselves of them would become self-paced with respect to instructional modality, which can have deleterious, or at least sub-optimal, results (Tabbers et al., 2004).

References

Ginns, P. (2005). Meta-analysis of the modality effect. Learning and Instruction, 15, 313–331. http://doi.org/10.1016/j.learninstruc.2005.07.001

Tabbers, H. K., Martens, R. L., & van Merriënboer, J. J. G. (2004). Multimedia instructions and cognitive load theory: Effects of modality and cueing. British Journal of Educational Psychology, 74, 71–81. http://doi.org/10.1348/000709904322848824


SECOND REPLY

Richard Thripp, responding to [redacted]

Question: How can instructors of adult language-learners address the issue of learners’ self-regulation so they may better manage their learning?

General Comment: Self-regulation is multi-faceted. Explaining the research on self-regulation to learners may be beneficial. Influencing learners’ mindsets is another worthy avenue. The instruction or assessment goal at hand is a factor in whether self-regulation should be prioritized or deferred.

Supplement: In their blockbuster literature review and position piece, Muraven and Baumeister (2000) contend that self-regulation is like a muscle—it is finite, can be easily depleted, and yet may also be strengthened by being frequently exercised. Explaining this to learners may improve their understanding of self-regulation and perhaps reduce inappropriate self-blame. Moreover, learners’ personal situations and an educator’s present goals are important. During instruction and formative assessment, encouraging self-regulation among learners may be beneficial. However, allowing learners to exhibit self-regulation by making all assignments and assessments due on the last day of the semester may have profoundly negative results for learners who fail to self-regulate; instead, staggered deadlines can reduce learners’ self-regulatory burdens. Further, educators and institutions arguably should reduce the need for self-regulation among learners who are going through transitions or already have a lot of self-regulatory burdens. For instance, the self-regulation required of doctoral candidates may be foreign and overwhelming, which is a contributory factor toward the undesirable outcome of doctoral attrition (Bair & Haworth, 1999). In response, universities might mandate format reviews, committee meetings, and draft deadlines to reduce doctoral candidates’ reliance on self-regulation.

Another important factor is mindset—whether the learner has a growth mindset (incremental theory of intelligence), meaning they believe they can improve their abilities with effort, or a fixed mindset (entity theory of intelligence), meaning they believe their abilities in a particular domain, or in general, cannot be increased through effort (Thripp, 2016). In an extensive meta-analysis, Burnette, O’Boyle, VanEpps, Pollack, and Finkel (2013) found that having a growth mindset predicted superior self-regulation. Growth mindset can be easily taught through brief instructional modules advocating the brain’s plasticity and potential for growth (Paunesku et al., 2015). Such interventions may have collateral benefits to self-regulation. Efforts should be made by educators to demystify important concepts, such as mindsets and self-regulation, among their learners. Then, learners may achieve metacognitive awareness, becoming empowered to recognize and adjust for their human limitations as a step toward truly taking control of their educations.

References

Bair, C. R., & Haworth, J. G. (1999, November). Doctoral student attrition and persistence: A meta-synthesis of research. Paper presented at the meeting of the Association for the Study of Higher Education, San Antonio, TX.

Burnette, J. L., O’Boyle, E. H., VanEpps, E. M., Pollack, J. M., & Finkel, E. J. (2013). Mindsets matter: A meta-analytic review of implicit theories and self-regulation. Psychological Bulletin, 139, 655–701. http://doi.org/10.1037/a0029531

Muraven, M., & Baumeister, R. F. (2000). Self-regulation and depletion of limited resources: Does self-control resemble a muscle? Psychological Bulletin, 126, 247–259. http://doi.org/10.1037/0033-2909.126.2.247

Paunesku, D., Walton, G. M., Romero, C., Smith, E. N., Yeager, D. S., & Dweck, C. S. (2015). Mind-set interventions are a scalable treatment for academic underachievement. Psychological Science, 26, 784–793. http://doi.org/10.1177/0956797615571017

Thripp, R. X. (2016, April 21). The implications of mindsets for learning and instruction. Retrieved from http://thripp.com/2016/05/mindsets-education-lit-review/

The College Graduate’s “Late Start” Earnings Disadvantage


This is a simplified, contrived example for illustrative purposes.

Below, we have the 2017 IRS tax brackets for single individuals (for taxes filed in 2018). For this example, I’ll pretend income stays constant and the tax brackets remain the same over an entire working life.

2017 IRS Tax Brackets for Singles (only the brackets relevant to this example):

  10% on taxable income up to $9,325
  15% on taxable income from $9,326 to $37,950
  25% on taxable income from $37,951 to $91,900

Let’s pretend a high-school graduate earns $37,950 per year of taxable income for 47 years: Age 18 to 64. Let’s pretend a college graduate spends seven years in college, perhaps earning a graduate degree, earning no taxable income during this time, but then earns $69,591.25 per year of taxable income for 40 years: Age 25 to 64. This figure is contrived to result in the college graduate earning exactly $1 million more taxable income than the high-school graduate: $2,783,650 vs. $1,783,650. The college graduate’s $1 million earnings advantage is a common talking point among marketers and educators.

$37,950.00 × 47 years = $1,783,650 [High-school graduate]
$69,591.25 × 40 years = $2,783,650 [College graduate]

(Note: We are just considering taxable income in these examples. The standard personal deduction for 2017 is $6350, so in this example, the high-school graduate is actually earning $44,300 per year and the college graduate is actually earning $75,941.25 per year, assuming both do not itemize.)

Although the college graduate earns $1 million extra, $31,641.25 per year of the college graduate’s taxable income is in the 25% tax bracket, while all of the high school graduate’s taxable income is in the 15% tax bracket or below.

The high school graduate’s taxable income is taxed at 10% on the first $9325 and 15% on the next $28,625.

The college graduate’s first $37,950 of taxable income is taxed the same way as the high school graduate’s. However, the additional taxable income ($31,641.25) is taxed at 25%.

In each year, the high-school graduate pays $5226.25 of federal income taxes, or $245,633.75 over 47 years, which is 13.77% of his/her lifetime taxable income.

In each year, the college graduate pays $13,136.56 of federal income taxes, or $525,462.50 over 40 years, which is 18.88% of his/her lifetime taxable income.

This means that while the college graduate earned $1 million more taxable income than the high-school graduate, after federal income taxes, the college graduate netted only $720,171.25 more. This is the college graduate’s “late start” earnings disadvantage. While it is usually subtler than in this contrived example, it is usually present. There is a high cost for failing to saturate the 10% and 15% tax brackets in any calendar year, and a high cost for earning more.
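The arithmetic above can be reproduced in a few lines; a minimal sketch in Python, using the three 2017 single-filer brackets employed throughout this example:

```python
# 2017 single-filer brackets relevant here: 10% up to $9,325,
# 15% up to $37,950, 25% above that (neither income reaches higher brackets)
def federal_tax(taxable):
    tax = 0.10 * min(taxable, 9325)
    tax += 0.15 * max(0.0, min(taxable, 37950) - 9325)
    tax += 0.25 * max(0.0, taxable - 37950)
    return tax

hs_annual = federal_tax(37950.00)    # $5,226.25 per year
col_annual = federal_tax(69591.25)   # $13,136.56 per year

hs_lifetime = hs_annual * 47         # $245,633.75 over 47 years
col_lifetime = col_annual * 40       # $525,462.50 over 40 years

extra_gross = 69591.25 * 40 - 37950.00 * 47              # $1,000,000 more earned
extra_net = extra_gross - (col_lifetime - hs_lifetime)   # $720,171.25 more kept

print(f"extra earned: ${extra_gross:,.2f}; extra kept: ${extra_net:,.2f}")
```

The $279,828.75 gap between the two lifetime tax bills is exactly the "late start" penalty described above.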

Because the college graduate had no taxable income during his/her seven years of college, he/she was missing out on saturation of the lower tax brackets in these years. He/she could have been earning taxable income and enjoying a lower tax bracket on this income if his/her earnings were “spread out,” so to speak, rather than concentrated in a lesser number of lucrative years once the college education had been completed.

Moreover, the college graduate was prevented from contributing to tax-advantaged retirement accounts during Ages 18–24, because earned income is required to make such contributions. This is a tremendous loss. Just by itself, contributing the current maximum of $5500 to a Roth IRA during Ages 18–24 would total $38,500. If this Roth IRA is invested, rather aggressively, in an S&P 500 or total-market index fund, it will probably double every ten years, even adjusting for inflation. This is potentially a 16-fold increase at Age 65, to $616,000. The high-school graduate could contribute to the Roth IRA during Ages 18–24, paying only 15% federal income taxes in these years, which totals $5775 of taxes. Because Roth IRAs are tax-exempt, tax is paid when the money goes in, but not when it comes out. All $577,500 of inflation-adjusted gains would be tax-exempt. These earnings would be in addition to what both the high-school or college graduate could potentially earn from contributing to tax-advantaged retirement accounts at Age 25 and beyond.
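The Roth IRA figures can be checked the same way; this sketch deliberately mirrors the simplifications above (a flat $5,500 annual limit for Ages 18–24 and a "doubles every ten years" inflation-adjusted return, i.e., 16-fold over roughly four decades):

```python
contribution = 5500            # annual Roth IRA limit assumed above
years_contributing = 7         # Ages 18 through 24
principal = contribution * years_contributing  # $38,500 contributed

value_at_65 = principal * 2 ** 4     # four doublings -> $616,000
tax_exempt_gain = value_at_65 - principal      # $577,500, never taxed
tax_paid_up_front = 0.15 * principal           # $5,775 at the 15% rate

print(f"${principal:,} grows to ${value_at_65:,}; "
      f"${tax_exempt_gain:,} of gains are tax-exempt")
```

In reality the first contributions compound longer than the last ones, so treating the whole $38,500 as doubling four times is a rough average, not an exact projection.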

Although the college graduate could have, while in college, worked a few months each year to achieve $5500 of earned income and then contributed the maximum to a Roth IRA, this does not compensate for the previously discussed tax bracket differential. Also, college graduates typically accumulate student-loan debts and would have to take additional student loans to be able to contribute to a Roth IRA. (Taking student loans, if they are below perhaps 10% APR, to contribute the annual maximum to a Roth IRA in a whole-market index fund is actually a great idea, at least from a mathematical, non-psychological perspective. Tax-advantaged retirement contributions are so valuable that the APR of the loan required to make them can exceed the stock market’s average annual return-on-investment [ROI] and they can still be worthwhile. Debt hawks like Dave Ramsey totally ignore this point.)

Finally, there is a lot of collateral damage, so to speak, with achieving a higher income. Higher earners may become ineligible for healthcare subsidies such as the advance premium tax credit (APTC), ineligible for the earned income tax credit (EITC), ineligible for other need-based aid, and subjected to higher income taxes at the state and local level as well.

Due to the time required to acquire specialized knowledge and expertise (whether actual or perceived expertise), college graduates, financially, are late bloomers. This “late start” has substantial mathematical and tax-related costs. Therefore, a comparison of nominal dollars earned in one’s lifetime may present a rosy picture of college’s ROI.

Pedagogical Implications of the Testing Effect, Working Memory

I wrote the following for an assignment in IDS 6504: Adult Learning, instructed by Dr. Kay Allen. I chose the testing effect and cognitive load theory because of my interest in these constructs and their pedagogical importance.

IDS 6504 Assignment 6
Richard Thripp
University of Central Florida
March 8, 2017

1. Theory and Construct – Cognitive Information Processing – The Testing Effect

2. First Implication for Instruction – Testing learners’ ability to recall (“retrieval practice”) improves learning and assessment outcomes by strengthening both retrieval ability and knowledge encoding.

3. Question – When should teachers and trainers implement retrieval practice to engage the testing effect?

4. Answer – My claim that the testing effect may even improve knowledge encoding sounds audacious to the uninitiated, but is being borne out by recent research—Karpicke and Blunt (2011), in a statement that sounds more like synaptic pruning than an educational phenomenon, propose that “retrieval practice may improve cue diagnosticity by restricting the set of candidates specified by a cue to be included in the search set” (p. 774). That is to say, the testing effect does not so much increase the number of encoded features as improve the lucidity of the existing ones, somewhat like tracing over a pencil sketch in pen. For closed-book assessments, retrieval practice has been shown to be much more effective than repeated study of learning materials, if the exam is given some time after the last study session (in Roediger & Karpicke, 2006, the testing effect was apparent two days and a week later, but not five minutes later). Teachers and trainers should augment their lessons with retrieval practice activities early and often, even for complex materials (Karpicke & Aue, 2015). Simply re-reading a textbook is not enough. Even teachers who implement elaborative learning activities are leaving a great deal of potential learning gains on the table if they do not engage the testing effect through retrieval practice (Karpicke & Blunt, 2011). One of the few times retrieval practice may not be useful is immediately before an exam (i.e., the five-minute condition in Roediger & Karpicke, 2006). Giving yourself flashcard quizzes while waiting for the exams to be passed out is probably not very useful, perhaps because there simply is not enough time for the testing effect to incubate at this point.

5. References

Karpicke, J. D., & Aue, W. R. (2015). The testing effect is alive and well with complex materials. Educational Psychology Review, 27, 317–326. http://doi.org/10.1007/s10648-015-9309-3

Karpicke, J. D., & Blunt, J. R. (2011). Retrieval practice produces more learning than elaborative studying with concept mapping. Science, 331, 772–775. http://doi.org/10.1126/science.1199327

Roediger, H. L., & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention. Psychological Science, 17, 249–255. http://doi.org/10.1111/j.1467-9280.2006.01693.x

6. Specific Application – One specific application, employed by Dr. Kay Allen at the University of Central Florida in such courses as EDF 6259: Learning Theories Applied to Instruction and Classroom Management, and IDS 6504: Adult Learning, is to give learners multiple-choice quizzes during lectures. This retrieval practice may aid long-term retention and retrieval ability, particularly for learners who read the textbook, modules, or other supporting materials prior to attending the lecture or web conference.


7. Theory and Construct – Cognitive Load Theory – Working Memory and Cognitive Efficiency

8. Second Implication for Instruction – Instruction should be designed to accommodate the learner’s working memory capacity by reducing or eliminating the need to hold information in working memory unnecessarily. This is just one step toward designing instruction with cognitive efficiency in mind.

9. Question – How should instructional designers account for working memory capacity in multimedia learning?

10. Answer – Multimedia learning activities should be designed to avoid cognitive overload for the target audience (Mayer & Moreno, 2003). If the target audience is learners who are already experts in the field of study at hand, obviously, learning activities that would produce substantial cognitive overload for novices might become viable. Cognitive efficiency, or “qualitative increases in knowledge gained in relation to the time and effort invested in knowledge acquisition” (Hoffman, 2012, p. 133), is arguably a worthy consideration—the time and resources available to learners and instructors are perennially constrained. Instruction that exceeds the learner’s working memory capacity most commonly results in cognitive inefficiency, not unlike a computer running out of random-access memory and being forced to “swap” information to a hard disk that is orders of magnitude slower. Therefore, instructional designers should not only consider their target audience(s), but develop their multimedia materials with good pedagogy that transcends the target audience. For example, expecting learners to memorize a lengthy number or sentence and then enter this information on a different screen is appropriate for neither novices nor experts (except in the rare case that the instructional goal is short-term retrieval practice). Instead, the learning activity should be designed so the learner can view this information while simultaneously entering it into a different area or application. In a similar vein, multimedia learning should employ techniques such as segmenting, pretraining, signaling, and weeding to avoid extraneous cognitive load and optimize learning-relevant cognitive load (Mayer & Moreno, 2003), thereby avoiding cognitive or working-memory overload and improving cognitive efficiency.

11. References

Hoffman, B. (2012). Cognitive efficiency: A conceptual and methodological comparison. Learning and Instruction, 22, 133–144. http://doi.org/10.1016/j.learninstruc.2011.09.001

Mayer, R. E., & Moreno, R. (2003). Nine ways to reduce cognitive load in multimedia learning. Educational Psychologist, 38, 43–52. http://doi.org/10.1207/S15326985EP3801_6

12. Specific Application – If you are designing multimedia training that requires interacting with external computer programs, it is not appropriate for the training to occupy the entire computer monitor. The learner should be able to resize the training window, so that he or she can position other program windows next to it, avoiding unnecessary working-memory usage and the split-attention effect (Mayer & Moreno, 2003). Similar situations should be avoided within the multimedia training itself. For example, in IBM’s Statistical Package for the Social Sciences (SPSS), there are many instances where it is impossible to access certain information about the data-set at hand without closing a statistical options menu or dialog box—even when that information is essential to the task being performed in the menu or dialog box. This is a prime example of poor design that fails to consider cognitive load theory, working memory, or any kind of efficiency besides the convenience of the software’s programmers and developers.

13. Specific Application – Supplement – If the prior application is difficult to understand, here is an easy example: you have received a voicemail on your smartphone in which the caller speaks a call-back number that differs from his or her caller ID number. Without an external tool such as pen and paper, it is impossible to record this number in your phone while listening to it. Consequently, you are forced to hold the number in working memory and then enter it into your contacts or dialing app. If you are familiar with the area code, remembering seven digits is an easy task, but if the area code is unfamiliar, attempting to hold 10 digits in working memory may easily exceed your capacity. Either way, from a design standpoint, this is a needlessly inefficient situation.

Task Analysis Comparison for Calculation of Net Worth

I wrote the following paper for my coursework in EME 7634: Advanced Instructional Design, instructed by Dr. Atsusi Hirumi. Net-worth calculation was my chosen topic, due to my persistent interest in financial education.

I am also making this paper and companion slides available for download. The companion slides are not included in the paper. They were made two weeks before I wrote this paper, prior to conducting the actual task analyses.

Download paper as Microsoft Word 2016 document
Download paper as PDF
Download companion slides as Microsoft PowerPoint 2016 file
Download companion slides as PDF

My work should only be used appropriately and I should be credited.


Task Analysis Comparison for Calculation of Net Worth
Richard Thripp
University of Central Florida
March 1, 2017

Calculating one’s net worth is a vital part of financial literacy (French & McKillop, 2016). Tallying the value of one’s assets and debts improves understanding of one’s financial situation. Although at first, this process may seem simple, appraising one’s assets is a complex issue, and even remembering all of one’s possessions and liabilities may be difficult. Therefore, net-worth calculation seems a suitable instructional situation to analyze. For this portfolio analysis, I am applying three alternative analysis techniques that were included in Jonassen, Tessmer, and Hannum’s (1999) handbook—procedural analysis, critical-incident analysis, and case-based reasoning (CBR). The former two are differentiated by their focus on overt elements and underlying methods, respectively, while CBR’s status as a task-analysis method is tenuous and its utility in this situation is marginal—it is included here for demonstration purposes.

Procedural Analysis

This type of analysis is geared toward assembly lines and other easily observable tasks. However, it can be used to describe cognitive activities if they are overtly observable, and when extended with flowcharting, can even describe relatively complex decision-making processes.

The following analysis is for the net-worth calculation task, based on the steps described by Jonassen et al. (1999, pp. 47–49):

  1. Determine if the task is amenable to a procedural analysis. Listing assets and liabilities, looking up their values, and sometimes, appraising values are overt actions and can be conceived as a series of steps. However, recalling all relevant items and appraising values can require covert cognitive processes in some cases, so procedural analysis does not capture everything required for this task.
  2. Write down the terminal objective of the task. “Calculates their net worth by estimating and tallying the values of their real assets and liabilities.” Note that this task excludes analyses of liquidity, cash flow, monthly expenses, and interest rates on debts, which are also important components of one’s financial situation.
  3. Choose a task performer. I am the performer for this task. I achieved competence in this task three years ago. If the training is for novices, Jonassen et al. (1999) say the flowchart should be based on someone who has only achieved expertise recently, to avoid “an idiosyncratic sequence” (p. 47). For this task, Investopedia’s Net Worth Calculator (www.investopedia.com/net-worth) was examined to help guide the analysis. Additionally, based on my knowledge of personal finance, I accounted for a variety of common financial situations (e.g., marriage, retirement funds, etc.).
  4. Choose a data-gathering procedure. I took notes as I silently executed the task.
  5. Observe and record the procedure. I made a text-based list of tasks before starting, and opted to construct a flowchart while executing the net-worth task.
  6. Review and revise outline. This step was skipped because I did not create an outline.
  7. Sketch out a flowchart of the task operations and decisions. See Figure 1. In constructing this flowchart, it was readily apparent that a complete flowchart would be “cumbersome in detail” (Jonassen et al., 1999, p. 53). Consequently, I constructed the flowchart at an abstracted level that condenses or generalizes many steps. For example, Item 210: “Cash equivalent asset or debt?” actually applies to a host of items including bank accounts, taxable investment accounts, mortgages, student and auto loans, and credit card debts. Item 120: “Recall and list real assets and liabilities …” implies the learner will list assets and debts as separate line items (e.g., house and mortgage would be listed separately). These details and others are omitted from the flowchart to prevent it from becoming overwhelming and unwieldy. At Item 200, a foreach loop is used to iterate over the array (list) of assets and debts, similar to the foreach construct in PHP, a popular web scripting language.
  8. Review the procedural flowchart. This was done during its construction.
  9. Field-test the flowchart. I compared the flowchart to Investopedia’s Net Worth Calculator (www.investopedia.com/net-worth) to see whether it could fit the same situations. The categories of assets and liabilities on this calculator all fit into items on the flowchart. A net-worth spreadsheet is more versatile than Investopedia’s calculator because it can be saved, amended, and reused.


Figure 1. Procedural-analysis flowchart for net-worth calculation task.

Critical-Incident Analysis

This type of analysis involves interviewing subject-matter experts (SMEs) to gain a realistic understanding of the task at hand, including its important elements (Jonassen et al., 1999). Interview or survey data from SMEs must be culled to remove noncritical elements, to focus on the required behavior, and to arrange tasks by importance (Flanagan, 1954). You can also ask your SMEs to arrange tasks by importance (Jonassen et al., 1999).

Continue reading Task Analysis Comparison for Calculation of Net Worth