Understanding opportunity cost is pivotal to financial literacy

Opportunity cost, the value of the best alternative forgone when making a particular choice, is typically seen as more applicable to economics than to personal finance. However, I believe understanding opportunity cost is central to being financially literate, and such understanding is sorely lacking among the general public. For example, most people are not aware of the magnitude of the “late-start” opportunity cost of obtaining a four-year degree. Many psychology doctoral graduates are overwhelmed by debt and wish they had not stayed in school so long. While other factors are partly responsible, on the whole, college attendees do not recognize the sky-high value of their late teens and twenties, nor the opportunity cost of their decision. Moreover, delay-discounting research shows that people frequently overvalue rewards now versus in the future, particularly if they have addictive dispositions. In part, this may be due to a lack of understanding of, or consideration for, opportunity cost.

Research on consumer behavior shows that perceptions of value are often unduly influenced by coupons, promotions, and advertising. For example, an individual who knows that brand-name Tylenol and generic acetaminophen are the same might buy Tylenol with a coupon, even though the price is still higher than that of the generic counterpart. The coupon is seen as a savings, when in fact it induced a purchase that was actually more expensive.

Retail gas prices are highly visible, and are given undue weight by consumers. The same consumer who purchased Tylenol might drive out of his/her way to save a few cents per gallon on gas, or might experience psychological distress at observing a lower price at another gas station subsequent to fueling up. However, the opportunity cost of the price difference is almost certainly inconsequential. In fact, the unhappy feelings themselves are more costly, and are antithetical to rational choice theory, because they are irrational and counterproductive. An understanding of opportunity cost can make this irrationality explicitly visible.

Human behavior when receiving a “windfall gain,” the unexpected acquisition of wealth that feels unearned, is a premier example of failure to understand opportunity cost. The opportunity cost of spending the windfall money is identical to the opportunity cost of spending any other money. However, the $3,000 received as an IRS tax refund is spent more easily than the $3,000 earned day-to-day, as if spending the former has less opportunity cost than the latter. Not so! Amazingly, many people fail to understand opportunity cost even when the windfall is merely their own money that was withheld, interest-free, and is now being returned to them.

Money or items of value that are received for “free” are free only when received, but not when spent. Travel hackers who gain “free” vacations via credit card sign-up bonuses may fail to recognize that while acquiring the rewards was trivial, the opportunity cost of using them for travel is whatever could have been received by cashing them in or selling them to mileage brokers (albeit with varying levels of risk, which should also be factored into one’s valuation). Gift card recipients spend lavishly, even when they would self-flagellate for making identical purchases with money they “earned.” However, the opportunity cost of spending money one received as a windfall or gift is usually no different from the opportunity cost of spending “earned” money.

Not only purchasing decisions but also time-usage decisions have opportunity costs. The opportunity cost of driving, for instance, is much greater than the cost of gas—it also encompasses maintenance, depreciation, and insurance on one’s vehicle, time spent driving, and risk of bodily harm. If one can earn $50 per hour in one’s area of expertise, the opportunity cost of doing one’s own secretarial or housekeeping work is quite substantial.

Investors who reject the risk of stocks for the “safety” of bonds or Treasury bills do so at tremendous opportunity cost. In fact, over long time periods (e.g., over 20 years), the risk of stocks diminishes so much that, historically, one was far more likely to lose money in inflation-adjusted terms by picking the “safe” investments. Pandering to one’s psychological shortcomings comes at immense expense.

If an employer offers a 401(k) or IRA match, the opportunity cost of not taking advantage is staggering. Putting $50 per week into such an account at Age 25, which will immediately be doubled by your employer and feasibly may double every 10 years in the market even adjusting for inflation, can be equivalent to $1,600 in inflation-adjusted, non-taxed dollars at Age 65! Even if we are conservative and halve this to $800, statistically, as an American, you are very likely to live past 65 and still need money at that age. Nevertheless, many young people “need” this money to make ends meet, without even understanding the raw deal they have given themselves by not contributing.
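The arithmetic above can be sketched in a few lines of Python. The 100% match and the double-every-decade growth rate are the essay’s own assumptions, not guarantees, and the helper function name is purely illustrative:

```python
def future_value_of_weekly_contribution(contribution, start_age=25, end_age=65,
                                        employer_match=1.0, doubling_period=10):
    """Follow one week's contribution: matched immediately by the employer,
    then (hypothetically) doubling every `doubling_period` years in the market."""
    amount = contribution * (1 + employer_match)  # $50 becomes $100 with a 100% match
    doublings = (end_age - start_age) / doubling_period  # 40 years = 4 doublings
    return amount * 2 ** doublings

print(future_value_of_weekly_contribution(50))  # 1600.0
```

Skipping one $50 contribution at Age 25 therefore forgoes roughly $1,600 of (hypothetical, inflation-adjusted) retirement money, which is the opportunity cost the text describes.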

Understanding and applying the principle of opportunity cost can literally be the difference between becoming a millionaire and becoming a pauper. In the Jump$tart Coalition’s National Standards in K–12 Personal Finance Education, 4th edition, it is mentioned only as “every investing decision has alternatives, consequences and opportunity costs” (4th grade knowledge statements; p. 24) and “every spending and saving decision has an opportunity cost” (8th grade additional knowledge statements; p. 8). Moreover, most K–12 teachers do not even understand “technical” topics such as opportunity cost, let alone have the ability to teach them. What a pity.

Tech Insights for Educators #2: The nature of digital data

What is digital data? Mainly, data that is represented discretely—that is, in steps—rather than continuously. For example, while a mercury thermometer can represent infinitesimally small variations in temperature, a digital thermometer is limited to displaying one of a fixed set of values. While a more expensive, more accurate thermometer might display several decimal places (e.g., 62.341 degrees), it still cannot be as continuous as the analog equivalent.

When a continuous, analog equivalent is available, why would we want to limit ourselves by representing a phenomenon digitally? There are actually many reasons! Digital data can be more compact, transmittable, faithfully reproducible, duplicable, and losslessly manipulable. For instance, a set of photographic prints takes up a lot of physical space, cannot be transmitted electronically, cannot be reproduced without loss of fidelity, is not easily duplicated, and cannot be manipulated without loss of data. A set of digital photographic files can be stored in as small a space as a microSD memory card (the size of a fingernail), can be easily transmitted over networks, including the Internet, can be reproduced with high fidelity, can be easily duplicated via digital copying, and can be manipulated easily and without data loss (e.g., by working on a copy).

In common parlance, a digit can be any of 10 values: 0, 1, 2, 3, 4, 5, 6, 7, 8, or 9. However, when we talk about digital data, we are almost always talking about binary data. A bit, or binary digit, can only take on two values, represented as (0) zero (“off”) and (1) one (“on”). Bits are the building blocks of all modern computing. Even something as complex as a high-definition motion picture or immersive, interactive video game can be represented, stored, and processed as billions of bits.

Bits are organized into groups of eight, called bytes (historically, a “byte” sometimes had a different number of bits, but eight has long been the universal standard). When you see storage capacity listed for a USB flash drive, optical disc, hard or solid-state disk drive, smartphone, et cetera, it is listed in bytes. Because a byte has eight bits, it can take on one of 256 values. That is, the number of potential combinations for a byte is 2^8, which is 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 = 256, encompassing 00000000, 00000001, 00000010, … all the way to … 11111111.
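You can verify the 2^8 = 256 figure, and see a byte’s bit patterns, in a couple of lines of Python:

```python
n_bits = 8
combinations = 2 ** n_bits  # every bit doubles the number of combinations
print(combinations)  # 256

# The first few and the last bit patterns of a byte:
for value in (0, 1, 2, 255):
    print(format(value, "08b"))
# 00000000
# 00000001
# 00000010
# 11111111
```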

This means a byte is enough space to store a typical character of text—for instance, the word “byte” can be represented by four bytes, one for each letter. You would need 26 different combinations to store all letters of the alphabet. If you add case sensitivity, you have to double this to 52 to be able to represent both “a” and “A,” “b” and “B,” et cetera. If we add the digits 0–9, we now need 62 combinations. With 256 available, this leaves plenty of room for common symbols and punctuation. While some characters require more than one byte to represent—256 combinations are not enough once you consider the vast range of symbols, typographical marks, and diacritical marks—eight bits is enough to represent most English text. In more complex text-editing environments (e.g., Microsoft Word), additional bytes are employed to represent other attributes such as font type, font size, and text style (e.g., bold, italics, underline).
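A quick sketch of the one-byte-per-character idea, using Python’s built-in character codes (which, for plain English text, fit comfortably in a single byte):

```python
word = "byte"
codes = [ord(ch) for ch in word]  # one numeric code per letter
print(codes)                      # [98, 121, 116, 101]

# Encoded as ASCII, four letters really do occupy four bytes:
print(len(word.encode("ascii")))  # 4

# Case sensitivity requires distinct codes for "a" and "A":
print(ord("a"), ord("A"))  # 97 65
```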

If you have an essay of 5000 words in a simple text-editing environment, with an average word length of five characters, and, if we are generous and add two characters per word for spaces, line breaks, and punctuation, this gives 5000 × 7 = 35,000 bytes, or 280,000 bits. The 1981 Hayes Smartmodem could transmit 300 bits per second, so in 1981, our essay would take about 280,000 / 300 = 933 seconds to transmit (that is, just under 16 minutes). At the end of the dial-up era, transmission speeds in the United States improved to about 53,000 bits per second, which means our essay could be transmitted in just over five seconds. Modern Internet connections are asymmetric, meaning they download (receive) data faster than they can transmit (“send,” “upload”) data. As of 2013, the average United States Internet user can download 8,700,000 bits per second, and perhaps transmit 1,000,000 bits per second. Therefore, our 5000-word, 280,000-bit essay can now be transmitted in only 0.28 seconds! If we add time for network latency, which is basically limited by the speed of light, we can still typically transmit our essay in under a second. This would be simply impossible if the essay were represented as text on physical paper.
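The transmission-time arithmetic above can be reproduced directly; the essay size and link speeds are the ones given in the text:

```python
def transmit_seconds(n_bits, bits_per_second):
    """Idealized transmission time, ignoring latency and protocol overhead."""
    return n_bits / bits_per_second

essay_bits = 5000 * 7 * 8  # 5,000 words x 7 bytes/word x 8 bits/byte
print(essay_bits)                               # 280000
print(transmit_seconds(essay_bits, 300))        # ~933 s (1981 Hayes Smartmodem)
print(transmit_seconds(essay_bits, 53_000))     # ~5.28 s (late dial-up)
print(transmit_seconds(essay_bits, 1_000_000))  # 0.28 s (1 Mb/s upstream)
```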

When talking about digital data, because we deal with such large numbers, it is necessary to introduce metric prefixes for ease of discussion and comprehension. That is, we talk about bytes and bits with prefixes that multiply them by factors of a thousand (“kilo”—kilobyte, kilobit), a million (“mega”—megabyte, megabit), a billion (“giga”—gigabyte, gigabit), or a trillion (“tera”—terabyte, terabit). Therefore, a megabit, commonly written as Mb or Mbit, is 1,000,000 bits (125,000 bytes). A megabyte, commonly written as MB, is 1,000,000 bytes (8,000,000 bits). Note that the lowercase “b” indicates a bit, while an uppercase “B” indicates a byte, which is eight bits.

Typically, network transmission speeds are discussed in bits, while storage capacity is discussed in bytes. A common Internet connection speed is asymmetric, with 10 Mb/sec downstream and 1 Mb/sec upstream, meaning that 10 Mb (1.25 MB) of data can be downloaded (received) per second, and 1 Mb (125 KB) of data can be uploaded (transmitted) per second. The Samsung Galaxy S8 smartphone comes with 64 GB of internal nonvolatile storage, meaning that it can store 64 billion bytes (512 billion bits). The latest microSD memory cards can reliably store 256 GB in an area smaller than a thumbnail, which is 2.048 trillion bits (2.048 Tb)!
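Because mixing up bits (b) and bytes (B) is such a common error, a small conversion helper is worth sketching; the device capacities are the ones mentioned above, using decimal (SI) prefixes throughout:

```python
BITS_PER_BYTE = 8

def gigabytes_to_bits(gigabytes):
    """Convert decimal (SI) gigabytes to bits: GB x 10^9 x 8."""
    return gigabytes * 1_000_000_000 * BITS_PER_BYTE

print(gigabytes_to_bits(64))   # 512000000000 bits in a 64 GB smartphone
print(gigabytes_to_bits(256))  # 2048000000000 bits (2.048 Tb) on a 256 GB card
```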

Digital data can also be compressed. For example, our 35 KB essay has patterns in it which can be stored more succinctly. Doing so requires more computing power to encode and decode, but might reduce the amount of space needed to represent the essay to 10 KB. When dealing with text, this would be a lossless operation, meaning the compression results in no loss of fidelity when reversed (expanded or “decoded”). For example, the HTML, or hypertext markup language that is the foundation of this webpage, is losslessly compressed using “gzip” before being transmitted to, and subsequently decoded by, your web browser.
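Lossless compression is easy to demonstrate with Python’s standard zlib module, which implements DEFLATE—the same algorithm inside gzip. The sample text is contrived to be repetitive, so the savings here are larger than a real essay would see:

```python
import zlib  # DEFLATE, the algorithm used by gzip

text = ("The quick brown fox jumps over the lazy dog. " * 200).encode("utf-8")
compressed = zlib.compress(text)
restored = zlib.decompress(compressed)

# Patterns in the text let it be stored far more succinctly:
print(len(text), "bytes before,", len(compressed), "bytes after")
assert restored == text  # lossless: a perfect round trip
```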

When we represent complex data such as audio, still photographs, and videos digitally, compression is vital, almost universal, and more commonly lossy, meaning that data in unimportant areas is permanently discarded to save storage space. If you remember the old days of audio compact discs (CDs), they could only store 74 or 80 minutes of audio because they weren’t compressed. However, through a lossy compression mechanism known as MP3, you could store 10 hours of music on a CD! Similarly, JPEG is the most common method of lossily compressing digital photographs, and H.264 is a leading way to lossily compress digital audiovisual materials. While lossless compression formats exist for audio, images, and video, particularly with video, the space requirements are tremendous, which is why lossy compression algorithms are used to simplify and discard data in areas likely to be unimportant. For example, in a photograph with dark areas, JPEG encoding discards data in the dark areas because you are unlikely to see it. But, if you were to brighten the image, this data loss would become abundantly apparent! (See the example image below—photograph by Richard Thripp.)

Example image: JPEG artifacts in shadows

Most lossy compression algorithms, and even lossless compression algorithms, let you specify the degree of compression, so if you want to save more space, you can choose to do so. However, with lossy algorithms, higher compression sacrifices fidelity; with lossless algorithms, no fidelity is lost, but more computational power is required to compress and decompress the data.
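The degree-of-compression trade-off can be seen in zlib’s compression levels: level 1 is fast, level 9 spends more computing power to (usually) produce smaller output, and both are lossless:

```python
import zlib

data = b"English text has exploitable patterns. " * 1000

fast = zlib.compress(data, level=1)  # less CPU time, typically larger output
best = zlib.compress(data, level=9)  # more CPU time, typically smaller output

print(len(data), len(fast), len(best))  # original vs. level-1 vs. level-9 sizes

# Regardless of level, decompression restores the data perfectly:
assert zlib.decompress(fast) == data
assert zlib.decompress(best) == data
```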

Humans cannot actually listen to audio nor view a photograph or video in binary format. When you view a digital image, you are actually seeing an analog representation of that image. Mainly, this means it could look different depending on the device or medium of presentation. For example, a digital image displayed on a computer monitor may look different than when displayed on a smartphone, or printed on paper. However, the digital data itself remains the same and can be duplicated without loss of data. In the old days, we would have an analog “master” copy of an audio recording, still image, or video that would be duplicated with loss of fidelity. Then, when that master copy wore out from being frequently duplicated, we might be limited to duplicating a copy of the master copy, and eventually a copy of a copy of a copy, with declining quality each time. For example, security cameras often relied on analog tape that was recorded and re-recorded ad nauseam, causing the tape to degrade. If the tape was not replaced regularly, shoplifters might appear on the tape as a useless, fuzzy blob. Digital recording largely eliminates this type of problem. (Although repeatedly subjecting digital data to a lossy encoding algorithm produces similar effects, the master copy itself does not degrade by being accessed or duplicated—unless you erase it!)

Digital data, particularly when compressed, is more fragile than analog data. For example, if the signal was bad, analog television transmissions often had noise or “snow,” but could still be watched. However, digital television transmissions stutter or are completely unwatchable if the signal is bad.

Intuitively, it makes sense that uncompressed digital data is more resilient than compressed digital data, meaning that we could lose part of the data and still be able to view the rest of it. For example, if we lost part of our 35 KB essay file, we could still read the rest of it. However, if we compress it to 10 KB, the compression algorithm might require all of those 10 kilobytes to be present to produce readable output. In fact, the more powerful the compression, the more likely that every bit is required to produce any usable output, because of how efficiently and intricately the data is compressed. Moreover, if we lose the algorithm needed to decompress the data, or forget how it works, we are lost! Nevertheless, compression is necessary, valuable, and relatively safe if we stick with popular, mainstream formats.
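This fragility is easy to demonstrate: truncate a zlib-compressed stream and one-shot decompression fails outright, whereas a truncated plain-text file would simply be missing its tail:

```python
import zlib

text = b"All work and no play makes Jack a dull boy. " * 500
compressed = zlib.compress(text, 9)

# Simulate losing the second half of the *compressed* file:
damaged = compressed[: len(compressed) // 2]

try:
    zlib.decompress(damaged)
    recovered = True
except zlib.error:
    recovered = False  # nothing at all could be decoded

print("recovered:", recovered)  # recovered: False
```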

Although a byte has eight bits, it can be more useful to represent it as a base-10 number or as a “hexadecimal” (base-16) code. While you might think a base-10 representation would be numbered 1–256, in fact, counting from (0) zero is the prevailing practice, so we represent the binary byte 00000000 as 0, 00000001 as 1, 00000010 as 2, 10000000 as 128, 11110000 as 240, and 11111111 as 255. Hexadecimal extends base-10 to base-16, giving us 16 values to work with in one character instead of 10. While 9 is the tenth and final digit in base-10, hexadecimal adds A as the eleventh digit, B as the twelfth, C as the thirteenth, D as the fourteenth, E as the fifteenth, and F as the sixteenth. Therefore, 0 (00000000) is 00 and 255 (11111111) is FF in hexadecimal.
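Python’s format specifiers make it easy to see the same byte values in all three notations at once:

```python
# binary (8 digits), decimal, and hexadecimal (2 digits) side by side
for value in (0, 1, 2, 128, 240, 255):
    print(format(value, "08b"), value, format(value, "02X"))
# 00000000 0 00
# 00000001 1 01
# 00000010 2 02
# 10000000 128 80
# 11110000 240 F0
# 11111111 255 FF
```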

It is very common to represent colors in hexadecimal, three-byte R–G–B format. Here, 16,777,216 colors (2^24) can be represented hexadecimally with only six characters, representing 24 bits. R, G, and B stand for red, green, and blue (the three additive primary colors), with higher values indicating brighter colors. In a six-character hexadecimal color code, Characters 1–2 represent red, Characters 3–4 represent green, and Characters 5–6 represent blue. FF is the highest intensity, while 00 is the lowest intensity. Thus, pure red would be FF0000, pure green would be 00FF00, pure blue would be 0000FF, pure white would be FFFFFF, and pure black would be 000000.
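A small helper (the function name is my own, for illustration) shows how a six-character hexadecimal color code splits into its three one-byte channels:

```python
def hex_to_rgb(code):
    """Split a six-character hexadecimal color code into (red, green, blue) bytes."""
    return tuple(int(code[i : i + 2], 16) for i in (0, 2, 4))

print(hex_to_rgb("FF0000"))  # (255, 0, 0) = pure red
print(hex_to_rgb("00FF00"))  # (0, 255, 0) = pure green
print(hex_to_rgb("FFFFFF"))  # (255, 255, 255) = pure white
print(hex_to_rgb("000000"))  # (0, 0, 0) = pure black
```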

Twenty-four bits per pixel is considered a “true color” image. However, to store a photograph from a 15-megapixel (MP) digital camera in true color without compression, we would need three bytes per pixel, or 45 MB! JPEG compression is essential for reducing this to a more manageable file size of approximately 2–5 MB.
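The 45 MB figure follows directly from the pixel count, using decimal (SI) megabytes:

```python
def uncompressed_bytes(megapixels, bytes_per_pixel=3):
    """'True color' is 24 bits (3 bytes) per pixel; sizes are decimal (SI)."""
    return megapixels * 1_000_000 * bytes_per_pixel

size = uncompressed_bytes(15)
print(size, "bytes =", size / 1_000_000, "MB")  # 45000000 bytes = 45.0 MB
```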

While this was by no means an exhaustive discussion of digital data (it focused primarily on capacity, representation, and compression rather than other concerns such as storage, volatility, latency, transmission, processing, and encryption), you should now have a grasp of the fundamental underpinnings of the digital world.

Can Derision Enhance Teaching and Learning?

As educators, we assume that we have to be kind and encouraging to establish a safe and supportive learning environment. We might even embrace the trite saying, “there are no dumb questions.” However, in environments such as League of Legends and other multiplayer online video games, as well as online discussion forums (e.g., FatWallet, Reddit), dumb questions exist, and the individuals who ask them are criticized, demeaned, and mocked by their peers.

At least for some subset of learners, might such a “toxic” environment actually enhance learning by motivating learners to demonstrate due diligence? Might it enhance teaching by preventing wasted time on frivolous questions? At what threshold does a “supportive” learning environment cross over into an unproductive learning environment that rewards incompetence and encourages mediocrity?

One issue is that newcomer integration might be impeded by being condescending toward newcomers. However, perhaps this is less of an issue if the learner is highly motivated to persist? For example, someone addicted to a video game such as World of Warcraft is unlikely to cease playing due to being derided—in fact, derision might motivate one to perform better and avoid being called a “noob.” Derision from peers might actually enhance learning.

However, in the typical classroom, derision needs to be applied carefully. It might shut down learning for some learners, while others may find it more motivating than supportive comments. Also, being derided by someone “in charge” (i.e., the teacher) is different from being derided by a peer. Therefore, I am not advocating educators embrace deriding their students, but only discussing possibilities.

As a Ph.D. student who has completed a Master’s degree and seven full-time years of college education, I have noticed that practically every class starts out with a discussion of the syllabus. Instead, what if instructors expected students to read the syllabus and derided them for asking questions that were answered in it? Instead of giving them the answer and needlessly pulling up the syllabus on the screen, tell them, “If you had actually read the syllabus, you would not have wasted our time with this question.” Similarly, throughout the semester there are perennial questions from students who are simply lazy, failing to read assigned readings, directions, et cetera. Instead of offering derision, instructors typically enable and reward these students’ laziness by serving up easy answers. Conversely, students who exercised due diligence are penalized by having their time wasted. If an instructor spends two minutes on a frivolous question in a class of 30, that’s an entire hour of collective time wasted. At University of Central Florida (UCF), some classes in other departments (e.g., business, engineering) have as many as 1000 students, which could waste as much as 2000 minutes of time!

When I was a psychology student at UCF Daytona Beach, professors such as Ed Fouty had rather ostentatious “three before me” policies for their students. Specifically, this meant that when asking a question of the professor or teaching assistant, students had to list three actions they took to figure out the answer on their own (e.g., consulting the syllabus, readings, Google Search). In a way, this is derision—it communicates that there are dumb questions and that instructor time is inherently more valuable than student time. And yet, mustn’t it be? Professors, in particular, must juggle teaching dozens to hundreds of students among many other professional obligations. There is simply no way to do this if one’s time is consumed with trivialities. (Note that I never actually took a course with Dr. Fouty because alternative professors taught all the courses he taught at easier levels of difficulty—although I had enrolled in one of Dr. Fouty’s courses, and then dropped it immediately after the first meeting.)

Here are several examples of how participants are derided on the FatWallet Finance forum:

1. In a topic about tipping, the first reply, receiving many upvotes, says: “OK – why is this a difficult concept? If you feel like they did a great job, leave them a tip. If not, don’t. It’s very simple.” This derides the original poster (OP) by implying (s)he lacks critical thought for asking a frivolous question.

2. An OP asks for a simple explanation of Bitcoin, and the first response, receiving several upvotes, is merely “https://en.wikipedia.org/wiki/Bitcoin.” This derides the OP for asking a question that they could easily have figured out on their own. However, the OP is arguably deserving of derision for being lazy and wasting others’ time, which shows a lack of respect. Let Me Google That for You (LMGTFY) is a website that can similarly be used to deride individuals who ask questions that could be answered via a simple Google Search query—it provides a link that shows an animation of typing the question into Google and then loads the search results. Deriding learners in this manner can enhance their learning by encouraging them to take personal responsibility, while also enhancing teaching by eliminating a particularly insidious type of time-wasting question.

3. An OP asks about doing a chargeback for canceling a hotel reservation that lost its Best Western branding, but admits to having canceled for other reasons and that loss of branding is a “convenient excuse.” One commentator says: “Stop using the brand change as a way to scumbag you’re way out of it. It’s pretty pathetic. If you had a problem with the room then that would be the time for a chargeback. The room is exactly the same as the one you were paying for. They didn’t hack it to bits and throw garbage all over the floor bc of the brand change. Saying you want to cancel on the off chance their is a problem you can’t complain to Best Western management is an absurd stretch.” Although this commentator received more downvotes than upvotes, this sentiment of derision was echoed by several other commentators and might discourage the OP from asking similar questions in the future.

In other forums, derision commonly is incited by “reposting”—that is, posting about a topic that has already been covered elsewhere. OPs for such topics are ridiculed for their lack of due diligence—they could easily have searched for the prior topic. Here, derision potentially elicits a social norm of avoiding duplication of questions and content, which increases the efficiency of the forum.

Derision can enhance teaching by making it abundantly clear that the instructor, or a peer group, will not accept unproductive behaviors. For instance, in the realm of financial literacy education, instead of coddling individuals who continue to incur overdraft fees or resort to the services of payday lenders, we might mock, demean, and ridicule them for their lack of financial competence. “You know your actions are financially disastrous, and yet you persist—you have no one to blame but yourself for your situation, and you will find no sympathy here.”

Derision might encourage “lurking” or “participatory spectatorship” instead of active participation, particularly in games or activities with steep learning curves. Just because some activity is difficult to learn does not necessarily mean it is the responsibility of others to aid that learning. In environments where incompetence is derided, effective learners might avoid derision and exercise due diligence by observing and learning from the behaviors of others (social norms), and even by researching and implementing meta-cognitive strategies to aid their performance. Instead of “spoon-feeding” learners, may we not expect them to take at least a modicum of personal responsibility for their learning rather than behaving as lazy, impetuous children?

Tech Insights for Educators #1: Special typographic characters and alt codes

This is the first in a new series of Technology Insights for Educators which I will use as supplemental materials for my students in EME 2040: Introduction to Technology for Educators at University of Central Florida, which may also be of general interest. As I enter my second year of the Education Ph.D., Instructional Design and Technology program, I am becoming a Graduate Teaching Associate and will be teaching two mixed mode sections of EME 2040 (Monday 10:30 A.M. – 1:30 P.M. and Wednesday 1:30 – 4:20 P.M.) as Instructor of Record in Fall 2017. At a later time, I will make a landing or index page for these insights.

When preparing documents, et cetera, there are many typographic characters that are not available on a standard keyboard, and yet are supported by Unicode and can be used in most applications (e.g., Microsoft Office).

On Microsoft Windows, if a numeric keypad is available (found on the right side of the keyboard), such characters can be directly typed with alt codes. With the Num Lock key enabled, hold down one of the Alt keys, type a sequence of numbers on the numeric keypad, and then release Alt; the special character will then appear. I found a list of many alt codes in this blog post by “Techno World 007.” Here are some of the most important ones:

Symbol Alt Code Description
• Alt + 0149 Bullet point
– Alt + 0150 En dash
— Alt + 0151 Em dash
¢ Alt + 0162 Cent sign
° Alt + 0176 Degree symbol
× Alt + 0215 Multiplication sign
÷ Alt + 0247 Division sign
′ * Alt + 8242 Prime symbol
″ * Alt + 8243 Double prime symbol

* Alt code works in Microsoft Office, but not most other programs.
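All of these characters are standard Unicode, so any environment that accepts Unicode code points can produce them. One caveat, to the best of my understanding: alt codes below 0256 index the Windows-1252 code page, so for the bullet and the dashes the alt-code number differs from the Unicode code point, while the four-digit codes of 8242 and 8243 are the decimal Unicode code points themselves. A Python sketch of the mapping:

```python
# Unicode code points for the symbols in the table above.
symbols = {
    "bullet":         0x2022,  # Alt + 0149 (Windows-1252 position 149)
    "en dash":        0x2013,  # Alt + 0150
    "em dash":        0x2014,  # Alt + 0151
    "cent":           0x00A2,  # Alt + 0162 (here the numbers coincide: 162)
    "degree":         0x00B0,  # Alt + 0176
    "multiplication": 0x00D7,  # Alt + 0215
    "division":       0x00F7,  # Alt + 0247
    "prime":          0x2032,  # Alt + 8242 (decimal Unicode code point)
    "double prime":   0x2033,  # Alt + 8243
}

for name, code_point in symbols.items():
    print(f"U+{code_point:04X} {chr(code_point)}  {name}")
```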

If a numeric keypad is unavailable (e.g., on a laptop), or you are in a non-Windows environment, there are other options. In Microsoft Word, there is the “symbol” section. Another option is simply copying-and-pasting the symbol into the target document. In Microsoft Word, this should be done with the “keep text only” paste option to prevent inheriting conflicting font size or formatting from the source.

What you see in many academic manuscripts, books, and other materials is frequently incorrect. Using a hyphen in a number range (e.g., 10-99) is not correct—an en dash should be used (e.g., 10–99). When an author speaks of a two-by-two interaction, calling it a 2*2 or 2x2 is typographically incorrect. Instead, the multiplication sign should be used (i.e., 2×2). When talking about height or distance, one should use the prime and double-prime symbols rather than the single- and double-quote symbols, respectively (e.g., not 5’10”, but rather 5′10″).

In some cases, Microsoft Word will help you. For example, if you type two hyphens between words, it automatically converts the two hyphens to an em dash (—).

Personally, I am so used to using some of the symbols that I have memorized the alt codes for an en dash, an em dash, the cent sign, and the multiplication sign (–, —, ¢, ×). This way, when I am typing in an online discussion, et cetera, and must employ these symbols to be typographically correct, there is no need for me to copy-and-paste from an external source or consult a character map.

You can impress or annoy your colleagues with your knowledge of typography. Surprisingly, I have found that knowledge of the en dash, in particular, is sparse. Most people, including full professors, incorrectly use hyphens where en dashes are required. I suppose many academic journals correctly employ en dashes only because the editors make corrections to the authors’ manuscripts.

Why UCF should allow faculty and staff to change Windows 10 taskbar display settings

June 21, 2017

My bid to get University of Central Florida’s (UCF) I.T. department to allow education faculty and staff to change taskbar settings so they could ungroup Windows 10 taskbar items and be able to display labels in addition to icons was shot down. I am told this issue does not affect job performance in any way and that there is no need for changes because work is not being impeded. My concluding remarks:

Thanks, [redacted], for your help! I disagree with [redacted]—faculty in the education department are provided with dual monitors, even though by this standard, single monitors would not impede work. I believe that like dual monitors, being able to ungroup items on the taskbar and being able to display labels instead of icons would improve productivity. However, I will take no further action.
