Proofs, Alibis, and Falsifiability

As a graduate student in the social sciences, I have been firmly taught to avoid saying anything is “proven” on the basis of empirical research; instead, a concept, such as Carol Dweck’s work on mindsets, which I have recently begun studying, has been “supported” by research.

This blog post by Satoshi Kanazawa from Psychology Today (2008) explains the distinction in an accessible way:

Proofs exist only in mathematics and logic, not in science. Mathematics and logic are both closed, self-contained systems of propositions, whereas science is empirical and deals with nature as it exists. The primary criterion and standard of evaluation of scientific theory is evidence, not proof.

Surprisingly, I am only just learning that math is not science. “Doctor Ian” from the Math Forum at Drexel explains it fairly well in this forum post from 2001. Since math and logic exist in self-contained, fairytale worlds, proofs are entirely possible in those worlds, but not in the world of empirical science.

Regarding science: This word is derived from the Latin word scire, which simply means “to know.” This is one of many areas where the accoutrements of a field have come to popularly define it—a “scientist” arguably does not need to have a position, title, or advanced education, but merely knowledge.

An alibi is a legal defense used to establish innocence, based on the defendant having been in a different location at the time a crime occurred. Assuming it is valid, an alibi proves the defendant did not directly commit the crime (albeit without disproving ancillary involvement). Since people can only be in one place at a time, it is not possible for you to have directly committed a murder in someone’s home at 9:30 am and also to have been at church. Of course, you could have booby-trapped their home with explosives, which would still be murder, but my point is that an alibi can actually prove something by disproving all alternatives.

This blog post by Claes Johnson (2012) was my source for the idea regarding alibis and falsifiability. Johnson says:

Comparing with a legal case, we know that to convict someone for murder it is necessary with some positive evidence which connects the suspect to the deed, like fingerprints. We know that lack of negative evidence, such as lack of alibi, is not enough for conviction to the electric chair. It should neither be enough to convict a theory to the heavy burden of being scientific.

Popper’s negativism expresses his criticism of positivism, which serves the purpose of making modern physics based on statistics acceptable as science.

I thought an alibi would be positive evidence, but either I do not understand the difference between positivism and negativism or it is a matter of perspective, e.g., an alibi could be positive evidence with respect to establishing innocence, but negative evidence with respect to establishing lack of guilt, like the difference between “innocent” and “not guilty.” I am not sure if this is correct.

Regardless, the alibi is a time-limited device. It is much easier to prove that you did not commit a murder at 9:30 am on Sunday than it is to prove that you have never committed a murder, as the latter would require a comprehensive accounting of your entire life to date. Further, it is impossible to prove that you will not kill someone in the future, even to yourself. However, at all times, the conjecture that you have never committed a murder remains falsifiable—we may falsify it by simply discovering one murder you have committed. While such a discovery does not establish a ceiling on the total number of murders you may have committed, it does raise the floor from zero to one. (I am using “floor” and “ceiling” loosely to mean lower and upper bounds; in mathematics the terms usually refer to rounding functions.)
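To make this asymmetry concrete, here is a minimal sketch in Python (the known_murders list and the date in it are entirely made up, simply standing in for whatever evidence has surfaced):

    # Hypothetical list of murders that have come to light; the true total is unknown.
    known_murders = []

    def evaluate(known_murders):
        """Return whether the conjecture 'never committed a murder' still stands,
        plus the bounds the available evidence implies."""
        floor = len(known_murders)       # lower bound: we can count what has been discovered
        ceiling = None                   # upper bound: unknowable without a complete record
        conjecture_holds = (floor == 0)  # a single discovery falsifies the conjecture
        return conjecture_holds, floor, ceiling

    print(evaluate(known_murders))    # (True, 0, None)
    print(evaluate(["1997-03-14"]))   # (False, 1, None): falsified by one counterexample

The point of the sketch is only that falsification is cheap (one counterexample) while verification is not (the ceiling stays unknown no matter how much evidence accumulates).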

In science, we are often trying to establish theories whose explanations and predictions hold without limit of time or place. Since this is a much broader task, “proving” something is out of the question. While we can disprove a conjecture by finding definitive evidence to the contrary, no amount of confirming evidence can prove that disconfirming evidence does not exist.

Finding a ceiling requires comprehensive evidence. We need the entire population, rather than a sample, at our disposal. Further, we can always question the completeness and veracity of our record-keeping. While “big data” may solve some of these problems with respect to some subsets of digital communications, for practical purposes this problem is always in play. Even the U.S. Census admittedly misses thousands or millions of fugitives and illegal aliens. Finding a floor, however, is far easier. We could look at just 20 Americans and say, “well, we know at least 9 males exist in the U.S. population.” Without expanding our sample, we can say for sure that there are not fewer than 9 males in the U.S. population. Of course, this does not apply when mathematically negative values are a possibility; e.g., we cannot surmise that an individual’s net worth is at least $5,000 because they have $5,000 in a bank account, since debts could more than offset the balance.
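As a rough illustration (a Python sketch with made-up data, not anything drawn from the actual Census), a small sample yields a hard floor on a population count but no ceiling:

    # A made-up sample of 20 Americans ('M' = male, 'F' = female); not real Census data.
    sample = ['M', 'F', 'M', 'M', 'F', 'F', 'M', 'F', 'M', 'M',
              'F', 'M', 'F', 'F', 'M', 'F', 'M', 'F', 'F', 'F']

    males_observed = sample.count('M')   # 9

    # Floor: the population contains at least the males we have already observed.
    population_floor = males_observed

    # Ceiling: unknowable from a sample; only a complete enumeration could supply it.
    population_ceiling = None

    print(population_floor, population_ceiling)   # 9 None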

The difficulty associated with producing comprehensive evidence might be compared to the difficulty associated with “proving” something in science. Both are insurmountable. However, producing an alibi might be compared to falsifying a scientific conjecture. Both are possible, and sometimes easy. (Ah-ha: Now I understand what Johnson meant with an alibi being “negative evidence.”)

At this point, I want to introduce the Dunning–Kruger effect. This is a common cognitive bias in which incompetent people grossly overestimate their skills; sometimes their self-appraisals are more aggrandizing than the self-appraisals of experts! Experts suffer from the converse problem of underestimating their skills relative to others; I postulate that this leads to a chilling effect where experts give too much weight to crackpots and are restrained in denouncing them, while crackpots have no such reservations about attacking experts. Thus, crackpots get an unfair amount of attention and credibility. They make arguments that inherently (but not necessarily deliberately) cater to human cognitive biases. These might include phrases like “I believe,” “any sensible person can see,” or “prove me wrong.” (“Crackpot” in this case is meant as a humorously contemptuous characterization of individuals lacking both credentials and requisite expertise, but not individuals who merely lack the former.)

While the expert tries to present empirical evidence that the crackpot is wrong, and refrains from judgment until such evidence can be procured, the crackpot proceeds with circumstantial and logically flawed arguments, without worrying about rationally disproving the expert. Therefore, the crackpot is more confident and may even appear more credible, but this is because the playing field is not level. The expert is holding him- or herself to much higher standards than the crackpot. This is not fair to the laypeople who become misled; therefore, experts should probably “suspend the rules” and discredit crackpots more vigorously than they would colleagues. (For a presentation by me on Kruger’s more recent work on the “first-instinct fallacy,” see: A Review of “Counterfactual thinking and the first instinct fallacy” by Kruger, Wirtz, & Miller (2005) [PowerPoint].)

Getting back to proofs, alibis, and falsifiability: giving someone the “benefit of doubt” is related to all three. (Side note: I do not understand why it is always phrased “benefit of the doubt.” Additional side note: It is not fair for anyone to imply something or someone is wrong or unusual just because they have not heard of it or encountered it before. Perhaps the problem is with me.) First, you lack proof. Second, they have an alibi, or you can think of one for them. Third, their innocence is falsifiable, but you don’t have enough evidence to feel confident. Clearly, the other person has all the knowledge with respect to his or her behavior. This might be called the “defender’s advantage,” and it is present whenever the defender is given benefit of doubt and has a knowledge advantage—both are principles of American criminal justice. Conversely, the “attacker’s advantage” exists when benefit of doubt is removed, particularly when the attacker also has a knowledge advantage.

An example of the attacker’s advantage is Amazon.com, Inc. defrauding me in September 2015 (which remains unresolved). Amazon.com, Inc. bans users and confiscates their gift card balances, Amazon Prime memberships, Kindle e-books, and other content, without possibility of appeal or explanation; they do not give customers benefit of doubt and immediately remove the customer’s access to all order history and other account data, which represents a strong and arguably unfair attacker’s advantage.

Neither advantage is fair unless it serves a compensatory purpose—in American criminal justice, the defender’s advantage is typically fair because it compensates for the awesome prosecutorial power of the state (unless the defendant is wealthy). However, corporations defrauding their customers by enforcing punitive and open-ended terms of use is particularly insidious. It is a triple threat, because they hold both attacker’s advantages (greater knowledge and removal of benefit of doubt, the latter of which might also be labeled removal of presumed innocence), plus far greater money and power.

While I am ending this essay on what appears to be a substantial tangent, I think tangents and interdisciplinary approaches are important. (I can even recall reading research that supports this.) There are often connections that we do not see without stepping out of our box. While this is not a license to become a dilettante, consider that branching out may be a better use of your energy than trudging forward.

If you are looking to branch out, I recommend reading about logic and philosophy, which has long been one of my favorite pastimes.
