Rating: 4.5 out of 5.

If you took a statistics class, you probably have some of the same memories I do: sitting in that class, thinking that what was being said did not quite make sense or align with my understanding of the world, but not being able to develop an entirely cogent argument for why what was being taught didn’t add up.  Whether it was a statistics class or a standardized test, statistics and probability were always frustrating to me.  Now, I fully admit that I am not a mathematical savant, but I have taken a lot of math classes in my time, studied a lot of different mathematical topics, and use and think about math more than most people would think is healthy.  Statistics and probability, though, inevitably trip me up when I start trying to study them.

There are so many reasons why you should read Bernoulli’s Fallacy, many of which we will be addressing in this review, and finally understanding why statistics and probability didn’t make sense back in school is just one of them.  Within the first chapter, Clayton lays out, in rigorous terminology and mathematical logic, the problems with probability that had been struggling to come to light from the edges of my consciousness for years.  Specifically, the logic of probability as frequency, which is what is taught as orthodox statistics and has been employed for almost every application of probability over the past century, is fundamentally flawed, with a flaw that goes all the way back to elementary logic as described in Aristotle’s Art of Rhetoric.  Treating probability as a derived frequency tries to make the probabilistic argument into a syllogism, when in reality probabilistic arguments are enthymemes: arguments with a premise left unstated.
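To spell that distinction out in symbols (this is my own shorthand, not Clayton’s notation, with H standing for a hypothesis and D for observed data), here is the difference between a deductively valid syllogism and the frequentist near-syllogism, with Bayes’ theorem supplying the premise the frequentist version leaves out:

```latex
% A valid syllogism versus the frequentist near-syllogism (my gloss, not Clayton's notation)
\begin{align*}
\text{Syllogism (valid):}\quad
  & H \rightarrow \neg D,\ \ D\ \ \therefore\ \neg H \\
\text{Frequentist move (invalid as stated):}\quad
  & P(D \mid H) \text{ is small},\ \ D\ \ \therefore\ H \text{ is improbable} \\
\text{Bayes' theorem (the unstated premise is } P(H)\text{):}\quad
  & P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D \mid H)\, P(H) + P(D \mid \neg H)\, P(\neg H)}
\end{align*}
```

As I read the book, the unstated premise is the prior probability of the hypothesis, which is precisely what frequentist methods decline to write down.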

The impetus for the book is a problem in the institutions of science that you may have heard referenced: the replication crisis.  In the past few years, projects to redo classic experiments in psychology, economics, and even medicine and biology have failed to produce results that align with the original conclusions.  If you’ve ever been skeptical of results and conclusions drawn by researchers in the soft sciences that seemed contrary to common sense or lived experience, you might have been right to be, because a lot of those results have now been found to be, in a word, wrong.  For someone like me who always suspected that, but had never put in the effort or had the resources and mathematical tools necessary to make a coherent argument as to why the results from a seemingly valid experimental setup should be dismissed, having the mathematical, logical, and experimental backing to affirm that lingering suspicion is quite gratifying.

For a book with only eight chapters, there is a lot to unpack.  It covers everything from the history of probability and statistics, to their uses through the centuries, to examples, exercises, implementations, and mathematical derivations.  Plus, it is dense: do not go into this expecting to whip through it.  I found myself lingering over passages, rereading sections, and spending hours after a reading session pondering and ruminating over what I had just read.  It’s a deeply thought-provoking and directly applicable book, because probability is everywhere.  Probability has always been everywhere, but it is even more so today, with the rise of concepts like Big Data and the Information Age.

At its heart, Bernoulli’s Fallacy is an argument for a certain understanding of probability.  It introduces the reader to the two dominant schools of probabilistic thought: the frequentist school, which we have already referenced as the school of orthodox statistics and the source of many problems more serious than student struggles on the ACT, and the Bayesian school, which we could also call inferential probability.  Clayton takes us through the history of both statistical methods and shows the arguments for each, but this is not the sort of book that attempts to present an unbiased account: Clayton is out to convince you that Bayesian probability is the cure for the fundamental flaws of frequentist statistics, and he does not hide that fact, nor does he pull his punches.  In places I found this a little over-the-top, but it did not detract from the book’s credibility, and he does attempt to explain frequentist methods in the best possible light before exhibiting their deep-seated flaws.

If there is a place where Clayton’s arguments grow a little excessive, it is in his discussion of the intertwined history of frequentist statistics and eugenics.  Nothing he said was false or misleading, but at times it diverted the book more into a condemnation of eugenics than a condemnation of frequentist statistical methods, and into an effort to incite in readers an instinctive revulsion for frequentist methods based solely on their emotional and historical ties to eugenics.

Here is the heart of the argument.  Frequentist methods attempt to provide probabilities as objective truths based on observed frequencies and notional concepts of infinite trials.  Bayesian methods treat probabilities as subjective quantities, degrees of belief that incorporate the contributing factors and outside information through the integration of prior probabilities.  That is to say, whereas a frequentist method will look at a dataset and tell us what the chances were of getting that data under an assumed hypothesis, a Bayesian method will tell us the probability of an effect existing based on the data collected and any prior information we might have about the likelihood of the effect.  If all of that seems a little confusing, I suggest you read the book, because it will explain it much better than I can in a post that is supposed to be a book review and not a detailed essay on probabilistic methods.
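To make the contrast concrete, here is a toy sketch of my own (not an example from the book, and the numbers are made up purely for illustration): suppose a coin comes up heads 16 times in 20 flips.  A frequentist analysis asks how surprising that data would be if the coin were fair; a Bayesian analysis asks how probable it is that the coin is biased, given the data and a prior belief about how common biased coins are.

```python
from math import comb

# Made-up data for illustration: 16 heads in 20 flips.
n, k = 20, 16

def binom_pmf(j, n, p):
    """Probability of exactly j heads in n flips of a coin with heads-probability p."""
    return comb(n, j) * p**j * (1 - p) ** (n - j)

# Frequentist view: P(data at least this extreme | coin is fair) -- a one-sided p-value.
p_value = sum(binom_pmf(j, n, 0.5) for j in range(k, n + 1))

# Bayesian view: P(coin is biased | data), comparing two simple hypotheses,
# "fair" (p = 0.5) versus "biased" (p = 0.75), with a prior belief that
# biased coins are rare (say 1 in 100).  These hypotheses and the prior are
# assumptions of this sketch, not anything from the book.
prior_biased = 0.01
like_fair = binom_pmf(k, n, 0.5)
like_biased = binom_pmf(k, n, 0.75)
posterior_biased = (like_biased * prior_biased) / (
    like_biased * prior_biased + like_fair * (1 - prior_biased)
)

print(f"Frequentist p-value (prob. of data given a fair coin):  {p_value:.4f}")
print(f"Bayesian posterior (prob. of a biased coin given data): {posterior_biased:.4f}")
```

Run as-is, the p-value comes out around 0.006 while the posterior probability of a biased coin is still only about 0.29, because the prior says biased coins are rare.  That gap between “this data would be unlikely if the coin were fair” and “the coin is probably unfair” is, as I understand it, exactly the confusion the book’s title points to.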

I’m torn over how much more to discuss in this review.  On the one hand, there was a lot of important content in the book that would be worth discussing, but on the other hand, you might be better off reading the book itself rather than listening to me regurgitate and ruminate over its contents here.  At a high level, what Bayesian methods accomplish that frequentist methods don’t is to better align probability with reality, instead of with an imaginary mathematical contrivance of infinite trials and notional experiments.  As humans, we work with Bayesian-style probabilities all the time, even if we’re not doing the rigorous and ugly mathematics to get there.  When we make risk-benefit decisions, we’re doing Bayesian probabilities.  When we make assumptions about how the world is going to work from day to day, we’re doing Bayesian probabilities.

Look at the problems that surround us and all of the circumstances to which statistics are applied (or to which people attempt to apply them, at least).  Statistics can be made to say almost anything you might want them to, which is why they can’t be trusted uncritically.  Bayesian probabilities help, in some small way, to address these problems by forcing everyone to acknowledge that there is no such thing as an objective conclusion from a dataset, and that data never “speaks for itself.”  Data is data, and it is up to us as thinking, rational, moral humans to interpret it and infer conclusions.  Bayesian methods also remind us that morality is independent from science, from statistics, and from data.  Information can help suggest causes, correlations, maybe even solutions, but it is no substitute for morality.

There are very few books that I think everyone needs to read, but Bernoulli’s Fallacy might be one of them.  Whatever kinds of books you normally read, whatever your background, whatever your usual interactions with data and statistics, this book matters.  In fact, I calculate that there is a 100% probability that you will find this book valuable.  If nothing else, you’ll know why a 100% probability makes no sense at all.  Go read Bernoulli’s Fallacy.
