When I wrote my review of Human Dimension and Interior Space, I mentioned that I picked up the book because I was interested in designing and building my own workbench. I’ve begun that process, but even its completion will constitute only the first step in what I hope will one day be a sort of small-scale home laboratory. After all, reading, writing, distance running, cooking, astrophysics, and carpentry aren’t enough hobbies for me, and I really need to get to work on building a fusion reactor in my garage – I’m way behind on the nuclear physics goals I set for myself at thirteen. The point of such a setup, which I hope would include things like a multimeter, a microscope, a heat source, some electromagnets, and maybe a centrifuge, would be to enable practical experimentation.
Now, I don’t expect that I will be rewriting our understanding of thermodynamics in my garage laboratory, even when and if it ever becomes more than an idle fantasy in my peculiar and intellectually restless mind. Instead, it would enable me to replicate experiments that have already been done, and to investigate my own questions and hypotheses as an alternative to accepting the word of the mysterious scientists behind the papers I read in journals like Science or Nature. My initial interest in science arose, after all, from my fascination with understanding practical things: how a clock works, what enables a radio to make noise, by what means a hair dryer generates heat, or why ice makes crackling noises when dropped in a glass of room-temperature water. It makes me miss the days when you could make a reasonable effort at new scientific discoveries by heating samples of material and separating them out by density to find new elements.
I think science as a discipline could benefit from a more practical approach. This doesn’t so much refer to some of the really abstract and intangible research happening in fields like quantum physics as it does to something that I see more and more presented in lieu of actual experiments: computer models. In just the past few weeks, I’ve read everything from government reports to news articles to peer-reviewed scientific papers that offer as their evidence not practical experiments or real datasets, but computer models and statistical simulations. One even proudly proclaimed that it was based on interpolated data – in other words, data that is only inferred to exist between known data points.
To understand why I consider this a problem, and why I want to persuade you that you, too, should be skeptical of conclusions drawn from computer models and simulations, we’re going to talk about Kalman filters. From a technical standpoint, a Kalman filter is a mathematical entity consisting of weighted matrices. The math is quite ugly, and I have no intention of going into it here, but the concept is relatively straightforward. A Kalman filter is just a particular implementation of what is called, in the parlance, closed-loop control, and I can almost guarantee that you’ve used closed-loop control in the past few days. If your house has a thermostat, or your car has cruise control, you’ve used a form of closed-loop control. In closed-loop control, the system takes a model, generates an output, compares the output to the measured results, and then feeds the measured results back into the model to improve the system. Contrast this with open-loop control, which would be something like a space heater: you turn it on, and it generates heat without regard for any input save the on/off switch. A thermostat, on the other hand, will turn the heat on or off based on the temperature it detects in the house. That’s closed-loop control.
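The thermostat loop above can be sketched in a few lines. Everything here – the setpoint, the hysteresis band, the crude room dynamics – is invented for illustration; the point is only that the decision each step depends on the *measured* temperature, which is fed back into the controller.

```python
def thermostat(setpoint, measured, heater_on, band=0.5):
    """Closed-loop control: the control decision depends on measured output."""
    if measured < setpoint - band:
        return True      # too cold: turn heat on
    if measured > setpoint + band:
        return False     # too warm: turn heat off
    return heater_on     # inside the band: leave the heater as it is

temp, heater_on = 15.0, False
for _ in range(100):
    heater_on = thermostat(20.0, temp, heater_on)
    temp += 0.3 if heater_on else -0.1   # crude stand-in for room physics

# Because the measurement is fed back every step, the room settles
# into a narrow band around the 20-degree setpoint instead of
# heating forever the way an open-loop space heater would.
```

Delete the feedback (run the heater unconditionally) and the temperature climbs without bound: that is the open-loop version.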
Kalman filters in particular are interesting because they use both a model and actual measurements to perform their operations. The observations and the predictions are each given a rating based upon a variety of factors that can be reduced to how much each ought to be trusted. This can lead, in certain cases, to the filter becoming “smug” – trusting its own predictions so much that it ignores the actual observations entirely. There are cases where this might be desirable, such as when the sensors providing the observations have stopped working or have become otherwise unreliable, but it is very rare that we will want to ignore the observations completely. It would be like getting into the car, closing your eyes, and driving to the airport based only on what you remembered from the map you looked at before leaving.
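A one-dimensional toy filter makes the smugness concrete. This is a sketch under simplifying assumptions (a constant true state, scalar noise terms), not the full matrix form: with the process noise Q set to zero, the filter’s own variance P shrinks every step, and the gain K – the weight it gives each new observation – collapses toward zero. The filter stops listening.

```python
def kalman_step(x, P, z, Q, R):
    """One predict/update cycle of a scalar Kalman filter.
    x: state estimate, P: estimate variance,
    z: new measurement, Q: process noise, R: measurement noise."""
    # Predict: the model says the state stays put; uncertainty grows by Q.
    P = P + Q
    # Update: the gain K weighs the measurement against the prediction.
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1 - K) * P
    return x, P, K

# With Q = 0 the filter becomes ever more certain of itself:
x, P = 0.0, 1.0
for _ in range(1000):
    x, P, K = kalman_step(x, P, z=5.0, Q=0.0, R=1.0)

# By now K is tiny: a wildly different measurement would barely
# nudge the estimate. A nonzero Q keeps the filter humble, because
# it admits the world may have changed since the last prediction.
```

The numbers here are made up; the collapse of K, though, is exactly the mechanism of the smug filter described above.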
Drawing conclusions from computer models and simulations is the same idea. It is stuffing our fingers in our ears, closing our eyes, and insisting that we know enough about how the world works, and how it is right now, to predict how it will be in the future. Without practical experimentation to provide real data, real observations, this kind of science becomes as smug as a bad Kalman filter. Granted, we do have a good understanding and know a lot about the initial conditions, but we certainly do not know everything, especially as the systems become more complex, with more variables and considerations involved. It doesn’t matter how fast or powerful the computer running the simulation is if the inputs provided do not account for everything. And I guarantee that they will not account for everything.
This is not to say that computer models and simulations cannot be useful tools. They save a great deal of expense, hassle, and time in many situations, and they can provide insights that cannot reasonably be obtained any other way. My concern is not that people are using these powerful tools to test hypotheses and predict possible outcomes. No, my concern is about smugness – that these tools are taking the place of actual data and practical experimentation.
Socrates claimed that the beginning of wisdom is knowing we know nothing. It might be an overused saying, but it led me to the lens through which I try to view most things with which I am confronted: that there are no right answers, only less wrong answers that fit the currently known data. That’s what experimentation is really all about – assuming we know nothing, and trying to find more data so that we can take our wrong answer and make it a little less wrong. Computer models assume we know everything, and tell us where that knowledge leads. And yes, I know that it’s more complicated than that, with margins of error and noise and other complexities often modeled to provide an uncertainty to go along with whatever predictions or conclusions are drawn, but those caveats only emphasize my point. If we really want to find out more about our universe, the world in which we live, practical experimentation is necessary.
Surrounded by the apparently inexhaustible information of the “Information Age,” it is easy to grow complacent, to believe that the answers really are all out there, just waiting for the right search query to unearth them. My personal laboratory probably wouldn’t enable me to discover a new element, or spot the Trojan asteroids, or probe the secret natures of neutrinos. That’s okay. I might use the facility to answer questions in a year that I could have answered in five minutes on the internet. That’s okay, too. Whether it’s something that has never been tried before, or something that has been repeated so many times that some people would call it a “fact,” it is still what science is all about: experimentation.