Civilizations rise and fall, their learning, their wisdom, their knowledge, and their technology often rising and falling with them. Though the past few centuries have seen global civilization rise to technological and scientific heights undreamt of by previous eras, this can conceal the inherently unstable trajectory of civilizations. There is no principle which states that civilizations must, in the long term, advance. My favored example is Xenophon marching past the ruins of ancient civilizations during the march of the Ten Thousand. Indeed, an argument could be made, based upon entropy, that civilizations must, in the long term, deteriorate and fail, and that all of our efforts to preserve our knowledge are ultimately futile.
In a recent article, which I cannot seem to find again to provide a link, a few thinkers conclude not only that civilizations must fall eventually, no matter how high they rise, but also that there are limits imposed upon how high they can rise in the first place. They propose a “universal limit on technological development” (ULTD) as a solution to Fermi’s paradox, based upon a notion of diminishing research returns and increasing civilizational complexity. In other words, civilizations must become more complex in order to accommodate greater specialization, but that complexity eventually leads to inefficiencies and cultural instability, so the resources available to spur technological development become limited by opportunity costs; meanwhile, probing the next level of understanding requires exponentially greater resources.
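To make the shape of that claim concrete, here is a toy sketch, on numbers I have invented purely for illustration and not anything drawn from the article itself: if the cost of each successive breakthrough grows exponentially while the surplus a civilization can spare for research grows only polynomially, progress plateaus at whatever level the two curves stop crossing. The growth rates, the surplus function, and the time horizon below are all assumptions of mine.

```python
# Toy illustration of the diminishing-returns argument (my assumptions, not the authors' model):
# breakthrough cost grows exponentially with level, while the research surplus a civilization
# can spare grows only polynomially with time. Progress stalls once cost outruns surplus.

def breakthrough_cost(level, base=1.0, growth=10.0):
    """Hypothetical cost of reaching the next level of understanding (assumed exponential)."""
    return base * growth ** level

def research_surplus(time, rate=100.0, exponent=2):
    """Hypothetical resources a civilization can spare at a given time (assumed polynomial)."""
    return rate * time ** exponent

def highest_level_reached(max_time=1000):
    """Count how many levels are attainable before the assumed time horizon runs out."""
    level, t = 0, 1
    while t <= max_time:
        if research_surplus(t) >= breakthrough_cost(level):
            level += 1   # fruit within reach: pluck it
        else:
            t += 1       # wait and accumulate more surplus
    return level

if __name__ == "__main__":
    # Plateaus quickly under these assumed growth rates; different assumptions give different ceilings.
    print(highest_level_reached())
```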
Essentially, the authors assert that we may have already plucked the low-hanging fruit of technological and scientific advancement, and that reaching the higher fruit requires building a ladder too complicated to bother with when those resources are needed to keep the would-be ladder-builders happy. Yes, the metaphor might be just a little strained, but it roughly communicates the point. It also, conveniently, hints at how many logical holes there are in the argument.
Let us assume, for a moment, that this argument has some validity for the human species and the human condition, which may be giving the word “assume” a workout in this instance. Terming this a “universal” limit on technological development demonstrates the same anthropocentric thinking of which many analyses of extraterrestrial habitability are guilty: it presupposes that other intelligent lifeforms in the universe would have needs, desires, and circumstances comparable to our own. Given our statistically insignificant sample (of one), we have no basis to assume this to be the case. Would hyperintelligent, sentient, amoebic organisms inhabiting the atmosphere of a gas giant be subject to the same limitations as our corporeal, terrestrial, primate-based, mammalian forms? It seems distinctly unlikely.
As for the physical aspect of the hypothesized limitation, an examination of the history of innovation suggests the fallacy and hubris inherent in assuming that merely extrapolating from our current understanding gives an accurate picture of the parameters involved in constructing, for instance, galaxy-scale particle accelerators, or even that such devices would be necessary to enable continued advances in technology and scientific understanding. History is replete with examples of the supposedly impossible and unattainable becoming commonplace by virtue of unexpected, nonlinear innovations. Consider the significant concerns about overpopulation common in the late nineteenth and early twentieth centuries, which have largely been muted thanks to innovations in food production that, even now, tap only a fraction of the sustenance output modern techniques could deliver. Or consider the revolution that came with the advent of the semiconductor switch for use in computers.
Of course, the authors are essentially arguing that looking to history is itself fallacious, and it is true that past performance is no guarantee of future results. The authors assert that previous revolutions in science and technology came from plucking the metaphorical low-hanging fruit, and that future revolutions will require significantly greater investments of energy, time, and resources, eventually reaching a point where those requirements become too large to meet. To extrapolate this from our current understanding, though, is a hubris born of believing that we today have some special insight into the nature of reality. Continuing flaws and gaps in our understanding, such as those explored in the discussions of emergent space-time referenced in our previous post, not to mention the conceits of dark energy and dark matter, suggest that we have no special claim to a “right” or “true” understanding today, only a somewhat less wrong understanding than yesterday.
There may well be a limit on technological development. Eventually, every civilization must fail, by entropy if by no other factor. This discussion of a universal limit on technological development, though, puts me in mind of the science fiction thinkers of the mid-twentieth century, at the height of the Cold War, who believed it all but inevitable that a society that develops nuclear weapons must eventually destroy itself. This was even proposed as a solution to Fermi’s paradox in its time: civilizations never achieve interstellar travel because they blow themselves back to the stone age, or into extinction, before they can progress so far. Those predictions, like the idea of a ULTD, are founded on the same pessimistic premises. We can no more claim that there is a universal limit than we can claim that there is not one, for we do not know enough, and our sample remains incomplete and of size one. Our answers will have to be a lot less wrong before we can contemplate a universal limit as anything but a failure of imagination.
