"For Love of Neighbor" is a new documentary film offering a hopeful vision for Christian engagement in politics. Click here to learn more.

A Calculation Debate for the 21st Century

A fervent discussion about economic systems engulfed some economists’ hearts and minds a century ago. Popularly known as the “socialist calculation debate,” this multi-decade discourse was really an argument about the efficiency outcomes of different resource allocation mechanisms. Economists F. A. Hayek and Ludwig von Mises argued that resource allocation via centralized control would be inefficient relative to decentralized action. They contended that command economies would lack the kind of information necessary to formulate an efficient, welfare-maximizing distribution of resources because they would fail to find an adequate substitute for the price mechanism that evolves in decentralized market economies.

Other thinkers rejected this claim, positing that centralized allocation could achieve welfare outcomes at least as good as market economies. Economist Leonid Kantorovich used his newfound technique of linear programming to show that, with the right mathematical prowess, centralized economies could theoretically achieve the same results as competitive markets could. Others, such as Oskar Lange and Abba Lerner, posited that a central directorate could simulate a market economy with trial-and-error methods.

The collapse of the Soviet Union and the turn in economics from theoretical questions to empirical and statistical problems gradually rendered the original calculation debate obscure. However, a modern variant of this exchange has emerged. Thanks to considerable advances in computer technology and computational power over the past half century, some popular writers and academic economists argue that better algorithms, data access, and computing power make the prospects for centralized resource allocation more feasible.

Both centralized and decentralized allocation mechanisms can be better deployed with more information and computational capacity. However, the amount of information and computing power available is arguably conditional on the prevailing mechanism. Indeed, this is the entire point behind the original calculation debate. It is possible to compare the algorithmic efficiencies of different mechanisms by evaluating their respective time complexities, a concept afforded by developments in computer science.

The time complexity of an algorithm describes how the number of computational steps required to compute a solution grows as the number of inputs increases. Some algorithms have a time complexity that only scales linearly or as some fixed power of the number of inputs. For other algorithms, each unit increase in the number of inputs can double the time necessary to compute a solution. The former class of “polynomial-time” algorithms is generally considered time efficient, while the latter class of “exponential-time” algorithms is usually deemed inefficient. Mathematical problems whose best known algorithms run in exponential time become computationally intractable even for relatively small input sizes.
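
To make the contrast concrete, here is a small Python sketch of my own (the routines and step counts are purely illustrative, not drawn from the debate itself) that counts the operations a quadratic-time procedure and an exponential-time procedure perform as the input grows:

```python
# Illustrative sketch only: contrast how step counts grow for a
# polynomial-time routine versus an exponential-time one.

def polynomial_steps(n: int) -> int:
    """Count the steps of an O(n^2) routine, e.g., comparing every pair of n items."""
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

def exponential_steps(n: int) -> int:
    """Count the steps of an O(2^n) routine, e.g., enumerating every subset of n items."""
    steps = 0
    for _ in range(2 ** n):  # each iteration stands in for examining one subset
        steps += 1
    return steps

for n in (4, 8, 12, 16, 20):
    print(f"n={n:>2}  polynomial: {polynomial_steps(n):>9,}  "
          f"exponential: {exponential_steps(n):>11,}")

# The polynomial column grows modestly; the exponential column roughly
# doubles with every additional input, which is why exponential-time
# problems become intractable even at modest sizes.
```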

We can think of economic systems from this algorithmic-computational perspective. Resource allocation mechanisms are algorithms that take a given supply of goods and services as inputs and direct them to consumers, firms, and other agents in a manner that achieves some kind of welfare-increasing equilibrium. Thus, time complexity offers another way to analyze the efficiency of economic systems.
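
As a minimal framing sketch, under assumptions of my own choosing (two agents, two goods, a naive equal-split rule, and a utilitarian welfare score), a mechanism can be written as a function from supplies and preferences to an allocation, which a welfare measure then evaluates:

```python
from typing import Callable, Dict, List

# A mechanism maps a supply of goods and agents' valuations to an allocation,
# where allocation[i][g] is the quantity of good g assigned to agent i.
Mechanism = Callable[[Dict[str, float], List[Dict[str, float]]], List[Dict[str, float]]]

def equal_split(supply: Dict[str, float],
                valuations: List[Dict[str, float]]) -> List[Dict[str, float]]:
    """A deliberately naive mechanism: divide every good evenly among agents."""
    n = len(valuations)
    return [{good: qty / n for good, qty in supply.items()} for _ in valuations]

def total_welfare(allocation: List[Dict[str, float]],
                  valuations: List[Dict[str, float]]) -> float:
    """A utilitarian score: sum each agent's valuation of the bundle it receives."""
    return sum(values[good] * bundle[good]
               for bundle, values in zip(allocation, valuations)
               for good in bundle)

supply = {"grain": 100.0, "fuel": 40.0}
valuations = [{"grain": 1.0, "fuel": 3.0},
              {"grain": 2.0, "fuel": 1.0}]
print(total_welfare(equal_split(supply, valuations), valuations))  # 230.0
```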

In general, both centralized and decentralized mechanisms operate with exponential-time complexities. Without additional assumptions or provisos, optimizing the allocation of resources is a computationally difficult problem, in the sense that, as the economy grows, any welfare-maximizing algorithm takes an exponentially longer time to compute an equilibrium. This results not from some institutional or organizational failure in the allocation mechanism, but from the mathematical structure of the problem the mechanism solves. This is not to say that the institutional and organizational features of specific systems could not contribute to differential changes in the computational efficiency of their underlying allocation processes, but they cannot transcend the polynomial-exponential divide.
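
One illustration of that structure, again a toy sketch under my own assumptions rather than a claim about any particular economy: when goods are indivisible and agents value bundles (so complements matter), the only fully general way to guarantee the welfare-maximizing assignment is to check every candidate, and the number of candidates grows exponentially in the number of goods, no matter who runs the search.

```python
from itertools import product

def best_allocation(n_goods, bundle_values):
    """Exhaustively search every way to assign indivisible goods to agents.

    bundle_values[i] is an arbitrary function scoring the bundle agent i
    receives, so complementarities are allowed and no good-by-good shortcut
    applies. With n agents there are n ** n_goods candidate assignments,
    so the search time grows exponentially in the number of goods.
    """
    n_agents = len(bundle_values)
    best = (float("-inf"), None)
    for assign in product(range(n_agents), repeat=n_goods):
        bundles = [frozenset(g for g in range(n_goods) if assign[g] == i)
                   for i in range(n_agents)]
        welfare = sum(bundle_values[i](bundles[i]) for i in range(n_agents))
        best = max(best, (welfare, assign))
    return best

# Toy economy: agent 0 only values goods 0 and 1 together (complements);
# agent 1 values any good it receives at 2 apiece.
bundle_values = [
    lambda bundle: 7.0 if {0, 1} <= bundle else 0.0,
    lambda bundle: 2.0 * len(bundle),
]
print(best_allocation(3, bundle_values))  # (9.0, (0, 0, 1))
```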

With other structural considerations, complexity analysis can yield more precise information about different mechanisms’ time-efficiency potential. In practice, some polynomial-time algorithms can find approximations of the true optimum. Moreover, additional assumptions about the composition of the underlying economy—for example, the functional specifications of consumer preferences or the returns to scale in production—can dramatically improve the prospective time efficiency of resource allocation. Remaining cognizant of how the economy’s composition or structure can improve allocative efficiency is helpful, but it comes at the expense of generality and, in the case of approximation schemes, can be costly on the scale of modern economies.
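
For instance, if we assume divisible goods and linear utilities (an assumption of this sketch, not of the broader argument), welfare maximization reduces to a linear program, a class of problems solvable in polynomial time by standard methods. The snippet below uses scipy’s linprog to make the point with made-up numbers:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative assumption: two agents, three divisible goods, linear utilities.
# Decision variables x[i, g] = quantity of good g allocated to agent i,
# flattened into a single vector of length 2 * 3.
utilities = np.array([[1.0, 3.0, 2.0],   # agent 0's per-unit valuations
                      [2.0, 1.0, 4.0]])  # agent 1's per-unit valuations
supply = np.array([10.0, 5.0, 8.0])      # available quantity of each good

n_agents, n_goods = utilities.shape
c = -utilities.flatten()  # linprog minimizes, so negate to maximize welfare

# Supply constraints: for each good g, the quantities handed out sum to at most supply[g].
A_ub = np.zeros((n_goods, n_agents * n_goods))
for g in range(n_goods):
    for i in range(n_agents):
        A_ub[g, i * n_goods + g] = 1.0

result = linprog(c, A_ub=A_ub, b_ub=supply, bounds=(0, None), method="highs")
print(result.x.reshape(n_agents, n_goods))  # each good flows to its highest valuer
print(-result.fun)                          # maximized utilitarian welfare: 67.0
```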

We can still glean insight from time complexity analysis purely on its own terms. Broadly equivalent time efficiencies across different mechanisms entail that neither competitive markets nor optimized supercomputers can computationally outstrip the other based on algorithms alone. This equivalence also implies that, from the perspective of time efficiency, there are no diminishing returns to industrial concentration in a decentralized mechanism or to partitioning the computation of allocations in a centralized mechanism. Future research may yield finer computational distinctions, but for the moment, we must remain relatively epistemologically agnostic regarding time complexity’s implications for comparative economic systems.