A central tenet of partisans of a free-market system is that it uniquely provides economic agents with the incentives that secure an optimal economic outcome. “I believe in markets” and “People respond to incentives” are among the mantras they tirelessly repeat. Sometimes these take a darker twist, as in former EU budget commissioner Günther Oettinger’s ominous “Markets will teach them.”
A recent, authoritative example of this view is the October 2018 report on the “Opportunity Cost of Socialism” published by the White House Council of Economic Advisers. Just before recalling Margaret Thatcher’s definition of freedom, it states: “In assessing the effects of socialist policies, it is important to recognize that they provide little material incentive for production and innovation.”
But the die-hard advocates of free markets do not belong exclusively to politics or policy circles. They can also claim the solid guarantee of academic respectability. A prominent case is Gregory Mankiw’s Principles of Economics (1998), possibly now the most widely adopted and influential textbook in economics; a quick glance at the book’s introduction presents the ten “principles” of the discipline, which define the relationship between incentive-driven individual decisions and the aggregate welfare generated by trade in competitive markets.
The present essay is an attempt at instilling some doubt about this view, while retaining the basic premises of its holders. We shall proceed in a mostly historiographical way, retracing some basic tenets of the otherwise complex theoretical elaborations of the notions of market efficiency and incentives. The picture that will eventually emerge turns out to be more nuanced, if not bleak.
First, the relationship between (perfect) competition and individual incentives remains unclear if one investigates it through the lens of traditional Neo-Classical (Walrasian) theory, which is still today a fundamental reference for the free-markets discourse. In that perspective, individuals only interact via the impact of their independent behaviors on market prices. Their “incentives” are therefore limited to determining their preferred amounts of goods and services to be bought or sold at given prices. Trades take place in a centralized marketplace and are mediated by a fictitious institution assumed to be perfectly able to monitor transactions and to enforce individual agreements. Such a representation of the market mechanism, we shall argue, shares several features with those of socialist economies, as acknowledged by several among the founders of Neo-Classical theory.
Second, the by-now common idea that economics involves the design of institutions providing economic agents with the “right” incentives to efficiently consume, save, work, and communicate was in fact originally developed by socialist scholars. In their effort to formalize centralized mechanisms for the allocation of scarce resources when individual agents can misbehave, they laid down the foundations of modern incentive theory. Somewhat paradoxically, the developments of this theory, centered on the design of economic institutions when agents hold private information, do not emphasize the role of free markets as mechanisms guaranteeing an optimal provision of incentives. On the other hand, extending the theory of perfectly competitive markets and their welfare properties to incorporate the design of economic incentives has proven a delicate matter. Recent theoretical results suggest that, in most situations in which the design of incentives is at the heart of the economic problem, little hope can be placed on the “invisible hand” of free-market forces.
1. The Common Theoretical Roots of Free-Market and Planned Economies
According to a conventional view, there exists a “neo-classical” body of doctrine, connecting the Éléments d’Economie Politique Pure by Léon Walras (1874), Value and Capital by John R. Hicks (1939), Foundations of Economic Analysis by Paul A. Samuelson (1947), and Theory of Value by Gérard Debreu (1959), to the anti-Keynesian revolution of the 1980s centered around the work of Robert E. Lucas. In this perspective, the main attribute of the theory is not so much its positive content – the theoretical possibility of determining equilibrium market prices under general assumptions on agents’ behavior – as its normative implications in terms of the desirability of a free-market system.
These implications are often summarized by the “First Theorem of Welfare Economics,” which identifies sufficient conditions for any equilibrium allocation, or distribution of resources, in an economy in which agents trade in perfectly competitive markets to be “Pareto-efficient”: this means that, starting from any such allocation, there does not exist a way to redistribute resources that makes some agents better off without making any other agent worse off. Although this theorem permits a broad range of interpretations, depending on our reading of the sufficient conditions, a prominent one acknowledges it as a modern reformulation of the invisible hand, in which the informal style of Adam Smith’s (1776) arguments gives way to the dry language of modern mathematics. The bottom line, conveyed by most economics textbooks since its inception, is that a perfectly competitive free-market system tends to implement an efficient allocation of economic resources.
But because it postulates a free-market system, the First Theorem of Welfare Economics offers little if any guidance for comparing the market mechanism with alternative allocation rules. The “Second Theorem of Welfare Economics” sets the stage for such a comparison. It takes as given the set of Pareto-efficient allocations, and provides sufficient conditions under which any such allocation can be achieved through trade in perfectly competitive markets, once the government commits to an appropriate lump-sum redistribution of resources – that is, a fiscal intervention based on individuals’ characteristics rather than on their market behavior.
The standard interpretation of this result is that it conceptually separates redistributive or equity concerns from efficiency concerns: “politicians” can arbitrarily select a Pareto-efficient allocation, make the required lump-sum transfers, and then let markets determine the price system that “decentralizes” this allocation, in the sense that optimal individual decisions given this price system lead, in equilibrium, to this allocation.
Yet the logic of the Second Theorem of Welfare Economics can equivalently be exploited to argue that the same Pareto-efficient allocation can be obtained by command, imposing social control over production. This remark was first made in the early 20th century by Vilfredo Pareto (1906) and his disciple Enrico Barone (1908), who emphasized that the market prices supporting a given Pareto-efficient allocation can be directly calculated by a Central Planning Board, which then delegates to each member of society the final choice over his consumption decisions and his supply of productive services. This view places at the center stage of the economic discipline an engineering problem of optimal control.
Anti-socialist authors like Ludwig von Mises (1920, 1922) fiercely criticized this idea, insisting on the impossibility of separating rational economic calculation from the private ownership of the means of production. A planned economy allegedly faces a “computational limit,” in that no planning office would be able to aggregate billions of individual trades and effectively calculate the implicit price of each traded commodity. Friedrich von Hayek (1935) argued that a decentralized market economy is computationally less demanding, because prices convey all the information relevant for individual decisions.
Socialist scholars such as Oskar Lange (1936, 1938), Abba Lerner (1944) and Fred M. Taylor (1948) convincingly contested these arguments. They showed how the above informational problems may be overcome in an alternative “market socialist” system – that is, an economic system in which there is no private ownership of the means of production, but the market mechanism still regulates the allocation of capital and consumption. Government agencies would only be required to adjust prices in response to “excess demand,” by instructing socialist managers to compete on markets as privately owned firms do. Theoretically, this market socialist economy could reach the same outcome, in terms of efficiency, as a decentralized market economy.
To better understand this approach, it is useful to recall that the traditional view of perfect competition in a market economy requires agents to take prices as given; equality between supply and demand is then guaranteed by a fictional market institution, the so-called “Walrasian auctioneer,” who observes net positions, that is, the discrepancy between demand and supply of each good in all markets, and thereby guides prices towards an equilibrium level. In a nutshell, market socialists convincingly argued that government agencies have no need to calculate in advance, and can instead reproduce the trial-and-error process followed by a Walrasian auctioneer to discover and implement equilibrium prices.
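To fix ideas, the auctioneer’s trial-and-error process can be sketched in a few lines of code. The toy exchange economy below – two goods, Cobb-Douglas preferences, and a rule that raises the price of a good in excess demand – is purely illustrative; all functional forms, parameters, and step sizes are assumptions made for the example, not part of the historical debate.

```python
def excess_demand(p, agents):
    """Aggregate excess demand for good 1 at price p (good 2 is the numeraire).

    Each agent has Cobb-Douglas utility x^a * y^(1-a) and endowment (e1, e2),
    so his demand for good 1 is a * wealth / p, with wealth = p * e1 + e2.
    """
    z = 0.0
    for a, e1, e2 in agents:
        wealth = p * e1 + e2
        z += a * wealth / p - e1
    return z

def tatonnement(agents, p=2.0, step=0.1, tol=1e-8, max_iter=10_000):
    """Adjust the price in the direction of excess demand until markets clear."""
    for _ in range(max_iter):
        z = excess_demand(p, agents)
        if abs(z) < tol:
            break
        p = max(p + step * z, 1e-9)  # raise the price when demand exceeds supply
    return p

# Two agents: (Cobb-Douglas weight on good 1, endowment of good 1, of good 2).
agents = [(0.5, 1.0, 0.0), (0.5, 0.0, 1.0)]
p_star = tatonnement(agents)  # approaches 1, the market-clearing price here
```

Starting from a price of 2, the rule converges to the market-clearing price (equal to 1 in this symmetric example), mimicking the groping process that market socialists argued a government agency could replicate just as well as an auctioneer.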
Overall, the above discussion establishes a theoretical equivalence between a market economy and market socialism as alternative mechanisms for the allocation of economic resources. Yet two fundamental issues remain.
First, this equivalence result relies on a purely technocratic perspective on decentralization, in which social relations of production do not play any role. A socialist planner may be keener on egalitarian distributions than the government of a free-market society, but no distinction arises regarding their relative efficiency at distributing available resources.
Second, the equivalence is established with reference to a very abstract view of the market mechanism for which Debreu’s Theory of Value (1959) offers the most rigorous and systematic representation. In his model of a market economy, trading is centralized: buyers and sellers meet at a central location, all prices are fixed by an auctioneer, and no exchange occurs until the equality between demand and supply of each good is achieved. Trade may occur over space and time and under uncertainty, but there is a complete set of markets for the trade of every contingent good. That is, a specific price can be quoted for the consumption of a commodity at every possible location, date, and state of nature. In particular, each agent has access to a complete set of insurance markets to hedge against risky events. Also, agents are informationally homogenous, so that no agent is able to take advantage of a less informed one. Finally, an agent’s behavior affects others’ welfare only through its effect on equilibrium prices. In other words, the decisions of an agent do not directly benefit or harm his neighbors: there are no “externalities.”
The first of these issues highlights the limits of any institutional comparison based on allocative-efficiency criteria only. In this essay, however, we focus on the second issue, which is crucial to evaluate recent trends in economic theory.
2. The Common Challenge of Realism: Dealing with Dispersed Information and Perverse Incentives
To start with, it is important to realize that extending the First Theorem of Welfare Economics to deal with a larger and empirically more compelling set of situations has proven to be an insurmountable task. A case in point is that of “incomplete markets,” in which individual trades are limited by the impossibility of accessing a complete set of insurance and financial markets. Markets might not exist, for instance, because there is no legal guarantee that contracts will be enforced, or because – and this is far more common – economic agents are asymmetrically informed. In seminal papers, Bruce C. Greenwald and Joseph E. Stiglitz (1986) and John D. Geanakoplos and Heraklis M. Polemarchakis (1986) have shown that market incompleteness typically yields “constrained inefficient” equilibria. That is, a public authority that takes as given the incomplete structure of markets can directly improve on market outcomes, making some individuals better off without making anyone else worse off. For instance, in the context of financial markets, such an improvement in the Pareto sense can be reached by redistributing the agents’ initial portfolios of assets. Thus public intervention ends up being welfare-improving even if the government does not have the instruments necessary to overcome the incompleteness that lies at the root of these inefficiencies.
These results fundamentally challenged what, in the famous definition of Kenneth J. Arrow and Frank H. Hahn (1971), is regarded as “the most important contribution of economics to social theory” – namely, the idea that market forces are the artifact of some invisible hand. At the same time, though, these results provide a benchmark for comparing the allocative role of markets vis-à-vis alternative institutions in a richer set of circumstances. That is, once the incompleteness of markets, the asymmetry of information between different agents, and the strategic interdependence of their behaviors are put at the center stage, is there a general way to evaluate different allocation mechanisms?
In principle, though, these imperfections should also constitute a severe obstacle for any socialist government. Take, for instance, a public good, from which individuals cannot be excluded and whose use by one individual does not reduce its availability to others. How can its efficient provision be achieved?
The key problem is that asking an agent to contribute to the financing of a public good according to the benefits he derives from it may induce him to misrepresent his preferences, which are typically not perfectly known to the government. Not surprisingly, socialist authors understood early on the need to take into account the dispersed information held by individuals and their incentives to manipulate it. In fact, in the attempt to design allocation mechanisms superior to the market mechanism in situations in which the latter is known to fail, they soon faced the need to formalize the role of incentive constraints in the allocation of resources. The modern discipline of “mechanism design” was born in this context.
The Polish-born economist Leonid Hurwicz (1973) was the first to introduce the by-now familiar distinction between informational incentive constraints, reflecting the problem of aggregating dispersed information, and strategic incentive constraints, reflecting the problem of controlling agents’ decentralized behavior. Following his seminal work, it has become customary to treat incentive constraints in addition to resource constraints in the definition of the allocation problem. In the words of Roger B. Myerson (2009): “In situations where individuals’ private information and actions are difficult to monitor, the need to give people an incentive to share information and exert effort may impose constraints on economic systems just as much as the limited availability of raw materials.” This has led to a reformulation of the criterion for comparing economic allocations: an allocation is “incentive-constrained Pareto-efficient” if it cannot be improved upon from the perspective of all agents, while staying within the set of allocations that satisfy both resource and incentive constraints.
The mechanism-design approach has proven to be extraordinarily useful in dealing with problems ranging from the design of auctions (William Vickrey 1961, Roger B. Myerson 1981) and the provision of public goods (Edward H. Clarke 1971, Theodore F. Groves 1973) to the design of income taxes (James A. Mirrlees 1971) and the regulation of public utilities (Jean-Jacques Laffont and Jean Tirole 1993). One of the key insights of these contributions has been to show that, unlike when resource allocation is subject to physical constraints only, there is now a tradeoff between redistribution and efficiency. For instance, a heavier taxation of high incomes brings about more equality, but discourages labor supply and eventually reduces the taxable income to be redistributed within society. From a theoretical perspective, the problem is not redistribution per se, but the fact that taxation can be made contingent only on observable outcomes – here, earned income – and not on the unobservable characteristics of individuals – such as their innate productivity levels. In such a context, lump-sum transfers can no longer be made contingent on all relevant economic variables, and taxation, being contingent on observable behavior only, leads to distortions in the allocation of resources.
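The flavor of these incentive-design results can be conveyed by a minimal sketch of Clarke’s “pivot” mechanism for the public-good problem discussed above. The equal cost shares, the three agents, and all the numbers below are assumptions chosen for illustration: each agent pays, on top of his cost share, a tax equal to the loss his report imposes on the others, which makes truthful reporting of one’s valuation a (weakly) dominant strategy.

```python
def clarke_pivot(reports, cost):
    """Provide a public good iff reported valuations cover its cost.

    The cost is split equally among the agents (an assumption of this toy
    version). Each agent also pays a Clarke tax equal to the loss that his
    report imposes on the others, so that truthful reporting is a (weakly)
    dominant strategy.
    """
    n = len(reports)
    share = cost / n
    provide = sum(reports) >= cost
    taxes = []
    for r in reports:
        # Net surplus of the other agents if the good is provided.
        others_surplus = (sum(reports) - r) - (cost - share)
        # Best decision for the others if this agent's report were ignored...
        without_me = max(others_surplus, 0.0)
        # ...versus what the others actually get under the chosen decision.
        with_me = others_surplus if provide else 0.0
        taxes.append(without_me - with_me)
    return provide, taxes

def utility(true_value, own_report, others_reports, cost):
    """Agent 0's realized utility when he reports `own_report`."""
    reports = [own_report] + others_reports
    provide, taxes = clarke_pivot(reports, cost)
    share = cost / len(reports)
    return (true_value - share if provide else 0.0) - taxes[0]

# One high-valuation agent (true value 6) facing two low-valuation ones.
provide, taxes = clarke_pivot([6.0, 1.0, 1.0], cost=6.0)
```

In this example the high-valuation agent is pivotal: the good is provided only because of his report, and he pays a tax of 2 on top of his cost share of 2; no misreport, higher or lower, can raise his utility above what truth-telling yields.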
3. The Self-Proclaimed Superiority of Free Markets
It is hardly necessary to emphasize the success, and the political exploitation, of the idea that “incentives matter” from the 1980s on. What is more surprising is that this common view coincided with the invigorated belief that a free-market system is best able to provide such incentives. In fact, markets are not among the most prominent applications of the mechanism-design approach, as the above list reveals: these pure allocation problems would be as relevant in a socialist economy as in a capitalist one, and the objective would be the same – namely, the determination of a social optimum, considered independently from any issue of decentralizing it through a price system. Hence there is no obvious sense in which either system should be more efficient at solving them than the other. If anything, it might be faithful to the spirit, if not to the letter, of market socialism to argue that a socialist system might be better suited to deal with the complex externalities between different sectors of the economy.
Whereas the market mechanism is not at the center stage of applications of the mechanism-design approach, several authors, including Hurwicz, have investigated the consequences of informational asymmetries on the efficiency of its functioning. A key impetus was provided by George A. Akerlof (1970), who argued that, left to its own devices, a market in which sellers have an informational advantage vis-à-vis buyers can unravel to a no-trade equilibrium. In his theoretical framework, the informational advantage concerns the quality of the good to be traded. Buyers only know the distribution of different qualities over the entire population of sellers and hence stand ready to trade at a unit price reflecting the average quality. This may however induce sellers of higher quality products to stay out of the market, as the market price is not high enough, thereby forcing the market price to fall even lower, as only lower quality products will be sold. Eventually, this process may lead to market breakdown with no trade occurring at equilibrium. Akerlof’s famous example was the market for “lemons” – or second-hand cars – but his analysis has been successfully applied to financial or insurance markets to understand market freezes, credit crunches, or underinsurance. Despite its intuitive appeal, however, the above example provides little guidance to assess the market behavior of uninformed agents. In particular, it does not take into account the economic role of mechanisms inducing agents to reveal their private information. What if sellers were left free to design their pricing rules?
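Akerlof’s unraveling argument is easy to simulate. In the sketch below – a stylized illustration in which qualities lie on a uniform grid, buyers value a good at 1.5 times its quality, and sellers at its quality, all assumptions made for the example – buyers repeatedly offer a price reflecting the average quality still on the market, the best sellers withdraw, and trade collapses towards the lowest qualities.

```python
def lemons_unraveling(qualities, markup=1.5, rounds=100):
    """Iterate the lemons dynamic: buyers pay `markup` times the average
    quality still on the market; sellers whose quality exceeds the price exit.

    Returns the sequence of prices and the sellers remaining at the end.
    """
    prices = []
    on_market = sorted(qualities)
    for _ in range(rounds):
        if not on_market:
            break  # complete market breakdown
        price = markup * sum(on_market) / len(on_market)
        prices.append(price)
        remaining = [q for q in on_market if q <= price]
        if len(remaining) == len(on_market):
            break  # the price supports every remaining seller: stable
        on_market = remaining
    return prices, on_market

qualities = [i / 100 for i in range(1, 101)]  # qualities 0.01, 0.02, ..., 1.00
prices, survivors = lemons_unraveling(qualities)
# The price falls round after round; only the worst qualities keep trading.
```

Even though every unit is worth more to buyers than to its seller, so that trading all of them would be efficient, the price spirals down round after round until only the very worst “lemons” remain on the market.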
The question calls for a theoretical understanding of the relationship between market forces and the potentially sophisticated mechanisms designed to control individual incentives, an issue that has become central in contemporary economic theory.
The most ambitious attempt to reconcile the insights from mechanism design with the neo-classical theory of markets is embodied in the work of Edward C. Prescott and Robert M. Townsend (1984), whose avowed purpose was to provide incentive-constrained versions of the First and Second Theorems of Welfare Economics for economies with information asymmetries, an approach that proved highly successful in macroeconomics. Their accomplishments, however, are mixed.
First, they confirm that in economies in which agents have private information about some of their characteristics that are directly relevant to their trading partners – such as the riskiness of an insuree for an insurance company – the First Theorem of Welfare Economics is sometimes void: a market equilibrium may fail to exist altogether. This can hardly be taken as a convincing defence of the market mechanism.
Second, and more importantly, it is doubtful that their decentralization results offer a convincing picture of a market economy. Indeed, and somewhat paradoxically on the part of dedicated defenders of supply-side economics, their depiction of the production sector remains a black box that reminds us of the Central Planning Board posited by market socialists. In a sense, the debate does not seem to have considerably moved on since the Mises–Hayek–Lange–Lerner controversy. Except that the very question of a comparison between the different systems disappeared from the literature and, to some extent, even from historical recollections. The market economy had meanwhile become the only possible world.
4. Where Does the Debate Leave Us?
The picture that emerges from this discussion is that, if one remains within the theoretical framework in which free market supporters operate, there is no clear sense of why the market should be better suited to provide adequate incentives to economic agents. As we tried to show, these concepts become elusive when we try to scrutinize them through the lens of current economic theory.
Perhaps this should not come as a major surprise from a Walrasian, or neo-Walrasian, perspective. As we explained above, this abstract representation of a market economy requires an extremely sophisticated system of institutions to perfectly monitor trades and punish any attempt at violating possibly very complex contracts. In particular, exchanges are supposed to take place in a centralized marketplace.
Yet, despite having been the most relevant attempt at formalizing the notion of the “invisible hand” throughout the 20th century, this perspective puts little emphasis on the design of economic institutions. In other words, an individual’s incentives to manipulate these institutions in his favor are not explicitly considered as part of the economic problem.
Eliminating the above restriction allows one to distinguish, from a conceptual viewpoint, the role of the planner from that of the government. While the former is an abstract notion representing the set of physical and informational constraints faced by all individuals, the latter refers to an economic agent whose incentives may, and typically do, stand in conflict with social welfare. This distinction is at the root of a somewhat different view of the market process, popularized in the post-war years by public choice theory. This view puts at center stage the need to explain institutional behavior in terms of the self-interested actions of agents interacting in the political arena: politicians, regulators, voters, lobbyists. In such a perspective, the desirability of a free-market system is an implication of the inefficiencies systematically generated by the perverse behavior of political actors.
In the words of Andrei Shleifer and Robert W. Vishny (1994): “Under socialism the government is much richer […] than under capitalism: it owns the cash flow of most or all the firms in the economy. As a result, the government can afford many more politically motivated inefficient projects that lose money than it could in a capitalist economy.” Competitive markets should then be thought of as a counter-power, or a threat, against the bureaucratic centralization of economic decisions.
Few general descriptions of market forces themselves, however, are provided in these political economy-based approaches. A main argument appears to be that market systems delegate the task of providing incentives – that is, of eliciting information or inducing effort – to a multitude of firms, or “principals.” Their small size and the fierce competition they face would guarantee, the argument goes, that rents tend to be eliminated. At the same time, a liberal market regime is based on the firms’ freedom to choose from the largest possible set of contracts so as to capture mutual gains from trade, and to exploit individuals’ “responsiveness to incentives.” However, revisiting these issues in the light of the contemporary theory of incentives turns out to be a very delicate task. The analysis of situations in which different firms, or principals, compete by proposing different allocation mechanisms stands as the main challenge for modern incentive theory and its developments. Making extensive use of game-theoretic concepts, this recent research area provides novel insights on the relationship between markets and incentives. Although its theoretical foundations are rather complex, the research questions are basic: is competition between different incentive schemes beneficial? How do people respond to incentives when these are designed by competing parties?
As a matter of fact, most recent results sharply contradict the claim that enlarging the contractual opportunities available to competitors ends up being welfare-enhancing. Indeed, a large number of contributions have recently emphasized how, by offering sophisticated allocation mechanisms, firms may expropriate the resources of their counterparts and, at the same time, erect barriers to prevent the entry of competitors, even under a regime of free competition (Takuro Yamashita 2010, Michael Peters and Balázs Szentes 2012, Michael Peters and Cristián Troncoso-Valverde 2013). As a result, the First Theorem of Welfare Economics, which remains the cornerstone of the free-market advocates’ argument, dramatically fails. Indeed, the very notion of equilibrium loses its predictive power, and the efficiency of equilibrium is no longer guaranteed, even in an incentive-constrained sense. For instance, collusive or anticompetitive outcomes can emerge, leading to an insufficient provision of insurance or credit to final consumers. The common view that public authorities should confine themselves to eliminating all impediments to trade and enforcing complete freedom of contract thus appears – in light of the recent developments of economic theory, and despite its deep historical roots – more dubious than ever.
At the end of this long historiographical journey, the current state of the discipline may look more complex and contradictory than one could expect at first glance. At the same time, it is the rational development of most of its arguments that lays the foundation for a critical analysis, and for the possible search for alternative systems. In this spirit, we share the wish that Nobel Prize recipient Myerson expressed in his 2007 Hurwicz lecture: “Of course, the later twentieth century provided much evidence of capitalist economic success and socialist economic failure, but a theorist should not give up a good question simply because there seems to be evidence to answer it empirically. If our theories do not give an adequate answer, then we must continue working to develop theories that can, because one can always propose new institutional structures that do not exactly match those for which we have data. If we have no general theory about why socialism should fail, then we have no way to say that greater success could not be achieved by some new kind of socialism that is different from the socialist systems that have been tried in the past.”