Piketty and thinking about economics

There is a new economics rock-star touring the US by all accounts, and his name is Thomas Piketty. More precisely, the star of the show is Piketty's Capital in the Twenty-First Century, a 700-page volume on wealth distribution in some 30 countries, built on decades and centuries of data.

I have not yet read it, but I wanted to note that the last time a piece of economic research got this sort of attention, it was also a long book, it was also a historical study, and it was all about data and looking at the world through the historical lens. That time it was Rogoff and Reinhart's book on national debt: This Time Is Different.

Could it be that books, and not papers, are the way to shape the discussion in economics? Is the grind of academic economics counter-productive? I like this sort of scholarship, or let me qualify that: I like how Piketty describes his form of scholarship and his vision for economics:

Capital in the Twenty-first Century, Piketty makes clear, is his notion of what economics scholarship should look like: combining analyses of macro (growth) and micro (income distribution) issues; grounded in abundant empirical data; larded with references to sociology, history, and literature; and sparing on the math. In its scale and scope, the book evokes the foundational works of classical economics by Ricardo, Malthus, and Marx—to whose treatise on capitalism Piketty’s title alludes. (The Chronicle, 17 April 2014)

On that note I'm off to order my copy.

Economics as engineering III: Carnegie stories

Episode I and Episode II; Background chronology of economics at Carnegie.

The “economics and engineering” line of argument is part of economists' rhetoric. While they have used it primarily as a metaphor to describe their practices (formalization, modeling, market design, problem solving, policy expertise), historians have documented the substantial impact engineering as a science, practice or culture has had on the formation of economics since the 19th century. For instance, they have studied how engineering was instrumental in shaping the “New Economics” economists built at MIT between the 1940s and the 1960s. In a workshop organized by fellow blogger Pedro Duarte on the history of macroeconomic dynamics last summer, Judy Klein and Esther-Mirjam Sent presented new evidence of how New Classical macroeconomic modeling was heavily influenced by the Carnegie engineering mindset (video of their talks here). In particular, they show how Lucas, Prescott and their colleagues built upon the Cold War “modeling strategies” previously devised by economists and engineers under the supervision of the military.

Klein's take on the rise of New Classical economics is especially interesting. She aims to explain how the demands of military patrons in the context of the Cold War turned some branches of mathematics into a “science of economizing.” The solutions yielded by production planning or warfare models had to be expressed as simple, i.e. implementable and computable, decision rules. This requirement forced applied mathematicians, engineers and economists working on such projects to rethink their “modeling strategies.” Klein thus demonstrates that the “modeling strategies” Carnegie researchers crafted for their clients proved especially fit for economic use: computational and practical constraints shaped Bellman's dynamic programming techniques, Simon's certainty-equivalence theorem and Muth's notion of rational expectations, three pillars of New Classical macrodynamics.

Bellman's research interests evolved hand in hand with the warfare between the US and the USSR, Klein explains. His first assignment was to work on the best allocation of the 9 US nuclear bombs in case of a strike. The solution had to be designed as a decision rule telling the bomber how to reoptimize after each move to inflict the maximum damage on the enemy. As bombs, missiles and other war equipment multiplied in the second half of the 50s, Bellman switched to inventory control and the determination of optimal military production in the face of uncertain demand. In the wake of the launch of Sputnik, he concentrated on the design of optimal control trajectories for fuses and missiles. Again, his models were constrained by the requirement that the resulting decision rules accommodate uncertainty, be operational and be quickly computable. It was in this context that he fashioned his dynamic programming method, a more tractable alternative to the refined type of calculus of variations enabled by the 1962 translation of Pontryagin's book. Bellman envisioned his clients' control issue as a multi-stage decision process, in which the state of the environment at each stage determined which control variables should be used and how (the policy decision). He managed to show that solving for the optimum value of the objective function was equivalent to solving for the optimal policy and then deriving the value of the function, a protocol much easier to compute and more useful for his clients.
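
Klein's point about computability is easier to see with the recursion itself on the page. As a minimal sketch in modern textbook notation (the symbols are ours, not Bellman's):

$$
V(s_t) \;=\; \max_{c_t}\,\Big[\, u(s_t, c_t) \;+\; \beta\, \mathbb{E}\big[V(s_{t+1})\big] \,\Big],
\qquad s_{t+1} = f(s_t, c_t, \varepsilon_{t+1}),
$$

where $s_t$ is the state of the environment at stage $t$, $c_t$ the control, $u$ the per-stage payoff, $\beta$ a discount factor and $\varepsilon_{t+1}$ a random shock. Rather than optimizing over entire time paths at once, the modeler solves stage by stage for a policy rule $c_t = g(s_t)$ and recovers the value of the objective afterwards, which is what made the resulting decision rules operational and quickly computable for Bellman's clients.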

Bellman's protocol was immediately implemented by the economists and engineers recruited at the newly established Graduate School of Industrial Administration of the Carnegie Institute of Technology. During the fifties, the Air Force, then the Office of Naval Research, asked Herbert Simon, former engineer Charles Holt, Franco Modigliani and William Cooper to devise decision rules aimed at minimizing some plants' production and inventory costs. Those rules had to be implementable by non-technicians and easily computable, so that approximate solutions were preferred to abstract ones. As a result, the Carnegie team worked in the spirit of “application-driven theory,” in Cooper's words: you start with a problem, you develop a solution, and then the theory comes from the generalization of that solution. In particular, they resorted to reverse engineering: knowing from previous work on servomechanisms that the use of Laplace transforms made the calculus much easier, they looked for the cost function amenable to such results, and found that it had to exhibit a quadratic form. The quadratic cost function had yet another virtue. Since the models had to include some forecast of the (uncertain) future demand, Simon demonstrated that, in the case of a quadratic objective function, uncertain state variables could be approximated by their unconditional expectation. This certainty-equivalence theorem spared the modeler the task of handling the whole probability distribution of future variables and performing long and complex computations. Reflecting on his team's modeling strategy, Simon came up with the idea that, just like scientists, entrepreneurs confronted with limited information and computational abilities were not maximizing, but looking for satisficing rules of behavior. They thus exhibited bounded rationality. One of Simon's PhD students working on the project, John Muth, however, interpreted their experience in the opposite way. While Simon had concluded that economic models assume too strong a form of rationality, Muth argued that “dynamic economic models do not assume enough rationality.” Entrepreneurs' expectations, “since they are informed predictions of future events, are essentially the same as the predictions of the relevant economic theory,” he famously explained in 1960.
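
The computational relief offered by Simon's theorem can be stated in one line. As a sketch with notation of our choosing (a stripped-down production example, not Simon's original formulation): suppose production $c_t$ is chosen to minimize the expected quadratic cost of missing an uncertain future demand $d_{t+1}$,

$$
\min_{c_t}\; \mathbb{E}\big[(c_t - d_{t+1})^2\big]
\;=\; \min_{c_t}\; \big[\, c_t^{2} - 2\,c_t\,\mathbb{E}[d_{t+1}] + \mathbb{E}[d_{t+1}^{2}] \,\big],
$$

whose first-order condition gives $c_t = \mathbb{E}[d_{t+1}]$. The higher moments of demand drop out of the decision rule entirely, so the modeler can plug in the point forecast as if it were certain instead of carrying the whole probability distribution through the computation.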

While Pontryagin's type of dynamic optimization was immediately translated into the growth-theoretic modeling developed at Stanford and MIT, Bellman's dynamic programming, combined with Kalman filtering, rational expectations, and the certainty-equivalence theorem, was the “modeling strategy” chosen by Carnegie economists. Lucas first used Muth's framework in a 1966 paper aimed at studying individual investment decisions, then in his and Prescott's 1971 model of firms facing random shifts in the industry demand curve. Bellman's protocol enabled them to determine the optimal time path of investment and prices, a technique that subsequently spread through macroeconomic work. As Esther-Mirjam Sent emphasized, opposing “freshwater” science to “saltwater” engineering is thus misleading: both modeling styles heavily drew upon modeling strategies initially developed for engineering purposes.

Klein's catchword “modeling strategy” is extremely important in that story. It conveys what the turn to rational expectations meant for those Carnegie economists immersed in a Cold War engineering zeitgeist. That economic agents were viewed as a collection of decision rules reflected modelers' daily practice. The normative decision rules Carnegie economists developed for their military and private clients eventually became a positive representation of how economic agents behave, so that it made sense to assume that a communism of models existed between the modelers and the agents. The modeling strategies chosen by New Classical economists had been previously designed for practical purposes, and their specific pattern therefore framed economists' research and policy conclusions. The stabilizing quality of agents' rules of behavior was built into those models exhibiting a new kind of rationality and a multistage decision process, while Pontryagin-type protocols led modelers toward alternative notions of equilibrium and thus diverging policy prescriptions. All this suggests that the modeling and policy aspects of the macro debates raging in the 70s and 80s are ultimately impossible to disentangle.

Macrowars, economists' narratives, and my dreamed history of macro

Economists' macro stories

The latest installment in the enduring blog debate over microfoundations has taken a decisive historical turn. Last December already, Paul Krugman gave his own account of how microfoundations came to be in the 70s, and why they unduly spread over subsequent decades, and so did Stephen Williamson. Economists' need to ground their methodological debates in self-made historical narratives is in itself an interesting feature for historians. But in a recent post dealing with the “Faustian bargain” New Keynesians may have sealed by endorsing the New Classical microfoundational program, Simon Wren-Lewis asked a more direct historical question:

Is this how it happened? It is true that New Keynesian models are essentially RBC models plus sticky prices. But is this because New Keynesian economists were forced to accept the RBC structure, or did they voluntarily do so because they thought it was a good foundation on which to build?

And Noah Smith added a second one:

“are macroeconomists doomed to always 'fight the last war'? Are they doomed to always be explaining the last problem we had, even as a completely different problem is building on the horizon?”

It thus seems the narratives economics bloggers are after are of two kinds:

  1. Brain stories about how and why economic ideas (microfoundations, representative agents, technology-driven business cycles, Calvo-pricing types of rigidities) emerged. As Bob Lucas puts it in a reminiscence on Kydland and Prescott's Time to Build paper, “how did they ever think to put all these pieces together in just this way?” Not quite sure that the appealing “new-ideas-emerge-in-reaction-to-the-latest-economic-crisis” pattern, widely used by economists and historians alike, is an accurate representation of how macroeconomic ideas develop.

  2. War stories about how such ideas gained traction, spread, and eventually became the exclusive standard for academic publication, forcing their opponents into defensive positions. A possibility that Wren-Lewis entertains (and eventually rejects) is that New Keynesians' turn to microfoundations was a strategy “to be reintroduced into the academic mainstream.” Likewise, Krugman mentions a journal “blockade.”

For once, let's refrain from complaining again about the disregard for HET scholarship that economists' half-baked attempts at self-made/self-serving history reveal, and let's ask instead: do historians have any such brain and war stories to offer? And if not, is it possible and valuable at all to write such stories?

“How did they ever think to put all these pieces together in just this way?”

The recent book Microfoundations Reconsidered, edited by fellow blogger Pedro Duarte and by Gilberto Lima, exemplifies both historians' ability to tell rich and complex brain stories and their failure to produce the war stories that are crucial to understanding the current state of macro.

The authors' concern is not merely to document the intellectual development of those economists usually associated with the dominant breed of microfoundations – the formal derivation of aggregate relations from the inter-temporal optimizing behavior of rationally-anticipating agents – but more broadly to demonstrate how pervasive the question of the relationship of micro to macro had been since Ragnar Frisch coined the two terms in the 30s. Several chapters trace alternative microfoundational programs back to the 50s, when the word “microfoundations” was coined. Long before Phelps intended to “found a theory of aggregate supply” on “a new kind of microeconomics of production, labor supply, wage and price decisions” and Lucas initiated what would quickly become an “eliminative” microfoundational program – the word chosen by Kevin Hoover to emphasize the centrality of the representative agent – the general equilibrium framework on which the “neoclassical synthesis” was built had been seen as a bridge between micro and macro and had fueled sustained discussions over the relationship of aggregate equations to individual behavior. In his opening survey of alternative “microfoundational programs,” Hoover also details Lawrence Klein's long-standing attempts to ground the aggregate relationships of his large-scale macroeconometric models in microeconomic theory without endangering their empirical tractability. Other papers investigate how figures ranging from Marschak, Koopmans, Modigliani, Hicks, Morgenstern, Patinkin and Samuelson, to the usual New Classical suspects, up to Goodfriend, King, Blanchard and Woodford envisioned the links between macro and micro.

While these contributions do not answer Wren-Lewis and Smith's questions, they clearly belie the kind of crude narrative provided by Williamson: “when the Phelps volume came out in 1970, micro and macro looked like they came from people living on different planets, and you had to convince people that it made sense to take ideas from Mars and use them on Venus.” As a consequence of the intellectual hotpot of the 50s and 60s, a wider variety of approaches to microfoundations than is usually acknowledged was competing in the 70s, in the Phelps volume as in the macro literature at large. Economists were trying to derive consumption, investment and labor equations from formalized, data-backed individual behaviors, working on adequate representations of search behavior, and dealing with the consequences of the Sonnenschein-Mantel-Debreu results. They were modeling heterogeneous agents and coordination issues but wanted tractable models to estimate for policy evaluation purposes.

Historians could yet go even further in painting the diversity of the postwar intellectual landscape. Economists' tendency to pitch the history of macro as a war between two unified groups (Neo-Keynesians vs Monetarists, then New Classicals vs New Keynesians) has created a biased focus on key characters such as Lucas, Sargent, etc., to the detriment of a wide range of research, such as disequilibrium modeling or sunspot theorists' attempts to counter policy-ineffectiveness propositions while maintaining a general equilibrium framework with rational expectations. That such a variety of approaches coexisted in the 70s makes the New Classical takeover of the 80s even more dramatic.

Other pieces of HET scholarship show how difficult key protagonists' agendas are to establish. They suggest that those agendas were presumably muddled and sinuous even to their owners for years, regardless of the apparent straightforwardness or simplemindedness emanating from Nobel reminiscences, or, say, Krugman's account of the birth of RBC:

But many economists had so committed themselves to the idea that Keynes was dead and rationality roolz that they simply dug in deeper. Rationality-based microfoundations must be right; if their microfoundations couldn’t explain why nominal shocks have real effects, then nominal shocks must not have real effects – it’s all real shocks. And so real business cycle theory was born.

Warren Young's combined analysis of protagonists' recollections, unpublished drafts and referee reports sketches an altogether more tortuous path to Time to Build. Throughout the 70s, Kydland and Prescott chased a lot of rabbits in dozens of overlapping and changing drafts: replacing optimal control with a new way of evaluating policy rules, writing up dynamic equilibrium business cycle models, describing the cyclical properties of time series, finding impulse and propagation mechanisms, devising dynamic optimal taxation schemes. Accordingly, they kept combining monetary and technology shocks, switching back and forth between dynamic game-theoretic, general equilibrium and growth models, models with several heterogeneous agents or a single representative agent, with one or several sectors, with or without wage stickiness or government expenditures.

How to win a science war?

Historians, then, do provide illuminating stories of macroeconomists' intellectual development. Yet the crucial issue is not how these ideas came into existence, but how they spread and became dominant. More specifically, the ultimate puzzle is how a bunch of fewer than 10 guys – Hoover notes that Lucas, Barro, Sargent, Wallace, Sims, Kydland and Prescott together account for 44 of the 81 articles in his anthology of New Classical macro – who agreed neither on the theoretical explanation of business cycles nor on the methods to confront those theories with facts (structural estimation? calibration? VARs?) managed, in less than 10 years, to appear as carrying a common program, hijack a whole discipline and set the theoretical, empirical and institutional agenda. While everyone has their own opinion, there is no documented story of the offensive and its counterstrikes to be found. I believe writing such a history would require:

- a decompartmentalization of our field, in particular writing the history of macro and econometrics together. Stories which beautifully weave together economists' econometric practices, their theoretical understanding of individual behaviors and aggregate fluctuations, their approach to policy, and the wider transformations of the US policy and political context from the 30s to the 60s already exist. They show that macroeconomists have always been empirically minded. But I wonder to what extent the ties between macro and econometrics were of a different nature after 1970. Economists were not merely using new techniques to test new models; they were coproducing them, and fighting the influence war on both fronts simultaneously. And since the key to the development of macro in the 70s and 80s seems to be the Lucas critique, a better understanding not only of how it was conceived but also of how it was understood, spread and accepted is required. I haven't yet read Duo Qin's new book on the history of econometrics after 1970, but it looks promising.

- a greater awareness of the intellectual and institutional ecology of specific places. Understanding the genesis of rational expectations is not possible without a clear sense of what kind of intellectual hothouse Carnegie was from the 50s to the 70s (see all the works listed here, and follow closely Judy Klein's research on the Cold War and the Carnegie economics-as-engineering style as a cradle for macrodynamics). Similarly, understanding macro in the 70s and 80s may require looking more closely at Minnesota, where Sargent, Sims, Wallace and Prescott were interacting in the dissertation committees of 2013 Nobel laureate Lars Hansen, Larry Christiano and Martin Eichenbaum, among many others, teasing each other's econometrics with fancy “don't regress, progress” jingles (see note 36), and actively contributing to the monetary policy views developed by the Federal Reserve Bank of Minneapolis. Studying what the graduates were taught there may be a good idea. For New Keynesians, the crucial place may be the NBER after it was reorganized under the leadership of Martin Feldstein, in particular its program on economic fluctuations, headed by Bob Hall.

- to drop the usual focus on a limited set of key academic papers. Economists' recollections are rife with references to textbooks, readers, programmatic introductions, and the like: what if the impact of Sargent 1979, Sargent and Lucas 1981, Sargent 1987, Blanchard and Fischer 1989, Mankiw and Romer 1991, and Woodford 2002 was in the end as decisive as that of Lucas 1972 or Kydland and Prescott 1982?

- to study the changes in the reward structure of the discipline, the referee process, and the academic publishing industry (see what economists say about the need to provide microfoundations to publish in a top journal).

- to look beyond departments of economics. What about other research bodies (NBER, Brookings, IMF, etc.), central banks, policy makers, and the media? Who organized seminars, controlled discussion forums, provided publication outlets (count how many New Classical papers were first published in outlets edited by the Minneapolis Fed)? Who was whispering in the ear of such and such central banker? Who governed the Council of Economic Advisers?

From wishful thinking to active writing: missing data and pigeonholing

Granted, such wishful thinking is of little use, since we don't have access to the data necessary to write such stories. While published articles and books are the first stone on which to build our narratives, they are far from sufficient. Roy Weintraub's comments on Tom Scheiding's thoughtful post, as well as Ezra Klein's complaint against academic journals, explain why the published paper is frustrating material. It freezes an entire intellectual and institutional process into a highly constrained form. It collapses the initial identification of a puzzle, the compromises between co-authors, the technical trials and errors, successive drafts, workshop discussions, suggestions, revisions, the impact of peer review, the expectations of the publisher. Disentangling these various elements requires access to preliminary drafts, correspondence, workshop minutes, grant applications, referee reports. Tracking the spread of economic ideas similarly demands not only bibliographic work but also careful investigation of departmental and institutional records, which usually comes with a 30- to 50-year lag. But the first thing is to make sure it will come, by worrying about where the papers of Kydland, Prescott, Wallace, Romer and Fischer will be deposited, and how the archives of the NBER and of US and worldwide central banks are curated.

There's one final move to contemplate. Economic historians/philosophers/methodologists are driven by a variety of aspirations, such as assessing the theories and practices of contemporary economists, using fine-grained studies of “past” texts as another means (besides modeling) to contribute to contemporary public debate, or highlighting the sequence of events and the underlying individual and collective dynamics which led economics to its current state of affairs, regardless of its desirability. While this doesn't reflect any sort of pecking order, this post deals only with the latter kind of history, and a historian writing with different aspirations will rightfully advocate altogether different investigation methods. Which is not a problem. The problem is with the eternally simmering debate between advocates of more intellectual/internal/analytical and more institutional/contextual/external history. That debate is not only utterly meaningless, but also potentially harmful for the quality of our work. Obviously, “contextual” historians have read and reflected on Lucas 1972 as well as on Christiano, Eichenbaum and Trabandt 2013. Obviously, “analytical” historians know that the theories developed by economists were influenced by the ideas and practices of their spouses, of central bankers, prime ministers, military bodies, foundations. And answering any of the questions listed above obviously requires the combination of both perspectives, if they can be separated at all. Less pigeonholing and more collaborative work is thus in order.

Thomas Scheiding: A history of scholarly communication in economics

We invited Thomas Scheiding from Cardinal Stritch University to review what we know about the scholarly communication process in economics. Tom has written forcefully on the history and economics of economic literature (see, for instance, his 2009 JEM article). His latest is a study of the scholarly communication process in physics (an article in Studies).

The scholarly communication process is one of those underappreciated aspects of a discipline that determine the discipline's boundaries, its influence on public policy, and its trajectory. This process also determines which ideas reign supreme, legitimizes certain research methods, and rank-orders the contributions made by scholars. The twentieth century was a time of intense change in the scholarly communication process - change fueled by the expansion of the academic system. This change was certainly evident in the discipline of economics.

The scholarly communication process in economics today can be characterized as a rank-ordered set of specialty research journals and a smaller set of top-ranked general journals. The number of journals in economics is vast, with an estimated 600 journals in 2000 (approximately half of them from the United States). To put this number into perspective relative to the size of the discipline, one scholarly journal in economics exists for every 37 members of the American Economic Association (AEA). The emergence of this type of scholarly communication process in economics isn't by any means natural. Rather, when universities halted the establishment of journals that appealed to generalists in the early twentieth century, this had the effect of directing scholars' research to either small-circulation, newly established specialty journals or the small number of large-circulation, more established generalist journals. Had there been more generalist journals, or had researchers been encouraged to distribute their research via the monograph in the second half of the twentieth century, there would have been a greater need for indexing and abstracting services to organize the widely scattered research articles. Furthermore, had there been more generalist journals, there would have been less of an incentive for specialization by researchers and for the formation of specialized research communities in universities.

The three major journals are the American Economic Review (AER), established in 1911, the Journal of Political Economy (JPE), established in 1892, and the Quarterly Journal of Economics (QJE), established in 1886. These are three generalist journals that collectively have published most of the notable economic theories of the twentieth century. Data on the amount and type of material published by these three journals indicate that 1970 served as an inflection point. The number of articles published collectively by these journals on an annual basis was 80, a number that was relatively constant from 1911 to 1969. After 1970, the AER and JPE published fewer book reviews and a greater number of research articles (such that the AER, JPE, and QJE collectively published on average slightly fewer than 150 articles). Although the number of articles published annually has changed little within each of the two eras, the length of the articles has been more volatile on a year-to-year basis. The average length of an article declined from 23 pages in the late nineteenth century to 17 pages by 1969. From 1970 to 1995, however, the average length of articles became less volatile on a year-to-year basis but steadily increased back to 23 pages. Perhaps the most volatile sections of these journals were the fraction of pages devoted to notes and comments and the number of pages devoted to book reviews. The fraction of pages devoted to notes and comments changed greatly throughout the twentieth century, ranging from as high as 50% in the early 1970s to less than 10% in the 1930s and less than 20% in the 1990s, with the historical average being 20%. The volatility in pages devoted to notes and comments is due primarily to different editorial policies and the emergence of journals after 1970 that focused on publishing notes and comments.

Just how the scholarly communication process in economics emerged as it did in the twentieth century was determined by many factors, not the least of which is the source and magnitude of research funding. At the beginning of the twentieth century the emphasis on research, and the funding for it, in economics and every other discipline was low. The role of scholars at this time was primarily to educate and then supplement their earnings with industrial research contracts that often demanded non-disclosure of findings. In terms of scholarly communication, the small number of journals and the ability of committed researchers to publish monographs were aligned with the small scale of research activity. WWII represented a dramatic shift from this pattern, with research emphasized and the government expanding its role in organizing and funding it. As a result, economists published more and with greater frequency, a fact that prioritized publishing in journals rather than monographs. The burden placed on journals increased markedly in the late 1950s and early 1960s, with challenges emerging due to space and financial constraints. The space constraints were somewhat relieved as more specialized research titles were introduced and existing journals increased the number of issues per volume. The financial challenges in publishing the journals were more difficult to overcome.

In the case of the AER, before the 1970s the journal was financed primarily through fees from members of the AEA. After WWII, this model of funding came under strain as the larger amount of published research became more expensive to produce and distribute. In the face of rising costs, the Committee on Association Deficits in 1959 recommended that expenses be reduced by eliminating the Papers and Proceedings issue, selling more advertising space, and earning more revenue by encouraging the republication of AER material. Between 1944 and 1958 the AEA ran deficits, with membership dues not increasing despite the journal's larger size. The AEA felt pressure to keep dues low as it worked to establish the scholarly society as the premier generalist association in a discipline that was rapidly specializing. Whereas immediately after the war the income from membership dues and subscriptions covered 90% of publication costs, after 1947 this dropped to 80%. The financial situation became worse in the 1960s as the number of newly trained economists increased dramatically and they sought to publish research funded more generously than had been the case in the past.

Although in 1959 the deficit of the AEA had been declared "…not a matter for serious alarm," the strain of publishing more in the 1960s had taken a toll. The 1970s became a time for conversations about how to resolve these financial problems. The journal came to be modeled as one good produced by a scholarly society alongside a number of other goods and services. The journal was seen as generating significant positive externalities and, as a quasi-public good, as needing a subsidy from the government or a research funding agency. The publication of a journal under a cartel arrangement, as done in other disciplines, allows the journal to be produced as inexpensively as possible (with readers paying the marginal costs) and coordinates the collection of subsidies from outsiders.

In the early 1970s there was an increase in membership fees upon the realization that reader demand was inelastic. There was also an attempt by the AEA to capture revenue from authors through the imposition of a submission fee and a page charge. Within the AEA there were extensive conversations about the rise of specialized scholarly societies and inefficient pricing policies. The concern was that the scholarly communication process in economics had become too fragmented and inefficient. By 1975 the financial challenges had become so severe that deficits had reduced the net worth of the AEA to ¼ of annual expenditures. The attempt to capture revenue from authors failed, with both the submission fee and the page charge being phased out. Instead, the financial problems of the AEA ended up being resolved as the scholarly society captured license revenue from electronic indexing, abstracting, and distribution, imposed progressive membership dues, and, above all, rapidly increased non-member (institutional) subscription fees. The subscription fees charged to institutions were able to finance most of the scholarly communication process in economics. Not only do institutions have a highly inelastic demand curve, but the institutional purchaser also provides encouragement and funding to the researchers publishing in the journal.

Since the 1960s the AEA has been steadily increasing its number of publication outlets. This was done initially to offload some content from the AER to other journals. The first journal other than the AER was the Journal of Economic Abstracts (JEA). The JEA in the mid-1960s published review articles and was financed with grants from the Ford and Rockefeller Foundations (philanthropies that provided the bulk of research funding in the discipline). By 1969 this journal had been converted into the Journal of Economic Literature, publishing review articles, book reviews, and other items previously found in the AER. In the 1970s the AER was thus able to publish more and compete more effectively with the specialized journals published by Elsevier and Academic Press. In 1987, the AEA introduced a third journal, the Journal of Economic Perspectives, to publish articles for a more generalist audience. In 2009, the AEA introduced four field journals – Applied Economics, Economic Policy, Macroeconomics, and Microeconomics. These journals permitted the publication of more specialized research under the AEA name and allowed for the publication of longer articles in the AER. Finally, the AEA began to publish more in the AER in 2011 by increasing the number of issues per volume from 4 to 6. The expansion of the AER and the creation of 6 other journals since 1969 have translated into AER articles being highly cited and the acceptance rate remaining stubbornly low in the face of a greater number of submissions. Thus, much to the chagrin of some, the publications of the AEA play an outsized role in the legitimization and circulation of ideas in the economics discipline.

What we see in this briefest of overviews of the scholarly communication process in economics during the twentieth century is accelerated growth over the last few decades to accommodate the growth in the volume of research. Who is chosen to edit a journal, the decisions made about the type of content to publish, and the manner in which the process is financed all shape the outlines of a scholarly communication process that reflects the type of discipline that others demand.

Macroeconomics in Perspective

In which MIT decided to teach micro first so as to make economics more relevant

I've already blogged on how undergraduate education evolved at MIT in the postwar era here and here, but since Mike Konczal and Paul Krugman make the case that, to bring introductory economics closer to the real world, macro should be taught before micro as Samuelson did in the first 13 editions of his Economics… Read more

Mature history of economics

In the past decade, the volume of literature in the history of economics has been around 500 articles and just under 50 books a year. The graph below traces the count in two-year intervals (articles left axis, books right axis). The absolute volume is stable, but given the growth of economic literature in the period, stable might be rebranded as static.

Read more

In the thick of it (labels and research)

Historians like labels. X history. History of y. The labels carve out subjects, set boundaries in time and space, at times even suggest methodological commitments. Read more

Do social movements create new ideas?

The short answer is yes. For the long answer I will make you sit through seven paragraphs. Read more

Economic theory declassified?

So, most Nobel Prize exegetes went a long way, this week, toward explaining that asset pricing is not primarily born out of theoretical reflection but out of prize-deserving empirical work. John Cochrane, for instance, writes about efficient markets that: Read more

The Political Economy of the Nobel Prize, 45th edition

This morning, when I woke up a few hours before the Nobel announcement, I felt seriously dissatisfied. I had meant to write a post on Thomson Reuters's prediction that Card, Angrist and Krueger might win the Nobel for their work on empirical microeconomics. I thought that such a prediction would come true, sooner or later, because of the irresistible development of empirical economics in the past 50 years, one illustrated by the list of John Bates Clark medalists. Such expansion, fostered by the increasing availability of data and computing resources, could itself be recast as part of an even larger move toward making economics an applied science. Read more

Guy Numa "The Financial Crisis Five Years Later: The Role of Banking"

Present-day puzzlements shed their complexity when Guy Numa, in this essay, draws on some age-old distinctions borrowed from Jean-Baptiste Say. Numa is an INET Research Fellow who specializes in the History of Economic Thought and Industrial Organization.

Five years ago Lehman Brothers filed for bankruptcy, marking the unofficial start of the latest financial crisis. Several commentators argued that the key to understanding the root cause of the crisis lies in the partial repeal of the Glass–Steagall provisions in the late 1990s. The provisions restricted commercial banks from participating in the investment banking business and therefore institutionalized a de facto separation of the two types of institutions. Read more

The rise of economics as engineering II: the case of MIT

Looming behind the aforementioned narratives of postwar economics is a notion – economics as engineering – which at times appears as a metaphor and at times stands for a straight depiction of economists' professional milieu and practices. It is used to characterize both highly theoretical undertakings and everyday engagement with policy making. As exemplified by Mankiw's remark that all but one of the top-selling macro textbooks from the 1980s as well as the 2000s were written by MIT economists, MIT is one relevant site for the investigation of the nature and significance of the “engineering” comparison. Read more