Historically, microeconomics was the domain of scientific methodology in economics, while macroeconomics attracted less mathematically oriented economists. In recent years, the level of sophistication of macroeconomics has grown dramatically, and that field now attracts many of the most mathematically oriented economists.
Nevertheless, the field's set of shared views (i.e., maintained hypothesis) has not grown. The purpose of the scientific method is to permit the maintained hypothesis within a field to grow by establishing a rigorous methodology for deductively deriving and empirically testing hypotheses. The field of macroeconomics has failed that test of scientific success during precisely the decades of most rapid growth in the use of scientific methodology.
It is argued that the source of the paradox lies in the fact that the inroads of science in macroeconomics have been asymmetric. Central to the definition and objectives of macroeconomics are dimension reduction and dynamics. Rigorous dimension reduction is impossible without formal aggregation, and complex dynamics is impossible without nonlinearity. Yet applications of formal aggregation theory and nonlinear dynamics to macroeconomics have progressed very slowly, even while scientific methodology in other areas of macroeconomics has advanced rapidly. This asymmetry explains the paradox.
1. Introduction
The nature of macroeconomic theory has changed dramatically during the past decade. Some people say that macroeconomic theory is in crisis, and many detractors have argued that macroeconomic theorists are less willing to make policy recommendations with confidence than at any other time in this century. I disagree. In fact the dramatic transformation of macroeconomics during the past decade has reflected the adoption of more scientific methodology than had previously characterized the field. A corollary is that conclusions are drawn more cautiously than earlier, as a natural result of the conservative nature of the scientific method itself. Hence the caution characterizing macroeconomic policy conclusions should be viewed as positive evidence of the advancement of science in macroeconomics.
Prior to the recent transformation of macroeconomics, economists with strong mathematical and statistical leanings usually chose to work in microeconomic theory and microeconometrics. The use of formal mathematical logic was much deeper in microeconomics than in macroeconomics, and the most sophisticated econometric methods were more frequently applied to microeconometric than macroeconometric analysis. But during the past decade, macroeconomics has absorbed many of the most sophisticated methods of general equilibrium theory, dynamic programming, optimal control theory, stochastic choice theory, and game theory. As a result, a large percentage of the best and brightest young mathematical economists and econometricians now choose to work in macroeconomics.
Nevertheless, the divisions among macroeconomists are no less deep than ever before. In fact the divisions may be deeper now than ever before, since the gap between real business cycle theorists and Keynesians is perhaps greater than the earlier gap between monetarists and Keynesians. For example, Keynesians and monetarists have always agreed that money matters, but real business cycle theorists frequently take the position that money matters little, if at all. In addition, monetarists and Keynesians have usually depended upon disequilibrium dynamics to explain the transmission mechanism of monetary policy, while real business cycle theorists often assume continuous market clearing. The differences of opinion between Keynesians and monetarists regarding long run macroeconomic policy now also divide Keynesians from real business cycle theorists, but those latter divisions now apply even to very short run analysis.
These divisions would appear to contradict my conclusion that macroeconomics is more scientific than before. The scientific method is intended to produce agreement on hypotheses that survive scientific tests and replications of those tests. While the state of the art in any science is always a contest between competing hypotheses, there normally is a noncontroversial backlog of accepted hypotheses, shared by nearly all researchers in the field. Such is indeed the case in microeconomics. But that backlog in macroeconomics has not grown over the years, even after the transformation of the field that has taken place during the past decade.
The purpose of this paper is to explore the reasons for this odd state of affairs.
2. The Definition of Macroeconomics
All economics suffers from a shortage of experimental data. In other sciences, controlled experiments are used as an integral part of the scientific method. But the economy is not a controlled experiment. Hence it is not surprising that economic hypotheses are difficult to test in a manner that is satisfactory to all economists. However, this is no less the case for microeconomics than for macroeconomics. The source of the macroeconomic controversies must be found elsewhere.
Before we can look more deeply at these paradoxes, we need a formal mathematical definition of macroeconomics that can be contrasted rigorously with the definition of microeconomics. That distinction no longer can be found in the choice of tools, since microeconomists and macroeconomists use most of the same tools. It is sometimes argued that macroeconomics is "what macroeconomists do," while macroeconomists are defined in terms of their policy concerns regarding inflation, unemployment, the business cycle, and aggregate economic growth. While this definition does capture the reason for the existence of macroeconomics as a field, the definition is too informal to be useful for our purposes.
My definition will be the following: microeconomics is the economics of a high dimensional economy, while macroeconomics is the economics of a low dimensional economy. If the real world were low dimensional, the two fields would be identical. But except for some small island economies, the transition from microeconomics to macroeconomics unavoidably requires aggregation over economic agents and over goods to create such concepts as aggregate investment, consumption, savings, money, durables, and representative consumers. There also can be aggregation over time to produce overlapping generations models with consumers having two or three period lifetimes.
3. Aggregation Theory
There is indeed a formal literature on aggregation theory. If macroeconomics were based entirely upon formal aggregation theory, macroeconomics would be a subfield of microeconomics, since the tools of aggregation theory are microeconomic tools. Macroeconomic conclusions would be derived directly from microeconomics. Under those circumstances, the paradoxes described above would disappear. The growing collection of hypotheses embedded within the maintained hypotheses of microeconomics would be the hypotheses held in common among economists interested in macroeconomic policy, and the advancement of science in microeconomics and in macroeconomics would be directly linked through aggregation theory.
But unfortunately that is not the case. Macroeconomics has made remarkably little use of the tools of aggregation theory. For example, in microeconomic aggregation theory, weakly separable collections of goods are clustered into groups over which aggregation then is possible. Weak separability is a necessary condition for aggregation over a group of goods. Yet this condition is rarely tested in the macroeconomic literature. Goods are clustered together as "durables," "investment," "capital," "perishables," and other such aggregates without any consideration whatsoever of the separability condition needed to produce an aggregate admissible under aggregation theory.
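The separability condition can be made concrete with a small numerical sketch. Under weak separability of goods 1 and 2 from good 3, the marginal rate of substitution between goods 1 and 2 is independent of the quantity of good 3. The utility functions below are hypothetical illustrations chosen for simplicity, not drawn from any estimated model:

```python
import math

def mrs12(u, x1, x2, x3, h=1e-6):
    """Numerical MRS between goods 1 and 2: (du/dx1) / (du/dx2),
    computed by central finite differences."""
    du1 = (u(x1 + h, x2, x3) - u(x1 - h, x2, x3)) / (2 * h)
    du2 = (u(x1, x2 + h, x3) - u(x1, x2 - h, x3)) / (2 * h)
    return du1 / du2

# Weakly separable: u = v(f(x1, x2), x3) with f Cobb-Douglas.
def u_sep(x1, x2, x3):
    return math.log(x1**0.3 * x2**0.7) + math.log(x3)

# Not separable: the MRS between goods 1 and 2 depends on x3.
def u_nonsep(x1, x2, x3):
    return x1 * x3 + x2

a = mrs12(u_sep, 2.0, 3.0, 1.0)
b = mrs12(u_sep, 2.0, 3.0, 10.0)
print(abs(a - b) < 1e-6)   # True: x3 is irrelevant to the within-group MRS

c = mrs12(u_nonsep, 2.0, 3.0, 1.0)
d = mrs12(u_nonsep, 2.0, 3.0, 10.0)
print(abs(c - d) > 1.0)    # True: the MRS shifts with x3
```

Only in the first case can goods 1 and 2 legitimately be collapsed into a single aggregate; in the second, no aggregator function over goods 1 and 2 alone can reproduce the consumer's behavior.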
Once a group of goods is selected to be the components of an aggregate, aggregation theory tells us how to find the exact aggregator function to use in aggregating over those goods. For a consumer, the aggregator function is the category (conditional) utility function over those goods. These aggregator functions are never linear, unless the components are perfect substitutes, since a utility function is linear if and only if the goods in the utility function are perfect substitutes. Yet the monetary aggregates used by nearly every central bank in the world are not only linear, but in fact are simple sums. To say that aggregation theoretic principles have not been used is an understatement.
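The gap between a simple-sum aggregate and an aggregation-theoretic one can be illustrated with a CES category utility function; the component quantities and weights below are hypothetical. Only in the perfect-substitutes case (here, exponent 1 with equal weights) does the exact aggregate collapse to the simple sum:

```python
def ces_aggregate(m, weights, rho):
    """Exact quantity aggregate under CES category utility:
    M = (sum_i a_i * m_i**rho) ** (1/rho)."""
    return sum(a * x**rho for a, x in zip(weights, m)) ** (1.0 / rho)

def simple_sum(m):
    return sum(m)

m = [100.0, 50.0, 25.0]   # hypothetical component quantities
w = [1.0, 1.0, 1.0]

# Perfect substitutes (rho = 1, equal weights): the aggregates coincide.
print(ces_aggregate(m, w, 1.0) == simple_sum(m))   # True

# Imperfect substitutes (rho = 0.5): exact aggregate and simple sum diverge.
print(ces_aggregate(m, w, 0.5) != simple_sum(m))   # True
```

A simple sum is therefore the exact aggregate only under the perfect-substitutability assumption that the components are indistinguishable at the margin, which is implausible for assets as different as currency and time deposits.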
Regarding aggregation over economic agents, the situation is even more disturbing. Virtually the entire literature on macroeconomics is free from distribution effects. Mean preserving redistributions of income have no effects upon the macroeconomy. Only the first moment of the income distribution matters. However, it can be shown that freedom from distribution effects is sufficient for the existence of a representative consumer. Hence logical internal consistency would dictate that all macroeconomic models must assume the existence of a representative consumer, and therefore aggregate consumer demand systems must be integrable to the utility function of a representative agent. In an admirable display of deductive logic, most macroeconomists in recent years have ensured that their models are integrable in that manner.
However, Gorman (1953) proved that a necessary condition for the existence of a representative agent is that all consumers have parallel linear Engel curves. Hence all utility functions must either be identical homothetic functions or simple affine translations of that function. The resulting class of functions is called Gorman's polar form. Yet no one seriously believes that tastes are that uniform across consumers. Even as an approximation, the assumption appears to be much too strong. The assumption can be weakened somewhat by more complicated procedures that permit the existence of distribution effects, such as those derived by Muellbauer (1975) and Barnett (1979). But those sophisticated methods have not found their way into macroeconomics.
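Gorman's condition can be made concrete with a small numerical sketch (the demand functions and incomes are hypothetical). When every consumer's Engel curve is affine in income with a common slope, aggregate demand depends only on total income, so any mean-preserving redistribution leaves it unchanged:

```python
def demand(a, b, w):
    """Engel curve: demand = a + b * income. A common slope b across
    consumers is Gorman's parallel-linear-Engel-curve condition."""
    return a + b * w

b = 0.4                          # common marginal propensity (Gorman's slope)
intercepts = [2.0, 5.0, 1.0]     # consumer-specific intercepts a_i
incomes = [10.0, 20.0, 30.0]

agg = sum(demand(a, b, w) for a, w in zip(intercepts, incomes))

# Mean-preserving redistribution: total (and mean) income unchanged.
redistributed = [25.0, 5.0, 30.0]
agg2 = sum(demand(a, b, w) for a, w in zip(intercepts, redistributed))

print(abs(agg - agg2) < 1e-9)    # True: only the first moment of income matters
```

If the slopes b differed across consumers, the same redistribution would move aggregate demand, and no representative agent could rationalize the aggregate data.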
As a result of the unreasonable strength of the assumptions producing a representative consumer, macroeconomists directly involved in policy decisions, such as those in government and those employed by consulting firms, typically have used large scale macroeconomic models that are not integrable. No attempt is made to produce consistency with the existence of a representative consumer, since realism militates against accepting the existence of a representative consumer. The use of representative agent models in macroeconomics is limited to professional journal publications, where formal scientific rigor is more important than detailed empirical realism--as indeed should be the case in any scientific journal. The kinds of ad hoc methods that can easily produce "better fit" have no place in science.
But an internal inconsistency now arises. The macroeconometric models used by policy makers are usually free of distribution effects, even though those models are not integrable to a community utility function. Although such models dominate applied use of macroeconomics in government, such models rarely appear in the best published journals. In fact no nonintegrable macroeconomic models should be taken seriously by scientific journals, unless distribution effects are incorporated. The importance of the inclusion of distribution effects within large scale macroeconometric models does not appear to have been recognized by the builders of those models.
Clearly the use of aggregation theory in macroeconomics is slight, at best. Hence low dimensional macroeconomics cannot be derived from high dimensional microeconomics. The gap between low dimensional economics and high dimensional economics has not been filled. Without the discipline of aggregation theory imposed, macroeconomics floats free from important constraints. As I shall argue below, it is the lack of those logical constraints that produces many of the paradoxes in current macroeconomic research.
However, I should add at this point, that there are good reasons for some of the logical gaps described above. In particular, the literature on aggregating over economic agents is a very difficult one, and there exist many unsolved problems in that literature. Although it would be possible to improve upon the current macroeconomic literature by using the existing literature on aggregating over economic agents, the relevant results in aggregation theory are not adequate to permit full unification of microeconomics and macroeconomics. Without more progress in microeconomic aggregation theory, the aggregation theoretic gap between microeconomics and macroeconomics cannot fully be closed.
Aggregation over goods is another matter. That aspect of formal aggregation theory is highly developed. The fact that macroeconomics makes so little use of aggregation theory in aggregating over goods is a failing of the field of macroeconomics. As discussed by Barnett, Fisher, and Serletis (1992), the use of simple sum monetary aggregates by central banks, monetary economists, and macroeconomists is indefensible.
4. Dimension Reduction
If macroeconomics could be derived from microeconomics by using microeconomic aggregation theory, the resulting macroeconomic models undoubtedly would be very complex. The existence of distribution effects through higher order moments of the income distribution would be only one of the many complications that would enter into macroeconomic models. Theory would not rule out many such complications. Only econometric tests could be used to test for simplifying null hypotheses.
But if microeconomic aggregation theory is ignored, the dimension reduction that defines macroeconomics can be accomplished in an infinite number of ad hoc ways. The theorist is free to emphasize whatever phenomenon he subjectively believes to be most important, at the expense of the phenomena that are assumed away.
Indeed it is precisely that procedure that characterizes all of macroeconomics. For example, some models emphasize market imperfections, some emphasize the fundamental solution to rational expectations models, some emphasize bubbles, some emphasize continuous market clearing, some emphasize supply side shocks, some emphasize incomplete markets, some emphasize uncleared markets, and some emphasize chaos. Obviously the low dimension that defines macroeconomics cannot easily be attained while including all of those phenomena. Nevertheless, subject to the imposition of prejudiced priors excluding some of those phenomena, macroeconomic models currently appearing in the literature contain more scientific content than those models of earlier decades.
Convergence of views to produce a growing maintained hypothesis, as characterizes all science, cannot be attained within macroeconomics without the elimination of the prejudiced priors that emphasize one aspect of the economy at the expense of others; and the elimination of those controversial priors is impossible without the solution of the currently unsolved problems in formal microeconomic aggregation theory. Nevertheless, I shall survey below some of the recent advances in scientific methodology found in macroeconomics. However, it must be kept in mind that each of those advances is produced conditionally upon the imposition of a maintained hypothesis that has never been subjected to scientific testing.
5. Keynesian Discretion versus Open Loop Rules
Game theory has produced a much improved focus in the ongoing debate over rules versus discretion. It is now well understood that advocacy of open loop rules depends upon the assumption of the existence of time inconsistency in policy making, while advocacy of Keynesian discretion depends upon the assumption of reputational equilibrium. This conclusion is based upon the assumption that government knows the structure of the private sector's decision, while the private sector knows the announced plans of the government, but not the structure of the government's decision. The result is a Stackelberg game, with the government being the leader and the private sector being the follower.
It can be shown that in this game, the government's optimal solution is a nonrecursive simultaneous one. To produce the optimal Stackelberg solution to the game, the government must produce its simultaneous intertemporal solution, announce it, and then stick with it as time passes. In short, once the government announces its optimal intertemporal solution path, the government must follow that path, so long as no new information is acquired that might change the optimum. The government must not replan as time passes, unless new information is acquired.
It can be shown that continuous replanning, because of the simultaneous nature of the solution, will not produce the optimal path. Hence the optimal intertemporal solution and the continuously replanned solutions differ. While it is the optimal intertemporal solution that needs to be followed to track the Stackelberg solution, there are incentives built into the game for the government to replan, while nevertheless announcing that it will follow the simultaneous solution. This odd phenomenon is called "time inconsistency." By fooling the private sector in that manner, the government feels that it can benefit everyone. But the private sector eventually will learn that the announced plans by the government are false, and then the private sector will adjust its own plans accordingly. The economy drops into an inferior solution.
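The incentive structure can be illustrated with the standard Kydland-Prescott/Barro-Gordon inflation game, used here as a hypothetical sketch of the same logic rather than as the specific model discussed above. The government minimizes a loss in inflation and the output gap, output responds only to inflation surprises, and the parameter values are arbitrary:

```python
lam, k = 1.0, 2.0   # weight on the output gap; output ambition y* - ybar

def best_response(pie):
    """Government's optimal inflation pi given expectations pie:
    minimize pi**2 + lam * ((pi - pie) - k)**2, whose first-order
    condition gives pi = lam * (k + pie) / (1 + lam)."""
    return lam * (k + pie) / (1.0 + lam)

# Commitment (rule): announce pi = 0 and stick to it; expectations match.
loss_rule = 0.0**2 + lam * (0.0 - 0.0 - k)**2          # = lam * k**2

# Discretion: the private sector learns, so expectations rise until pie = pi.
pie = 0.0
for _ in range(200):                  # iterate expectations to equilibrium
    pie = best_response(pie)
pi = best_response(pie)
loss_discretion = pi**2 + lam * ((pi - pie) - k)**2

print(round(pi, 6))                   # converges to lam * k = 2.0
print(loss_discretion > loss_rule)    # True: replanning ends in the inferior solution
```

Under discretion the government gains nothing in output (expectations adjust fully) but inherits a permanent inflation bias, which is exactly the inferior solution described above.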
In the macroeconomic literature, there are two solutions that are especially important. One is the rules solution, in which the government establishes its commitment to the optimal solution by imposing the solution on itself through a rule that cannot be broken. The optimal Stackelberg solution then will be attained. This solution supports the policy prescription of many monetarists and new classical economists, who advocate that the government remove uncertainty about its plans by imposing its plans through an irreversible rule.
The other important solution is called "reputational equilibrium." The model is structured in a manner that eliminates the incentive for the government to lie. The private sector recognizes that fact and then believes all governmentally announced plans, regardless of how frequently they may be changed. This highly desirable solution is most likely to be attained if the government does not discount the future rapidly. If the government does in fact place much weight on the distant future, then the possible loss of reputation by the government in the distant future may be so important to the government that it will not take the risk of providing false information in the short run. The government will thereby resist the temptation to replan and will inevitably follow the optimal intertemporal path that it has announced. This solution leaves the government with the ability to run a discretionary policy without risking loss of public confidence, and is the solution often advocated by Keynesians.
6. Nonlinear Dynamics
In all areas of scientific research, nonlinear dynamics and the special case of chaotic dynamics have been growing in importance. Macroeconomics has been no exception. For example, Grandmont (1985) has demonstrated that even the most classical of economic models can produce either the stable solutions used in new classical economics or the unstable solutions characterizing much of Keynesian economics. The distinction between the two possibilities is found in the parameter settings. In some regions of the parameter space, the model produces stable solutions, while in other regions the model produces more complex solutions, such as cycles or even chaos. Although the parameter settings are different in the two cases, the structure of the model is the same.
This conclusion contrasts with earlier views, in which different policy views were supported by models having different structures. We now find that even if all economists had the same structural models in mind, different economists still could have different views on policy, depending upon their views about the parameter values. This insight could prove to be unifying, since convergence on maintained features of economic structure could occur among economists who nevertheless have very different views on policy. In short, there can be agreement on theory, without agreement on empirical inferences regarding parameter values. The competition between different structural models may have resulted from the imposition of linearity on those models. Once modelers are freed from the linearity restriction, the need for alternative structural macroeconomic models may be replaced by empirical research regarding the values of the parameters in a single unifying model.
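A canonical example of a single structure producing either stability or complex dynamics, depending purely on a parameter setting, is the logistic map. It is used here as a hypothetical illustration of the principle, not as a calibrated economic model:

```python
def logistic_path(r, x0=0.2, n=300):
    """Iterate the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
    x = x0
    path = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        path.append(x)
    return path

# Identical structure, different parameter settings:
stable = logistic_path(r=2.5)    # converges to the fixed point 1 - 1/r = 0.6
chaotic = logistic_path(r=4.0)   # bounded, but never settles down

print(round(stable[-1], 6))              # 0.6
tail = chaotic[-50:]
print(max(tail) - min(tail) > 0.5)       # True: persistent large fluctuations
```

Two economists who agreed on this structural equation but disagreed about the value of r would reach opposite conclusions about the stability of the system, which is precisely the point: the policy disagreement becomes an empirical question about a parameter, not a contest between models.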
As a result of the potential importance of nonlinear dynamics in economics, there is a growing empirical literature testing for nonlinear dynamics and chaos with economic data. At present only one such confirmed finding of chaos in economic data has been published.
7. Consequences
Clearly the introduction of deeply mathematical insights, such as those of nonlinear dynamics and time inconsistency, has strengthened the scientific foundations of macroeconomics. However, macroeconomics and microeconomics contain many such insights, and focusing on any one often requires assuming away others. The dimension reduction that is central to the definition of macroeconomics renders this fact unavoidable.
The question is whether the dimension reduction itself is conducted in a scientific manner. Science used conditionally upon nonscience is nonscience. If scientific methodology is applied conditionally upon a maintained unscientific approach to dimension reduction, the result is not science.
7.1 Time Inconsistency
It has been shown by Barnett (1993) that establishing reputational equilibrium is very difficult when more than one monetary asset exists in the economy. The models that are currently used to establish reputational equilibrium contain only one monetary asset. But with multiple monetary assets, exact monetary aggregation renders at least two different monetary aggregates relevant: one measuring outside money and another measuring the flow of monetary services in the economy. Although money enters the economy primarily through the monetary service flow, outside money is relevant to measuring the magnitude of the seigniorage tax, which enters the government's budget constraint. With the existence of two or more different monetary aggregates in the economy, conflicting signals to the private sector are difficult to remove, and hence determining parameter settings that will assure the existence of reputational equilibrium is much more difficult than previously believed.
Establishing the conditions under which a discretionary Keynesian policy would not generate disbelief of government announcements may be very difficult. Similarly, selecting a satisfactory rule to be imposed under a monetarist policy could be very difficult, when two relevant monetary aggregates exist that serve important but sometimes uncorrelated roles in monetary policy.
In short, the dimension reduction produced by replacing multiple monetary assets by only one monetary asset masks the complexities that exist in policy selection under time inconsistent planning.
7.2 Nonlinear Dynamics
Similarly the literature on nonlinear dynamics in macroeconomics has tended to overlook important matters. For example, Grandmont's paper, which advocates an activist policy based upon the possibility of nonlinear dynamics, overlooks the related literature on social welfare. Since the structure of his model is fully classical, with no forms of market failure in existence, solutions in his model are Pareto optimal, regardless of the parameter settings. Hence it can be argued that stabilizing governmental intervention into the economy could produce a Pareto loss, even if the solution without governmental intervention is chaotic. Pareto optimal chaos is to be preferred to Pareto inferior stability.
Attempts to produce a connection between chaos and market failure are just beginning, as in Woodford (1989). However, no necessary connection has yet been proven, and hence the connection between nonlinearity and policy views is not yet established. Related approaches based upon sunspot theory (Cass and Shell (1983)) and bubbles have been attracting attention, because of their connection with possible market failure. But even with market failure in some of those models, questions exist about the possibility of Pareto improving market intervention. While all of this literature is important, the effect that it will have on policy views remains unclear.
7.3 Distribution Effects
As has been discussed above, there have been many advances in index number and aggregation theory. In addition, Diewert (1976) has unified the two fields by determining a class of index numbers, called superlative index numbers, that track the exact aggregates of microeconomic aggregation theory. Given that aggregation is central to the dimension reduction that defines macroeconomics, one would think that aggregation and index number theory would be fully absorbed into macroeconomics. Unfortunately that is not the case. The literature on aggregation over economic agents appears in macroeconomics only in terms of the simplest possible approach: Gorman's representative agent approach. The more sophisticated approaches, such as Pareto's method of integrating over distributions, have found no role in macroeconomics. Similarly, simple sum aggregation still is used to produce monetary aggregates by most central banks.
It would be comforting to be able to argue that ignoring advances in aggregation theory has little effect on macroeconomic conclusions. But that is not the case. Consider, for example, the representative agent approach. Under that approach, there are no distribution effects produced by macroeconomic policy. In fact in any model that contains only per capita income, without inclusion of any higher order moments of the income distribution, macroeconomic policy can have no distribution effects. But virtually every macroeconomic model currently in use contains only the first moment of the income distribution. Hence, regardless of whether or not the modelers have made conscious use of Gorman's method, those models implicitly accept Gorman's assumptions and thereby rule out distribution effects. Yet the connection between macroeconomic policies and political parties demonstrates that macroeconomic policies have strong distribution effects. Consider, for example, the effect of inflation on fixed income retirees.
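The quantitative stake can be seen in a toy example (the concave individual consumption function and the income figures are hypothetical): when individual behavior is nonlinear in income, a mean-preserving redistribution changes the aggregate, and any model carrying only per capita income must miss the change.

```python
import math

def consumption(w):
    """Hypothetical concave individual consumption function."""
    return math.sqrt(w)

incomes = [10.0, 40.0, 90.0]   # total income 140
spread = [0.0, 50.0, 90.0]     # same total (and mean), more dispersion

agg1 = sum(consumption(w) for w in incomes)
agg2 = sum(consumption(w) for w in spread)

# A representative-agent model sees only mean income:
rep = 3 * consumption(sum(incomes) / 3)

print(agg1 != agg2)                              # True: the aggregate shifts
print(rep == 3 * consumption(sum(spread) / 3))   # True: the model is blind to it
```

The representative-agent prediction is identical before and after the redistribution, while true aggregate consumption moves; the error is a distribution effect that first-moment models rule out by construction.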
The situation is even more surprising in the case of aggregation over monetary assets. Despite the existence of a large and sophisticated literature on exact aggregation over monetary assets, central banks throughout the world continue to fight a rear guard action resisting proper aggregation over monetary assets. Again the effects are far from trivial. For example, every few years the monetary economics literature is flooded with articles claiming that the demand for money function has shifted. Yet the parallel literature using the Divisia monetary aggregate has never found any such evidence. Furthermore, Barnett (1984) has shown that the recession produced by the Federal Reserve's "monetarist experiment" was generated by a well intentioned policy, which unfortunately was biased towards restrictiveness by targeting an upwardly biased simple sum monetary aggregate.
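The contrast between the two aggregation methods can be sketched with the Törnqvist-Theil discrete-time approximation to the Divisia index, which weights each component's growth rate by its average user-cost expenditure share. The component quantities and user costs below are hypothetical:

```python
import math

def divisia_growth(m0, m1, uc0, uc1):
    """Tornqvist-Theil approximation to Divisia growth:
    d log M = sum_i sbar_i * (log m1_i - log m0_i), where s_i is
    component i's share of user-cost expenditure u_i * m_i."""
    e0 = [u * m for u, m in zip(uc0, m0)]
    e1 = [u * m for u, m in zip(uc1, m1)]
    s0 = [e / sum(e0) for e in e0]
    s1 = [e / sum(e1) for e in e1]
    return sum(0.5 * (a + b) * (math.log(x1) - math.log(x0))
               for a, b, x0, x1 in zip(s0, s1, m0, m1))

# Two hypothetical components: currency (high user cost) and
# time deposits (low user cost), with deposits growing faster.
m0, m1 = [100.0, 400.0], [102.0, 440.0]
uc0 = uc1 = [0.06, 0.01]

dv = divisia_growth(m0, m1, uc0, uc1)
ss = math.log(sum(m1)) - math.log(sum(m0))   # simple-sum growth rate

print(dv < ss)   # True here: the simple sum overweights the deposit growth
```

Because the simple sum weights every dollar equally, rapid growth in low-user-cost components inflates its measured growth relative to the growth of actual monetary services, which is the mechanism behind the bias discussed above.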
8. Conclusion
Macroeconomics is economic theory with dimension reduction imposed. Conditionally upon that dimension reduction, current macroeconomic theory is far more strongly founded in the scientific method than the ad hoc macro models of a generation ago. However the maintained dimension reduction upon which all of those models condition is problematic. It cannot be argued convincingly that the issues assumed away in the maintained hypothesis are less important than those left in the model after the dimension reduction. The dimension reduction is itself the source of many of the controversies that continue to plague the field. Advocates of Keynesian macroeconomics, new classical economics, chaos, sunspots, and other areas of active research in macroeconomics routinely assume away the issues viewed as important by competing schools of thought.
There are two alternatives that could avoid the resulting gridlock. One is to abandon macroeconomics and base all economics upon high dimensional microeconomics. Unfortunately that approach would leave governmental decision makers with little access to advice from the economics profession, which would be forced to view most macroeconomic policy problems as being unsolved. The other alternative is to impose a scientific discipline upon the admissible approaches to dimension reduction: (1) either laboratory experimentation could be used to determine which maintained hypotheses can be defended, or (2) deductive logic---i.e. aggregation theoretic principles---could be imposed.
In addition to conditioning upon ad hoc dimension reductions, most macroeconomic models condition upon the maintained hypothesis of linearity, despite the fact that microeconomic theory rarely produces linear models. As I have argued above, that maintained linearity is as responsible for the divisions among macroeconomists as the ad hoc dimension reduction. Within the class of linear models, differences in policy conclusions usually require differences in the structure of models. As a result, macroeconomists having different policy views have often proposed competing structural models. But once those macroeconomists are freed from the linearity restriction, differences in solution properties can be acquired from the same model at different parameter settings.
It is my opinion that convergence of views in macroeconomics is possible only through increased use of experimentation or of aggregation theory, to remove the arbitrariness of current approaches to macroeconomic dimension reduction, and through the removal of the linearity straitjacket, which forces competing political groups to advocate different structural models. Without the imposition of any scientific discipline on dimension reduction and without the removal of the linearity restriction, macroeconomists can be expected to continue producing models that are increasingly sophisticated, but which reduce dimension and structure models in a manner that prejudices the results. The backlog of maintained hypotheses on which the profession agrees will continue not to grow, and hence macroeconomics will continue to fail the most fundamental criterion for judging scientific advancement.
References

Barnett, William A. (1979), "Theoretical Foundations for the Rotterdam Model," Review of Economic Studies, 46, 109-130.

Barnett, William A. (1984), "Recent Monetary Policy and the Divisia Monetary Aggregates," American Statistician, 38, 165-172.

Barnett, William A. (1987), "The Microeconomic Theory of Monetary Aggregation," in William A. Barnett and Kenneth J. Singleton (eds.), New Approaches to Monetary Economics, Proceedings of the Second International Symposium in Economic Theory and Econometrics, Cambridge University Press.

Barnett, William A. (1993), "Monetary Policy, Credibility, and Politics under Exact Monetary Aggregation," in William Barnett, Melvin Hinich, and Norman Schofield (eds.), Political Economy: Institutions, Information, Competition and Representation, Proceedings of the Seventh International Symposium in Economic Theory and Econometrics, Cambridge University Press.

Barnett, William A. and Ping Chen (1988), "The Aggregation-Theoretic Monetary Aggregates are Chaotic and Have Strange Attractors: An Econometric Application of Mathematical Chaos," in William Barnett, Ernst Berndt, and Halbert White (eds.), Dynamic Econometric Modeling, Proceedings of the Third International Symposium in Economic Theory and Econometrics, Cambridge University Press, Cambridge, 199-246.

Barnett, William A., Douglas Fisher, and Apostolos Serletis (1992), "Consumer Theory and the Demand for and Measurement of Money," Journal of Economic Literature, 30, 2086-2119.

Barnett, William A., A. Ronald Gallant, Melvin Hinich, and Mark Jensen (1993), "Robustness of Nonlinearity and Chaos Test to Measurement Error, Inference Method, and Sample Size," working paper, Washington University in St. Louis.

Barnett, William A. and Melvin J. Hinich (1992), "Empirical Chaotic Dynamics in Economics," Annals of Operations Research, 37, 1-15.

Calvo, Guillermo A. (1978), "On the Time Consistency of Optimal Policy in a Monetary Economy," Econometrica, 46, 1411-1428.

Cass, D. and K. Shell (1983), "Do Sunspots Matter?," Journal of Political Economy, 91, 193-227.

Diewert, W. Erwin (1976), "Exact and Superlative Index Numbers," Journal of Econometrics, 4, 115-145.

Diewert, W. Erwin (1980), "Aggregation Problems in the Measurement of Capital," in D. Usher (ed.), The Measurement of Capital, University of Chicago Press, 437-538.

Gorman, W. M. (1953), "Community Preference Fields," Econometrica, 21, 63-80.

Grandmont, J. M. (1985), "On Endogenous Competitive Business Cycles," Econometrica, 53, 995-1045.

Muellbauer, John (1975), "Aggregation, Income Distribution and Consumer Demand," Review of Economic Studies, 42, 524-544.

Woodford, Michael (1989), "Imperfect Financial Intermediation and Complex Dynamics," in William Barnett, John Geweke, and Karl Shell (eds.), Economic Complexity: Chaos, Sunspots, Bubbles, and Nonlinearity, Proceedings of the Fourth International Symposium in Economic Theory and Econometrics, Cambridge University Press, Cambridge, 309-338.