Estimating Policy-Invariant Technology and Taste Parameters in the Financial Sector,
When Risk and Growth Matter

by

William A. Barnett, Milka Kirova, and Meenakshi Pasupathy,
Washington University in St. Louis,

and Piyu Yue,
IC2 Institute at the University of Texas at Austin
November 16, 1994

1. Introduction

This paper provides an approach to the estimation of technology parameters in the financial sector. The relevant technologies are those of the financial intermediaries that produce inside money as output services and the nonfinancial firms that demand financial services as inputs to production technology. Virtually every firm in the economy falls into one of those two groups. We also provide the analogous results for consumer demand. In that case, the Euler equation parameters become parameters of tastes, rather than of technology. The problems that we seek to solve through our approach to modeling and estimation of those tastes and technologies are the "Lucas Critique" and what Chrystal and MacDonald (1994, p. 76) recently have called the "Barnett Critique." We also explore the tracking ability of the Divisia monetary aggregates and simple sum monetary aggregates relative to the GMM estimated exact rational expectations monetary aggregates nested within the technologies of firms and utility functions of consumers.

1.1 The Lucas Critique

According to the Lucas Critique, private sector parameters and the parameters of central bank policy rules are confounded together within the demand and supply solution functions that typically are estimated in macroeconometric models. When modeled dynamically, those demand and supply functions are the feedback rules or contingency plans that comprise the solution functions to dynamic programming or optimal control decisions of consumers and firms. However, the central bank's policy process is among the laws of motion serving as constraints in the private sector's dynamic decision. Hence the feedback rules that solve the private sector's decision depend upon the parameters of those processes as well as upon the private sector's own taste and technology parameters. Shifts in the parameters of the central bank's policy process will shift the private sector's solution feedback rules.

The source of this confounding is the solution of the first order conditions (Euler equations) of the private sector's decision, since that solution cannot be acquired without augmenting the private sector's Euler equations with the government's policy rules. In particular, the central bank's policy rule, interest rate processes, and other governmentally influenced stochastic processes for variables that are in the private agent's decision but are not under the control of that private decision maker must be appended to the private decision maker's Euler equations, before the solution for the feedback rules (demand and supply functions) can be found. But if the Euler equations of the private sector are estimated directly, the confounding problem is avoided. Hence in macroeconomics in general, there is wide acceptance of the idea that Euler equations should be estimated, rather than the demand and supply functions that are the solution to the augmented system. In addition, generalized method of moments (GMM) estimation has made the estimation of Euler equations practical.

Despite the influence of Euler equation estimation and the Lucas Critique in macroeconomics in general, a substantial portion of the literature on monetary economics has continued to base its conclusions on estimates of money demand and money supply functions, which are vulnerable to the Lucas Critique. An exception is Poterba and Rotemberg (1987), who have proposed and applied an approach to Euler equation estimation applicable to consumer decisions in money markets. In this paper, we apply the analogous approach to modeling and estimating the technologies of firms in the financial sector.

1.2 The Barnett Critique

According to the Barnett Critique, as defined by Chrystal and MacDonald (1994, p. 76), an internal inconsistency exists between the microeconomics used to model private sector structure and the aggregator functions used to produce the monetary aggregate data supplied by central banks. The result can do considerable damage to inferences about private sector behavior, when central bank monetary aggregate data are used. Chrystal and MacDonald (1994, p. 76) have observed the following regarding "the problems with tests of money in the economy in recent years....Rather than a problem associated with the Lucas Critique, it could instead be a problem stemming from the 'Barnett Critique.'" In fact Barnett Critique issues have been used to cast doubt upon many widely held views in monetary economics, as recently emphasized by Barnett, Fisher, and Serletis (1992), Belongia (1993), and Chrystal and MacDonald (1994). Based upon this rapidly growing line of research, Chrystal and MacDonald (1994, p. 108) conclude---in our opinion correctly---that: "Rejections of the role of money based upon flawed money measures are themselves easy to reject."

The Poterba and Rotemberg approach to inference about consumer behavior in the monetary sector circumvents the Barnett Critique by nesting the monetary aggregator function within the consumer's utility function and estimating the aggregator function jointly with the other parameters of the consumer's decision. Hence Poterba and Rotemberg have extended Barnett's (1980,1987) perfect certainty theory to the case of risk. Subsequently, Barnett, Hinich, and Yue (1991) have investigated the tracking abilities of various nonparametric statistical index numbers, such as the Divisia, relative to the Poterba and Rotemberg estimated aggregator function under risk. Recently, Stock and Feldstein (1994) have drawn further attention to the importance of risk by finding possible empirical gains from incorporating risky stock and bond mutual funds within weighted monetary aggregates, although Stock and Feldstein, unlike Poterba and Rotemberg, do not base their analysis upon formal aggregation theory.

An initial step in the direction of Euler equation estimation of technology in the financial sector with internally consistent nested monetary aggregation was taken by Barnett and Zhou (1994a). However, their paper, which concentrated on the aggregation problem, introduced little dynamics into the decision of the commercial banks that they modeled. We introduce capital dynamics through Tobin's Q. We also introduce the concept of Euler equation estimation with nested monetary input aggregation into the literature on manufacturing firm modeling. The importance of investigating money demand function properties separately across sectors, including the manufacturing firm sector, has been emphasized in Drake and Chrystal (1994).

2. Financial Intermediaries

One of the recent approaches to modeling financial intermediaries is to model them as profit maximizing neoclassical multiproduct firms, which produce financial services, such as demand deposits and time deposits, as outputs by employing financial and nonfinancial factors as inputs. Early work that used this approach was based on the assumption of perfect certainty. See Hancock (1985, 1987, 1991), Barnett (1987), and Barnett and Hahm (1994). Barnett and Zhou (1994a) extended this approach to the case of uncertainty. In this paper we extend Barnett and Zhou's model by introducing capital accumulation and by relaxing the assumption of "no retained earnings." We also rigorously nest exact supplied monetary output aggregates within the transformation function of the financial intermediary, so that supply side monetary aggregation can be accomplished in a manner that circumvents the Barnett Critique. We derive and estimate the Euler equations in a manner that circumvents the Lucas Critique.

We view the resulting model as a step in the direction of exploring technological change and economies of scale and scope in financial intermediation in a manner that is invariant to central bank policy intervention, and in a manner that can produce inside money aggregates that are consistent with the theory that produced the policy invariant Euler equations.

2.1. Financial Firm's Production Under Perfect Certainty

We begin with the existing perfect certainty model, which we later extend. Financial firms produce financial assets through financial intermediation between borrowers and lenders. Hancock (1985, 1987, 1991), Barnett (1987), and Barnett and Hahm (1994) model the financial firm in a manner that fully incorporates the role of "producer of monetary assets." Under the assumption of perfect certainty, the financial intermediary is modeled as a conventional neoclassical firm which maximizes the discounted present value of variable profits subject to a technological constraint in the form of a transformation function. The financial firm produces monetary assets including demand and time deposits as outputs by using monetary goods such as cash and nonmonetary goods such as labor and materials as inputs.

Hancock and Barnett differ in their specifications of the variable profit function. Hancock (1991) specifies the variable profit during period t as follows:

(2.1)

where is expenditure on variable inputs, is the real balance of financial good i at time t, is an asset for and a liability for , and is the holding cost on the financial good if it is a liability or revenue per dollar if it is an asset. The indicator variable , if the financial service is an asset, and , if the financial service is a liability. The general price index is . This variable profit function includes both period t-1 and period t values of the monetary goods and their respective prices.

In contrast, Barnett (1987) uses only current period variables in his specification of variable profit at the beginning of period t. His specification of the present value of period t variable profits at the beginning of the period is

(2.2)

where is the vector of monetary assets produced by the firm, the vector of user cost prices of these monetary assets, and are respectively the quantities of nonmonetary non-labor factor inputs and their prices, Rt is the portfolio rate of return on the financial firm's assets, are the quantities of labor inputs, are the corresponding wage rates, is the excess reserves held by the firm, and is the user cost of . See Barnett (1987) for the formulas for those user costs. This difference in the specification of the variable profit function causes the discounted capitalized value of the profit stream in their models to differ, but only by a function of initial conditions that are invariant to the firm's setting of its decision variables.

Another difference between the model specifications of Barnett and Hancock arises in the definition of outputs of the financial firm. Barnett classifies demand and time deposits as outputs, while loans are not classified either as inputs or outputs but rather as dependent functions of other variables under the assumption of no retained earnings. Hancock lets the estimated user costs determine whether a particular monetary good is an input or output. Monetary goods with positive user costs are classified as inputs while those with negative user costs are outputs. This rule leads Hancock to the classification of demand deposits as outputs and time deposits as inputs, in conflict with Barnett's classification. Based on the same rule, loans are classified as outputs since all loan types yield positive net revenue. This classification rule adopted by Hancock does not seem reliable, since it does not fix the inputs and outputs of the financial firm. An asset could be an input for some observations and an output for others. We believe that the difference in classification of time deposits between Barnett and Hancock reflects a difference in their cost of capital data and thereby a difference in their user cost imputations for time deposits.

In our model below, we follow Hancock's specification of the variable profit function, except for minor modifications. However, Barnett's method is followed in defining outputs and inputs. In accordance with this method of defining inputs and outputs, demand deposits and time deposits are categorized as outputs of the firm, while cash, labor, materials, and capital constitute the firm's inputs in the production process.

2.2 Financial Firm's Production Under Uncertainty

Hancock (1985, 1987, 1991), Barnett (1987), and Barnett and Hahm (1994) assume perfect certainty. However, this is not necessarily a reasonable assumption for a financial firm. Risk is an important factor in the decision making of a financial firm. Risk applies not only to future prices and interest rates, but also to current interest rates, since contemporaneous interest rates, which are paid at the end of the period, are not known at the time of decision making. Barnett and Zhou (1994a) derive a model of financial firm behavior under risk and also find the exact monetary services output aggregate.

Barnett and Zhou build a model of a risk averse financial firm which issues its own liabilities and invests the borrowed funds in primary financial markets. Real resources such as labor and materials are used as factors of production in the creation of accounts providing monetary services. Those created accounts are the liabilities of the firm. The financial firm makes its profits from the interest rate spread between its assets (loans) and its produced liabilities. Rational expectations are assumed.

Variable profits for the financial firm are specified in the form given in equation (2.1). Portfolio investment during period t, , is constrained by total available funds. This constraint is given by:

(2.3)

where is the general price index, is the required reserves ratio on the ith produced liability, , is the real balance of cash holding during period t, and and are respectively the quantity of the jth real resource used in production in period t and its price. The above relationship implies that total deposits are allocated to excess reserves, required reserves, and payments for real resources, and the residual is invested in loans. In other words, the financial firm does not have any undistributed retained earnings. This is a stringent assumption, since it implies that the firm does not carry over any part of its earnings from one period to another. The dividend payout ratio is thereby assumed to be 100%, and the nature of the model's dynamics is heavily restricted, as emphasized by Brainard (1994). In the model derived in the next section we modify this constraint by allowing for nonzero retained earnings. However, the retained earnings are assumed to be used by the financial firm in the form of capital investment.

In the Barnett and Zhou model, dynamics are further constrained by the assumption that capital is a fixed factor. Since the objectives of their paper were directed at aggregation theoretic explorations in monetary aggregation theory, the dynamics of the model were of only modest importance. But in determining policy invariant structural inferences, as is the objective of this paper, we need to be able to investigate traditional properties of technology. Ignoring variable capital growth and retained earnings is not suitable for such objectives. We now modify Barnett and Zhou's assumption of 100% dividend payout ratio and their assumption that no variable capital factors exist.

2.3. The Model

The theoretical model builds on the groundwork set forth by Barnett and Zhou (1994a) in their adaptation and application of Hancock's (1991) specification of the variable profit function. The financial firm uses real resources such as labor, capital and other material inputs, plus monetary input in the form of cash in the production of the services of the produced liabilities. The output of the firm in our application consists of demand deposits and time deposits, which are liabilities to the firm.

Let be the real balances of the asset (loan) portfolio, the real balances of the ith produced account (liability) type, the real balances of cash holdings, the quantity of jth real input (including labor), and the quantity of capital stock of the financial firm at time t. In the model, constitute the outputs of the financial firm, while , , and are the inputs. Let be the portfolio rate of return, which is unknown at the beginning of period t, and let be the holding cost per dollar of the ith liability. All financial transactions are contracted at the beginning of the period. Interest on the deposits is paid at the end of the period. The cost per unit of the jth real input, , is incurred at the beginning of the period. Let be the cost of capital and be the general price index, which is used to deflate nominal to real terms.

Variable profits (net of investment expenditure), , at the beginning of period t, can be represented by

(2.4)

The first two terms in the above equation represent the change in variable profits from rolling over the loan portfolio during period t. The third and fourth terms represent the change in the nominal value of excess reserves. The fifth term represents the change in the firm's variable profits from the change in the issuance of produced financial liabilities. The next two terms account for the change in from the change in the level of capital stocks. The eighth term constitutes payments for real inputs, and the last term is the expenditure on investments.

Portfolio investment, , is constrained by total available funds. The constraint is given by

(2.5)

where is the required reserves ratio on the ith produced liability. Equation (2.5) implies that the total deposits are allocated to required reserves, excess reserves, payment for all real inputs used in production, investment in capital, and investment in loans.

The time-to-build approach is adopted to model capital dynamics. Capital accumulation based on this approach is given by:

(2.6)

where the depreciation rate is a constant and is assumed to be given. Gross investment at time t-1, , becomes productive only in period t.
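With a constant depreciation rate $\delta$ and gross investment $I_{t-1}$ becoming productive one period later, equation (2.6) presumably takes the standard one-period time-to-build form (notation ours):

```latex
K_t = (1 - \delta)\, K_{t-1} + I_{t-1}
```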

Substituting equations (2.5) and (2.6) into equation (2.4) to eliminate investment in loans and investment in capital goods, we get the variable profits at time t to be

(2.7)

The financial firm maximizes the expected value of the discounted intertemporal utility of its variable profits stream, subject to the firm's technological constraint. The firm's optimization problem is then given by:

(2.8)

where is the expectation at time t, is the subjective rate of time preference, is the utility function, is the variable profit at time s, and is the transformation function.

The transformation function, , is convex in its arguments. The derivatives of with respect to the inputs and outputs are respectively given by:

(2.9)

and

(2.10)

To derive the Euler equations using the Bellman method, we must select state and control variables in a manner that will transform the decision to be in Bellman form. The financial firm is assumed to behave competitively both in the input and the output markets. Hence , and cannot be controlled by the firm. Let be the vector of all state variables and be the vector of all control variables during period s. We define the vector to contain , , , , , , , , , , and . We define the vector to contain , and . Let defined by, , be a subset of . We assume that follows a first-order Markov process, with transition equations given by the conditional distribution . Hence the transition equations, which represent the evolution of the state, for the vector of exogenous state variables (, , , , ,,,) are implicitly defined by . The remaining transition equations are defined by the obvious time shifts between some of the elements that appear simultaneously, but with time shifts, in the control and state vectors and by the technological transformation function, W, used to produce the transition equation of the remaining state variable, . Making these substitutions and changes in notation, it can be shown that the problem now is in Bellman form.

We now specify the utility function, U, to be in the class of functions exhibiting Hyperbolic Absolute Risk Aversion (HARA). Functions in the HARA class can be represented by:

(2.11)

where , h and d are parameters to be estimated.
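For reference, a common parameterization of the HARA class is the following (standard notation, not necessarily the paper's parameterization):

```latex
U(\pi) \;=\; \frac{1-\gamma}{\gamma}\left( \frac{a\,\pi}{1-\gamma} + b \right)^{\gamma}, \qquad a > 0 .
```

Power (constant relative risk aversion), exponential (constant absolute risk aversion), and quadratic utility all arise as special or limiting cases of this form, which is why the power utility function used in the empirical application below can be nested within the HARA class.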

Using Bellman's method and the Benveniste and Scheinkman equation, we obtain the following set of Euler equations:

(2.12)

(2.13)

(2.14)

where (2.15)

Since closed form algebraic solutions rarely exist for Euler equations, we can solve only numerically for from the system of Euler equations and the transformation function, W. The parameter estimation can be done through the estimation of Euler equations under rational expectations by using Hansen and Singleton's generalized method of moments (GMM) estimation.
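The GMM logic can be illustrated on a stylized Euler equation. Everything below is a hypothetical sketch: the data are simulated, and the moment conditions, instruments (a constant and an observable state s), and parameter names are ours, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
T = 20000
beta_true, r_true = 0.96, 2.0   # discount factor and risk aversion (assumed values)

# Simulate data satisfying a stylized Euler equation E[beta * R * g**(-r) - 1 | z] = 0.
s = rng.normal(size=T)                                  # observable state (instrument)
g = np.exp(0.02 + 0.03 * s + rng.normal(0.0, 0.02, T))  # gross growth rate
eps = np.exp(rng.normal(-0.5 * 0.05**2, 0.05, T))       # mean-one disturbance
R = g**r_true * eps / beta_true                         # gross return consistent with the Euler equation

def gbar(theta):
    """Sample moment conditions: Euler residual interacted with instruments (1, s)."""
    beta, r = theta
    u = beta * R * g**(-r) - 1.0
    return np.array([u.mean(), (u * s).mean()])

def J(theta):
    """GMM objective gbar' W gbar with identity weighting matrix."""
    m = gbar(theta)
    return float(m @ m)

res = minimize(J, x0=[0.9, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-8, "fatol": 1e-14, "maxiter": 5000})
beta_hat, r_hat = res.x
```

Because this system is exactly identified (two moments, two parameters), the objective is driven to approximately zero; with more instruments than parameters, the minimized objective scaled by the sample size yields Hansen's test of the overidentifying restrictions used later in the paper.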

2.4 Output Aggregation

The financial firm's outputs consist of demand deposits and time deposits. The financial firm's production of demand and time deposits is important in determining the level of inside money in the economy. In this section, we find the aggregation-theoretic exact quantity output aggregate that measures the firm's produced service flow. Relative to the money markets, our aggregation in this case is on the supply side. In a later section below, when we investigate monetary service factor demand by nonfinancial manufacturing firms, we will be producing demand side monetary aggregates, as also is relevant to consumer demand for monetary services.

Generating the exact quantity aggregate consists of first identifying the components over which aggregation is admissible and then determining the aggregator function defined over the identified components. The first step determines the existence of an exact aggregate, and the second step produces that aggregate in the manner that is consistent with microeconomic theory. The second step cannot be applied unless the first step succeeds in identifying the existence of an admissible cluster of components. The condition for the existence of an admissible component group is blockwise weak separability. In accordance with the definition of weak separability, a component grouping is admissible if and only if the group can be factored out of the rest of the economy's structure through a subfunction. Then the economic structure can be represented in the form of a composite function, with the goods in the separable block being the only goods in the inner function of the structure. If this condition is satisfied, an exact quantity aggregate exists over the goods in the block, and the aggregator function that produces the exact quantity aggregate over those goods is the inner ("category") function itself. Without weak separability, no such inner function exists and hence no aggregate exists.

Let $y$ be the firm's output vector, and let $x$ be the input vector, so that the transformation function can be written as $W(y, x)$. An exact supply side aggregate exists over all of the firm's outputs if and only if $y$ is weakly separable from $x$ within the function $W$. In accordance with the definition of weak separability, there then exist two functions $G$ and $f$ such that

$W(y, x) = G(f(y), x)$,

where the output aggregator function, $f$, is a convex function of $y$. Although weak separability alone is sufficient for the existence of an aggregate, a considerable (although unnecessary) simplification is available if we also assume that $f$ is linearly homogeneous in $y$.

The weak separability condition on functional structure is equivalent to the following restriction (the Leontief-Sono condition, stated here in our notation):

$\dfrac{\partial}{\partial x_k}\left(\dfrac{\partial W/\partial y_i}{\partial W/\partial y_j}\right) = 0 \quad \text{for all outputs } i, j \text{ and all inputs } k.$
If we can test for weak separability of the transformation function and then estimate the resulting aggregator function, we obtain the econometrically estimated exact output aggregate. The related literature on statistical index numbers, such as Divisia and Laspeyres, seeks to produce nonparametric approximations that can track the level of the exact aggregate over time without the need to estimate the parameters of the aggregator function itself.
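The distinction between the econometric aggregate and its statistical-index approximations can be made concrete with the discrete-time (Törnqvist-Theil) Divisia index, whose one-period growth rate is a share-weighted average of component growth rates. The quantities and user-cost prices below are illustrative, not the paper's data:

```python
import numpy as np

def divisia_growth(q0, q1, p0, p1):
    """Log growth of the Tornqvist-Theil discrete Divisia quantity index:
    component log growth rates weighted by averaged expenditure shares."""
    e0, e1 = p0 * q0, p1 * q1                      # expenditures in each period
    s_bar = 0.5 * (e0 / e0.sum() + e1 / e1.sum())  # averaged expenditure shares
    return float(np.sum(s_bar * np.log(q1 / q0)))

# Two monetary assets: real balances q and user-cost prices p in periods 0 and 1
q0 = np.array([100.0, 200.0]); q1 = np.array([110.0, 202.0])
p0 = np.array([0.05, 0.02]);   p1 = np.array([0.05, 0.02])
g = divisia_growth(q0, q1, p0, p1)   # growth rate of the Divisia aggregate
```

The resulting growth rate necessarily lies between the growth rates of the components, weighted toward the asset with the larger expenditure share.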

In this paper the econometric estimate of the aggregator function is obtained by estimating the Euler equations using the generalized method of moments (GMM) technique.

2.5 Testing for weak separability

The conventional parametric approach to testing weak separability is adopted in this paper, since weak separability is a strictly nested null hypothesis within our parametric specification of technology. To minimize the biases that can be produced from specification error, we use a flexible functional form for technology. Unfortunately flexible functional forms need not satisfy the regularity conditions imposed by economic theory, including the monotonicity and curvature conditions. Hence we must consider methods for testing and imposing those conditions, at least locally, as well as methods for testing and imposing global blockwise weak separability of the technology in its outputs. For existence of aggregator functions, the weak separability must be global. Hence we must test and impose weak separability globally. We use the Generalized McFadden functional form to specify the technology of the firm. That specification, which also was used in the case of stochastic choice by Barnett and Zhou (1994a), was originated by Diewert and Wales (1991), who also originated the Generalized Barnett functional form. That latter model was applied by Barnett and Hahm (1994) in the perfect certainty case, but has not yet been adapted to the case of stochastic choice.

We assume that the transformation function, , is linearly homogeneous. Instead of specifying the form of the full transformation function , and then imposing weak separability in y, we directly impose weak separability by specifying and separately. The specification for is then obtained by substituting into . Since and are both specified to be flexible, the full technology is flexible, subject to the separability restriction.

The function is specified to be the symmetric generalized McFadden functional form

, (2.16)

where , and , and are parameters to be estimated. The matrix is symmetric. The vector contains all fixed nonnegative constants, which are chosen by the researcher. The matrix is partitioned as follows:

,

where is a scalar, is a row vector, is an column vector, and is an symmetric matrix. Since is symmetric, we have .

Let be the chosen point about which the functional form is locally flexible. Within the class of linearly homogeneous transformation functions, the specification given above is not parsimonious, and hence we can impose further restrictions on the model without losing local flexibility. We impose the following restrictions, which reduce the number of free parameters in our specification to the minimum number required to maintain local flexibility.

, (2.17)

, (2.18)

, (2.19)

Solving (2.18) and (2.19) for and , and then substituting into (2.16) results in

(2.20)

which is flexible at .

The aggregator function is specified as:

, (2.21)

where and the symmetric matrix contain the parameters to be estimated, while is the vector of fixed nonnegative constants chosen by the researcher. As with H above, we can impose the following restrictions without losing local flexibility:

, (2.22)

(2.23)

(2.24)

Substituting (2.21) into (2.20), we get the following flexible functional form for , which satisfies the weak separability condition

(2.25)

The neoclassical curvature conditions require and to be convex functions. Monotonicity requires that and . Convexity of and requires the matrices and to be positive semidefinite. Global convexity of further requires the condition

. (2.26)

Positive semidefiniteness of the matrices and can be imposed without loss of flexibility by substituting

(2.27)

and

, (2.28)

where q is a lower triangular matrix and u is a lower triangular matrix.
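The substitution in (2.27) and (2.28) guarantees positive semidefiniteness because any matrix of the form $LL'$, with $L$ lower triangular, is positive semidefinite. A quick numerical check, using an arbitrary illustrative lower triangular matrix:

```python
import numpy as np

L = np.tril(np.array([[1.0, 0.0],
                      [-0.7, 0.4]]))   # arbitrary lower-triangular matrix
B = L @ L.T                            # matrix implied by the substitution
eigs = np.linalg.eigvalsh(B)           # all eigenvalues must be nonnegative
```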

The method of squaring technique can be used to obtain local monotonicity of at . The first derivatives of are

(2.29)

(2.30)

At the value of , the derivatives reduce to

(2.31)

and

. (2.32)

Imposing monotonicity on (2.31) and (2.32) results in

and (2.33)

The transformation function defined by (2.25) and restricted to satisfy equations (2.17), (2.22)-(2.24), (2.26)-(2.28), and (2.33) is flexible, locally monotone at the point of approximation, and globally regular, subject to the weak separability condition. Local monotonicity is verified empirically at each point within the data.

Testing for weak separability and estimating the parameters of the transformation function can be done by Hansen and Singleton's generalized method of moments (GMM) method. Substituting the functional form given by equation (2.25) into the system of Euler equations, we obtain the structural model, which is a system of integral equations. The GMM estimator of the parameters of such a nonlinear rational expectations system is asymptotically efficient and normally distributed under very weak conditions. We test for weak separability using Hansen's asymptotic test of the overidentifying restrictions. Since equation (2.25) was derived after imposing weak separability, (2.25) can be substituted into the Euler equations to impose the null hypothesis of weak separability. If the overidentifying restrictions are rejected, then we reject the null hypothesis of weak separability. That rejection in turn would imply that no aggregator function exists over all of the outputs of the firm.
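In standard notation (with $T$ observations, $q$ moment conditions, $p$ parameters, and $\hat{S}$ a consistent estimate of the long-run covariance matrix of the moments), Hansen's statistic is:

```latex
J_T \;=\; T\,\bar{g}_T(\hat{\theta})'\,\hat{S}^{-1}\,\bar{g}_T(\hat{\theta})
\;\xrightarrow{\;d\;}\; \chi^2(q - p).
```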

2.6. Empirical Application

We apply our approach to estimating the technology of commercial banks. The outputs of that aggregated financial firm in our application consist of demand deposits and time deposits. Demand deposits and time deposits account for the major portion of the fund-providing functions of the bank's balance sheet. The inputs used in the production process include both financial and nonfinancial inputs. The financial input in the form of cash is excess reserves. The nonfinancial inputs include labor, materials, and physical capital. The output vector is given by and the input vector is , where is demand deposits, is time deposits, Ct is excess reserves, is labor input, Mt is material inputs, and Kt is capital.

In our empirical application we use the power utility function, which is a nested special case of the general class of HARA utility functions, given by equation (2.11). We use this simplification, since the available sample size does not permit the use of the more general form. The power utility function is obtained by setting and by imposing the restriction in equation (2.11). The power utility function is then represented by:

(2.34)

and the derivative of is given by

(2.35)
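With risk aversion parameter r, equations (2.34) and (2.35) are presumably the standard power utility function and its derivative (notation ours):

```latex
U(\pi_t) \;=\; \frac{\pi_t^{\,1-r}}{1-r},
\qquad
U'(\pi_t) \;=\; \pi_t^{-r}.
```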

Using equations (2.12) - (2.14) and equation (2.35), the Euler equations are

(2.36)

(2.37)

(2.38)

(2.39)

(2.40)

where and are respectively the holding costs of demand deposits and time deposits, and are respectively the required reserves ratio on demand and time deposits, and and are respectively the prices of labor and material inputs. The derivatives of with respect to the various inputs and outputs are given by equations (2.29) and (2.30).

Before using Hansen's statistic to test for weak separability in outputs, we have to choose the fixed constants and the center of local approximation. We choose and as the center of approximation. To locate the center within the interior of the observations, we rescale the data about the midpoint observation so that the rescaled data becomes and , where is the midpoint observation. Each price vector is correspondingly rescaled by multiplying by the midpoint observation. This rescaling of prices leaves the dollar expenditure on various goods unaffected by the rescaling of the corresponding quantity.
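The expenditure invariance claimed here can be verified numerically; the quantity and price series below are hypothetical:

```python
import numpy as np

q = np.array([2.0, 8.0, 5.0])    # hypothetical quantity series
p = np.array([3.0, 1.5, 2.0])    # corresponding prices
q_mid = q[len(q) // 2]           # midpoint observation

q_s = q / q_mid                  # rescaled quantities (midpoint becomes 1)
p_s = p * q_mid                  # prices rescaled in the opposite direction

# Dollar expenditure on each good is unaffected by the rescaling
assert np.allclose(p_s * q_s, p * q)
```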

The fixed nonnegative constants are chosen such that

(2.41)

and

, (2.42)

where and are the sample means of and respectively. The and thus chosen satisfy restrictions (2.17) and (2.22) respectively.

Equation (2.23) implies , which is imposed through the substitution . There are also inequality restrictions to be imposed. The monotonicity condition (2.33) implies , and hence from equation (2.23) we also have . Combining these two conditions and the mathematical identity , we have the substitution , where the parameter q must now be estimated.

We further normalize , since . The monotonicity condition (2.33) implies that . We impose that restriction by replacing by and estimating . The convexity conditions are imposed by replacing the matrices and by the matrices and respectively, where the lower triangular matrices and are given by

and . (2.43)

Equation (2.24) implies

, (2.44)

which, when solved, produces the restrictions .

These relationships reduce the matrix to

.

Following these substitutions, the parameters that remain to be estimated within technology are , , and . In addition the subjective rate of time discount μ and the risk aversion parameter ρ must be estimated.

The data used for estimating the model was mainly obtained from the Federal Reserve Bank Functional Cost Analysis (FCA) Program. Data on the National Average Banks for the years 1966-1992 was used in the estimation. Labor inputs consist of two groups: managerial and non-managerial. Data on expenditure and quantity for the two categories of labor were obtained from FCA. Material inputs are divided into three categories: printing and stationery, telephone and telegraph, and postage, freight and delivery. Physical capital is made up of structures (bank buildings), furniture and equipment, and computers. Data on expenditure on the various types of material inputs and physical capital were obtained from the FCA, while the corresponding price indices were obtained from the Survey of Current Business. A quantity aggregate and the corresponding price aggregate must be constructed for each of the three nonfinancial inputs.

Data on the nominal quantity of demand deposits and time deposits, net interest rate on demand deposits and time deposits, and the bank's portfolio rate of return were obtained from the FCA data set. The required reserves ratio was obtained from the Federal Reserve Bulletin. Nominal dollar balances of all financial goods were converted to real balances by deflating the nominal balances using Fisher's ideal price index.
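The deflation step uses Fisher's ideal price index, which is the geometric mean of the Laspeyres and Paasche indexes. A minimal two-period sketch (the function name is ours):

```python
import numpy as np

def fisher_ideal(p0, q0, p1, q1):
    """Fisher ideal price index between a base period (p0, q0) and a
    current period (p1, q1): the geometric mean of the Laspeyres index
    (current prices at base quantities) and the Paasche index (current
    prices at current quantities)."""
    laspeyres = np.dot(p1, q0) / np.dot(p0, q0)
    paasche = np.dot(p1, q1) / np.dot(p0, q1)
    return np.sqrt(laspeyres * paasche)
```

Real balances are then nominal balances divided by the chained value of this index.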

2.7. Results

The parameter estimates were obtained by estimating the system of Euler equations (2.36)-(2.40) using the GMM estimation procedure on mainframe TSP (version 7.02). This estimation process allows for heteroskedasticity and autocorrelation in the disturbance terms. We specified a second order moving average serial correlation. Bartlett kernels were specified for the kernel density. The discount window rate, the federal funds rate, the composite bond rate, the lagged value of excess reserves, the lagged value of the Fisher ideal price index, and a constant were chosen as instruments. In the estimation, to ensure that , we replace by and estimate . Similarly, to rule out the possibility of getting negative values for the subjective rate of time preference, , we replace by and estimate .
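The Bartlett-kernel treatment of the MA(2) disturbances amounts to a Newey-West estimate of the long-run covariance of the moment conditions, whose inverse serves as the GMM weighting matrix. A sketch under that interpretation, with a hypothetical array `g` of sample moment conditions:

```python
import numpy as np

def bartlett_hac(g, lags=2):
    """Newey-West long-run covariance of a (T, m) array of moment
    conditions g, using Bartlett kernel weights w_l = 1 - l/(lags+1).
    With lags=2 this accommodates the MA(2) serial correlation
    specified in the text. The GMM weighting matrix is the inverse of
    this estimate."""
    T, m = g.shape
    g = g - g.mean(axis=0)
    S = g.T @ g / T                        # contemporaneous covariance
    for l in range(1, lags + 1):
        w = 1.0 - l / (lags + 1.0)         # Bartlett weight
        gamma = g[l:].T @ g[:-l] / T       # lag-l autocovariance
        S += w * (gamma + gamma.T)
    return S
```

The Bartlett weights guarantee the resulting matrix is positive semidefinite, which is why they are the usual choice for HAC weighting in GMM.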

As is evident from Table 2.1, the precision of the GMM estimates of the parameters of financial firm technology is extremely high for all of the parameters in the aggregator function z0(z). Figures 2.1 and 2.2 display the levels and growth rates of the estimated theoretical aggregate, the simple sum index, and the Divisia index. Clearly the estimated theoretical and Divisia indices move closely together, while the simple sum aggregate tracks less well.

The estimated exact aggregate depends upon our choice of estimated aggregator function, but was estimated in a manner that permits risk. The Divisia index on the other hand does not depend upon the form of the aggregator function, but is derived under the assumption of perfect certainty. However, the damage to its tracking ability from risk has been shown in other research to become significant only when the degree of risk and of risk aversion are very high. Evidently the degree of risk and of risk aversion were not sufficiently large to do much damage to the tracking ability of the Divisia index. The violations of regularity reported in the footnote of Table 2.1 were few and did not seem to be a serious problem in modeling bank behavior.

We tested weak separability of monetary assets from the other variables in technology. We ran that test using Hansen's χ2 test for no overidentifying restrictions in the model with weak separability imposed through the structure of the model. The test statistic is F=TQ, where T=25 is the sample size and Q=0.36 is the value of the objective function in the GMM estimation. The test statistic is distributed as a χ2 with e-f=7 degrees of freedom, where e is the number of orthogonality conditions and f is the number of parameters. The calculated test statistic is 9, and the critical value at the 10% level of significance is 12.02. Hence we cannot reject the hypothesis of weak separability of monetary assets. That weak separability condition is the existence condition for an economic monetary aggregate.
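The arithmetic of Hansen's test can be reproduced directly (the function wrapper is ours; the numbers are those quoted in the text):

```python
from scipy.stats import chi2

def hansen_j_test(T, Q, df, alpha=0.10):
    """Hansen's test of overidentifying restrictions: the statistic
    T*Q is asymptotically chi-squared with df = (orthogonality
    conditions - parameters) degrees of freedom."""
    J = T * Q
    critical = chi2.ppf(1 - alpha, df)
    return J, critical, J > critical   # True in third slot => reject

# With the values in the text: J = 25 * 0.36 = 9, and the 10% critical
# value of a chi-squared with 7 degrees of freedom is about 12.02, so
# weak separability is not rejected.
```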

3. Manufacturing Firms

As discussed above, the supply side model produced in section 2 is an extension of the model previously produced and estimated by Barnett and Zhou (1994a), and an analogous model on the consumer demand side has been produced and estimated by Poterba and Rotemberg (1987) and by Barnett, Hinich, and Yue (1991). But for manufacturing firms this modern approach to modeling, with nested demand-side monetary aggregator functions and GMM estimation of Euler equations, has not previously been attempted. However, in the perfect certainty case, relevant theory is available in Barnett (1987), and a positive contribution to dynamic modeling of firm demand for money, although without nested exact quantity aggregation, has been made by Robles (1993). The potential importance of this under-researched area has been emphasized by Drake and Chrystal (1994). We now provide a model of a manufacturing firm that employs a monetary asset portfolio as inputs. We assume rational expectations under risk, and we investigate the existence of an exact aggregation-theoretic monetary asset input aggregate.

There is no unanimous agreement among economists about the specific role that money plays in the production process. But regardless of the explicit role of money in the operation of a manufacturing firm, a derived production function always exists that absorbs that motive into the firm's technology, even if no direct role exists for money inside the factory's production activities. When we enter the monetary asset portfolio into the firm's technology as factors of production, the technology should be understood to be that derived technology of the firm, and not necessarily the physical technology of the factory.

3.1 The Model

Our model is based on Barnett's (1987) monetary aggregation-theoretic approach, extended to include uncertainty and capital accumulation. Perfect competition in all markets and risk neutrality of the firm are assumed. The objective of the firm is to maximize the expected discounted value of its future variable profit flow, subject to its technology. The firm uses Lt real units of labor, Kt real units of capital goods, a vector et of monetary assets, and a vector xt of other variable inputs as factors of production in producing a vector yt of real output quantities during period t. The firm's technology is given by the transformation function:

(3.1)

The transformation technology W is assumed to be convex in its arguments in accordance with the properties of a neoclassical transformation function. In addition, the following monotonicity conditions hold:

,

,

and . (3.2)

Let yit be the ith real output component with a price , where i=1,...,I, xnt be the nth variable input with a price , where n=1,...,N, and wt be the wage rate of labor Lt, which is paid at the end of the period. The jth component of the real balances of monetary assets, held by the firm in period t is ejt , where . Real balances et are defined to equal nominal balances divided by , the true cost of living index. The return on holding nominal money balances of type j is rjt and is paid to the firm at the end of the period.

As the firm operates over time, it retains part of the earnings and uses them to finance its expansion and development. It is assumed that there exist markets for new and used capital. Capital accumulation is given by:

(3.3)

where It is gross investment and d is the physical depreciation rate of capital. Investment becomes productive instantaneously, but capital installation is costly to the firm. Thus the total costs of purchasing and installing It are given by , where is the price of capital goods and is a convex function, representing the costs of adjustment associated with installing capital.
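The accumulation law (3.3) and the installation-cost structure can be sketched as follows. The law of motion is the standard one stated in the text; the specific quadratic installation cost (gamma/2)·I² is an assumption for illustration, consistent with the quadratic adjustment-cost function substituted into the Euler equations later in this section.

```python
def next_capital(K, I, d):
    """Capital accumulation (3.3): next period's capital is
    undepreciated current capital plus gross investment."""
    return (1.0 - d) * K + I

def investment_cost(I, pK, gamma):
    """Total cost of purchasing and installing I units of capital:
    purchase cost pK*I plus a convex installation cost, here taken as
    the quadratic (gamma/2)*I**2 (an assumed parameterization)."""
    return pK * I + 0.5 * gamma * I ** 2
```

With d = 0.1, a firm holding K = 100 and investing I = 10 exactly replaces depreciation, so its capital stock is unchanged.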

Extending Barnett's (1987, eq. 4.3) formula to include q theory capital dynamics, the firm's variable profits during period t are:

(3.4)

where

(3.5)

The first term represents revenues from production during period t. The second term is the cost of other variable inputs xt, which is paid out at the beginning of the period. The third term represents the cost of labor supplied during the previous period, with the wage paid at the end of that period. As in section 2, the end of one period is assumed to coincide with the start of the next period, since time intervals are assumed to be closed on the left and open on the right. The fourth term in the equation represents the flow of funds from rolling over the firm's portfolio of monetary assets, where the first part of the term is the nominal value of the monetary asset portfolio available at the beginning of the period as a result of last period's holdings. The last term is the total cost of purchasing and installing capital during the period.

Summing over each period's discounted profit flow and substituting (3.5) into (3.4) to eliminate It, we obtain the intertemporal profit flow function:

(3.6)

where ms is a discount factor, defined as

and Ra is the rate of return on the firm's capital.

Regrouping the terms with common time subscripts in (3.6) gives the following form of the intertemporal profit flow function:

(3.7)

In equation (3.7) the contribution of monetary assets and depreciated capital from period t-1 is ignored, since it is fixed in period t and does not affect the variable profit flow in period t.

Define , , , , ,

and . Substituting these definitions into (3.7) gives another expression for the intertemporal profit function:

(3.8)

It is assumed that the manufacturing firm chooses the levels of output and real factors of production to maximize the expected discounted intertemporal profit flow, subject to its technology. Under the assumption of complete markets, perfect competition, and risk neutrality of the owners of the firm, the problem can be presented as the following dynamic choice problem:

Max

(3.9)

subject to

where Et denotes expectations, given the information available at period t.

The first order conditions of this stochastic optimal control problem can be derived by applying Bellman's dynamic programming method. To do so, the state and control variables must be selected in a manner such that the decision is in Bellman form. The prices and Rs are stochastic processes that are not controllable by the firm. The wage rate ws-1, the monetary asset yield rj,s-1, and are nonstochastic and are taken as given by the firm. The control variables are and Ks, while the selected state variables for period s are .

Define wt to be the vector of all state variables, and define ut to be the vector of all control variables. Let Ls be the subset of state variables defined by Ls= . We assume that Ls follows a first-order Markov process, with transitions governed by the conditional distribution function . This conditional distribution function defines implicitly the transition equation for the state variables included in Ls. The remaining transition equations are defined by the obvious time shifts between some of the elements that appear simultaneously, but with time shifts, in the control and state vectors and by the technological transformation function, W, used to produce the transition equation of the remaining state variable, Ls. Making these substitutions and changes in notation, it can be shown that the problem now is in Bellman form.

The dynamic decision problem now can be put in the familiar Bellman form:

Max (3.10)

subject to ,

where ps(ws,us) is given by equation (3.4) and g is the vector of all transition equations for the state variables. Using Bellman's method and the Benveniste and Scheinkman equation, the Euler equations are found to be

(3.11)

Substituting the quadratic function for the cost of adjustment function C(It) and replacing the corresponding symbols with the actual state and control variables and transition equations, we obtain the following system of Euler equations:

(3.12)

(3.13)

(3.14)

(3.15)

The result is a system of I+J+N+1 nonlinear equations. A solution exists to these Euler equations augmented by the transformation function (3.1). But in practice a closed form algebraic solution rarely exists, so that only a numerical solution can be produced. Nevertheless, with a parametric specification of the technology, GMM can be used to estimate the parameters from the Euler equations themselves.

3.2 Demand-Side Monetary Aggregation and Weak Separability

By estimating the parameters of the Euler equations by GMM, we can investigate properties of technology, such as returns to scale. If the firm's monetary inputs are weakly separable from output, we also can investigate the resulting exact demand-side monetary aggregate.

The approach to identifying and generating an exact theoretical demand-side monetary aggregate for a manufacturing firm is described by Barnett (1987) in the case of perfect certainty. We extend his approach to the case of risk. The procedure involves two steps. First one tests the existence condition for an exact aggregate. The existence condition is blockwise weak separability of the monetary assets in the transformation function from the firm's outputs and from the firm's other factor inputs. Under this existence condition, it becomes possible to factor those monetary assets as a subfunction out of the rest of the firm's structure.

Define and . If z is weakly separable from y0, then there exist functions H and z0 such that

,

where z0 is the aggregator function over the monetary asset inputs z. The weak separability condition is mathematically equivalent to:

for ,

where f is any of the components of y0. Under that condition, the marginal rate of substitution between any two different monetary assets is independent of the levels of any of the outputs or of any of the other nonmonetary inputs. An additional restriction which is usually imposed on the aggregator function is linear homogeneity in the components.
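The marginal-rate-of-substitution characterization of weak separability can be illustrated numerically. The nested functional forms below (a Cobb-Douglas aggregator inside a simple outer function) are hypothetical, chosen only to make the invariance of the MRS to output levels visible; they are not the paper's specifications.

```python
import numpy as np

def z0(z):
    """Hypothetical aggregator over two monetary assets (Cobb-Douglas)."""
    return z[0] ** 0.6 * z[1] ** 0.4

def W(y0, z):
    """Hypothetical weakly separable technology W = H(y0, z0(z))."""
    return y0[0] ** 0.5 + np.log(z0(z))

def mrs(y0, z, h=1e-6):
    """Numeric marginal rate of substitution between the two monetary
    assets, computed by central differences."""
    dz1 = (W(y0, [z[0] + h, z[1]]) - W(y0, [z[0] - h, z[1]])) / (2 * h)
    dz2 = (W(y0, [z[0], z[1] + h]) - W(y0, [z[0], z[1] - h])) / (2 * h)
    return dz1 / dz2
```

Evaluating `mrs` at different output levels y0 returns the same value, which is exactly the independence condition stated in the text.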

If the existence condition is satisfied, we can progress to the second step, which is estimation of the aggregator function z0(z). Clearly the first step must precede the second step, since the aggregator function estimated in the second step does not exist unless the hypothesis of weak separability tested in the first step is accepted.

3.3 Flexible Functional Form Specification and Regularity Conditions

As in the case of the financial intermediary in section 2, we specify the technology of the firm to be Diewert and Wales's (1991) symmetric generalized McFadden flexible functional form, but now with exact nested input aggregation for financial assets rather than exact nested output aggregation. Hence the null hypothesis of exact monetary input aggregation is imposed directly on the transformation function by specifying separately flexible functional forms for H(y0,z0) and z0(z), where z0(z) is nested into H(y0,z0) to assure the desired weakly separable structure for W. It is further assumed that the transformation function is linearly homogeneous in its components. The extension to nonconstant returns to scale is a subject for future research.

Define H to be the symmetric generalized McFadden functional form

(3.16)

with , where a0, , and are parameters to be estimated. The matrix is symmetric. The vector contains fixed nonnegative constants selected by the researcher. The linear homogeneity of H in y0 and z0 is obtained by dividing H by .

To conform with the partitioning of the elements of H, the matrix is partitioned as follows:

,

where A11 is a scalar, A12 is a row vector, A21 is a column vector, and A is a symmetric matrix. Since is symmetric, we have that .

Let be the point about which the functional form is locally flexible. Because of the linear homogeneity assumption, the parameters in the functional form (3.16) are n+1 more than are needed for a parsimonious flexible functional form. Hence more restrictions can be imposed without compromising the Diewert-flexibility property. We therefore impose

, (3.17)

, (3.18)

and

, (3.19)

where 0n is an n-dimensional vector of zeros. Under the above restrictions the number of parameters is reduced to 1+n+n(n+1)/2, which is the minimum number of free parameters needed to maintain flexibility.

Solving (3.18) and (3.19) for A11 and A12, and substituting into (3.16), we get

(3.20)
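Although the exact algebraic form of (3.16) and (3.20) is not reproduced here, the symmetric generalized McFadden family is a normalized-quadratic form. The sketch below assumes the representative shape f(v) = b'v + ½ v'Av / (a'v), with A symmetric and a a vector of fixed nonnegative constants; the function name and this particular arrangement are our assumptions, not the paper's exact specification.

```python
import numpy as np

def gen_mcfadden(v, b, A, a):
    """Normalized-quadratic (generalized McFadden-type) functional
    form: a linear term plus a quadratic form normalized by a linear
    aggregate. The ratio makes the quadratic part homogeneous of
    degree one, so f itself is linearly homogeneous in v."""
    v = np.asarray(v, dtype=float)
    return b @ v + 0.5 * (v @ A @ v) / (a @ v)
```

Linear homogeneity, the property the text obtains by dividing H by the linear aggregate, can be checked directly: doubling v exactly doubles f(v).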

Similarly define the monetary aggregator function z0(z) to be

(3.21)

with the parameters satisfying

, (3.22)

, (3.23)

, (3.24)

and . The vector and the symmetric matrix B are parameters to be estimated. The vector is a vector of fixed nonnegative constants, and the point is the point about which equation (3.21) is locally flexible.

Substituting (3.21) into (3.16) yields

(3.25)

which by construction satisfies weak separability in monetary asset inputs.

Neoclassical curvature conditions require to be a convex function in all of its arguments, increasing in outputs, and decreasing in inputs. The monetary aggregator function should be concave and monotonically increasing. Suppose further that

. (3.26)

Then is globally convex, when is convex in and is concave in z.

Diewert and Wales (1987) prove that is a globally convex function if and only if the matrix A is positive semidefinite, and is a globally concave function if and only if the matrix B is negative semidefinite. Positive semidefiniteness of the matrix and negative semidefiniteness of the matrix B can be imposed without destroying flexibility by the substitution

(3.27)

and

(3.28)

where q and u are lower triangular matrices. Following the substitution of (3.27) and (3.28), we estimate q and u.
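The Diewert-Wales substitution works because any product of a lower triangular matrix with its transpose is positive semidefinite, and its negative is negative semidefinite. A minimal sketch:

```python
import numpy as np

def psd_from_lower(q):
    """A = q q' is positive semidefinite for any lower triangular q,
    since v'(q q')v = ||q'v||^2 >= 0 for every v."""
    return q @ q.T

def nsd_from_lower(u):
    """B = -u u' is negative semidefinite for any lower triangular u."""
    return -(u @ u.T)
```

Estimating the free elements of q and u instead of A and B therefore imposes the curvature conditions globally without constrained optimization.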

Monotonicity restrictions are imposed locally at the point of flexibility. The first derivatives of (3.25) with respect to are

(3.29)

and

(3.30)

Similarly the first derivative of the aggregator function (3.21) with respect to z is

(3.31)

Evaluating these derivatives at the chosen point of local flexibility yields

, (3.32)

(3.33)

and

. (3.34)

At present, we assume that the firm produces only one output, which is the first element of the vector y0. Imposing the neoclassical monotonicity conditions, equations (3.32), (3.33), and (3.34) imply the following constraints on the parameters

, , for i=2,...,n and . (3.35)

Under imposition of (3.35), the monotonicity conditions are satisfied locally at . Substituting the functional form defined by (3.25) into the Euler equations (3.12)-(3.15) provides a structural dynamic system of nonlinear integral equations to be estimated. Our parameter restrictions assure valid global curvature, local monotonicity, and global weak separability in the monetary asset portfolio employed by the firm.

As with the financial firm, we use GMM to estimate the technology of the representative manufacturing firm, and we use Hansen's test to test the imposed assumption of weak separability in monetary assets, although those assets now are inputs rather than outputs. If the test of no overidentifying restrictions is rejected, then the null hypothesis of weak separability is rejected.

3.4 Data and Empirical Application

The model is applied empirically to the aggregate US manufacturing sector with data for the period 1949-1988. Real input resources include capital, labor, and materials. Monetary inputs include two types of assets: cash on hand and in US banks and US government securities.

Using equations (3.12)-(3.15), the system of Euler equations to be estimated becomes

(3.36)

(3.37)

(3.38)

(3.39)

and

(3.40)

where Ot is gross output, Kt is capital services, Lt is labor, Mt is materials, Ct is cash on hand and in banks, St is government securities, d is the rate of capital depreciation, is the true cost-of-living index, and Rt is the rate of return to capital. The prices of output, capital, labor, and materials respectively are ptO, ptK, ptL, ptM, while the rates of return on cash and securities respectively are rtC and rtS.

In accordance with this notation, we can write and . The center of local approximation is selected to be the point at which , and . To assure that the center of approximation is located within the interior of the observations, we rescale the data on all quantities about a chosen data point such that and , where represents the year of the chosen data point, are the elements of y0, and are the elements of . To prevent dollar revenues and expenditures from being altered by our data normalization, each price is rescaled by multiplication by the corresponding quantity at the chosen data point.

The fixed nonnegative constants ai and bi are selected such that

(3.41)

and

(3.42)

where and are the sample means of and respectively. With our data, we find a1=0.24, a2=0.20, a3=0.32, a4=0.24, b1=0.56 and b2=0.44. Note that the constants ai and bi satisfy equations (3.17) and (3.22), as is required.
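The reported constants sum to one within each group (a1+...+a4 = 1.00 and b1+b2 = 1.00), which is consistent with choosing them as normalized sample means. A hedged sketch under that assumption (the function name and the normalization-to-one reading of (3.17) and (3.22) are ours):

```python
import numpy as np

def mean_shares(X):
    """Fixed constants proportional to the sample mean of each series,
    normalized to sum to one -- consistent with the reported values
    a = (0.24, 0.20, 0.32, 0.24) and b = (0.56, 0.44), assuming the
    restrictions require the constants within each group to sum to one.

    X: array of shape (T, n) of observations on one group of series."""
    m = X.mean(axis=0)
    return m / m.sum()
```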

Equation (3.23) implies that b1+b2=1, and the monotonicity condition (3.35) requires that and . Therefore it follows that for i=1,2. Combining these constraints permits us to replace bi by

and . (3.43)

We then estimate j.

Since , we can normalize a0=-1 to satisfy the monotonicity condition (3.35). The monotonicity conditions (3.35) are imposed by the substitution , , , and , where are new parameters to be estimated. The curvature conditions are imposed by replacing the matrices A and B by and respectively, where the lower triangular matrices q and u are as in equations (2.43) of section 2. Equation (3.24) then becomes as in equation (2.44) of section 2.

Solving (2.44) for u21 and u22, we get and . Substituting for these parameters in (3.28) yields the following form of the matrix B

(3.44)

The parameters to be estimated are j, u11, g, the vector , and the matrix q.

The data comes primarily from two sources. Data on output and factor inputs in US Manufacturing for the period 1949-1988 is acquired from the Division of Multifactor Productivity of the Bureau of Labor Statistics. The data consists of quantity and price Törnqvist indices. Output is defined as gross sectoral output. Capital input is defined as the flow of services from physical assets, which include equipment, structures, inventories, and land. Labor input is defined as the paid hours of all persons engaged in the sector. Materials input consists of all commodity inputs exclusive of fuel inputs.

The source of data on money balances held by manufacturing firms is the Quarterly Financial Report for Manufacturing, Mining, and Trade Corporations for the period 1949-1988. To convert to real units, we deflate the nominal balances by the Fisher ideal index approximation to the true cost-of-living index, computed as in section 2 above. The rates of return on cash on hand and in banks and government securities are from the City Bank database. We use the 6-month commercial paper yield as the rate of return on cash on hand and in banks and the 3-month Treasury-bill rate as the rate of return on government securities. The reason for the non-zero rate of return for cash on hand and in banks is that it does not consist solely of currency. Cash on hand and in banks in our data source is defined to be the sum of manufacturing sector holdings of currency, demand deposits, and time deposits. Separate data on such holdings of currency are available only for the most recent observations.

We use an external nominal bond rate for the rate of return on capital, Rt, since data was unavailable to compute an internal rate of return. Data on Moody's Baa bond rate is obtained from the City Bank database.

3.5 Results

The model is estimated using the GMM estimator in the TSP mainframe version 7.02. The following variables are used as instruments in the estimation procedure: a constant, total US population, and lagged values of the prices of capital and materials, of the rates of return on cash and securities, and of Moody's Baa bond rate. The estimates are robust to heteroskedasticity and autocorrelation.

As is evident from Table 3.1, the precision of the GMM estimates of the parameters of manufacturing firm technology is extremely high for all of the parameters in the aggregator function z0(z). Figures 3.1 and 3.2 display the levels and growth rates of the estimated theoretical aggregate, the simple sum index, and the Divisia index. Clearly the estimated theoretical and Divisia indices move closely together, while the simple sum aggregate tracks less well.

The estimated exact aggregate depends upon our choice of estimated aggregator function, but was estimated in a manner that permits risk. The Divisia index on the other hand does not depend upon the form of the aggregator function, but is derived under the assumption of perfect certainty. However, the damage to its tracking ability from risk has been shown in other research to become significant only when the degree of risk aversion and of risk are very high. Since our model of the manufacturing firm assumes risk neutrality, we should expect the Divisia index's tracking ability to be good, as indeed we find to be the case.

The violations of regularity reported in the footnote of Table 3.1 suggest the limitations of the generalized McFadden parametric specification of the firm's technology and perhaps suggest that our future research on manufacturing firm technology should be based upon the generalized Barnett model originated in Diewert and Wales (1987) rather than upon our current use of the generalized McFadden model, also originated in Diewert and Wales (1987). Unlike the generalized McFadden model, the generalized Barnett model is globally regular. However, in the current research we investigate only the properties of the estimated aggregator function of monetary assets, and as reported in the footnote to Table 3.1, the estimated monetary aggregator function was globally regular.

We tested weak separability of monetary assets from the other variables in technology. We ran that test using Hansen's χ2 test for no overidentifying restrictions in the model with weak separability imposed through the structure of the model. The test statistic is F=TQ, where T=40 is the sample size and Q=0.35 is the value of the objective function in the GMM estimation. The test statistic is distributed as a χ2 with e-f degrees of freedom, where e=30 is the number of orthogonality conditions and f=17 is the number of parameters. The calculated test statistic is 14, and the critical value at the 1% level of significance is 27.69. Hence we cannot reject the hypothesis of weak separability of monetary assets. That weak separability condition is the existence condition for an economic monetary aggregate.

4. Consumers

This line of research in monetary economics began with Barnett (1980) in the perfect certainty case and Poterba and Rotemberg (1987) in the case of risk. Both papers were produced from models of consumer behavior. A long list of papers has been motivated by Barnett's perfect certainty model, and recently Poterba and Rotemberg's extension to risk has motivated papers by Barnett, Hinich, and Yue (1991) and by Rotemberg, Driscoll, and Poterba (1994). While the applications of the perfect certainty approach are far more extensive than those of the recent extensions to the stochastic environment case, there is in place a small and growing literature on consumer behavior under risk with Euler equation estimation and exactly nested (weakly separable) aggregator functions over monetary assets. This research fits well into the Sidrauski (1967) tradition.

However, the work on firm behavior in this tradition is far more limited. The perfect certainty theory for manufacturing firms and for financial intermediaries can be found in Barnett (1987). Also see Hancock (1991) regarding financial firms under perfect certainty. But as observed above, this paper is the first to extend Barnett's model of manufacturing firms to the case of risk, with capital dynamics and nested monetary aggregation. Similarly the only prior work extending this approach to financial firms under risk is Barnett and Zhou (1994a), who introduced no capital dynamics and assumed 100% payout of earnings as dividends. Since research in this tradition has been far more limited in the case of firms than in the case of consumers, this paper primarily emphasizes our extensions to technology estimation.

Nevertheless, for the sake of completeness, we now include results on consumer behavior under risk with exactly nested monetary aggregation. In particular, we provide those extensions needed to render the consumer demand Euler equation estimation compatible with that presented above for manufacturing firms and financial intermediaries. We emphasize the index number issues associated with tracking the exact monetary aggregate nested within the Euler equations, when the tracking is to be attempted with a nonparametric statistical index number rather than an econometrically estimated parametric aggregator function. While much research has appeared in recent years on the nature of those tracking errors, little previously has been available for consumption under risk. See Barnett, Hinich, and Yue (1991) for discussion of the need for research on the properties of that tracking error under risk, since the existing theory on the order of the remainder term in such nonparametric approximations is not applicable to the case of risk.

4.1. Consumer Demand for Monetary Assets

In this section we formulate a representative consumer's stochastic decision problem over consumer goods and monetary assets. The consumer's decisions are made in discrete time over a finite planning horizon for the time intervals, t, t+1, ..., s, ...,t+T, where t is the current time period and t+T is the terminal planning period. The variables used in defining the consumer's decision are as follows:


xs = n dimensional vector of real consumption of goods and services during period s,

ps = n dimensional vector of goods and services prices and of durable goods rental prices during period s,

as = k dimensional vector of real balances of monetary assets during period s,

rs = k dimensional vector of nominal holding period yields of monetary assets,

As = holdings of the benchmark asset during period s,

Rs = the one-period holding yield on the benchmark asset during period s,

Is = the sum of all other sources of income during period s,

p*s = the true cost of living index.

Define Y to be a compact subset of the n+k+2 dimensional nonnegative orthant. The consumer's consumption possibility set, S(s), for s ∈ {t,...,t+T} is:

S(s) = { (as, xs, As) ∈ Y: ps'xs = Σi [(1+ri,s-1) p*s-1 ai,s-1 - p*s ais]
+ (1+Rs-1) p*s-1 As-1 - p*s As + Is }. (4.1)

Under the assumption of rational expectations, the distribution of random variables is known to the consumer. Since current period interest rates are not paid until the end of the period, they may be contemporaneously unknown to the consumer. Nevertheless, observe that during period t the only interest rates that enter into S(t) are interest rates paid during period t-1, which are known at the start of period t. Similarly pt and p*t are determined and known to the consumer at the start of period t. Hence (at, xt, At) can be chosen deterministically in a manner that assures that (at, xt, At) ∈ S(t) with certainty. However, that is not possible for s > t, since at the beginning of time period t, when the intertemporal decision is solved, the constraint sets S(s) for s > t are random sets. Hence for s > t, the values of (as, xs, As) must be selected as stochastic processes.
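The period budget identity underlying S(s) can be written as a residual function: total expenditure on goods must equal the funds released by rolling over the monetary portfolio, plus benchmark-asset flows, plus other income. Because the algebraic form in (4.1) is partially garbled in the source, the sketch below codes the standard identity of this literature (following Barnett 1980) and should be read as an assumption.

```python
import numpy as np

def budget_balance(p, x, r_prev, pstar_prev, pstar, a_prev, a,
                   R_prev, A_prev, A, I):
    """Gap in the assumed period-s budget identity:
    p'x = sum_i[(1+r_{i,s-1}) p*_{s-1} a_{i,s-1} - p*_s a_{is}]
          + (1+R_{s-1}) p*_{s-1} A_{s-1} - p*_s A_s + I_s.
    Returns zero when the constraint holds with equality."""
    monetary = np.sum((1 + r_prev) * pstar_prev * a_prev - pstar * a)
    benchmark = (1 + R_prev) * pstar_prev * A_prev - pstar * A
    return np.dot(p, x) - (monetary + benchmark + I)
```

A feasible plan makes this gap nonpositive; the optimum exhausts the budget, driving it to zero.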

The benchmark asset A_s provides no services other than its yield R_s. As a result, the benchmark asset does not enter the consumer's intertemporal utility function except in the last instant of the planning horizon. The asset is held only as a means of accumulating wealth to endow the next planning horizon. The consumer's intertemporal utility function is

U = U(a_t, ..., a_s, ..., a_{t+T}; x_t, ..., x_s, ..., x_{t+T}; A_{t+T}),

where U is assumed to be intertemporally additively (strongly) separable, such that

U = u(a_t, x_t) + (1/(1+ξ)) u(a_{t+1}, x_{t+1}) + ...

      + (1/(1+ξ))^{T-1} u(a_{t+T-1}, x_{t+T-1}) + (1/(1+ξ))^T u_T(a_{t+T}, x_{t+T}, A_{t+T})

   = Σ_{s=t}^{t+T-1} (1/(1+ξ))^{s-t} u(a_s, x_s) + (1/(1+ξ))^T u_T(a_{t+T}, x_{t+T}, A_{t+T}),   (4.2)

and the consumer's subjective rate of time preference, ξ, is assumed to be constant. The single period utility functions, u and u_T, are assumed to be increasing and strictly quasiconcave.

Given the price and interest rate processes, the consumer selects the deterministic point (a_t, x_t, A_t) and the stochastic processes (a_s, x_s, A_s), s = t+1, ..., t+T, to maximize the expected value of U over the planning horizon, subject to the sequence of choice set constraints. Formally, the consumer's decision problem is the following.

Problem 1: Choose the deterministic point (a_t, x_t, A_t) and the stochastic process (a_s, x_s, A_s), s = t+1, ..., t+T, to maximize

u(a_t, x_t) + E_t [ Σ_{s=t+1}^{t+T-1} (1/(1+ξ))^{s-t} u(a_s, x_s) + (1/(1+ξ))^T u_T(a_{t+T}, x_{t+T}, A_{t+T}) ]   (4.3)

subject to (a_s, x_s, A_s) ∈ S(s) for s = t, ..., t+T.

We use E_t to designate the expectations operator conditional upon the information that exists at time t.

In the infinite planning horizon case, the decision problem becomes:

Problem 2: Choose the deterministic point (a_t, x_t, A_t) and the stochastic process (a_s, x_s, A_s), s = t+1, ..., ∞, to maximize

u(a_t, x_t) + E_t [ Σ_{s=t+1}^{∞} (1/(1+ξ))^{s-t} u(a_s, x_s) ]   (4.4)

subject to (a_s, x_s, A_s) ∈ S(s) for s ≥ t, and also subject to

lim_{s→∞} E_t (1/(1+ξ))^{s-t} A_s = 0.

The latter constraint rules out perpetual borrowing at the benchmark rate of return, R_t.
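To fix ideas, the accounting in the single-period budget constraint (4.1) can be sketched as a function returning the funds available for goods purchases; the function name and every numerical value below are our own illustration, not the paper's data.

```python
# Sketch of the right-hand side of the budget constraint (4.1):
# matured asset balances, the matured benchmark holding, and other income,
# net of this period's new asset purchases.

def budget_rhs(r_prev, a_prev, a, R_prev, A_prev, A, income, pstar_prev, pstar):
    """Funds available for goods purchases p_s'x_s in (4.1)."""
    # each monetary asset contributes its matured value less its new cost
    asset_terms = sum((1.0 + r_prev[i]) * pstar_prev * a_prev[i] - pstar * a[i]
                      for i in range(len(a)))
    # the benchmark asset contributes analogously, at the benchmark yield
    benchmark = (1.0 + R_prev) * pstar_prev * A_prev - pstar * A
    return asset_terms + benchmark + income

# a point (a_s, x_s, A_s) is feasible when p_s'x_s equals this quantity
rhs = budget_rhs(r_prev=[0.05, 0.03], a_prev=[10.0, 20.0], a=[11.0, 19.0],
                 R_prev=0.08, A_prev=5.0, A=4.0, income=50.0,
                 pstar_prev=1.0, pstar=1.02)
```

With these illustrative values the consumer can spend 51.82 on goods this period.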

4.3 Existence of a Monetary Aggregate for the Consumer

In order to assure the existence of a monetary aggregate for the consumer, we partition the vector of monetary asset quantities, a_s, such that a_s = (m_s, h_s). We correspondingly partition the vector of interest rates of those assets conformably, so that the components of m_s retain the yields r_is, while i_s denotes the subvector of yields on the components of h_s. We then assume that the utility function, u, is blockwise weakly separable in m_s and in x_s for some such partition of a_s. Hence there exist a monetary aggregator ("category utility") function, M, a consumer goods aggregator function, X, and a utility function, u*, such that

u(a_s, x_s) = u*(M(m_s), h_s, X(x_s)).   (4.5)

We assume that the terminal period utility function in the finite planning horizon case is correspondingly weakly separable, such that u_T(a_s, x_s, A_s) = u_T*(M(m_s), h_s, X(x_s), A_s).

Then it follows that the exact monetary aggregate, measuring the welfare acquired from consuming the services of ms, is

M_s = M(m_s).   (4.6)

We define the dimension of m_s to be k_1, and the dimension of h_s to be k_2, so that k = k_1 + k_2.

It is clear that equation (4.6) does define the exact monetary aggregate in the welfare sense, since M_s measures the consumer's subjective evaluation of the services that he receives from holding m_s. However, it also can be shown that equation (4.6) defines the exact monetary aggregate in the aggregation theoretic sense. In particular, the stochastic process M_s, s ≥ t, contains all of the information about m_s that is needed by the consumer to solve the rest of his decision problem. This conclusion is based upon the following theorem, which we call the consumer's aggregation theorem.

Let D_s = I_s + Σ_{i=1}^{k_1} [(1+r_{i,s-1}) p*_{s-1} m_{i,s-1} − p*_s m_{is}],

and let

D(s) = {(h_s, x_s, A_s) ∈ Y : p_s'x_s = Σ_{j=1}^{k_2} [(1+i_{j,s-1}) p*_{s-1} h_{j,s-1} − p*_s h_{js}]
          + (1+R_{s-1}) p*_{s-1} A_{s-1} − p*_s A_s + D_s}.   (4.7)

Let the deterministic point (a_t, x_t, A_t) and the stochastic process (a_s, x_s, A_s), s ≥ t+1, solve problem 1 (or problem 2, if T = ∞). Consider the following decision problems, which are conditional upon prior knowledge of the aggregate process M_s = M(m_s), although not upon the component processes m_s.

Problem 1a: Choose the deterministic point (h_t, x_t, A_t) and the stochastic process (h_s, x_s, A_s), s = t+1, ..., t+T, to maximize

u*(M_t, h_t, X(x_t))

   + E_t [ Σ_{s=t+1}^{t+T-1} (1/(1+ξ))^{s-t} u*(M_s, h_s, X(x_s)) + (1/(1+ξ))^T u_T*(M_{t+T}, h_{t+T}, X(x_{t+T}), A_{t+T}) ]   (4.8)

subject to (h_s, x_s, A_s) ∈ D(s) for s = t, ..., t+T, with the process M_s given for s ≥ t.

Problem 2a: Choose the deterministic point (h_t, x_t, A_t) and the stochastic process (h_s, x_s, A_s), s = t+1, ..., ∞, to maximize

u*(M_t, h_t, X(x_t)) + E_t [ Σ_{s=t+1}^{∞} (1/(1+ξ))^{s-t} u*(M_s, h_s, X(x_s)) ]   (4.9)

subject to (h_s, x_s, A_s) ∈ D(s) for s ≥ t, and also subject to

lim_{s→∞} E_t (1/(1+ξ))^{s-t} A_s = 0,

with the process M_s given for s ≥ t.

Theorem 1 (Consumer's Aggregation Theorem): Let the deterministic point (m_t, h_t, x_t, A_t) and the stochastic process (m_s, h_s, x_s, A_s), s = t+1, ..., t+T, solve problem 1. Then the deterministic point (h_t, x_t, A_t) and the stochastic process (h_s, x_s, A_s), s = t+1, ..., t+T, will solve problem 1a conditionally upon M_s = M(m_s) for s = t, ..., t+T. Similarly let the deterministic point (m_t, h_t, x_t, A_t) and the stochastic process (m_s, h_s, x_s, A_s), s ≥ t+1, solve problem 2. Then the deterministic point (h_t, x_t, A_t) and the stochastic process (h_s, x_s, A_s), s ≥ t+1, will solve problem 2a conditionally upon M_s = M(m_s) for s ≥ t.

Clearly this aggregation theorem, proved in the appendix, applies not only when M_s is produced by voluntary behavior, but also when the M_s process is exogenously imposed upon the consumer, as through a perfectly inelastic supply function for M_s imposed by central bank policy. In that case, problems 1a and 2a describe optimal behavior by the consumer in the remaining variables. Since (h_s, x_s, A_s) are not assumed to be weakly separable from M_s, the information about M_s is needed in the solution of problems 1a and 2a for the processes (h_s, x_s, A_s). For example, the marginal rate of substitution between labor and goods may depend upon the value of M_s. Alternatively, information about the simple sum aggregate over the components of m_s is of no use in solving either problem 1a or 2a unless the monetary aggregator function M happens to be a simple sum. In other words, the simple sum aggregate contains useful information about behavior only if the components of m_s are perfect substitutes in identical ratios (linear aggregation with equal coefficients).

4.4 The Solution Procedure

Using Bellman's principle, we can derive the first order conditions for solving Problems 1 and 2. Under the somewhat more restrictive conditions assumed by Poterba and Rotemberg (1987), the first order conditions derived below reduce to those acquired by Poterba and Rotemberg.

We concentrate on the infinite planning horizon problem 2, rather than on the finite planning horizon problem 1, since the contingency plan functions ("feedback rules") that solve problem 1 are time dependent, while those solution functions are independent of time in the infinite planning horizon case. Time enters only through the variables that enter those equations as arguments, rather than through time shifting of the functions themselves.

We begin by solving the budget constraint in equation (4.1) for the quantity of an arbitrary consumer good, x_js, and we then use the resulting rearranged constraint to eliminate x_js from the intertemporal utility function in problem 2 for all s ≥ t. For notational simplicity, we let j = 1. Let z_1s = (a_s, A_s). To apply Bellman's method, we must define the control and state variables. Define the control variables during period s to be z_s = (z_1s, x_2s, ..., x_ns). We define the state variables during period s to be (β_1s, φ_s), where the vector of price and income state variables is φ_s = (p_2s, ..., p_ns, p*_{s-1}, p*_s, R_{s-1}, r_{s-1}, I_s)/p_1s, and where β_1s = (a_{s-1}, A_{s-1}).

Having eliminated the budget constraint by substitution as described above, problem 2 can be rewritten as follows:

Problem 2b: Choose the deterministic point z_t and the stochastic process z_s, s = t+1, ..., ∞, to maximize

u(z_t, β_t) + E_t [ Σ_{s=t+1}^{∞} (1/(1+ξ))^{s-t} u(z_s, β_s) ]   (4.10)

subject to

β_{1,s+1} = z_1s   (4.11)

and

lim_{s→∞} E_t (1/(1+ξ))^{s-t} A_s = 0,   (4.12)

with β_t given.

Equations (4.11) are the transition equations, β_{s+1} = g(z_s, β_s), providing the evolution of future state variables as functions of the controls and the current state. We assume that the φ_s process is Markovian. Applying the Benveniste and Scheinkman equations, we can acquire the Euler equations for the control variables.

The Euler equations which will be of the most use to us below are those for monetary assets. Replacing X(x_t) by c_t in u, those Euler equations become:

E_t [ ∂u/∂m_it − (1/(1+ξ)) ((R_t − r_it) p*_t / p*_{t+1}) (∂u/∂c_{t+1}) ] = 0   (4.13a)

for i = 1, ..., k_1, where c_t = X(x_t) is the exact quantity aggregate over x_t and p*_t is its dual exact price aggregate. Similarly we can acquire the Euler equation for the consumer goods aggregate c_t, rather than for each of its components. The resulting Euler equation for c_t is

E_t [ ∂u/∂c_t − (1/(1+ξ)) (1+R_t) (p*_t / p*_{t+1}) (∂u/∂c_{t+1}) ] = 0.   (4.13b)
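Euler equations of this type equate the current marginal utility of an asset or of consumption to discounted, price-deflated expected marginal utility next period. The sketch below writes the disturbances inside the conditional expectations as functions; the function names are ours, and the marginal utilities du_dm and du_dc would come from whatever specification of u is adopted.

```python
# Hedged sketch of the disturbances inside E_t[.] in Euler equations of the
# (4.13a)-(4.13b) type; GMM later drives their sample means (interacted with
# instruments) toward zero.

def euler_residual_money(du_dm_t, du_dc_t1, R_t, r_it, pstar_t, pstar_t1, xi):
    """Disturbance for monetary asset i: marginal utility of the asset minus
    the discounted opportunity cost (R_t - r_it) valued in next-period
    consumption terms."""
    rho = 1.0 / (1.0 + xi)  # subjective discount factor
    return du_dm_t - rho * (R_t - r_it) * (pstar_t / pstar_t1) * du_dc_t1

def euler_residual_goods(du_dc_t, du_dc_t1, R_t, pstar_t, pstar_t1, xi):
    """Disturbance for the consumption aggregate: standard asset-pricing
    Euler disturbance at the benchmark yield R_t."""
    rho = 1.0 / (1.0 + xi)
    return du_dc_t - rho * (1.0 + R_t) * (pstar_t / pstar_t1) * du_dc_t1
```

As a sanity check, when r_it = R_t the opportunity cost of holding the asset is zero, so the money disturbance reduces to the marginal utility of the asset alone.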

4.5 Monetary Policy

Having the Bellman solution at hand, we are in a position to give further consideration to the policy implications of monetary aggregation through the Theoretical aggregate. Hence we now return to Theorem 1 and Problem 2a. Clearly the Bellman equation for Problem 2a can be written in a form analogous to that of the Bellman equation produced by Problem 2. The only changes are that the controls now are (h_s, x_2s, ..., x_ns, A_s), s = t, ..., ∞, while the state variables are (h_{s-1}, A_{s-1}, φ_s, M_s), where φ_s is the vector of price and income state variables defined earlier. Hence the solution contingency plans solving problem 2a are of the form:

(h_s, x_2s, ..., x_ns, A_s) = f(h_{s-1}, A_{s-1}, φ_s, M_s),   (4.14)

where all of the controls and state variables are deterministic for s = t.

The appearance of M_s as a state variable has interesting policy implications. Clearly if M_s is used as an indicator in the conduct of monetary policy, the monetary aggregate will indeed contain information about (h_s, x_2s, ..., x_ns, A_s) and thereby about the final targets of monetary policy both in goods and labor markets. Alternatively, suppose that policy instruments, such as the monetary base, are used to target the equilibrium path of M_s as an intermediate target of policy. Assuming that the instruments are used in a manner that is not time inconsistent, as for example through an open loop policy, the equilibrium stochastic process for M_s can be influenced by policy. Under our assumption of rational expectations, economic agents will know about the policy rule and hence about the targeted equilibrium process for M_s. The consumer then can solve problem 2a to acquire the optimal solution for the remaining variables conditionally upon the targeted process for M_s.

We see that only M_s can play these roles, if policy operates through a monetary target or indicator. The simple sum aggregate, which does not appear in f, can serve neither role. In fact the only information from the portfolio, m_s, that is useful in solving problem 2a is M_s = M(m_s), since m_s enters the contingency plans f only through M_s.

At this point, we have completed our theoretical analysis of demand for money in a risky environment. We now can use GMM estimation to estimate the parameters of first order conditions under a particular specification for tastes. We then compute the estimated theoretical monetary aggregate and proceed to investigate the quality of currently available statistical index numbers in tracking the monetary service flow. But we first determine the applicability of existing index number theory under the assumptions of our exact aggregation theory.

4.6. The Risk Neutral Case

In the perfect certainty case, nonparametric index number theory is highly developed and is applicable to monetary aggregation. In the perfect certainty case, Barnett (1978, 1980) proved that the nominal user cost of the services of m_it is π_it, where

π_it = p*_t (R_t − r_it) / (1 + R_t).   (4.15)

The corresponding real user cost is π_it/p*_t. In the risk neutral case, the user cost formulas are the same as in the perfect certainty case, but with the interest rates replaced by their expected values. It can be shown that the solution value of the exact monetary aggregate M(m_t) can be tracked without error in continuous time (see, e.g., Barnett (1983b)) by the Divisia index:

d log M_t = Σ_{i=1}^{k_1} s_it d log m_it,   (4.16)

where the user cost evaluated expenditure shares are s_it = π_it m_it / Σ_{j=1}^{k_1} π_jt m_jt. The flawless tracking ability of the index in the risk neutral case holds regardless of the form of the unknown aggregator function, M. However, under risk aversion the ability of equation (4.16) to track M(m_t) is compromised, and the rate at which that tracking ability deteriorates as the degree of risk aversion increases above zero is unknown. We investigate the magnitude of that error below by econometrically estimating M(m_t).
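In discrete time, the continuous Divisia line integral in (4.16) is customarily approximated by the Törnqvist-Theil index, which averages adjacent-period shares. The sketch below implements the user cost formula (4.15) and that discrete approximation; all names and numbers are our own illustration.

```python
import math

# Sketch of the user cost formula (4.15) and the discrete (Tornqvist-Theil)
# approximation to the Divisia index (4.16).

def user_cost(pstar, R, r):
    """Nominal user cost pi_it = p*(R_t - r_it)/(1 + R_t), from (4.15)."""
    return pstar * (R - r) / (1.0 + R)

def divisia_growth(m0, m1, pi0, pi1):
    """Discrete Divisia growth: sum over components of the average
    user-cost-evaluated expenditure share times the log-change."""
    tot0 = sum(p * q for p, q in zip(pi0, m0))
    tot1 = sum(p * q for p, q in zip(pi1, m1))
    s0 = [p * q / tot0 for p, q in zip(pi0, m0)]  # shares in period 0
    s1 = [p * q / tot1 for p, q in zip(pi1, m1)]  # shares in period 1
    return sum(0.5 * (a + b) * math.log(y / x)
               for a, b, x, y in zip(s0, s1, m0, m1))
```

Note that a zero-yield asset such as currency gets the largest user cost, user_cost(p*, R, 0) = p*R/(1+R), so its share is weighted up relative to simple summation.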

4.7. A Generalization

The fact that the Divisia index tracks exactly under perfect certainty or risk neutrality is well known. However, we show in this section that neither perfect certainty nor risk neutrality is needed for exact tracking by the Divisia index. Only contemporaneous prices and interest rates need be known. Future interest rates and prices need not be known, and risk averse behavior relative to those stochastic processes need not be excluded. The proof is as follows.

Assume that R_t, p*_t, and r_t are known at time t, although their future values are stochastic. Then the Euler equations (4.13a) for m_t are

∂u/∂m_it = (1/(1+ξ)) (R_t − r_it) p*_t E_t [ (1/p*_{t+1}) (∂u/∂c_{t+1}) ]   (4.17)

for i = 1, ..., k_1. Similarly the Euler equation (4.13b) for aggregate consumption of goods, c_t, becomes

∂u/∂c_t = (1/(1+ξ)) (1+R_t) p*_t E_t [ (1/p*_{t+1}) (∂u/∂c_{t+1}) ].   (4.18)

Eliminating the common expectation between (4.17) and (4.18), we acquire

∂u/∂m_it = ((R_t − r_it)/(1+R_t)) (∂u/∂c_t).   (4.19)

But by the assumption of weak separability of u in m_t, we have

∂u/∂m_it = (∂u*/∂M_t)(∂M/∂m_it),   (4.20)

where M_t = M(m_t) is the exact monetary aggregate that we seek to track.

Substituting (4.19) into (4.20) and using (4.15), we find that

∂M/∂m_it = λ_t π_it,   (4.21)

where λ_t = (∂u/∂c_t)/(p*_t ∂u*/∂M_t) does not depend upon i.

Now substitute (4.21) into the total differential of M to acquire

dM_t = Σ_{i=1}^{k_1} (∂M/∂m_it) dm_it = λ_t Σ_{i=1}^{k_1} π_it dm_it.   (4.22)

But since M is assumed to be linearly homogeneous, we have Euler's equation for linearly homogeneous functions, M_t = Σ_{i=1}^{k_1} (∂M/∂m_it) m_it. Substituting (4.21) into Euler's equation, we have

M_t = λ_t Σ_{i=1}^{k_1} π_it m_it.   (4.23)

Dividing (4.22) by (4.23), we acquire (4.16), which is the Divisia index. Hence the exact tracking property of the Divisia index is not compromised by uncertainty regarding future interest rates and prices or by risk aversion. Nevertheless, the assumption that contemporaneous interest rates are known is not trivial, as emphasized by Poterba and Rotemberg (1987), since current period interest rates are not paid until the end of the current period. In fact current period interest rates are not assumed contemporaneously known in our Euler equations (4.13a) and (4.13b).
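This tracking result can be checked numerically. Since user costs are proportional to the gradient of the aggregator in equilibrium (equation (4.21)), the sketch below builds expenditure shares from the gradient of a linearly homogeneous CES aggregator and compares discrete Divisia growth with the growth of M(m) itself; the parameter values are arbitrary illustrations.

```python
import math

# Numerical check: for a linearly homogeneous CES aggregator, Divisia growth
# computed from gradient-proportional (i.e., user-cost-evaluated) shares
# matches the growth of M(m) to high accuracy for small changes.

delta, nu = [0.6, 0.4], 0.5  # illustrative CES weights and exponent

def M(m):
    """CES aggregator."""
    return sum(d, x := None) if False else sum(
        d * x**nu for d, x in zip(delta, m))**(1.0 / nu)

def shares(m):
    """Expenditure shares built from the gradient dM/dm_i (proportional to
    user costs in equilibrium, by (4.21))."""
    grad = [d * x**(nu - 1.0) * M(m)**(1.0 - nu) for d, x in zip(delta, m)]
    tot = sum(g * x for g, x in zip(grad, m))  # equals M(m) by Euler's equation
    return [g * x / tot for g, x in zip(grad, m)]

m0, m1 = [1.0, 2.0], [1.03, 2.01]
s0, s1 = shares(m0), shares(m1)
divisia = sum(0.5 * (a + b) * math.log(y / x)
              for a, b, x, y in zip(s0, s1, m0, m1))
exact = math.log(M(m1) / M(m0))
# divisia and exact agree closely for small discrete changes
```

The small residual gap is the usual discrete-approximation error of the Törnqvist form, not a failure of the continuous-time result.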

4.8. Data and Specification

In order to simplify the illustration, we accept a common clustering of components without weak separability testing. We first set ms equal to those components of M1 found by Belongia and Chalfant (1989) to be weakly separable. We then repeat our analysis with ms set equal to the components of M2, but with those components clustered into three groups with prior aggregation within groups, so that ms contains three aggregated elements. Hence we implicitly assume that as is partitioned in accordance with a recursively nested two level separable blocking, such that the components of our M1 aggregate are separable within the components of our M2 aggregate, which in turn are separable within as. Considering the little that is known about testing for separability in the risk averse case, the clustering that we have chosen without explicit separability testing is hardly the last word on that subject.

We now select a specification for the function, u, satisfying our weak separability assumption, and we estimate the parameters by GMM. In that estimation, the data that we use is the monthly monetary component data available in Fayyad (1986) for January 1969 to March 1985. We begin by defining m_s to contain two components: currency and demand deposits, which Belongia and Chalfant (1989) found to be blockwise weakly separable, at least under risk neutrality, from other goods and assets. In the utility function, u*(M(m_s), h_s, X(x_s)), we assume a further, higher level of nested blockwise strong separability, such that

u*(M(m_s), h_s, X_s) = V(M(m_s), X_s) + H(h_s),   (4.24)

where X_s = X(x_s) is the exact quantity aggregate over consumer goods. The utility function that we specify and estimate is the category utility function V(M(m_s), X_s).

Since the variables in V(M(ms), Xs) are disjoint from those in H(hs), we can restrict the original decision to be defined in terms of the utility function V(M(ms), Xs) in the following manner without altering the solution for the variables (ms,Xs). We redefine the utility function in Problem 2 to be

V(M(m_t), X_t) + E_t [ Σ_{s=t+1}^{∞} (1/(1+ξ))^{s-t} V(M(m_s), X_s) ].   (4.25)

The utility function in Problem 1 can be restricted in the analogous manner. The budget constraint in either case is simplified in the following manner. All terms containing the variables (hs,hs-1) are absorbed into the "other income" variable, Is, with (hs,hs-1) replaced by their stochastic processes solving the complete unrestricted decision (Problem 1 or 2).

The budget constraint then becomes:

{(m_s, X_s, A_s) ∈ H : p*_s X_s = Σ_{i=1}^{k_1} [(1+r_{i,s-1}) p*_{s-1} m_{i,s-1} − p*_s m_{is}]
   + (1+R_{s-1}) p*_{s-1} A_{s-1} − p*_s A_s + I_s}.   (4.26)

In short, with M1 components we estimate a three goods model, including two monetary components and the aggregate quantity of consumer goods, Xs. With M2 components we estimate a four goods model, including three aggregated monetary components and the aggregate quantity of consumer goods, Xs. We now define our specification for V.

We assume constant proportional risk aversion, such that the utility function
V = V(M(m_s), X_s) is of the form

V(M(m_s), X_s) = [J(X_s, M_s)]^σ   (4.27)

for some function, J, where M_s = M(m_s) is the Theoretical monetary aggregate we seek to estimate. We then assume that the function J has the Cobb-Douglas form

J(X_s, M_s) = X_s^β M_s^{1−β}.   (4.28)

Finally we assume that the monetary aggregator function, M(m_s), has the CES (constant elasticity of substitution) form

M_s = ( Σ_{i=1}^{n} δ_i m_is^ν )^{1/ν}   (4.29)

with Σ_{i=1}^{n} δ_i = 1, where the number of components is n = 2 for M1 and n = 3 for M2.

Substituting (4.29) into (4.28), and then substituting the result into (4.27), we get

V(M(m_s), X_s) = [ X_s^β ( Σ_{i=1}^{n} δ_i m_is^ν )^{(1−β)/ν} ]^σ.   (4.30)

Denoting the rate of subjective time discount by ρ = 1/(1+ξ) and substituting (4.30) into (4.25), we get the complete intertemporal expected utility function

E_t(U) = [ X_t^β ( Σ_{i=1}^{n} δ_i m_ti^ν )^{(1−β)/ν} ]^σ + E_t [ Σ_{s=t+1}^{∞} ρ^{s-t} [ X_s^β ( Σ_{i=1}^{n} δ_i m_si^ν )^{(1−β)/ν} ]^σ ].

(4.31)

The parameters to be estimated are ρ, σ, β, {δ_i}, and ν. The constraints imposed on those parameters are

Σ_{i=1}^{n} δ_i = 1,   0 < β ≤ 1,   and   0 < δ_i ≤ 1.
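The nested specification in (4.27)-(4.30) can be sketched directly as code; the function names are ours, and any parameter values passed in a call are placeholders, not our estimates.

```python
# Sketch of the CRRA-over-Cobb-Douglas-over-CES specification (4.27)-(4.30).

def ces_aggregate(m, delta, nu):
    """Theoretical monetary aggregate M_s = (sum_i delta_i m_is^nu)^(1/nu),
    equation (4.29)."""
    return sum(d * x**nu for d, x in zip(delta, m))**(1.0 / nu)

def period_utility(X, m, beta, delta, nu, sigma):
    """V(M(m_s), X_s) = [X_s^beta * M_s^(1-beta)]^sigma, equations
    (4.27)-(4.28), with M_s built from (4.29)."""
    M = ces_aggregate(m, delta, nu)
    return (X**beta * M**(1.0 - beta))**sigma
```

With ν = 1 the CES form collapses to a weighted sum, which is the knife-edge case in which a (weighted) simple sum aggregate would be exact.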

All consumption and asset quantity data are real per capita. We approximate the benchmark rate, Rs, by the maximum holding period yield across all assets in Fayyad's (1986) tables during period s. The particular asset which produced that rate of return need not be the same for all s, since our measurement of Rs produces a proxy for the rate of return on some very illiquid asset (such as human capital in a world without slavery), on which we may have no monthly data.

4.9. Estimation

We use Hansen and Singleton's (1982) generalized method of moments estimator to estimate the parameters of the Euler equations, (4.13a) and (4.13b). In accordance with Hansen and Singleton's estimator, we iterate on the weighting matrix until convergence. The Hansen and Singleton GMM estimator requires the selection of instrumental variables. When estimating the Theoretical M1 aggregate, we use the following five instruments: Z1 = constant = 1.0, Z2 = X_{s-1} − X_s, Z3 = (m_{s+1,1} − m_{s1}) + (m_{s+1,2} − m_{s2}), Z4 = m_{s-1,1} + m_{s-1,2}, and Z5 = R_{s-1}.
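The mechanics of the estimator can be sketched as follows: the Euler-equation disturbances are interacted with the instruments to form moment conditions, and the parameters are chosen to minimize a quadratic form in the sample moments. This is an illustrative skeleton with names of our own choosing, not Hansen and Singleton's production code; a full implementation would also iterate on the weighting matrix W.

```python
# Hedged sketch of the GMM building blocks: sample moments e_t * Z_t and the
# quadratic-form objective gbar' W gbar minimized over the parameters.

def moment_conditions(residuals, instruments):
    """Sample average of e_t * Z_t: one moment per instrument.
    residuals: Euler-equation disturbances e_t, one per period;
    instruments: instrument vectors Z_t, same length as residuals."""
    T = len(residuals)
    k = len(instruments[0])
    return [sum(e * z[j] for e, z in zip(residuals, instruments)) / T
            for j in range(k)]

def gmm_objective(gbar, W):
    """Quadratic form gbar' W gbar in the stacked sample moments."""
    n = len(gbar)
    return sum(gbar[i] * W[i][j] * gbar[j]
               for i in range(n) for j in range(n))
```

At the true parameters the population moments are zero by (4.13a)-(4.13b), so the objective is driven toward zero as the sample grows.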

The sample size in Fayyad (1986) is 195, covering the monthly periods from January 1969 to March 1985. In order to impose the constraints on the parameters, we transform the parameters in the following manner:

ρ = B1,   σ = B2,   β = cos²B3,   δ1 = cos²B4 (so that δ2 = 1 − δ1),   ν = B5,

and we estimate the new parameters B1, B2, B3, B4, and B5. The GMM estimator converged at its fourth stage. The resulting parameter estimates are as in Table 4.1.
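The reparameterization above can be sketched as a mapping from the unrestricted parameters to the structural ones: squared cosines lie in [0, 1] by construction, so the constraints on β and the δ's hold automatically. The function name is ours, and we illustrate only the M1 (two component) case.

```python
import math

# Sketch of the constraint-imposing reparameterization for the M1 case:
# beta = cos^2(B3) and delta_1 = cos^2(B4) lie in [0, 1], and
# delta_2 = 1 - delta_1 keeps the CES weights summing to one.

def untransform_m1(B):
    """Map unrestricted (B1, ..., B5) to the structural parameters."""
    rho, sigma = B[0], B[1]
    beta = math.cos(B[2])**2
    d1 = math.cos(B[3])**2
    nu = B[4]
    return {"rho": rho, "sigma": sigma, "beta": beta,
            "delta": [d1, 1.0 - d1], "nu": nu}
```

The optimizer can then search freely over the B's while every candidate maps back to an admissible structural parameter vector.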

Using these parameter estimates and the component data, the estimated theoretical M1 monetary aggregate, Ms = M(ms), was computed at each observation. We also computed the Divisia quantity index and the simple sum index over the same components. The nominal per capita time paths of these three M1 index numbers are plotted in Figure 4.1. In Figure 4.3, the corresponding real per capita data is plotted for the three series.

This procedure then was repeated with the M2 data. The components of M2 were clustered into three groups, and asset quantities within the groups were aggregated by simple summation to produce three aggregated components over which we aggregate by the three methods. For details of the prior clustering of components, see Table 4-1 in Barnett, Hinich, and Yue (1991).

In order to impose the constraints on the parameters, we transform them as follows:

ρ = B1,   σ = B2,   β = cos²B3,   δ1 = cos²B5,   δ2 = sin²B5 sin²B6 (so that δ3 = 1 − δ1 − δ2),   ν = B4.

The GMM estimation converged at the third stage. The resulting parameter estimates are provided in Table 4.2.

Using these parameter estimates and the component data, the estimated theoretical M2 monetary aggregate, Ms = M(ms), was computed at each observation. We also computed the Divisia quantity index and the simple sum index over the same components. The nominal per capita time paths of these three M2 index numbers are plotted in Figure 4.2. In Figure 4.3, the corresponding real per capita data is plotted for the three series along with the real per capita indexes for M1. In Figure 4.4, the nominal per capita indices are supplied for the three methods of aggregation at both the M1 and M2 levels of aggregation.

The properties of the three aggregates at each level of aggregation are easily seen by inspecting the plots. Evidently at the M1 level, Divisia M1 tracks the estimated Theoretical aggregate rather well. At the M2 level, the growth rates of those two series diverged from September 1982 through April 1983, with the growth rate of the estimated Theoretical aggregate being consistently higher than that of the Divisia aggregate throughout that time period. This phenomenon opened a gap between the plots of the levels of the two series. However, the two paths tracked parallel to each other after the eight months of diverging growth rates, since the growth rates of the two series returned to being very similar after April 1983.

The explanation for the divergence from September 1982 through April 1983 probably can be found in the unusual circumstances that existed in money markets at the time. Many innovations in money markets evolved during that period, such as the introduction of super-NOW accounts and money-market deposit accounts at commercial banks. There also was more than the usual degree of uncertainty regarding monetary policy, since that period immediately followed the termination of the Federal Reserve's "monetarist experiment," and the targets of monetary policy immediately following the termination of that experiment were unclear.

5. Conclusions

We conclude that both the Lucas critique and the Barnett critique can be circumvented by estimating Euler equations with weak separability tested and imposed prior to the construction of monetary aggregates in the financial sector. This conclusion applies to estimation of the technology of manufacturing firms, which demand monetary services as inputs, the tastes of consumers, who demand monetary services as arguments of their utility functions, and the technology of financial intermediaries, which supply financial services as outputs while contributing to the production of the economy's inside money supply.

While the Divisia index is not flawless when tracking the resulting weakly nested monetary aggregates that exist in such rigorous structures, we find that its tracking ability is better than that of other available statistical index numbers. This advantage is most evident when the ratios of the user cost prices of the services of components differ substantially from 1.0. Those user cost formulas depend upon the own rates of return such that the user costs are equal if and only if the own rates of return are equal. In fact, it can be shown that the Divisia index reduces to the simple sum if all own rates of return are equal, so that the components become perfect substitutes. The advantage of Divisia over simple sum is especially evident in the case of consumer demand, since our data in that case includes currency with a yield of zero, while the other assets in the consumer's portfolio yield a positive rate of return. In our results with manufacturing firms and with banks, no zero yielding asset was among the components of the aggregate and hence the results are less dramatic.

The ultimate objective of this line of research is to permit investigation of various forms of policy intervention into asset markets and to investigate properties of tastes and technology. We agree with Poterba and Rotemberg's (1987) views on the direction that this research should take, and we have taken that line of research several steps further. Clearly much remains to be done.

APPENDIX

Proof of Consumer's Aggregation Theorem

Proof of Theorem 1: Let the deterministic point (m_t, h_t, x_t, A_t) and the stochastic process (m_s, h_s, x_s, A_s), s = t+1, ..., t+T, solve problem 1. But let the deterministic point (h_t, x_t, A_t) and the stochastic process (h_s, x_s, A_s), s = t+1, ..., t+T, not solve problem 1a conditionally upon the process M_s = M(m_s) given for s = t, ..., t+T. Then there exist (h̃_s, x̃_s, Ã_s) ∈ D(s), s = t, ..., t+T, such that (4.8) evaluated at (h̃_s, x̃_s, Ã_s) is strictly greater than (4.8) evaluated at (h_s, x_s, A_s), conditionally upon M_s = M(m_s).

Hence (4.3) evaluated at (m_s, h̃_s, x̃_s, Ã_s) is strictly greater than (4.3) evaluated at (m_s, h_s, x_s, A_s). But since (h̃_s, x̃_s, Ã_s), s = t, ..., t+T, is feasible for problem 1a conditionally upon M_s = M(m_s), it follows that (m_s, h̃_s, x̃_s, Ã_s) is feasible for problem 1. Our assumption that (m_s, h_s, x_s, A_s), s = t, ..., t+T, solves problem 1 is contradicted. The analogous proof by contradiction applies to problem 2a. Q.E.D.

References

Arrow, K. J. and F. Hahn (1971), General Competitive Analysis . San Francisco, Holden-Day.

Barnett, William A. (1980), "Economic Monetary Aggregates: An Application of Index Number and Aggregation Theory," Journal of Econometrics, 14, 11-48.

Barnett, William A. (1983), "Definitions of 'Second Order Approximation' and of 'Flexible Functional Form,'" Economics Letters, Vol. 12.

Barnett, William A. (1987), "The Microeconomic Theory of Monetary Aggregation,'' in Barnett, William A. and Kenneth J. Singleton eds., New Approaches to Monetary Economics, Cambridge University Press.

Barnett, William A., D. Fisher and S. Serletis (1992), "Consumer Theory and the Demand for Money," Journal of Economic Literature, 30, 2086-2119.

Barnett, William A., J. Geweke and M. Wolfe (1991a), "Seminonparametric Bayesian Estimation of the Asymptotically Ideal Production Model," Journal of Econometrics, 49, 5-50.

Barnett, William, Geweke, John, and Michael Wolfe (1991b), "Seminonparametric Bayesian Estimation of Consumer and Factor Demand Models," in William A. Barnett, Bernard Cornet, Claude D'Aspremont, Jean Gabszewicz, and Andreu Mas-Colell (eds.), Equilibrium Theory and Applications, Cambridge University Press, Cambridge, pp. 425- 480.

Barnett, William, Geweke, John, and Piyu Yue (1991), "Seminonparametric Bayesian Estimation of the Asymptotically Ideal Model: The AIM Demand System," in William A. Barnett, George Tauchen, and James Powell (eds.), Nonparametric and Semiparametric Methods in Econometrics and Statistics, Cambridge University Press, Cambridge, pp. 127-174.

Barnett, William A. and Jeong Ho Hahm (1994), "Financial-Firm Production of Monetary Services: A Generalized Symmetric Barnett Variable-Profit-Function Approach,'' Journal of Business & Economic Statistics, vol. 12, January .

Barnett, William A., Hinich, Melvin, and Piyu Yue (1991), "Monitoring Monetary Aggregates under Risk Aversion," in Michael T. Belongia (ed.), Monetary Policy on the 75th Anniversary of the Federal Reserve System, Proceedings of the Fourteenth Annual Economic Policy Conference of the Federal Reserve Bank of St. Louis, Kluwer, pp. 189-222.

Barnett, William A. and Ge Zhou (1994a), "Financial Firms' Production and Supply-Side Monetary Aggregation Under Dynamic Uncertainty," Federal Reserve Bank of St. Louis Review, pp. 133-165.

Barnett, William A. and Ge Zhou (1994b), "Response to Brainard's Commentary," Federal Reserve Bank of St. Louis Review, pp. 169-174.

Belongia, Michael T. (1993), "Measurement Matters: Recent Results from Monetary Economics Re-examined," Journal of Political Economy, forthcoming.

Belongia, Michael and James Chalfant (1989), "The Changing Empirical Definition of Money: Some Estimates from a Model of the Demand for Money Substitutes," Journal of Political Economy, 97, April, pp. 387-398.

Benston, George J., Gerald A. Hanweck and David B. Humphrey (1989), "Scale Economies in Banking: A Restructuring and Reassessment," Journal of Money, Credit and Banking, vol. 14, November.

Berger, A.N., George A. Hanweck and David B. Humphrey (1987), "Competitive Viability in Banking: Scale, Scope and Product Mix Economies," Journal of Monetary Economics, 20.

Bernanke, B. (1983), "The Determinants of Investment: Another Look," American Economic Review (Papers and Proceedings), 73, 71-75.

Blackorby, Charles, D. Primont, and R. Russell (1977), "On Testing Separability Restrictions with Flexible Functional Forms," Journal of Econometrics, 5.

Blanchard, Olivier Jean. and Stanley Fischer (1989), Lectures on Macroeconomics, Cambridge: The MIT Press.

Chrystal, K. Alec and Ronald MacDonald (1994), "Empirical Evidence on the Recent Behavior and Usefulness of Simple-Sum and Weighted Measures of the Money Stock," Federal Reserve Bank of St. Louis Review, March/April, pp. 73-109.

Clark, Jeffery A. (1984), "Estimation of Economies of Scale in Banking Using a Generalized Functional Form," Journal of Money, Credit and Banking, vol. 16, February.

Debreu, Gerard (1959), Theory of Value, Cowles Foundation Monograph 17, Yale University Press, New Haven, CT.

Diewert, W.E. (1973), "Functional Forms for Profit and Transformation Functions," Journal of Economic Theory, 79.

Diewert, W.E. (1980), "Recent Developments in the Economic Theory of Index Numbers: Capital and the Theory of Productivity Measurement,'' American Economic Review, May.

Diewert, W.E., and T.J. Wales (1987), "Flexible Functional Forms and Global Curvature Conditions," Econometrica, 55.

Diewert, W.E., and T.J. Wales (1988), "A Normalized Quadratic Flexible Functional Form," Journal of Econometrics, 37.

Diewert, W.E., and T.J. Wales (1991), "Flexible Functional Forms and Tests of Homogeneous Separability," Working Paper # 91-12, University of British Columbia.

Drake, Leigh and Alec Chrystal (1994), "Company-Sector Money Demand: New Evidence on the Existence of a Stable Long-run Relationship for the United Kingdom," Journal of Money, Credit and Banking, August 1994, Part 1, pp. 412-438.

Fayyad, S.K. (1986), "Monetary Asset Component Grouping and Aggregation: An Inquiry into the Definition of Money," Ph. D. dissertation, University of Texas, Austin, Texas.

Feenstra, Robert C. (1986), "Functional Equivalence Between Liquidity Costs and the Utility of Money," Journal of Monetary Economics, March, 271-291.

Feldstein, Martin and James H. Stock (1994), "Measuring Money Growth When Financial Markets Are Changing," NBER Working Paper No. 4888, National Bureau of Economic Research, 1050 Massachusetts Avenue, Cambridge, MA 02138.

Gould, J. P. (1968), "Adjustment Costs in the Theory of Investment of the Firm," Review of Economic Studies.

Hancock, Diana (1985), "The Financial Firm: Production with Monetary and Nonmonetary Goods," Journal of Political Economy, 93.

Hancock, Diana (1987), "Aggregation of Monetary Goods: A Production Model," In Barnett, William A. and Kenneth J. Singleton eds., New Approaches to Monetary Economics, Cambridge University Press.

Hancock, Diana (1991), A Theory of Production for the Financial Firm, Boston: Kluwer Academic Publishers.

Hansen, Lars Peter (1982), "Large Sample Properties of Generalized Method of Moments Estimators," Econometrica, 50.

Hansen, Lars Peter and K. Singleton (1982), "Generalized Instrumental Variable Estimation of Nonlinear Rational Expectations Models," Econometrica, 50.

Harper, M.J., E.R. Berndt and D.O. Wood (1989), "Rates of Return and Capital Aggregation Using Alternative Rental Prices," in Jorgenson, D. W. and Ralph Landau (eds.), Technology and Capital Formation, Cambridge: The MIT Press.

Humphrey, David B. (1985), "Cost and Scale Economies in Bank Intermediation," In Aspinwall, R.C. and R.A. Eisenbeis eds, Handbook for Banking Strategy, New York: Wiley Professional Banking and Finance Series.

Lau, L.J. (1978), "Testing and Imposing Monotonicity, Convexity and Quasi-Convexity Constraints," In Fuss, Melvyn and Daniel McFadden (eds.), Production Economics: A Dual Approach to Theory and Application, 1, Amsterdam: North-Holland.

Leontief, W. (1947a), "A Note on the Interrelation of Subsets of Independent Variables of a Continuous Function with Continuous Derivatives," Bulletin of the American Mathematical Society, 55.

Leontief, W. (1947b), "An Introduction to a Theory of the Internal Structure of Functional Relationships," Econometrica, 15.

Lucas, R.E. (1967), "Adjustment Costs and the Theory of Supply," Journal of Political Economy, 75.

Murray, John D. and Robert W. White (1983), "Economies of Scale and Economies of Scope in Multiproduct Financial Institutions: A Study of British Columbia Credit Unions," Journal of Finance, vol. XXXVIII, June.

Mullineaux, Donald J. (1978), "Economies of Scale and Organizational Efficiency in Banking: A Profit Function Approach," Journal of Finance, vol. XXXIII, March.

Newey, Whitney K. and K.D. West (1987), "A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix," Econometrica, 55.

Pindyck, Robert S. and Julio J. Rotemberg (1983), "Dynamic Factor Demands and the Effects of Energy Price Shocks," American Economic Review, December.

Phlips, Louis and Frans Spinnewyn (1982), "Rationality versus Myopia in Dynamic Demand Systems," in R. L. Basmann and G. F. Rhodes (eds.), Advances in Econometrics , JAI Press, pp. 3-33.

Poterba, James M. and Julio J. Rotemberg (1987), "Money in the Utility Function: An Empirical Implementation," in William A. Barnett and Kenneth J. Singleton (eds.), New Approaches to Monetary Economics, Cambridge, Cambridge University Press, pp. 219-240.

Robles, Barbara (1993), "The Optimal Demand for Money in U.S. Manufacturing: A Dynamic Micro Theoretic Approach", Working paper, University of Colorado at Boulder.

Rotemberg, Julio J., Driscoll, John C., and James M. Poterba (1994), "Money, Output, and Prices: Evidence from a New Monetary Aggregate," Journal of Business and Economic Statistics, forthcoming.

Sargent, Thomas J. (1987), Dynamic Macroeconomic Theory, Cambridge, Massachusetts: Harvard University Press.

Sidrauski, Miguel (1967), "Rational Choice and Patterns of Growth in a Monetary Economy," American Economic Review, 57, 2, May, pp. 534-544.

Swofford, James L. and Whitney, Gerald A. (1987), "Nonparametric Test of Utility Maximization and Weak Separability for Consumption, Leisure, and Money," Review of Economics and Statistics, August, 69(3), pp. 458-64.

Thornton, Daniel and Piyu Yue (1992), "An Extended Series of Divisia Monetary Aggregates," Federal Reserve Bank of St. Louis Review, November/December, 74, no. 6, pp. 35-52.

Table 4.1

GMM Estimates of Parameters of M1 Theoretical Aggregator Function Nested in Consumer Demand Model

______________________________________________________________________
                         Inside Aggregator      Outside Aggregator
                        ___________________  ____________________________
Estimated Parameter        B1        B2         B3        B4        B5
Estimate                 0.9168   -0.3329     7.6018    42.717    0.6800
t-ratio                 62.489    -3.726     19.171     10.424    2.3769
______________________________________________________________________
Derived Parameter          r         a           b         d         n
Implied Estimate         0.9168   -0.3329     0.9825    0.5398    0.6800
______________________________________________________________________

Table 4.2

GMM Estimates of Parameters of M2 Theoretical Aggregator Function Nested in Consumer Demand Model

______________________________________________________________________
                          Inside Aggregator         Outside Aggregator
                     __________________________ __________________________
Estimated Parameter     B1       B2       B3       B4       B5       B6
Estimate              0.8975  -0.2669   0.2173   0.8426   0.8198   0.9177
t-ratio              43.9094  -3.3072  13.1376   1.9011  17.6566  14.6081
______________________________________________________________________
Derived Parameter       r        s        b        n        d1       d2
Implied Estimate      0.8975  -0.2669   0.9535   0.8426   0.4656   0.3371
______________________________________________________________________




Table 3.1

GMM Estimates of Parameters of Theoretical Aggregator Function Nested in Manufacturing Firm's Technology

______________________________________________________________________
Parameter        u11           f
Estimate         .05         61.82
t-ratio         9.41       228,814
______________________________________________________________________

Note: the other parameters of technology, outside the monetary aggregator function, were estimated to be (1.22, .689, 1.617, 1.635, 3.98, -1.61, -3.19, 5.23, 4.59, -3.75, -3.33, 5.52, 5.76, .963, -.035) respectively, with corresponding t-ratios of (9.8, 6.21, 7.84, 10.18, 1.01, -6.85, -7.32, 7.12, 10.42, -6.68, -2.32, 4.05, 10.74, 1.11, -.001) respectively. The estimated value of f implies estimates for (b1, b2) of b1 = 0.777 and b2 = 0.223.

Monotonicity was imposed upon the monetary aggregator function at only one point. However, monotonicity of that function was checked at all points, and violations of monotonicity conditions within the aggregator function did not occur at any observation. Convexity of the monetary aggregator function was imposed globally.

Monotonicity of the transformation function, W, was imposed at only one point but checked at all points. Monotonicity was violated for output at 3 points, for labor at 3 points, for materials at 5 points, for capital at 14 points, and for cash and securities at 32 points. Convexity of W was violated at 32 data points.

Table 2.1

GMM Estimates of Parameters of Theoretical Aggregator Function Nested in Financial Firm's Technology

______________________________________________________________________
Parameter        u11           q
Estimate        -.532        58.05
t-ratio         -2.54        839.3
______________________________________________________________________

Note: the other parameters of technology, outside the monetary aggregator function, were estimated to be (.039, .230, .030, .034, 260.8, .259, 58.0, 0.098, .326, .444, -.479, -1.586, 1.755, -1.764, -.298, 1.914, 1.163) respectively, with corresponding t-ratios of (2.52, 9.11, .06, .05, 2.90, 2.35, 839.32, -5.06, .80, 1.08, -1.30, -5.08, 4.29, -6.24, -1.59, 6.26, 9.26) respectively. The estimated value of q implies estimates for (b1, b2) of b1 = 0.72 and b2 = 0.28.

Monotonicity was imposed upon the monetary aggregator function at only one point. However, monotonicity of that function was checked at all points, and violations of monotonicity conditions within the aggregator function did not occur at any observation. Convexity of the monetary aggregator function was imposed globally.

Monotonicity of the transformation function, W, was imposed at only one point but checked at all points. Monotonicity was satisfied at all data points for outputs, at all data points for cash and labor, at all but 2 points for materials, and at all but 4 points for capital. Convexity of W was satisfied at all data points.

Reply to Cecchetti's Comment

by William A. Barnett, Milka Kirova, and Meenakshi Pasupathy

We very much welcome Stephen Cecchetti's comments on our paper. He has provided an interesting new method for computing monetary aggregates, and we are very pleased to find that the monetary aggregate that he proposes is not a simple sum. We hope that economists interested in progressing beyond the obsolete simple sum monetary aggregates will include Cecchetti's aggregate in their experiments. But we feel it is necessary for us to point out that the index-number-theoretic tradition that is relevant to understanding the Cecchetti index differs from the tradition that is relevant to understanding our research. In addition, the objectives of our paper are very different from those of Cecchetti's approach.

1. Historical Background

Historically there were two traditions in the index number theory literature. One was called statistical index number theory and the other economic index number theory. The latter was derived from microeconomic theory and generated the large literature on exact weakly separable aggregator functions, two-stage budgeting, the unit cost function as the true cost-of-living index, and many of the results on functional duality theory. See, e.g., Fisher and Shell (1972), Blackorby, Primont, and Russell (1978), and H. A. John Green (1964). It should be understood that the literature on economic quantity and price aggregation theory is not the source of any controversy in microeconomic theory, since its results are correct and beyond dispute. The basic principle upon which that literature is based is that an aggregate is exact if the economic agent behaves as if the aggregate were an elementary good in a shadow-world decision that tracks the real-world decision without error.

On the other hand, statistical index number theory, in its earliest days, was not directly connected with economic theory and hence was the object of much criticism from economic index number theorists. But statistical index number theory circumvented the need to estimate aggregator functions. The clash between the convenience of statistical index numbers and the theoretical validity of economic index number theory hampered progress for many years. In recent years the explosion of activity in index-number-theoretic research found its origins in the path-breaking research of Diewert (1976), who proved that a class of statistical index numbers tracks the exact aggregators of economic aggregation theory up to a third-order remainder term in the changes. He named that class the "superlative" index numbers. His result unified a subset of the available statistical index number theory with the entirely valid economic index number theory, thereby ending the division between the two approaches. Nearly all of the post-Diewert research in statistical index number theory is based upon the unified overlap and produces provable tracking ability relative to economic index number theory.
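Diewert's result can be illustrated with the Törnqvist index, perhaps the best-known member of his superlative class: the log growth of the aggregate is a weighted average of component log growth rates, with weights equal to the average of the two periods' expenditure shares. The sketch below is ours, not from the paper; the function name and the two-component data are hypothetical.

```python
import math

def tornqvist_growth(q0, q1, p0, p1):
    """Log growth of a Tornqvist quantity index between two periods.

    Each component's log quantity change is weighted by the average of
    its start-period and end-period expenditure shares. For monetary
    aggregation, the 'prices' would be user costs of the components.
    """
    e0 = [p * q for p, q in zip(p0, q0)]        # period-0 expenditures
    e1 = [p * q for p, q in zip(p1, q1)]        # period-1 expenditures
    s0 = [e / sum(e0) for e in e0]              # period-0 shares
    s1 = [e / sum(e1) for e in e1]              # period-1 shares
    return sum(0.5 * (a + b) * math.log(y / x)
               for a, b, x, y in zip(s0, s1, q0, q1))

# Two monetary components: quantities grow 10% and 2%,
# user-cost prices held fixed for simplicity.
g = tornqvist_growth(q0=[100.0, 200.0], q1=[110.0, 204.0],
                     p0=[0.05, 0.01], p1=[0.05, 0.01])
print(round(g, 4))
```

Because the weights are built only from observed prices and quantities, no aggregator-function parameters need to be estimated, which is exactly the convenience of the statistical approach that Diewert's theorem reconciles with economic aggregation theory.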

2. Cecchetti's Number

Cecchetti's index is not in the class of superlative index numbers and in fact has unknown tracking ability relative to the exact aggregator functions of economic aggregation theory. Hence Cecchetti's index is in the older pre-Diewert tradition of statistical index number theory. We believe that his interesting index merits serious research in the growing literature on monetary aggregation, but we hasten to add that the research contained in our paper is firmly rooted in economic aggregation theory and therefore has very different objectives. As explained in our paper, both the Lucas critique and the Barnett critique (as defined by Chrystal and MacDonald (1994) and explained in our paper) are solved by the approach that we advocate. Without a known connection with a nested aggregator function, the Cecchetti index seems vulnerable to the Barnett critique, and he does not comment on our estimation of the deep parameters of Euler equations as a solution to the Lucas critique.

3. Cecchetti's Criticism of Index Number Theory

Cecchetti states that the possible covariance between the weights of index-number-theoretic aggregates and the component growth rates is a defect. In a superlative statistical index number, the weights contain no unknown parameters; they contain only data. Similarly the growth rates of component quantities or prices are data. Multiplying together two correlated data stochastic processes to get a third derived data stochastic process is not a problem. For example, income is computed by calculating the sums of products of quantities and prices. The fact that those quantities and prices are correlated is irrelevant. Furthermore, there is good reason to want the weights and component processes to be correlated. Consider, for example, the Divisia index, which is known to track any aggregator function without any error at all in continuous time, since the Divisia index is just a transformation of the first order conditions for optimizing behavior. The weights on the component growth rates in the Divisia index are expenditure shares. If the aggregator function being tracked is Cobb-Douglas, all expenditure shares will be constant. Hence in that unlikely special case, the shares will be uncorrelated with the component growth rates. But along any other aggregator function, prices, quantities, and shares will all be correlated. Whether shares are positively or negatively correlated with prices depends upon whether the price elasticities of demand are greater or less than minus 1.0. Any index number that does not permit that correlation cannot track a non-Cobb-Douglas aggregator function adequately. In short, we consider Cecchetti's criticism of index number theory to be unwarranted.
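The Cobb-Douglas special case invoked above is easy to verify numerically: under Cobb-Douglas demands, each expenditure share equals the corresponding exponent, no matter how prices move, so shares cannot covary with anything. The sketch below is ours, not the authors'; the function name and numbers are hypothetical.

```python
def cobb_douglas_shares(alphas, prices, budget):
    """Expenditure shares implied by Cobb-Douglas demands.

    Cobb-Douglas demands are q_i = alpha_i * budget / p_i, so the
    expenditure share p_i * q_i / budget collapses to alpha_i:
    constant, and therefore uncorrelated with component growth rates.
    """
    q = [a * budget / p for a, p in zip(alphas, prices)]
    return [p * qi / budget for p, qi in zip(prices, q)]

print(cobb_douglas_shares([0.6, 0.4], [1.0, 2.0], 100.0))  # shares equal the alphas
print(cobb_douglas_shares([0.6, 0.4], [3.0, 0.5], 100.0))  # unchanged despite new prices
```

For any non-Cobb-Douglas aggregator the shares move with relative prices, which is precisely the correlation a tracking index must be allowed to exhibit.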

If Cecchetti's criticism of index number theory alternatively is directed at economic aggregation theory, rather than statistical index number theory, his criticism remains unwarranted. In economic index number theory, the aggregates are the economic aggregator functions themselves. Those functions are linearly homogeneous, increasing, and concave, and are weakly separable subfunctions within the structure of the economy. Those economic aggregator functions may have parameters, but need not have "weights" in any meaningful sense. The unknown parameters are constants and are not correlated with anything.

4. Methodology

The admissibility criteria used to produce data in index number theory and aggregation theory are outlined in Barnett (1982). Such aggregates must be produced over weakly separable groups, and the index used to aggregate over the groups must have known (preferably "superlative") tracking ability relative to the aggregator function. This construction process logically precedes the use of any such aggregate. Since more than one nested weakly separable group may exist, there can be more than one admissible aggregate, and they can be recursively nested, as in the M1, M2, and M3 component groups. Comparison between the admissible aggregates can be based upon their usefulness in policy or in other such applications. But the construction of admissible aggregates is logically prior to that final comparison. Constructing data in a manner designed from the start to rationalize some policy view is a violation of the principles of index number theory, and inadmissible index numbers violate existence conditions in aggregation theory and therefore cannot be viewed as tracking anything that exists. Hence they do not "measure" anything.

The purpose of all scientific research is to reveal the truth, not to alter the data in a manner that may tend to justify some preconceived policy view. The purpose of data is to measure something that exists, i.e. an aggregator function that is separable within the structure of the economy. If that which exists is not easily used in policy, or is not at all useful in policy, should we find a way to change the data so that it will appear to be useful in policy? If so, are we then justified in using that circular construct as data in a structural equation or model?

5. Conclusion

At the present time, Cecchetti's construct has no known foundations in economic aggregation theory, and is neither an economic aggregate nor an admissible statistical index number, in the modern post-Diewert definitions of the terms. But the Cecchetti number could be viewed as a "coincident economic indicator," in the Stock and Watson (1991) sense, and may thereby be of use as a policy tool. On those grounds, we believe that Cecchetti's construct merits serious further research. In fact we wish to applaud Cecchetti for contributions above and beyond the call of duty. It is rare for a discussant's comments to include an original and positive new research contribution. However, when it comes to the construction of aggregate data as measures of structural variables, aggregation theory and economic index number theory exist for a very good reason. In his comments, Cecchetti asks whether there is "an easier way to do this." Relative to the clearly stated structural objectives of our paper, the answer is no.

REFERENCES

Barnett, William A. (1982), "The Optimal Level of Monetary Aggregation," Journal of Money, Credit, and Banking, 14, November (Part 2), pp. 687-710.

Blackorby, Charles, Primont, Daniel, and R. Robert Russell (1978), Duality, Separability, and Functional Structure: Theory and Economic Applications, Amsterdam: North-Holland.

Chrystal, K. Alec and Ronald MacDonald (1994), "Empirical Evidence on the Recent Behavior and Usefulness of Simple-Sum and Weighted Measures of the Money Stock," Federal Reserve Bank of St. Louis Review, 76, March/April, pp. 73-109.

Diewert, W. Erwin (1976), "Exact and Superlative Index Numbers," Journal of Econometrics, 4, May, pp. 115-45.

Fisher, F. M. and K. Shell (1972), The Economic Theory of Price Indices, New York: Academic Press.

Green, H. A. John (1964), Aggregation in Economic Analysis: An Introductory Survey, Princeton, N.J.: Princeton University Press.

Stock, James H. and Mark W. Watson (1991), "A Probability Model of the Coincident Economic Indicators," in Kajal Lahiri and Geoffrey H. Moore (eds.), Leading Economic Indicators: New Approaches and Forecasting Records, Cambridge: Cambridge University Press.