A Heisenberg Bound for Stationary Time Series

 

Eric Blankmeyer

Department of Finance and Economics

Southwest Texas State University

San Marcos, TX 78666 USA

 

telephone 512-245-3253

 

Abstract

 

Heisenberg's principle of indeterminacy is applied to stationary time series models, which are important forecasting tools in the natural and social sciences. The position and velocity of a forecast are defined and are shown to be imperfectly correlated. Then a first-order autoregression is used to illustrate the trade-off between precision of position and precision of velocity. A counterpart of Planck's constant is identified, and the Heisenberg bound is derived for several autoregressive moving-average models. The time-energy version of the Heisenberg principle is discussed in the context of a stationary model in continuous time.

 

Copyright 1996 Eric Blankmeyer

The Heisenberg principle

Epistemological discoveries broaden our understanding of rational processes and scientific method. A few twentieth-century examples are Kurt Gödel's undecidability theorems [1], Kenneth Arrow's impossibility theorem [2], and the notion of computational intractability [3]. Through these results we begin to discern the boundaries of deduction, computation, inference and decision.

Perhaps no discovery of this kind is more remarkable than Werner Heisenberg's uncertainty principle, which put an end to the determinism of classical physics [4]. Heisenberg showed that an object's time path cannot be measured with arbitrarily high precision. The trajectory is constructed from the object's position and velocity; but these two variables never come into exact focus at the same time, so the observations are subject to an unavoidable random error.

Today, the status of the uncertainty principle is rather curious. Physicists believe that its practical significance is limited to measurements at the subatomic level. However, algebra students derive an abstract version in no particular scientific context, while analogies to the Heisenberg bound have been found in thermodynamics, statistical inference and economics [5]. These observations suggest that the scope of the uncertainty principle may be broader than is usually supposed. Meanwhile, most forecasters simply ignore Heisenberg's discovery.

Historical accident has played a role in this situation. From its inception, the uncertainty principle has been linked to quantum mechanics. Because Heisenberg was deeply involved in the development of that field, he applied the uncertainty principle to the study of atomic structure, stating it in terms of Planck's constant. This minuscule number sets the scale of measurement in the world of electrons and photons, but it is negligible where larger natural objects are concerned. Therefore, physicists play down the practical importance of the uncertainty principle outside quantum mechanics.

Nevertheless, Heisenberg's insight should be explored as thoroughly as possible. It may not change the way we forecast (nor has Arrow's theorem changed the way we vote); but it puts our forecasting models in a new perspective and reveals their inherent limitations. As the following comments attest, the uncertainty principle continues to generate interest and controversy among professional scientists and laypersons:

"Does the 'indeterminacy' in question mean only that we are restricted in our powers to observe and measure quantities which nevertheless have, in reality, perfectly definite and determinate values ? Or does it mean that the fundamental entities of matter-theory are like clouds or claps of thunder --things which, by their very nature, have vague, indefinite boundaries...? 6

"The uncertainty principle, in its statement of the limits of observation, is a part of the present scientific view of the nature of physical reality, with implications for philosophy in general." 7

"Every time there is a major advance in physics or mathematics, some social scientists and humanists are quick to see in it a new paradigm for their own modes of thought. It was this way with Newtonian physics, Einstein's theory of relativity, the Heisenberg principle, and now chaos theory. All such 'trendy' efforts are soon revealed to be transient and foolish." 8

"Thus, the determinism which we imagine and cherish remains an ideal toward which we can strive regarding large-scale phenomena, provided that limitations at the quantum level can be considered unimportant corrections. This has very real philosophical implications in everyday life, although the phenomena with which we generally deal are so gross that, within so-called practical limits, we can predict what will happen if we do certain things." 9

"We are left with an urgent question in our minds: is it just that by some curious conspiracy of nature we are prevented from knowing the world exactly and hence from predicting exactly what will happen next, or is it that the future is truly not determined by the present ? ....Just as it was important for man's view of himself in the universe to know whether his planet was the center of all the spheres, it is important to answer the question of determinism. " 10

In the sequel, we derive a Heisenberg bound for the stationary time series models used by forecasters in many fields of natural and social science [11]. These models represent damped cycles kept in motion by random shocks. Jolted from its mean level by the white noise, the time series tends to revert to the mean instead of wandering away from it. The speed of mean reversion depends on the time series' autocorrelation --its inertia or tendency to "track." Stationary models have been applied to a large variety of phenomena which can be described by nontrending random cycles. Moreover, the models are building blocks for time series that exhibit nondeterministic trends, like random walks (Brownian motion). Their ubiquity makes stationary models an interesting topic within which to explore the Heisenberg principle. As a first step, we identify the key components of a forecast.

The Ingredients of a Forecast

A time series forecast is constructed from sequential estimates of position and velocity. Obvious as this idea may seem, it is often ignored because the estimate of velocity is frequently implicit. Forecasters usually state their predictions in terms of the current position and the future (conjectured) position, without explicit reference to the velocity that links the two positions.

Contemplating an eighty-mile car trip between two Texas cities, San Antonio and Austin, I may say, "It's one o'clock now. I'll be in Austin at two thirty." This prediction mentions my current position (San Antonio) and my future position (Austin), but it also implies an estimate of my average velocity (about 53 mph). If I take into account possible delays due to road repair or bad weather, my velocity estimate is less certain, and so is my time of arrival in Austin.

Cruising at 53 mph, I eventually pass a sign that says "Austin: 60 miles". Do I then have accurate measurements of my position and velocity? No, for the sign is already behind me, and the distance to Austin is now a little less than sixty miles. To fix my position exactly, I might stop at the sign. However, the speedometer needle would then drop to zero, giving a very poor estimate of my average velocity! A physicist reasonably asserts that these discrepancies are unimportant. The estimates "Sixty miles south of Austin, traveling north at 53 mph" are entirely adequate in this case. The example merely illustrates that the two ingredients of a forecast, position and velocity, cannot both be measured exactly.

The correlation of position and velocity

We consider a stationary time series measured at equal intervals. The position of the series at time t is just its current value, denoted X_t. Its velocity or rate of change is simply the forward first difference X_{t+1} - X_t. We define two parameters: V_X denotes the variance of X_t, and its autocorrelation at one interval is r = Cov(X_{t+1}, X_t)/V_X. Stationarity requires that -1 < r < 1. Since velocity is the difference between X_{t+1} and X_t, its variance is

V_X + V_X - 2rV_X = 2V_X(1 - r) . (1)

Moreover, the covariance of X_t and X_{t+1} - X_t is

rV_X - V_X = -(1 - r)V_X . (2)

and so their squared correlation is

(1-r)/2 . (3)

By stationarity, (1-r)/2 < 1. Position and velocity cannot be perfectly correlated. This is the first aspect of the Heisenberg principle for stationary time series. An accurate forecast requires precise measurements of both variables, but the precision is limited by their imperfect correlation. An exact measurement of one variable cannot guarantee an exact measurement of the other.
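
The imperfect correlation (3) is easy to check by simulation. The sketch below is an illustration, not part of the original argument; it takes r = 0.5 and s = 1 and compares the sample squared correlation of position and velocity with (1 - r)/2:

```python
import random

# Simulate a first-order autoregression X_{t+1} = r*X_t + u_{t+1} and
# check that the squared correlation between position X_t and velocity
# X_{t+1} - X_t is close to (1 - r)/2, as in expression (3).
random.seed(0)
r, n = 0.5, 200_000
x = [0.0]
for _ in range(n):
    x.append(r * x[-1] + random.gauss(0.0, 1.0))
x = x[1000:]                                # discard burn-in
pos = x[:-1]                                # position X_t
vel = [b - a for a, b in zip(x, x[1:])]     # velocity X_{t+1} - X_t

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)

rho2 = cov(pos, vel) ** 2 / (cov(pos, pos) * cov(vel, vel))
print(rho2, (1 - r) / 2)   # both near 0.25
```

The sample value settles near the theoretical 0.25 for any sufficiently long simulation.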

This argument assumes that the model's parameters, r and V_X, have known values and do not need to be estimated. The indeterminacy is therefore not attributable to the sampling error of estimated coefficients. It is due to the inherent randomness of X_t itself. Physicists emphasize that better laboratory apparatus and more careful observation cannot remove Heisenberg's uncertainty, for it is a fact of nature. Similarly, our conclusions represent a best case which ignores estimation problems. The point is that, even in this best case, a perfectly accurate forecast is ruled out.

If r is known, the original time series can be transformed to a new time series with any other degree of autocorrelation. (Forecasters routinely perform such filtering using an estimate of r.) In what follows, it may be helpful to think of r as a decision variable which the time series analyst can manipulate.

 

 

The analogue of Planck's constant

To develop another aspect of the Heisenberg bound, we turn to a simple time series model, the first-order autoregression [12]:

X_{t+1} = rX_t + u_{t+1} , (4)

where the white-noise innovation u_{t+1} has zero mean and standard deviation s. In time series models, the counterpart of Planck's constant is the general scale parameter s. It represents the irreducible randomness in whatever time series we are modeling. By postulating s > 0, we acknowledge our inability to make deterministic predictions.

Of course, Planck's constant has a specific numerical value (about 6.625 x 10^-27 erg seconds), while the magnitude and unit of s depend on the particular time series under examination. Obviously, an alteration of the time series (seasonal adjustment, for example) will change s, as will any modification of the process that underlies the data. But these remarks do not affect the key point: for a given stationary random process, s is assumed to be an unchanging parameter, knowable if not actually known.

Let us demonstrate the analogy between s and Planck's constant (denoted by h) in a way that is familiar to physicists [13]. Their usual statement of the uncertainty principle is

 

(standard deviation of position) x (standard deviation of velocity) > h/2π . (5)

 

The right-hand side of expression (5) is a lower bound on the probable error when position and velocity are measured jointly. At that bound, greater precision of position leads to less precision of velocity, and vice versa. For a first-order autoregression, we have [14]

V_X = s^2/(1 - r^2) ; (6)

and it then follows from (1) that the velocity's variance is

2s^2/(1 + r) . (7)

According to expression (5), the geometric mean of (6) and (7) should have a positive minimum value. By differentiation with respect to r, that minimum is found to be about 1.3s^2 when r = 1/3. This result is displayed in Figure 1, where expressions (6) and (7) are graphed as functions of r. When the autocorrelation increases from -1 to 0, both variances diminish; but as r increases from 0 to 1, the trade-off emerges. A smaller value of (7) corresponds to a larger value for (6), and vice versa. The minimum of the geometric mean (5) is located at r = 1/3. So in time series analysis, as in quantum mechanics, we can determine a bound on the accuracy of position and velocity. In that bound, s replaces h as the scale parameter.
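
That minimum can be confirmed with a crude numerical search. The following sketch (illustrative only, taking s = 1) scans the stationary range of r and minimizes the geometric mean of the variances (6) and (7):

```python
import math

# Grid search over the autocorrelation r for the geometric mean of the
# position variance s^2/(1 - r^2) and velocity variance 2s^2/(1 + r),
# with s = 1. The minimum should occur near r = 1/3 at about 1.3.
best_r, best_val = None, float("inf")
for k in range(1, 3998):
    r = -1 + k * 0.0005               # r runs over (-1, 1) exclusive
    gm = math.sqrt((1 / (1 - r * r)) * (2 / (1 + r)))
    if gm < best_val:
        best_r, best_val = r, gm
print(best_r, best_val)               # near 1/3 and 1.3
```

The search settles at r near 1/3 with a geometric mean of about 1.299, in agreement with the differentiation argument.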

Figure 1. Variances (6) and (7) as functions of the autocorrelation r
Expression (5) has the merit of displaying clearly the trade-off between the two standard deviations. It is however incomplete because it makes no reference to the correlation between position and velocity. Expressions (3) and (5) together provide a comprehensive statement of the Heisenberg indeterminacy. The relationship between (3) and (5) can be illustrated by using a random-number generator to simulate the first-order autoregression.

Figure 2 displays one hundred observations with r = -0.9. As expression (3) shows, X_t and its first difference are highly correlated; the data are closely grouped around the line X_{t+1} - X_t = -(1 - r)X_t. Given an accurate measurement of one variable, the other variable could be predicted fairly precisely. However, it is difficult to measure either variable in the first place. Both position and velocity have large standard deviations; the observations are widely spread horizontally and vertically. In terms of expression (5), the product of the standard deviations is far above its bound.

In Figure 3, the simulation is repeated with r = 0. Now both variables have smaller standard deviations, but this advantage is offset by the weaker correlation between them. Neither is a very good predictor of the other. Finally, in Figure 4, r = 0.9. While the standard deviation of the velocity X_{t+1} - X_t is very small, the standard deviation of the position X_t is again large, and the correlation between them has practically disappeared. These trade-offs in measuring position and velocity are the essence of Heisenberg's principle.
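
The standard deviations and correlations behind Figures 2, 3 and 4 can be reproduced by simulation. The sketch below is illustrative (s = 1); the exact numbers vary with the random draws:

```python
import math
import random

# For r = -0.9, 0.0 and 0.9, simulate the first-order autoregression and
# report the standard deviations of position and velocity and the
# correlation between them. Theory: sd(position) = 1/sqrt(1 - r^2),
# sd(velocity) = sqrt(2/(1 + r)), correlation = -sqrt((1 - r)/2).
random.seed(1)

def simulate(r, n=100_000):
    x = [0.0]
    for _ in range(n + 1000):
        x.append(r * x[-1] + random.gauss(0.0, 1.0))
    x = x[1001:]                                # discard burn-in
    pos = x[:-1]                                # X_t
    vel = [b - a for a, b in zip(x, x[1:])]     # X_{t+1} - X_t
    m_p, m_v = sum(pos) / len(pos), sum(vel) / len(vel)
    sd_p = math.sqrt(sum((u - m_p) ** 2 for u in pos) / len(pos))
    sd_v = math.sqrt(sum((v - m_v) ** 2 for v in vel) / len(vel))
    c = sum((u - m_p) * (v - m_v) for u, v in zip(pos, vel)) / len(pos)
    return sd_p, sd_v, c / (sd_p * sd_v)

for r in (-0.9, 0.0, 0.9):
    sd_p, sd_v, corr = simulate(r)
    print(f"r={r:+.1f}  sd(position)={sd_p:.2f}  sd(velocity)={sd_v:.2f}  corr={corr:.2f}")
```

At r = -0.9 both spreads are wide but the correlation is strong; at r = 0 the spreads shrink while the correlation weakens; at r = 0.9 the velocity is sharp, the position diffuse, and the correlation nearly gone.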

 

 

Figure 2. First-order autoregression, r = -0.9

 

Figure 3. First-order autoregression, r = 0.0

 

 

 

Figure 4. First-order autoregression, r = 0.9

 

Uncertainty past, present and future

The gist of Figures 2, 3 and 4 can be stated in another way. For a first-order autoregression, the obvious forecast of X_{t+1} is rX_t. Let us ask whether it is advantageous to forecast from a highly autocorrelated time series, one in which r = 0.9, for example. The good news is that the current value, X_t, is typically quite useful for predicting the future value, X_{t+1}, because of the strong autocorrelation. The bad news is that the current value is hard to measure accurately because its variance is relatively large [about 5.26s^2 according to equation (6)]. In other words, the present is a reliable guide to the future, if only we knew what is happening in the present. On the other hand, suppose that the time series is filtered so that r = 0. Now the good news is that X_t has the least possible variance (just s^2), so the current value can be ascertained with more precision. The bad news is that the present is no longer informative about the future since there is no autocorrelation. This argument shows that the variance of the forecast error is s^2, irrespective of the parameter r, a conclusion which agrees with standard statistical analysis since E(X_{t+1} - rX_t)^2 = s^2 when the value of r is known.
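
The last claim is easy to confirm by simulation, since the one-step forecast error X_{t+1} - rX_t is exactly the innovation u_{t+1}. A minimal sketch (assuming s = 1):

```python
import random

# Verify that the variance of the one-step forecast error X_{t+1} - r*X_t
# equals s^2 (here s = 1) no matter what the autocorrelation r is.
random.seed(2)
error_variance = {}
for r in (0.0, 0.5, 0.9):
    x, errors = 0.0, []
    for _ in range(200_000):
        u = random.gauss(0.0, 1.0)
        x_next = r * x + u
        errors.append(x_next - r * x)   # the error is exactly the shock u
        x = x_next
    m = sum(errors) / len(errors)
    error_variance[r] = sum((e - m) ** 2 for e in errors) / len(errors)
print(error_variance)   # each value near 1.0
```

Each estimated variance is near 1.0, whatever r may be.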

We are reminded that, in stationary models, uncertainty is not confined to the future. The present and the past are also known only approximately. The current value X_t is merely a sample drawn from an underlying reality. The historical record is contaminated with random errors that distort our understanding of the past. It is easy to lose sight of these assumptions on which the stationary model rests. One naturally falls into the habit of considering the observed time series to be "what actually happened" --as if it were undistorted by noise. Heisenberg stated the situation clearly:

"In the strict form of the principle of causality, 'If we know the present exactly we can calculate what will happen in the future' it is not the conclusion but the premise which is false. We cannot, even in principle, know every detail of the present." 15

Other stationary models

We have used the first-order autoregression to examine the Heisenberg bound; let us briefly discuss its application to several other important models of stationary time series. The second-order autoregression [16]

X_{t+1} = a_1 X_t + a_2 X_{t-1} + u_{t+1} (8)

has V_X = s^2(1 - a_2)/[(1 - a_2 - a_1)(1 - a_2 + a_1)(1 + a_2)] (9)

and r = a_1/(1 - a_2) . (10)

Stationarity requires that all the terms in parentheses be positive. Substitution of (9) and (10) into (1) gives an expression for the variance of the first difference. As in (5), we minimize the geometric mean of the two variances subject to the stationarity requirements. This is now a problem in nonlinear optimization, and the Kuhn-Tucker theorems provide necessary conditions for the Heisenberg bound to be attained [17]. They are not sufficient conditions because (9) and (10) are not convex expressions in a_1 and a_2. However, the validity of the bound is verified by starting the minimization algorithm at several different values of a_1 and a_2 and observing that it always converges to the same point: a_1 = r = 1/3 and a_2 = 0, implying a bound of 1.3s^2. This is the same result we obtained for the first-order autoregression.
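
A crude grid search over the stationarity region reaches the same point without Kuhn-Tucker machinery. The sketch below is illustrative only (s = 1):

```python
import math

# Grid search over (a1, a2) in the stationarity region for the geometric
# mean of the position variance (9) and the velocity variance from (1).
# The minimum should land near a1 = 1/3, a2 = 0, with value about 1.3.
best = (float("inf"), None, None)
steps = 200
for i in range(1, steps):
    for j in range(1, steps):
        a1 = -2 + 4 * i / steps
        a2 = -1 + 2 * j / steps
        # stationarity: all three factors in (9) must be positive
        if min(1 - a2 - a1, 1 - a2 + a1, 1 + a2) <= 0:
            continue
        vx = (1 - a2) / ((1 - a2 - a1) * (1 - a2 + a1) * (1 + a2))
        r = a1 / (1 - a2)
        vd = 2 * vx * (1 - r)           # velocity variance, equation (1)
        gm = math.sqrt(vx * vd)
        if gm < best[0]:
            best = (gm, a1, a2)
print(best)   # near (1.3, 1/3, 0)
```

The grid confirms the unique minimum reported in the text.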

A mixed autoregressive-moving average model [18]

X_{t+1} = b_1 X_t + u_{t+1} - b_2 u_t (11)

is defined by the constraints

V_X = s^2(1 + b_2^2 - 2b_1 b_2)/(1 - b_1^2) (12)

r = (1 - b_1 b_2)(b_1 - b_2)/(1 + b_2^2 - 2b_1 b_2) (13)

-1 < b_1 < 1 (14)

-1 < b_2 < 1 . (15)

 

Nonlinear minimization shows that the Heisenberg bound is once again 1.3s^2 when b_1 = r = 1/3 and b_2 = 0. However, not every model's bound is attained at r = 1/3. For example, the simple moving average [19]

X_{t+1} = u_{t+1} - c u_t (16)

has V_X = s^2(1 + c^2) (17)

and r = -c/(1 + c^2) (18)

for -1 < c < 1 . (19)

From equation (1), the variance of the first difference is 2s^2(1 + c + c^2). If this expression and equation (17) are graphed on a logarithmic scale as functions of c, their geometric mean is found to be smallest when c = -0.29 and r = 0.27.
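
A short numerical search (illustrative, with s = 1) reproduces these values:

```python
import math

# Grid search over the moving-average coefficient c for the geometric mean
# of the position variance (17) and the velocity variance 2(1 + c + c^2).
# The minimum should occur near c = -0.29, where r is about 0.27.
best_c, best_val = None, float("inf")
for k in range(1, 2000):
    c = -1 + k / 1000                  # c runs over (-1, 1) exclusive
    gm = math.sqrt((1 + c * c) * 2 * (1 + c + c * c))
    if gm < best_val:
        best_c, best_val = c, gm
r = -best_c / (1 + best_c * best_c)    # equation (18)
print(best_c, r, best_val)             # about -0.29, 0.27 and 1.31
```

Note that this model's bound, about 1.31s^2, sits slightly above the 1.3s^2 attained by the autoregressive models.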

These examples are perhaps sufficient to indicate how the Heisenberg bound is determined for stationary models.

The indeterminacy of time and energy

Related to the indeterminacy of position and velocity is another physical law called the "time-energy" uncertainty principle, which also has a counterpart in time series analysis20. According to this concept, an object's energy can change abruptly during a brief time interval. The shorter the time interval, the greater the energy change can be. An electron may occasionally penetrate a barrier that it would normally find impassable. Still one cannot say that the electron has violated an energy conservation law, for the instantaneous penetration does not permit even a virtual measurement. (This phenomenon underlies the operation of the scanning tunneling electron microscope.)

To see the relevance to stationary models, consider the effect of sampling the first-order autoregression (4) more and more frequently. As the time interval shrinks toward zero, the autocorrelation r must increase. After all, the time series can change very little during an instant. As r approaches 1, V_X --the "energy" or "power" of the time series-- increases without limit according to equation (6) and Figure 1. In other words, a stationary time series whose volatility is unlimited over a sufficiently brief interval is the analogue of the time-energy indeterminacy.
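
A few values of equation (6) make the divergence concrete (taking s = 1):

```python
# Position variance V_X = s^2/(1 - r^2) from equation (6), with s = 1.
# As r approaches 1, the "energy" of the series grows without limit,
# roughly tenfold at each step below.
for r in (0.9, 0.99, 0.999, 0.9999):
    print(r, 1 / (1 - r * r))
```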

Conclusions

The principal conclusions of our discussion may be stated concisely. The consequences of Heisenberg's principle are not limited to the subatomic world. They apply to a wide class of phenomena that can be represented as stationary time series, once it is recognized that the general scale parameter s is the counterpart of Planck's constant. A comprehensive statement of the Heisenberg principle includes expressions (3) and (5): position and velocity are not perfectly correlated, and there is a trade-off in the precision of the two variables. These impediments to an ideal forecast cannot be eliminated by reducing the sampling error of parameter estimates. However, standard statistical analysis implicitly takes account of the Heisenberg indeterminacy, so our recognition of the uncertainty principle does not call for operational changes in time series forecasting. Instead, our discussion offers a perspective on the intrinsic limitations to which those forecasts are subject.

References

 

1. Raymond M. Smullyan, Gödel's Incompleteness Theorems (Oxford, UK, Oxford University Press, 1992).

2. Kenneth J. Arrow, Social Choice and Individual Values (New York, NY, USA, Wiley, 1963).

3. Joseph F. Traub and Henryk Wozniakowski, 'Breaking Intractability', Scientific American, January 1994, pages 102-107.

4. Werner Heisenberg, 'Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik', Z. Physik 43, 1927, pages 172-198. Translated in J. A. Wheeler and W. H. Zurek, Quantum Theory and Measurement (Princeton, NJ, USA, Princeton University Press, 1983). See also a standard textbook such as F. Mandl, Quantum Mechanics (New York, NY, USA, Wiley, 1992), pages 82-86.

5. Gilbert Strang, Linear Algebra and Its Applications (San Diego, CA, USA, Harcourt Brace Jovanovich, 1988), page 260; B. H. Lavenda, 'Thermodynamic Uncertainty Relations and Irreversibility', International Journal of Theoretical Physics, Vol. 26, No. 11, 1987, pages 1069-1084; Eric Blankmeyer, 'The Uncertainty Principle in Microeconomics', working paper, Southwest Texas State University, 1995; Raoul Charreton, Relativity and Quanta in Economics (Paris, France, Actibus, 1992), pages 118-119, 151-153.

6. Stephen Toulmin and June Goodfield, The Architecture of Matter (New York, NY, USA, Harper & Row, 1962), page 290.

7. Herbert L. Strauss, 'Uncertainty Principle', Multimedia Encyclopedia (New York, NY, USA, Grolier Electronic Publishing, 1992).

8. Irving Kristol, letter to the Wall Street Journal, August 1, 1994.

9. Watson Davis, The Century of Science (New York, NY, USA, Duell, Sloan and Pearce, 1963), page 145.

10. David Park, The How and the Why (Princeton, NJ, USA, Princeton University Press, 1988), pages 326-327.

11. Standard references include: George E. P. Box and Gwilym M. Jenkins, Time Series Analysis: Forecasting and Control (San Francisco, CA, USA, Holden-Day, 1970); C. Chatfield, The Analysis of Time Series: An Introduction (London, UK, Chapman and Hall, 1984); Maurice G. Kendall and J. Keith Ord, Time Series (Oxford, UK, Oxford University Press, 1990).

12. Box and Jenkins, pages 56-58.

13. Mandl, pages 82-86.

14. Box and Jenkins, page 58.

15. Heisenberg, page 197.

16. Box and Jenkins, pages 58-63.

17. Saul I. Gass, Linear Programming, third edition (New York, NY, USA, McGraw-Hill, 1969), chapter 13; Judith Liebman, Leon Lasdon, Linus Schrage, and Allen Waren, Modeling and Optimization with GINO (San Francisco, CA, USA, Scientific Press, 1986), Appendices 1, 2, 3.

18. Box and Jenkins, pages 76-78.

19. Box and Jenkins, pages 69-70.

20. Mandl, pages 82-84.