Authored by: Mikel J. Harry, Ph.D.
Introduction and Overview
Making data-centric decisions using a simple Time Series Chart can sometimes be a little tricky. This is especially true when a large amount of data is involved. Given this situation, it's entirely possible that the volume of graphed data effectively masks or otherwise conceals the presence of an underlying pattern of behavior, such as some form of trend, shift or cycle.
To illustrate the latter point, let’s consider a fair pair of dice (see Exhibit 1.0). If you were to throw the dice 500 times, you would likely generate a Time Series Chart much like the one provided in Exhibit 2.0. In this case, the vertical scale represents the possible outcomes from throwing a pair of dice (2 through 12). The horizontal scale reflects the number of dice throws (1 through 500).
Using this illustration as a guide, you can clearly see there is a large mass of data displayed on the graph; however, upon close inspection, you might also “see” an underlying pattern in the data. To make the suspect pattern more distinct, many practitioners attempt to filter or otherwise “smooth the data” by employing the Moving Average.
Arguably, most process improvement specialists, managers and executives employ the Moving Average to suppress the masking effect of natural variations. Thus, the practitioner believes that by applying the Moving Average to a set of time-series data, the presence of a suspect pattern can be made to “rise above the background of noise” and reveal itself. In this context, the Moving Average is often viewed as a “filter.”
However, as we shall come to see, the Moving Average Chart can sometimes create artifactual patterns. This means the seeming nonrandom behavior (pattern) that might appear on your Moving Average Chart could be a peculiar outcome that's inherent to the tool rather than any system of causality. Owing to this, when you employ such charts, it's quite important to recognize that "not all is as it may seem."
Throwing the Dice
To best illustrate what is meant by the latter statement, let’s consider an elementary example. In this case, we’ll use a fair (unbiased) pair of dice as the data generator. If the dice are truly unbiased (fair), then every number on each cube (1 through 6) has an equal likelihood of occurrence. This can be easily verified by conducting a simple Monte Carlo simulation. Exhibit 3.0 shows the results of an unbiased pair of dice that was rolled N = 500 times using the Monte Carlo method of simulation.
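The Monte Carlo simulation just described can be sketched in a few lines of Python (a minimal stand-in for the spreadsheet version; the seed is arbitrary and only makes the run repeatable):

```python
import random
from collections import Counter

random.seed(42)  # arbitrary seed for reproducibility

N = 500  # number of throws, as in the exhibit
die1 = [random.randint(1, 6) for _ in range(N)]
die2 = [random.randint(1, 6) for _ in range(N)]

# Tally each face of the first die; for a fair die every face should
# appear roughly N/6, i.e. about 83 times.
counts = Counter(die1)
for face in range(1, 7):
    print(face, counts[face])
```

A Bar Chart of these tallies reproduces the roughly uniform shape seen in Exhibit 3.0.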
From the Bar Charts presented in Exhibit 3.0, it is easy to surmise the underlying distribution of each die. In this case, we can confidently say that the underlying distribution of each die is uniform in nature, meaning that each of the six numbers on either die has an equal likelihood of being selected during the Monte Carlo simulation.
Of course, we can sum the two numbers that result from every throw of the dice. This provides us with the total value of each roll. We can also create a Bar Chart to study the resulting underlying distribution (see Exhibit 4.0). As statisticians already know, the resulting Bar Chart will tend toward a bell-like shape (resembling the normal distribution), especially as more dice are added to the mix and more throws of the dice are made.
To further our understanding of the issue at hand, let’s create a Moving Average Chart for all N = 500 throws of the dice. In this case, we’ll use five progressive observations to define the moving window size (frame size). This means that as the window moves forward to encompass the next observation, the fifth observation from the previous window moves out; thereby, resulting in a new window average (see Exhibit 5.0).
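The window-of-five moving average described above can be expressed as a small function (a sketch; the function name is illustrative):

```python
def moving_average(data, window=5):
    """Return the moving averages of `data` for the given window size.

    The first average is available once `window` observations exist;
    each later average drops the oldest observation and adds the newest.
    """
    return [sum(data[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(data))]

# Example: six observations and a window of 5 yield two averages.
print(moving_average([2, 7, 6, 11, 9, 4]))  # -> [7.0, 7.4]
```

The first value, 7.0, is the mean of the first five observations; the second, 7.4, results when the window slides forward by one throw.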
To better grasp how all of this comes together, we’ll conduct a basic Monte Carlo simulation using Microsoft Excel. The guiding question for the simulation is: “Can you forecast (predict) the next moving average of the dice by knowing the current moving average?” The results of the simulation are presented in Exhibit 6.0. In this case, a Line Chart of the raw data and the related Moving Average Chart is displayed in the exhibit’s background.
As should be apparent from the Scatter Diagram provided in Exhibit 6.0, there is a positive correlation between any given moving average and the next progressive average. This is to say there exists some degree of predictability when attempting to forecast the next average. In the case of this example, it can be said that, by knowing the current window average, it's possible to foretell the next window average (to some extent).
But how is this possible when the originating data were fully random? Intuitively, there should be no correlation, but as we have demonstrated in this example, correlation exists. Certainly, on the surface, this would seem to be a bit of a paradox.
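The paradox is easy to reproduce. The sketch below (the seed is arbitrary) rolls the dice, computes the window-of-five moving averages, and correlates each average with the next one. Because adjacent windows share four of their five observations, a strong positive correlation appears even though the raw throws are purely random:

```python
import random

random.seed(7)  # arbitrary seed for reproducibility

N, window = 500, 5
totals = [random.randint(1, 6) + random.randint(1, 6) for _ in range(N)]

# Moving averages over a window of five consecutive throws.
mas = [sum(totals[i:i + window]) / window for i in range(N - window + 1)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Correlate each moving average with the one that follows it.
r = pearson(mas[:-1], mas[1:])
print(round(r, 2))  # theory predicts roughly (window - 1) / window = 0.8
```

Any run with a reasonably large N lands near 0.8, mirroring the Scatter Diagram of Exhibit 6.0.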
A Practical Example
Let’s now discuss the issue by using a pragmatic scenario. Suppose a certain accounting manager (who was working in a corporate setting) decided to create a simple Time Series Chart (Line Chart) of the company’s labor-variance (Budget minus Actual).
We can inspect the manager’s chart by referencing Exhibit 7.0. With this chart serving as a backdrop, the manager became concerned about the apparent increase in labor variance that occurred between observations 30 through 35. During this interval of time, the labor variance seemed to abruptly increase.
To better understand what the chart was saying, the manager called in an internal Continuous Improvement Expert. The expert advised the manager that a Moving Average Control Chart should be used to smooth out the highs and lows and flag any out-of-control conditions (from a statistical point of view).
By using such a chart, it would be possible for the manager to determine whether or not the labor variance was in a statistically steady-state condition. You might recall that a "statistically steady-state condition" is one in which the average is stable over time and all variations around that average are merely random perturbations without any deterministic cause.
If a statistical steady-state existed, there would be no cause for the manager to take corrective action. On the other hand, if the Moving Average Control Chart was to reveal a statistical out-of-control condition, then appropriate action to stabilize the situation would be fully justified (with known degrees of statistical risk and confidence). Exhibit 8.0 displays the resulting Moving Average Control Chart along with its statistical control limits (upper and lower).
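One common way to construct such limits (a sketch, not necessarily the exact chart the expert built) places the control limits for a moving average of window size w at the grand mean plus or minus three standard deviations of the window average, i.e. at x-bar ± 3s/√w. The data here are hypothetical, standing in for the manager's labor variances:

```python
import random
import statistics

random.seed(11)  # arbitrary seed; generates stand-in labor-variance data

window = 5
data = [random.gauss(0, 1) for _ in range(60)]  # hypothetical variances

mean = statistics.mean(data)
s = statistics.stdev(data)

# The average of w observations has standard deviation s / sqrt(w),
# so the 3-sigma limits for the moving average tighten accordingly.
ucl = mean + 3 * s / window ** 0.5
lcl = mean - 3 * s / window ** 0.5

mas = [sum(data[i:i + window]) / window for i in range(len(data) - window + 1)]
flagged = [i for i, m in enumerate(mas) if m > ucl or m < lcl]
print(round(ucl, 3), round(lcl, 3), flagged)
```

Any moving-average point outside these limits would be flagged, just as the red points in Exhibit 8.0 were.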
Based on the data presented in Exhibit 8.0, it was clear to the manager that some type of nonrandom event took place between observation 34 and 38. This is evident from the three data points that are outside the upper control limit (see red points). Based on this empirical evidence, the manager was convinced that something caused the sudden upswing in labor variance. Owing to this, a root-cause investigation was initiated.
Well, the manager's investigation revealed that the out-of-control condition could be rightfully explained by the inadvertent mistakes made by a new bookkeeper (coincidentally occurring just before the suspect time interval). As a quick remedy, the new bookkeeper was given some added instruction and returned to work.
Because the labor variance immediately improved (see the downward trend following the out-of-control condition), the manager declared that the training was successful at eliminating the problem. However, unbeknown to the manager, the underlying variations of the chart were fully random. In effect, the manager’s use of the Moving Average artificially induced the out-of-control condition and trending.
Consequently, the result was a faulty analysis which, in turn, gave rise to an unwarranted decision to take action when, in reality, no action was required. In this scenario, the manager inappropriately connected a set of fully coincidental events in a cause-and-effect way — all because the Moving Average Control Chart induced the appearance of trending and signaled a phantom out-of-control condition.
For the case scenario at hand, it is abundantly clear that an otherwise statistically stable system of random causes can be innocently misdiagnosed as being unstable; thereby, leading to the false conclusion that corrective remedies should be pursued. Perhaps you can now better appreciate how faulty decisions can be made when using the Moving Average. Again, the Moving Average can induce the appearance of nonrandom behavior (patterned data) when, in reality, the appearance is just an artifact of the tool, not necessarily some form of underlying causality.
The Technical Explanation
The phenomenon discussed in this white paper is called the Slutsky-Yule Effect. This effect holds that a moving average of a purely random time series can generate the appearance of patterned data even when there are no causal elements impacting the observations (measurements). The recognition of this effect provides a foundation for much of neo-classical business cycle theory. To this point, a decision maker can easily be misled when applying the Moving Average Chart – sometimes with grave consequences.
David Glasner addressed this point in his 1997 book entitled: “Business Cycles and Depressions: An Encyclopedia.” He stated:
“This flexible tool [moving average] for either eliminating or highlighting cycles [patterned data] can also create the illusion of cycles [patterned data] where none existed before. Simulations by Eugen Slutsky in 1927 and Udny Yule in 1926 demonstrated that a cyclical moving-average series could be constructed from a series of purely random numbers. The Slutsky-Yule effect of misleading waves is one of the problems … The moving-average series can never be brought up-to-date, so the longer the period of averaging, the more lost coverage at the end points. These are serious problems for forecasting.”
Consulting the Dictionary of Statistics by Graham Upton and Ian Cook (Oxford University Press), the authors stated:
“[There is] an undesirable consequence of applying a moving average to a time series. Suppose a time series consists of randomly chosen observations from the same population. We would therefore hope that any averaging would bring out the fact that the mean was constant. However, by chance some values will be larger than others. Let X.k be a particularly large value. When we apply a moving average, all the averages that involve X.k will be inflated. With most moving averages the inflation will be greatest for the average centered on the kth observation and will diminish on either side. Each extreme value will have a similar effect such that the series of averages will present oscillations that appear real but are due to chance.”
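The inflation the authors describe can be quantified. For a window of size w applied to independent, identically distributed observations with variance σ², two consecutive moving averages share w − 1 of their terms, which (a standard result, sketched here) forces a lag-one correlation of (w − 1)/w:

```latex
\operatorname{Var}(\bar{X}_t) = \frac{\sigma^2}{w}, \qquad
\operatorname{Cov}\!\left(\bar{X}_t, \bar{X}_{t+1}\right) = \frac{(w-1)\,\sigma^2}{w^2}
\quad\Longrightarrow\quad
\rho_1 = \frac{(w-1)\,\sigma^2/w^2}{\sigma^2/w} = \frac{w-1}{w}.
```

For the window of five used throughout this paper, that is 4/5 = 0.8 — the very correlation displayed in the dice Scatter Diagram.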
Regrettably, most Lean Six Sigma Black Belts, Master Black Belts, Quality Practitioners, Continuous Improvement Specialists, Managers and Executives are usually unaware of the Slutsky-Yule Effect – largely because this phenomenon was not a part of their education. Of course, it was not a part of their education because their trainers were likely unaware. Obviously, the Slutsky-Yule Effect carries the potential for unacceptable economic losses and human injury when erroneous decisions are made on the basis of false detections. Therefore, this phenomenon should be made known on a wider basis within the business community and process improvement industry.
Verifying the Dice Simulation
To verify the aforementioned simulation and conclusions, you can conduct your own Monte Carlo analysis. Just follow the steps provided below.
Step 1: Go to cell location A1. Establish N=500 consecutive random integers between 1 and 6. You can do this using the Excel function: =RANDBETWEEN(1,6) and then copy that down 500 rows.
Step 2: Go to cell location B1. Establish N=500 consecutive random integers between 1 and 6. You can do this using the Excel function: =RANDBETWEEN(1,6) and then copy that down 500 rows.
Step 3: Go to cell location C1 and compute: =A1+B1. You now have the sum of two fair dice that were thrown at the same time.
Step 4: Go to cell location D5 and compute: =AVERAGE(C1:C5). Copy this equation all the way down. You now have the Moving Average (MA). In this case, the MA window size is 5 consecutive observations.
Step 5: Go to cell location E6 and input: =D5, and then copy that down the column. You now have column E lagging column D by one observation.
Step 6: Create a scatter-plot of columns D and E. Once plotted, you will clearly see a strong positive correlation. This means that you can forecast the next average by knowing the average of the 5 throws you just made. This is to say that you can forecast (to some extent). But how is this possible if the originating data are purely random?
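The six Excel steps above can be mirrored in Python (a sketch; the column names A through E follow the spreadsheet layout, and the seed is arbitrary):

```python
import random

random.seed(3)  # arbitrary seed for reproducibility

N = 500
# Steps 1 and 2: two columns of random integers between 1 and 6.
col_a = [random.randint(1, 6) for _ in range(N)]
col_b = [random.randint(1, 6) for _ in range(N)]

# Step 3: the sum of the two dice for each throw.
col_c = [a + b for a, b in zip(col_a, col_b)]

# Step 4: moving average with a window of 5 observations.
col_d = [sum(col_c[i:i + 5]) / 5 for i in range(N - 4)]

# Step 5: column E lags column D by one observation.
col_e = col_d[:-1]
col_d_next = col_d[1:]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Step 6: the correlation the scatter-plot makes visible.
print(round(pearson(col_e, col_d_next), 2))
```

As with the spreadsheet version, the printed correlation lands near 0.8 despite the purely random input.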
As the dice example illustrates, a nonrandom pattern can be unwittingly induced when employing the Moving Average even though the source data are purely random in nature. Beware the Moving Average.