*Setting the Stage*

The primary motive for publishing this treatise is the mass of misunderstanding that currently surrounds the nature and purpose of the Six Sigma Model (SSM) and its use of a 1.5σ shift.

Where this topic is concerned, an Internet tour of the more popular Six Sigma discussion forums revealed a wide array of understandings and explanations involving the 1.5σ shift, some of which were quite bizarre. Even worse, several books on the subject were either in error or misleading.

For example, consider the case where a Six Sigma or Quality Professional is examining the capability of a particular *Critical-to-Quality* characteristic (CTQ). To this end, the practitioner collects some empirical data, makes a few calculations and then discovers the best-case capability of the CTQ is 3.8σ and the worst-case is given as 1.7σ. Thus, the practitioner concludes the shift factor would be 3.8σ – 1.7σ = 2.1σ. Obviously, this magnitude of shift (2.1σ) is far larger than the 1.5σ shift as advertised by the Motorola Six Sigma program. Consequently, the practitioner falsely concludes the 1.5σ shift is inappropriate, wrong or bogus.

In such cases, many practitioners find it far easier to say the SSM is too stringent or unrealistic (and that their data prove it). They find it quite difficult to admit their process capability (and control over that capability) is substandard. So it’s only natural for them to say the SSM is wrong (or somehow deficient). In this way, they don’t have to admit their knowledge and skills are lacking. In psychology, this phenomenon is called *cognitive dissonance reduction*.

The unenlightened practitioner often starts with good intent, and even the right questions, but ends with the wrong answers. With wrong answers, they invariably draw inappropriate and unjustifiable conclusions — which often get translated into operational manuals, management decisions and quality reporting systems. But then again, when all you own is a hammer, everything looks like a nail.

So, if you’re using a hammer to staple some papers together, then surely any failure to accomplish the task must be due to bad staples. After all, the hammer is operating just fine. Worse yet is the conclusion that staples have no pragmatic value in the real world, so don’t bother trying to use them – just nail the papers together. Granted, this may be easy enough, but mailing the papers with the nail attached can get a little bulky and expensive. As much as we may chuckle about the staple example, it plays out far too often when it comes to the SSM (and the 1.5σ shift).

Regrettably, many practitioners are just not aware that the SSM is actually a performance goal, no different from any other type of performance goal or objective. Owing to this, they frequently treat the SSM as if it were some type of golden rule that Motorola discovered, much like finding a new theorem in the world of geometry. In this context, the SSM is not a principle or rule. Above all, the SSM is not a predictive model. The SSM is not intended to forecast the current or future performance of a CTQ — it’s just a model. Of course, all models are wrong, but some are more useful than others.

This author’s speculation about how this plight came about is largely based on the *degrees-of-separation* between the current understandings and definitions of Six Sigma versus those provided by the original source. As the degrees-of-separation have increased over time, so has the extent of distortion. Of course, this is a well-known principle when passing a message from person-to-person. The final message is often nothing like the original communication.

*Authenticating the Message*

As time has passed, new generations of Six Sigma practitioners have entered the field. Interestingly, each new generation’s interpretations and understanding of Six Sigma is frequently based on the previous generation’s knowledge. Owing to an expanding population of Six Sigma and Quality Professionals, as well as the speed and distribution capabilities of today’s communication tools (like the Internet), attempting to fix a misunderstanding (once propagated) becomes virtually impossible.

As a result, this has caused the SSM (including the 1.5σ shift) to be frequently ignored, devalued or otherwise dismissed, especially by process-centric professionals. Owing to this, the inherent value of the SSM (including the 1.5σ shift) is often not fully appreciated. Consequently, sub-optimization prevails. In short, *dilution pollution* is the consequence.

In this context, we all understand that goals and objectives are established on the basis of human judgment. The case of establishing the SSM was no exception. Of course, the making of such decisions and judgments often calls upon the thoughtful consideration of supporting information, like empirical data, statistics and mathematics, as well as the experiences and beliefs of others.

Again, such sources of artifactual, theoretical and experiential evidence are frequently used to assist the goal-setting process, not to corroborate whether or not it’s the right goal. In reality, Six Sigma is not something that exists in physical space and time – it’s simply a vision of Motorola’s ideal, much like Plato’s Chair. In this context and regard, it should be easy to see that the SSM is not something that needs to be proven or disproven, per se.

Simply stated, goals and objectives are something to be achieved. They can take the form of an aspiration, imperative or virtually anything we strive for. If we consider the simple relation Y = *f* (X), we can view Y as the goal and X as the set of unique circumstances that defines Y.

Where the SSM is concerned, there exists a number of defining circumstances, such as the population standard deviation, shape of the distribution, specification limits and so on. Certainly, sampling statistics have no place in authenticating or validating any such circumstances. The circumstances that define the SSM are the result of human choices and decisions, not statistical samplings or theoretical equations.

Another way of viewing this is to say that performance goals and objectives are not something that can be mathematically derived or empirically proven before adoption. After all, we don’t use a mechanic’s toolbox to envision a better automobile design. In a nutshell, performance goals and objectives are not subject to any type or form of “proof” before or after their adoption. Goals and objectives stand on their own merits. This point cannot be overemphasized.

However, analytical tools like sampling statistics, mathematics and logic can be used to facilitate or otherwise guide the setting of goals and objectives. Again, such tools should not be used to authenticate or validate the worthiness of that goal or objective. Here again, the SSM is no exception.

For example, consider the 1.5σ shift factor. The numeric value of 1.5 is merely a comparative artifact. It results from contrasting the *Best-Case-State* (BCS) of process capability to the *Worst-Case-State* (WCS). By no means is it an analytical outcome or the result of some set of theoretical derivations. In the context of Six Sigma, the shift factor is merely a performance objective, not a universal truth, nor is it an eternal constant as many falsely believe.

*Illustrating the Idea*

To fully grasp this idea, consider Davis Bothe’s white paper entitled *“Statistical Reason for the 1.5σ Shift.”* Straight to the point at hand — goals and objectives need no rationale, statistical or otherwise. Goals and objectives stand on their own merits (as previously mentioned).

While Bothe’s paper is focused on determining the most suitable mean adjustment factor (for purposes of reporting process capability), the SSM stands as a performance model. Naturally, these are two different, yet interrelated ideas.

Nonetheless, in the introduction to his paper Bothe says: *“By examining the sensitivity of control charts to detect changes of various magnitudes, this article provides a statistical basis for including a shift in the average that is dependent on the chart’s subgroup size.”*

At this point, the reader may be wondering whether or not the aforementioned quote contains any truth. Well, from the looks of things, it does. In this writer’s opinion, the paper is well written, to the point and accurate. However, the paper has nothing to do with whether or not the Motorola 1.5σ shift factor is a worthy goal.

Based on this excerpt, it is abundantly clear that Bothe is providing a world-class answer to what some would argue is the wrong question, at least where the 1.5σ shift is concerned. In other words, he is treating the shift factor from a real-world viewpoint, not as a model objective (as it was originally intended).

Toward the end, Bothe goes on to say that: *“This article has provided the statistical rationale for adjusting estimates of process capability by including a shift in μ. The range of these statistically derived adjustments is very similar to the one based on the various empirical studies referenced in the six-sigma literature.”*

This would be fine and well except for two major considerations. First, the SSM is based solely on the influence of random error. However, the correction factors set forth in the other studies (cited by Bothe) concurrently consider both random and nonrandom sources of error. Consequently, his comparisons are really apples-to-oranges. Second, the SSM is based on infinite degrees of freedom, not a limited sampling (as was employed by Bothe). Here again, we have apples-to-oranges.

To grasp the full import of this discussion, let’s consider an analogy. Suppose you’re on a running team and the coach sets the objective to run one mile in 5 minutes, but you have historical data that shows it can be done in 4 minutes. Does that mean the coach’s goal is wrong or should be changed? Does it mean you should simply substitute 4 minutes for your actual time? Of course, the answer to both questions is a resounding no.

Above all, the SSM (or any of its parts thereof, including the 1.5σ shift factor) should not be used as a surrogate for empirical data when conducting actual performance capability studies, even though some still attempt to use it in this manner.

*Discussing the Model*

Where the theory of Six Sigma is concerned, the SSM contrasts the prescribed *Best-Case-State* (BCS) for a CTQ to its prescribed *Worst-Case-State* (WCS). From a more pragmatic perspective, the BCS is used to build or otherwise construct the WCS. This means the WCS is defined by either shifting or amplifying the BCS. For this portion of our discussion, the reader is directed to Exhibit 1.0 as a point of reference.

For purposes of the SSM, the WCS can be represented by a centered ±6.0σ probability distribution that has been offset from its previous location (or target value) by a factor of +1.5σ or -1.5σ. Of course, this results in a 4.5σ level of capability. This is called the *Shifted-Worst-Case-State*, or SWCS. For the unilateral case of the SWCS, the mean offset (shift) is always vectored in the worst-case direction.

Since the SSM is founded on a unilateral specification, the mean offset (shift) is always vectored toward the specified limit. This is what creates the worst-case condition. If the mean shift is vectored away from the specified limit, the related tail area probability gets better, not worse. When properly vectored, the SWCS translates to DPMO = 3.4, which is statistically equivalent to a unilateral performance capability of 4.5σ, regardless of whether the specified performance limit is given as the USL or LSL.
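To make the SWCS arithmetic concrete, here is a minimal sketch (in Python, assuming SciPy is available) that reproduces the numbers just described; `norm.sf` returns the upper-tail area of the standard normal distribution:

```python
from scipy.stats import norm

Z_BC = 6.0     # best-case capability: the USL sits at +6 sigma
shift = 1.5    # mean offset, vectored toward the specified limit

# Distance remaining between the shifted mean and the USL.
Z_WC = Z_BC - shift                 # 4.5 sigma
dpmo = norm.sf(Z_WC) * 1e6          # unilateral tail area, in parts per million

print(round(Z_WC, 1))   # 4.5
print(round(dpmo, 1))   # 3.4
```

Shifting the mean away from the limit instead (Z_BC + shift) makes the tail area smaller, which is why the worst-case vectoring matters.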

As a sidebar note, the reader should understand that the binomial and normal probability distributions converge given infinite degrees of freedom. Owing to this, the SSM can be treated in either a discrete or continuous form.

Let’s now consider the second perspective of the WCS. This particular set of circumstances can be represented by a centered ±6.0σ probability distribution that has been inflated (amplified) by a factor of 1.33. This is called the *Centered-Worst-Case-State*, or CWCS. The resulting distribution is wider, but still centered. Either way, the net effect is a 4.5σ model of performance capability that translates to DPMO = 3.4.

In some situations it might be better to present the SSM in a bilateral format (two-sided specification). Given the bilateral case, the SWCS defect rate would be given as DPMO = 3.4. However, the CWCS defect rate would be given as DPMO = 6.8. This is because both sides of the performance distribution must be considered concurrently. Naturally, both cases translate to a 4.5σ level of performance capability.

Since the normal distribution is symmetrical and bilateral, we need only to concern ourselves with the unilateral case in order to fully discuss and illustrate the content of this treatise. In this way, the supporting graphics and discussion can be simplified without loss of context, meaning or specificity.

**Defining the Best-Case**

With respect to the SSM, the Best-Case State (BCS) represents the *momentary performance capability* of the model CTQ. In this regard, the BCS is often referred to as *short-term capability* or *instantaneous reproducibility*. The reader is directed to Exhibit 2.0 for a graphical summary of the BCS.

Where the SSM is concerned, the BCS has two primary components. The first component is a symmetrical bilateral (two-sided) specification that is used to represent the performance expectations of the model CTQ. The second component is a continuous random normal probability distribution. This distribution is used to model the persistent sources of random error that influence CTQ performance. Periodic sources of random error are not reflected in the BCS distribution, but are accounted for in the WCS.

Owing to the symmetrical nature of the normal distribution and associated performance specifications, the BCS is treated as a unilateral (one-sided) case for purposes of this treatise. This has been done to simplify the discussion and supporting graphics.

The upper limit of acceptable performance for the SSM is preset such that USL = +6σ_{BC}, while the target specification is fixed at T = 0. In addition, the distribution location parameter µ_{BC} coincides with T such that µ_{BC} = T.

When considering the BCS, the population dispersion parameter is defined as σ_{BC} = [ (1 – M) x (USL – µ_{BC}) ] / Z_{U}, where M is the extent of safety margin (design margin), conventionally selected from the range 0.0 ≤ M ≤ 1.0. Although it’s common practice to establish M = .25, the SSM requires M = .50 for all cases.

Where the BCS is concerned, it’s important to recognize that σ_{BC} represents the maximum extent of error due solely to the influence of persistent random causes. In this context, the BCS represents the theoretical steady-state reproducibility of the model CTQ. The full range of performance capability associated with the BCS is specified as ±Z_{BC} = ±6.0, where -6.0σ_{BC} and +6.0σ_{BC} exactly coincide with the lower and upper performance tolerance limits (LSL and USL, respectively).

In accordance with conventional quality practice, the practical limits of unity for the normal probability distribution are given as ±Z_{U} = ±3.0, as is customary when using classical indices of capability like Cp and Cpk. The reader should also be keenly aware that the SSM does not consider, factor or otherwise attempt to account for the influence of assignable (nonrandom) sources of error.

The BCS performance expectation can also be prescribed as a capability ratio in the form Cp or Cpk, where k = 0. In this case example, the BCS capability index is given as Cpk = 2.0. By virtue of the association between the normal and binomial distributions, the continuous form of the BCS capability (±Z_{BC} = ±6.0) can be statistically converted to DPMO = .002 for the bilateral case and DPMO = .001 for the unilateral case. Inversely, if the DPMO is known to be .001 and the related CTQ has been assigned a unilateral tolerance, then by way of the normal distribution the standard normal deviate would be given as Z_{BC} = 6.0.

**Exemplifying the Best-Case**

To better illustrate this portion of our discussion, let’s consider a practical example. We’ll hypothesize a CTQ such that USL = 140, T = 100, µ_{BC} = T and M = .50. Thus, the standard deviation for the BCS would be defined as: σ_{BC} = [(1 – M)(USL – µ_{BC})] / Z_{U} = [(1 – .50)(140 – 100)] / 3 = 6.67.

In this context, σ_{BC} is a model parameter, not a dispersion statistic commonly associated with a sampling distribution. This means that the BCS is a model of idealized performance and, as such, constitutes a goal. Again, the reader is directed to Exhibit 2.0 for a graphical summary of the given BCS example.

With this discussion serving as a backdrop, the best-case performance capability is defined as Z_{BC} = (USL – µ_{BC}) / σ_{BC} = (140 – 100) / 6.67 = 6.00. Thus, we have defined the Six Sigma Model (SSM) from a best-case point of view. We have also noted that this level of capability (Z_{BC} = +6.0) is a declared value based on a prerequisite level of design margin. Consequently, σ_{BC} is not a measured quantity — it exists by definition only. In this light, it should be treated as a goal or idealized state, not as a result or outcome. Under no circumstances should the BCS serve as a substitute or surrogate for actual data when performing capability analyses.
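As a quick check on the arithmetic above, the same computation can be sketched in a few lines of plain Python (no external libraries required):

```python
# Hypothetical CTQ from the text: USL = 140, T = 100, mu_BC = T, M = .50.
USL, T, M = 140.0, 100.0, 0.50
mu_BC = T
Z_U = 3.0                                  # practical limits of unity

sigma_BC = (1 - M) * (USL - mu_BC) / Z_U   # best-case standard deviation
Z_BC = (USL - mu_BC) / sigma_BC            # best-case capability

print(round(sigma_BC, 2))   # 6.67
print(round(Z_BC, 2))       # 6.0
```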

**Defining the Worst-Case**

With respect to the SSM, the Worst-Case State (WCS) represents the *longitudinal performance capability* of the model CTQ. In this context, the WCS is also known as *long-term capability* or *sustainable reproducibility*.

The reader is directed to Exhibit 3.0 for a graphical summary of the WCS. Essentially, the WCS parallels that of the BCS, but with a few differences. To avoid redundancy of definitions and explanations, we will only discuss the points of difference.

Since the BCS capability is defined as ±Z_{BC} = ±6.0 and the WCS as ±Z_{WC} = ±4.5, the rate of expansion (amplification) is defined as c = Z_{BC} / Z_{WC} = 6.0 / 4.5 = 1.33. Algebraically, this reduces to σ_{WC} = cσ_{BC} (equivalently, Z_{WC} = Z_{BC} / c). It’s imperative to understand that c is treated as a constant solely for purposes of the SSM.

In this context, we set c = 1.33 so as to represent the bias introduced by periodic sources of random error that degrade the BCS performance distribution. By definition, such sources of error are fully unpredictable in terms of magnitude and duration. In this sense, they are intermittent and nonassignable, yet must be accounted for in the SSM (by way of the WCS).
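The equivalence between amplification and capability loss can be verified numerically. The sketch below (Python with SciPy assumed; c written as the exact fraction 4/3 rather than the rounded 1.33) inflates the running example’s σ_{BC} and recomputes the unilateral capability:

```python
from scipy.stats import norm

sigma_BC = 6.67          # best-case standard deviation from the running example
c = 4 / 3                # amplification constant (~1.33), written exactly
sigma_WC = c * sigma_BC  # inflated worst-case standard deviation

# The USL still sits 6 sigma_BC above the centered mean, so capability becomes:
Z_WC = 6.0 * sigma_BC / sigma_WC     # = 6.0 / c = 4.5
dpmo = norm.sf(Z_WC) * 1e6           # unilateral defect rate

print(round(Z_WC, 2))    # 4.5
print(round(dpmo, 1))    # 3.4
```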

This is often confusing to many practitioners. They are familiar with the terms sporadic problems (assignable causes; special causes) and chronic problems (random causes; noise) as used in Statistical Process Control (SPC) work. Most process workers do not understand that there are two forms of chronic problems: 1) persistent random causes – white noise which occurs continuously throughout the process and whose variation adds to the overall variation; and 2) periodic random causes of a temporal nature that cause either centering or variation issues, but are usually undetectable by classical SPC techniques.

To explain why the temporal random causes are undetectable would require a detailed explanation of hypothesis testing. Of course, such an explanation is beyond the scope of this article. Again, this treatise is focused on the SSM which relies on a normal probability distribution and its key parameters, not sampling statistics.

Furthermore, the location parameter for the WCS distribution is specified as μ_{WC} and set such that μ_{WC}= T. The full range of performance capability associated with the WCS is specified as ±Z_{WC}= ±4.5, where – 4.5σ_{WC} and +4.5σ_{WC} exactly coincide with the lower and upper performance tolerance limits (LSL and USL, respectively).

In this regard, the capability bandwidth of ±Z_{WC} = ±4.5 represents the prescribed worst-case range of performance that can be expected when persistent and periodic sources of random error influence the magnitude of σ_{WC}. Of prime importance, it must be remembered that the WCS distribution excludes any and all sources of nonrandom error, regardless of type, form, origin or timing.

The WCS performance expectation can also be prescribed as a capability ratio in the form Pp. The capability index Ppk is not germane to the WCS because the equality μ_{WC}= T is given as a perpetual steady-state condition (by definition).

By virtue of the association between the normal and binomial distributions, the continuous form of the WCS capability (±Z_{WC} = ±4.5) can be statistically converted to DPMO = 6.8 for the bilateral case and DPMO = 3.4 for the unilateral case. Inversely, if the DPMO is known to be 3.4 and the related CTQ has been assigned a unilateral tolerance, then by way of the normal distribution the corresponding standard normal deviate would be Z_{WC} = 4.5.
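Both directions of this conversion can be sketched with the standard normal functions (Python with SciPy assumed; `isf` is the inverse survival function):

```python
from scipy.stats import norm

# Forward: a 4.5 sigma unilateral capability expressed as a defect rate.
dpmo_uni = norm.sf(4.5) * 1e6        # unilateral case, ~3.4
dpmo_bi = 2 * norm.sf(4.5) * 1e6     # bilateral case (both tails), ~6.8

# Inverse: recover the standard normal deviate from a known unilateral DPMO.
Z_WC = norm.isf(3.4 / 1e6)           # ~4.5

print(round(dpmo_uni, 1), round(dpmo_bi, 1), round(Z_WC, 2))
```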

**Revealing the Shift**

Many practitioners of Six Sigma often inquire (and sometimes lament) about how the bilateral goal of ±6σ can be equated or otherwise calibrated to a unilateral defect rate of DPMO = 3.4. The resolution to this perceived dilemma is actually quite simple.

Since the SSM performance capability was prescribed in the best-case form as ±Z_{BC} = ±6.0 and in the worst-case form as ±Z_{WC} = ±4.5, the relative difference can be described in two ways. The first way is by the simple ratio c = Z_{BC} / Z_{WC} = 6.0 / 4.5 = 1.33. In this case, c is treated as a model constant since Z_{BC} and Z_{WC} are preset in the SSM; however, under certain special conditions (not related to this treatise) the amplification factor c can be treated as a variable.

The second way to calibrate the BCS capability to that of the WCS is by way of an equivalent offset in µ_{BC}, also called a *mean shift*. Personally speaking, this author prefers the phrase *linear offset* over that of *mean shift* because an offset can be made perpetual until declared otherwise, whereas a shift is often viewed as a temporal phenomenon with a relatively short dwell time.

Using the previous discussion as a backdrop, it can be demonstrated that Z_{Shift} = Z_{BC} – Z_{WC} = Z_{BC} – Z_{BC}/c. Since Z_{BC} – Z_{BC}/c can be rewritten as Z_{BC}(1 – 1/c), we may define k = 1 – 1/c, where k = |T – µ_{BC}| / |SL – T|. Substituting k for (1 – 1/c) yields the *shift factor* Z_{Shift} = kZ_{BC}. Where the SSM is concerned, we set c = 1.33; thus, k = 1 – 1/1.33 = .25. Substituting the numerical values associated with the SSM, we note that Z_{Shift} = kZ_{BC} = .25(6.0) = Z_{BC} – Z_{BC}/c = 6.0 – 6.0/1.33 = 1.5.
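The algebra reduces to three lines of code. A minimal check in plain Python (c written as the exact fraction 4/3):

```python
Z_BC = 6.0
c = 4 / 3                 # the SSM amplification constant (~1.33)

k = 1 - 1 / c             # shift fraction
Z_shift = k * Z_BC        # shift factor

print(round(k, 2))        # 0.25
print(round(Z_shift, 2))  # 1.5
```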

Thus, amplifying the BCS population standard deviation by a factor of c = 1.33 is made equivalent to a linear offset in the BCS population mean on the order of 1.5σ_{BC}. It’s quite important to note that both ways of describing the WCS reveal that DPMO = 3.4 for the unilateral case.

In this way, the goal of ±6σ is calibrated to the case DPMO = 3.4. The reader is directed to Exhibit 4.0 for a composite picture of the SSM. Exhibit 5.0 presents a table of standardized mean shift values (Z_{Shift}) for various levels of best-case performance capability (Z_{BC}) and a select range of inflation constants (c).
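A miniature, Exhibit-5.0-style table can be generated directly from the relation Z_{Shift} = Z_{BC}(1 – 1/c). The sketch below (plain Python) covers a few capability levels and the two inflation constants discussed in this treatise; the values are illustrative, not a reproduction of the exhibit:

```python
# Equivalent mean shift for selected capabilities and inflation constants.
# c = 4/3 (~1.33) is the SSM constant; c = 1.50 is Bender's correction.
for Z_BC in (4.0, 4.5, 5.0, 6.0):
    for c in (4 / 3, 1.50):
        Z_shift = Z_BC * (1 - 1 / c)
        print(f"Z_BC = {Z_BC:.1f}   c = {c:.2f}   Z_shift = {Z_shift:.2f}")
```

Note the two reference points: Z_BC = 6.0 with c = 4/3 yields the familiar 1.50, while Z_BC = 4.0 with c = 1.50 yields Bender’s 1.33.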

**Analyzing the Shift**

It’s interesting to note there has been a fair amount of research concerning how to most appropriately handle the biases introduced by process shifts and drifts when estimating the producibility of a product, process or service design. Regrettably, the majority of this research not only includes the influence of random error, but nonrandom error as well. Recall that the SSM is based solely on the influence of random sources of error — instantaneous (short-term) and longitudinal (long-term).

In this author’s opinion, some of the best applied research on the subject comes from a *Graphic Science* article published by Art Bender in 1962 under the title: *“Benderizing Tolerances – A Simple Practical Probability Method of Handling Tolerances for Limit-Stack-Ups.”* Although the date of this reference (1962) may seem somewhat antiquated, the mathematics are still just as valid today as they were then. Unlike a perishable food, the field of mathematics has no expiration date.

For all intents and purposes, Bender’s thesis contends that the *Root-Sum-Square* (RSS) method of analyzing assembly tolerances can (and likely will) underestimate the assembly defect rate. In particular, Bender cites the negative influence of mean shifts and drifts as the primary culprit.

As he adamantly points out, shifts and drifts tend to expand the standard deviation of a component over time. In turn, this increases the probability of an assembly failure. His solution was elegantly simple — just inflate (amplify) the standard deviation of each assembly component by a factor of c = 1.50 (when performing the related tolerance analysis). According to Bender, this makes the projected probability of assembly more realistic.

Of note, Bender readily admits that his corrective index (c = 1.50) was not derived, but resulted from the merger of several corroborating sources of information and data, including personal experience. In this sense, Bender’s recommendation of c = 1.50 must be viewed as a first-order approximation that has no standard underlying probability distribution.

In Bender’s day, 4.0σ was considered an ideal level of reproducibility, not 6.0σ as used in the SSM and generally accepted today. By way of Exhibit 5.0, it’s quite clear that when a 4.0σ level of capability is cross-referenced against an inflation factor of c = 1.50 (as proposed by Bender), the equivalent mean shift is 1.33σ. Obviously, this level of equivalent mean shift is different in magnitude and type from that of the Six Sigma Model (SSM).

Upon closer examination, the reader will realize that Bender’s correction not only includes the influence of random sources of error, but nonrandom sources as well. Again, this is in stark contrast to the SSM (which factors only random sources of error). Owing to this, the SSM promotes c = 1.33, not c = 1.50 as suggested by Bender. At the risk of redundant discussion, the two approaches are not directly comparable as they represent apples and oranges, so to speak. However, there are some highly valuable insights and principles that should be absorbed.

In this researcher’s opinion, a CTQ performance model (like the SSM) should not be configured to account for nonrandom sources of error (such as Bender’s proposed correction). The reason for this is simple — there are no probability distributions that can be used to emulate or otherwise model nonrandom error; however, there are numerous functions for random error (like the normal probability distribution). Consequently, models that are based solely on random error are likely to prove more useful than those that attempt to include sources of nonrandom error (like Bender’s model).

Granted, a person could further classify the general range of c. For example, on the upper end of things, processes that tend to exhibit a continuing “out of control” condition might need to rely on an amplification factor of c = 1.8, or perhaps even c = 2.0 or higher (depending on the prevailing circumstances). On the opposite end, the value c = 1.33 might be used to represent those processes that are able to sustain a high level of statistical control (random error only). Thus, a family of constants could be defined to cover a general range of process control conditions. Needless to say, more research is needed before such a strategy can be adopted.

*Applying the Model*

The mission of this case example is to exemplify the SSM in a simple yet effective way. From the bird’s eye viewpoint, we’ll come to understand how the 1.5σ shift factor can be employed to assess the robustness of a product, process or service design. The reader must be aware that performance models other than the SSM can be employed; however, since this treatise is focused on the SSM, we’ll stay consistent with its parameters and not those of some other model.

However, before continuing, we should briefly pause and explain that there are several types of analytical tools and procedures that can be used to achieve the aforementioned mission. Of course, some of these tools are more powerful than others. Where this case is concerned, only the most rudimentary tools have been used since this presentation strategy will keep the discussion light, simple and more readable. Remember, the intent is to provide insights, not training.

In this case, we’ll manipulate the design parameters of a hypothetical product using Monte Carlo simulation (Exhibit 6.0). More specifically, we will zero in on a particular CTQ called Y, which is the dependent variable. We’ll also say that the performance of Y is largely governed by 5 independent variables, often called X’s or factors (Exhibits 7.0 and 8.0). Consequently, we can say that Y = *f*(X1, X2, … , X5) + ε, where ε is the residual error. Thus, *f* determines how the X’s are blended together to create Y (Exhibit 9.0).
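A baseline of this kind can be sketched as follows. All of the factor means, standard deviations and the transfer function below are hypothetical placeholders (the actual values sit behind Exhibits 6.0 through 9.0, which are not reproduced here); NumPy is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)          # fixed seed so the sketch is repeatable
N = 1_000                               # simulated CTQ measurements

# Hypothetical factor distributions (illustrative stand-ins only).
means   = np.array([10.0, 5.0, 2.0, 8.0, 12.0])
sigmas  = np.array([0.5, 0.1, 0.05, 0.1, 0.6])
weights = np.array([1.0, 0.2, 0.1, 0.2, 1.0])

def f(X):
    # Hypothetical transfer function: a simple weighted sum of the five factors.
    return X @ weights

X = rng.normal(means, sigmas, size=(N, 5))   # one row per simulated unit
Y = f(X)

print(Y.shape)
```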

*Step 1:* Generate a best-case performance baseline for Y using the Monte Carlo simulator. For purposes of demonstration, the number of simulated CTQ measurements was set at N = 1,000. In this case, N was an arbitrary selection; however, under actual application conditions it would likely be computed based on a prescribed set of statistical conditions and constraints. Results of the baseline Monte Carlo simulation have been located in Exhibit 10.0.

*Step 2:* Conduct a sensitivity analysis of each X. In this case example, simple linear regression was employed to establish the slope (sensitivity) and intercept of each X relative to Y. Results of the sensitivity analysis have been located in Exhibit 11.0. According to the analysis, X1 and X5 are by far the most dominant factors. In fact, these two factors account for about 88% of the total variation in Y. Therefore, we would have to say that Y is highly sensitive to changes in X1 and X5. Generally speaking, this type of knowledge becomes important when it comes to design optimization. Of course, there are numerous types of optimization tools and methods that can be used to accomplish this task.

*Step 3:* Test the robustness of Y to a dynamic expansion of the standard deviation of each X. To accomplish this, we simply amplified the baseline (BCS) standard deviation of each X by a factor of c = 1.33. The resulting histogram has been located in Exhibit 12.0. By contrasting the BCS to the centered WCS, it is apparent the projected defect rate increased from DPMO = 0 to DPMO = 3.

*Step 4:* Test the robustness of Y to a static offset in the mean of each X. To accomplish this, we shifted the mean of each X by a factor of 1.5σ. The resulting histogram has been located in Exhibit 12.0. By contrasting the BCS to the shifted WCS, it is apparent the defect rate increased from DPMO = 0 to DPMO = 70,929.

*Step 5:* Recall the sensitivity analysis revealed that X1 and X5 carried the vast majority of leverage within the total system of causation. Thus, we can conclude that Y is not robust to a 1.5σ static offset in the mean of X1 or X5 (Exhibit 12.0). However, the product design is robust (insensitive) to a 1.5σ static offset in the mean of factors X2 through X4 (because they carry virtually no leverage in the total system of causation). Consequently, an optimization study must be executed to desensitize the design to a static mean offset on the order of 1.5σ (for factors X1 and X5).
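Steps 3 and 4 can be mimicked with a handful of hypothetical factors (NumPy assumed; every value below is illustrative, not the exhibit data). The sketch also shows why a 1.5σ static offset is so much more punishing than the 1.33 amplification: the per-factor mean offsets add linearly in Y, while the standard deviations combine in quadrature, so the capability of Y collapses disproportionately under the shift.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000                    # large N keeps the empirical estimates stable

# Hypothetical factor distributions and weights (illustrative values only).
means   = np.array([10.0, 5.0, 2.0, 8.0, 12.0])
sigmas  = np.array([0.5, 0.1, 0.05, 0.1, 0.6])
weights = np.array([1.0, 0.2, 0.1, 0.2, 1.0])
USL = 29.5                     # chosen so the baseline Y sits near 6 sigma

def z_capability(Y):
    # Empirical unilateral capability of Y against the USL.
    return (USL - Y.mean()) / Y.std()

Y_base  = rng.normal(means, sigmas, (N, 5)) @ weights                 # BCS baseline
Y_amp   = rng.normal(means, sigmas * 4 / 3, (N, 5)) @ weights         # inflate each sigma
Y_shift = rng.normal(means + 1.5 * sigmas, sigmas, (N, 5)) @ weights  # 1.5 sigma offset

print(round(z_capability(Y_base), 1))   # near 6 sigma
print(round(z_capability(Y_amp), 1))    # near 4.5 sigma
print(round(z_capability(Y_shift), 1))  # well below 4.5: the offsets compound
```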

**Closing the Conversation**

Through this treatise we have revealed the basic design and inner workings of Six Sigma through the SSM, including the 1.5σ shift. To better facilitate our discussion, we provided an example case study to demonstrate how the SSM can be employed when conducting a basic sensitivity analysis. When taken together, these two discussions served as the foundation for developing better insights into what Six Sigma is and how it can be used during the course of designing a product, process or service.

Based on our discussions, it can certainly be said that the SSM is not a forecasting model, nor is it a representation of some past or present event. It is certainly not a surrogate for reality, nor does it (or any of its objectives) serve as a constant implying that every CTQ must follow the model. *Plain and simple, the SSM is a statistical vision that epitomizes the aim of breakthrough – nothing more, nothing less.*

The ideas, methods and practices set forth in this treatise can greatly extend the reach of Six Sigma by providing the bedrock upon which a network of new and innovative knowledge can be built, especially in the design of products, processes and services. In turn, new knowledge spawns new insights, which foster new questions.

Naturally, the process of investigation drives the discovery of fresh answers to lingering questions. As a consequence, ambiguity diminishes and new direction becomes clear. Only with clear direction can people be mobilized toward a super-ordinate goal. Thus, the intellectual empowerment of the SSM represents the ultimate aim of Six Sigma.

*Dr. Mikel J. Harry Biography & Professional Vita*

Business Phone: 480.515.0890

Business Email: Mikel.Harry@SS-MI.com

Copyright 2013 Dr. Mikel J. Harry, Ltd.

Dr. Harry’s paper titled “The Shifty Business of Process Shifts: Part 3” provides a thorough explanation of the meaning and interpretation of the 1.5 sigma shift used in the Six Sigma Model. As a first-generation Six Sigma Practitioner and Master Black Belt, I am familiar with the literature regarding the shift and its place in the Six Sigma Model (SSM). At the time six sigma was proposed at Motorola, industry was striving to break through to three sigma performance. Six sigma performance was an extreme reach-out goal. It was originally proposed in order to focus attention on the quality improvement desperately needed to improve the field reliability of communication equipment.

In the initial technical documentation concerning the SSM, it was recognized that a six sigma level of quality equated to 1 defect per billion opportunities in the unilateral case and 2 defects per billion in the bilateral case. Further, it was shown that even if a process shifted as much as 1.5 standard deviations in one direction, the defect level would be 3.4 parts per million, or 6.8 ppm in the bilateral case, as Dr. Harry explains in this paper.
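These figures follow directly from the standard normal tail area. A minimal check, using only the standard library:

```python
from math import erfc, sqrt

def upper_tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2.0))

# Centered six sigma CTQ: spec limits at +/- 6 sigma.
centered_unilateral = upper_tail(6.0)        # ~1 defect per billion
centered_bilateral = 2.0 * upper_tail(6.0)   # ~2 defects per billion

# Mean shifted 1.5 sigma toward one limit: the near limit now sits
# 6.0 - 1.5 = 4.5 sigma from the mean.
shifted_unilateral = upper_tail(4.5)         # ~3.4 defects per million
shifted_bilateral = 2.0 * upper_tail(4.5)    # ~6.8 ppm, allowing the shift
                                             # to occur in either direction

print(f"{centered_unilateral * 1e9:.2f} per billion")
print(f"{shifted_unilateral * 1e6:.2f} per million")
```

The far spec limit of a shifted bilateral CTQ sits 7.5σ away and contributes a negligible tail area, so the 6.8 ppm figure is read as the shift occurring in either direction.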

It has been my experience that the inherent and random variation in a process is reasonably explained by a 1.5 sigma mean shift and/or a 1.33X growth in the standard deviation. When designing a product or process, use of these corrective measures allows the engineer to study the robustness of the design. Prior to introducing the shift factor, design engineers simply kept the simulated mean equal to the specification mean – even though this practice was knowingly deficient. This “less than optimal” practice continued because designers did not know the amount of “adjustment” to fold into the design. Owing to the research of Bill Smith and Mikel Harry, the world now has a plausible and rational means to account for process shifts and drifts during the course of product design.

Cathy Lawson, PhD.


Dr. Harry,

The article clarified the purpose and proper use of the Six Sigma Model as set forth by Motorola about 30 years ago. The separation of the performance goal from predictive modeling is explained thoroughly.

It is also clear that the worst-case condition of the Six Sigma Model can be described as a 1.5 sigma off-set in the population mean or as a 1.33× expansion of the best-case population standard deviation. I can certainly see some applications where using a linear mean off-set would be the best option, but at the same time, I can also envision cases where an inflated standard deviation would be more favorable. Either way, the net effect is a worst-case condition of 3.4 defects-per-million-opportunities.
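The near-equivalence of the two worst-case descriptions can be verified numerically for a unilateral six sigma CTQ. Note that 1.33 ≈ 6/4.5 = 4/3, which is why the two conditions land near the same 3.4 DPMO (a minimal sketch):

```python
from math import erfc, sqrt

def upper_tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2.0))

# Worst case as a 1.5 sigma mean off-set: the near spec limit sits
# 6.0 - 1.5 = 4.5 sigma from the shifted mean.
dpmo_shift = 1e6 * upper_tail(6.0 - 1.5)

# Worst case as a 1.33x expansion of the best-case sigma: the same
# limit sits 6.0 / 1.33 ~ 4.51 sigma from the (centered) mean.
dpmo_expand = 1e6 * upper_tail(6.0 / 1.33)

print(f"mean off-set:    {dpmo_shift:.2f} DPMO")
print(f"sigma expansion: {dpmo_expand:.2f} DPMO")
```

Using c = 4/3 exactly would make the two tail areas identical; the rounded 1.33 leaves a small gap (about 3.4 versus 3.2 DPMO).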

It is now evident that the Six Sigma Model (including the 1.5 Sigma Shift) is best used for optimizing the producibility of a product design. In this framework it would seem to have many applications. I would also have to agree with the author that many in the field of Six Sigma really don’t understand what the Six Sigma Model is or why it was created. What I found most interesting was using the Six Sigma Model as a target for qualifying a process before it starts producing a new product design.

Overall I believe this paper is one of those “high-value” resources I will keep on file.