A Basic Primer on the Subject of Complexity


Authored By: Mikel J. Harry, Ph.D.


Introduction

The study of complexity can be a very enlightening endeavor.  Throughout the literature, the idea of complexity is often framed as somewhat of a paradox.  Along these lines, Bonini’s Paradox, named after Stanford business professor Charles Bonini, describes the difficulty of constructing models or simulations that fully capture the workings of complex systems.

Many great thinkers have reflected on the idea of complexity.  In support of this notion, Albert Einstein once stated: “The definition of genius is taking the complex and making it simple.”  In this sense, Douglas Horton underscored Einstein’s conclusion by saying: “The art of simplicity is a puzzle of complexity.”

Digging a little deeper into the subject, one quickly learns that the concept of complexity is multidimensional in nature.  According to Wikipedia, “definitions of complexity often depend on the concept of a ‘system’— a set of parts or elements that have relationships among them differentiated from relationships with other elements outside the relational regime.”  To further this line of reasoning, we will take a look at complexity from a purely qualitative point of view and then consider the quantitative perspective.

The Qualitative Perspective

To launch our discussion, let’s first view complexity from a purely humanist perspective.  For example, when we look at the underlying structure of a tree, we tend to immediately notice the trunk and its progressively smaller branches.  Owing to this visual structure, many would say that such an image is not complicated, perhaps using the word “simple” to describe its character (see image shown below).

[Image: a tree’s trunk and its progressively smaller branches]

However, for some things we view in our world, like the overlapping forms that define a piece of modern art, there is no recognizable underlying structure or pattern (at the macro or micro level).  As a result, our mind interprets the visual image as a seemingly never-ending set of randomly overlapping shapes, colors and textures.  Thus, many would say that such an image is “complex,” like the example shown below.

[Image: a piece of modern art composed of overlapping shapes, colors and textures]

The Quantitative Perspective

From the dictionary we find that the term “complex” is described as anything that is “composed of many interconnected parts.”  In this case, a “part” is largely synonymous with the terms “element” and “node.”

Let’s now consider the idea of a network.  Again turning to the dictionary, we find that a network is simply “a system of interrelated elements.”  Owing to this definition, and only for the purposes of this essay, we will treat the terms “network” and “system” as synonyms.  Thus, we may now generalize and say that complexity is: “the state of a system or network that is defined or otherwise characterized by the intricate arrangements of connections between its many nodes.”

As a matter of abstraction, we can reduce the latter definition to a couple of factors; namely, nodes and connectors.  Just as the name would imply, a node is a terminal point, much like the hub of a wagon wheel, where the spokes are connectors.  While the hub has a system role, so do the spokes.

For the sake of argument, suppose we have a network that consists of only two nodes (N = 2) and one connector (C = 1).  In this case, the complexity factor would be M = 3 since there are N = 2 nodes and C = 1 connector, or simply M = N + C = 2 + 1 = 3.  To better visualize this particular case, the reader is directed to the upper left-hand side of the graphic provided below.  From this graphic, it should be apparent that as the number of nodes increases, the number of possible connections also increases, but at a disproportionate rate.

[Image: example networks showing how the number of possible connections grows as nodes are added]

The idea of forming small networks (sub-systems) into larger networks (systems) is not new.  This design strategy has been demonstrated to yield many benefits.  To extend our discussion, we must now give due consideration to Metcalfe’s Law.  Essentially, this law can be used to determine the number of unique connections (C) in a network based on the total number of nodes (N) in the network.  The relationship can be expressed as C = N(N − 1)/2.

Applying the Concepts

To better illustrate the ideas discussed thus far, consider the instance of N = 5 nodes.  Given this circumstance, we might logically ask: “If a network has 5 nodes, how many possible connections could exist?”  To answer this question, we can directly compute the number of connections (C) by way of the relation: C = N(N − 1)/2 = 5(5 − 1)/2 = 10.

However, when considering the total network, the number of nodes (N) and connections (C) must be added together.  Thus, for the case of N = 5 nodes, the complexity factor (M) would be given as: M = N + C = 5 + 10 = 15.  Understanding this point now begs the question: “Why is this idea so important to the field of Lean Six Sigma?”
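Before answering that question, it may help to verify the arithmetic above with a short sketch. The snippet below is illustrative only (the helper names are hypothetical, not taken from any established toolset); it reproduces the connection count and complexity factor for several network sizes, including the N = 5 case.

```python
# Illustrative sketch of the two relations used above:
#   C = N(N - 1)/2   unique pairwise connections among N nodes
#   M = N + C        complexity factor (nodes plus connections)

def connections(n: int) -> int:
    """Unique connections in a fully interconnected network of n nodes."""
    return n * (n - 1) // 2

def complexity_factor(n: int) -> int:
    """Complexity factor M = N + C."""
    return n + connections(n)

for n in (2, 5, 10, 20):
    print(f"N = {n:2d}  ->  C = {connections(n):3d},  M = {complexity_factor(n):3d}")

# N =  2  ->  C =   1,  M =   3
# N =  5  ->  C =  10,  M =  15
# N = 10  ->  C =  45,  M =  55
# N = 20  ->  C = 190,  M = 210
```

Note how quickly C, and therefore M, outpaces N; this is the disproportionate growth mentioned earlier.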

The answer to this question is really quite simple.  The complexity factor (M) of a system or network should not be viewed as being equivalent to the total number of defect opportunities (O) associated with that system or network.  The two would only be equal if each system element and connector represented a single defect opportunity.  It should be remembered that a large number of defect opportunities does not necessarily imply a complex state.  On the flip side, a high level of complexity would necessarily imply a very large number of defect opportunities (given that every node and connector has a system purpose).

As a sidebar note, consider a system where each node and connection sustains a performance capability ratio of Cp = .83.  This would be equivalent to Z = 2.5, or about 99.4% confidence.  Given this level of confidence for each node and connection, the graphic provided below displays the probability of zero system failures (ordinate) for a select range of M (abscissa).

An examination of this graphic reveals that, for the case of Cp = .83, the confidence of system success is reasonably robust to the various combinatorial possibilities of N and C, at least to the point where M = 20.  When M > 20, there is a rather sharp drop in the system confidence.

[Image: probability of zero system failures (ordinate) versus the complexity factor M (abscissa), given Cp = .83]

Again, the reader is reminded that this essay is a basic primer.  Owing to this, the various assumptions underlying the computations and other such analytical details are not introduced into the mix so as to keep the discussion light, simple and focused.  As previously stated, the aim is to create basic awareness, not grapple with technical details.
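That said, for readers who would like to reproduce the general shape of the curve, one simple assumed model suffices: treat each of the M nodes and connections as an independent opportunity that succeeds with probability p ≈ .9938 (the normal probability corresponding to Z = 2.5, i.e., Cp = .83), so that the probability of zero system failures is p^M. The sketch below reflects that assumption only; it is not necessarily the exact computation behind the graphic.

```python
# Sketch under an assumed model: each of the M nodes/connections succeeds
# independently with probability p = Phi(2.5), roughly .9938 (Z = 2.5, Cp = .83),
# so the probability of zero system failures is p ** M.
from statistics import NormalDist

p = NormalDist().cdf(2.5)  # ~0.99379
for m in (5, 10, 15, 20, 30, 50, 100):
    print(f"M = {m:3d}  ->  P(zero failures) = {p ** m:.3f}")

# M =   5  ->  P(zero failures) = 0.969
# M =  10  ->  P(zero failures) = 0.940
# M =  15  ->  P(zero failures) = 0.911
# M =  20  ->  P(zero failures) = 0.883
# M =  30  ->  P(zero failures) = 0.830
# M =  50  ->  P(zero failures) = 0.732
# M = 100  ->  P(zero failures) = 0.536
```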

Expanding the Example

Considering our example model (M = N + C = 5 + 10 = 15), let’s say the likelihood of each element (node) successfully performing its network role is 99%.  We will also say that each of the 10 connections maintains a performance confidence of 95%.  Thus, the likelihood of total mission success upon each engagement of the system would be (.99^5) × (.95^10) = .5694, or about 57%.  On the flip side, it can be said that, for each system engagement, there is a 100(1 – .5694) = 43.06% likelihood of mission failure.

Let’s now further suppose the network must be successfully engaged four (4) times over a one-hour period to achieve a larger goal.  Given this aim, the likelihood of success would be estimated as .5694^4 = .1051, or about 10.5%.  This is to say the likelihood of achieving mission success and realizing the post-mission goal is about 10.5%.  Obviously, such odds would not be viewed in a favorable light in most circles.  So how can the odds of success be improved?
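Before turning to that question, note that the two probability statements above are simple products of independent success probabilities. The short sketch below verifies them, again assuming every node, connection and engagement succeeds or fails independently.

```python
# Sketch of the worked example, assuming all nodes, connections and
# engagements succeed or fail independently of one another.
p_node, n_nodes = 0.99, 5    # each node performs its role with 99% confidence
p_conn, n_conns = 0.95, 10   # each connection performs with 95% confidence

p_single = (p_node ** n_nodes) * (p_conn ** n_conns)
p_four = p_single ** 4       # four successful engagements in a row

print(f"single engagement success: {p_single:.4f}")      # 0.5694
print(f"single engagement failure: {1 - p_single:.4f}")  # 0.4306
print(f"four engagements in a row: {p_four:.4f}")        # 0.1051
```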

Improving the System

Where a system or network is concerned, improving the odds of total mission success can be realized in a number of ways.  Owing to this, we might ask: “Is it possible to design (configure) the nodes, connections and capabilities in such a way that the net effect will minimize the risk of mission failure while concurrently reducing the total operating costs?”

Another question might be: “Is it possible to make the likelihood of mission success robust to the impact of complexity?”  Of course, one could go on posing such questions.  However, at day’s end, there are four primary options to reduce the consequential influences of complexity.

1) Design the system with fewer nodes and connections.

2) Configure the nodes and connections in such a way as to achieve a robust design.

3) Improve the capability and/or reliability of the sensitive nodes and/or connections.

4) Some combination of options 1 through 3.

Closing Remarks

As one might easily surmise, books could be written on how to realize the intents and aims of these four options.  For example, consider option 3.  By way of DMAIC, it is often possible to find the optimal parameter settings for the nodes and connections of each sub-system, thereby decreasing the system-level probability of failure.

Per the caveat offered at the beginning of this short essay, our discussion of the topic is not intended to provide the reader with a “user’s manual” for how best to cope with complexity.  Rather, its purpose is to discuss some of the most elementary aspects of the topic.  In this way, readers can begin to wrap their minds around the many issues that can come into play when attempting to leverage and manage complexity during the course of product design.

Online MindPro Black Belt Training and Certification
Six Sigma Wings for Heroes
The Great Discovery
Dr. Mikel J. Harry Biography & Professional Vita

Business Phone: 480.515.0890

Business Email: Mikel.Harry@SS-MI.com

Copyright 2013 Dr. Mikel J. Harry, Ltd.

About Mikel Harry

Dr. Harry has been widely recognized in many of today's notable publications as the Co-Creator of Six Sigma and the world's leading authority within this field. His book entitled Six Sigma: The Breakthrough Management Strategy Revolutionizing the World’s Top Corporations has been on the best seller list of the Wall Street Journal, New York Times, Business Week, and Amazon.com. He has been a consultant to many of the world’s top senior executives, such as Jack Welch, former CEO and Chairman of General Electric Corporation. Dr. Harry has also been a featured guest on popular television programs, such as the premier NBC show "Power Lunch." He is often quoted in newspapers like USA Today and interviewed by the media, such as The Economic Times. In addition, Dr. Harry has received many distinguished awards in recognition of his contributions to industry and society. At the present time, Dr. Harry is Chairman of the Six Sigma Management Institute and CEO of The Great Discovery, LLC.

8 Responses to A Basic Primer on the Subject of Complexity

  1. Mikel Harry says:

    From the dictionary we find that the term “complex” is described as anything that is “composed of many interconnected parts.” In this case, a “part” is largely synonymous with the terms “element” and “node.”

    Let’s now consider the idea of a network. Again turning to the dictionary, we find that a network is simply “a system of interrelated elements.” Owing to this definition; and only for the purposes of this essay, we will treat the terms “network” and “system” as synonyms. Thus, we may now generalize and say that complexity is: “the state of a system or network that is defined or otherwise characterized by the intricate arrangements of connections between its many nodes.”

    As a matter of abstraction, we can reduce the latter definition to a couple of factors; namely, nodes and connectors. Just as the name would imply, a node is a terminal point, much like the hub on a wagon wheel; where the spokes are connectors. While the hub has a system role, so do the spokes.

    For the sake of argument, suppose we have a network that consists of only two nodes (N = 2) and one connector (C = 1). In this case, the complexity factor would be M = 3 since there are N = 2 nodes and C = 1 connector, or simply M = N + C = 2 + 1 = 3.

    The idea of forming small networks (sub-systems) into larger networks (systems) is not new. This design strategy has been demonstrated to yield many benefits. In this sense, the whole is greater than the sum of its parts.

    To further this line of reasoning, we must now give due consideration to Metcalfe’s Law. Essentially, this law can be used to determine the number of unique connections (C) in a network based on the total number of nodes (N) in the network. The relationship can be expressed as C = N(N − 1)/2.

    To better illustrate the ideas discussed thus far, consider the instance of N = 5 nodes. Given this circumstance, we might logically ask: “If a network has 5 nodes, how many possible connections could exist?” To answer this question, we can directly compute the number of interconnections (C) by way of the relation: C = N(N − 1) / 2 = 5(5-1) / 2 = 10. Thus, for the case of N = 5 nodes, the complexity factor (M) would be given as: M = N + C = 5+10 = 15.

    Considering our example model (M = N + C = 5+10 = 15), let’s say the likelihood of each element (node) satisfactorily performing its network role is 99%. We will also say that each of the 10 interconnections maintains a performance confidence of 95%. Thus, the likelihood of total mission success upon each “firing” of the network would be (.99^5) × (.95^10) = .5694, or about 57%. On the flip side, it can be said that there is a 100(1 – .5694) = 43.06% likelihood of mission failure.

    So as to realize some post-mission goal, let’s now further suppose the network must be functionally engaged four (4) times over a one hour period. Given this aim, the likelihood of success would be estimated as .5694^4= .1051, or about 10.5%. This is to say the likelihood of achieving mission success and realizing the post-mission goal is about 10.5%. Obviously, such odds would not be viewed in a favorable light in most circles.

    Well, at day’s end, there are four options to improve the performance of a system (network).

    1. Design the system with fewer nodes and connections.

    2. Configure the nodes and connections in such a way as to enhance performance robustness.

    3. Improve the capability and/or reliability of the sensitive nodes and/or connections.

    4. Some combination of options 1 through 3.

    As one might easily surmise, books could be written on how to realize the intents and aims of these four options.

  2. Mikel Harry says:

    Let’s consider a simple comparative definition of the words “complex” and “complicated.”

    1. Complex is used to refer to the level of components and interconnections in a system. If a problem or system is complex, it means that it has many interrelated components. Complexity does not evoke difficulty.

    2. Complicated refers to a high level of difficulty. If a problem is complicated, there may or may not be many parts but it will certainly take a lot of hard work to solve.

    3. An aphorism would be: Simple is better than complex and complex is better than complicated.

    4. Quantum mechanics is an inherently complex subject, but the textbook was an even tougher slog because of the author’s complicated explanations.

    Please recognize that the aforementioned comparisons are highly general in nature and are not intended to be technically precise, but rather informative and illustrative.

  3. Mikel Harry says:

    We know that the idea of complexity is an essential component of reliability. For example, consider a system which exhibits twenty (20) nodes and connections (all of which must independently function in order for the system to operate).

    In this case, we shall simplify our terminology and recognize a node or connection as an independent “system element.” If the reliability of each element is R = .95, the system reliability would be .95^20 = .3585, or about 36%. Hence, the relationship between reliability and complexity.

    Of course, we can reverse the aforementioned example. If the system reliability is given as .3585, then the element reliability would be computed as .3585^(1/20) = .95, or 95%.
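    A minimal sketch of both directions of this calculation is given below (the function names are illustrative only, and the elements are assumed to be independent).

    ```python
    # Series system of independent elements:
    #   system R = element R ** M, and inversely, element R = system R ** (1/M).
    def system_reliability(element_r: float, m: int) -> float:
        return element_r ** m

    def required_element_reliability(system_r: float, m: int) -> float:
        return system_r ** (1 / m)

    print(round(system_reliability(0.95, 20), 4))              # 0.3585
    print(round(required_element_reliability(0.3585, 20), 4))  # 0.95
    ```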

  4. Mikel Harry says:

    For the moment, let’s consider “manmade” complexity. This particular understanding of “complexity” must be relative to something, like “getting some cash from your bank.” From the customer’s point of view, the process is simple: put your bank card in the slot, poke in the numbers and pull out the cash. However, when the myriad of hardware and software systems that underpin that transaction are considered, its complexity (per se) is likely to be beyond the layperson’s ability to grasp; therefore, it would be viewed as being “complicated.”

    On the other hand, the engineering team that makes the machine might see the supporting systems and interactions to be fairly straightforward and relatively simple. Same thing goes for slot machines in Vegas. Therefore, the notion of complexity is perhaps more “relative” than what initially meets the eye. No doubt, the original vacuum tube computers were seemingly complicated at the time, but today, those same computers would be considered fairly simple.

  5. Mikel Harry says:

    As many would say, product and service reliability is paramount. For example, if the reliability of each element is given as 98% and the complexity factor is defined as M = 20, then the overall reliability would be .98^20 = .667, or about 67%. So, you might now ask which gives you greater leverage: increasing the reliability per element, reducing the number of elements, or some combination of the two? By virtue of this mini-example, you can see that the complexity of a system is directly related to its reliability. Therefore, complexity is of great pragmatic concern.
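    To make the leverage question concrete, the small comparison below contrasts the two levers under the same series-system assumption (overall reliability equals element reliability raised to the power M). The alternative values of 99% and M = 15 are assumptions chosen only for illustration.

    ```python
    # Illustrative comparison of the improvement levers for a series system.
    baseline = 0.98 ** 20         # ~0.67: 98% elements, M = 20 (as above)
    better_elements = 0.99 ** 20  # ~0.82: raise element reliability to 99%
    fewer_elements = 0.98 ** 15   # ~0.74: reduce the element count to M = 15

    print(f"baseline:        {baseline:.3f}")
    print(f"better elements: {better_elements:.3f}")
    print(f"fewer elements:  {fewer_elements:.3f}")
    ```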

  6. Mikel Harry says:

    As a sidebar discussion, we should consider the “Point of Bifurcation” in Chaos Theory: a small linear change at the right moment can trigger a sudden and large shift in system stability. In other words, it’s the crossover point between a stable system and one that is chaotic.

    Many phenomena in nature seem to be subject to the laws of physics when in a stable state, but when a single parameter is perturbed a little bit at the right time, chaos emerges. It’s the proverbial “straw that broke the camel’s back” effect.

    A lot of Chaos Theory is now being used in the field of “Catastrophe Theory” as a way to explain how a series of small, seemingly insignificant “anomalies” can induce a system failure in an otherwise highly stable system. Of course, this fits well with “Murphy’s Law.”

    By the same token, when a system’s nodes and connections reach some limit, the system collapses. In short, complexity can induce chaos.

  7. Mikel Harry says:

    Let’s consider an operational definition of complexity. From the dictionary we find that the term “complex” is described as anything that is “composed of many interconnected parts.” In this case, a “part” is largely synonymous with the terms “element” and “node.”

    Let’s now consider the idea of a network. Again turning to the dictionary, we find that a network is simply “a system of interrelated elements.” Owing to this definition; and only for the purposes of this essay, we will treat the terms “network” and “system” as synonyms.

    Thus, we may now generalize and say that complexity is: “the state of a system or network that is defined or otherwise characterized by the intricate arrangements of connections between its many nodes.”

    Just as the name would imply, a node is a terminal point, much like the hub on a wagon wheel; where the spokes are connectors. While each hub has a specific function, so do the spokes.

  8. Mikel Harry says:

    The study of complexity can be a very enlightening endeavor. A little digging into the subject and one quickly learns that the idea of complexity is multidimensional. For example, we can look at complexity from a purely mathematical point of view. In this case, the estimation of complexity is built around various theories and equations.

    On the other hand, we can view complexity from a purely humanist point of view. For example, when we look at the underlying structure of a tree, we tend to immediately see the trunk and its progressively smaller branches. Owing to this visual structure, most would say it’s not complicated, perhaps using the word “simple.”

    However, for some things in our world, like the overlapping splatterings of paint that form a piece of modern art, we see no underlying structure or pattern (at the macro or micro level). As a result, our mind interprets the visual image as a seemingly never-ending set of random splatterings. Thus, many would say the painting is “complex.”

    This short analysis on the perspectives of complexity should help us explore and discuss how product and process complexity can impact the likelihood of mission success.
