Defined process
There are two schools of thought about what a defined process is.
School of thought 1
There are two major approaches to controlling any process:
- The defined process control model.
- The empirical process control model.
The defined process control model requires that every piece of work be completely understood. Given a well-defined set of inputs, the same outputs are generated every time. A defined process can be started and allowed to run until completion, with the same results every time.[1]
School of thought 2
Note: The following is a private correspondence from Don Reinertsen to Alan Shalloway.
Let's start by cleaning up the terminology. We can view the output of a process as deterministic or stochastic. In a deterministic process the outputs are 100 percent determined by the inputs. In a stochastic process the output is a random variable—it has different values that occur with different probabilities.
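This terminology can be sketched in a few lines of Python (an illustration of my own; the function names and formulas are invented, not from the letter):

```python
import random

def deterministic_process(x):
    # Deterministic: the output is 100 percent determined by the input,
    # so the same input always produces the same output.
    return 2 * x + 1

def stochastic_process(x):
    # Stochastic: the output is a random variable; repeated runs with
    # the same input yield different values with different probabilities.
    return 2 * x + 1 + random.gauss(0, 0.5)

print(deterministic_process(3))  # always 7
print(stochastic_process(3))     # varies around 7
```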
Fully determined systems do not exist, except in academia and thought experiments. All industrial process control systems have stochastic outputs. They are partially determined. We use imperfect models to determine how to vary process inputs to control certain important measures of output within a useful control range. We embed these models in control strategies; for example, we might use a PID controller to weight the current level of parameters, their time derivatives, and their integrals to generate a control signal. We optimize our control strategy to balance the cost of control with the benefits of control. The tightness of the control band is simply an economic choice. We do not control for the sake of control. We do not believe that the lowest possible variance is the most desirable operating point.
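The control strategy mentioned above can be illustrated with a minimal textbook PID controller (a sketch under invented assumptions: the gains, time step, and the simple integrating plant are my own, chosen only to show the proportional, integral, and derivative terms being weighted into one control signal):

```python
class PIDController:
    """Weight the current error (P), its integral (I), and its time
    derivative (D) to generate a control signal."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a simple integrating plant (level += input * dt) to a setpoint.
pid = PIDController(kp=1.0, ki=0.1, kd=0.0, dt=0.1)
level = 0.0
for _ in range(500):
    u = pid.update(setpoint=1.0, measured=level)
    level += u * pid.dt
```

In practice the gains kp, ki, and kd are the economic knobs: tighter control costs more actuation, which is exactly the cost-benefit balance described above.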
It is useful to make a distinction between whether a process is fully determined and whether its output is fully determined. Although many people have a tendency to assume a defined process will produce a deterministic output, this is not always true—a precisely defined process can still produce an output that is random. For example, the process for obtaining and summing the results of flips of a fair coin may be precisely defined, while its output is a random variable.
Well-defined systems can produce outputs that range on a continuum from deterministic to purely stochastic. Just as we can structure a financial portfolio to change the variance in its future value—by ranging from all cash to all equity—we can make design choices that affect the amount of variance in a system's output.
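The portfolio analogy can be made concrete with a one-line two-asset sketch (the model and numbers are my own illustration): with a fraction w held in equity and the rest in riskless cash, the portfolio variance is w² times the equity variance, so a single design choice moves the output from deterministic (all cash) to fully stochastic (all equity).

```python
def portfolio_variance(equity_fraction, equity_var):
    # Cash contributes zero variance; only the equity fraction w
    # contributes, scaled by w**2.
    return equity_fraction ** 2 * equity_var

print(portfolio_variance(0.0, 0.04))  # all cash: 0.0 (deterministic)
print(portfolio_variance(0.5, 0.04))  # half equity: 0.01
print(portfolio_variance(1.0, 0.04))  # all equity: 0.04
```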
I believe that thinking of system output as a random variable may be more useful than labeling it as either unpredictable or predictable. We could think of the output of a system as completely unpredictable, macroscopically predictable, or microscopically predictable. It is unclear if anything falls into the first category—even a random number generator will produce uniformly distributed random numbers. It is the zone of what I would call "macroscopic" and "microscopic" predictability that is most interesting.
I can describe the distinction using the coin-tossing analogy. When I toss a fair coin 1000 times, I cannot predict whether the outcome of the next coin toss will be a head or tail—I would call these individual outcomes "microscopically unpredictable." There may be other microscopic outcomes that are fully determined since I have a fully defined process. For example, I could define this process such that there is a zero percent chance that the coin will land on its edge and remain upright. (If the coin lands on its edge, then re-toss the coin.)
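Such a fully defined process with microscopically unpredictable output can be sketched as follows (the probability weights, including the tiny "edge" chance, are invented for illustration):

```python
import random

def toss_fair_coin():
    # The process is precisely defined, yet each individual outcome is
    # microscopically unpredictable. The definition gives a zero percent
    # chance of *returning* "edge": if the coin lands on its edge, re-toss.
    while True:
        outcome = random.choices(["heads", "tails", "edge"],
                                 weights=[0.4995, 0.4995, 0.001])[0]
        if outcome != "edge":
            return outcome

tosses = [toss_fair_coin() for _ in range(1000)]
```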
Even when the outcome of an individual trial is "microscopically unpredictable," it is still a random variable. As such, it may have "macroscopic" or bulk properties that are highly predictable. For example, we can forecast the mean number of heads and its variance with great precision. Thus, just because the output of a process is stochastic, and described by a random variable, does not mean that it is "unpredictable." This is a rather important point because the derived random variables describing the "bulk properties" of a system are typically the most practical way to control a stochastic process.
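This macroscopic predictability is easy to check numerically (a sketch; the sample sizes are arbitrary): for n = 1000 fair tosses, the number of heads has mean np = 500 and variance np(1 − p) = 250, even though no single toss can be predicted.

```python
import random
import statistics

def count_heads(n):
    # Each toss is microscopically unpredictable, but the count of heads
    # is a random variable with highly predictable bulk properties.
    return sum(random.random() < 0.5 for _ in range(n))

n, p = 1000, 0.5
samples = [count_heads(n) for _ in range(2000)]

print(statistics.mean(samples))      # close to n*p = 500
print(statistics.variance(samples))  # close to n*p*(1-p) = 250
```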
If we conclude it is economically advantageous to make a system output variable more deterministic, then we can do this with or without feedback. For example, I can achieve good frequency response and low distortion on an audio amplifier either by selecting robust components or by using feedback. We only use feedback when this is the most cost-effective way to achieve our goal. Using feedback is a technical choice; it is not "required" to deal with variation.
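The amplifier example can be sketched numerically (the gain figures and noise level are invented for illustration): with negative feedback the closed-loop gain a/(1 + aβ) is roughly 1/β, so it is far less sensitive to variation in the raw component gain a than an open-loop design is.

```python
import random
import statistics

def open_loop(signal, gain_sd):
    # No feedback: output variation tracks component variation directly.
    gain = 10.0 * (1 + random.gauss(0, gain_sd))
    return gain * signal

def closed_loop(signal, gain_sd, beta=0.099):
    # Negative feedback: closed-loop gain a/(1 + a*beta) is approximately
    # 1/beta, largely independent of the noisy raw gain a.
    a = 10000.0 * (1 + random.gauss(0, gain_sd))
    return a / (1 + a * beta) * signal

ol = [open_loop(1.0, 0.05) for _ in range(5000)]
cl = [closed_loop(1.0, 0.05) for _ in range(5000)]
print(statistics.stdev(ol))  # roughly 0.5
print(statistics.stdev(cl))  # far smaller
```

Both designs deliver a nominal gain near 10; feedback simply buys the lower output variance at the cost of a much larger raw gain, which is the economic trade-off described above.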
Thus, I agree with you in that I believe some of these apparent clusters can obstruct clear thinking. I do not believe that there are two dyads: fully determined, use no feedback; empirical, use feedback.
Furthermore, I do not believe that there are two triads: defined process, fully determined output, use no feedback; and undefined process, unpredictable output, use feedback. I consider it more useful to treat these attributes as three distinct dimensions:
1. The degree of process definition.
2. The randomness of its output.
3. The amount of feedback that the process uses.
Of course, if we try to describe the space spanned by these three dimensions, we would find certain zones with very low occupancy. For example, we wouldn't find people using lots of feedback in well-defined systems that are already producing highly predictable output without feedback.
References
- ^ Schwaber, Ken; Beedle, Mike (2002), Agile Software Development with Scrum, Upper Saddle River: Prentice Hall, p. 25, ISBN 0-13-067634-9, http://www.controlchaos.com/download/Book%20Excerpt.pdf, retrieved 2007-07-06
Wikimedia Foundation. 2010.