Theoretical Cost Analysis

from the Perspective of Competitive Advantage

by
Edwin B. Dean

----------------------------------------------

Theoretical cost analysis is an extension of parametric cost analysis, which is the generation and application of equations that describe relationships between cost, schedule, and measurable attributes of systems which must be brought forth, sustained, and retired. Parametric cost analysis has been a practical process for some years now, but there has been virtually no associated theory. Theoretical cost analysis provides a theoretical basis for all forms of cost analysis.

Goldratt (1990) notes that every science has gone through three distinct phases: classification, correlation, and effect-cause-effect. Classification in cost analysis began with the grouping of similar types of systems. Because systems were similar, it was presumed that they possessed similar cost properties. Correlation began when it was realized that these groups did have similar properties, such as cost per pound, within the group. It was also noted that different groups often had quite different ranges of those properties. Thus groups of systems could be distinguished by the values of their properties. Enter parametric cost analysis. Regression was applied within a group to determine the properties of that group. This was the beginning of the effect-cause-effect phase of cost analysis.

Based upon the mental correlation of weight and cost, it was postulated that weight was a cause of cost. The hypothesis was tested using regression. Lo and behold, weight was an almost universally significant variable for predicting cost. Enter the weight-based cost estimating relationship and parametric cost analysis. Since that time, many parameters, such as power, number of drawings, number of production units, and number of lines of code, have been found to be statistically significant within certain groups (stratifications) of systems. Though this was the earliest edge of effect-cause-effect, parametric cost analysis was still correlative, though it was now a mathematically based correlation. Generalizing these parameters, it became clear that size was a significant cost driver. It was soon determined that design parameters for aircraft were significant cost drivers as well. A further, and very important, step forward was the recognition that the log of the cost of the first pound of the first production unit was itself a parameter which drove cost. It was given the name manufacturing complexity. Causes were appearing from the analysis of data. This places parametric cost analysis in the realm of the empirical sciences.
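To make the form of such a cost estimating relationship concrete, the following sketch fits a weight-based relationship of the form cost = a * weight^b by ordinary least squares in log-log space. The weights, costs, and dollar units are purely illustrative assumptions, not data from any actual group of systems:

# A minimal sketch of a weight-based cost estimating relationship (CER) of the
# form cost = a * weight^b, fitted by ordinary least squares in log-log space.
# All data values are hypothetical and serve only to show the mechanics.
import numpy as np

weight = np.array([120.0, 250.0, 480.0, 900.0, 1500.0])   # lb, assumed
cost   = np.array([2.1,   3.8,   6.5,   10.9,  16.2])     # $M, assumed

# Fit log(cost) = log(a) + b * log(weight)
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)

def cer(w):
    """Predict cost ($M) for a new system of weight w (lb) within this group."""
    return a * w ** b

print(f"cost = {a:.3f} * weight^{b:.3f}")
print(f"predicted cost at 700 lb: {cer(700.0):.2f} $M")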

The earliest appearance of theoretical cost analysis may be the efforts of Norden (1970) at IBM to quantify the dynamic flows of the number of people applied to a hardware project. Norden cites work leading to this result going back to 1958. He used a theoretical reliability model to form an analogy which applied to the number of problems remaining to be solved in a task. The result was the determination that the Weibull distribution provided a theoretical basis for estimating the shape over time of the number of people applied to a project. Empirical validation followed. Putnam (1978) applied the Rayleigh model, a special case of the Weibull model, to the software development phase and found excellent empirical validation. He also noted that cost has units of work.
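For reference, the Rayleigh special case of the Weibull staffing model gives the number of people applied at time t as m(t) = 2Kat exp(-at^2), where K is the total effort and the peak occurs at time t_d with a = 1/(2 t_d^2). The sketch below evaluates this curve for assumed, purely illustrative values of K and t_d:

# A minimal sketch of the Norden/Putnam Rayleigh staffing curve, a special case
# of the Weibull model. K (total effort) and t_d (time of peak staffing) are
# assumed values chosen only for illustration.
import numpy as np

K   = 40.0          # total effort, person-years (assumed)
t_d = 2.0           # time of peak staffing, years (assumed)
a   = 1.0 / (2.0 * t_d ** 2)

t = np.linspace(0.0, 6.0, 61)                          # years
staffing   = 2.0 * K * a * t * np.exp(-a * t ** 2)     # people applied at time t
cum_effort = K * (1.0 - np.exp(-a * t ** 2))           # effort expended through t

print(f"peak staffing ~ {staffing.max():.1f} people at t ~ {t[staffing.argmax()]:.1f} yr")
print(f"effort expended by t_d: {cum_effort[t.searchsorted(t_d)]:.1f} person-years")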

Referencing Norden's work, Roberts (1964) used system dynamics (Forrester, 1961) to model the estimation of project effort and cost. Abdel-Hamid (1989) tailored Roberts' work for the estimation of software cost.
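The flavor of such a system dynamics model can be suggested with a toy stock-and-flow simulation, in which tasks remaining, workforce, and accumulated cost are stocks integrated forward with a simple Euler step. This is an illustrative sketch only; the rates and constants are assumptions, and it does not reproduce the Roberts or Abdel-Hamid models:

# A toy system-dynamics style project model: stocks (tasks remaining, workforce,
# cost) evolve under simple flow rates integrated with an Euler step. All
# constants are assumed and purely illustrative.
dt = 0.1                      # time step, months
tasks_remaining = 500.0       # stock: tasks left to complete
workforce = 2.0               # stock: people on the project
cost = 0.0                    # stock: accumulated labor cost ($K)

productivity = 1.0            # tasks per person-month (assumed)
labor_rate = 12.0             # $K per person-month (assumed)
hiring_delay = 3.0            # months to adjust staff toward the desired level
deadline = 24.0               # scheduled completion, months

t = 0.0
while tasks_remaining > 0.0 and t < 60.0:
    time_left = max(deadline - t, 1.0)
    desired_workforce = tasks_remaining / (productivity * time_left)
    # Flows: staff adjustment, task completion, cost accumulation.
    workforce += (desired_workforce - workforce) / hiring_delay * dt
    completed = min(workforce * productivity * dt, tasks_remaining)
    tasks_remaining -= completed
    cost += workforce * labor_rate * dt
    t += dt

print(f"finished at t = {t:.1f} months, total cost = {cost:.0f} $K")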

Stump (1988) applied a special purpose Markov chain to life cycle cost analysis. Unal, Dean and Moore (1990) generalized the approach and applied it to space transportation operations.
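One way a Markov chain can yield a life cycle cost is through the fundamental matrix of an absorbing chain: transient operational states accrue a cost per visit, retirement is the absorbing state, and N = (I - Q)^-1 gives the expected number of visits to each transient state before absorption. The states, transition probabilities, and costs below are hypothetical and are not taken from either of the cited models:

# A minimal sketch of a Markov-chain life cycle cost calculation using the
# fundamental matrix of an absorbing chain. All states, probabilities, and
# costs are assumed for illustration only.
import numpy as np

states = ["standby", "operate", "maintain"]        # transient states
Q = np.array([                                     # transitions among transient states
    [0.10, 0.80, 0.05],   # from standby
    [0.20, 0.60, 0.15],   # from operate
    [0.50, 0.40, 0.05],   # from maintain
])                         # remaining probability in each row goes to "retired"
cost_per_visit = np.array([1.0, 5.0, 20.0])        # $K per period in each state (assumed)

N = np.linalg.inv(np.eye(3) - Q)                   # expected visits before retirement
start = np.array([1.0, 0.0, 0.0])                  # begin in standby
expected_lcc = start @ N @ cost_per_visit

print(f"expected life cycle cost: {expected_lcc:.1f} $K")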

Dean (1989b) proposed the existence of theoretical cost analysis and conjectured a number of mathematical topics which could be used to extend cost analysis beyond parametric cost analysis. Dean (1990a) used geodesic descent optimization (Dean, 1988a) to define and distinguish between differentiable manifolds (Boothby, 1975) which represent design-to-cost and design-for-cost respectively.

Tse (1992) derives information measures for advanced composite structures which are interpreted as production process complexity. The accuracy of predictions of various production process times is within the range of a well-calibrated, statistically based parametric model. The difference is that the information complexity can be derived directly from knowledge of the nature of the production processes. Both information theory and differential geometry were used to derive the information measures.
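As a generic illustration of an information measure computed directly from knowledge of a process (not Tse's derivation), Shannon entropy over the distribution of operations in a process is one such quantity:

# A generic Shannon entropy calculation, offered only as an illustration of an
# information measure derived from knowledge of a process; it is not the
# measure Tse derives for composite structures. The operation mix is assumed.
import math

# Hypothetical relative frequencies of distinct operations in a production process.
operation_mix = {"layup": 0.45, "cure": 0.25, "trim": 0.20, "inspect": 0.10}

entropy_bits = -sum(p * math.log2(p) for p in operation_mix.values() if p > 0)
print(f"process information measure: {entropy_bits:.2f} bits")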

Kaminsky and Haberle (1995) develop the probability distributions for the Deming models of the total cost of 100 percent inspection and zero inspection.
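The comparison underlying the Deming models can be stated simply: with k1 the cost to inspect one incoming item, k2 the cost incurred when a defective item escapes into assembly, and p the average fraction defective, zero inspection costs p*k2 per item on average and 100 percent inspection costs roughly k1, giving a break-even fraction p = k1/k2. The sketch below makes this simplified comparison for assumed values; it does not reproduce the probability distributions that Kaminsky and Haberle develop:

# A minimal sketch of the expected-cost comparison behind the Deming all-or-none
# inspection rule. Replacement and repair costs are ignored, and the numbers
# are assumed for illustration only.
def expected_cost_per_item(p, k1, k2):
    """Per-item expected cost under zero inspection and 100 percent inspection."""
    zero_inspection = p * k2   # defectives escape and incur downstream cost k2
    full_inspection = k1       # every incoming item is inspected at cost k1
    return zero_inspection, full_inspection

p, k1, k2 = 0.02, 0.50, 40.0          # assumed values
zero, full = expected_cost_per_item(p, k1, k2)
rule = "100 percent inspection" if p > k1 / k2 else "zero inspection"
print(f"zero inspection: {zero:.3f}, 100 percent inspection: {full:.3f} per item")
print(f"break-even p = k1/k2 = {k1 / k2:.4f}; rule selects {rule}")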

----------------------------------------------

References

----------------------------------------------

Bibliographies

Theoretical Cost Analysis Bibliography
Geometry of Statistics Bibliography
Least Squares Bibliography
Parametric Cost Analysis Bibliography
Project Risk Bibliography
Response Surface Methodology Bibliography
Risk Bibliography

----------------------------------------------

Societies

The International Society of Parametric Analysts

----------------------------------------------
