Software development effort estimation

Software development effort estimation is the process of predicting the most realistic use of effort required to develop or maintain software based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans, budgets, investment analyses, pricing processes and bidding rounds.


Published surveys on estimation practice suggest that expert estimation is the dominant strategy when estimating software development effort [Jørgensen, M., "A Review of Studies on Expert Estimation of Software Development Effort"].

Typically, effort estimates are over-optimistic, and there is strong over-confidence in their accuracy. The mean effort overrun appears to be about 30% and is not decreasing over time. For a review of effort estimation error surveys, see [Molokken, K., Jorgensen, M., "A review of software surveys on software effort estimation"]. However, the measurement of estimation error is itself not unproblematic; see Assessing and interpreting the accuracy of effort estimates. The strong over-confidence in the accuracy of effort estimates is illustrated by the finding that, on average, if a software professional is 90% confident or "almost sure" that a minimum-maximum interval will include the actual effort, the observed frequency of including the actual effort is only 60-70% [Jørgensen, M., Teigen, K.H., Ribu, K., "Better sure than safe? Over-confidence in judgement based software development effort prediction intervals"].

Currently, the term "effort estimate" is used to denote concepts as different as the most likely use of effort (modal value), the effort that corresponds to a 50% probability of not being exceeded (median), the planned effort, the budgeted effort, and the effort used to propose a bid or price to the client. This is believed to be unfortunate, because communication problems may occur and because the concepts serve different goals [Edwards, J.S., Moores, T.T. (1994), "A conflict between the use of estimating and planning tools in the management of information systems", European Journal of Information Systems 3(2): 139-147] [Goodwin, P. (1998), "Enhancing judgmental sales forecasting: The role of laboratory research", in Forecasting with Judgment, G. Wright and P. Goodwin (eds.), New York, John Wiley & Sons: 91-112].


Software researchers and practitioners have been addressing the problems of effort estimation for software development projects since at least the 1960s; see, e.g., work by Farr [Farr, L., Nanus, B., "Factors that affect the cost of computer programming"] and Nelson [Nelson, E. A. (1966), Management Handbook for the Estimation of Computer Programming Costs, AD-A648750, Systems Development Corp.].

Most of the research has focused on the construction of formal software effort estimation models. The early models were typically based on regression analysis or mathematically derived from theories from other domains. Since then a large number of model building approaches have been evaluated, such as approaches founded on case-based reasoning, classification and regression trees, simulation, neural networks, Bayesian statistics, lexical analysis of requirement specifications, genetic programming, linear programming, economic production models, soft computing, fuzzy logic modeling, statistical bootstrapping, and combinations of one or more of these models. Perhaps the most common estimation products today, e.g., the formal estimation models COCOMO and SLIM, have their basis in estimation research conducted in the 1970s and 1980s. The estimation approaches based on functionality-based size measures, e.g., function points, are also based on research conducted in the 1970s and 1980s, but re-appeared with modified size measures under different labels, such as "use case points" [Anda, B., Angelvik, E., Ribu, K., "Improving Estimation Practices by Applying Use Case Models"], in the 1990s and 2000s.
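
As an illustration of such formula-based models, Basic COCOMO estimates effort as a power function of program size. A minimal sketch using the coefficients published by Boehm (the project size below is invented for illustration):

```python
# Basic COCOMO effort model: effort (person-months) = a * KLOC**b,
# with (a, b) depending on the project class (coefficients from Boehm, 1981).
COCOMO_COEFFS = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def basic_cocomo_effort(kloc: float, project_class: str = "organic") -> float:
    """Return estimated effort in person-months for a size of `kloc` thousand lines of code."""
    a, b = COCOMO_COEFFS[project_class]
    return a * kloc ** b

# An invented 32 KLOC organic project:
effort = basic_cocomo_effort(32.0)  # roughly 91 person-months
```

Note that such coefficients were calibrated on historical project data; as discussed below, models not re-calibrated to the estimating organization's own context may be very inaccurate.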

Estimation approaches

There are many ways of categorizing estimation approaches; see for example [Briand, L. C., Wieczorek, I. (2002), "Resource estimation in software engineering", Encyclopedia of Software Engineering, J. J. Marcinak (ed.), New York, John Wiley & Sons: 1160-1196] [Jørgensen, M., Shepperd, M., "A Systematic Review of Software Development Cost Estimation Studies"]. The top-level categories are the following:

* Expert estimation: The quantification step, i.e., the step where the estimate is produced, is based on judgmental processes.
* Formal estimation model: The quantification step is based on mechanical processes, e.g., the use of a formula derived from historical data.
* Combination-based estimation: The quantification step is based on a judgmental or mechanical combination of estimates from different sources.

Below are examples of estimation approaches within each category.

Selection of estimation approach

The evidence on differences in estimation accuracy of different estimation approaches and models suggests that there is no "best approach" and that the relative accuracy of one approach or model in comparison to another depends strongly on the context [Shepperd, M., Kadoda, G., "Comparing software prediction techniques using simulation"]. This implies that different organizations benefit from different estimation approaches. Findings that may support the selection of an estimation approach based on its expected accuracy, summarized in [Jørgensen, M., "Estimation of Software Development Work Effort: Evidence on Expert Judgment and Formal Models"], include:

* Expert estimation is on average at least as accurate as model-based effort estimation. In particular, situations with unstable relationships and information of high importance not included in the model may suggest use of expert estimation. This assumes, of course, that experts with relevant experience are available.

* Formal estimation models not tailored to a particular organization's own context may be very inaccurate. Use of the organization's own historical data is consequently crucial if one cannot be sure that the estimation model's core relationships (e.g., formula parameters) are based on similar project contexts.

* Formal estimation models may be particularly useful in situations where the model is tailored to the organization's context (either through use of its own historical data or because the model is derived from similar projects and contexts), and/or where it is likely that the experts' estimates will be subject to a strong degree of wishful thinking.

The most robust finding, in many forecasting domains, is that a combination of estimates from independent sources, preferably applying different approaches, will on average improve estimation accuracy [Winkler, R.L., "Combining forecasts: A philosophical basis and some current issues"] [Blattberg, R.C., Hoch, S.J., "Database Models and Managerial Intuition: 50% Model + 50% Manager"] [Jørgensen, M., "Estimation of Software Development Work Effort: Evidence on Expert Judgment and Formal Models"].
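
The combination principle can be sketched as a (weighted) average of estimates from independent sources; the estimates and weights below are invented for illustration:

```python
def combine_estimates(estimates, weights=None):
    """Combine effort estimates from independent sources by (weighted) averaging.

    With no weights given, each source counts equally.
    """
    if weights is None:
        weights = [1.0 / len(estimates)] * len(estimates)
    return sum(w * e for w, e in zip(weights, estimates))

# Equal weighting of a model-based and an expert estimate, in the spirit of
# "50% model + 50% manager":
combined = combine_estimates([1200.0, 900.0])  # 1050.0 work-hours
```

The mechanical average is only one option; the combination step can also be judgmental, e.g., an expert reviewing the individual estimates before settling on a final figure.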

In addition, other factors, such as the ease of understanding and communicating the results of an approach, the ease of use of an approach, and the cost of introducing an approach, should be considered in the selection process.

Uncertainty assessment approaches

The uncertainty of an effort estimate can be described through a prediction interval (PI). An effort PI is based on a stated certainty level and contains a minimum and a maximum effort value. For example, a project leader may estimate that the most likely effort of a project is 1000 work-hours and that it is 90% certain that the actual effort will be between 500 and 2000 work-hours. The interval [500, 2000] work-hours is then the 90% PI of the effort estimate of 1000 work-hours. Frequently, other terms are used instead of PI, e.g., prediction bounds, prediction limits, interval prediction, prediction region and, unfortunately, confidence interval. An important difference between a confidence interval and a PI is that the PI refers to the uncertainty of an estimate, while a confidence interval usually refers to the uncertainty associated with the parameters of an estimation model or distribution, e.g., the uncertainty of the mean value of a distribution of effort values. The confidence level of a PI refers to the expected (or subjective) probability that the real value is within the predicted interval [Armstrong, J. S., Principles of Forecasting: A Handbook for Researchers and Practitioners].

There are several possible approaches to calculating effort PIs, e.g., formal approaches based on regression or bootstrapping [Angelis, L., Stamelos, I., "A simulation tool for efficient analogy based cost estimation"], formal or judgmental approaches based on the distribution of previous estimation error [Jørgensen, M., Sjøberg, D.I.K., "An effort prediction interval approach based on the empirical distribution of previous estimation accuracy"], and pure expert judgment of minimum-maximum effort for a given level of confidence. Assessments based on the distribution of previous estimation error have been found to lead systematically to more realistic uncertainty assessments than the traditional minimum-maximum effort intervals in several studies; see for example [Jørgensen, M., "Realism in assessment of effort estimation uncertainty: It matters how you ask"].
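
The empirical-distribution approach can be sketched as follows: collect the actual/estimated effort ratios of previous projects, scale the new point estimate by them, and take empirical quantiles of the scaled values as the interval bounds. A minimal sketch; the historical ratios below are invented for illustration:

```python
def empirical_pi(estimate, past_ratios, confidence=0.90):
    """Effort prediction interval from the empirical distribution of
    previous estimation accuracy (actual/estimated effort ratios).

    Returns (minimum, maximum) effort at the given confidence level,
    using nearest-rank empirical quantiles of the scaled values.
    """
    scaled = sorted(estimate * r for r in past_ratios)
    tail = (1.0 - confidence) / 2.0
    lo = scaled[int(round(tail * (len(scaled) - 1)))]
    hi = scaled[int(round((1.0 - tail) * (len(scaled) - 1)))]
    return lo, hi

# Invented actual/estimated ratios from ten past projects
# (e.g., 1.3 means the project overran its estimate by 30%):
ratios = [0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.6, 1.9, 2.2]
low, high = empirical_pi(1000.0, ratios, confidence=0.80)  # (900.0, 1900.0)
```

With realistic (overrun-heavy) error histories, the resulting intervals are typically asymmetric around the point estimate, which is one reason they tend to be more realistic than judgmental minimum-maximum guesses.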

Assessing and interpreting the accuracy of effort estimates

The most common measure of average estimation accuracy is the MMRE (Mean Magnitude of Relative Error), where the MRE of a single estimate is defined as:

MRE = |actual effort − estimated effort| / actual effort

This measure has been criticized [Shepperd, M., Cartwright, M., Kadoda, G., "On Building Prediction Systems for Software Engineers"] [Kitchenham, B., Pickard, L.M., MacDonell, S.G., Shepperd, M., "What accuracy statistics really measure"] [Foss, T., Stensrud, E., Kitchenham, B., Myrtveit, I., "A Simulation Study of the Model Evaluation Criterion MMRE", IEEE], and there are several alternative measures, such as more symmetric measures [Miyazaki, Y., Terakado, M., Ozaki, K., Nozaki, H., "Robust regression for developing software estimation models"], the Weighted Mean of Quartiles of relative errors (WMQ) [Lo, B., Gao, X., "Assessing Software Cost Estimation Models: criteria for accuracy, consistency and regression"] and the Mean Variation from Estimate (MVFE) [Hughes, R.T., Cunliffe, A., Young-Martos, F., "Evaluating software development effort model-building techniques for application in a real-time telecommunications environment"].

A high estimation error cannot automatically be interpreted as an indicator of low estimation ability. Alternative, competing or complementary, reasons include low cost control in the project, high complexity of the development work, and more delivered functionality than originally estimated. A framework for improved use and interpretation of estimation error measurement is included in [Grimstad, S., Jørgensen, M., "A Framework for the Analysis of Software Cost Estimation Accuracy"].

Psychological issues related to effort estimation

There are many psychological factors that potentially explain the strong tendency towards over-optimistic effort estimates and that need to be dealt with to increase the accuracy of effort estimates. These factors are essential even when using formal estimation models, because much of the input to these models is judgment-based. Factors that have been demonstrated to be important are wishful thinking, anchoring, the planning fallacy and cognitive dissonance. A discussion of these and other factors can be found in work by Jørgensen and Grimstad [Jørgensen, M., Grimstad, S., "How to Avoid Impact from Irrelevant and Misleading Information When Estimating Software Development Effort"].

See also

* Parametric estimating
* Estimation in software engineering
* Wideband Delphi
* Project management
* Planning poker
* Cost overrun
* Function points

External links

* Special Interest Group on Software Effort Estimation
* General forecasting principles
* Estimation resources
* Downloadable research papers on effort estimation
* Mike Cohn's article "Estimating With Use Case Points" from Methods & Tools
* Resources on software estimation from Steve McConnell

