Flatten the cone of uncertainty and reduce the cost of change

The first ninety percent of the task takes ninety percent of the time, and the last ten percent takes the other ninety percent.

The quote aptly summarizes the fate of many software projects, especially large ones. Most software projects begin with meticulous planning, yet they fail to meet their objectives. Why is schedule slippage so commonplace in software projects? The answer probably lies in the way most software projects are estimated. We discussed in previous articles how optimism bias and strategic misrepresentation prevent us from producing good estimates. But even in an ideal-world scenario, it is almost impossible to get a realistic estimate. Note that the context here is new feature or product development, or heavily customized COTS implementations. For maintenance-related software projects (which, going by the PMBOK® definition, should in fact be termed operations), there is a level of certainty because of the repetitive nature of the work.

Coming back to our point, limited understanding of the requirements at the idea stage results in estimates with a high margin of error. Steve McConnell, in Software Estimation: Demystifying the Black Art, calls this the cone of uncertainty: at the initial concept stage, estimates can be off by a factor of four on the high side (4x) or the low side (0.25x).
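The narrowing of the cone can be sketched with a short script. The phase names and multipliers below are approximations of the values McConnell tabulates, quoted from memory for illustration, and the 1,000-hour single-point estimate is a hypothetical figure:

```python
# Illustrative sketch of the cone of uncertainty. The multipliers are
# approximate values in the spirit of McConnell's table, not exact
# figures from the book.
CONE = [
    ("Initial concept",             0.25, 4.00),
    ("Approved product definition", 0.50, 2.00),
    ("Requirements complete",       0.67, 1.50),
    ("UI design complete",          0.80, 1.25),
    ("Detailed design complete",    0.90, 1.10),
]

def estimate_range(nominal_effort, phase_low, phase_high):
    """Plausible (low, high) effort range for a phase's multipliers."""
    return nominal_effort * phase_low, nominal_effort * phase_high

nominal = 1000  # hypothetical single-point estimate, in person-hours
for phase, lo, hi in CONE:
    low, high = estimate_range(nominal, lo, hi)
    print(f"{phase:30s} {low:7.0f} - {high:7.0f} person-hours")
```

The point the numbers make is stark: at the concept stage the same "1,000-hour" project could plausibly take anywhere from 250 to 4,000 hours, and only later phases shrink that band to something commitments can be based on.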

Our understanding of the requirements improves as we move through the stages, from initial concept through to testing and deployment, thereby narrowing the cone. Essentially, we learn as the project develops, which puts us in a better position to estimate accurately.

The typical waterfall model, which requires us to go through the distinct phases of detailed requirements gathering, development and so on one by one, poses a problem here: it means we must do most of our learning during the initial period of idea formulation, requirements gathering and design. Learnings and discoveries later on may actually increase our effort, because of the cost of change model, which says that the later we are in the software development life cycle, the more a change costs. So as we progress, our understanding of what to build improves, but we cannot always act on it, given the cost of change at that point.
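The cost of change model can be sketched the same way. The per-phase multipliers below are the commonly cited rough figures (in the spirit of Boehm's curve), used here only to show the shape of the curve, not as measured data:

```python
# Illustrative cost-of-change curve. The multipliers are commonly cited
# rough figures, not measurements; they only convey the curve's shape.
COST_MULTIPLIER = {
    "requirements": 1,
    "design":       5,
    "coding":      10,
    "testing":     20,
    "production": 100,
}

def cost_of_change(base_cost, phase):
    """Cost of one change, given the phase in which it is discovered."""
    return base_cost * COST_MULTIPLIER[phase]

# A change that would cost 2 hours at requirements time...
for phase in COST_MULTIPLIER:
    print(f"{phase:12s}: {cost_of_change(2, phase):5d} hours")
```

This is the bind the waterfall model creates: the discoveries arrive late, exactly where each change is most expensive to make.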

Given that context, the agile model of working on smaller chunks and in shorter cycles makes the learning cycle effective across the entire project, as it helps refine both scope and estimates on an ongoing basis. Of course, it is not a silver bullet and comes with its own share of challenges, e.g. splitting a large scope into smaller chunks is not always easy.

Overall, though, the agile way of software development does seem suitable for projects with a high level of uncertainty in the initial stages, as it offers the potential to flatten the cone and reduce the cost of change.


2 Comments

  1. The lower half of the cone of uncertainty is nonsensical. It shows that uncertainty grows toward 1 for anything that begins the project at less than one. The chart also depicts success as reaching the end of the project with non-zero uncertainty. I understand the metaphor of the cone, but the math, as depicted, doesn’t add up. It would be better if only the upper line of the cone were there, to depict how uncertainty is nonlinear over the course of the project, converging to zero at project end. Uncertainty can still be measured against a baseline of 1x, and there can still be deliverables/tasks of less than one.
