Project failure means not meeting the planned schedule, effort and quality goals. This definition matters because unless the project plan (including RAID) has been reviewed, approved and baselined for feasibility, it is not a sound basis against which to evaluate failure. Reviewing, approving and baselining the plan (including RAID) also covers reviewing the estimates, including contingencies. The point is to do everything possible, based on the information available, to ensure the plan is sound and grounded in reality.
- Most of the time it is possible to plan in detail only for a portion of the entire project; the rest of the plan stays sketchy until the initial portion generates the information needed to plan subsequent phases. This is the nature of the game. One must plan what one can and raise RAID items for the rest. As the project progresses, the RAID items get resolved and the rest of the plan firms up. In such cases, one tries to design the project to generate maximum information as early as possible (a POC, for example) and agrees on commercials that take less risk until this information is generated. Only after that are the bigger commitments made, and even then the commitments may include risk margins and vary in terms of commercial models.
- Sometimes, despite every effort by the PM, the information inputs to planning are of such poor quality that the project is doomed to fail from the beginning. We will ignore this case, since in real life one never commits to executing a project until adequate information of adequate quality is available to make a good plan that can generate the business outcomes and benefits the customer needs.
- Every project has a critical path determining its duration, and the other paths have some tolerance (float). Similarly, the project's resources have some availability amounting to a base targeted effort, plus some tolerance in that availability (i.e. contingency). Likewise, quality goals for individual deliverables have some tolerance within the overall quality goals for the project.
- As each week passes, the project tracking process tracks each planned task to completion in terms of duration, effort and quality. Each week this yields project-level measures of duration, effort and quality achieved so far, plus the remaining duration, the remaining effort and the overall gap to the target quality.
- Every week the project manager and steering board check whether the remaining scope of work can be delivered within the remaining effort such that the overall quality target can still be met (a minimal sketch of this weekly check follows this list). This involves looking at speeding up tasks on the critical path by deploying more resources; reducing effort by leveraging automation, a better design that needs less effort, or descoping non-critical items in consultation with the customer; and improving quality towards the target by deploying more resources to fix and test defects. Sometimes the plan changes without milestones changing, in which case it need not be communicated to the client. Sometimes intermediate, non-critical milestone dates change in order to hold subsequent critical milestone dates; the latter may need discussion with the client. I have generally found customers quite understanding in such cases as long as your explanation is sound and based on reality.
- Moving in the above manner, one completes the project to meet schedule, effort and quality goals, or else it sometimes becomes necessary to change critical milestone dates (or split the scope and leave some work for a later milestone), increase cost, or relax the quality criteria for acceptance. While these may look like failure, depending on the context they may not be viewed as such, and a lot depends on how they are presented. IMHO true failure is when the project is canned, and/or overshoots the client's budget, and/or significantly misses the internal GM target.
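To make the weekly check above concrete, here is a minimal sketch of the kind of arithmetic involved. The field names, figures and wording of the flags are my own illustrative assumptions, not a standard formula or tool; a real project would pull these numbers from its tracking system.

```python
# A minimal sketch of the weekly "can we still make it?" check described above.
# All field names and numbers are illustrative assumptions, not benchmarks.
from dataclasses import dataclass

@dataclass
class WeeklyStatus:
    remaining_scope_effort: float   # person-days still needed for the remaining scope
    remaining_capacity: float       # person-days the team can supply before the milestone
    remaining_critical_path: float  # calendar days left on the critical path
    days_to_milestone: float        # calendar days until the committed milestone
    open_defect_fix_effort: float   # person-days needed to close the quality gap

def weekly_check(s: WeeklyStatus) -> list[str]:
    """Return the variance flags the PM and steering board would look at this week."""
    flags = []
    effort_needed = s.remaining_scope_effort + s.open_defect_fix_effort
    if effort_needed > s.remaining_capacity:
        flags.append(f"effort gap of {effort_needed - s.remaining_capacity:.0f} person-days "
                     "(add resources, automate, or descope non-critical items)")
    if s.remaining_critical_path > s.days_to_milestone:
        flags.append(f"critical path overruns the milestone by "
                     f"{s.remaining_critical_path - s.days_to_milestone:.0f} days "
                     "(crash critical-path tasks or discuss dates with the customer)")
    return flags or ["on track this week"]

print(weekly_check(WeeklyStatus(120, 100, 35, 30, 15)))
```

The point of the sketch is only that the same three quantities (remaining effort, remaining duration on the critical path, and the quality gap expressed as fix effort) are re-examined together every week, because a corrective action on one usually moves the other two.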
From experience, schedule and effort variances caused by wrong estimation are easier to recover from in commercial IT projects: mostly it requires deploying more resources and more skilled resources respectively. When this is possible, most PMs and delivery heads do it; otherwise they negotiate a delay with the customer (essentially declaring failure). However, in my experience, quality variance is much more nuanced to figure out, and its influence on the other two variances is also very significant for drawing the right conclusions.
- If the quality variance at a particular stage is high, most of the time it is possible to spend more effort to reduce it, and depending on the skill of the resources deployed, the impact on effort and schedule variance will vary. Essentially, the number of defects at each stage determines the impact on effort (and potentially schedule).
- However, if the defect trend and outlook across the life cycle show the number of defects increasing with each stage, that indicates something is deeply wrong at some earlier stage (see the sketch after this list). Sometimes it is possible to quickly identify the root cause in that earlier stage and complete the rework through to the current stage without significantly impacting effort and schedule. At other times this is not possible, and failure is the right prediction. Sometimes no root cause can be identified in an earlier stage, and in that case projecting the additional effort needed to meet the quality targets might itself trigger the failure prediction. Of course, depending on the commercial construct underlying the delivery, the way the failure is dealt with may vary.
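As a minimal sketch of the defect-trend reasoning above: the stage names, defect counts and per-defect fix effort below are illustrative assumptions, not benchmarks, and the "rising trend" test is just the simplest possible version of the check.

```python
# A minimal sketch: project rework effort from stage-wise defect counts and
# flag a stage-on-stage rising trend. All figures are illustrative assumptions.
stage_defects = {"requirements": 4, "design": 9, "build": 21, "system test": 35}
avg_fix_effort_days = 0.5   # assumed average person-days to fix and retest one defect

rework_effort = sum(stage_defects.values()) * avg_fix_effort_days
counts = list(stage_defects.values())
rising_trend = all(later > earlier for earlier, later in zip(counts, counts[1:]))

print(f"projected rework effort: {rework_effort:.1f} person-days")
if rising_trend:
    print("defect count rises every stage -> look for a root cause in an earlier "
          "stage before projecting the remaining effort")
```

The projected rework effort feeds the effort and schedule variances discussed earlier, while the rising trend is the signal that a root-cause search in an earlier stage is needed before any projection can be trusted.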