Are Requirements Really This Messed Up?

This post originally appeared on August 2, 2010 on BetterProjects.net.


Call me skeptical, but I find it increasingly difficult to believe that ‘bad requirements’ are at the heart of so many failed projects. Over the last couple of decades, numerous studies have declared poor requirements to be the root cause of most software implementations failing to return value. Depending upon the source, the percentage of projects that failed due to poor requirements is anywhere from 13% to 40%.

If so many publications out there, from the Standish Group Chaos Report to Gartner to CIO, are saying the same thing, why do I have such a difficult time believing them? Why do I read articles like those linked in the previous sentence and still struggle to accept their conclusions? Are these not research companies who are paid to study these kinds of issues and report on them to you and me? Do they not stake their reputations on helping companies find and fix the root causes of their project problems?

Digging Into The Studies

Let’s use the Standish report as an example. Go grab a copy and read it. At the beginning it sounds wonderfully meticulous, with estimates based on the number of projects that failed, the size of those failures, and the unreturned value on investment. These numbers all seem to make a lot of sense and paint a very grim picture for those of us who work on projects.

Where the analysis in the report begins to turn away from what I see as reality is when you hit the word ‘survey’. How did Standish come up with their results? They asked people to tell them. Lots of people, mind you, but still people. Surveys can tell you a lot, but at best they give you a directional guide to the real problems. Surveys are full of issues, one of the biggest being that respondents are ultimately not very honest in their answers. It’s called self-reporting bias.

Standish surveyed CIOs and other IT leaders and asked why projects failed. I feel the need to point out a few different failures in their approach. First, who says IT leaders actually know and understand the root causes? How many times have you heard a CIO, who was not involved in the day-to-day routine of a project, stand up and talk about the project and get just about every salient point wrong? If you’re like me, you’ve lost count. What makes anyone think a CIO, no matter how competent, knows the root cause of every project under their domain? I’m not saying that they shouldn’t know, only that they are people, and people often don’t reach the same conclusions as those who are active participants in the projects. Why did Standish not ask a less homogeneous group of individuals for their insight into why projects fail? Would that not have produced a less biased means of determining the causes of failure?

The next failure in the study, as I see it, is in the wording of the reported problem… requirements. What exactly does this mean? Are these business requirements or technical requirements? What about implementation requirements? Just calling something a ‘requirement’ and pointing a finger at it does not a villain make.

The Standish results state that only 2 of the top 10 reasons in the Project Success Profile had anything to do with staffing, and those reasons made up less than 10% of all the reasons reported for success. In the Project Impaired Profile, not a single one of the 10 items pointed to a staffing issue other than a lack of sufficient resources. It is difficult for me to believe that labor is the largest expenditure on most projects and yet almost no blame ever rests on the shoulders of those who are actively involved in the project. Very few executives would be willing to point a finger and say, “There’s the reason we failed; that person in cubicle C3-476.” While a single person generally cannot torpedo an entire project, a group of people most definitely can. Most people would never do such damage intentionally, but they can do it unintentionally or simply through carelessness. It seems strange to me, and it makes the results of the study look flawed, that this wasn’t even a reason in the top 10.

(That last statement feels a bit self-damning, given that I spend more than 90% of my time as a project team member. Believe me, I feel the pain of my own project failures quite keenly, but more importantly, I learn from them and rarely make the same mistake twice.)

My last issue with reports like Standish is that they seemingly never take the time to do much more than talk with people. I am an analyst by trade. It is my job to spend days poking holes in things and getting to the root cause of problems and opportunities. Why did Standish not do the same? Why did they not do objective, root-cause analysis instead of just taking the word of a group of industry insiders?

Contrast this with the work done by researchers like Jim Collins in his books Good to Great and Built to Last. Here is a guy who did more than just survey some people to find a convenient answer. He spent years digging into the details to really understand what makes a success and what makes a failure. What I want to know is why Standish and others like them only skimmed the surface with their analysis, never going deeper than a few inches into a problem that seems as deep as the Mariana Trench. Why was there no follow-up study to answer the questions I’ve put forth here?

Conclusions

If you’ve read this far (and to those who have, I apologize for sounding overly judgmental at times), why do we continue to trust these publications when their methodology seems so flawed? I have lost count of the number of times I have heard someone quote one of these studies as a reason to ‘fix’ our requirements. It is not possible to fix something if you are unable to determine the cause of the failure. It’s like saying that my car won’t start, so there must be something wrong with the engine. Yes, that’s likely true, but the engine is a large and complex piece of machinery, just like a project, and without deep study it’s not always easy to find the real culprit.

But most importantly, what do we need to do to fix the problem of project failure? Many studies have suggested different project methodologies or changes in team structure as the answer; sometimes we hear that better or different training is required. Here again we get answers, but no specific guidance on why these solutions will fix our failing projects. If we’re ever going to get better at what we do, we need a better study of the problem to provide a better set of answers.

(If you happen to work for one of the organizations I named above, please don’t take offense at my statements, as I want you to prove me wrong. In the future, I ask that you not only go deeper in your analysis, but also explain your methodology in more detail in the document itself. If you think no one cares about your methodology, you’re wrong. Make it an appendix if you have to, but show me the data and show me the care you put into designing a study that works around the flaws I have pointed out. I don’t care how good your reputation is; I want to know what you did and how. The US Food and Drug Administration doesn’t let drug companies off without showing their data, and I will hold you to at least the same standard.)