I recently wrote on “Analysis Paralysis” in a software development context. I’ve since found some other good, easy-to-digest information on the topic that I thought I’d pass on.
In his article, *Defeating Procrastination: Analysis Paralysis*, Chris Garrett advises that breaking out of analysis paralysis doesn’t require you to lower your standards or aim lower than you otherwise would. It just requires some prioritization.
What you have to do is identify the things you have to get right from the get-go, the items that genuinely need analysis, and what can be fixed later.
While it is perfectly natural to want to spend time thinking about a project, especially one with an element of risk, there comes a point where any more thinking is counter-productive and you need to start making some progress.
- What do you absolutely have to do for the project to be a success?
- What tasks can absolutely not be put off until later?
- What are the most painful items to change post-launch?
- What could realistically go wrong?
He goes on to say,
> Planning is good. Failure to plan is planning to fail. But too much can be as crippling as not enough.
One analogy comes quickly to mind. Have you ever over-studied for a big test? I know I have. You studied enough to get a good grasp of the concepts the test would cover, but once you reached that point you found lots of little alternative paths and permutations of how the questions might be framed. You wanted to be so sure that you were prepared for any possible surprise on test day that you allowed yourself to get bogged down in the minutiae.
It did occur to me that comparing analysis to a single big test may strike some as too “waterfall” as it applies to software engineering. If you’re among that number, then consider the analogy as overstudying for every weekly pop quiz during the semester. In other words, I think it still applies under RAD/agile methods of software development.
Either way you look at it, I’ve found that there is certainly a law of diminishing returns when it comes to preparing for a test, and the same applies to analysis work in software development.
When you overanalyze, you set yourself at risk of missing the forest for the trees.