April 24, 2014

Project Management & Measurement gamed

Image: Measurement by spacesuitcatalyst on deviantart.com

In a recent article on his blog, Dan Markovitz offered this statement:

“One problem with stretch goals, I believe, is that they focus on outcome metrics, and can therefore be gamed.”

That got me thinking about how we analyze and measure progress on business projects.  All too often, I have seen project leads gaming the metrics as if their ability to adjust the numbers were the real purpose of their jobs.  As I indicated in my comments on Dan’s article, “How do we make these numbers look better” is an operational question, not a spreadsheet exercise.

Yet, project management tends to be all about outcome metrics.  Tracking costs vs. plan, Earned Value, Cost and Schedule Performance Indices, consumed slack – all of these describe what already happened.  Granted, there is an assumption inherent in those practices that the future can be predicted by understanding the past; however, that approach also seems to accept error as a given – especially when we need a stack of charts, graphs and variance analyses to tell us that we had a problem some number of days, or weeks, ago.
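For anyone less familiar with those terms, here is a minimal sketch – in Python, with purely made-up numbers – of how the standard earned-value figures are computed (CPI = EV / AC, SPI = EV / PV).  Every quantity in it describes work already performed or money already spent, which is exactly the point: these are lagging, outcome-side measures.

```python
# A minimal sketch of the standard earned-value calculations.
# All figures are illustrative; none come from a real project.

planned_value = 100_000  # PV: budgeted cost of the work scheduled to date
earned_value = 80_000    # EV: budgeted cost of the work actually completed
actual_cost = 95_000     # AC: what has actually been spent to date

cpi = earned_value / actual_cost     # cost performance index (< 1.0 means over budget)
spi = earned_value / planned_value   # schedule performance index (< 1.0 means behind schedule)
cost_variance = earned_value - actual_cost        # CV: negative means over budget
schedule_variance = earned_value - planned_value  # SV: negative means behind schedule

print(f"CPI = {cpi:.2f}, SPI = {spi:.2f}")
print(f"CV = {cost_variance:+,}, SV = {schedule_variance:+,}")
```

Numbers like these are useful for confirming that something went wrong; they say nothing about what is going wrong on the ground right now.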

Somehow, that doesn’t seem good enough.

I’ve seen far too many managers who are completely distant from the day-to-day operations of the projects and processes that ought to be occurring right under their noses and who, unfortunately, have become dependent upon analyst-manipulated reports to convey information.  That reality only underscores the importance of applying concepts such as leader standard work to project management and controls.  Simply put – leaders need not just to understand the outcomes and results of their team’s efforts, but to be inherently familiar with the mechanisms and processes by which that work gets done.

When those processes are not measured, analyzed and understood – all that’s left is a pile of reports measuring the outcomes.  When those metrics don’t show things as expected, the inputs get interpreted loosely in order to justify massaging the numbers before they are passed on to the next level of review.  “The data doesn’t reflect reality” or “we need to make an adjustment to the numbers” is heard all too often.  If the data can be so easily, and subjectively, corrected, then its method of collection can be corrected just as easily – so that good information is passed along instead of manually tweaked information that produces nothing more than watermelon reporting (the phenomenon by which red projects get greener the higher up the reporting goes).

The best way out of this?  Understanding how the metrics are compiled is a start.  The better solution, however, is to be involved.  Know who is working on the team, and why, and how they work, and what they are working on.  Engage in the human elements and be aware not only of what is going on, but of what should be going on.  This places a premium on good planning and strong servant leadership.

The worst project & program managers I’ve ever worked with often never even looked at the reports that were provided to them.  They simply didn’t understand the information and/or believed that force of personality was sufficient to effect positive outcomes.  Some of the best I’ve worked with didn’t read the reports either – because there was nothing in all that data that they did not already know.
