“In experimental science, experimenter’s bias is bias towards a result expected by the human experimenter.” – Experimenter’s bias, Wikipedia.
Almost every management tool out there attempts to predict when a feature / user story / version / release will be ready. FogBugz took it to the extreme with its Evidence-Based Scheduling; others, such as VersionOne and Mingle, use pretty graphs or other visuals to achieve the same.
“Computer, when will it be done?!”
I think there is an inherent paradox in such predictions – they depend on the developer’s honesty. You must ask yourself: what is the developer’s incentive to fill in the information as it really is? Why should she or he be honest? The way I see it, developers stay in their safe zone, pricing things at a high enough granularity (around 8h or more per task) and filling in the actual or remaining hours so they won’t look bad or good. They just want to be right. Why? So they won’t appear too optimistic or too pessimistic to the prediction system their boss is using to plan the next few releases. It actually makes sense: the prediction system caused the results to behave as expected, by driving developers to “safely” play with the numbers. The actual, somehow, always equals the estimate! How nice!
The moment you tell your developers that their estimated and actual hours are being used directly to predict the future, don’t be surprised when it becomes the future they want you to see. This is why I’m not convinced about using Actual Hours to predict readiness or to judge whether a teammate is overly optimistic or pessimistic.
As a team leader, you should care that your teammates learn to estimate better and allow themselves to be wrong, without everyone seeing it on a prediction tool. They need to make mistakes and get personal feedback. People over tools!
Why should you still use Actual Hours then?
More on that to follow…