> The problem is, in many software companies, none of the tasks look alike. So your past data is going to be somewhat dirty, particularly at team level
Tasks don't really give you anything, though. I think if it were measured in terms of features, that would work a lot better.
And with a Monte Carlo simulation, you get a distribution of how long a randomly chosen feature takes in a given period. So maybe you can't perfectly predict the next feature, but over a year, on average, your predictions should tend toward good accuracy.
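Roughly like this, as a minimal Python sketch: resample your historical cycle times to forecast a batch of features (the numbers here are made up; in practice you'd pull them from your ticket history):

    import random

    # Historical cycle times in days for completed features -- made-up data.
    history = [3, 5, 8, 2, 13, 6, 4, 9, 21, 7, 5, 11]

    def forecast(n_features, trials=10_000):
        """Resample past cycle times to forecast the total for n_features."""
        totals = sorted(
            sum(random.choice(history) for _ in range(n_features))
            for _ in range(trials)
        )
        # Report the 50th and 85th percentile of the simulated totals.
        return totals[trials // 2], totals[int(trials * 0.85)]

    p50, p85 = forecast(10)
    print(f"10 features: ~{p50} days (50% conf), ~{p85} days (85% conf)")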
And you could take it further: train a model on past feature requests and the time each one actually took, then run inference on any new request. Maybe let devs add some labels too; it's not exactly estimating, but they could note whether they think something is "complex", "ambiguous", "straightforward", etc.
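A rough sketch of that idea, assuming scikit-learn and folding the dev labels straight into the request text so one vectorizer handles both (requests and times are invented):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    # Hypothetical training data: past feature requests with dev labels
    # in brackets, and the days each actually took.
    requests = [
        "add csv export to reports [straightforward]",
        "migrate auth to oauth2 [complex]",
        "rework billing edge cases [ambiguous] [complex]",
        "add dark mode toggle [straightforward]",
    ]
    days_taken = [2.0, 15.0, 22.0, 3.0]

    # Text features -> regression on days taken.
    model = make_pipeline(TfidfVectorizer(), Ridge())
    model.fit(requests, days_taken)

    print(model.predict(["add pdf export to reports [straightforward]"]))

With real volumes of tickets you'd want proper cross-validation before trusting the predictions, but the shape of the approach is the same.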
Monte Carlo here is just a way of adding together estimate distributions without doing any of the fancy math. The variance of the resulting compound estimate strongly depends on the variance of the inputs.
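To make that concrete, here's a sketch: sum per-task triangular (low, likely, high) estimates by sampling, once with tight inputs and once with loose ones, and watch the compound spread blow up (all numbers invented):

    import random

    def compound(tasks, trials=10_000):
        """Sum per-task triangular estimates by sampling; return (p50, p85)."""
        totals = sorted(
            sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
            for _ in range(trials)
        )
        return totals[trials // 2], totals[int(trials * 0.85)]

    # Ten tasks, same most-likely values, different input variance.
    narrow = [(4, 5, 6)] * 10    # tight (low, likely, high) estimates
    wide   = [(1, 5, 20)] * 10   # loose estimates with the same modes

    print("narrow:", compound(narrow))
    print("wide:  ", compound(wide))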
Feature-based estimation seems too coarse to me. Not only is no feature ever quite like the others once you dig into it, they're also interconnected.
I'm sure there'd be ways to do it.