
Yes, exactly! When you ask someone for their estimate, you're getting a quote from them.

When you actually want to predict how long something will take, you rely on historical data and build a forecast from it. It never involves asking somebody for their best guess. That's how nearly everything else we predict is handled, so why is software done differently?

Actually, I've seen some of these ideas discussed in Kanban circles, like tracking lead time and cycle time and then using Monte Carlo simulations to forecast, but I've never actually seen them in practice.
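For what it's worth, the Kanban-style Monte Carlo forecast is simple enough to sketch in a few lines. This is a toy version with made-up throughput numbers: resample historical weekly throughput until a hypothetical backlog is cleared, repeat many times, and read off percentiles instead of a single-point guess.

```python
import random

# Hypothetical historical data: items completed per week, from past board history.
weekly_throughput = [3, 5, 2, 4, 6, 3, 4, 5, 2, 4]

def forecast_weeks(backlog_size, history, trials=10_000, seed=42):
    """Monte Carlo forecast: resample past weekly throughput until the
    backlog is cleared, and record how many weeks each trial took."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        done, weeks = 0, 0
        while done < backlog_size:
            done += rng.choice(history)
            weeks += 1
        results.append(weeks)
    results.sort()
    # Percentiles give hedged answers ("85% chance we finish within N weeks")
    # rather than one number that's wrong most of the time.
    return {p: results[int(trials * p / 100)] for p in (50, 85, 95)}

print(forecast_weeks(30, weekly_throughput))
```

The output is a date range with confidence levels attached, which is exactly what a quote-style estimate can't give you.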



Didn't Joel Spolsky implement it all the way back in the stone age... er, 2007[0]?

The idea is old. The problem is that in many software companies, no two tasks look alike, so your past data is going to be somewhat dirty, particularly at the team level. Perhaps as an individual, intimately familiar with the problems you encounter and able to introspect your own thought process, you or I could learn to estimate better. Except the existing popular tooling absolutely sucks at facilitating that.

--

[0] - https://www.joelonsoftware.com/2007/10/26/evidence-based-sch...


> The problem is that in many software companies, no two tasks look alike, so your past data is going to be somewhat dirty, particularly at the team level

Tasks don't really give you anything, though. I think if it were measured in terms of features, that would work a lot better.

And with a Monte Carlo simulation, you learn how long it takes to complete a random feature in a random period. So maybe you can't perfectly predict the next feature, but over a year, on average, your predictions should tend toward good accuracy.

And you could take it further: train a model on feature requests and the time they actually took, then run inference on any new feature request. Maybe let devs add some labels; it's not exactly estimating, but they could note whether they think something is "complex", "ambiguous", "straightforward", etc.
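The label idea could start much simpler than ML. A toy baseline, with entirely invented history, is just "average historical duration of items sharing a label":

```python
from statistics import mean

# Invented history: (labels the devs applied, actual days to complete).
history = [
    ({"complex", "ambiguous"}, 21),
    ({"straightforward"}, 3),
    ({"complex"}, 13),
    ({"ambiguous"}, 8),
    ({"straightforward"}, 4),
]

def predict_days(labels, history):
    """Naive baseline: average the durations of past items sharing at
    least one label; fall back to the overall mean if nothing matches."""
    matches = [days for tags, days in history if tags & labels]
    return mean(matches) if matches else mean(d for _, d in history)

print(predict_days({"complex"}, history))  # -> 17.0 (mean of 21 and 13)
```

If a baseline like this already beats the team's quotes, fancier models are gravy; if it doesn't, the labels probably don't carry signal.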

I'm sure there'd be ways to do it.


Monte Carlo here is just a way of adding together estimate distributions without doing any of the fancy math. The variance of the resulting compound estimate strongly depends on the variance of the inputs.

Feature-based estimation seems too coarse to me. Not only is no feature ever quite like the others once you dig into it, they're also interconnected.




