Software Estimation: Demystifying the Black Art (Developer Best Practices)

Author: Steve McConnell



by fenier   2019-05-12
It's not exactly fair, and also not likely without extensive time spent doing the estimation. Sure, you can be more precise, but the more precise you need to be, the longer it takes you to frame the estimate.

Two books help with methods of doing this.

Rapid Development: Taming Wild Software Schedules

Software Estimation: Demystifying the Black Art

This likely does mean you'll need to be allowed to deploy automated test cases, develop documentation and the entire nine yards, because it's going to be far harder to hit any reasonable deadline if you don't have a known development pipeline.

You'll need to reduce your uncertainty as much as you can to even have a chance - and even then, things will still blindside you and double (or more) the actual vs. the original estimate.

by swatcoder   2018-11-22
Estimation is often challenging and sometimes impossible, but there are in fact many opportunities to deliver reliable estimates.

If you'd genuinely like to learn more about estimating software projects and when you can do so more or less reliably, Steve McConnell's Software Estimation: Demystifying the Black Art provides a great survey of techniques.

by anonymous   2017-08-20

Two thoughts: drive quality and improve estimates.

I work in a small software shop that produces a product. The most significant difference between us and other shops of a similar size I've worked in is full-time QA (now more than one person). The value this person should bring on day one is not to test until the tests are written out. We use TestLink. There are several reasons for this approach:

  1. Repeating tests to find regression bugs. You change something, what did it break?
  2. Thinking through how to test functionality ahead of time - this is a cheek-by-jowl activity between developer and QA, and if it doesn't hurt, you're probably doing it wrong.
  3. Having someone else test and validate your code is a Good Idea.

Put some structure around your estimation activity. Reuse a format, be it Excel, MS Project or something else (at least do it digitally). Do so, and you'll start to see patterns repeating in how you build software. Generally speaking, include time in your estimates for thinking about it (a.k.a. design), building, testing (QA), fixing and deployment. Also, read McConnell's book Software Estimation; it's a great book, so use anything from it you think is worthwhile.

Poor quality means longer development cycles. The most effective step is QA; short of that, unit tests. If it were a web app I'd also suggest something like Selenium, but you're dealing with hardware, so I'm not sure what can be done. Improving estimates gives you the ability to forecast when things will suck, which may not sound like much, but knowing ahead of time can be cathartic.

by anonymous   2017-08-20

Use neither min nor max but something in between.

Erring on the side of overestimation is better. It has much nicer cost behavior in the long term.

  • To overcome the stress due to underestimation, people may take shortcuts that are not beneficial in the long term. For example, taking on extra technical debt that has to be paid back eventually, and it comes back with interest. The costs grow exponentially.

  • The extra cost from inefficiency due to Student syndrome behaves linearly.

Estimates and targets are different. You (or your managers and customers) set the targets you need to achieve. Estimates tell you how likely you are to meet those targets. A deadline is one sort of target. The deadline you choose depends on what confidence level (risk of not meeting the deadline) you are willing to accept. P50 (a 0.5 probability of meeting the deadline) is commonplace. Sometimes you may want to schedule with P80 or some other confidence level. Note that the probability curve is long-tailed, so the more confidence you want, the more time you will need to allocate for the project.
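The P50/P80 idea above can be sketched with a quick Monte Carlo simulation. This is a minimal illustration, assuming a long-tailed lognormal duration distribution with made-up parameters (median around 10 days), not a calibration method from the book:

```python
import random

random.seed(42)

# Model one task's completion time as lognormal: long right tail,
# so high-confidence estimates sit well above the median.
samples = sorted(random.lognormvariate(mu=2.3, sigma=0.5)
                 for _ in range(100_000))

def percentile(sorted_data, p):
    """Return the p-th percentile of an already-sorted sample."""
    idx = int(p / 100 * (len(sorted_data) - 1))
    return sorted_data[idx]

p50 = percentile(samples, 50)  # the "commonplace" schedule
p80 = percentile(samples, 80)  # a more conservative commitment
print(f"P50 estimate: {p50:.1f} days")
print(f"P80 estimate: {p80:.1f} days")
```

Because the distribution is skewed, the P80 figure lands noticeably above the P50 one; that gap is the schedule buffer you buy with the extra confidence.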

Overall, I wouldn't spend too much time tracking individual tasks. With P50 targets half of them will be late in any case. What matters most is how the aggregate behaves. When composing individual task estimates into an aggregate, neither min nor max is sensible. It's extremely unlikely that all tasks complete in either minimum time (most likely something like P10 time) or maximum time (e.g. P90 time): for n P10/P90 tasks the probability is 0.1^n.

PERT has some techniques for coming up with reasonable task duration probability distributions and aggregating them into larger wholes. I won't go into the math here. Here are some pointers for further reading:
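As a taste of the PERT aggregation mentioned above, here is a minimal sketch using the classic three-point (beta) approximation; the task numbers are hypothetical, and the independence assumption is a simplification:

```python
import math

# Hypothetical tasks: (optimistic, most likely, pessimistic) days.
tasks = [
    (2, 4, 10),
    (1, 3, 8),
    (5, 8, 20),
]

# Classic PERT per-task approximation:
#   mean    = (O + 4M + P) / 6
#   std dev = (P - O) / 6
means = [(o + 4 * m + p) / 6 for o, m, p in tasks]
variances = [((p - o) / 6) ** 2 for o, m, p in tasks]

# Means add; for (assumed) independent tasks, variances add too,
# so the aggregate's spread grows slower than the sum of spreads.
project_mean = sum(means)
project_sd = math.sqrt(sum(variances))

# Rough P80 via a normal approximation of the aggregate (z ≈ 0.84).
p80 = project_mean + 0.84 * project_sd
print(f"mean={project_mean:.1f}d, sd={project_sd:.1f}d, ~P80={p80:.1f}d")
```

Note how this embodies the point above: using all the pessimistic values would give a far larger number than the aggregate P80, because it is extremely unlikely every task hits its worst case at once.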

by Brian Genisio   2017-08-20

I highly recommend the book "Software Estimation: Demystifying the Black Art" by Steve McConnell. It really covers this question well.

by PaulHoule   2017-08-19
Get these two


If you want to get deeper into project management I suggest that you become a member of the PMI and possibly get certification from them. The training and testing are rigorous and it is a certification that means something both from the knowledge you get and the benefit of having it on your resume.

by cvs268   2017-08-19
@otoolep Have you read Steve McConnell's "Software Estimation: Demystifying the Black Art"?

(if yes, then why isn't it on your list!?)

by gte910h   2017-08-19
Check out (non-aff link)

It's a good summary of several techniques: how to apply them, how to decide how in-depth an estimate to do, and how to talk with people who try to negotiate estimates with you.