Surprised nobody has mentioned Making Software: https://www.amazon.com/Making-Software-Really-Works-Believe/...
It takes an empirical view of the software process. Does writing tests first help you develop better code faster? Can code metrics predict the number of bugs in a piece of software? Do design patterns actually make better software?
> Unfortunately, the phrasing would be more like: it doesn't matter what's true, it matters what I can sell to management.
Yes, that's my main beef with it. SCRUM could be based on science, but sadly, it isn't. Here's an interesting book: https://www.amazon.com/Making-Software-Really-Works-Believe/...
The thing about "many good ideas and rules of thumb" is, I've got a few dozen of those of my own! Most of us do. It would be interesting if there were decisive evidence against any of them, but even when I read studies whose conclusions contradict my beliefs, the studies are so flimsy that I find it easy to keep my beliefs.
There does seem to be a recent wave of software engineering literature, exemplified by http://www.amazon.com/Making-Software-Really-Works-Believe/d... (which I haven't read). Are you familiar with this more recent stuff? Does it represent new research or merely new reporting on old research? If the former, are the standards higher?
It has some pervasive basic flaws that the reader should be aware of. First off is the "productivity" measure: the last time I looked around, this was nearly impossible to quantify for software development. The author chose SLOC and development time as stand-ins for productivity. SLOC has a connection with code quality and development time, but development time is well known to vary[1].
In particular, the way development time is used in this thesis leads to a pretty classic "psychologist's fallacy"[2]: the author generalizes their own experience as conclusive. This implies that "development time" in this thesis should be thrown out when drawing meaningful conclusions, as the sample size is 1. It is, however, an interesting experience report.
Another basic flaw is equating 'modern' with 'good'. The author remarks that the main disadvantage of C and Fortran is their age, and this crops up here and there. Workflow tooling is a major focus of the Go and Rust developers, whereas C workflow tooling is usually home-brewed; that does not make C-the-language worse.
More on a meta level, my advisor in my Master's work drummed into me that I should NOT insert my opinion into the thesis until the conclusion. So I found the editorializing along the way very annoying.
And finally, and very unfortunately, the experience level of the author appears to be low in all three languages; this is significant when it comes to implementing high-performance code.
---
Now, for the interesting / good parts of this experience report.
The standout for me was the Rust speedup on the 48-core machine. I did not expect that, nor did I expect Go to make such a good showing there either.
In general, Go's performance and memory usage made a surprisingly good showing (to me) in this work. I shall have to revise my opinion of it upward on the performance axis.
I am both surprised and vaguely annoyed by the Rust runtime eating so much memory (I would love to hear from a Rust contributor why that is and what's scheduled to be done about it).
Of course, the stronger type system of Rust catching an error that C and Go didn't pick up is both (a) humorous and (b) a vindication of the type community's work in these areas. I look forward to Rust 1.0!
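(The thesis doesn't reproduce the exact error, so the following is only a made-up sketch of the general shape of the thing: where C lets you ignore a return code and Go lets you drop an error value on the floor, Rust's Result type won't hand over the value until the failure arm is dealt with.)

    // Hypothetical sketch, not the actual bug from the thesis: parsing input that can fail.
    fn main() {
        let input = "42x";
        // parse() returns Result<i32, ParseIntError>; the compiler refuses to let us use
        // the value without addressing the Err arm, unlike an ignored return code in C
        // or a discarded error value in Go.
        match input.parse::<i32>() {
            Ok(n) => println!("parsed {}", n),
            Err(e) => println!("bad input: {}", e),
        }
    }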
One note by the author that is worth calling out more strongly is the deployment story: Go has a great one with static linking, whereas C gets sketchy and Rust is... ??. I believe Rust has a static linking option, but I haven't perused the manual in that area for some time. For serious cloud-level deployments over time, static linking is very nice, and I'm not surprised Google went that route. It's something that would be very nice to have as a Rust emission option, say "--crate-type staticbin".
Anyway. I look forward to larger sample sizes and, one day, a better productivity measure.
[1] http://www.amazon.com/Making-Software-Really-Works-Believe/d... The situation is actually far worse than just varying developer time.
[2] https://en.wikipedia.org/wiki/Psychologist%27s_fallacy