According to this study, you can measure a team's progress and ability with:
* number of production deployments per day (want higher numbers)
* average time to convert a request into a deployed solution (want lower numbers)
* average time to restore broken service (want lower numbers)
* average number of bugfixes per deployment (want lower numbers)
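These four metrics are straightforward to compute from a deployment log. A minimal Python sketch, using made-up records (all field names and numbers here are illustrative, not from the study):

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment records: (requested_at, deployed_at, bugfix_count).
deployments = [
    (datetime(2024, 1, 1, 9), datetime(2024, 1, 2, 9), 0),
    (datetime(2024, 1, 2, 10), datetime(2024, 1, 2, 16), 1),
    (datetime(2024, 1, 2, 12), datetime(2024, 1, 3, 12), 2),
]
# Hypothetical outages: (service_down_at, service_restored_at).
outages = [
    (datetime(2024, 1, 2, 16), datetime(2024, 1, 2, 17)),
]
days_observed = 3

# Deployment frequency: deployments per day (higher is better).
deploys_per_day = len(deployments) / days_observed

# Lead time: average hours from request to deployment (lower is better).
lead_time = mean(
    (done - asked).total_seconds() for asked, done, _ in deployments
) / 3600

# Mean time to restore service, in hours (lower is better).
mttr = mean(
    (up - down).total_seconds() for down, up in outages
) / 3600

# Average bugfixes per deployment (lower is better).
bugfixes_per_deploy = mean(fixes for _, _, fixes in deployments)

print(deploys_per_day, lead_time, mttr, bugfixes_per_deploy)
```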
I am curious about other studies in this area and if there is overlapping or conflicting information.
I believe that Accelerate: The Science of Lean Software and DevOps is the one book that every developer, and anyone else involved in software development, should read.
Book recommendation: https://www.amazon.com/Accelerate-Software-Performing-Technology-Organizations/dp/1942788339
This book is not directly about agile. But it is rather unique, because it is based on studies rather than the authors' subjective opinions. It tries to figure out which practices actually matter for modern software development teams.
Most often our rituals are not based on evidence. They originally arose out of convenience and were afterwards kept up as dogma. Sadly, they were often introduced by the wrong people.
For example, agile was mostly invented by developers, but the widespread adoption of Scrum was driven by business people (managers acting as Scrum Masters). As a result, a dogma formed that may not be in the best interest of solid software engineering.
>I am unconvinced the report at ... really relates to high-quality software.
There are actually two stability metrics in the DevOps report: mean time to recovery (MTTR) and change fail rate, both of which are lowest for high performers. While these are not what a developer usually thinks of as "quality", I find it hard to believe that software with low internal quality could be released without massive pain. When you listen to talks by Jez Humble, one of the authors of the DevOps report, he says exactly what this article says; it is a core message of most of his talks.
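As a concrete illustration of how those two stability metrics fall out of release data (the numbers below are made up, not from the report):

```python
from statistics import mean

# Hypothetical release history: hours_to_recover is None when a
# release caused no incident.
releases = [
    {"hours_to_recover": None},
    {"hours_to_recover": 2.0},   # failed release, 2h to restore service
    {"hours_to_recover": None},
    {"hours_to_recover": 0.5},   # failed release, 30min to restore
]

incidents = [r["hours_to_recover"] for r in releases
             if r["hours_to_recover"] is not None]

# Change fail rate: fraction of releases that caused a failure.
change_fail_rate = len(incidents) / len(releases)

# Mean time to recovery, averaged over incidents only.
mttr_hours = mean(incidents)

print(change_fail_rate, mttr_hours)
```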
Also, in the full book, they go into more detail about actual technical practices, and they find that trunk-based development, developer-owned tests, and refactoring correlate with high IT performance. All of these practices are what Fowler and Humble associate with software quality. I really recommend reading the book, as the report is mostly condensed material for managers and executives to read between meetings.
Accelerate - https://www.amazon.com/Accelerate-Software-Performing-Technology-Organizations/dp/1942788339/
The 7 Habits of Highly Effective People - https://www.amazon.com/Habits-Highly-Effective-People-Powerful/dp/0743269519
Extreme Ownership - https://www.amazon.com/Extreme-Ownership-U-S-Navy-SEALs/dp/1760558206/
> the means by which you convince management/leadership to review code is unknown to me.
I made an edit to this effect. I like the arguments made by Accelerate for having good change review, but practically any mainstream "DevOps evangelization" book out there is going to have a section on the business cases for dedicated, principled change review.
The book I mentioned is by the people behind the State of DevOps reports and is essentially a deeper dive into the research: the methodology, the findings, and the reasoning behind the approach. It's basically intended to address the skepticism with transparency, so we can decide for ourselves how solid the results are.
Martin Fowler makes almost all of the same points about the State of DevOps report that you do in his foreword to the book, and says he (and others) encouraged them to write it after they walked him through their methodology. He also says that it makes a compelling case, and that the surveys are a good basis given the approach they take, but that independent studies using different approaches are needed to verify the results.
The book’s worth a read at $15 if you have the time.
That said there’s a section of the book on trunk-based development:
> Our research also found that developing off trunk/master rather than long-lived feature branches was correlated with higher delivery performance. Teams that did well had fewer than three active branches at any time, their branches had very short lifetimes (less than a day) before being merged into trunk and never had “code freeze” or stabilisation periods...these findings are independent of team size, organisation size, or industry.
> ...we agree that working on short-lived branches that are merged into trunk at least daily is consistent with commonly accepted continuous integration practices.
> We should note, however, that GitHub Flow is suitable for open source projects whose contributors are not working on a project full time.
I agree with their findings after reading the book, and while their data and methods can now be openly reviewed, I also agree with Fowler that it will be nice to see some independent verification. It's by far the best research into software delivery I've seen since Peopleware, though.
There is a really interesting section in Accelerate that discusses results on the organizational impact of change advisory boards (CABs), reporting a negative correlation between CAB approvals and stability.
Though I take anything with a grain of salt, it makes sense to me personally. I think it's unreasonable to expect unrelated or uninformed parties to handle approvals intelligently without intimate knowledge of the change. That is not to say that people haven't been successful with it, of course (this is the obligatory disclaimer so that I don't get flooded with replies full of CAB success stories :) ).
If it takes 10 people and six hours to accomplish the release of even the smallest of features, then it sounds like there's a bug in your "ship fast" feature, and the team should invest some or all of its resources in fixing that. You probably can't do it overnight, but you gotta chip away at it over time at the very least.
If you are looking for justification backed by real-world numbers, I highly recommend the book "Accelerate" by Nicole Forsgren et al.