Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation (Addison-Wesley Signature Series (Fowler))

Category: Programming
Author: Jez Humble, David Farley
Mentions: Stack Overflow 13 (all time), Hacker News 3 (this year), Stack Overflow 4 (this month)


by anonymous   2019-07-21

I'm working through the process of creating a "deployment pipeline" for a web application at the moment and am sifting my way through similar problems. Your environment sounds more complicated than ours, but I've got some thoughts.

First, read this book. I'm 2/3 of the way through it, and it's answering every question I ever had about software delivery, and many that I never thought to ask.

Version Control Systems are your best friend. Absolutely everything required to build a deployable package should be retrievable from your VCS.

Use a Continuous Integration server; we use TeamCity and are pretty happy with it so far.

The CI server builds packages that are totally agnostic to the eventual target environment. We still have a lot of code that "knows" about the target environments, which of course means that if we add a new environment, we have to modify all such code to make sure it will cope and then re-test it to make sure we didn't break anything in the process. I now see that this is error-prone and completely avoidable.

Tools like Visual Studio support config file transformation, which we looked at briefly, but we quickly realized that it depends on developers preparing environment-specific config files alongside the code so they can be added to the package. Instead, break out any settings that are specific to a particular environment into their own config mechanism (e.g. another XML file) and have your deployment tool apply this to the package as it deploys. Keep these files in VCS, but use a separate repository so that revisions to config don't trigger new builds and falsely inflate the build number.

This way, your environment-specific config files only contain things that change on a per-environment basis, and only where that environment needs something different from the default. Contrary to @gbjbaanb's recommendation, we are planning to do whatever is necessary to keep the package "pure" and the environment-specific config separate, even if it requires custom scripting etc., so I guess we're heading down the path of madness. :-)

For us, PowerShell, XML and Web Deploy parameterization will be instrumental.

I'm also planning to be quite aggressive about refactoring the config files so that the same information isn't repeated several times in various places.
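To make the idea concrete, here's a minimal sketch in Python (keys and values are hypothetical; our real tooling is PowerShell/XML/Web Deploy, but the merge logic is the same idea):

```python
# A minimal sketch of the idea (hypothetical keys and values; our real
# tooling is PowerShell/XML/Web Deploy): the package ships with defaults,
# and the deployment tool applies per-environment overrides as it deploys.
def apply_overrides(default, overrides):
    """Return the default config with environment-specific values applied."""
    merged = dict(default)
    merged.update(overrides)
    return merged

default_config = {"db_host": "localhost", "log_level": "INFO", "cache": True}
uat_overrides = {"db_host": "uat-db.internal"}  # only what differs in UAT

uat_config = apply_overrides(default_config, uat_overrides)
print(uat_config["db_host"])    # uat-db.internal
print(uat_config["log_level"])  # INFO (default, unchanged)
```

The key property: the override file stays tiny because it holds only per-environment deltas, and the package itself never changes between environments.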

Good luck!

by anonymous   2019-07-21

Hmm, this is a complex question but I'll answer as best I can. A lot of this will depend on you/your organization's risk tolerance and how much time they want to invest in tests. I believe in a lot of testing, but there is such a thing as too much.

A unit test tests a unit of code. Great, but what's a unit? There is plenty of good discussion on this, but a unit is basically the smallest testable part of your application.

Much literature describes multiple phases of testing, including unit tests, which are very low level and mock externalities such as DBs, file systems or remote systems, and "API acceptance tests" (sometimes called integration tests, although that is a vague term that can mean other things). The latter type fires up a test instance of your application, invokes APIs and asserts on the responses.

The short answer is as follows: for unit tests, focus on the units (probably services, or something more granular). But the other set of tests you describe, wherein the test behaves like a client and invokes your API, is worthwhile too. My suggestion: do both, but don't call both unit tests.
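A made-up example of the distinction, in Python: the unit test mocks the DB (an externality) so that only the function's own logic is under test, while an acceptance test would exercise the real application through its API.

```python
# Made-up example of the distinction: the unit test mocks the DB (an
# externality), so only the function's own logic is under test.
from unittest import mock

def fetch_username(db, user_id):
    row = db.get_user(user_id)
    return row["name"]

def test_fetch_username_unit():
    db = mock.Mock()
    db.get_user.return_value = {"name": "alice"}
    assert fetch_username(db, 42) == "alice"
    db.get_user.assert_called_once_with(42)

test_fetch_username_unit()
# An "API acceptance test", by contrast, would start a real test instance of
# the application, invoke its API as a client would, and assert on responses.
```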

by anonymous   2019-01-13

This is a question that's broader than just Maven, because what you do with Maven is determined by your dev process.

If you're interested in the dev/release process in general, you could research the Continuous Delivery topic as well as Continuous Integration. You could start with the Continuous Delivery book, which gives a good perspective on both CI and CD (it's pretty boring, though).

As for videos, you could just search the internet for Continuous Delivery. I particularly like the videos from Sam Newman.

As for Maven itself, there are books like Maven: The Complete Reference or Apache Maven 2: Effective Implementation (which is a bit old, but Maven has been pretty stable from the end-user perspective, so not much has changed).

by vsupalov   2017-11-23
Good point. But that's not limited to automation. As with data engineering or software development, you're likely to overdo it and build something which does not agree with reality if you try to build the whole thing at once. Start with the smallest possible deployment pipeline (a skeleton pipeline, as it's called in the Continuous Delivery book, which is from 2010 but is still the best thing on the topic out there [1]).

You start by covering the most essential needs, just enough that it becomes useful and usable. Then you iterate from there, building something which suits your team, company and product, learning and adapting in the process.
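A sketch of what "skeleton pipeline" means in practice (all names are hypothetical stubs, shown in Python): every stage exists end to end from day one, and each gets fleshed out iteratively afterwards.

```python
# A skeleton pipeline sketch (hypothetical stubs): the stages exist end to
# end from day one, and each gets fleshed out iteratively afterwards.
def build():
    return "artifact-1.0"          # later: real compile/package step

def run_tests(artifact):
    return True                    # later: real unit/acceptance tests

def deploy(artifact):
    return f"deployed {artifact}"  # later: real deployment scripts

def pipeline():
    artifact = build()
    if not run_tests(artifact):
        raise RuntimeError("tests failed, pipeline stops here")
    return deploy(artifact)

print(pipeline())  # deployed artifact-1.0
```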


by anonymous   2017-09-11
For safety and speed: you don't want your code history stolen by hackers, and you want to speed up the whole build process. If you're interested in continuous deployment, you could look at Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Jez Humble and David Farley. This book has a good explanation of how to do it right.
by anonymous   2017-08-20

This topic itself is quite complicated, and there is no general or standard approach available for it. I can highly recommend reading the book Continuous Delivery; there you will also find a list of available tools and a proper explanation of how to do things correctly. Keep in mind this will take a significant amount of time, but on the other hand it will make your life so much easier.

In a nutshell, you will need to have:

  1. A list of all products (binaries, MSIs, DLLs, etc.) available within your company
  2. A list of all possible configurations for the products
  3. The available versions of each product
  4. A list of environments (staging, UAT testing, production, etc.)

You will also need a place where you can select a product of a particular version and its configuration, and deploy it with one button click to the selected environment. This user interface should also show you which product (version and configuration) is deployed to which environment. Behind the scenes, the tool will just call the scripts that you wrote to perform the custom deployment.
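A sketch of that catalogue/one-button idea (hypothetical names, in Python): it records which product version and configuration is deployed where, and the `deploy` call is the point where your custom deployment scripts would be invoked.

```python
# Hypothetical sketch of the catalogue described above: it records which
# product version/configuration is deployed to which environment, and
# deploy() is where the custom deployment scripts would be invoked.
from dataclasses import dataclass, field

@dataclass
class Catalogue:
    # (environment, product) -> (version, configuration)
    deployed: dict = field(default_factory=dict)

    def deploy(self, environment, product, version, configuration):
        # Behind the "one button click": call the deployment scripts here.
        self.deployed[(environment, product)] = (version, configuration)

    def what_is_on(self, environment, product):
        return self.deployed.get((environment, product))

cat = Catalogue()
cat.deploy("UAT", "billing-service", "2.1.0", "uat.config")
print(cat.what_is_on("UAT", "billing-service"))  # ('2.1.0', 'uat.config')
```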

In terms of tools, there are quite a few things that you will need:

  1. A CI server (Go, TeamCity, TFS)
  2. Build management scripts, such as MSBuild, NAnt, etc.
  3. Tools like Puppet, or a CMDB, for configuration management
  4. Deployment scripts, e.g. in PowerShell
by anonymous   2017-08-20

I think either strategy can be used with continuous development, provided you remember one of the key principles: each developer commits to trunk/mainline every day.


I've been doing some reading of this book on CI, and the authors suggest that branching by release is their preferred branching strategy. I have to agree. Branching by feature makes no sense to me when using CI.

I'll try and explain why I'm thinking this way. Say three developers each take a branch to work on a feature. Each feature will take several days or weeks to finish. To ensure the team is continuously integrating, they must commit to the main branch at least once a day. As soon as they start doing this, they lose the benefit of creating a feature branch. Their changes are no longer separate from all the other developers' changes. That being the case, why bother to create feature branches in the first place?

Using branching by release requires much less merging between branches (always a good thing), ensures that all changes get integrated ASAP, and (if done correctly) ensures your code base is always ready to release. The downside of branching by release is that you have to be considerably more careful with changes. E.g. large refactorings must be done incrementally, and if you've already integrated a new feature which you don't want in the next release, then it must be hidden using some kind of feature-toggling mechanism.
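A minimal example of such a feature-toggling mechanism (all names are made up, shown in Python): the new feature is integrated into trunk early but stays dark until the toggle is flipped for a release or environment.

```python
# A minimal feature-toggle sketch (all names hypothetical): the new feature
# is integrated into trunk early, but stays dark until the toggle is flipped.
TOGGLES = {"new_checkout": False}  # set per release/environment

def legacy_checkout_flow(cart):
    return f"legacy-checkout:{len(cart)}"

def new_checkout_flow(cart):
    return f"new-checkout:{len(cart)}"

def checkout(cart):
    # The unreleased code path is hidden behind the toggle.
    if TOGGLES["new_checkout"]:
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)

print(checkout(["book"]))  # legacy-checkout:1
```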


There is more than one opinion on this subject, though; there are also blog posts arguing for feature branching with CI.

by anonymous   2017-08-20

(That is quite a few questions inside one big question.)

But I would refer you to the Continuous Delivery book.

Edit: (As commented you already read this book) Some suggestions which you may already do, but for others with similar issue:

  • Ensure loosely coupled interfaces, e.g. by using duck typing.
  • Use feature toggles for gradual rollout of features across your inter-dependent applications.
  • Evolution scripts for the DBs
  • A version table in every DB
  • Symlinks are a neat way of toggling rollouts of standalone apps
  • Have a look at how other big enterprises roll out:
    • Amazon's use of a new server per deployment
    • Netflix's build demo by Carl Quinn, using short-lived cloud instances
    • Deploying 50 times a day at IMVU, and the blog by Eric Ries
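A sketch of the "evolution scripts plus a version table" items above, using Python's built-in sqlite3 (the scripts and table names are purely illustrative): each DB records the schema version it is at, and deployment applies only the evolutions it is missing.

```python
# Sketch of "evolution scripts + a version table" using Python's built-in
# sqlite3; the scripts and table names here are purely illustrative.
import sqlite3

EVOLUTIONS = {
    1: "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE users ADD COLUMN email TEXT",
}

def migrate(conn):
    """Apply any evolution scripts newer than the recorded schema version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version in sorted(EVOLUTIONS):
        if version > current:
            conn.execute(EVOLUTIONS[version])
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)  # applies evolutions 1 and 2
migrate(conn)  # idempotent: nothing newer to apply
print(conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0])  # 2
```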

But I have no solid solution to the inter-dependency auto-deploy strategy you actually ask for :|

by MalcolmDiggs   2017-08-19
Just started reading "Continuous Delivery" by Jez Humble, et al. Really wish I had grabbed this years ago; it's a great primer on what (for me) is a confusing topic.
by sciurus   2017-08-19
If you haven't read it already, read Continuous Delivery.