The deployment-sized hole between Continuous Integration, Continuous Delivery and Provisioning tools


Andrew Phillips

As a developer and a build engineer, I was always dissatisfied with the amount of glue I had to apply to stick popular CI or CD tools, whether Jenkins, Bamboo or Go, to popular provisioning tools (Puppet, Chef or the like), which we also used for the application tier.

There was simply a fundamental impedance mismatch between the two types of tools:

  1. We needed an application-oriented model for our services, which consisted of multiple components ("traditional" or micro-service style) spread across various machines. The provisioning tools' domain models all felt too machine-centric.

  2. We were looking for clean and easy integrations with our build, packaging and CI/CD tool of choice. Since often none existed, we ended up with a bunch of custom tasks and script steps.

  3. Our pipelines were -- and still are -- based on "directive automation." That is, we invoked whatever tool(s) were needed for a step, kept track of the process and its status, and continued (or not) once it had completed. This did not match the "eventually consistent" style of the server mode of tools like Puppet or Chef at all well.

That mode was great from an operational perspective ("Make sure all my x-thousand machines have the right configuration - I don't need to watch it happen, just get it done!"), but not really compatible with our delivery pipeline. We sometimes used a type of polling approach, but often ended up using a single-node model (chef-solo or standalone puppet) for our pipeline. That worked, but didn't seem much like the way the tools were ideally supposed to be used.

  4. For the deployment steps in our pipeline, we needed to orchestrate actions across multiple nodes, respecting dependencies but also parallelizing where possible for efficiency. This resulted in a lot of glue scripting, both to describe which kind of orchestration order was required and to actually carry it out (see the sketch after this list for the flavor of glue involved).
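To make that glue concrete, here is a minimal, hypothetical sketch of the kind of orchestration script this pattern tends to produce: single-node provisioning runs (`puppet apply` or `chef-solo`) invoked per host over SSH, in dependency order, parallelized within a tier. The host names, tier layout and exact commands are assumptions for illustration only, not taken from any real pipeline.

```python
# Hypothetical glue script: run provisioning in single-node mode on each host,
# tier by tier (dependencies), in parallel within a tier. Host names and
# commands are illustrative assumptions, not from a real pipeline.
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Tiers must complete in order; hosts within a tier can run in parallel.
TIERS = [
    ("database", ["db-01"],            "sudo puppet apply /etc/puppet/manifests/db.pp"),
    ("backend",  ["app-01", "app-02"], "sudo chef-solo -c /etc/chef/solo.rb -j /etc/chef/app.json"),
    ("frontend", ["web-01", "web-02"], "sudo puppet apply /etc/puppet/manifests/web.pp"),
]

def provision(host, command):
    """Invoke the provisioning tool on one host and report success or failure."""
    result = subprocess.run(["ssh", host, command])
    return host, result.returncode == 0

def main():
    for tier, hosts, command in TIERS:
        print(f"Deploying tier '{tier}' to {hosts}")
        with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
            results = list(pool.map(lambda h: provision(h, command), hosts))
        failed = [host for host, ok in results if not ok]
        if failed:
            # "Directive" style: we watch the run and stop the pipeline on failure.
            raise SystemExit(f"Tier '{tier}' failed on: {failed}")

if __name__ == "__main__":
    main()
```

Even this toy version shows why the glue keeps growing: every new tier, dependency or failure-handling rule has to be encoded by hand in the script, rather than derived from a model of the application and its environments.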

This "deployment-sized" hole was one of the many motivating factors for use of  XL Deploy; and indeed, we use XL Deploy internally also in setups very similar to the above scenario (more on how we integrate Jenkins, XL Deploy, Puppet and Chef in forthcoming articles).


Packaging and deploying an application consisting of Puppet manifests and Chef cookbooks (Docker container requests are also supported, by the way). Easy integration with Jenkins; XL Deploy takes care of targeting the manifests/cookbooks at the appropriate hosts and of all the orchestration of the invocations. Piece of cake!

From the many blog posts and forum threads that talk about CI/CD tools plus provisioning tools as the default delivery stack, you would be tempted to conclude that this problem has been solved. Based on what we see with XL Deploy, this is evidently not the case.

Given the understandable perception of bias on the part of a deployment automation vendor like ourselves, we're not especially surprised that acceptance of the deployment and orchestration gap has taken this long. This makes it all the more interesting to finally see growing recognition, both among the DevOps practitioners we speak to at events such as devopsdays and in discussions such as this one, that there most definitely is a place for a deployment automation tool in the standard delivery stack.

About the Author

Andrew Phillips heads product management at XebiaLabs. He is an evangelist and a leader in the DevOps, Cloud and Application Release Automation space. He sits on the management team and drives product direction, positioning and planning. Reach him at [email protected].

June 27, 2014