System Configurations + Code Revisions = Continuous Integration FTW!

March 15, 2009

This is a guest post by Patrick Debois, the author of JEDI: Just Enough Developed Infrastructure. I stumbled across Patrick early last year while searching desperately for some relevant topics on “agile operations”. One amusing, yet poignant, hit was the lost use cases of Operations. Agile Web Operations didn’t even exist at the time, but his humorous portrayal of an unfortunately all too common problem planted a seed which has really started to bear fruit. It is with great pleasure and many thanks that I introduce Patrick Debois and his ideas on how we can take Continuous Integration to the next level. Take it away, Patrick!

CI in development

Developers are really nice guys and they have figured out a lot of cool agile stuff. One of the things that really interests me as a sysadmin is Continuous Integration (CI). Let’s have a look at the Wikipedia definition:
“Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily – leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly.”

While this is a great concept, in practice there are often two separate groups, developers and testers, involved in this process. They have the sysadmin set up a few environments (dev, test, integration, etc.) and then go off playing with them during the project. Fast forward a month or two – these environments have become project specific and probably no longer reflect the production environment. This leads to conflicts between the Dev and Test Team and the Sysadmin Group, which may even deteriorate into an “us and them” discussion.

Where’s the difference?

When building an operating environment, a sysadmin more or less goes through the same stages as the development process. The difference is that they don’t produce the software which runs in the environment; instead, they create the environment which operates the software. Let’s see how we can map their work to the CI workflow. The image is an augmented version of one from a JavaWorld article on Continuous Integration with Hudson:

Comparison of the software and operating environment build cycle

Building an operating environment step by step

Source Repository:
Sysadmins have configs such as hostfiles, bootptab, dhcpd configs, DNS zones, jumpstart profiles, and scripts or recipes. Good practice demands that these also be kept under version control so that changes can be tracked. If you don’t have a central repository for these configs, you should set one up (and avoid your laptop suffering the truck factor). Chances are you already have one for your development team, so just create a systems repository. Augeas follows another approach, versioning the configs on the systems themselves.
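Getting started is as simple as importing what you already have. A minimal sketch, using git purely as an example (subversion or any other version control system works just as well; the file contents here are placeholders):

```shell
# Sketch: put existing sysadmin configs under version control.
mkdir -p configs-repo && cd configs-repo
git init -q .
git config user.email "[email protected]"   # placeholder identity for the example
git config user.name  "sysadmin"

# Pretend these are your real hostfiles / dhcpd configs / zone files:
printf '127.0.0.1 localhost\n' > hosts
printf 'ddns-update-style none;\n' > dhcpd.conf

git add hosts dhcpd.conf
git commit -q -m "import initial network configs"
git log --oneline        # from now on, every change is tracked
```

From here, every change a sysadmin makes goes through a commit – exactly the habit the CI workflow builds on.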

Build:
One of the core things to build in an operating environment is the operating system itself. You check out your latest operating environment version from your source repository. Then, starting from these templates/profiles, systems such as jumpstart or kickstart will build the base installation: they format disks, download all necessary packages and patches using your local yum, yast, or apt-get, and even configure the basic network. This is very similar to dependency systems like maven and ruby gems. You can even set up local proxy caches just as you would for maven repositories. Tools such as cobbler allow you to integrate this whole process; it’s actually being used in the test plans of the new Fedora distribution to automate the provisioning of systems.
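To make the build step concrete, here is a sketch of rendering a minimal kickstart profile – the mirror URL, package list, and file name are all made-up placeholders, and a real profile would come out of your source repository rather than a heredoc:

```shell
# Sketch: a build step renders the kickstart profile that drives the install.
cat > minimal-ks.cfg <<'EOF'
install
url --url=http://mirror.example.com/fedora/
lang en_US.UTF-8
keyboard us
timezone UTC
clearpart --all --initlabel
autopart
reboot
%packages
@core
openssh-server
%end
EOF
# A tool like cobbler (or a plain PXE setup) would now hand this
# profile to the installer to build the base system unattended.
echo "profile ready: minimal-ks.cfg"
```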

After this, you would typically apply your recipes with tools like puppet, chef and carpet to refine these installations, again coming from your source repository. Finally, you would run some specific scripts that you developed yourself. At the end of the process you would have an installed operating system and maybe even the middleware and database installed and configured.
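As an illustration of such a recipe, here is a hypothetical Puppet manifest – the resource and file names are purely illustrative, not taken from any real module:

```
# ntp.pp -- illustrative recipe: keep ntp installed, configured, and running
package { 'ntp':
  ensure => installed,
}

file { '/etc/ntp.conf':
  source  => 'puppet:///modules/ntp/ntp.conf',
  require => Package['ntp'],
  notify  => Service['ntpd'],
}

service { 'ntpd':
  ensure => running,
  enable => true,
}
```

The point is that the desired state lives in the repository, so re-running the recipe converges any freshly built system to the same configuration.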

Test:
Now that we have built our system, the first thing to do is log in and check that the basics work. You do a ping, a ‘df -k’, check the hostname – some of the basic commands. Consider these the unit tests of the installed operating environment. Most of these checks are basic, so we might go a bit further with functional tests: send a mail and see if it arrives, deploy a sample application and see if we get a “hello world”. How about some failover tests, or integrating with shared infrastructure such as LDAP systems? These might be considered integration tests. In operations you would speak of monitoring instead of testing, but in the end we mean the same thing. Some automation software, like OpenQRM, has monitoring built in. Otherwise, monitoring is handled by the usual suspects such as Nagios and Xymon, and load testing by Tsung or JMeter.
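Those manual checks are easy to script, which is what turns them into repeatable unit tests. A minimal sketch – the three checks simply mirror the manual steps above, and you would add your own site-specific ones:

```shell
# Sketch of "unit tests" for a freshly built host.
fail=0
check() {
  desc=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc"; fail=$((fail + 1))
  fi
}

check "host has a name"            test -n "$(hostname)"
check "root filesystem is mounted" df -k /
check "loopback answers to ping"   ping -c 1 127.0.0.1

echo "smoke tests finished, $fail failure(s)"
```

Run from the CI server after every build, a non-zero failure count breaks the build just like a failing JUnit test would.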

Package:
I would say that packaging your operating environment is similar to making it virtual. Solutions such as VirtualBox, VMware, Xen, and Parallels all enable this; preparing an AMI for Amazon EC2 also falls in this category. By creating appliances or virtual images we prepare for the next phase, deployment. VMware Studio, SUSE Studio, Thincrust, and rPath all focus on generating an appliance that can be deployed in different environments.

Deploy:
All these environments have to run somewhere. This might be local on a physical machine using FAI or PXE boot, on a virtual machine, or even in the cloud – ready for use by the rest of the world. If you’re looking for a completely integrated solution, have a look at Spacewalk.

Friends with benefits

Building the operating environment is actually a CI pipeline that integrates with the development CI process. By integrating both pipelines, you can have both your development and operating environment changes tested continuously.

So what benefits do we get by looking at this with CI glasses?

  • When working on a project, both sysadmins and developers get instant feedback on changes they introduce – no last-minute gotchas before release dates.
  • Environments produced are kept up to date and follow the same administration process for each project.
  • Sysadmin requirements can be addressed in each sprint.
  • One build system unites both groups in achieving a new sprint goal.
  • A good reason to bring a sysadmin on board your project.
  • Sysadmins benefit from the reproducible workflow (which may even be useful for disaster recovery scenarios).
  • Security patches can be treated as changes to the operating environment and, thanks to the integration, their impact is immediately tested even if the project finished some time ago.
  • Test-driven automation: build the tests/monitoring first, then make the change.
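That last point deserves a tiny illustration: write the monitoring check first, watch it fail, then make the change that satisfies it. Everything here is a made-up stand-in (the file, the banner text, the check) – the shape of the loop is what matters:

```shell
# Test-driven automation in miniature.
CONF=./motd.example            # stand-in for a real config file such as /etc/motd
: > "$CONF"                    # start from a clean slate for the demo

check() { grep -q 'Authorized use only' "$CONF"; }

# 1. Write the check first -- it should fail:
check && echo "check: PASS" || echo "check: FAIL"   # -> check: FAIL

# 2. Make the change:
echo 'Authorized use only' >> "$CONF"

# 3. Re-run the same check -- now it passes:
check && echo "check: PASS" || echo "check: FAIL"   # -> check: PASS
```

In practice the “check” would be a Nagios or Xymon probe added before the change, so the monitoring system itself plays the role of the failing test.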

With such a CI environment, we’d certainly be one step closer to “Bridging the gap” between sysadmins and development. Thanks to Agile Webops for making this possible!

About the author:
Patrick Debois – Agile Sysadmin ([email protected])
Twitter: @patrickdebois
Blog: Just Enough Developed Infrastructure
Google groups: Agile System Administration

