Why Automated Testing is a Must for DevOps



You’ve heard a lot about test automation. But why is it so important? It’s a lot of additional effort and adds lots of code which needs to be maintained later, right?

DevOps Favors Continuous Releases

One of the important parts of any DevOps process is the regular release of working software. In Scrum, iterations tend to be only one or two weeks long. When you use Kanban you release whenever a reasonable package is ready – often multiple times a week. When you do that, you will inevitably see that manual testing becomes a bottleneck. Always.

The Fairy Tale Of The Test Cycle

If your team is not able to test everything as soon as it is ready, they will soon ask you to introduce one- or two-week test cycles. They will tell you that in that time they can do all the testing and that you’ll have a stable release afterwards.

Unfortunately, nothing could be further from the truth. The later you test, the more effort you have to spend fixing bugs that were introduced weeks ago. And because the code keeps changing during those testing weeks, every test cycle you do has to be repeated. In the end, your software is no more stable than it was before the test cycle.

Test Automation To The Rescue

The only way to support a rapid cadence of releases is to automate testing. Only if you use unit testing during implementation can you be sure that a later bug fix or refactoring does not break anything. And only if you use webrat or another integration testing tool can you afford to skip the manual regression tests.
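To make the unit-testing part concrete, here is a minimal sketch using Python’s built-in unittest module. The apply_discount function is a hypothetical example, not something from this post – the point is only that a small suite like this re-checks the behaviour after every bug fix or refactoring:

```python
import unittest

# Hypothetical function under test: a price calculation that a
# later refactoring might silently break.
def apply_discount(price, percent):
    """Return the price after a percentage discount, never below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return max(price * (1 - percent / 100), 0)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertAlmostEqual(apply_discount(100.0, 25), 75.0)

    def test_full_discount(self):
        self.assertEqual(apply_discount(80.0, 100), 0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Run it with `python -m unittest` locally, and wire the same command into your build so every commit re-runs the suite automatically.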

Fast releases drive manual testing efforts through the roof. Automate the critical parts step by step and you’ll be ready for continuous deployment.

Getting started with automated testing is not easy. What are the biggest stumbling blocks you see? Let us know in the comments!

6 thoughts on “Why Automated Testing is a Must for DevOps”

  1. Testing is awesome. If you’re writing new code there’s no excuse not to write tests.

    The biggest problem I see with testing is that it stops as soon as the code gets into production. The monitoring checks we run in production can be thought of as extremely lightweight unit tests that don’t get you a lot of code coverage.

    That said, running unit or integration tests in production can be quite tricky as they generally require setting up some sort of state before the test can run. When running these tests in development, most of these tests insert test data directly into the database, but in a production environment you generally don’t have (or want) that luxury.

    An approach I’ve seen to solve this problem is A/B testing, where you only run your tests against a certain segment of your users or infrastructure. You can do this relatively easily by creating a set of users in your production environment that you only run your integration tests against, and have a teardown task at the end of the tests that resets any data mutations on those users. Transactions in the database are a pretty good way of ensuring the data is rolled back, though it does require extra code in your application to do that.

    Another double-edged sword of running your dev tests in your production environment is when your monitoring integration tests touch parts of your application that induce excessive load. This will affect the performance of your app for normal users (though it might be good for discovering scaling problems early).

    This is definitely the next frontier the DevOps battle will be fought on.


  2. Lindsay, thanks for your comment. I agree that we should try to find ways to ensure correctness even in production.
    We are not there yet, but we run our test suite on our staging environment against a complete dump of the production database. That way we do no nice “clean room” testing but use the ugly (and potentially corrupted) production data. That’s as close as we can get to it right now.


  3. Testing, including automated testing, is normally done in a special QA environment that is somewhat different from the real, multi-tier, distributed production environment in the data center. On top of this, when applications are released to production, they require configuration changes and other alterations that enable them to run in a complex production environment. In my experience, this is where most “production bugs” and problems are introduced.

    As Lindsay mentioned, testing a segment of users in production will surface existing production bugs. That method should definitely be part of your arsenal, but to limit newly introduced bugs you should also look into application release automation. Getting rid of manual changes to the application just before release, and removing human error during deployment, is critical for continuous release.

    Coincidentally, Forrester Research is holding a webinar next week titled “Conquer Application Release Complexity” – very relevant to the readers of this blog post, IMO.

    Link to webinar registration: https://www1.gotomeeting.com/register/591701792


  4. I used automated testing in the past, until I realized I would just sit around watching the tests pass or fail. Sometimes tests would run when I already knew they would fail because I hadn’t fixed all the files yet – that’s a waste of time.

    So now I run the tests manually, usually after a few rewrites and before updating the server’s code. I find that much more efficient.


  5. Two major issues for our efforts in Test Automation:
    a) Quality is not improved – we still find a similar number and severity of bugs before product release.
    b) The re-use rate is very low due to frequent and late UI changes, so the effort of test automation is not that economical – and consequently managers have doubts about the ROI.

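The transaction-rollback idea from the first comment can be sketched in a few lines. This is only an illustration of the pattern – the users table and the in-memory SQLite database stand in for a real production schema, where you would run the same check against your actual tables:

```python
import sqlite3

def run_check_with_rollback(conn):
    """Insert a synthetic test user, exercise a query against it,
    then roll back so no test data survives in the database."""
    try:
        conn.execute("INSERT INTO users (name) VALUES (?)",
                     ("synthetic-test-user",))
        row = conn.execute(
            "SELECT COUNT(*) FROM users WHERE name = ?",
            ("synthetic-test-user",)).fetchone()
        assert row[0] == 1  # the integration check itself
    finally:
        conn.rollback()  # undo the data mutation, pass or fail

# Illustrative stand-in for a production database connection.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
run_check_with_rollback(conn)
# After the rollback, the synthetic user is gone again.
leftover = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
```

The `finally` clause is the important part: whether the check passes or blows up, the data mutation is rolled back, which is what makes the pattern tolerable against live data.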
