This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
Continuous Delivery (CD), the new darling of IT, aims to dramatically improve the pace and quality of software delivery by creating a repeatable, reliable pipeline for taking software from concept to customer. The benefits are faster time to market, higher quality and improved responsiveness. However, doing CD properly hinges on the maturity of your testing.
A new version of any application should be rigorously tested to ensure it meets all desired system qualities. It is important that all relevant aspects — whether functionality, security, performance or any other system quality — are verified by the application delivery pipeline. If in doubt, test and test again.
Many companies are starting initiatives to build delivery pipelines, automate environment creation and app deployment, and so on. That's all very well, but the most important goal of a pipeline is to allow you to make an informed (and, at some stage, perhaps automated) Go/No-go decision.
What do you have to do to get to that point? Simple: test, test and test again! Few things dampen an executive sponsor's enthusiasm for Continuous Delivery more quickly than a couple of embarrassing post-release outages.
Accurate tests will help you streamline development, enabling you to determine when you've done just enough to implement new features. At the same time, testing will help you to manage risks by ensuring your CD initiative doesn't go off the rails before it's had time to prove its worth.
The more testing you do, the better you will be able to determine whether the new deliverable is better than what's currently running.
The status quo in many companies is that there are no automated tests and only limited manual ones. Such a situation severely limits the throughput capacity and predictability of your delivery pipeline. The other extreme is lots of tests using many different tools. How do you aggregate this information to allow you to make a confident Go/No-go decision?
In an environment with mostly manual tests, failures are hard to reproduce. This is not a new problem, but it gets worse as the number of tests grows while your Continuous Delivery initiative ramps up.
At the other extreme, with too many tools and tests, the pipeline often simply takes too long and costs too much to run. Plus, there is the signal-to-noise issue: how much does one failure stand out in a crowd of thousands of less meaningful tests that always pass? What is the right amount of testing for the current context and what is the desired risk/cost/speed tradeoff?
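To make the aggregation question concrete, here is a minimal sketch of how results from several test tools might be rolled up into a single Go/No-go signal. The result format, suite names, thresholds and the `go_no_go` function are all illustrative assumptions for this article, not a standard or a specific product's API.

```python
# Hypothetical sketch: roll up results from multiple test tools
# into one Go/No-go decision. All names and thresholds here are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SuiteResult:
    name: str       # e.g. "unit", "security", "performance"
    passed: int
    failed: int
    blocking: bool  # does any failure here veto the release?

def go_no_go(results):
    """Return (decision, reasons): 'go' only if every blocking suite
    is clean and the overall pass rate clears a threshold."""
    reasons = []
    for r in results:
        if r.blocking and r.failed > 0:
            reasons.append(f"{r.name}: {r.failed} blocking failure(s)")
    total = sum(r.passed + r.failed for r in results)
    pass_rate = sum(r.passed for r in results) / total if total else 0.0
    if pass_rate < 0.99:  # illustrative risk/cost tradeoff threshold
        reasons.append(f"overall pass rate {pass_rate:.2%} below 99%")
    return ("no-go" if reasons else "go"), reasons

results = [
    SuiteResult("unit", passed=950, failed=0, blocking=True),
    SuiteResult("security", passed=40, failed=1, blocking=True),
    SuiteResult("performance", passed=20, failed=2, blocking=False),
]
decision, reasons = go_no_go(results)
print(decision, reasons)
```

The point of such a rollup is exactly the signal-to-noise issue above: a single blocking security failure stands out and vetoes the release, even though the overall pass rate is well above the threshold.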
Managing Tests in a CD Organization