Sunday, April 21, 2013

Notes on Continuous Delivery - Implementing a Testing Strategy


There are three things in life that are always held to be true: we will die someday, we will pay taxes, and software will have bugs... LOL

Welcome to part 4 of our Continuous Delivery series, continuing from part 3. This time, we spend a little while talking about testing.

A testing strategy is often overlooked in software projects. This should not be too surprising: we want to build applications quickly and release them quickly. However, leaving quality out of the picture, or deferring it to the end, is a terrible mistake.

Another case, which occurs more frequently, is the notion that testing is done entirely by a QA team. Part of implementing Continuous Delivery is getting rid of this antiquated notion. Testing is an interconnected responsibility shared across the software team. As a result, software teams must have QA engineers on their roster. This becomes extremely important when it comes time to create test automation, the core of CD. I think we build the best automated tests when QA and development work closely together: you get the best of both worlds.

In previous posts, we went over the different types of tests. It is important to incorporate automation for all of these different kinds into your deployment pipeline. We will spend a lot of time talking about this later on.

In addition, automated test suites make exceptional regression tests. Focus on automating only repeatable tasks. Exploratory testing, often called manual testing, should continue throughout the project, but only for aspects of the testing effort that are very difficult to automate or are done only sporadically.

Types of Tests

Tests have multiple dimensions: business facing and technology facing; each one can either support the programming effort or critique the project. Since this is a blog post for developers, I will focus on tests that support the development endeavors.

How much of your application should you test?

Some people have proposed 80% as a comprehensive goal for your code coverage. It makes sense in the context of the "80/20" rule: test the features that 80% of your customers will use.


Business-Facing Tests That Support Development

These are the functional or acceptance tests. Acceptance tests can verify functional as well as non-functional requirements. They should be part of your "done" definition for a specific story, like a completion criterion.

Examples of non-functional requirements include security, capacity, usability, performance, etc. Acceptance tests that pertain to the functionality of the system (the behavior of the system according to specifications) fall into the functional acceptance test category. The authors mention different tools to automate these, such as JBehave, Cucumber, and Twist. I have not had first-hand experience with them, so I can't share my own two cents here.

In my mind, however, I deem it impossible to automate the coverage of ALL acceptance criteria (100%); perhaps focusing on happy path testing is a good alternative. In an ideal situation, the test scripts can be written by the customers, and developers and testers can work on the implementation together. This can be facilitated by a high-level DSL tailored for the task. These customer-crafted scripts, written in the DSL, can then be handed to development to be turned into executable acceptance tests. As I said before, this sounds very utopian and theoretical to me, but if you are interested in pursuing it, Ruby and Groovy are programming languages well suited to writing DSLs. Leave a comment and let me know how it goes...
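To make the DSL idea concrete, here is a minimal sketch in Python of how plain-language steps can be mapped to implementation functions (the real tools mentioned above, like Cucumber, do this far more robustly; the step patterns and the shopping-cart scenario here are entirely made up for illustration):

```python
import re

# Step registry: maps plain-language patterns to implementation functions.
STEPS = []

def step(pattern):
    """Register a test step keyed by a plain-language regex pattern."""
    def decorator(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return decorator

# --- Step implementations (written by developers and testers) ---

@step(r"a cart with (\d+) items?")
def given_cart(ctx, count):
    ctx["cart"] = int(count)

@step(r"the customer adds (\d+) items?")
def when_add(ctx, count):
    ctx["cart"] += int(count)

@step(r"the cart contains (\d+) items?")
def then_cart(ctx, count):
    assert ctx["cart"] == int(count), ctx["cart"]

def run_scenario(lines):
    """Execute a customer-written scenario line by line."""
    ctx = {}
    for line in lines:
        for pattern, fn in STEPS:
            match = pattern.search(line)
            if match:
                fn(ctx, *match.groups())
                break
        else:
            raise ValueError("no step matches: " + line)
    return ctx

# A scenario a customer could write in (near) plain English:
scenario = [
    "Given a cart with 2 items",
    "When the customer adds 3 items",
    "Then the cart contains 5 items",
]
run_scenario(scenario)  # passes silently; a failing Then raises AssertionError
```

The customer owns the `scenario` text; the team owns the step functions behind it.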

It really depends on your situation and your project. I imagine mission-critical projects like airplane control systems, automotive software, etc., probably have much more test code than actual application code. James Shore, a leader in the Agile community, does not believe in automating acceptance tests. I provided the link to his article in the Resources section below.

Business facing tests can also be included as part of the demo. Certainly, the business owners would be interested in seeing their tests pass every iteration.


Technology-Facing Tests That Support Development

These are the unit tests, component or integration tests, and deployment tests. I'll reiterate: unit tests focus on isolating a very granular piece of code. They should NOT exercise any code other than the immediate method or function under test. Therefore, unit tests are typically written against test doubles such as stubs, or with a mocking framework such as JMock or PHPMock.

Unit tests are expected to be very fast, since they should not make any database calls, web service calls, filesystem operations, or asynchronous or threaded calls. Those all belong in component tests, which will be slower since they exercise the I/O ports of the system.
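As a sketch of the test-double idea, here is a small unit test using Python's built-in unittest and unittest.mock (standing in for JMock or PHPMock); the OrderService and its repository are hypothetical:

```python
import unittest
from unittest.mock import Mock

# Code under test: a small service whose only collaborator is a repository.
class OrderService:
    def __init__(self, repository):
        self.repository = repository  # injected, so tests can substitute a double

    def total(self, order_id):
        order = self.repository.find(order_id)
        return sum(item["price"] * item["qty"] for item in order["items"])

class OrderServiceTest(unittest.TestCase):
    def test_total_sums_line_items(self):
        # A mock stands in for the real repository: no database is touched,
        # so this test stays fast and isolated to OrderService.total().
        repo = Mock()
        repo.find.return_value = {
            "items": [{"price": 10, "qty": 2}, {"price": 5, "qty": 1}]
        }
        service = OrderService(repo)
        self.assertEqual(service.total(42), 25)
        repo.find.assert_called_once_with(42)  # verify the interaction

unittest.main(argv=["ut"], exit=False)
```

A component test for the same service would instead wire in the real repository and hit a test database, which is exactly why it lives in a slower suite.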

Also, you may decide to write deployment tests. Deployment tests verify that the build and deployment worked as expected. For instance, you may assert that configuration parameters were applied correctly or that certain files were written to the right directories.
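A deployment test can be as simple as a script that inspects the deployed directory. Here is a hypothetical sketch in Python; the file names, config keys, and directory layout are all invented, and the "deployment" is simulated in a temp directory so the example is self-contained:

```python
import configparser
import tempfile
from pathlib import Path

def check_deployment(app_home, expected_files):
    """Return a list of problems found in a deployed application directory."""
    failures = []
    # 1. Were the expected files written to the right directories?
    for rel in expected_files:
        if not (app_home / rel).is_file():
            failures.append("missing file: " + rel)
    # 2. Were configuration parameters applied correctly?
    config = configparser.ConfigParser()
    config.read(app_home / "app.ini")
    if config.get("db", "host", fallback="") == "localhost":
        failures.append("db.host still points at localhost (dev config?)")
    return failures

# Simulate a deployment into a temporary directory, then smoke-test it.
home = Path(tempfile.mkdtemp())
(home / "app.ini").write_text("[db]\nhost = db.prod.example.com\n")
(home / "app.jar").write_text("")  # stand-in for the real artifact
problems = check_deployment(home, ["app.jar", "app.ini"])
print(problems)  # an empty list means the build deployed cleanly
```

In a real pipeline, the same checks would run against the actual deployment target right after the deploy step, failing the build when anything is off.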


In sum, always incorporate testing as part of development, from project inception and sizing onward. Doing it at the beginning requires much less effort than doing it mid-way through or at the end. Make it an essential part of your "done" manifest and keep development and QA interconnected in this undertaking.

The most important automated tests to write are the happy path tests; focus on these first before implementing sad path tests (exceptional conditions).
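To illustrate the distinction, here is a small Python sketch; the withdraw function is hypothetical:

```python
# Code under test: a hypothetical withdrawal function.
def withdraw(balance, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Happy path first: the behavior most users exercise most of the time.
def test_withdraw_happy_path():
    assert withdraw(100, 30) == 70

# Sad path second: exceptional conditions, added once the happy path is green.
def test_withdraw_rejects_overdraft():
    try:
        withdraw(100, 200)
    except ValueError as e:
        assert "insufficient" in str(e)
    else:
        raise AssertionError("expected ValueError")

test_withdraw_happy_path()
test_withdraw_rejects_overdraft()
```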

A good testing strategy provides a level of confidence that the system behaves and performs the way it should at every commit. This investment has many positive effects that translate into fewer bugs, reduced support costs, and an improved reputation.

I've heard many people say testing also provides a way of documenting the application. By looking at the test suite for a particular component of an application, you can understand in detail how that component should work. People often prefer tests to any sort of written documentation. Combine this with good, up-to-date source documentation (javadocs or phpdocs), and you should be on track.

Automate repeatable acceptance tests to free up QA to do more difficult or elaborate exploratory testing. Also, build automated scripts to aid exploratory testing, e.g., inputting data into the system.
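As an example of such an aid, here is a hypothetical Python helper that seeds throwaway user records so a tester can explore a freshly deployed system without typing data in by hand (the field names and the idea of feeding JSON to an import endpoint are assumptions, not part of the book):

```python
import json
import random

def generate_users(n, seed=0):
    """Generate n fake user records; the seed makes a session reproducible."""
    rng = random.Random(seed)
    first = ["ana", "bo", "chen", "dee"]
    last = ["garcia", "lee", "okafor", "smith"]
    users = []
    for i in range(n):
        f, l = rng.choice(first), rng.choice(last)
        users.append({
            "id": i + 1,
            "name": f + " " + l,
            "email": "{0}.{1}{2}@example.test".format(f, l, i),
        })
    return users

# Dump to JSON and feed it to your app's import endpoint or a DB loader.
print(json.dumps(generate_users(3), indent=2))
```

Because the generator is seeded, a tester can recreate the exact same data set when retracing the steps that exposed a bug.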

In agile, you are expected to refactor ruthlessly. Building a comprehensive suite of unit, component, and acceptance tests can make heavy project refactoring much less risky. For instance, imagine you are using an ORM such as Hibernate as the persistence layer for your application against a Microsoft SQL Server database. You could really use the full power of your automated regression tests when it comes time to move your project to a database such as MySQL.

There are other types of testing that are meant to critique the project, such as usability testing, showcases, and exploratory testing. These are very hard to automate, so they are typically done manually.

If you are dealing with a legacy system, focus on testing what you changed. The worst situation to be in is an untested legacy system. Oftentimes, if you don't have access to the source code, opt for decoupled approaches via the Proxy or Adapter patterns, so that you can isolate and test only the new code. Testing for legacy systems is often not seen as a high priority by the business owners. However, it is important to emphasize and instruct them on the value of having such a regression suite and the protection it offers against new bugs.

Finally, managing a defect backlog is very important. Ideally, if we implemented all of the different types of tests I mentioned before, we would not have any bugs in the system. Right... we know this will never be true. Even with 100% code and branch coverage across all types of tests, QA and external users will always find bugs. Prioritize bugs at the beginning of your iteration planning. For instance, you can catalogue defects by severity level and make it a team policy to always eliminate severity one and two defects (i.e., blockers and criticals). Remember to keep the business owner in the loop: you might think a defect is critical when the customer does not.

Stay tuned for more!

Resources

  1. Humble, Jez, and Farley, David. Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation. Addison-Wesley, 2011.
