Automated Acceptance Testing

Managing dependencies between automated tests

I was at a meetup recently when someone asked the presenter how to manage dependencies between tests. The presenter gave a list of tools that support test execution ordering, so you can ensure tests run in a specific order to satisfy dependencies, and suggested passing data between tests via external sources.

But I don’t think this is a good idea at all.

I believe the best way to manage dependencies between automated tests is to not have automated tests dependent on each other at all.

I have found avoiding something is often better than trying to manage something. Need a storage management solution for your clutter? Avoid clutter. Need a way to manage dependencies between tests? Write independent tests.

As soon as you have tests that require other tests to have passed, you create a complex test spiderweb that makes it hard to work out the true status of any particular test. Not only does this make tests harder to write and debug, it also makes it difficult if not impossible to run them in parallel.

Not having inter-test dependencies doesn’t mean not having any dependencies at all. Targeted acceptance tests will still often rely on things like test data (or create it quickly via scripts in test pre-conditions), but this should be minimised as much as possible. The small number of true end-to-end tests that you have should avoid dependencies almost completely.
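To make the pre-condition idea concrete, here is a minimal Ruby sketch of a test creating its own data before exercising the system. `TestDataClient` is a hypothetical helper, an in-memory stand-in for whatever API or seed script your application actually provides; the point is that the test asks for fresh data itself rather than inheriting it from another test.

```ruby
# Hypothetical helper: in a real suite this would call the application's
# API or a seed script, not hold records in memory.
class TestDataClient
  def initialize
    @records = {}
    @next_id = 0
  end

  # Creates a fresh customer record and returns its reference number.
  def create_customer(name:)
    @next_id += 1
    @records[@next_id] = { name: name }
    @next_id
  end

  def find_customer(reference)
    @records[reference]
  end
end

# Pre-condition: the test makes its own data, then exercises the system.
client = TestDataClient.new
reference = client.create_customer(name: "Test Customer")
puts client.find_customer(reference)[:name]  # => Test Customer
```

Because each test creates what it needs, it can run alone, in any order, and in parallel with the rest of the suite.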

Replies to “Managing dependencies between automated tests”

Completely agree with this, though what if it cannot be avoided?

For example, we have an automation suite (Ruby, Cucumber, watir-webdriver) in which each feature file creates some test data, stores the reference number of the created data, and uses it for the next scenario in the same feature file. We cannot avoid this dependency because we need to check the system’s creation capability and then see whether the created data is usable. How do we handle the case where creating the test data fails: should we raise an exception, skip this feature file entirely, and move on to the next one?

I have looked around but have not found a solution so far. Do you have any suggestions on how to achieve or improve this?

Pardon me if this seems silly (silly to be posted here as well), but I am pretty new to this and your blogs have been very helpful so far. I would really appreciate your help in this 🙂
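One way to break that chain, sketched below in plain Ruby, is to give each scenario its own data: the “creation” scenario asserts only that creation works, and the “usable” scenario creates its own record first rather than reusing one left behind by the previous scenario. `SystemUnderTest` is a hypothetical in-memory stand-in for the real application, not the commenter’s actual suite.

```ruby
# Hypothetical stand-in for the real application API.
class SystemUnderTest
  def initialize
    @orders = {}
    @seq = 0
  end

  def create_order(item)
    @seq += 1
    @orders[@seq] = item
    @seq
  end

  def order_usable?(ref)
    @orders.key?(ref)
  end
end

sut = SystemUnderTest.new

# Scenario 1: "creation capability" — asserts creation works, and nothing else.
ref = sut.create_order("widget")
raise "creation failed" unless ref

# Scenario 2: "created data is usable" — creates its OWN order first,
# so it passes or fails on its own merits even if scenario 1 never ran.
own_ref = sut.create_order("gadget")
puts sut.order_usable?(own_ref)  # => true
```

With this shape, a creation failure fails only the creation scenario; nothing downstream needs to be skipped, because nothing downstream depends on it.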

I have found that managing code dependencies is relatively easy. With the various build systems and tools (npm, grunt, gulp, …), dependencies between packages can be managed well. However, managing test data dependencies is much harder. Many test automation assets are data driven or behaviour driven, and the tests depend largely on preset test data, so having multiple tests running against the same test data set is common. Maintaining such a data set while ensuring the isolation of tests is difficult.

In my experience, it is possible to apply the 80/20 principle. You can have one or more test data sets. Around 80% of the tests are read-only style tests, and their data set should not be hard to maintain: because the tests do not modify the test data, it is safe to share the set between them.
The remaining 20% are tests that perform write actions. Their test data sets should be set up under different accounts, so the data sets are separated from each other. For different test stages (Beta, Gamma, Preprod, …), different test data sets should be used, although the values could be the same. Meanwhile, you may like to use scripts to quickly set up a data set and clean up a dirty one. The data set can be created programmatically or copied from a predefined data table.
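The set-up/clean-up scripting idea above can be sketched roughly as follows. `DataStore` is a hypothetical in-memory stand-in for a real test environment, with a shared read-only set for the 80% and an account-scoped writable set for the 20%.

```ruby
# Hypothetical stand-in for a real environment's test data storage.
class DataStore
  def initialize
    @sets = {}
  end

  # Set-up script: creates an account-scoped copy of a data set.
  def set_up(account, rows)
    @sets[account] = rows.dup
  end

  def rows(account)
    @sets.fetch(account, [])
  end

  # Clean-up script: wipes an account's (possibly dirty) data set.
  def clean_up(account)
    @sets.delete(account)
  end
end

store = DataStore.new

# Shared read-only set: safe for the ~80% of tests that never write.
store.set_up(:shared_readonly, ["alpha", "beta"])

# A writable test gets its own account, its own copy, and a clean-up.
store.set_up(:writer_account, ["alpha", "beta"])
store.rows(:writer_account) << "gamma"   # the test mutates its own copy
store.clean_up(:writer_account)

puts store.rows(:shared_readonly).size   # => 2 (untouched by the writer)
```

The account-per-writer split is what keeps the shared read-only set trustworthy: no writable test can dirty it.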

Just my 2 cents.

I agree with you, Alister.
However, I have found that there’s a compromise which deals with my situation better than the extremes of ‘all dependent’ vs. ‘fully independent’ test execution.

In our project, the tests are grouped into suites of tests, each suite having a set-up and tear-down set of steps at the beginning and end so the environment is just right for execution.

Within each suite, each test is dependent on the previous test’s result for execution.

Our set-up and tear-down steps take a long time to execute, so if we had fully independent tests, the execution time would blow out to an unmanageable size, even with parallelism. Also, if the tests were *all* sequentially dependent in a single large suite, the cost of maintaining this ‘complex test spiderweb’ (as you say) would be deleterious to growing test coverage: poking new tests into an existing chain of regression tests becomes harder as the chain’s length increases.

Short chains are ideal, but the set-up cost of a chain needs to be a factor in designing the test execution.
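As a rough sketch of this compromise (the `Suite` class here is made up for illustration, not from the commenter’s project): pay the expensive set-up once per suite, run the suite’s tests in order, and skip the rest of a chain once a link fails, since the later links could not meaningfully pass.

```ruby
# Hypothetical suite runner: one set-up per suite, chained tests inside.
class Suite
  def initialize(name, &setup)
    @name = name
    @setup = setup
    @tests = []
  end

  def test(name, &body)
    @tests << [name, body]
  end

  def run
    env = @setup.call            # expensive set-up, paid once per suite
    results = {}
    broken = false
    @tests.each do |name, body|
      if broken
        results[name] = :skipped # later links of the chain are skipped
        next
      end
      begin
        body.call(env)
        results[name] = :passed
      rescue StandardError
        broken = true
        results[name] = :failed
      end
    end
    results
  end
end

suite = Suite.new("orders") { { orders: [] } }
suite.test("create") { |env| env[:orders] << 1 }
suite.test("broken") { |_env| raise "boom" }
suite.test("use")    { |env| env[:orders].first }

puts suite.run.values.inspect  # => [:passed, :failed, :skipped]
```

Keeping each suite (chain) short bounds both the skip-cascade after a failure and the maintenance cost of inserting new tests into the chain.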

So, as usual in our game, there’s a sweet spot somewhere in-between, depending on the context of your problem.

As an addendum to this, our project’s recent uptake of technological improvements (Docker) has reduced the environmental set-up costs, so we’re migrating towards a shorter-chained test execution policy, exactly as you explain above!
