
AMA: balance between testing activities

QA asks…

At my workplace I can see testers are caught between scripted testing, exploratory testing and automated testing. Some teams have dedicated resources for each type of testing, whereas others do a bit of everything. When it comes to priority, automated testing takes a back seat, scripted testing is rushed through (skipping detailed documentation) and exploratory testing, aka "just have a play around to see if you can break something", takes the driver's seat. What, in your view, is the right balance? How do you structure your testing so you can test effectively?

My response…

Finding balance is always a challenge. As I explained recently, I have found that more and more software developers are interested in automated testing these days, so collaborating on these automated tests with developers, or moving (some) responsibility for these tests to developers, is a great way to free up more time for human testing.

As I explained in another response, I don’t believe in scripted manual testing for regression purposes; the automated regression tests should cover what those manual scripts would, so the only testing I do is of the exploratory kind. I still plan my testing, just before or as I test, to work out the kinds of things I want to explore/cover and which browsers/devices or operating systems I will use. The key to good exploratory testing, I have found, is to test small changes of functionality, as there’s much less risk in gradually (continuously) introducing small pieces of functionality than in a single large change.
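To illustrate what replacing a manual regression script looks like, a scripted step such as "enter an invalid email address and verify it is rejected" can become a permanent automated check instead. This is a minimal sketch in plain Python with pytest-style test functions; validate_email is a hypothetical stand-in for whatever your application actually does:

```python
import re

def validate_email(address: str) -> bool:
    """Hypothetical stand-in for the application's real email validation."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

# Each check below replaces one step of the old scripted manual regression run,
# so it runs on every build instead of being re-executed by hand.
def test_rejects_address_without_domain():
    assert not validate_email("user@")

def test_accepts_well_formed_address():
    assert validate_email("user@example.com")
```

Once checks like these run in the pipeline, the equivalent manual script adds no value, and that time goes to exploring the new behaviour instead.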

I don’t think there’s a ‘right’ balance, as this will depend on your organization, your resources and how much collaboration you have. I typically spend 40% of my time writing and maintaining end-to-end automated tests, and the remaining 60% on human testing: exploratory testing of new features, continuous dogfooding on different browsers/devices, catfooding, visually recording flows for reference and historical purposes, triaging existing bugs and raising new enhancement requests. If we had more developers working on our e2e automated tests, which there has been interest in, I imagine my automation effort could drop to, say, 20%, but I won’t really know until we get there.
