
Testing beyond requirements? How much is enough?

At the Brisbane Software Testers Meetup last week there was a group discussion about whether testers should test beyond requirements/acceptance criteria, and if so, how much is enough? Where do you draw the line? The question came from an attendee whose manager had pulled him up over a production bug that wasn’t found in testing but also wasn’t in the requirements. If it wasn’t in the requirements, how could he test it?

In my opinion, testing purely against requirements or acceptance criteria is never enough. Here’s why.

Imagine you have a set of perfectly formed requirements/acceptance criteria, which we’ll represent as this blue blob.


Then you have a perfectly formed software system your team has built, represented by this yellow blob.


In a perfect, yet non-existent, world, all the requirements/acceptance criteria are covered perfectly by the system, and the system consists of only the requirements/acceptance criteria.

[Diagram: requirements and system overlapping perfectly]

But in the real world there’s never a perfect overlap. There are requirements/acceptance criteria that are either missed by the system (part A) or met by the system (part B). Both of these can easily be verified by requirements- or acceptance-criteria-based testing. But most importantly, there are things in your system that are not specified by any requirements or acceptance criteria (part C).

[Diagram: requirements and system partially overlapping, with parts A, B and C]

The things in part C often consist of requirements that have been made up (assumptions), as well as implicit and unknown requirements.

The biggest flaw in testing purely against requirements is that you won’t discover the things in part C, as they’re not requirements! But, as shown by the example from the tester meetup, even though something may not be specified as a requirement, the business can still consider it one when it affects usage.

Software development should aim to leave as few assumptions and implicit or unknown requirements in a system as reasonably possible. Different businesses, systems and software have different tolerances for how much effort is spent reducing these unknowns, so there’s no one-size-fits-all answer to how much is enough.

But there are two activities that a tester can perform and champion on a team which can drastically reduce the size of these unknown unknowns.

1 – User Story Kick-Offs: I have worked only on agile software development teams for the last several years, so all the functionality I test is developed in the form of a user story. I have found the best way to reduce the number of unknown requirements in a system is to make sure every user story is kicked off with a BA, a tester and a developer (often called the Three Amigos) present, and that each acceptance criterion is read aloud and understood by all three. At this point, as a tester, I like to raise items that haven’t been thought of, so that they can be specified as acceptance criteria rather than making it into the system (or not) through assumption.

2 – Exploratory Testing: As a tester on an agile team I make time not only to test the acceptance criteria and specific user stories, but also to explore the system, understand how the stories fit together, and think of scenarios above and beyond what has been specified. Whilst user stories are good at capturing vertical slices of functionality, their weakness, in my opinion, is that they are just a ‘slice’ of functionality, so cross-story requirements are often missed or merely implied. This is where exploratory testing is great: it tests these assumptions and raises any issues that arise across the system.


I don’t believe there’s a clear answer to how much testing above and beyond requirements/acceptance criteria is enough. There will always be things in a system that weren’t in the requirements, and as a team we should strive to reduce what falls into that category as much as possible given the resources and time available. It isn’t the tester’s role either to test only the requirements or to be solely responsible/accountable for requirements that weren’t specified: the team should own this risk.

Replies to “Testing beyond requirements? How much is enough?”

I couldn’t agree more, Alister… As usual, you’re an asset to the international QA testing community. QA testers who take a “rote” approach to testing requirements are only doing part of the job. In reality, the “B” section is often quite a bit smaller, and the “C” section bigger, than in your drawing.

The amount of testing beyond requirements that’s required depends loosely on a few factors. What level of “polish” is required? Will the end users be well-trained, or will they be figuring out the software as they go? If they’re figuring it out as they go, then intuitive design and overall user-friendliness are critically important, and these are two important aspects of good software which are rarely communicated by the formal requirements. Lastly, how complex is the system? With added complexity comes the risk of edge or boundary cases that the authors of the requirements were unable to foresee, so a careful analysis of the state transition map and edge/boundary cases becomes important, in addition to user-story kick-offs and exploratory testing.

In short, requirements shouldn’t be trusted any more than developers should, and exploratory testing is always very important, unless the system under test is quite simple.

I agree with Alister’s comments that if you only test the user stories you are not covering all the interactions with the rest of the system; however, isn’t regression testing supposed to cover areas A and C, especially if you introduce some automation testing for this purpose?
In my practice it has always been hard to clearly identify the A and C areas of a system. What I have done is ask developers for information about affected functionality during the user story review, and add a section to the user story to document these affected areas or system integrations.
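One way the automation idea above can work in practice: once an implicit requirement from area C is discovered (through exploration or a production bug), it can be pinned down as an automated regression check so it stays covered from then on. A minimal sketch, with all names and the "case-insensitive username" requirement entirely hypothetical:

```python
# Hypothetical example: "usernames are case- and whitespace-insensitive"
# was never written as an acceptance criterion, but exploratory testing
# showed users expect it. Capturing it as a regression test moves it
# from area C (unspecified) into area B (specified and verified).

def find_user(users, username):
    """Look up a user, normalising case and surrounding whitespace."""
    return users.get(username.strip().lower())

def test_login_ignores_case_and_whitespace():
    users = {"alice": {"id": 1}}
    # The explicit requirement: an exact match works.
    assert find_user(users, "alice") == {"id": 1}
    # The implicit requirements uncovered by exploration:
    assert find_user(users, "ALICE") == {"id": 1}
    assert find_user(users, "  alice  ") == {"id": 1}

test_login_ignores_case_and_whitespace()
```

The regression suite then shrinks area C over time: each discovered assumption becomes an explicit, automatically checked behaviour.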

Great article, Alister. I prefer to talk about user expectations rather than requirements, but I believe the same analogy holds: the difference between the system and the user expectations is the unknown. By using user story kick-offs with the Three Amigos and performing exploratory testing, you aim to discover the unknowns. I find this approach extremely useful and apply it with my teams with excellent results.
By recognising the unknowns and trying to bring them to the surface, you broaden the activity of testing from mere verification (wasteful) to product development (valuable).

The main problem with systems is that none of the sources of requirements will ever give you the full set of data needed in the real world for the money people are willing to assign to a project, unless it is a simple one-line change.
The financial angle will always limit the ‘thinking’ time for all parties, and thus hinder their ability to cover the edge conditions; that is why we have risk-based testing.

The test person or department ‘should’, but is rarely able to, keep a ‘risk register’.
In that register there should always be an entry stating that the requirements are not complete; how can they be, on a budget?

Imagine testing an online map: can it go to any point in the world? Do you test every point? Even with an automated test system that would be time-consuming, and yet, unless you have done so, can you be 100% sure it will work? The answer is no; even clever mathematics can only tell you “probably”.
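The trade-off described above is usually handled by sampling: rather than every coordinate on the map, test a fixed set of risky edge points plus a random sample. A minimal sketch, where `in_bounds` is a hypothetical stand-in for whatever check the real system under test would perform:

```python
import random

# Exhaustively testing every world coordinate is infeasible, so combine
# deliberately risky edge cases (poles, antimeridian, origin) with a
# seeded random sample. `in_bounds` stands in for the system under test.

def in_bounds(lat, lng):
    """Stand-in check: is the point a valid world coordinate?"""
    return -90.0 <= lat <= 90.0 and -180.0 <= lng <= 180.0

EDGE_CASES = [(90, 0), (-90, 0), (0, 180), (0, -180), (0, 0)]

def sample_points(n, seed=42):
    """Generate n reproducible random coordinates (seeded for repeatable runs)."""
    rng = random.Random(seed)
    return [(rng.uniform(-90, 90), rng.uniform(-180, 180)) for _ in range(n)]

for point in EDGE_CASES + sample_points(1000):
    assert in_bounds(*point)
```

Sampling like this only ever gives the “probably” mentioned above, which is exactly why the edge cases are listed explicitly rather than left to chance.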

One thing that many do not recognise is that in testing, unlike many other IT disciplines, it is essential for a person to have a broad range of experience. The technical aspects, mathematical and logical, are useful, but the ability to fill in those missing test areas comes directly from experience. Without that experience, how can you know what tests might be required?

As a tester, you are ‘required’, as part of the job, to be part psychic (seeing what people don’t tell you), part miracle worker (doing 10 days’ testing in 2), and part manager, in that you need to manage the customer’s expectations. And when they come back and ask why you missed this, you say you did your best given the limitations, fix the outstanding problem, plan against similar and associated problems next time, and move on.
