I don’t think anyone would disagree that creating and maintaining a set of e2e automated tests (like Playwright) takes a lot of time and effort, and therefore costs a lot of money.
So how does one demonstrate the value of investing time/effort/money into automated e2e testing?
My goal for automated e2e tests like Playwright is to have just enough coverage that the team/project/product requires zero manual regression testing to release frequently.
I’ve written about this before: it doesn’t mean no human testing. It just means no manual regression tests or test scripts that a human follows to make sure existing functionality isn’t being broken by changes introduced into your system.
When looking at metrics I like the GQM (Goal-Question-Metric) approach, where you start with goals, devise questions to determine whether you’re achieving those goals, and create metrics to answer those questions.
Our goal is already articulated above:
Goal: Zero manual regression testing
Some questions we could ask to see whether we’re achieving our goal would be:
- How good are our Playwright tests at catching regressions?
- How much manual regression testing do we perform?
And finally some metrics to answer our questions:
- Number of regression bugs found by the Playwright test suite
- Number of regression bugs not caught by the Playwright test suite
- Time spent performing manual regression testing
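The GQM breakdown above can be sketched as a simple data structure. This is purely illustrative; the type and field names here are my own, not part of any GQM tooling:

```typescript
// An illustrative GQM model: one goal, its questions, and the
// metrics that answer each question. Names are hypothetical.
interface GQM {
  goal: string;
  questions: { question: string; metrics: string[] }[];
}

const regressionGQM: GQM = {
  goal: "Zero manual regression testing",
  questions: [
    {
      question: "How good are our Playwright tests at catching regressions?",
      metrics: [
        "Number of regression bugs found by the Playwright test suite",
        "Number of regression bugs not caught by the Playwright test suite",
      ],
    },
    {
      question: "How much manual regression testing do we perform?",
      metrics: ["Time spent performing manual regression testing"],
    },
  ],
};
```

Writing it down this way makes the traceability explicit: every metric you collect should map back to a question, and every question back to the goal.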
I like keeping things really simple when it comes to collecting and displaying metrics.
I created a Confluence page where I simply recorded regressions as they happened, in a table like this:
| # | Date | Regression | Detected by | Owner | Status |
|---|--------|--------------------------------|------------|---------|-----|
| 1 | 10 Jan | Welcome screen doesn’t display | Playwright | Alister | WIP |
And using the table I created two simple graphs within the Confluence page to show our metrics:
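The tallies behind graphs like these can be computed straight from the table rows. A minimal sketch, assuming each row is captured as a record (the field names are my own, not a Confluence API):

```typescript
// One row of the regression table, as recorded on the Confluence page.
// Field names are illustrative.
interface Regression {
  date: string;
  description: string;
  detectedBy: "Playwright" | "Human"; // what (or who) caught the regression
  status: string;
}

const regressions: Regression[] = [
  {
    date: "10 Jan",
    description: "Welcome screen doesn’t display",
    detectedBy: "Playwright",
    status: "WIP",
  },
  // ...more rows added as regressions occur
];

// Metric: regression bugs found by the Playwright test suite.
const caughtBySuite = regressions.filter(r => r.detectedBy === "Playwright").length;

// Metric: regression bugs not caught by the Playwright test suite.
const missedBySuite = regressions.length - caughtBySuite;
```

Plotting `caughtBySuite` against `missedBySuite` over time gives the two graphs without any extra record-keeping beyond the table itself.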
I think these metrics answer the question “how good are our Playwright tests at catching regressions?”
To answer the other question, “how much manual regression testing do we perform?”, I can ask our QAs in our fortnightly catch-up and record the results alongside the data above.
Using the answers to these questions, we can quite easily determine whether we’re meeting our goal of zero manual regression testing, and whether Playwright is helping us get there.
What metrics do you collect around automated e2e testing?