
Writing automated e2e tests for known buggy systems

Every system I’ve worked on, old or new, is full of known bugs (and unknown bugs for good measure 🤪). These known bugs are the ones that have never made it to the top of the bug backlog to be fixed because there’s always other more important work to do.

But what do you do with automated e2e tests that exercise such code and demonstrate such bugs?

Imagine a very simple example of a test that visits our page and asserts the title is correct.

Our page looks like this:

[Screenshot: super simple webpage]

And our Playwright code looks like this:

test.only('can have a test for a known bug in the system', async ({ page }) => {
  await goToPath(page, 'leave')
  await expect(page.locator('#leavepage')).toHaveText('WebDriverJs Demo Leave Page');
})

You can see the text our test asserts is different from what is actually displayed. The text in our test is what we want the page to display, however the system displays something else, so our test fails when we run it.

What do we do with such tests? There are a few different options, each with its own advantages and disadvantages.

Option One: Commit the failing test as it is

Advantages: the test is pure and correct, and it still runs on every build, highlighting the functionality that is wrong

Disadvantages: every build will fail until this functionality is fixed, creating red/failed builds, drowning out immediate feedback on other potential issues found in those builds, and causing people to lose confidence in the overall build results.

I personally wouldn’t recommend this approach as I think the noise of the failing builds outweighs any benefits it has.

Option Two: Mark the failing test as skipped

test.skip('can have a test for a known bug in the system', async ({ page }) => {
  await goToPath(page, 'leave')
  await expect(page.locator('#leavepage')).toHaveText('WebDriverJs Demo Leave Page');
})

Advantages: no noise in builds, since the test no longer runs

Disadvantages: the test can be forgotten about since it never runs, and other issues could be introduced in the feature. For example, if the text was changed to something else that is also wrong, we wouldn’t know, since the test is not being run.

Whilst this is preferable to option one, it often results in forgotten tests, so I would also not recommend it.

Option Three: Update the assertion to be incorrect (with a comment)

test.only('can have a test for a known bug in the system', async ({ page }) => {
  await goToPath(page, 'leave')
  await expect(page.locator('#leavepage')).toHaveText('WebDriverJs Demo Leaving Page'); // BUG: This text should be WebDriverJs Demo Leave Page
})

Advantages: if the text changes to any value (whether now correct, or still incorrect) the test will fail alerting us to a change in functionality

Disadvantages: the tests are no longer representative of what is expected of the system – the assertions contradict what we actually expect.

I probably prefer this to having a skipped test, but something doesn’t feel right about a false assertion.

Option Four: Mark the failing test as expected to fail

Playwright actually offers a solution for scenarios like this: the test.fail() annotation, which marks a test as expected to fail. The test is still run, but if it fails it passes, and if it passes it fails 🙃

We can write the test like this:

test.only('can have a test for a known bug in the system', async ({ page }) => {
  test.fail() // BUG: The text is presently wrong
  await goToPath(page, 'leave')
  await expect(page.locator('#leavepage')).toHaveText('WebDriverJs Demo Leave Page');
})

And when it fails it “passes” with a green cross:

If the system was fixed, this test would then fail, and we’d know to remove the test.fail() line.
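
test.fail() also accepts an optional condition and description, which is a handy place to record a bug reference (a sketch; the ticket number is hypothetical):

test.only('can have a test for a known bug in the system', async ({ page }) => {
  test.fail(true, 'BUG-123: the leave page title is presently wrong')
  await goToPath(page, 'leave')
  await expect(page.locator('#leavepage')).toHaveText('WebDriverJs Demo Leave Page');
})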

Advantages: if the text changes to the correct value we will know as this test will pass when we don’t expect it to. We can keep our assertions correct/pure.

Disadvantages: if the test were to fail in a different way we wouldn’t know about it, since all the test cares about is that it fails (which we’re expecting).

Whilst this can hide other test failures, since I aim to write independent tests I can live with that risk, so this is my preferred approach to known failures.

How do you deal with known failures? Any of these ways or another I’ve missed?


Playwright in C# (.NET Core)

Whilst I was doing some reading of the Playwright docs I noticed they have C# bindings (as well as Python and Java, but not Ruby). Since it’s been a couple of years since I’ve used C#, I thought I’d take a look at how it works, especially considering .NET Core has support for Mac, which makes working in C# .NET so much easier for me.

First I downloaded Visual Studio for Mac Community Edition, which was pretty easy to install; this included the .NET Core framework, which provides the dotnet command line tool.

One thing about .NET Core is that a lot more is done via the command line.

Installing Playwright?

dotnet tool install --global Microsoft.Playwright.CLI
playwright install

Creating a new NUnit Project?

dotnet new nunit -n PlaywrightNunitDemo

Adding Playwright, Building and Running Your Tests?

dotnet add package Microsoft.Playwright.NUnit
dotnet build
dotnet test

My tests end up looking like this:

using System.Threading.Tasks;
using Microsoft.Playwright.NUnit;
using NUnit.Framework;
using PlaywrightNunitDemo.lib;

namespace PlaywrightNunitDemo
{
    [Parallelizable(ParallelScope.Self)]
    public class Scenario02 : PageTest
    {
        [Test]
        public async Task CanCheckForErrors()
        { 
            string errors = await AppHelpers.VisitURLGetErrors(Page, "/error");
            Assert.AreEqual(": Purple Monkey Dishwasher Error", errors);
        }

        [Test]
        public async Task CanCheckForNoErrors()
        {
            string errors = await AppHelpers.VisitURLGetErrors(Page);
            Assert.AreEqual(string.Empty, errors);
        }
    }
}

and my reusable Playwright code can live in a class with static methods:

using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Microsoft.Playwright;

namespace PlaywrightNunitDemo.lib
{
    public class AppHelpers
    {
        public static async Task<IResponse> VisitURL(IPage page, string path = "/")
        { 
            var config = new ConfigurationBuilder().AddJsonFile("appsettings.json").Build();
            string url = config["BASE_URL"] + path;
            return await page.GotoAsync(url);  
        }

        public static async Task<string> VisitURLGetErrors(IPage page, string path = "/")
        {
            var errors = "";
            page.PageError += (_, exception) => { errors = errors + exception; };
            await VisitURL(page, path);
            return errors;
        }
    }
}
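
For reference, the appsettings.json that AppHelpers reads might look like this (a minimal sketch; the URL is a placeholder for whatever system you’re testing). Depending on your project setup you may need to set the file to be copied to the output directory so the test runner can find it.

{
  "BASE_URL": "https://example.com"
}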

Summary

I quite enjoyed working in C# in previous roles. There’s something nice about typed languages that gives you confidence, as you write the code, that things will do what you expect. Having .NET Core easily accessible on a Mac and being able to use Playwright makes C# even nicer.

Example Code

My example code is: https://github.com/alisterscott/PlaywrightNunitDemo

I’ll see if I can get this running on Linux in CircleCI soon.


Playwright Test Runner

I’d previously shared how to set up Playwright with Jest as a test runner which enabled us to do some cool things like:

  • Parallel test execution,
  • Automatic retries,
  • HTML reports (using Jest Stare), and
  • Screenshots when failing

It turns out that Playwright now supports all of these features without having to use Jest! It’s called the Playwright Test Runner.

I went ahead and created a fork of my existing code to show how to set this up with the Playwright Test Runner.

Specifying Tests

Fortunately the syntax to specify tests is almost identical to Jest’s, but you don’t need to worry about spawning browser contexts yourself 😎

This Jest test:

jest.retryTimes(1)

test('can use xpath selectors to find elements', async () => {
  global.page = await pages.spawnPage()
  await nav.visitHomePage(global.page)
  await home.clickScissors(global.page)
}, jestTimeoutMS)

becomes:

const { test } = require('@playwright/test')

test('can use xpath selectors to find elements', async ({ page }) => {
  await nav.visitHomePage(page)
  await home.clickScissors(page)
})

Running in Parallel

Like Jest, Playwright Test automatically runs your tests in parallel, sharing isolated browser pages within a browser instance and spawning separate browsers across test files. The time taken was pretty much identical:

Jest

 PASS  scenarios/example4.spec.js
 PASS  scenarios/example3.spec.js
 PASS  scenarios/example5.spec.js
 PASS  scenarios/api.spec.js
 PASS  scenarios/example2.spec.js
 PASS  scenarios/example.spec.js

Test Suites: 6 passed, 6 total
Tests:       10 passed, 10 total
Snapshots:   2 passed, 2 total
Time:        5.514 s

Playwright Test

Running 10 tests using 6 workers

  ✓ scenarios/api.spec.js:5:1 › [chromium] can GET a REST API and check response using approval style (590ms)
  ✓ scenarios/example.spec.js:4:1 › [chromium] can wait for an element to appear (3s)
  ✓ scenarios/example2.spec.js:4:1 › [chromium] can handle alerts (4s)
  ✓ scenarios/api.spec.js:12:1 › [chromium] can GET a REST API and check response using assertion style (587ms)
  ✓ scenarios/example3.spec.js:5:1 › [chromium] can check for errors when there should be none (1s)
  ✓ scenarios/example4.spec.js:5:1 › [chromium] can check for errors when there are present (1s)
  ✓ scenarios/api.spec.js:25:1 › [chromium] can POST a REST API and check response using approval style (536ms)
  ✓ scenarios/example5.spec.js:6:1 › [chromium] can use xpath selectors to find elements (988ms)
  ✓ scenarios/api.spec.js:32:1 › [chromium] can POST a REST API and check response using assertion style (545ms)
  ✓ scenarios/example.spec.js:9:1 › [chromium] can use an element that appears after on page load (220ms)


  10 passed (5s)
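
By default Playwright Test picks the number of workers based on the available CPU cores (six on my machine above), and it can be overridden per run:

npx playwright test --workers=4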

Automatic Retries

Jest supports automatic retries in the test itself:

jest.retryTimes(1)

In Playwright Test you can configure retries globally, or per run on the command line:

// playwright.config.js
module.exports = {
  retries: 2
}

npx playwright test --retries=3

HTML Reports

I couldn’t see HTML report output for Playwright Test – but there are heaps of other formats, like JSON and JUnit (for CI).
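
Either format can be selected on the command line, for example:

npx playwright test --reporter=junit
npx playwright test --reporter=json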

Screenshots

Screenshots are easily configured for Playwright Test: captured for each test either never, always, or only on failure (my preference).

module.exports = {
  use: {
    screenshot: 'only-on-failure'
  }
}

But wait there’s more

The Playwright Test Runner also has these very cool features:

Visual comparisons (visdifs)

test('can wait for an element to appear', async ({ page }) => {
  await nav.visitHomePage(page)
  await page.waitForSelector('#elementappearschild', { state: 'visible', timeout: 5000 })
  expect(await page.screenshot()).toMatchSnapshot('element-appears.png')
})

At any point in your tests you can take a screenshot and store it to visually compare with future runs, with fine-grained tweaking of matches. The best part about this, in my opinion, is that these visuals are stored alongside your tests (not in some third party system) so you know exactly what you’re expecting to see 😎
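
The comparison can also be tuned; for example the threshold option sets how much per-pixel colour difference is tolerated, between 0 and 1 (a sketch, with an arbitrary value):

expect(await page.screenshot()).toMatchSnapshot('element-appears.png', { threshold: 0.3 })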

The one downside is that the screenshots are platform specific, so when I checked in the ones generated on my Mac, CI failed as it was running on Linux and didn’t have baseline files to compare against. I just downloaded the captured files from CircleCI and stored them as a baseline.

And it also supports other comparisons – similar to the Jest snapshots I have previously demonstrated. You just need to stringify the values first:

const supertest = require('supertest')

test('can GET a REST API and check response using approval style', async () => {
  const request = supertest('https://my-json-server.typicode.com/webdriverjsdemo/webdriverjsdemo.github.io')
  const response = await request.get('/posts')
  expect(response.status).toBe(200)
  expect(JSON.stringify(response.body)).toMatchSnapshot('posts.txt')
})

Closing Thoughts

I was pretty blown away by the Playwright Test runner. It offers everything Jest provides with less code for me to write, plus it has built-in visual comparison tools that can also be extended to do API approval snapshot testing. Playwright is well on its way to becoming my e2e test automation tool of choice.

Show me the Code

Of course, the code is here: https://github.com/alisterscott/playwright-test-demo

Passing CI here: https://app.circleci.com/pipelines/github/alisterscott/playwright-test-demo/7/workflows/fe13f579-1f53-4a01-834e-6a4e7388e22c


Generating data for e2e automated tests

I generally prefer creating e2e automated tests that generate their own data so that the test is repeatable, more deterministic and less dependent on external data and factors which can change.


When writing automated e2e tests that generate data I’ve found there are two common approaches:

  1. Generate static data: this data remains the same between test runs
  2. Generate random data: this data can change each test run

An example of static data for a test would be:

const ContactModel = function () {
  return {
    firstName: 'Sammy',
    lastName: 'Snake',
    phoneType: 'MOBILE',
    phoneNumber: '0422888444',
    emailType: 'PERSONAL',
    email: 'sammy.snake@hotmail.com',
    type: 'INDIVIDUAL'
  }
}

and the same example using randomised data:

function pick (list) {
  return list[Math.floor(Math.random() * list.length)]
}

const ContactModel = function () {
  const firstName = pick(['Aaron', 'Becca', 'Charlie', 'Donna', 'Eckbert', 'Fred', 'Graham', 'Holly', 'Ignatius', 'Josephine'])
  const lastName = pick(['Aardvark', 'Bear', 'Cat', 'Dog', 'Eagle', 'Fox', 'Gorilla', 'Horse', 'Iguana', 'J'])
  return {
    firstName: firstName,
    lastName: lastName,
    phoneType: 'MOBILE',
    phoneNumber: `+${Math.round(Math.random() * 9999999999999)}`,
    emailType: 'PERSONAL',
    email: `${firstName.toLowerCase()}.${lastName.toLowerCase()}@${pick(['hotmail.com', 'yahoo.com', 'gmail.com', 'aol.com'])}`,
    type: 'INDIVIDUAL'
  }
}

Generally speaking I will choose static data when writing automated e2e tests as it’s more repeatable and consistent; however, I still commonly see randomly generated data being used in automated e2e tests.

I think the reason is that varying data is a good exploratory testing technique for finding bugs; however, automated e2e tests aren’t about exploring functionality but rather ensuring the functionality continues to work as expected (regression testing).

They don’t have to be completely separate concepts though: my checked-in automated e2e test can use static data so it’s consistent and repeatable, but I could modify the same automated test to use randomly generated input, which I could run looped for a large number of iterations to assist my exploratory testing and find bugs around input.
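
As a sketch of how that toggle might look (the RANDOM_DATA environment variable and the renamed factories are my own assumptions for illustration):

// Hypothetical switch between the two factories shown above, renamed
// StaticContactModel and RandomContactModel here for clarity
function buildContact () {
  return process.env.RANDOM_DATA === 'true' ? RandomContactModel() : StaticContactModel()
}

// Checked-in runs use static data; for exploratory sessions run the same
// test in a loop with RANDOM_DATA=true to vary the input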

I’ve found that great testers use automated tests as a technique to assist them with their exploratory testing.


What’s your preferred approach? Do you generate static or randomised data or use existing data for your automated e2e tests?


Playwright + Jest = 💖

We were looking at a possible replacement for our dated Protractor + Cucumber e2e testing framework. As we move away from Angular to React microapps we have found that Protractor doesn’t work very well or efficiently, and Cucumber isn’t giving us any benefits.

It was a good opportunity to do some research and tinkering to answer the question lingering in my mind: in 2020, what e2e testing tool would I use by default for a dynamic React-based web application?

After some experimentation it came down to Puppeteer + Jest or Playwright + Jest. I’ll compare those in this post.

Why Jest?

As explained previously we don’t need a BDD framework, but we do need something that allows us to specify our tests, create assertions and run these tests (in parallel). Jest, particularly when using the Jest Circus test runner, seems the most mature tool in this regard in 2020. Whilst Jest is often associated with React (it also handles snapshot test results), it doesn’t need to be used with React: it’s possible and easy to use it as a standalone Node.js testing library.

Parallel Support

Jest by default runs tests across files in parallel and uses the available resources to scale appropriately using processes/threads. I’ve found this particularly good as I write independent e2e tests which can scale through parallelism, and using Puppeteer and Playwright you can spawn new incognito browser contexts to run them in.
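
In Playwright, for example, a fresh incognito-style context per test might look like this (a minimal sketch; the spawnPage helper name matches the tests below, but this implementation is my assumption):

const { chromium } = require('playwright')

let browser

// Spawn an isolated browser context and return a fresh page.
// Contexts don't share cookies or storage, so parallel tests stay independent.
async function spawnPage () {
  browser = browser || await chromium.launch()
  const context = await browser.newContext()
  return context.newPage()
}

module.exports = { spawnPage }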

Auto-Retry Support

Whilst I believe in writing deterministic e2e automated tests, since e2e tests are full-stack there are often external dependencies and services beyond our control (like a third party domain provider), and I’d rather prioritize test reliability over test perfection. With this in mind, I think it’s important to be able to automatically retry a single failing test scenario before failing a build, and the Jest Circus test runner supports this:

jest.retryTimes(1)

test('can wait for an element to appear', async () => {
  global.page = await pages.spawnPage()
  await nav.visitHomePage(global.page)
  await global.page.waitForSelector('#elementappearschild', { visible: true, timeout: 5000 })
}, jestTimeoutMS)

HTML Reports and Screenshots

Nice looking HTML reports are easy to achieve by using jest-stare, and screenshots are easy to generate using Jest Circus hooks.
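
A minimal way to wire both up (a sketch, assuming jest-stare is installed and a Jest version where jest-circus is opt-in):

// jest.config.js
module.exports = {
  testRunner: 'jest-circus/runner', // enables jest.retryTimes and circus hooks
  reporters: ['default', 'jest-stare'] // writes an HTML report alongside console output
}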

Playwright or Puppeteer?

I’m a fan of Puppeteer, however Playwright is a much nicer browser automation library. Whilst it adds support for Firefox and WebKit, even if you’re running your e2e tests in one browser (Chromium) I’d still recommend Playwright over Puppeteer any day of the week. Here’s why:

Automatic Waiting

I’ve found the automatic waiting in Playwright just works™️. Especially when dealing with dynamically rendered React web apps, I’ve found my Puppeteer code is full of page.waitFor calls to make it run reliably:

await page.waitFor('#loadedchild', { visible: true, timeout: 5000 })

Whilst this is occasionally necessary in Playwright (in particular when waiting for an iframe to switch into), I’ve found it’s almost never required, which reminds me of the good old Watir days.
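
For the iframe case, the explicit wait-then-switch looks something like this (a sketch; the selectors are illustrative):

// Wait for the iframe element, then grab its content frame to work inside it
const frameElement = await page.waitForSelector('#myframe')
const frame = await frameElement.contentFrame()
await frame.click('#button-inside-frame')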

Nicer API

Whilst the APIs are similar, Playwright is just nicer to use. Take grabbing some text from a div, in Puppeteer:

await page.goto(`${config.get('baseURL')}`)
await page.waitFor('#loadedchild', { visible: true, timeout: 5000 })
const element = await page.$('#loadedchild')
const text = await (await element.getProperty('textContent')).jsonValue()
expect(text).toBe('Loaded!')

In Playwright:

await page.goto(`${config.get('baseURL')}`)
const text = await page.textContent('#loadedchild')
expect(text).toBe('Loaded!')

Summary

If I was writing an e2e web testing framework from scratch in 2020 I would use Playwright + Jest. Playwright offers automatic waiting and a nice API, whilst Jest offers a solid runner with parallel support, automatic retries and the ability to easily generate HTML reports and capture screenshots.

I’ve created one repository for Playwright + Jest and another for Puppeteer + Jest to compare.

As an aside we put changing node libraries on hold for our e2e test framework and will look at some test infrastructure improvements instead.


Puppeteer Jest Demo

I’m trying to work out what would be the ideal combination for an e2e web testing framework in Node.

I wanted to try Jest with Puppeteer (I’d previously used Mocha), and I also wanted to try avoiding Babel for transpiling and using JavaScript directly, so I came up with this example on GitHub.

I think I’ll try using TypeScript for my next tinker project.