Categories
Career

Job Satisfaction?

I recently received a letter from a long-term reader asking about meaning/purpose from work:

“I’m having a real struggle with lack of meaning/purpose in my role these days?… Do you get a sense of purpose from your work?”

“… Most friends are telling me a job is just for money, suck it up! I guess I don’t want to believe that, I want the work itself to be the reward and focus on that instead of $$$ or titles etc.”

I’ve responded privately and we’ve had some good conversation since. I asked if I could share some of my thoughts here to benefit others, and the reader said sure, as long as I don’t identify them.

So here goes…

I’ve definitely had those exact same feelings for many years and these were exacerbated when we first had kids (we have three boys who are now aged 8/10/12 years old).

I definitely don’t have all the answers, but I have personally found some things that have helped me over these years, with the obvious caveat that your mileage may vary.

  1. Add some hobbies: I really like the quote “Find 3 hobbies, 1 to make you money, 1 to keep you in shape, and 1 that lets you be creative.” and I’ve been making sure I have something in each of these boxes for many years now. My professional career is about making me money and providing for my wife and our 3 boys, and I see it as that and only that (sorry Boss if you’re reading this!). I get my purpose from my other hobbies, which are hiking to keep me in shape, and visual art to provide a creative outlet. The visual art one has been the most impactful on my feeling of purpose since I enjoy it and I don’t feel any self-pressure to be professional at it – I do sell some paintings on the side – but it’s about flow and the feeling of accomplishment in something I am in full control of, which I don’t often get at work (as projects are long, involve lots of people and moving parts, and things always go wrong). I’ve been told I’m good at my art and I should do it as a job, but that would remove the fun from it for me.
  2. Try part time? This obviously depends on your particular circumstances – but I spent the last two years working 4 days per week (~32 hours with every Monday off), with the additional day to spend on hobbies (above) and family. Whilst it’s been beneficial, there are real downsides: you take a 20% pay cut (which I can only just afford, since my wife spent 12 years looking after our kids and we’re way behind our peers financially because of this), you seem to do the same amount of work just squeezed into 4 days (while being paid 20% less than someone working full time), and work isn’t really set up/designed for it (I’d still get lots of compulsory meeting invites for Mondays). But it is an option and I found it good for a “reset”. Neither of my employers hesitated to agree to it (they probably realised it’s a good deal for them), and whilst I returned to full time this month, I would do it again in an instant.
  3. Focus on individual contributor (IC) roles: this one was a bit hard for my ego to swallow (even though I don’t think I have a big ego), but I’ve spent the last few years really focussed on individual contributor roles rather than trying to get a better title (QA Lead, QA Practice Lead, Staff Engineer etc). As it happens, my current official job title “Automation Test Analyst” is probably the least pretentious I’ve ever had! I’ve personally found working broadly as a QA in a cross-functional team delivering software to be the most fulfilling role I can be in, so I am focussed on how I can be the best I can be at that (and nothing more). I don’t thrive on coaching people. I don’t thrive on training people. I don’t thrive on writing documents and strategies no one reads. This can be a bit tricky because if you’re good at your role, management naturally want to promote you into a non-IC role, so you need to resist, keep your ego in check, and stay firm on not wanting to take that on. If you can’t, leave and work somewhere else that values your IC skills.
  4. Try to find some in-person time with colleagues: after 3.5 years working fully remote at Automattic I was burnt out and had a full mental health breakdown. I needed to find a local office-based team where I could come along and work with people again. I did that for about a year and then COVID-19 hit and my team went all remote 😕 I’ve since worked more in an office, but this latest Omicron wave has meant working from home again, and I definitely feel less engaged and work feels less meaningful when I’m just another someone on Slack. This is surprising to me as I’m not someone who is extroverted (I classify myself as an ambivert), however I do find it motivating working around colleagues (I tried co-working when I was working at Automattic and found it socially awkward – it made me even more anxious than just working at home).
  5. Remember Passion follows Success: I’m a strong believer that passion follows success, because being passionate about something isn’t enough. There are plenty of failed business owners who were certainly passionate, but most people who become highly successful easily develop passion for what they do. So how do you become successful? I think the only real way is good old fashioned hard work (sometimes sprinkled with a bit of luck). Working hard drives success. Success then drives passion.

As I mentioned at the beginning I am by no means an expert on this stuff, but I hope at least one of these ideas could help you, or maybe knowing that you’re not alone in feeling disconnected from paid work could be beneficial in itself.

Take care.

Categories
Playwright

A couple of cool new Playwright features

Version 1.19.1 of Playwright has been released, and while reading the release notes I noticed a couple of cool new features (one of which was introduced in 1.18).

The first allows you to pass a message as an optional second parameter to an expect() call.

Say you’re calling an API and expect a certain status code:

test('can POST a REST API and check response using approval style', async ({ request }) => {
  const response = await request.post('https://my-json-server.typicode.com/webdriverjsdemo/webdriverjsdemo.github.io/posts', { data: { title: 'Post 4' } })
  expect(response.status()).toBe(202)
})

If this fails you don’t have much info:

1) scenarios/alltests.spec.ts:62:3 › All tests › can POST a REST API and check response using approval style 

    Error: expect(received).toBe(expected) // Object.is equality

    Expected: 202
    Received: 201

With this new second parameter we can provide more info:

test('can POST a REST API and check response using approval style', async ({ request }) => {
  const response = await request.post('https://my-json-server.typicode.com/webdriverjsdemo/webdriverjsdemo.github.io/posts', { data: { title: 'Post 4' } })
  expect(response.status(), `Response: ${await response.text()}`).toBe(202)
})

and get more detailed output:

Error: Response: {
      "title": "Post 4",
      "id": 4
    }

    Expected: 202
    Received: 201

The second new feature is the ability to use .toBeOK() assertions instead of having to assert on specific status codes. This asserts the response status is in the 2xx range, which means the request was OK.

We can now do this:

test('can POST a REST API and check response using approval style', async ({ request }) => {
  const response = await request.post('https://my-json-server.typicode.com/webdriverjsdemo/webdriverjsdemo.github.io/posts', { data: { title: 'Post 4' } })
  await expect(response, `Response: ${await response.text()}`).toBeOK()
})

Note: you can also call .not.toBeOK() if you are expecting an error response.
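Here’s a quick sketch of that negative form – the path in this request is hypothetical (any request that returns a non-2xx status will do):

test('can detect a failed request using .not.toBeOK()', async ({ request }) => {
  // hypothetical non-existent path, so the server responds with a 404
  const response = await request.get('https://my-json-server.typicode.com/webdriverjsdemo/webdriverjsdemo.github.io/does-not-exist')
  await expect(response).not.toBeOK()
})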


Have you found any useful new Playwright features being released recently that you now can’t live without?

Categories
Aside Automated Testing

Inspecting Chrome elements requiring focus

In the web app I work on we have some elements which are only rendered when their parent element has focus (such as a select list with options) but the child elements disappear when focus is lost (I don’t think this is great behaviour but that’s another story).

I had trouble inspecting these elements in Chrome devtools since using the inspect tool would remove focus, and even pressing F8 to pause the debugger lost focus before it paused.

I found a code snippet online that pauses the debugger 3 seconds after you paste it into the devtools console – which is enough time to put the element in focus. At that point the DOM is frozen, so you can inspect the elements in devtools as much as you like.

The script is pretty simple – change the number of milliseconds to suit you but I’ve found 3000 ms is a good amount of time:

setTimeout(function() {
  debugger;
}, 3000);
Categories
Playwright

Playwright Developer Advocate Advertised at Microsoft

A long time reader of this site kindly sent me a link to this tweet thinking I may be interested in the new Playwright Developer Advocate role at Microsoft.

https://twitter.com/JamesMontemagno/status/1490796849304854533

I am not looking for a remote working role (and I’m not even sure that job is remote outside of the USA anyway), but I wanted to share it with my other readers in case they’d find it interesting!

Categories
Continuous Integration Playwright

Setting timezones for consistent Playwright results

In the system I am working on we have some tests for leave balances which are timezone dependent. I noticed these would fail on CI before 10am local time, and pass for the rest of our (work) day.

Since our local timezone is UTC+10, I realised that our CI system was using UTC and therefore wasn’t accurate in its estimations.

I discovered there are two ways to ensure a consistent timezone in our CI system.

Firstly, we set the timezoneId for Playwright to our timezone (the list of valid timezones is available here).

This is our Playwright config file (playwright.config.ts):

use: {
  headless: true,
  locale: 'en-AU',
  timezoneId: 'Australia/Brisbane',
}

And secondly, we make sure the timezone is set correctly on the CI Docker images. We use Bitbucket Pipelines, and the relevant line in the config file (bitbucket-pipelines.yml) looks like:

script:
  - cp -f /usr/share/zoneinfo/Australia/Brisbane /etc/localtime # set timezone
  - npm ci
  - npm test

Setting both the system and browser timezones ensures consistent execution on CI, and we’ve eliminated our inconsistencies by implementing this.
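If you want a quick sanity check that the browser context is actually picking up the configured timezone, a small test like the following sketch can evaluate the resolved timezone in the page:

import { test, expect } from '@playwright/test'

test('browser context uses the configured timezone', async ({ page }) => {
  // the browser context should report the timezoneId set in playwright.config.ts
  const timezone = await page.evaluate(() => Intl.DateTimeFormat().resolvedOptions().timeZone)
  expect(timezone).toBe('Australia/Brisbane')
})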

Categories
Playwright

Demonstrating the value of our Playwright tests

I don’t think anyone would disagree that creating and maintaining a set of e2e automated tests (like Playwright) takes a lot of time and effort, and therefore costs a lot of money.

So how does one demonstrate the value of investing time/effort/money into automated e2e testing?

My goal for automated e2e tests like Playwright is to have just enough that the team/project/product has zero manual regression testing required to release frequently.

I’ve written about this before, and it doesn’t mean no human testing, it just means no manual regression tests or test scripts a human follows to make sure existing functionality isn’t being broken by changes introduced into your system.

When looking at metrics I like the GQM (Goal/Question/Metric) approach, where you start with goals, devise some questions to determine whether you’re achieving your goals, and create some metrics to answer those questions.

Our goal is already articulated above:

Goal: Zero manual regression testing

Some questions we could ask to see whether we’re achieving our goal would be:

  1. How good are our Playwright tests at catching regressions?
  2. How much manual regression testing do we perform?

And finally some metrics to answer our questions:

  1. Number of regression bugs found by the Playwright test suite
  2. Number of regression bugs not caught by the Playwright test suite
  3. Time spent performing manual regression testing

I like keeping things really simple when it comes to collecting and displaying metrics.

I created a Confluence page where I simply recorded regressions as they happened, in a table like this:

| No. | Date   | Regression                     | Playwright/Manual | Raised by | Status |
|-----|--------|--------------------------------|-------------------|-----------|--------|
| 1.  | 10 Jan | Welcome screen doesn’t display | Playwright        | Alister   | WIP    |
| 2.  |        |                                |                   |           |        |

Sample table to collect data

And using the table I created two simple graphs within the Confluence page to show our metrics:

[Graph: Trend of regressions found]
[Graph: Playwright vs Manual]

I think these metrics answer the question “how good are our Playwright tests at catching regressions?”

To answer the other question, “how much manual regression testing do we perform?”, I can ask our QAs in our fortnightly catch-up and record the results in a similar way to the data above.

By using the answers to our questions we can easily determine whether we’re meeting our goal of “zero manual regression testing”, and whether Playwright is helping us achieve it.

What metrics do you collect around automated e2e testing?

Categories
Playwright

Debugging Playwright Tests with VS Code

I use VS Code as my text editor/IDE for writing Playwright tests. I can also use VS Code for debugging, since it offers full debugging functionality like breakpoints and the ability to inspect variables.

To enable this, there are a couple of things to do:

  1. Create a debug task in your package.json file: "debug": "npx playwright test --headed --timeout=0" (sketched below), which means you can use npm run debug to execute a test with no timeout and with the browser shown – by either adding a .only to a specific test, or passing it a file, eg. npm run debug ./scenarios/test1.spec.ts
  2. In VS Code use the “View → Debug Console” menu option, choose “Terminal” and make sure “JavaScript Debug Terminal” is set as the terminal type.
  3. Add a breakpoint in your code using the red dot in the left margin.
  4. Run the npm run debug command, which starts a debugging session where you can step through and see variables etc.
[Screenshot: Debugging Playwright scripts in VS Code]
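For reference, that debug task lives in the scripts section of package.json like so (other scripts omitted):

{
  "scripts": {
    "debug": "npx playwright test --headed --timeout=0"
  }
}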

Happy Debugging & Happy New Year! 🥳

Categories
e2e Testing Playwright

10 tips for successful e2e web app test automation

  1. Write independent automated tests: you should try to remove dependencies on other tests or test data – this allows tests to be consistent, repeatable and to run in parallel (see #6).
  2. Set up data/state for each test via API calls: calling APIs is quick and efficient and can set up exactly what you need (see the sketch after this list, which also shows the clean-up from #3).
  3. Clean up data/state for each test using “after” hooks: this ensures test environments are kept clean and tidy and unwanted test data doesn’t cause issues with exploratory testing.
  4. Re-use browser authentication so you only need to log in once: this speeds up tests, see this post on how to do this with Playwright.
  5. Generate and use consistent (static) test data for each test: only generate unique/randomised values to satisfy uniqueness constraints, else use hard-coded known good values. Further reading here.
  6. Run all tests in parallel locally and in CI: hardware is powerful and there’s really no reason not to (unless you use Cypress and can’t 😝)
  7. Run new/updated tests at least 10 times locally in parallel before committing: this helps with reducing and removing non-deterministic tests and race conditions from your test suite.
  8. Use your automated test scripts to assist with manual/exploratory testing: for example you can easily set up state/accounts/sessions for testing – create npm commands so you can run npm run newuser for example to generate and log in as a brand new user ready for testing.
  9. Use linting/code autoformatting: such as JavaScript Standard Style, for consistently formatted code and not having to make decisions.
  10. Focus on reducing the need for manual regression testing, rather than code coverage: when you can confidently release your web application with no manual regression testing you know you have enough e2e automated tests.
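As an example of tips 2 and 3 together, here’s a minimal sketch using Playwright’s request fixture – the API base URL, /leave endpoint and payload are hypothetical, purely for illustration:

import { test, expect } from '@playwright/test'

// hypothetical API base URL used only for illustration
const API_BASE = 'https://example.com/api'

test.describe('leave requests', () => {
  let leaveId: string

  // tip 2: set up exactly the data this test needs via a quick API call
  test.beforeEach(async ({ request }) => {
    const response = await request.post(`${API_BASE}/leave`, { data: { days: 2 } })
    expect(response.ok()).toBeTruthy()
    leaveId = (await response.json()).id
  })

  // tip 3: clean up afterwards so the test environment stays tidy
  test.afterEach(async ({ request }) => {
    await request.delete(`${API_BASE}/leave/${leaveId}`)
  })

  test('can view the leave request', async ({ page }) => {
    await page.goto(`https://example.com/leave/${leaveId}`)
    await expect(page.locator('h1')).toHaveText('Leave request')
  })
})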

What are your tips for successful web app test automation?

Categories
e2e Testing Playwright

Writing automated e2e tests for known buggy systems

Every system I’ve worked on, old or new, is full of known bugs (and unknown bugs for good measure 🤪). These known bugs are the ones that have never made it to the top of the bug backlog to be fixed because there’s always other more important work to do.

But what do you do with automated e2e tests that exercise such code and demonstrate such bugs?

Imagine a very simple example of a test that visits our page and asserts the title is correct.

Our page looks like this:

[Screenshot: super simple webpage]

And our Playwright code looks like this:

test.only('can have a test for a known bug in the system', async ({ page }) => {
  await goToPath(page, 'leave')
  await expect(page.locator('#leavepage')).toHaveText('WebDriverJs Demo Leave Page')
})

You can see our test asserts different text than what is displayed. The text in our test is what we actually want to display, however the system displays it differently, so our test fails when we run it.

What do we do with such tests? There are a few different options, each with their own advantages and disadvantages.

Option One: Commit the failing test as it is

Advantages: test is pure and correct, test is still run on every build highlighting the functionality that is wrong

Disadvantages: each build will fail until this functionality is fixed, creating red/failed builds, hiding immediate feedback on other potential issues found in those builds, and resulting in people losing confidence in overall build results.

I personally wouldn’t recommend this approach as I think the noise of the failing builds outweighs any benefits it has.

Option Two: Mark the failing test as skipped

test.skip('can have a test for a known bug in the system', async ({ page }) => {
  await goToPath(page, 'leave')
  await expect(page.locator('#leavepage')).toHaveText('WebDriverJs Demo Leave Page')
})

Advantages: no noise in builds since test no longer runs

Disadvantages: test can be forgotten about since it never runs and other issues could be introduced in the feature. For example if the text was changed to something else that is also wrong we wouldn’t know since the test is not being run.

Whilst this is preferable to option one, it often results in forgotten tests, so I would also not recommend it.

Option Three: Update the assertion to be incorrect (with a comment)

test.only('can have a test for a known bug in the system', async ({ page }) => {
  await goToPath(page, 'leave')
  await expect(page.locator('#leavepage')).toHaveText('WebDriverJs Demo Leaving Page') // BUG: this text should be 'WebDriverJs Demo Leave Page'
})

Advantages: if the text changes to any value (whether now correct, or still incorrect) the test will fail, alerting us to a change in functionality

Disadvantages: the tests are no longer representative of what is expected of the system – the assertions contradict the intended behaviour.

I probably prefer this to having a pending test but something doesn’t feel right about a false assertion.

Option Four: Mark the test as expected to fail with test.fail()

Playwright actually offers a solution for scenarios like this: the test.fail() annotation, which marks a test as expected to fail. The test is still run, but if it fails it passes, and if it passes it fails 🙃

We can write the test like this:

test.only('can have a test for a known bug in the system', async ({ page }) => {
  test.fail() // BUG: the text is presently wrong
  await goToPath(page, 'leave')
  await expect(page.locator('#leavepage')).toHaveText('WebDriverJs Demo Leave Page')
})

And when it fails it “passes” with a green cross.

If the system was fixed, this test would then fail, and we’d know to remove the test.fail() line.

Advantages: if the text changes to the correct value we will know as this test will pass when we don’t expect it to. We can keep our assertions correct/pure.

Disadvantages: if the test was to fail in a different way we wouldn’t know about it since all the test cares about is that it fails (which we’re expecting).

Whilst this can hide other test failures, since I aim to write independent tests I can live with that risk, so this is my preferred approach to known failures.

How do you deal with known failures? Any of these ways or another I’ve missed?

Categories
Playwright

Reusable Authentication Across Playwright Tests

Most web apps require user authentication, which means the user needs to log in at the start of an e2e test. A very basic example is:

import { test, expect } from '@playwright/test'

test.describe.parallel('Unauthenticated tests', () => {
  test('can view as guest', async ({ page }) => {
    await page.goto('http://webdriverjsdemo.github.io/auth/')
    await expect(page.locator('text=Hello Please Sign In')).toBeVisible()
  })
})

test.describe.parallel('Authenticated tests', () => {
  test('can view as admin', async ({ page }) => {
    await page.goto('http://webdriverjsdemo.github.io/auth/')
    await page.type('#firstname', 'Admin')
    await page.type('#surname', 'User')
    await page.click('#ok')
    await page.goto('http://webdriverjsdemo.github.io/auth/')
    await expect(page.locator('text=Welcome name=AdminUser')).toBeVisible()
  })

  test('can view as standard user', async ({ page }) => {
    await page.goto('http://webdriverjsdemo.github.io/auth/')
    await page.type('#firstname', 'Standard')
    await page.type('#surname', 'Person')
    await page.click('#ok')
    await page.goto('http://webdriverjsdemo.github.io/auth/')
    await expect(page.locator('text=Welcome name=StandardPerson')).toBeVisible()
  })
})

Whilst it’s easy to move the common authentication code (which sets the cookies/tokens) into a login function that uses Playwright to visit a login page and is called from each test, Playwright offers something much better: it can save browser storage state and re-use it. The idea is to log in once and re-use the resulting browser state for every test that requires that role.

If the cookies/tokens don’t expire, you can capture them once, commit them to your code repository and simply re-use them:

First to capture:

  test('can view as admin', async ({ page, context }) => {
    await page.goto('http://webdriverjsdemo.github.io/auth/')
    await page.type('#firstname', 'Admin')
    await page.type('#surname', 'User')
    await page.click('#ok')
    await context.storageState({ path: 'storage/admin.json' });
    await page.goto('http://webdriverjsdemo.github.io/auth/')
    await expect(page.locator('text=Welcome name=AdminUser')).toBeVisible()
  })

  test('can view as standard user', async ({ page, context }) => {
    await page.goto('http://webdriverjsdemo.github.io/auth/')
    await page.type('#firstname', 'Standard')
    await page.type('#surname', 'Person')
    await page.click('#ok')
    await context.storageState({ path: 'storage/user.json' });
    await page.goto('http://webdriverjsdemo.github.io/auth/')
    await expect(page.locator('text=Welcome name=StandardPerson')).toBeVisible()
  })

And then to re-use the captured files:

import { test, expect } from '@playwright/test'

test.describe.parallel('Unauthenticated tests', () => {
  test('can view as guest', async ({ page }) => {
    await page.goto('http://webdriverjsdemo.github.io/auth/')
    await expect(page.locator('text=Hello Please Sign In')).toBeVisible()
  })
})

test.describe.parallel('Administrator tests', () => {
  test.use({storageState: './storage/admin.json'})
  test('can view as admin', async ({ page, context }) => {
    await page.goto('http://webdriverjsdemo.github.io/auth/')
    await expect(page.locator('text=Welcome name=AdminUser')).toBeVisible()
  })
})

test.describe.parallel('User tests', () => {
  test.use({storageState: './storage/user.json'})
  test('can view as standard user', async ({ page, context }) => {
    await page.goto('http://webdriverjsdemo.github.io/auth/')
    await expect(page.locator('text=Welcome name=StandardPerson')).toBeVisible()
  })
})

But what if like most apps your authentication cookies/tokens do expire?

Fortunately you can dynamically create the session state once per test run – in the global hooks – then simply refer to those same storage state files in each test.

You can create a file like global-setup.ts which generates the storage state files once per test run:

// global-setup.ts
import { Browser, chromium, FullConfig } from '@playwright/test'

async function globalSetup (config: FullConfig) {
  const browser = await chromium.launch()
  await saveStorage(browser, 'Standard', 'Person', 'storage/user.json')
  await saveStorage(browser, 'Admin', 'User', 'storage/admin.json')
  await browser.close()
}

async function saveStorage (browser: Browser, firstName: string, lastName: string, saveStoragePath: string) {
  const page = await browser.newPage()
  await page.goto('http://webdriverjsdemo.github.io/auth/')
  await page.type('#firstname', firstName)
  await page.type('#surname', lastName)
  await page.click('#ok')
  await page.context().storageState({ path: saveStoragePath })
}

export default globalSetup

which is referenced in playwright.config.ts:

// playwright.config.ts
module.exports = {
  globalSetup: require.resolve('./global-setup'),
  reporter: [['list'], ['html']],
  retries: 0,
  use: {
    headless: true,
    screenshot: 'only-on-failure',
    video: 'retry-with-video',
    trace: 'on-first-retry'
  }
}

Once you have this set up, our tests remain the same, but the storage state is captured and saved once per test run:

test.describe.parallel('Admin tests', () => {
  test.use({ storageState: './storage/admin.json' })
  test('can view as admin', async ({ page, context }) => {
    await page.goto('http://webdriverjsdemo.github.io/auth/')
    await expect(page.locator('text=Welcome name=AdminUser')).toBeVisible()
  })
})

test.describe.parallel('User tests', () => {
  test.use({ storageState: './storage/user.json' })
  test('can view as standard user', async ({ page, context }) => {
    await page.goto('http://webdriverjsdemo.github.io/auth/')
    await expect(page.locator('text=Welcome name=StandardPerson')).toBeVisible()
  })
})

You can call test.use({ storageState: './storage/user.json' }) for a whole file or for a test.describe block. If all the tests in your test file use the same authentication role, place it outside any test.describe block (see the sketch below); otherwise place it within the test.describe block of tests that share that role. Different roles? Use different test.describe blocks, each with its own test.use call pointing to a different storage file.
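For example, a file where every test runs as the admin role could look like this (just a sketch re-using the storage file from above):

import { test, expect } from '@playwright/test'

// applies to every test in this file – no test.describe block needed
test.use({ storageState: './storage/admin.json' })

test('can view as admin', async ({ page }) => {
  await page.goto('http://webdriverjsdemo.github.io/auth/')
  await expect(page.locator('text=Welcome name=AdminUser')).toBeVisible()
})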

What do you think of Playwright’s ability to capture and use browser storage state?