Welcome to my new site 🙌

Welcome to my new site. I thought it was time I gave it a new name: RIP WatirMelon 💀

My focus over the coming months, alongside my regular blogging, will be building up a comprehensive software testing (and more!) knowledge base which will act as a source of information about different aspects of software testing.

If you previously subscribed by email, you’ll need to do that again. See this page for details.


Agile Software Dev Project Management

The importance of a decision log 🎋

It’s no secret that I love writing and love documenting things. This is where I’m a bit different to a lot of people I work with who prefer writing code to documentation.

I have a lot of project management responsibilities as part of my current role, and I’m a big fan of “fixed time/variable scope” as a way to rapidly get important things done. It works like this: we ask our product owner how long they want to spend on a feature, which then enables us as a development team to do everything we can to get as much done in that time.

There are two things I have found that are crucial to succeeding with this approach:

  1. Documenting and maintaining an up-to-date list of things that need doing, constantly refined and re-prioritized based on discoveries; and
  2. Documenting decisions as they take place in some form of “decision log”

A decision log isn’t for blame. The reason a decision log is crucial is it allows us to decide, document and move on. This enables velocity since we don’t ruminate on decisions. It’s also good during conversations where you feel “déjà vu” as we often forget what we’ve decided previously, and it’s easy to refer to our log and say “we’ve already talked about this and we decided this for that reason”.

I like the simplest decision log that will possibly work for your context. Ours is a Confluence page with a table in reverse chronological order (so the newest decisions are at the top):

Question: Do we want to limit the max records?
Decision: We should limit it to 50. That’s a sensible limit and we can adjust based on feedback.
Made By / Where: Product owner during Slack conversation (link)
Date: 28 April 2020
An example decision log

This doesn’t mean we can’t change our minds; flexibility is crucial. We just need another decision log entry to show that we changed them 😊

How do you make and document decisions? Are you a fan of fixed time/variable scope?

Continuous Delivery Continuous Integration Software

Visualising Technical Debt (with the Debt Boat)

Being successful in software delivery requires a team to constantly balance technical debt and feature delivery. Teams often fall into the trap of delivering features too rapidly at the expense of ever-increasing technical debt, or delivering an over-engineered solution at the expense of not delivering things within reasonable time-frames and before any real-world usage.

This is one of the key issues facing most software delivery teams I have worked on as there are often different appetites for delivery vs debt throughout an organisation.

I was trying to come up with a way to present technical debt in a visual format that could be easily understood by different people: management, stakeholders and a development team.

One thing we value is shipping things: getting our products in the hands of customers so they can get value from them. Accumulating technical debt, both customer facing and “under the covers” may allow us to rapidly get features to our customers, but eventually the debt grows and affects our ability to deliver.

In the “shipping” theme I came up with the Debt Boat: a way to visually represent things that build up and eventually slow us down. I like splitting technical debt into feature debt: things our customers want but we chose not to do, and system debt: technical things our customers (or product owners) can’t see (below the water line) but that increasingly slow us down.

We originally had this boat on our team’s physical wall to show there’s only so much technical debt we can take “on board” before we sink.

Now that we’re 100% remote (for now) I’ve created a Jamboard version. I’ve shared the template here – you can easily make a copy if you think your team would find this helpful.

Click to view on Jamboard

Slack tip: closing unnecessary DMs in the sidebar

I like to keep my email inbox as empty as possible: only containing emails that I need to follow up or action, which is usually fewer than six.

I find Slack can be overwhelming, particularly Direct Messages (DMs), which build up in the sidebar over time – especially in a large company. At a previous job I noticed my team lead would use DMs like email: open a new DM to someone to ask something, and as soon as there’s nothing more to do, close that DM so the sidebar stays nice and clean. I hadn’t thought to do this, but I love doing it now – I only have DMs in the sidebar that require my attention; all others aren’t there (you don’t lose the history of DMs: clicking the plus ➕ and adding a DM to someone shows your previous history).

I recommend giving it a try to see whether it helps you stay on top of your Slack.


Printing an image across multiple pages on macOS

We use our squad wall to display a lot of information, and often we’ll want to print an image across a number of pages so it’s extra large and easy to visualise – for example a screen flow, a mock-up or a chart.

I was trying to work out a way to easily scale an image across numerous pages to stick together on the wall. The easiest way I found is to first make sure the image or thing you want to print is saved as a PDF (easy to do in macOS Preview), then open the PDF in the free Adobe Acrobat Reader DC app.

This gives you a poster print option which scales the PDF across numerous pages:

Poster option with scaling and preview

Very handy.

Automated Acceptance Testing Automated Testing Selenium

Playing with Playwright

Playwright is a new browser automation library from Microsoft:

Playwright is a Node library to automate the Chromium, WebKit and Firefox browsers with a single API. It enables cross-browser web automation that is ever-green, capable, reliable and fast.

I’m a big fan of Puppeteer, so this section in their FAQ stood out to me:

Puppeteer is a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer project is active and is maintained by Google.

We are the same team that originally built Puppeteer at Google, but has since then moved on. Puppeteer proved that there is a lot of interest in the new generation of ever-green, capable and reliable automation drivers. With Playwright, we’d like to take it one step further and offer the same functionality for all the popular rendering engines. We’d like to see Playwright vendor-neutral and shared governed.

Playwright uses similar concepts to Puppeteer:

“Due to the similarity of the concepts and the APIs, migration between the two should be a mechanical task.”

Luckily I have a demo test suite written in Puppeteer, which I have cloned and converted to use Playwright to see how it works and compares.

Here are my thoughts:

I really, really like the BrowserContext concept

In Puppeteer, and WebDriverJs, you have Browsers and Pages. Each Page in a Browser shares state with the rest of the Browser, so to create isolated tests using the same Browser (to avoid the inefficiency of spawning a Browser per test) you need custom code to delete all cookies and local storage between tests. Playwright solves this with the BrowserContext object: a new incognito profile in which pages are created. Each test can use the same browser but a different BrowserContext. Super cool 👌

It automatically waits to click, and supports xpath expressions

By default Playwright automatically waits for elements to be available and visible before clicking, and it uses the same API for xpath expressions as for other selectors, which means this Puppeteer code:

await page.goto( `${ config.get( 'baseURL' )}` );
await page.waitForXPath( '//span[contains(., "Scissors")]' );
const elements = await page.$x( '//span[contains(., "Scissors")]' );
await elements[0].click();
await page.waitForXPath( '//div[contains(., "Scissors clicked!")]' );

becomes a lot cleaner:

await page.goto( `${ config.get( 'baseURL' )}` );
await page.click( '//span[contains(., "Scissors")]' );
await page.waitFor( '//div[contains(., "Scissors clicked!")]' );

It supports three “browsers” but not as you know them

Q: Does Playwright support new Microsoft Edge?

The new Microsoft Edge browser is based on Chromium, so Playwright supports it.

Playwright supports three “browsers”, but not as you know them. I’d say it supports three rendering engines (Chromium, WebKit and Gecko) rather than browsers, as you can only use the (somewhat modified) browsers that come bundled with Playwright, not an already-installed browser on your operating system (like Selenium does). This makes it easier to ensure consistency of test runs, since the library is bundled with the browsers, but there’s some risk your tests could pass on the bundled browsers yet fail on “real” browsers. I would say the claim that it supports running on Microsoft Edge is a little misleading.

I’m unsure of CircleCI Support for WebKit and Firefox

I was able to get my tests running against Chromium on CircleCI using the same configuration as Puppeteer; however, I couldn’t get the WebKit or Firefox tests to run on CircleCI, even with the default CircleCI browsers installed. I didn’t want to invest the time, but it is probably due to some missing headless Linux dependencies, which could be solved in the project config.


If the only thing Playwright did better than Puppeteer was also supporting WebKit and Gecko then I wouldn’t suggest using it over Puppeteer, since Puppeteer is closely aligned with Chromium, and I’m going to run my tests solely in Chrome/Chromium anyway. I don’t believe in running the same e2e tests in multiple browsers: the maintenance overhead outweighs the benefits in my experience.

However, Playwright offers a much nicer BrowserContext concept, and the xpath support is much nicer (although I rarely use xpath expressions anyway).

If anything I am hoping Puppeteer adds support for BrowserContexts – I’ve raised a feature request here so feel free to comment on it if you think it would be a good idea.

All the sample code is available here:

Agile Software Dev Software Software Testing

Moving towards a quarterly team story wall

One of the key facets of effective software delivery is continuous improvement to team practices.

The reason I believe physical team walls are so effective in continuous team improvement is that they both reflect good team practices, and drive good team practices. That is, our wall both displays how we’re working, and improves how we work.

If your team is improving how you’re doing things then chances are your wall will look different to how it looked six months ago.

In September I shared how we were using our story wall to display dependencies between tasks for more complex pieces of work.

Our team wall as at September 2019

We’ve since made some improvements to the wall that have continued to improve our effectiveness as a team.

We work in quarterly planning cycles, fortnightly sprints towards our goals, and frequent software releases (once or twice a day typically).

The nice thing about our quarterly planning cycles is that we can neatly fit six sprints within a quarter (12 weeks).

Since the wall represents what we’re doing, and we have this quarterly focus, we thought it would be a good idea to represent the full quarter on our wall. This means our wall currently looks something like:

Quarterly wall

If you zoomed into a single sprint it looks like:

Zoomed into one sprint

Some of the important aspects of the design include:

  1. We put colour coded epics across the top of our board that roughly show when we plan to start each epic. These may not always start at the beginning of a sprint as each epic doesn’t always fit within a sprint and we don’t wait for a new sprint to start a new epic.
  2. Task card colours match the epic to which they belong, except for white cards which are tasks unrelated to an epic – for example tech debt, or a production fix.
  3. Each task card is exactly three columns wide – this is because we try to keep our cycle time, that is the time it takes to pick up a task and merge/release it, to about 3 work days, and each column is one work day. If we find a task is taking much longer than 3 work days it’s a good indication it hasn’t been broken down enough; if it’s much quicker than that we may be creating unnecessary overhead. The benefit of this is more consistent planning, and also effort tracking, as we can see at a glance roughly how much effort an epic took by looking at its coloured tickets.
  4. Tasks have a FE/BE label, a JIRA reference, a person who is working on it and one or two stickers representing status.
  5. We continued our status dots – blue for in progress, a smaller yellow sticker to indicate in review, blue and yellow makes a green sticker which is complete. We also use red for blocked tasks, and have added a new sticker, a purple/pink colour with a black star, which indicates a tech-debt-related task.
  6. We move the pink ribbon along each day so it shows us visually where we are at in the sprint/quarter.
  7. We have rows for both team leave, and milestones such as when we enabled a new feature, and also public holidays and team events.
  8. We continue to have our sprint goals and action items displayed clearly at the top of the wall so we can refer back to these during our daily stand up meeting during the sprint to check in on how we’re going.
  9. One extra thing we’ve recently started doing which isn’t represented in the diagram above is when a sprint is complete we shift the cards to the bottom of the wall (in the same columns) so we have a clear future focus, whilst still having a historical view.

We’ve found that continually improving our wall reflects how our practices have improved, and we’ll continue to make improvements as we go. I have no idea how it will look in six months’ time.

How have you adapted a typical agile wall for your team? How different does it look today than six months ago?

Remote Work

The future of work? An essay.

“The most exciting breakthroughs of the 21st century will not occur because of technology but because of an expanding concept of what it means to be human”

John Naisbitt – Megatrends
New Yorker Cartoon


There is a lot of reading available about distributed and remote ways of working but a lot of this is written from the perspective of an employer (Basecamp, Automattic etc) and the benefits it can provide to those employers. Things like gaining access to a global talent pool, more productive employees, workforce diversity, lower office costs, more dedicated staff, and broader timezone coverage.

Remote was an early manifesto for distributed work from the perspective of founders, and highlighted the value the practice provides to open-minded employers.

Working Smaller, Slower, and Smarter

I haven’t been able to find much material that’s written purely from the perspective of an employee and that provides a balanced view of distributed and remote ways of working. This essay aims to provide an employee’s perspective of how remote and distributed ways of working compare to traditional office-based roles.

Software Testing

GitHub & Bitbucket

We use Git on Bitbucket in my current role, and I didn’t realise how much I liked using GitHub until I started using Bitbucket on a regular basis to commit and test code changes.

The biggest difference is how these systems handle squashed commits into a master branch.

With Bitbucket you can do the usual approach of multiple commits on a branch/pull request:

When you go to merge this to master, you can choose squash commits:

which is a nice way to keep a cleaner commit history on the master branch:

However if you look at the branch/PR now that it is merged you will notice you’ve lost all commit history! 😿

This has been super frustrating when trying to diagnose what went wrong during the development of a change where an issue was introduced.

Comparing this same workflow to GitHub: you can see individual commits against a branch, and squash these into master:

After merging you can still see the full commit history on the PR and branch:

and it is squashed on the master commit history:

Has anyone else noticed this with Bitbucket? Any known workarounds to keep commit history on branches/PRs?

Business Analysis

Now, Next, Later, Never (improving MoSCoW)

Our team sets quarterly objectives, which we break down into requirements spread across fortnightly sprints. As the paradev on my team I work closely with our product owner to write and prioritise these requirements.

We originally started using the MoSCoW method to prioritize our requirements:

The term MoSCoW itself is an acronym derived from the first letter of each of four prioritization categories (Must have, Should have, Could have, and Won’t have).


We quickly noticed that the terminology (Must have, Should have, Could have, Won’t have) didn’t work well in our context and how we were thinking, and this caused a lot of friction in how we were prioritizing, and in adoption by the team. It didn’t feel natural to classify things as must, should, could, won’t, as those categories didn’t directly translate into what we should be working on.

Over a few sessions we came up with our own groupings for our requirements based upon when we would like to see them in our product: Now, Next, Later, Never. We’ve continued to use these four terms and we’ve found they have been very well adopted by the team as it’s very natural for us to think in these groupings.

The biggest benefit of using Now, Next, Later, Never is they naturally translate into our product roadmap and sprint planning.

I did some research in writing this post and found Now, Next, Later as a thing from ThoughtWorks back in 2012, but I couldn’t find any links that included the Never grouping as well which we’ve found very useful to call out what we agree that we won’t be doing.

How do you go about prioritizing your requirements?