Categories
Random

Slack tip: closing unnecessary DMs in the sidebar

I like to keep my email inbox as empty as possible: it only contains emails I still need to action or follow up on, which is usually fewer than six.

I find Slack can be overwhelming, particularly Direct Messages (DMs), which build up in the sidebar over time – especially in a large company. At a previous job I noticed my team lead would use DMs like email: open a new DM to ask someone something, and as soon as there’s nothing more to do, close that DM so the sidebar stays nice and clean. It hadn’t even occurred to me to do this, but I love doing it now – the only DMs in my sidebar are ones that require my attention. You don’t lose the history of closed DMs: clicking the plus ➕ and starting a DM with someone shows your previous conversation.

I recommend giving it a try to see whether it helps you stay on top of your Slack.

Categories
Software

Printing an image across multiple pages on macOS

We use our squad wall to display a lot of information, and often we’ll want to print an image across a number of pages so it’s extra large and easy to visualise – for example a screen flow, a mock-up, or a chart.

I was trying to work out a way to easily scale an image across numerous pages to stick together on the wall. The easiest way I found is to first make sure the image or thing you want to print is saved as a PDF (easy to do in macOS Preview), then open that PDF in the free Adobe Acrobat Reader DC app.

This gives you a poster print option which scales the PDF across numerous pages:

Poster option with scaling and preview

Very handy.

Categories
Automated Acceptance Testing Automated Testing Selenium

Playing with Playwright

Playwright is a new browser automation library from Microsoft:

Playwright is a Node library to automate the Chromium, WebKit and Firefox browsers with a single API. It enables cross-browser web automation that is ever-green, capable, reliable and fast.

https://github.com/microsoft/playwright

I’m a big fan of Puppeteer, so this section in their FAQ stood out to me:

Puppeteer is a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer project is active and is maintained by Google.

We are the same team that originally built Puppeteer at Google, but has since then moved on. Puppeteer proved that there is a lot of interest in the new generation of ever-green, capable and reliable automation drivers. With Playwright, we’d like to take it one step further and offer the same functionality for all the popular rendering engines. We’d like to see Playwright vendor-neutral and shared governed.

https://github.com/microsoft/playwright#q-how-does-playwright-relate-to-puppeteer

Playwright uses similar concepts to Puppeteer:

“Due to the similarity of the concepts and the APIs, migration between the two should be a mechanical task.”

https://github.com/microsoft/playwright#q-how-does-playwright-relate-to-puppeteer

Luckily I have a demo test suite written in Puppeteer, which I cloned and converted to use Playwright to see how it works and how it compares.

Here are my thoughts:

I really, really like the BrowserContext concept

In Puppeteer, and WebDriverJS, you have Browsers and Pages. Each Page in a Browser shares state across the Browser, so to create isolated tests using the same Browser (to avoid the inefficiency of spawning a Browser per test) you need custom code to delete all cookies and local storage between tests. Playwright solves this with the BrowserContext object, which is a new incognito window in which its pages are created: each test can use the same browser but a different BrowserContext. Super cool 👌
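A minimal sketch of what this looks like (the contextOne/contextTwo names are mine; the API calls are Playwright’s):

const { chromium } = require( 'playwright' );

(async () => {
	const browser = await chromium.launch();

	// Each BrowserContext is an isolated, incognito-like session:
	// cookies and local storage aren't shared between contexts.
	const contextOne = await browser.newContext();
	const contextTwo = await browser.newContext();

	// Two tests can share the one browser instance without sharing state.
	const pageOne = await contextOne.newPage();
	const pageTwo = await contextTwo.newPage();

	// ...run the isolated tests...

	await contextOne.close();
	await contextTwo.close();
	await browser.close();
})();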

It automatically waits to click, and supports xpath expressions

Playwright automatically waits for elements to be available and visible before clicking, by default, and it uses the same API for xpath expressions as for CSS selectors, which means this Puppeteer code:

await page.goto( `${ config.get( 'baseURL' )}` );
await page.waitForXPath( '//span[contains(., "Scissors")]' );
const elements = await page.$x( '//span[contains(., "Scissors")]' );
await elements[0].click();
await page.waitForXPath( '//div[contains(., "Scissors clicked!")]' );

becomes a lot cleaner:

await page.goto( `${ config.get( 'baseURL' )}` );
await page.click( '//span[contains(., "Scissors")]' );
await page.waitFor( '//div[contains(., "Scissors clicked!")]' );

It supports three “browsers” but not as you know them

Q: Does Playwright support new Microsoft Edge?

The new Microsoft Edge browser is based on Chromium, so Playwright supports it.

https://github.com/microsoft/playwright#q-does-playwright-support-new-microsoft-edge

Playwright supports three “browsers”, but not as you know them. I’d say it supports three rendering engines (Chromium, WebKit & Gecko) rather than browsers, as you can only use the (somewhat modified) browsers bundled with Playwright rather than a browser already installed on your operating system (as Selenium does). This makes it easier to ensure consistency of test runs, since the library is bundled with the browsers, but there’s some risk your tests could pass on the bundled browsers yet fail on “real” browsers. I’d say the claim that it supports running on Microsoft Edge is a little misleading.
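Each bundled engine launches through the same API, so running against all three is straightforward – a sketch, assuming the bundled browser downloads are present:

const { chromium, webkit, firefox } = require( 'playwright' );

(async () => {
	// These launch Playwright's bundled builds,
	// not browsers installed on your operating system.
	for ( const browserType of [ chromium, webkit, firefox ] ) {
		const browser = await browserType.launch();
		const page = await browser.newPage();
		await page.goto( 'https://example.com' );
		console.log( await page.title() );
		await browser.close();
	}
})();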

I’m unsure of CircleCI Support for WebKit and Firefox

I was able to get my tests running against Chromium on CircleCI using the same configuration as Puppeteer; however, I couldn’t get the WebKit or Firefox tests to run on CircleCI, even with the default CircleCI browsers installed. I didn’t want to invest the time to dig in, but it’s probably due to missing headless Linux dependencies, which could be solved in the project config.

Conclusion

If the only thing Playwright did better than Puppeteer was also supporting WebKit and Gecko, then I wouldn’t suggest using it over Puppeteer: Puppeteer is closely aligned with Chromium, and I’m going to run my tests solely in Chrome/Chromium anyway. I don’t believe in running the same e2e tests in multiple browsers: in my experience the maintenance overhead outweighs the benefits.

However, Playwright offers a much nicer BrowserContext concept, and its xpath support is cleaner (although I rarely use xpath expressions anyway).

If anything, I’m hoping Puppeteer adds support for BrowserContexts – I’ve raised a feature request here, so feel free to comment on it if you think it would be a good idea.

All the sample code is available here: https://github.com/alisterscott/playwright-demo

Categories
Agile Software Dev Software Software Testing

Moving towards a quarterly team story wall

One of the key facets of effective software delivery is continuous improvement to team practices.

The reason I believe physical team walls are so effective in continuous team improvement is that they both reflect good team practices, and drive good team practices. That is, our wall both displays how we’re working, and improves how we work.

If your team is improving how you’re doing things then chances are your wall will look different to how it looked six months ago.

In September I shared how we were using our story wall to display dependencies between tasks for more complex pieces of work.

Our team wall as at September 2019

We’ve since made some improvements to the wall that have continued to improve our effectiveness as a team.

We work in quarterly planning cycles, with fortnightly sprints towards our goals and frequent software releases (typically once or twice a day).

The nice thing about our quarterly planning cycles is that we can neatly fit six sprints within a quarter (12 weeks).

Since the wall represents what we’re doing, and we have this quarterly focus, we thought it would be a good idea to represent the full quarter on our wall. This means our wall currently looks something like:

Quarterly wall

If you zoom into a single sprint it looks like this:

Zoomed into one sprint

Some of the important aspects of the design include:

  1. We put colour-coded epics across the top of our board that roughly show when we plan to start each epic. These don’t always start at the beginning of a sprint, as an epic doesn’t always fit within a sprint and we don’t wait for a new sprint to start a new epic.
  2. Task card colours match the epic to which they belong, except for white cards, which are tasks unrelated to an epic – for example tech debt, or a production fix.
  3. Each task card is exactly three columns wide. This is because each column is one work day, and we try to keep our cycle time – the time it takes to pick up a task and merge/release it – to about three work days. If a task takes much longer than three work days, that’s a good indication it hasn’t been broken down enough; if it’s much quicker, we may be creating unnecessary overhead. The benefit of this is more consistent planning, and also effort tracking: we can see at a glance roughly how much effort an epic took by looking at its coloured tickets.
  4. Each task card has an FE/BE label, a JIRA reference, the person working on it, and one or two stickers representing status.
  5. We continued our status dots – blue for in progress, a smaller yellow sticker for in review, and blue and yellow make a green sticker, which means complete. We also use red for blocked tasks, and have added a new sticker – a purple/pink colour with a black star – which indicates a tech-debt-related task.
  6. We move the pink ribbon along each day so it shows us visually where we are in the sprint/quarter.
  7. We have rows for team leave, for milestones such as when we enabled a new feature, and for public holidays and team events.
  8. We continue to have our sprint goals and action items displayed clearly at the top of the wall so we can refer back to them during our daily stand-up to check how we’re going.
  9. One extra thing we’ve recently started doing, which isn’t represented in the diagram above, is shifting the cards of a completed sprint to the bottom of the wall (in the same columns) so we have a clear future focus whilst still keeping a historical view.

We’ve found that continually improving our wall reflects how our practices have improved, and we’ll continue to make improvements as we go. I have no idea how it will look in six months’ time.

How have you adapted a typical agile wall for your team? How different does it look today from six months ago?

Categories
Remote Work

The future of work? An essay.

“The most exciting breakthroughs of the 21st century will not occur because of technology but because of an expanding concept of what it means to be human”

John Naisbitt – Megatrends
New Yorker Cartoon

Introduction

There is a lot of reading available about distributed and remote ways of working, but much of it is written from the perspective of an employer (Basecamp, Automattic etc.) and the benefits the practice can provide to those employers. Things like gaining access to a global talent pool, more productive employees, workforce diversity, lower office costs, more dedicated staff, and broader timezone coverage.

“Remote was an early manifesto for distributed work from the perspective of founders, and highlighted the value the practice provides to open-minded employers”

Working Smaller, Slower, and Smarter

I haven’t been able to find much material written purely from the perspective of an employee that provides a balanced view of distributed and remote ways of working. This essay aims to provide an employee’s perspective on how remote and distributed ways of working compare to traditional office-based roles.

Categories
Software Testing

GitHub & Bitbucket

We use Git on Bitbucket in my current role, and I didn’t realise how much I liked using GitHub until I started using Bitbucket on a regular basis to commit and test code changes.

The biggest difference is how these systems handle squashing commits when merging into the master branch.
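Under the hood, a squash merge is roughly equivalent to the following git commands (a sketch – the branch name is hypothetical, and both products implement their own server-side versions of this):

git checkout master
git merge --squash my-feature-branch
git commit -m "My feature (squashed)"

# The individual commits still exist on the branch ref itself:
git log my-feature-branch

So whether you can still browse those commits after merging comes down to how each product’s UI treats the branch once it has been squash-merged.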

With Bitbucket you can do the usual approach of multiple commits on a branch/pull request:

When you go to merge this to master, you can choose squash commits:

which is a nice way to keep a cleaner commit history on the master branch:

However, if you look at the branch/PR now that it’s merged, you’ll notice you’ve lost all commit history! 😿

This has been super frustrating when trying to diagnose what went wrong during the development of a change that introduced an issue.

Comparing this same workflow to GitHub: you can see individual commits against a branch, and squash these into master:

After merging you can still see the full commit history on the PR and branch:

and it is squashed on the master commit history:

Has anyone else noticed this with Bitbucket? Any known workarounds to keep commit history on branches/PRs?

Categories
Business Analysis

Now, Next, Later, Never (improving MoSCoW)

Our team sets quarterly objectives, which we break down into requirements spread across fortnightly sprints. As the paradev on my team I work closely with our product owner to write and prioritise these requirements.

We originally started using the MoSCoW method to prioritise our requirements:

The term MoSCoW itself is an acronym derived from the first letter of each of four prioritization categories (Must have, Should have, Could have, and Won’t have).

Wikipedia

We quickly noticed that the terminology (Must have, Should have, Could have, Won’t have) didn’t work well in our context or in how we were thinking, and this caused a lot of friction in how we prioritised, and in adoption by the team. It didn’t feel natural to classify things as must, should, could, and won’t, as it didn’t directly translate into what we should be working on.

Over a few sessions we came up with our own groupings for our requirements, based upon when we would like to see them in our product: Now, Next, Later, Never. We’ve continued to use these four terms, and we’ve found they’ve been very well adopted by the team, as it’s very natural for us to think in these groupings.

The biggest benefit of using Now, Next, Later, Never is they naturally translate into our product roadmap and sprint planning.

I did some research in writing this post and found Now, Next, Later as a thing from ThoughtWorks back in 2012, but I couldn’t find any links that included the Never grouping, which we’ve found very useful for calling out the things we agree we won’t be doing.

How do you go about prioritising your requirements?

Categories
Accessibility

Accessibility is good for everyone

It’s great to see the recent changes to Automattic’s long-term hiring processes based upon a user research study into how their approach to tech hiring resonates with women and non-binary folks:

In May, Automattic’s engineering hiring team launched a user research study to better understand how our approach to tech hiring resonates with women and non-binary folks who may experience similar gender discrimination in the workplace and are experienced developers.

What changes did we make?

  • Existing work and life commitments mean that it is important to know the details of the hiring process at the outset: we have published a public page that clearly outlines our hiring process so that people have a concrete understanding of the expectations.
  • We removed all the little games from our job posting page. We were trying to test people’s attention to the job posting and filter out unmotivated candidates; it turned out we were also putting people off who we want to apply.
  • We removed all the language that emphasized that hiring is a competitive process – for instance, removing language about application volume.

Whilst I don’t fit into their target audience for this study, if these changes had been implemented earlier I would have personally benefited from them, instead of being disheartened by waiting four years for a response to a job application that never came (I did eventually work up the courage to apply again, at which time I was successful).

This example shows that making your recruitment processes clearer and more accessible makes them better for everyone, not just those who experience discrimination – much like web accessibility benefits everyone, regardless of ability.

Categories
Agile Software Dev

Experimenting with our Agile Story Wall

After three and a half years of working by myself at home, it feels truly great to be working in a co-located cross-functional team again. My squad consists of four developers and myself, and I had forgotten how much I love being a paradev. I wear many hats every day in my job, which I love, and one of these hats is managing our iterations of work: our fortnightly sprints.

We follow lightweight agile software development practices, and in that spirit our squad is empowered to experiment with and adjust our techniques to deliver good quality software quickly.

We aim to work on items of work in small chunks and have these features in use by our customers as quickly as possible, releasing software to production at least daily.

Typically we’ve been using a simple physical agile story wall, which is good when you’re working on small and fairly independent chunks of functionality, but we’ve found not all work fits this style of working.

We recently had an initiative that involved lots of small tasks with a lot of inter-dependencies between them – we needed to do data creation, data migration, back-end services migration and new front-end development. Our standard agile story wall, which looked like this, was very bad at showing us what was dependent on what:

Agile Story Wall

As a team we decided to experiment with using our agile story wall to map the dependencies between pieces of work, and also to show when we’re able to release the various pieces of functionality. The two-week sprints were less relevant, as we release as soon as we can. We still have some pieces of independent work (e.g. bug fixes and tech debt) which we kept tracking using the standard Kanban-style columns under our main board:

Dependency Wall v1

This gave us instant benefits: the dependencies were not only very clear but elastic. We used a whiteboard marker to draw lines between tasks, which meant that as we discovered new or unnecessary dependencies we could quickly remove or re-arrange these lines. But we also quickly realised that in mapping our dependencies out this way we lost one key attribute of our information radiator: we couldn’t see the status of our pieces of work at a quick glance, which the standard status-based wall gave us. Since we’re using a physical wall we could quickly adapt, so we added some sticky dots to indicate the status of individual cards: blue for in progress, a smaller yellow dot for in review, and green for done, since blue + yellow = green (I’m happy to take the credit for that). We also added red for blocked when we discovered our first blocked piece of work, and a legend in case the colours weren’t immediately obvious:

Dependency Wall v2

Once our 4-week initiative was over we found the dependency wall so useful that we decided to continue using it for the main focus of each sprint, and to keep using the standard status-based columns for less critical things. We’ll continue to refine and iterate.

Lessons Learned:

  1. Having an old-fashioned physical agile story wall is powerful in a lot of ways, and one of the most powerful things about it is how easy it is to modify and adapt your wall to whatever suits you best at the time: we couldn’t have achieved what we did if we were using a digital board like JIRA or Trello.
  2. Standard agile story walls are good for showing the status of work, but are very weak at showing interdependencies between stories and tasks – the major software solutions suffer from this too.
  3. A software team being agile isn’t just about doing sprint planning, stand-ups, a story wall and retrospectives from a textbook. It’s about delivering software rapidly in the best possible way for your team and your context – and continually experimenting with and adjusting your ways of working is crucial to this.

Categories
Automated Acceptance Testing Automated Testing

Identifying elements having different states in automated e2e tests

I was recently writing an automated end-to-end (e2e) test that expanded a section and then took an action within the expanded content.

A very simplified example is the details HTML tag, which expands to show content:

When do you start on red and stop on green?

When you’re eating a watermelon!
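In markup, that example looks something like this (the joke id is an assumption, chosen to match the selectors in the test code below):

<details id="joke">
	<summary>When do you start on red and stop on green?</summary>
	<p>When you're eating a watermelon!</p>
</details>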

I initially wrote the test so that it clicked the details element each time the test ran, something like:

async expandJoke() {
	// Clicks the details element regardless of its current state.
	return await this.driver.findElement( by.css( '#joke' ) ).click();
}

The problem with this straightforward approach is that if the element is already open the click will still be performed, collapsing the section, and the test will subsequently fail when it tries to access the content within the now-collapsed section 😾

I wanted to make sure the test was as resilient and consistent as possible, so instead of just assuming the section was collapsed, I then wrote a function like this to expand the element only if it wasn’t already expanded, and then continue the test:

async expandJokeIfNecessary() {
	// The 'open' attribute is only present when the details element is expanded.
	const open = await this.driver.findElement( by.css( '#joke' ) ).getAttribute( 'open' );
	if ( !open ) {
		await this.expandJoke();
	}
}

The benefit of this is that the test is more resilient, since it caters for the UI being either open or closed by checking the open attribute and acting accordingly. But I realised the problem with this approach: our user experience expects this section to be closed on page load (the punchline is hidden on page load in our example), so if we introduced a bug where we immediately displayed the punchline, our test would completely miss it, since it would just skip expanding the section!

Both these approaches use the same selector to refer to a web element which can have two entirely different states: open and not open.

Keeping this in mind, the best solution I could come up with was a selector that combines the element and the state: the test will fail to click the element to expand the section if it can’t be found, including when the element is already expanded. This gives us simple code that is deterministic and fails at the appropriate time:

async assertAndExpandJoke() {
	// Only matches the collapsed details element, so this fails fast
	// if the section is unexpectedly already expanded.
	return await this.driver.findElement( by.css( '#joke:not([open])' ) ).click();
}
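In the test itself this reads naturally – a sketch, assuming a hypothetical jokePage page object exposing the methods above, plus Node’s built-in assert:

const assert = require( 'assert' );

// Fails immediately if the section is unexpectedly already expanded.
await jokePage.assertAndExpandJoke();

// The punchline should now be visible.
const joke = await jokePage.driver.findElement( by.css( '#joke' ) ).getText();
assert( joke.includes( 'watermelon' ) );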

What’s your preferred approach here? How do you identify elements in different states?

All demo code is available here: #