Categories
Agile Software Dev, Software, Software Testing

Moving towards a quarterly team story wall

One of the key facets of effective software delivery is continuous improvement to team practices.

The reason I believe physical team walls are so effective in continuous team improvement is that they both reflect good team practices, and drive good team practices. That is, our wall both displays how we’re working, and improves how we work.

If your team is improving how you’re doing things then chances are your wall will look different to how it looked six months ago.

In September I shared how we were using our story wall to display dependencies between tasks for more complex pieces of work.

Our team wall as at September 2019

We’ve since made some improvements to the wall that have continued to increase our effectiveness as a team.

We work in quarterly planning cycles, with fortnightly sprints towards our goals and frequent software releases (typically once or twice a day).

The nice thing about our quarterly planning cycles is that we can neatly fit six sprints within a quarter (12 weeks).

Since the wall represents what we’re doing, and we have this quarterly focus, we thought it would be a good idea to represent the full quarter on our wall. This means our wall currently looks something like:

Quarterly wall

If you zoom into a single sprint, it looks like this:

Zoomed into one sprint

Some of the important aspects of the design include:

  1. We put colour-coded epics across the top of our board that roughly show when we plan to start each epic. An epic may not start at the beginning of a sprint, since epics don’t always fit neatly within a sprint and we don’t wait for a new sprint to start a new epic.
  2. Task card colours match the epic to which they belong, except for white cards, which are tasks unrelated to an epic – for example, tech debt or a production fix.
  3. Each task card is exactly three columns wide. This is because each column is one work day, and we try to keep our cycle time – the time it takes to pick up a task and merge/release it – to about three work days. If a task takes much longer than three work days, it’s a good indication it hasn’t been broken down enough; if it’s much quicker, we may be creating unnecessary overhead. The benefit is more consistent planning, plus easier effort tracking: we can see at a glance roughly how much effort an epic took by looking at its coloured tickets.
  4. Each task card has an FE/BE label, a JIRA reference, the person working on it, and one or two stickers representing status.
  5. We’ve continued our status dots – blue for in progress, with a smaller yellow sticker to indicate in review; blue and yellow make green, which means complete. We also use red for blocked tasks, and have added a new purple/pink sticker with a black star to indicate a tech-debt-related task.
  6. We move the pink ribbon along each day so we can see at a glance where we are in the sprint/quarter.
  7. We have rows for team leave, for milestones (such as when we enabled a new feature), and for public holidays and team events.
  8. We continue to display our sprint goals and action items clearly at the top of the wall, so we can refer back to them during our daily stand-up to check in on how we’re going.
  9. One extra thing we’ve recently started doing, which isn’t represented in the diagram above, is shifting the cards of a completed sprint to the bottom of the wall (in the same columns), so we keep a clear future focus whilst still having a historical view.

We’ve found that continually improving our wall reflects how our practices have improved, and we’ll continue to make improvements as we go. I have no idea how it will look in six months’ time.

How have you adapted a typical agile wall for your team? How different does it look today compared to six months ago?

Categories
Software Testing

GitHub & Bitbucket

We use Git on Bitbucket in my current role, and I didn’t realise how much I liked using GitHub until I started using Bitbucket on a regular basis to commit and test code changes.

The biggest difference is how these systems handle squashing commits into a master branch.

With Bitbucket you can do the usual approach of multiple commits on a branch/pull request:

When you go to merge this to master, you can choose to squash the commits:

which is a nice way to keep a cleaner commit history on the master branch:

However if you look at the branch/PR now that it is merged you will notice you’ve lost all commit history! 😿

This has been super frustrating when we’re trying to diagnose where an issue was introduced during the development of a change.

Comparing this same workflow to GitHub, you can see individual commits against a branch, and squash these into master:

After merging you can still see the full commit history on the PR and branch:

and it is squashed on the master commit history:
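
For the command-line inclined, here’s roughly what a squash merge looks like outside either UI – a minimal sketch assuming a hypothetical feature branch named my-feature. It also hints at one possible workaround: the squashed commit on master doesn’t reference the branch, so keeping a ref around (such as a tag) preserves the detailed history:

# squash the branch's commits into a single commit on master
git checkout master
git merge --squash my-feature
git commit -m "My feature (squashed)"

# possible workaround: tag the branch tip before deleting it,
# so the individual commits stay reachable for later archaeology
git tag archive/my-feature my-feature
git branch -D my-feature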

Has anyone else noticed this with Bitbucket? Any known workarounds to keep commit history on branches/PRs?

Categories
Software Testing

→ The Rise of the Software Verifier

https://medium.com/@jarbon/testers-dont-test-anymore-71448ab87965

I found this article rather interesting. I’m still not sure if some of it is satire, forgive me if I misinterpreted it.

“DevOps has become so sophisticated that there is little fear of bugs. DevOps teams can now deploy in increments, monitor logs for misbehavior, and push a new version with fixes so fast that only a few users are ever affected. Modern software development has squeezed the testers out of testing.

Features are more important than quality when teams are moving fast. Frankly, when a modern tester finds a crashing bug with strange, goofy, or non-sensical input, the development team often just groans and sets the priority of the bug to the level at which it will never actually get fixed. The art of testing and finding obscure bugs just isn’t appreciated anymore. As a result, testers today spend 80% of their time verifying basic software features, and only 20% of their time trying to break the software.”

The author doesn’t say where the 80:20 figures came from, but the testers I’ve worked with for the last five years have spent zero time on manual regression verification and most of their time actually testing the software we were developing. How did we achieve this? Not by splitting our team into testers and verifiers as the author suggests:

What to do about all this? The fix is a pretty obvious one. Software Verification is important. Software Testing is important. But, they are very different jobs. We should just call things what they are, and split the field in two. Software testers who spend their day trying to break large pieces of important software, and software verifiers, who spend their time making sure apps behave as expected day-to-day should be recognized for what they are actually doing. The world needs to see the rise of the “Software Verifier”.

We did this by automating enough tests that we were confident to release our software frequently without introducing major regressions. This wasn’t 100% test coverage; it was just enough coverage to avoid human verification. We obviously spent effort maintaining these tests, but that was a whole-team effort, and it freed up the rest of our time for testing the software and looking for real-life bugs using human techniques.
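
To make that concrete, a “just enough” end-to-end check written in Mocha with WebDriverJS might look something like the sketch below. This is purely illustrative – it’s not from our actual suite, and the URL and selector are made up:

const { Builder, By, until } = require('selenium-webdriver');
const assert = require('assert');

describe('Critical flow: the editor loads', function () {
  this.timeout(30000);
  let driver;

  before(async function () {
    driver = await new Builder().forBrowser('chrome').build();
  });

  after(async function () {
    await driver.quit();
  });

  it('shows the editor title field', async function () {
    // hypothetical URL and selector – swap in your own critical flow
    await driver.get('https://example.com/editor');
    const title = await driver.wait(
      until.elementLocated(By.css('.editor-title')), 10000
    );
    assert.ok(await title.isDisplayed());
  });
});

A handful of checks like this across your most important flows is often enough to catch the regressions that matter, without chasing 100% coverage.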

Another thing I noted about the article was the use of the graph to show decreasing interest in software testing:

But even their interest in Software Testing is fading fast…

[Google Trends chart: search interest in “software testing”, January 2004 – May 2018]

 
This also applies to software in general, perhaps even more dramatically:

[Google Trends chart: search interest in “software”, January 2004 – May 2018]

I don’t think there’s a decreasing interest in software testing, or software, but rather these have become more commonplace and more commoditised, so people need to search for these less.

Categories
Automated Testing, Career, Software Testing

Creating a skills-matrix for t-shaped testers

I believe the expression “jack of all trades, master of none” is a misnomer, as I’ve mentioned previously. Being good at two or more complementary skills is better than being excellent at just one, in my opinion.

But what about being excellent at one skill, and still being good at two or more? Why can’t we be both?

Jason Yip describes a T-shaped person and the benefits that having t-shaped people on teams brings:

A T-shaped person is capable in many things and expert in, at least, one.
As opposed to an expert in one thing (I-shaped) or a “jack of all trades, master of none” generalist, a “t-shaped person” is an expert in at least one thing but also somewhat capable in many other things. An alternate phrase for “t-shaped” is “generalizing specialist”.

Image by Jason Yip

Ideally we’d like to have a team of t-shaped testers in Flow Patrol at Automattic. But how do we get to this end goal?

I recently embarked on an exercise to measure and benchmark our skills and do just that with our team. Here are the steps we took.

Step One – Devise Desired Team Skills

The first thing we did was come up with a list of skills that we have in the team and would like to have in the team. These can be ‘hard’ skills, like a specific programming language, and ‘soft’ skills, like triaging bugs. In a standard co-located team this would be as easy as conducting a brainstorming session and using affinity grouping to discover these skills. In a distributed environment, I wrote a blog post to my team’s channel and had individual members comment with a list of skills they thought appropriate; I then did the grouping and came up with a draft list of skills and groups.

Step Two – Self-assess against a team skills matrix

Once I had a final list of skills and groups (see below for the full list), I put together a matrix (in a Google Spreadsheet) that listed team members on the x-axis and the skills on the y-axis, and came up with a skill-level rating. Our internal systems use a three-level scale (Newbie, Comfortable, Expert), which we didn’t think was broad enough, so we decided upon five levels:

1. Limited
2. Basic
3. Good
4. Strong
5. Expert

 

Team Skills Matrix

I hadn’t seen Jason Yip’s visual representation at that point in time; otherwise I might have used something like it, as it has five similar levels:

Image by Jason Yip

Step Three – Publish results and cross-skill

Once we had the self-assessments done, we could publish the data within our organisation and use the benchmark to cross-skill people in the team. In a co-located environment this could involve pair programming; in a distributed one it could involve mentoring and reviewing other team members’ work.
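
One nice side effect of keeping the matrix in a spreadsheet is that cross-skilling candidates can be pulled out programmatically. Here’s a minimal sketch in Node.js, using a made-up extract of the matrix, that flags skills where nobody on the team is yet Strong or Expert:

// Hypothetical extract of the matrix: skill -> one 1–5 rating per team member
// (1 = Limited, 2 = Basic, 3 = Good, 4 = Strong, 5 = Expert)
const matrix = {
  'JavaScript': [5, 3, 2, 4],
  'Accessibility Testing': [2, 1, 3, 2],
  'iOS Automated Testing': [1, 2, 1, 1],
};

// Flag skills where the team's best rating is below Strong (4)
for (const [skill, ratings] of Object.entries(matrix)) {
  const best = Math.max(...ratings);
  if (best < 4) {
    console.log(`${skill}: highest rating is ${best} – a cross-skilling candidate`);
  }
}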

Have you done a skills matrix for your team? How did you do it? What did you discover?


Full List of Skills and Skill Groups for Flow Patrol at Automattic

Automattic Product Knowledge
WordPress Core
WordPress.com Simple Sites
WordPress.com Atomic Sites
Jetpack
Woocommerce
Simplenote
Mobile Apps
Human Software Testing
Flow Mapping
Bug Triage & Prioritization
Exploratory Testing (pre-release)
Dogfooding
Cross-browser Cross-device Testing
Facilitating Beta/Community Testing
Facilitating User Testing
Usability Testing
Accessibility Testing
Automated Testing
Automated End-to-end Browser Testing
Automated API/Integration Testing
Automated Unit Testing
Automated Visual Regression Testing
Android Automated Testing
iOS Automated Testing
Programming Languages
JavaScript
PHP
Shell Scripting
Objective C
Swift
Android/Kotlin
Testing Tools/Frameworks
Mocha
WebDriverJS
Git/Github
CircleCI
TravisCI
Team City (CI)
Mailosaur
Applitools
VIP Go
Docker
Other
i18n Testing
Performance Testing
Security Testing
User advocacy – empathy and compassion
Mentoring/onboarding
Project Management
Product Management
Product Development 
Calypso
Jetpack
WP.com API PHP
Woocommerce
iOS App
Android App

 

Categories
Software Testing

The blurry line between test and development

One of the themes I talked about during my presentation in Wellington was the blurry line between test and development in a distributed environment like Automattic.

I was recently having trouble with a complex method in our WordPress.com e2e test page objects, so I used my skills as a developer and wrote a change to our user interface which adds a data attribute to the HTML element.

This meant our page object method immediately went from this:

Categories
Software Testing

Test for Real Life

“Most of us are anxious pretty much all the time – but frequently imagine that other people aren’t. It’s time to admit the truth. Anxiety is just a basic fact about being human.”

~ Alain de Botton

We are all human; we are all worried and anxious pretty much all the time – people just don’t tell you that they are. We wear masks and we hide it well.

But why do we test like we’re not anxious or worried? Why don’t we test for real life?

Categories
Automated Testing, Software Testing

Make sure your end-to-end tests align with your company’s strategy

I recently embarked on writing some new automated end-to-end tests for an existing product that has been around for some time but has never had e2e automated tests written for it.

Categories
bugs, Software Testing

Should you close old bugs?

Do you actively close bugs because they reach a certain age?

One of the (many) things I love about Automattic is the attention that is given to bug triage. Bug triage is the habit of continually grooming our bug lists to ensure they remain relevant, up to date and reflective of the current state of our products. A benefit of this is that an up-to-date and prioritized bug list translates directly into a backlog of maintenance work items for a product development team.

Categories
Automated Testing, Software Testing, TDD

100% Code Coverage?

Which codebase is better?

Categories
Automated Testing, Career, Software Testing, Test Automation

(Not) Lying about Writing Code

I recently saw this quote in an article by Nikita Hasis on Medium.

“If Your Test Leaders Aren’t Telling You To Write Code, They Are Lying!
Even if it’s by omission.

There’s this argument, almost daily, about whether software testers should learn programming. I’ll jump right in. It is unimaginable that someone would tell you NOT to learn something. That’s the first, and probably shittiest lie that inexperienced testers get fed. It’s further unimaginable, and downright irresponsible to tell people not to learn something that is very clearly where a large, well-paying, and above all interesting part of the industry is heading. Wanna work on innovative, data-driven projects with smart and driven people? You probably need to pull up terminal and at least get your toes wet, y’all.

The worst part of the lie is that it imposes that coding is a difficult grind and will only cause more problems than it solves. I even saw Alister Scott’s blog post referenced as an argument against coding, ironic as it is.”

~ Nikita Hasis (Medium)

Since Medium is a walled garden that doesn’t allow you to leave a comment without creating an account, I’ll leave my response here instead (where anyone is free to comment however they like).