
Visualising Technical Debt (with the Debt Boat)

Being successful in software delivery requires a team to constantly balance technical debt against feature delivery. Teams often fall into the trap of delivering features too rapidly at the expense of ever-increasing technical debt, or of over-engineering a solution at the expense of shipping anything within a reasonable time-frame and getting real-world usage.

This is one of the key issues facing most software delivery teams I have worked on, as there are often different appetites for delivery versus debt throughout an organisation.

I was trying to come up with a way to present technical debt in a visual format that could be easily understood by different audiences: management, stakeholders and the development team.

One thing we value is shipping things: getting our products into the hands of customers so they can get value from them. Accumulating technical debt, both customer-facing and “under the covers”, may allow us to rapidly get features to our customers, but eventually the debt grows and affects our ability to deliver.

In the “shipping” theme I came up with the Debt Boat: a way to visually represent things that build up and eventually slow us down. I like splitting technical debt into two kinds: feature debt, things our customers want but we chose not to do, and system debt, technical things our customers (or product owners) can’t see (below the water line) but that increasingly slow us down.
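If it helps to make the split concrete, here’s a rough sketch in Python (all the names and the threshold are made up for illustration) of how items on the Debt Boat could be modelled:

```python
from dataclasses import dataclass
from enum import Enum


class DebtType(Enum):
    FEATURE = "feature"  # above the water line: things customers want but we chose not to do
    SYSTEM = "system"    # below the water line: technical issues customers can't see


@dataclass
class DebtItem:
    summary: str
    debt_type: DebtType


# The boat can only take so much debt "on board" before it sinks.
MAX_ON_BOARD = 10  # hypothetical threshold; pick one that suits your team

boat = [
    DebtItem("Bulk export that customers keep asking for", DebtType.FEATURE),
    DebtItem("No monitoring on the ordering service", DebtType.SYSTEM),
]

if len(boat) > MAX_ON_BOARD:
    print("We're sinking: pay down some debt before shipping more features.")
```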

We originally had this boat on our team’s physical wall to show there’s only so much technical debt we can take “on board” before we sink.

Now that we’re 100% remote (for now) I’ve created a Jamboard version. I’ve shared the template here – you can easily make a copy if you think your team would find this helpful.


AMA: testing and technical debt

Sean asks…

There’s a web team that is comprised of people who likely grew up being the smartest people in the room. Over time, their code base is reviewed by other folks who rely on them as the “oracles” who know all. Their code is right.

Any tips on making the business case for testing? Have you ever quantified the technical debt where you’ve worked? Any tips on when to start testing a project (e.g.: is there a rule of thumb for a size to break even)?

My response…

In my experience there are two questions a web team should be continually answering about the features they are building: are we building the right thing? and are we building the thing right? It sounds like it isn’t really a question of whether they are building the thing right, but rather whether they are building the right thing.

There’s zero point building something right if it’s not the right thing, and this is where I have seen a tester provide the most value: bringing a different mindset and asking questions early in the development process rather than just testing that something was built right at the end.

As Rands in Repose elegantly put it:

It’s not that QA can discover what is wrong, they intimately understand what is right and they unfailingly strive to push the product in that direction.

As for whether I’ve quantified the technical debt of a product: firstly, I really like how Martin Fowler categorises technical debt into quadrants along two dimensions: reckless/prudent and deliberate/inadvertent. Any form of reckless technical debt is fairly obviously the worst kind, and deliberate technical debt is better than inadvertent.
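As a rough illustration (my sketch, not Fowler’s), the quadrant a piece of debt lands in follows directly from those two attributes:

```python
def debt_quadrant(reckless: bool, inadvertent: bool) -> str:
    """Combine the two dimensions into one of Fowler's four quadrants."""
    return ("reckless" if reckless else "prudent") + "-" + \
           ("inadvertent" if inadvertent else "deliberate")


# "We don't have time for design" is reckless-deliberate;
# "Now we know how we should have done it" is prudent-inadvertent.
print(debt_quadrant(reckless=True, inadvertent=False))   # reckless-deliberate
print(debt_quadrant(reckless=False, inadvertent=True))   # prudent-inadvertent
```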

On one project I worked on we had a physical technical debt board on the wall which we used to list and reduce technical debt. It worked like this: we had a circle with sectors based either on architecture (database, services, UI etc.) or product function (authentication, admin, ordering etc.).

tech debt board

As soon as someone noticed some technical debt (e.g. lack of test coverage) they would immediately add it as a sticky note on the outer ring of the circle. Every few days, immediately following our daily stand-up, we’d have a technical debt talk where we’d move the items around, typically towards the centre of the board as they became more of an issue. They might also become a non-issue, in which case we’d tear them up.

When a technical debt issue made its way to the red hot centre (the core), we would add fixing that debt to a user story in the upcoming backlog, so that it was fixed as part of a user story in that area of our system. This avoided having non-functional user stories that weren’t delivering business value.
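If you wanted to see those mechanics written down, here’s a rough sketch in Python (the names and rings are made up; ours was a physical board) of stickies moving towards the core and being promoted into the backlog:

```python
from dataclasses import dataclass

RINGS = ["outer", "middle", "core"]  # the core is the red hot centre


@dataclass
class Sticky:
    summary: str
    sector: str          # architecture (database, services, UI) or product function
    ring: str = "outer"  # new debt always starts on the outer ring


def move_inwards(sticky: Sticky) -> None:
    """During the tech debt talk, move an item towards the centre."""
    position = RINGS.index(sticky.ring)
    if position < len(RINGS) - 1:
        sticky.ring = RINGS[position + 1]


def promote_core_items(board: list[Sticky], backlog: list[str]) -> None:
    """Items that reach the core get fixed as part of a user story in that area."""
    for sticky in list(board):
        if sticky.ring == "core":
            backlog.append(f"Fix '{sticky.summary}' within an upcoming {sticky.sector} story")
            board.remove(sticky)


board = [Sticky("lack of test coverage", sector="ordering")]
backlog: list[str] = []
move_inwards(board[0])  # becoming more of an issue
move_inwards(board[0])  # now in the core
promote_core_items(board, backlog)
print(backlog)  # ["Fix 'lack of test coverage' within an upcoming ordering story"]
```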

Doing this activity meant we were constantly ensuring our technical debt was prudent and deliberate.

We never quantified technical debt by measuring something about the code. If you’ve ever researched how to measure technical debt, you’ll know there are a lot of suggestions: measure duplicated code, measure unit test coverage, measure cyclomatic complexity (the number of unique paths through application code) and so on. But most teams I know of rely on a binary gut instinct: is this a good or bad codebase? Can we release new features quickly without introducing showstopper bugs?
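To give one of those suggestions some shape: a crude cyclomatic complexity count can be sketched with Python’s standard ast module (this is a simplification; real tools such as radon do a more careful job):

```python
import ast


def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity: 1 plus the number of decision points."""
    tree = ast.parse(source)
    decision_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                      ast.BoolOp, ast.IfExp)
    return 1 + sum(isinstance(node, decision_nodes) for node in ast.walk(tree))


code = """
def ship(feature, tested):
    if not tested:
        return "risky"
    for step in feature:
        if step.blocked:
            return "stuck"
    return "shipped"
"""
print(cyclomatic_complexity(code))  # 4: one base path plus three decision points
```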

We maintained a list of known technical debt that we’d constantly evaluate to make sure we were being deliberate. We could have counted how many issues we had on that board, but we were more interested in their content and whether they were something we would deliberately fix (or ignore).

Finally, I am not sure what your question about when to start testing and the size aspect means. I believe doing testing, or involving a tester, as early as possible, even if it’s just their input on designs/wireframes/prototypes, is the best thing you can possibly do: as I mentioned, you can avoid building the wrong things, which is far worse, IMO, than building the right things with some imperfections or technical debt.