Hello friend! It's pie-mail time again!
I'm excited to share I'm going to be speaking at NZ Testing Conference in Wellington later this year!
It's been a while since I've been to an 'in person' conference - I'm really looking forward to it. I'm preparing a new talk for this one too, and I think it should be a fun one - for those of you in NZ, I hope to see you there!
(Do check out the other speakers too - I'm lucky to be sharing the stage with so many talented individuals - it's going to be great!)
Of course, COVID is still hanging over us. I'm reasonably confident that by November, NZ will be back to 'Level One'. But, worst comes to worst, I know the organising team has contingency plans in place too - so, don't let that put you off. Early bird ticket sales are on now!
Until then, here's your bi-weekly news from my side. Hope you're all staying safe out there!
💸 Technical Debt 💸
The theme for me this week has been Technical Debt.
I feel like Technical Debt is one of those things that you don't learn about at university, and yet comes up all the time in industry.
What is Technical Debt?
To me, it's when you choose not to attend to something in your product now, knowing you'll have to come back to it later.
This could be:
- intentionally leaving a bug in a feature
- choosing not to upgrade part of your infrastructure
- writing some hacky code that won't be maintainable later on
- and lots more!
Sometimes there might be good reasons for incurring technical debt. In my experience, the most common reason is speed - taking shortcuts to deliver something faster. This can be a sensible decision!
The thing is, at some point most Technical Debt needs to be 'paid back'. I find that testers come up against this a lot. For example, I'm sure many of us have come across bugs that have been determined "not important enough to fix now".
I think an important skill for testers is learning how to make a case for addressing Technical Debt. Often it's not enough to say "there's a bug, and it needs fixing". The business - often represented by a Product Manager - will want to hear why a bug needs fixing.
To make a strong case, it's important to recognise what risk a piece of technical debt poses to the company. It's even better if you can put a dollar value on it!
Here are a few ideas / examples:
- Instead of "this bug needs fixing", point out the number of support tickets the bug is causing each week, and what that costs the organisation.
- If there's a piece of infrastructure that needs upgrading, point out the risk to the business that being on an old version can cause - and what it could cost the company. (e.g. security upgrades)
- If there's some hacky code in the product, point out the amount of engineering time that is wasted trying to work around it - and what the company could save if they spent a little time improving it.
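To make the dollar-value idea concrete, here's a rough back-of-the-envelope calculation for the support-ticket example above. All the figures are invented for illustration - plug in your own organisation's numbers:

```python
# Rough cost estimate for a recurring bug (all figures are made-up examples).
tickets_per_week = 12          # support tickets caused by the bug
minutes_per_ticket = 20        # average support time spent per ticket
support_hourly_rate = 40.0     # fully-loaded cost of a support agent, $/hour

weekly_cost = tickets_per_week * (minutes_per_ticket / 60) * support_hourly_rate
yearly_cost = weekly_cost * 52

print(f"~${weekly_cost:.0f}/week, ~${yearly_cost:.0f}/year")
# → ~$160/week, ~$8320/year
```

Even with modest numbers, "this bug costs us roughly $8,000 a year in support time" is a much stronger opener than "this bug needs fixing".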
Dealing with lots of Technical Debt can be difficult!
It's part of our responsibility as testers to highlight the risks it can pose, and work with our teams to help address it - while still building new features and doing all the other stuff that is important to our business.
✨ Some interesting links ✨
The Testing Trade-Off Triangle:
This is an interesting 'conversation starter' from Paul Swail, where he identifies three factors that must be balanced for a good automation suite. It's a short read, worth checking out.
Manuel Matuzovic reminds us all that not everything has to be a <div>.
What makes a good automated test?
Kristin Jackvony with yet another great post on testing. This one is six key indicators you can use to gauge whether your automated tests are 'good' or not.
What do you say if you don't say "manual" testing?
The phrase "manual testing" is one of my biggest frustrations in the industry, and the phrase is rampant. I'd love to see it stamped out. Michael Bolton puts it better than I can.
The iPhone charger with a built in keylogger
Remember that time you were at the airport and you needed to charge your phone and you borrowed a cable off a kind stranger? Maybe don't do that any more :)
🤔 Question time 🤔
A slight change this week!
Instead of a puzzle, I have a question to ask and I'd really love to hear your answers!
The question is: should a failing test block a deployment?
What I mean is - if you're deploying to production, and a test fails - should you still be able to complete the deployment?
What are your thoughts?
Reply and let me know! I'll include the best answers in a blog post.
(And I know... as testers, you'll be tempted to say "it depends" - but try not to sit on the fence! Answer according to the context of your own organisation, if it helps!)
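To make the "strict" end of the spectrum concrete, here's a toy sketch of a quality gate where any test failure blocks the deployment outright. This is a hypothetical model for discussion, not a real pipeline - real setups often add overrides, flaky-test retries, or manual approval steps:

```python
def gate_deployment(test_results: list[bool]) -> bool:
    """A strict quality gate: deploy only if every single test passed.

    (Toy model for illustration - each entry in test_results is one
    test's pass/fail outcome.)
    """
    return all(test_results)

print(gate_deployment([True, True, True]))   # all green: deploy proceeds
print(gate_deployment([True, False, True]))  # one failure: deployment blocked
```

The interesting debate is whether `all()` is really the right policy - or whether a human should be able to override it.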
🎪 Events coming up 🎪
Events for those of you in New Zealand:
NZ Testing Conference (November 19)
As mentioned earlier, NZ Testing Conference is coming. I've heard there's a Guns N' Roses concert the same weekend, but I also heard their last concert here was very bad. Trust me, the testing conference is the better option.
Events for anyone anywhere!
Observability in a distributed environment (Sep 8)
Ministry of Testing Auckland are hosting Manoj Kumar to talk all things Observability. A really important topic in today's testing landscape. We're still in Level 4 lockdown, so, this one will be online!
Growing an Experiment-driven Quality Culture (Sep 23)
Practitest are hosting the amazing Elisabeth Hocke, to talk about how she used experimentation in her product team to improve their approach to quality.
A Practical Guide to Accessibility Testing (Sep 24)
This one is hosted by Ministry of Testing Philippines - join Rowena Calata as she teaches on how to get started with accessibility testing!
👋 Thanks for reading! 👋
Still in lockdown, but the numbers are going in the right direction. Lockdown can be tough on your mental health, so please, look after yourselves, and reach out if you need to talk to someone. I'm always happy to chat.
James a.k.a. JPie 🥧