Smoke Test on a Smoke Break
This post originally appeared March 30, 2011 on BetterProjects.net
A few weeks ago, I made one of those blunders all leaders new to a team make… I made an assumption, a bad one, that everyone was on the same page regarding our testing process. It wasn’t that the team didn’t know how to perform a smoke test, or that they didn’t understand the necessity of performing one; it’s just that I wasn’t clear at the outset about what my expectations were for the release.
Two days into the testing, someone finally got to our primary testing path for a release… and found that the entire build was broken. This is something that should have been noticed within the first 10 minutes, not two full days of testing later. It was akin to trying to drive away in a car that had no wheels. The problem was so obvious that no one should have missed it. Yet we did.
Sometimes it’s so easy to get caught up in all the new bling of a release that we forget what it means to hit the fundamentals first. As the leader, the failure was my responsibility. I reiterated to the team why the smoke test was important; we all renewed our commitment to do this testing at the beginning of every build cycle, and then we went and did it.
Building it Better
But this miss on our part made me really start to think about what makes a good smoke test. Before our little hiccup, this had been mostly ad hoc: we each took a look at the product, thought about the things that were vital to our users achieving their goals, hit the high points, and then ran through as many things, from highest to lowest priority, as we had time for. We were pretty good at this, too (when we remembered to do it).
This just wasn’t good enough, in light of the problems that this miss caused. Thankfully, I have a team of really great testers, and several others were thinking along the same lines as I was. Last week, I finally took a few minutes to start outlining what it was I wanted in our smoke tests and how exactly we should go about doing them. No more than an hour later, with me never having said a word to my team about my personal brainstorming session, two team members walked up and presented their ideas for a smoke test. The smile that flashed across my face likely blinded people three aisles down the hallway.
I reviewed their initial draft and liked what I saw. They had basically nailed it. There were a few tweaks that needed to be addressed, but the list for our environment was mostly all there. With the hard part of creating the scenarios done, I started to think this situation would make a good blog post about why I was so happy about this particular smoke test. What follows are my thoughts on what makes a good smoke test.
Duration
It needs to take no longer than the worst cigarette addict’s smoke break. In the time that the most fanatical nicotine fiend takes to burn through a few Marlboros, the team should know if the build is good enough for full testing or not. If the smoke test fails, snuff out the build and bum another from the development team.
For our team, we set this time at 20 minutes. The development team has usually taken the build for a spin in the testing environment, so it’s pretty rare, by the time we are set loose, that there is a problem that hasn’t already been found and addressed. Still, it’s our first shot at the build, and fresh eyes will routinely find issues that are overlooked by those who have been staring at it for hours.
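To make that time box concrete, here is a minimal sketch (in Python, with placeholder check functions I invented for illustration) of what enforcing a 20-minute budget could look like: checks run from highest to lowest priority, the run stops when the budget is spent, and a failed critical check kills the build on the spot.

```python
import time

SMOKE_BUDGET_SECONDS = 20 * 60  # our 20-minute "smoke break"

def run_smoke(checks):
    """Run (name, is_critical, check_fn) tuples in priority order within the budget."""
    deadline = time.monotonic() + SMOKE_BUDGET_SECONDS
    for name, is_critical, check in checks:
        if time.monotonic() >= deadline:
            print(f"Time box spent; skipping remaining checks starting with '{name}'.")
            break
        try:
            check()
            print(f"PASS  {name}")
        except Exception as exc:
            print(f"FAIL  {name}: {exc}")
            if is_critical:
                print("Critical path is broken -- snuff out this build and bum another.")
                return False
    return True

# Placeholder checks, listed from highest to lowest priority.
def can_place_order(): ...
def can_search_catalog(): ...
def can_view_order_history(): ...

if __name__ == "__main__":
    run_smoke([
        ("place an order", True, can_place_order),
        ("search the catalog", False, can_search_catalog),
        ("view order history", False, can_view_order_history),
    ])
```

The exact mechanism matters far less than the discipline: the list is ordered, the clock is visible, and a broken critical path ends the smoke test right there.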
Priority
What is the one thing that, if your users can’t do it, makes the product largely useless to them? If this isn’t the first item on your smoke test list, you probably need to rethink your list. Is it a piece of desktop software? If so, I’d start with a clean install (or an upgrade, if you mostly have a static user base). If the build won’t even apply to the test environment, there isn’t a lot else you can do.
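For desktop software, that first item can be as mechanical as "does the build even install?" Here is a rough sketch of that check in Python; the installer path, the /S silent flag, and the install directory are all placeholders, not anything from a real product.

```python
import subprocess
from pathlib import Path

# Placeholder paths and flags; substitute whatever your installer actually uses.
INSTALLER = Path(r"C:\builds\product-setup.exe")
INSTALL_DIR = Path(r"C:\Program Files\Product")

def clean_install_check():
    """First smoke item for desktop software: does the build even install?"""
    result = subprocess.run([str(INSTALLER), "/S"], timeout=300)  # "/S" assumed silent-install flag
    if result.returncode != 0:
        raise RuntimeError(f"Installer exited with code {result.returncode}")
    if not (INSTALL_DIR / "product.exe").exists():
        raise RuntimeError("Main executable missing after install")
```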
For web testing, that’s a bit different, especially in larger development shops like the one where I work. We have an Ops team that handles the deployments and a development team that makes the builds. Each build has passed through at least two teams before my team gets it, so install issues are basically found before I ever see them.
Given that we’re an ecommerce site, ordering is the number one priority for our customers. If they can’t place orders, we don’t make money. We begin our smoke testing here. We look at the many different types of orders and the many different ways they can make it into the ordering path, and then make sure these all work.
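As a sketch of what that could look like in an automated suite, here is a pytest-style version of the idea; the order types, entry paths, and place_order stub are invented for illustration, and the real thing would drive the actual ordering API or UI.

```python
import types
import pytest

# Invented order types and entry paths; use whatever matters on your site.
ORDER_TYPES = ["standard", "gift", "subscription"]
ENTRY_PATHS = ["product_page", "search_results", "saved_cart"]

def place_order(order_type, entry_path):
    """Stand-in stub; the real version would drive the ordering API or UI."""
    return types.SimpleNamespace(order_number="TEST-0001")

@pytest.mark.smoke  # register the marker in pytest.ini to keep pytest quiet
@pytest.mark.parametrize("order_type", ORDER_TYPES)
@pytest.mark.parametrize("entry_path", ENTRY_PATHS)
def test_order_can_be_placed(order_type, entry_path):
    # Critical path: every order type, from every entry path, must complete.
    confirmation = place_order(order_type, entry_path)
    assert confirmation.order_number, "order did not produce a confirmation number"
```

Running pytest -m smoke then gives the team exactly the critical-path subset for the 20-minute window.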
Once the critical paths have been tested, it’s time to start digging into the tasks of lesser importance. Add in scenarios until you fill up your allotted time. Don’t choose quantity over quality, but aim for a happy medium that allows you to cover most of your system while hitting the really important parts in (relative) depth.
Latitude
But even for a website, not everything is about the web. Mobile and app channels are becoming increasingly important in our highly connected society. If our smoke test only concentrated on one ordering channel to the exclusion of all the others, we might be neglecting some of our most important customers’ needs.
For installed software testing, latitude testing could be just as important. Imagine if you were testing a new video codec and decided to only test it with a single media player front end during your smoke test. Video codecs are important due to their ability to plug in to many different OSes and architectures, enabling people with all different types of environments to view the same content. Covering as many platforms and devices as possible, especially the ones that are common among your users, is vitally important.
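One way to keep that latitude honest in an automated smoke suite is to parameterize the same critical check across channels or platforms. Another small sketch, again with invented channel names and a stub client factory standing in for real per-channel drivers:

```python
import types
import pytest

# Illustrative channel list; swap in the channels your users actually order through.
CHANNELS = ["web", "mobile_web", "ios_app", "android_app"]

def make_client(channel):
    """Stand-in stub; the real version would return a driver for each channel."""
    place = lambda order_type: types.SimpleNamespace(order_number="TEST-0001")
    return types.SimpleNamespace(place_order=place)

@pytest.mark.smoke
@pytest.mark.parametrize("channel", CHANNELS)
def test_order_smoke_per_channel(channel):
    # Same critical ordering path, exercised once per channel.
    confirmation = make_client(channel).place_order("standard")
    assert confirmation.order_number
```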
Certainty
At some point, you have to make a call… is this build good enough to continue testing, or not? If it isn’t, you know now and not two days later, when you find yourself behind schedule instead of on target or maybe even ahead of where you believed you would be. If everything tests out, you’re not at the end, but you are at the end of the beginning.