The Six QA Hats

Article by Austin Ho
Read in 5 mins

Quality assurance does not belong to testers alone; it belongs to the whole team, so we need to empower everyone on the team with basic testing mindsets in a way that is memorable and enjoyable. Introducing the “Six QA Hats” – a mind-map with six branches, each representing a dimension of focus in testing. The “Six QA Hats” can help us effectively brainstorm and organise our next major testing activities.

Why hats? The idea was inspired by Dr. Edward De Bono’s “Six Thinking Hats”, a simple, effective technique for organising the thinking process systematically.

What are the “Six QA Hats”? They are the Context Hat, Functional Hat, Non-functional Hat, Risk Hat, Big Picture Hat and Good Practice Hat.

The first hat – Context Hat

Every story has a background and its own circumstances. Without knowing the context it is hard to understand the story correctly, let alone test it properly.

When I joined QA handover sessions, I observed that most people do not bother to ask: why do we need to implement this card? Who are the end users? And how do they interact with the system under test? Without this information it is very hard to do proper testing. The context is the foundation that guides us as to what tests need to be done and why we need them.

One useful technique for gathering context is to ask questions: there are no silly questions, so be brave enough to ask anything and clear up any assumptions.

The second hat – Functional Hat

To some people functional tests primarily focus on the Happy Path, especially on fast-paced software delivery teams. Apart from testing the Happy Path – what the system should do – we also need to test the Sad Path and Edge Cases – what the system should not do and where the boundaries are.
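The three paths can be sketched with a small, self-contained example. The `parse_price` function below is hypothetical, invented here for illustration, not part of any real system:

```python
def parse_price(text):
    """Parse a price string like '12.50' into cents; reject bad input."""
    try:
        value = float(text)
    except (TypeError, ValueError):
        raise ValueError(f"not a number: {text!r}")
    if value < 0:
        raise ValueError("price cannot be negative")
    return round(value * 100)

# Happy Path: what the system should do.
assert parse_price("12.50") == 1250

# Sad Path: what the system should not do.
try:
    parse_price("abc")
    assert False, "expected ValueError"
except ValueError:
    pass

# Edge Cases: where the boundaries are.
assert parse_price("0") == 0      # smallest valid price
try:
    parse_price("-0.01")          # just below the boundary
    assert False, "expected ValueError"
except ValueError:
    pass
```

Notice that the Edge Cases probe both sides of the zero boundary; that is where functional bugs tend to hide.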

Wearing this hat also reminds me of automated tests for regression purposes, such as automated unit tests, feature tests and integration tests. There are some good practices around test automation, which I will come back to in the last hat.

The third hat – Non-functional Hat

A system that functions is not necessarily a system that functions well, so we also need to think about the “quality” of the system.

Non-functional tests normally refer to usability, security, performance, availability, compatibility, vulnerability and robustness tests. If we do a great job of understanding the context, as described under the first hat, then we will probably know what types of non-functional tests we should do.

Here is my story: my team is currently working on a web-based Pricing Tool for internal users who are very good at MS Excel, so we built some Excel-like functions to let our users work with the website more productively, making the system more user friendly. Although the system is used by only two human beings, some microservices constantly consume its APIs, so we also do performance testing to make sure it is fast enough for HTTP(S) calls.

It is also worth mentioning that non-functional tests should be valued as much as functional tests, and should be considered as early as possible.

The fourth hat – Risk Hat

Risk testing is about finding problems that might happen and understanding the impact if they do. Risk-based testing has two parts: identifying the risks, and mitigating them.

In order to identify the risks, we need to think about what the risks are, how they are triggered, how likely they are to happen and how bad the impact would be if they did. One simple technique for finding risks is to keep a curious mind about potential failure scenarios. I always wonder what would happen to the whole system if some parts of it failed randomly. Think of a scenario we face daily: imagine we are crossing at a pedestrian crossing with a green light, yet we still, consciously or unconsciously, look around for any careless drivers who might put our lives in danger. Being alert and curious helps us find risks.

After figuring out what the risks are, the second part is to think about how to make them visible and work out a plan to mitigate them. Ask questions like: Is there any monitoring? Are there any logs? Are there any workarounds? Are there any resilience plans? Once risks are spotted, let’s quarantine or kill them.
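As a hedged sketch of these two parts, the wrapper below makes a failure visible through a log line and mitigates it with a fallback. The pricing functions are hypothetical, invented purely to show the pattern:

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("pricing")

def with_fallback(primary, fallback):
    """Call primary(); on failure, log it (visibility) and
    return a degraded-but-safe answer (mitigation)."""
    def wrapped(*args, **kwargs):
        try:
            return primary(*args, **kwargs)
        except Exception as exc:
            log.warning("primary failed, using fallback: %s", exc)
            return fallback(*args, **kwargs)
    return wrapped

def live_price(sku):
    # Simulate the risk firing: the upstream service is down.
    raise ConnectionError("pricing service down")

def cached_price(sku):
    # A stale-but-safe answer from a local cache.
    return {"sku-1": 1250}.get(sku)

get_price = with_fallback(live_price, cached_price)
assert get_price("sku-1") == 1250  # the system degrades instead of failing
```

The log line answers “is the risk visible?”, and the cached answer is one possible resilience plan; monitoring dashboards and alerts would build on the same signal.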

The fifth hat – Big Picture Hat

One of the objectives of testing is to understand the system under test in 360 degrees and gather enough information to support decision-making. Sometimes we need to step in and zoom in to scrutinise bits and pieces of the system, but we also need to be able to step back and zoom out to view the system as a whole.

In general the system under test is not a standalone application; it has upstream and downstream dependencies. We need to form a habit of asking whether changing something here will break anything there, thinking about what the end-to-end workflow looks like from the user’s perspective, and doing exploratory testing to understand the system from different angles.

The sixth hat – Good Practice Hat

Any profession has its good practices, and testing is no exception. I am sure you have your own, and so do I. Here are some that work for me.

1. Before starting to test, check that we are testing the right version in the right environment.

2. Prioritise testing activities by importance and risk, since we cannot give every test case the same spotlight.

3. Consider multiple factors, such as cost and benefit, when writing automated tests. Do not automate tests for the sake of automation, especially when the overall cost outweighs the benefit. Automated tests should be sustainable.

4. Provide feedback as early as we can. The earlier a bug is caught, the easier it is to fix.
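Point 2 above can be sketched as a simple likelihood-times-impact scoring exercise; the test names and weights below are made up for illustration, not drawn from a real suite:

```python
# Score each testing activity by likelihood x impact, then run the
# highest-scoring activities first. Scores are on a made-up 1-3 scale.
tests = [
    {"name": "price calculation", "likelihood": 3, "impact": 3},
    {"name": "export to Excel",   "likelihood": 2, "impact": 2},
    {"name": "footer copyright",  "likelihood": 1, "impact": 1},
]

for t in tests:
    t["score"] = t["likelihood"] * t["impact"]

ordered = sorted(tests, key=lambda t: t["score"], reverse=True)
assert [t["name"] for t in ordered] == [
    "price calculation", "export to Excel", "footer copyright"
]
```

Even a rough score like this forces the conversation about which tests deserve the spotlight when time runs short.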

Let me tell you a story about automated tests and what I learnt. There was a project to write UI-based automated tests for a legacy CRM system. The system was five years old and had been heavily customised over the years without proper unit test coverage, so deploying a new version had become a daunting task. We pulled in three QAs, who spent six months writing Cucumber tests. At the end of the project we had created around 200 test cases, which counted as a success at the time. But later on we found those UI automation tests caused us more trouble than benefit – it cost us more time to fix their random flakiness than it would have to test manually or in a different way.

In retrospect I identified two basic principles for automated tests: 1. avoid heavy UI-based automated tests; 2. push automation down to as low a level as we can. Don’t write functional tests if unit tests are good enough to cover what we want to test; don’t write UI tests if API tests are good enough to give us peace of mind.
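The “push automation down” principle can be illustrated with a hypothetical example: instead of a UI test that logs in, fills a form and reads a rendered total from the browser, extract the business rule into a pure function and cover it at the unit level. The discount rule here is invented for the sketch:

```python
def bulk_discount(unit_price_cents, quantity):
    """10% off orders of 100 units or more; a made-up rule for illustration."""
    total = unit_price_cents * quantity
    if quantity >= 100:
        total = total * 90 // 100
    return total

# These unit tests run in milliseconds and cannot be flaky the way a
# browser-driven Cucumber scenario can.
assert bulk_discount(500, 1) == 500
assert bulk_discount(500, 99) == 49500    # just below the discount boundary
assert bulk_discount(500, 100) == 45000   # 10% off kicks in at the boundary
```

One thin UI test can still confirm the total is wired onto the page; the boundary logic itself no longer needs the browser at all.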

Finally, the “Six QA Hats” is a guide, not a complete testing checklist. You can come up with your own mind-map to organise your testing errands, but you are more than welcome to wear the hats I designed here. And please bear in mind that not all the testing activities in the mind-map apply to every story under test; the mind-map serves as a conversation starter, and how far we take the testing is decided case by case.
