Just published a free mini-course to introduce folks to Behaviour Driven Development in 3 Easy Steps: http://tiny.cc/bddin3
With reference to my original post on The Streetlight Effect: I still see this effect today.
Recently, I was discussing the testing of a mobile app with a QA who is a very experienced black box tester. I am currently encouraging and coaching this tester to learn more about programming, code design, code architecture etc., as I think any knowledge in these areas can make you a more effective and efficient tester.
This tester was seeing some problems when interacting with the mobile app UI and was going to raise a bug with the developer. I asked if they were observing the app's requests to and responses from the backend API, i.e. the requests and data being sent and received, over wifi or cellular data, between the mobile app and our cloud-based backend server API. The tester was not sure how to look at that traffic. The streetlight effect: I will look here (the UI) for my keys (any issues) because I know how to look here (the UI). So I shared a how-to I had written years ago for Charles Proxy, which allows the tester to inspect the HTTPS requests and their contents, along with timings and more, by proxying all requests made from, and responses to, the mobile app through their laptop. This enabled the tester to see the requests that were failing, as well as the resulting error messages, and so to raise a much more detailed bug; the developer was able to get straight to the problem and fix it quickly. (As opposed to a bug like "when I do this in the app, I don't get the UI screen/data I was expecting to see displayed", where the developer would have to first reproduce the problem and watch the traffic themselves to pinpoint it – taking much longer, possibly with some back and forth to clarify reproduction steps.)
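For anyone who wants to try the same thing, the general shape of the Charles Proxy setup is as follows (hedged – menus and defaults vary by Charles version and mobile OS):
- Run Charles on your laptop; it listens as an HTTP proxy (port 8888 by default).
- On the mobile device, set the wifi network's proxy to your laptop's IP address and that port.
- To see inside HTTPS, install and trust Charles' root certificate on the device, then enable SSL proxying for the backend API's host.
- Use the app; each request and response, with headers, bodies and timings, appears in Charles.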
Another example involves a very talented tester who is also a strong coder, and so has great white box testing skills. This tester was digging into some performance issues: trying to understand why a request for a large dataset from an API was taking so long (when the system was otherwise quiescent), and why it would often fail if there was any other activity (API GETs and POSTs to request and store data). We are using AWS, and there are a myriad of tools and monitoring capabilities to learn and get your head around.

This tester was able to extract the time taken to complete the request for data and plot this against the size of the data extracted. Thinking about this visually, the tester was looking at it from a black box perspective: making a request, knowing what was requested, extracting the start time of the request from the log, then extracting the completion/response time from the same log, and plotting the difference against the size of the data returned. (The tester was increasing the data stored, and thus retrieved, between each test/request.) Being capable of white box testing and understanding the code and system architecture, this tester knew that there were several key components involved in servicing this request, but was not observing any of them. Observing them is what it takes to understand what is really going on – in this case, to pinpoint the parts of the system that were taking a long time to service the request, and which ones would fail when there was any other activity.

Most performance issues are the result of some form of resource exhaustion, e.g. CPU, memory, input-output (IO), threads, connections etc. So we really want to be able to see how these resources are being consumed when we interact with the system. This can lead us to understand that our CPU usage is spiking to 100% when we do something, and thus cannot do more when more requests come in, or that our memory spikes up and never comes back down to the pre-request level once the request is complete – in other words, some memory is not being freed, leading to a resource leak (we will exhaust this resource over time). In this case, learning how to observe the individual service docker container resources and the database resources will likely lead us fairly quickly to the problem(s) or weak link(s) in the chain of components servicing the request we are making.
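To make that concrete, here is a minimal Ruby sketch of the black box measurement described above: pairing request durations from a log with response sizes, ready to plot. The log format, file name and field names here are hypothetical; the real layout in your system will differ.

```ruby
# Sketch: correlate request duration with response size from a log.
# Assumes a hypothetical log line format like:
#   2019-05-01T10:15:30.123Z req-42 request_started
#   2019-05-01T10:15:41.456Z req-42 request_completed size_bytes=1048576
require 'time'

starts = {}
rows   = []

File.foreach('api.log') do |line|
  timestamp, request_id, event, extra = line.split(' ', 4)
  case event&.strip
  when 'request_started'
    starts[request_id] = Time.parse(timestamp)
  when 'request_completed'
    next unless starts.key?(request_id)
    duration = Time.parse(timestamp) - starts.delete(request_id)
    size     = extra.to_s[/size_bytes=(\d+)/, 1].to_i
    rows << [size, duration]
  end
end

# Emit CSV of size vs duration, ready to plot in a spreadsheet.
puts 'size_bytes,duration_seconds'
rows.sort.each { |size, duration| puts "#{size},#{duration.round(3)}" }
```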
In conclusion, what we need to do more often is to ask questions like;
- What could I watch or monitor to see, in more detail, what is happening/going wrong?
- What is the data flow – what path does the data take through our system and what components are involved?
- How is the system architected and how do all of the components communicate?
- How can I observe the communications between the system components?
If any of these questions result in an "I don't know" or similar, then ask your colleagues for help; you are likely to learn something new, even if that something is that your colleagues also don't know the answers to some of these questions.
I am going to join a long list of people I consider thought leaders in our industry in a lament of how many people continue to see testing as a simple job, not a career, not a craft, not really requiring any skill at all.
But, like Lalitkumar Bhamare I felt I needed to talk about this, because …
I recently interviewed a candidate who had had two co-op experiences as a QA. He had been given requirements and design specs and asked to think of tests (purely black box) for these. He believed he was doing well, and that the team was working efficiently, when he could think of one or two cases in addition to the one obvious case that confirms the requirement is met.
He found bugs, 2 or 3 a day, he said with some pride.
It was obvious he did not think this was hard, required any skill, or was a career that he wanted, but he felt trapped and compelled to apply for a QA role as that was what he had experience in. He really wanted to be a developer.
I explained that he could be in a very different environment, where instead of being part of a waterfall-like cycle such as this;
- customer proxy passes requirements to the tester and developer,
- the developer develops code to meet the requirements,
- the tester thinks up a few test cases per requirement (he told us he came up with about 2.5 on average).
- The developer then hands code to the tester, who takes the code (product) and tests it by following the heavily prescribed test cases he just wrote, (presumably without thinking much), finds a bug or two, records that in the bug tracking system and moves on to the next test case.
- Then the game of ping pong begins as the tester throws the code (product) back to the developer to fix the bugs. The developer fixes the bugs and throws it back …
That instead, he could use his skill, (yes I did say the word skill), to help the customer proxy and the developer understand what he was thinking of testing before the coding started. That this might actually prevent the developer from missing the requirement to meet the other use case the tester had thought of. That the customer proxy may have an opinion on this and say, “no we don’t want that”, or perhaps, “yes we want to meet that need too, and now you have made me think of another one that we need to meet too!” (Catching a completely missing requirement)
He didn’t see it, couldn’t see himself doing that, couldn’t see a way that could work, at least not during the interview. To his credit he was thinking about it and it clearly confused him but he was not able to take this any further.
(I did also suggest that there was no reason for him not to apply for software development roles)
Many, many others I have interviewed, some of whom have been 'testing', (I would say checking), for years, still have no thought that they could be better, that they could improve and develop their skills, that they could really help their team to avoid making so many costly mistakes and save their companies thousands if not millions in wasted effort, time lost to bug interrupts and context switches, re-builds, re-tests, whack-a-mole etc. In other words, deliver real value and not just fill a role, warm a seat, expend some effort, etc.
A co-op student I worked with, (believe me, I am not picking on co-ops at all – they have very little experience to draw on, but are also often not taught anything good about testing), visibly turned his nose up at doing some testing, saying, "I don't want to just press buttons all day long". This was his understanding and belief of what testing was. I am not entirely sure what his experience was or where he got that idea, but I tried to disabuse him of that incorrect understanding of the role, and once again felt myself working against an entrenched idea.
So, I call out to the learning establishments to do a better job of preparing these young minds for the real work of professional software engineering. That good software developers do a lot of thinking and a lot of testing as part of everything they do, and that doing this well takes a lot of skill, experience and learning. And that good software engineering teams and leaders recognize the need for people on their team that think differently and are skilled in the art of helping the team expose issues as early and as quickly as possible, to help them deliver the right solution first time more of the time. Without the need for drama, missed deadlines, long hours, late nights, re-design, context switching to fix bugs found in code developed weeks ago, etc. That this, (software testing), is a career, a valuable role, and an essential part of any high performance team. That it also takes a lot of skill, experience and learning to do it well.
I also call out to all the companies out there that insist on continuing to think of testing as simply, ‘using the product the way a customer will’, and hoping that will catch enough issues. Believing that in doing this they are ‘testing the quality into the product’! They are both wasting and losing a lot of money, as they have to pay the cost of finding defects late in their cycle. They also need large customer support teams, and in some cases teams of developers focused on maintaining, (fixing), their colleagues code.
These companies are responsible for perpetuating the myth that testing is a basic, unskilled and simple role. That it is easy to automate what these testers do, or that we can simply outsource this work to countries where people's time is cheaper.
They are also often responsible for poor quality products that miss deadlines and require expensive maintenance programs, or extensive and expensive re-writes every few years.
And finally, I call out to all the people in testing or quality related roles out there to learn to do better and be better, and thus change the perceptions of everyone around them: to see that testing is a skilled role, that it is a craft and thus requires craftsmanship, and that it needs to be part of every team and will improve the quality of everything we do.
But it’s not all bad …
I have employed and had the pleasure of working with some fantastically skilled testers, both talented exploratory testers, and test automators. Sometimes a few rare gems that are able to combine great skill in both testing and the development of tools and automation to help check software.
I have also had the great pleasure of working with lots of great software engineers and leaders who really understand and seek the value of testing. Insisting that they have great testers to work with them, that they and their software developer colleagues need to think about and execute testing at all levels and that they encourage all to learn more about and be better at ensuring the quality of everything we do.
I have been inspired, guided and encouraged in my career by many people, but most of this has come from what I think are some of the thought leaders in this area. Here are some links to articles, books and people that can help you too;
Why don’t we have time to do it right, but somehow we do have time to do it twice?
My paraphrase of a quote by John Wooden (quote number 5)
How come we often think it is better to rush into something we don’t understand and hack at it than to take the time to understand what it is we are really trying to achieve and then think about how we will achieve that before we start coding?
Do we not learn from our mistakes?
Do we not see, measure or understand the cost of halting a developer, who is by now working on another story, to get them to switch context and think back to the rushed coding they did on the previous story, and to diagnose the root cause of the bug we just discovered? Or see, measure or understand the time it takes to diagnose, resolve, then rebuild and re-test this change through the entire pipeline? With the very real risk that when the change is handed back to the tester she will find another issue, because the fix was rushed and we did not have enough coverage in our pipeline of automated checks to discover the regression that was introduced.
I have seen this pattern throughout my life and have been guilty of it myself. So what do I do about it?
Well, I try to discipline myself, but what I do for my teams is to use BDD to ensure we have a shared and common understanding of the story we are about to do, the changes we are about to make, and the additions we are introducing. To ensure we all understand who needs these changes and why they are important to them (who the customer is and why they care). Then we, (the three amigos – slide 9), will be able to agree on some high level examples of what 'done' looks like for this piece of work. These examples take the form of tests that adequately specify, and thus prove, that we have delivered what was required. We call these the acceptance tests. They are defined before any coding is started – hence the 'driven development' part of BDD. Where possible these tests are automated, and they are required to pass, (via automation or manual checking), before the story is pulled by QA, (we use Kanban), for a final exploratory test.
We are not perfect at this, but it really does mean that we can get a story accepted, more often than not, on its first pass through the pipeline. If it doesn't pass first time, we all understand the work well enough to learn from the mistake(s) and improve.
Following on from the UNUSUAL PAGE post, I have also created a heuristic for REST APIs, along with a simple mnemonic, which I think is quite memorable for a certain group of sci-fi fans 😃
My organisation is currently implementing an API-first strategy, whereby we design and implement the API for any piece of functionality before developing any UI or consumer code for that interface. This provides us with the ability to separate concerns easily, improves testability, and is in line with the current trend for microservices.
As with the UNUSUAL PAGE mnemonic I realised that the original heuristic was not that memorable and thus my team were not able to easily call it to mind when in a meeting room, designing the next REST API with their team.
So, with a bit of rephrasing I came up with VADER, (Verbs, Authorization, Data, Errors, Responsiveness).
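To make the letters concrete, here is a hedged Gherkin-style sketch of the kinds of test ideas VADER prompts, written against a hypothetical /projects resource (the verbs, status codes and fields are illustrative, not prescriptive):

```gherkin
Feature: VADER prompts for a hypothetical /projects resource

  Scenario: Verbs - unsupported methods are rejected cleanly
    When I send a PATCH request to "/projects/1"
    Then the response status should be 405

  Scenario: Authorization - requests without a valid token are refused
    Given I have no authorization token
    When I send a GET request to "/projects/1"
    Then the response status should be 401

  Scenario: Data - the payload matches the agreed schema
    When I send a GET request to "/projects/1"
    Then the response body should contain the fields "id, name, created_at"

  Scenario: Errors - a missing resource returns a useful error
    When I send a GET request to "/projects/999999"
    Then the response status should be 404
    And the response body should contain a helpful error message

  Scenario: Responsiveness - a routine listing completes promptly
    When I send a GET request to "/projects"
    Then the response should complete within 2 seconds
```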
As with the previous heuristic, I have updated the coverage outline templates originally described and linked in a previous post.
Obviously not all of these branches or leaves will be applicable to your REST API and your context, and indeed the words I use here may mean different things to each of you, but that is sort of the point with a heuristic, it is a guide not a formula, optional not rigid.
Hopefully this will help and possibly inspire some of you to expand your thinking when you need to test a REST API or clarify the requirements around REST API design etc.
Feel free to share back your own variations on this heuristic or even your own heuristics.
I have been meaning to share this for a while now.
I have been inspired by, learned from and generally challenged to think more and better by some of the folks that I consider to be thought leaders in testing, namely; James Bach, Michael Bolton and Jonathan Kohl. These are amongst the best thinkers in the testing profession. They are also some of the best at sharing their knowledge, for which I am eternally grateful. I am in some small part trying to mimic them by sharing some of my thoughts and experiences here.
So this is a little overdue homage to these giants upon whose shoulders I am standing.
When trying to come up with ways to help my QA team think more broadly, differently and holistically about risks and tests for Web UI pages I realised that the mind map that I had developed over time for this purpose was not very easy to remember.
This was fine if you used my coverage outline template, (now updated to UNUSUAL PAGE), because that includes both the mindmap and the spreadsheet sections from the mindmap, thus no memory required.
But suppose you were in a meeting room discussing the user workflow or code design of the latest UI change, or at the desk of the User Experience designer looking over some wireframes in preparation for a 3-amigo style BDD discussion, (designed to ensure we all had a common, shared understanding of the requirements), or at a story kickoff where we wanted to think about design and code risks and tests to mitigate those – but you didn't have a laptop in front of you with the template to hand. How would you mentally run through the different aspects to consider in the context of the work in front of you?
Thinking about how I normally expanded my thoughts around where things could/would go wrong, and what sorts of things I should consider testing, I realised I often used heuristics I learned from the folks mentioned above. These heuristics were normally memorised in the form of simple mnemonics. Looking again at my mindmap I realised I was not that far from a fairly easy to remember mnemonic, so with a little tweaking I came up with UNUSUAL PAGE (start with URL and go clockwise);
Obviously not all of these branches or leaves will be applicable to your page and your context, and indeed the words I use here may mean different things to each of you, but that is sort of the point with a heuristic, it is a guide not a formula, optional not rigid.
Hopefully this will help and possibly inspire some of you to expand your thinking when you need to test a UI page or clarify the requirements around Web UI design etc.
Feel free to share back your own variations on this heuristic or even your own heuristics.
I will share some more that I have been practicing with my team.
My Agile QA Manifesto
With reference to the original Agile manifesto I present my thoughts on an extension for agile QA or an agile testing manifesto;
- Prevention over goalkeeping
- Risk based test coverage over systematic test coverage
- Tester skill over test detail
- Automation over manual (for checking/repetition)
While there is value in the items on the right, I value the items on the left more
And to follow that, a set of principles I try to follow and try to instill into those that work with me;
- Fail fast/provide fast feedback
- Test at the lowest layer
- Test first (TDD/BDD)
- Risk based testing for efficiency
- Focus on tester skill and domain knowledge
- Drive for automation for repeated checking (regression)
- Learn from your mistakes – don’t repeat them
I was looking back over some notes from previous positions and I came across an instance where I had experienced what I called a Canary Fail. I am sharing this in the hope that some of you can learn from this rather than having to experience this for yourselves first hand.
This was quite simply a couple of situations where the customer found issues I knew were there but had not yet raised them with my team. I have always maintained that one of the primary purposes of a tester, in the context of discovering and providing information about the quality of the product, is to be the canary in the coal mine. In other words to be the early warning system, to raise the alarm to avert disaster.
Why is it then that I failed in this regard? Well, there are no excuses, I was wrong and I really should have known better. This was a lesson well learnt! Of course there are plenty of contributing factors that conspired to help me make poor decisions, and I am sure these will be familiar to at least some of you;
- The release was time bound.
- I had more areas to cover than I had time for.
- We were creating release candidates, and thus introducing new changes and risks, quicker than I could mitigate (test) them.
The root cause of my canary fail was that I held off on reporting these issues because I was concerned I was wrong, that my understanding and thus my testing was flawed in some way and this would lead to me crying wolf! This in turn would result in me losing credibility as a tester and thus weaken my ability to advocate for defect fixes.
So the lesson I learned painfully was to raise a flag early and to talk through my concerns openly. Then to trust in the professionalism of my colleagues to forgive me when I am mistaken but to treat each flag I raise with respect and not discount my concerns even if I cried wolf last time.
PDF version of slides: BDD Intro
Disclaimers first: I am not an expert in Behaviour Driven Development (BDD); in fact I am just starting down this particular learning path. I have however been testing, and to a lesser extent automating tests, for many years. So I have learnt enough to know that this approach is a great one to try, as I can see how it will help to address many of the issues we experience. In particular it will clearly help prevent the types of issues that arise from misunderstandings, assumptions and ambiguity in our requirements.
Marc and I wanted to share some of our early experiences, and those of other and better folks who preceded us in learning BDD, as we feel strongly enough about this approach that we want to encourage and inspire others to learn and adopt BDD practices.
(slide 2) Setting the scene
We have probably all experienced ambiguous requirements, which means we have probably all experienced problems in our products as a result.
Some simple examples;
- Product Manager (PM) & Interaction Designer (IxD) require a text box to have a 100 character limit.
- Test automator leaves the requirements discussion, goes back to his desk and writes tests that will drive the UI to enter one character and assert that the UI shows a character count of 1.
- Developer leaves the same discussion, goes back to his desk and writes code which shows 100 characters and counts down every time a character is entered.
- The test fails and then a discussion ensues to figure out which one understood the requirement correctly.
I have experienced many of these types of situations, quite often where the PM is saying that neither the tester nor the developer understood them correctly. In other words, both developer and tester misunderstood or misinterpreted the requirements, and now both need to go and re-do or refactor their work to deliver what the PM really wanted.
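This is exactly the kind of ambiguity a single agreed example can remove before anyone writes a line of code or a test. A hedged sketch – the counting-down behaviour here is an assumption the three amigos would need to confirm:

```gherkin
Scenario: Character counter counts down from the limit
  Given a text box with a 100 character limit
  When I type 1 character into the text box
  Then the character counter should show 99 characters remaining
```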
(slide 3) Are you often testing at the end of the cycle?
Perhaps you are in a waterfall like development lifecycle, or a fragile lifecycle?
Do the testers know ahead of time what they will test? Did they work that out from either a written requirements document or a requirements discussion at the beginning?
Or are the testers working with the developers, product managers and in our case interaction designers on a regular (daily?) basis to ensure we are always on the same page and on track to deliver what is really required?
(slide 4) Are you in an agile like environment?
In which case do you all speak the same language? Do you have a domain specific language that is understood and used by all?
I have been in many requirements discussions, story kickoffs or similar where it really seems like we are talking different languages.
In my current role we have a lot of domain specific terms which are either overloaded, (used to mean more than one thing depending on context), or terms that mean different things to different people. We recently had a problem where one team used the term 'system variables' to mean a specific kind of data we store about a member of one of our insight communities, while another team wanted to use the same term to refer to data we capture and store about a survey respondent's computer system (e.g. browser & version, screen resolution and browser locale).
As a test, why not ask 10 different people what a test plan is and see if you get 10 different answers.
(slide 5) Deadline approaching?
Does this mean you usually cut a few corners, rush your work, ignore some aspects of your process that perhaps you don’t think provide good value for the time they take?
So, why don’t we have time to do it right but we have time to do it twice?
(I forget where I first heard that phrase but it is a really powerful one for me)
We often cut corners or rush to meet a deadline, knowing really that we will have to come back and 'fix it up' or pay down some 'technical debt' later. And of course the cost of that will be higher than the cost of doing it right the first time round.
(slide 6) A well known illustration of;
a) What the senior developer/designer designed
b) What got delivered
c) How it was installed at the customer site
d) What the customer really wanted
This is typically the result of some of the ways we have been working and the approaches we take.
(slide 7) So how can we be better?
We can adopt the test first approach – combine the red-green-refactor pattern of TDD with behaviours to get BDD
Defining a test first and then writing the code to pass that test provides many benefits;
- Only write the code needed to pass the test (no waste)
- Ensures code is testable – cannot pass test otherwise – requires observability etc
- Test effectively documents the code
- Safety net – test is added to CI system ensuring we never regress this code (if we do the test fails) – future code refactoring is done with confidence as we will know, as quickly as it takes to run these tests, if we have made a mistake
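A minimal sketch of that cycle in Ruby, using a hypothetical word_count behaviour (the names are illustrative): the failing assertion is written first, then just enough code to pass it.

```ruby
require 'minitest/autorun'

# Red: this test is written first and fails because word_count
# does not exist yet.
class WordCountTest < Minitest::Test
  def test_counts_words_separated_by_whitespace
    assert_equal 3, word_count('the quick fox')
  end
end

# Green: the simplest code that passes the test.
def word_count(text)
  text.split.length
end

# Refactor: improve the code, re-running the test as a safety net.
```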
(slide 8) Where do I start?
Another way to describe the test first pattern is the Acceptance Test Driven Development approach.
This fantastic diagram is borrowed from (add refs and links)
The four Ds – Discuss, Distill, Develop, Demo
Follows the TDD red green refactor cycle as shown in the middle, i.e. this is iterative
(slide 9) Discuss
Ensure you have representation from all of the key roles to discuss the story or item of work. Typically we refer to the 3 amigos – Product Management (or Business Analyst or if possible the customer), Development and QA. At Vision Critical we also often have a 4th amigo – an interaction designer (IxD)
The discussion needs to unify the language, ensuring we are all discussing the same thing and have common and shared understanding
For example ensure we don’t mix development terms with product or customer terms, try to define a domain specific language (DSL) that we can all share and understand.
(slide 10) Distill
Discussion should then produce examples of the behaviour you want from the product or system. These examples are effectively how you will test that you have developed what was really needed.
Use a ubiquitous language and structure to define these – Given – When – Then
Facilitates clear communication as well as structure that is easy to read and simple to follow
(slide 11) Develop
First develop the automation that asserts the behaviours (automate the tests first)
Then develop the code (production code) to pass those tests
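For example, with Cucumber the step definitions are written first and fail until the production code exists. A hedged Ruby sketch (Calculator is the hypothetical production class still to be written):

```ruby
# features/step_definitions/calculator_steps.rb
# Written before Calculator exists, so these steps fail (red) first.
Given('I have entered {int} into the calculator') do |number|
  @calculator ||= Calculator.new
  @calculator.enter(number)
end

When('I press add') do
  @result = @calculator.add
end

Then('the result should be {int}') do |expected|
  expect(@result).to eq(expected)  # rspec-expectations
end
```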
(slide 12) Demo
Demonstrate the working code using the automated tests
Review the behaviour specifications with the customer or product manager
Add the tests to your CI system (keep running the tests to ensure fast feedback and a consistent safety net of tests)
Time to celebrate and perhaps retrospect on the story and capture anything you learnt and ideas for improvement so that you can apply all to the next story
(slide 13) Repeat the cycle for all the stories, learning and improving as you go.
(slide 14) Results
Hopefully you will have delivered what the customer really wanted and gained some additional benefits;
- Executable specifications that can always be trusted to be true (otherwise your tests will be failing)
- Automated regression tests that provide a safety net and fast feedback on those regressions – try to have these tests run as frequently as possible
- Testable and thus maintainable code – not only can you look back at these tests in 6 months and know how the code works, but you also know the code is testable, as it was written to pass tests
You will hopefully also feel proud of what you have achieved and will be recognised for that – if you are the only team doing this, it will show!
(slide 15) What BDD is not
This all sounds great, so where do I get hold of this silver bullet or pink glittery unicorn?
Well, BDD is not one of those; in fact it takes a lot of work to do it well, but it is worth it
(slide 16) How to get started?
There are a number of different BDD frameworks for the mainstream development languages, here are a few
- SpecFlow is for .NET
- Cucumber is mostly for Ruby
- JBehave is for Java
- Behat is for PHP
(slide 17) Gherkin anyone?
As well as being a pickled cucumber …
This provides the common and ubiquitous language that facilitates the simple and clear communication of behaviour
Here is an example of a feature, which contains a number of scenarios (tests for that feature)
The feature description here describes the feature and the context, in this case the problem the feature is trying to solve
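The slide itself is not reproduced here, but a representative sketch of the shape (a hypothetical feature, not the one from the talk) looks like this:

```gherkin
Feature: Password reset
  In order to regain access to my account
  As a registered user who has forgotten my password
  I want to request a password reset link by email

  Scenario: Registered user requests a reset link
    Given a registered user with the email "user@example.com"
    When they request a password reset
    Then an email containing a reset link should be sent to "user@example.com"
```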
(slide 18) Background
Background is a special keyword in Gherkin
In this case I am showing an example from the Cucumber Book that uses the background to setup the test preconditions – the Given for the following scenario(s)
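The book's example is not reproduced here, but a sketch in the same spirit shows how the Background supplies the Givens that every following scenario shares:

```gherkin
Feature: Cash withdrawal

  Background:
    Given I have an account with a balance of $100
    And I am authenticated at the ATM

  Scenario: Withdrawal within the balance
    When I withdraw $20
    Then I should receive $20
    And my remaining balance should be $80
```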
(slide 19) Scenario
Here are the When and Then sections of this scenario
Very readable, understandable and clear
(slide 20) A question of style
A lot of folks (myself included) start writing scenarios in a similar way to how we would write test cases or code, by detailing all the steps that we need to execute in order to set up the product under test as well as perform the test and check the results.
This is an imperative style and it is not very readable, at least not when you are testing anything more complex than adding two whole numbers.
So we need to focus on a more declarative style, try to tell the story of the behaviours we want the product to have
This means hiding all of the details that are not relevant to the behaviour and keeping only the details that are important to the behaviour or the intent of the test
(slide 21) An imperative example
Lots of inconsequential detail here which means that the intent of the test is lost in the noise. We specify the email address and password but have to assume this means these are valid. Do we really need to know we clicked the login button? How does that help us understand if the product behaves correctly when we provide valid login credentials?
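The slide is not shown here, but the imperative style it illustrates looks roughly like this (a sketch):

```gherkin
Scenario: Successful login (imperative style)
  Given I am on the login page
  When I enter email as "firstname.lastname@example.org"
  And I enter password as "Password1"
  And I click the login button
  Then I should see the dashboard page
```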
(slide 22) A declarative example
Hopefully you can all see this is much more readable and very clearly talks about what is important to the behaviour
This test is not about what makes a valid or invalid email address or password, it is about what happens when those are valid and the user is able to successfully login
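Again as a sketch rather than the actual slide, the declarative equivalent reads more like a story:

```gherkin
Scenario: Successful login (declarative style)
  Given I am a registered user
  When I log in with valid credentials
  Then I should see my dashboard
```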
(slide 23) DRY vs DAMP
Aim to tell a story rather than focusing on re-usability
An example here would be that we have steps like those in the imperative example;
Given I am on the login page
When I enter email as “firstname.lastname@example.org”
And password as “Password1”
And I click the login button
Because these steps detail how I log in, we can easily re-use the same steps throughout the scenarios to log in before executing more steps that are designed to assert a new behaviour, for example;
Given I am on the login page
When I enter email as “email@example.com”
And password as “Password1”
And I click the login button
And I click the Start New Project button on the dashboard page
Then I should see a blank project
And I should be able to edit the project
Instead of re-using these steps this could have been written as;
Given I am on the dashboard page (it is not important for this behaviour to know what steps you took to log in or what exact credentials you used)
When I start a new project
Then I should be able to edit my new project
(slide 24) Scenario Table Example
Using data tables to test with multiple values that will not read well if all written on one line of a Given, When or Then
These enhance readability by keeping the data clear but separate from the declarative and meaningful phrases
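A hedged sketch of a step backed by a data table (the domain is illustrative):

```gherkin
Scenario: Invite several members at once
  Given the following members exist:
    | name  | email             | role   |
    | Alice | alice@example.com | admin  |
    | Bob   | bob@example.com   | member |
  When I view the member list
  Then I should see 2 members
```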
(slide 26) Scenario Outline Example
Sometimes you need to effectively test using the same steps but with a variety of test inputs, a scenario outline helps to avoid repeating steps each with different data values
In this case each row of the table is essentially one Given – When – Then scenario and will be executed sequentially
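A sketch of the shape (the validation rules here are hypothetical):

```gherkin
Scenario Outline: Password validation on registration
  Given I am on the registration page
  When I register with the password "<password>"
  Then I should see "<message>"

  Examples:
    | password  | message                   |
    | abc       | Password is too short     |
    | password1 | Needs an uppercase letter |
    | Password1 | Registration successful   |
```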
(slide 28) Hooks
These are really useful, they can simply call some code to setup or tear down your tests and are controlled using methods called Before and After
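In Cucumber's Ruby flavour that looks like this (a minimal sketch; the timing example is illustrative):

```ruby
# features/support/hooks.rb
Before do
  # Runs before every scenario: reset state, start a browser, etc.
  @scenario_started_at = Time.now
end

After do |scenario|
  # Runs after every scenario, even when it fails: teardown, logging, etc.
  puts "#{scenario.name} took #{Time.now - @scenario_started_at}s"
end
```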
(slide 29) Tags
Use these to label your features and scenarios within features
You can then execute only those features or scenarios that are tagged a certain way
Or filter out tests with a different tag
We use tags to group tests by team, to run certain subsets of tests in a certain environment, and now we are trying to use tags as a way of recording and reporting test coverage by labelling features and scenarios by the code area that they cover
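A sketch of how that looks (the tag names are illustrative; --tags is Cucumber's standard filter flag, and the exact exclusion syntax varies by Cucumber version):

```gherkin
# Run only smoke tests:  cucumber --tags @smoke
# Exclude slow tests:    cucumber --tags "not @slow"  (older versions: --tags ~@slow)
@team-insight @smoke
Feature: Survey responses

  @slow
  Scenario: Export a very large response set
    When I export all responses
    Then the export should eventually complete
```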
And that wraps it up for the presentation part.
This is a presentation I gave at a local test meet up group called VanQ
PDF version of slides: Test Coverage Outlines
Test Coverage Outlines are just elaborate spreadsheets. They were originally inspired by Jonathan Kohl, as part of exploratory testing training at Sophos. A colleague, (Jose Artiga), and I built on the initial inspiration and came up with the first version of the test coverage outline. This was then adopted by the teams at Sophos, and of course refined through use and practice.
(slide 8) Simple spreadsheets to help;
- Structure & simplify your test planning
- Inspire & structure your test design
- Be test professionals and not just test execution machines
- Collaborate on testing
- Show clear and concise status at any point in your testing
- Discuss risk in terms of planned vs actual coverage
(slide 9) An example coverage outline
- Each white background row is a test idea – a simple sentence or few words to convey an idea you want to cover by testing
- Try to keep these simple like a heuristic that acts as a guide rather than a step by step procedure
(slide 10) Divide your test ideas up into sections
- Example shows this as the black lines with white text
- You can do this by functional area, test focus, heuristic etc (whatever makes sense for you in your context)
(slide 11) Divide your test ideas up into sheets
- Again divide into functional area per sheet or perhaps copy your sheet to provide same/similar coverage against different builds or versions of your software
(slide 12) Configurations – show your coverage against different configurations
- Example given is a common one – browsers and OS, but could equally be versions of HW, SW, Mobile device etc
(slide 13) Test types/levels – show your coverage at different test levels or different testing types
- Matrix your test ideas against different test levels, for example unit tests, integration tests, automated UI tests, manual tests, exploratory tests
- Even if you are not familiar with the unit tests you can suggest the developers gain some coverage of the test ideas with unit tests and fill out the column with you
(slide 14) Prioritise your test ideas
- I use simple (high, medium, low) priority levels to organise, sort and filter test ideas by priority
- Make sure you are executing/covering in priority order
(slide 15) Prioritise your configurations
- I use a simple left-to-right prioritisation for configurations, i.e. the highest priority configuration to cover is the left-most column
(slides 16 & 17) Use colour and conditional formatting
- I use simple colours and conditional formatting to make updates easy and to show status and priority clearly
- Make sure you also use a colour (I use x and grey) to indicate coverage that you are consciously not planning to cover
(slide 18) I use mind maps as a visual test design/planning aid
(slide 19) Start by outlining your test idea areas, use the outline to inspire your test ideas
- I sometimes use heuristics to structure and inspire my testing, for example using a quality criteria heuristic like usability or performance I can think about test ideas that come under each of those areas
- Creating templates (see examples) for common types of testing focus can help inspire testers as well as providing some base or consistent tests
(slide 20) Using test ideas rather than detailed test cases
- enables and encourages variation, exploration and more ‘brain engaged’ testing
- thus avoiding the pesticide paradox [Boris Beizer]
(slide 21) Collaboration
- Using google docs or similar enables you to easily collaborate in real time with other testers/colleagues
- Some of the easiest ways in which you can organise and facilitate collaboration are to allocate a config column, a test area, or even a whole tab to each collaborator
- You can see who is accessing/updating the sheet and can see status of tests others are covering in real time too
(slide 22) Use colour to provide very easy and quick to read status
- You can just look at a tab and immediately see if you have any fails or blocked tests
(slide 23) Colour coding also makes it easy to see coverage in terms of planned vs actual
Example Coverage Outlines
Feel free to copy and adapt these for your own purposes; hopefully these will inspire you to refine the outlines and share your ideas back with me and others;