VADER – a REST API test heuristic

Following on from the UNUSUAL PAGE post, I have also created a heuristic for REST APIs, along with a simple mnemonic, which I think is quite memorable for a certain group of sci-fi fans 😃

My organisation is currently implementing an API-first strategy, whereby we design and implement the API for any piece of functionality before developing any UI or consumer code for that interface. This allows us to separate concerns easily, improves testability and is in line with the current trend towards microservices.

As with the UNUSUAL PAGE mnemonic, I realised that the original heuristic was not that memorable, and thus my team were not able to easily call it to mind when in a meeting room designing the next REST API.

So, with a bit of rephrasing I came up with VADER (Verbs, Authorization, Data, Errors, Responsiveness).

REST API - VADER

As with the previous heuristic, I have updated the coverage outline templates originally described and linked in a previous post.

Obviously not all of these branches or leaves will be applicable to your REST API and your context, and indeed the words I use here may mean different things to each of you, but that is sort of the point with a heuristic: it is a guide not a formula, optional not rigid.
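
To make this a little more concrete, here is a hedged sketch of how a couple of the branches – Verbs, Authorization and Errors – might translate into Given-When-Then style test ideas (the endpoint, verbs and status codes are purely illustrative):

Scenario: Unsupported verb is rejected
  Given an authenticated user
  When I send a PUT request to /projects/42
  Then the response status should be 405 (Method Not Allowed)

Scenario: Unauthenticated access is refused
  Given an unauthenticated user
  When I send a GET request to /projects/42
  Then the response status should be 401 (Unauthorized)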

Hopefully this will help, and possibly inspire some of you to expand your thinking when you need to test a REST API or clarify the requirements around REST API design, etc.

Feel free to share back your own variations on this heuristic or even your own heuristics.

UNUSUAL PAGE – a Web UI test heuristic

I have been meaning to share this for a while now.

I have been inspired by, learned from and generally challenged to think more and better by some of the folks that I consider to be thought leaders in testing, namely James Bach, Michael Bolton and Jonathan Kohl. These are amongst the best thinkers in the testing profession. They are also some of the best at sharing their knowledge, for which I am eternally grateful. I am in some small part trying to mimic them by sharing some of my thoughts and experiences here.

So this is a little overdue homage to these giants upon whose shoulders I am standing.

When trying to come up with ways to help my QA team think more broadly, differently and holistically about risks and tests for Web UI pages, I realised that the mind map I had developed over time for this purpose was not very easy to remember.
This was fine if you used my coverage outline template (now updated to UNUSUAL PAGE), because that includes both the mind map and the spreadsheet sections derived from it, so no memory was required.
But what if you were in a meeting room discussing the user workflow or code design of the latest UI change, at the desk of the User Experience designer looking over some wireframes in preparation for a 3 amigos style BDD discussion (designed to ensure we all had a common, shared understanding of the requirements), or in a story kickoff where we wanted to think about design and code risks and the tests to mitigate those, and you didn’t have a laptop in front of you with the template to hand? How would you mentally run through the different aspects to consider in the context of the work in front of you?

Thinking about how I normally expanded my thoughts around where things could or would go wrong, and what sorts of things I should consider testing, I realised I often used heuristics I learned from the folks mentioned above. These heuristics were normally memorised in the form of simple mnemonics. Looking again at my mind map I realised I was not that far from a fairly easy to remember mnemonic, so with a little tweaking I came up with UNUSUAL PAGE (start with URL and go clockwise):

UNUSUAL PAGE
Obviously not all of these branches or leaves will be applicable to your page and your context, and indeed the words I use here may mean different things to each of you, but that is sort of the point with a heuristic: it is a guide not a formula, optional not rigid.

Hopefully this will help, and possibly inspire some of you to expand your thinking when you need to test a UI page or clarify the requirements around Web UI design, etc.

Feel free to share back your own variations on this heuristic or even your own heuristics.
I will share some more that I have been practicing with my team.

My Agile QA Manifesto and Testing Principles

My Agile QA Manifesto

With reference to the original Agile manifesto, I present my thoughts on an extension for agile QA, or an agile testing manifesto:

  • Prevention over goalkeeping
  • Risk based test coverage over systematic test coverage
  • Tester skill over test detail
  • Automation over manual (for checking/repetition)

While there is value in the items on the right, I value the items on the left more.

Testing Principles

And to follow that, a set of principles I try to follow and try to instil in those who work with me:

  • Fail fast/provide fast feedback
  • Test at the lowest layer
  • Test first (TDD/BDD)
  • Risk based testing for efficiency
  • Focus on tester skill and domain knowledge
  • Drive for automation for repeated checking (regression)
  • Learn from your mistakes – don’t repeat them

Canary Fail

I was looking back over some notes from previous positions and I came across an instance where I had experienced what I called a Canary Fail. I am sharing this in the hope that some of you can learn from it rather than having to experience it for yourselves first hand.

This was quite simply a couple of situations where the customer found issues that I knew were there but had not yet raised with my team. I have always maintained that one of the primary purposes of a tester, in the context of discovering and providing information about the quality of the product, is to be the canary in the coal mine. In other words, to be the early warning system, to raise the alarm to avert disaster.

Why is it then that I failed in this regard? Well, there are no excuses: I was wrong and I really should have known better. This was a lesson well learnt! Of course there are plenty of contributing factors that conspired to help me make poor decisions, and I am sure these will be familiar to at least some of you:

  • The release was time bound.
  • I had more areas to cover than I had time for.
  • We were creating release candidates, and thus introducing new changes and risks, quicker than I could mitigate (test) them.

The root cause of my canary fail was that I held off on reporting these issues because I was concerned I was wrong – that my understanding, and thus my testing, was flawed in some way and would lead to me crying wolf! This in turn would result in me losing credibility as a tester and thus weaken my ability to advocate for defect fixes.

So the lesson I learned painfully was to raise a flag early and to talk through my concerns openly. Then to trust in the professionalism of my colleagues to forgive me when I am mistaken but to treat each flag I raise with respect and not discount my concerns even if I cried wolf last time.

 

Behaviour Driven Development – An Introduction

This is a presentation given by me and Marc Karbowiak at a local test meetup group called YVR Testing on 2nd April 2014.

PDF version of slides: BDD Intro

Intro

Disclaimers first: I am not an expert in Behaviour Driven Development (BDD); in fact I am just starting down this particular learning path. I have, however, been testing, and to a lesser extent automating tests, for many years. So I have learnt enough to know that this approach is a great one to try, as I can see how it will help to address many of the issues we experience. In particular, it will clearly help prevent the types of issues that arise from misunderstandings, assumptions and ambiguity in our requirements.
Marc and I wanted to share some of our early experiences, and those of other and better folks who preceded us in learning BDD, as we feel strongly enough about this approach that we want to encourage and inspire others to learn and adopt BDD practices.

(slide 2) Setting the scene

We have probably all experienced ambiguous requirements, which means we have probably all experienced problems in our products as a result.
A simple example:

  • The Product Manager (PM) and Interaction Designer (IxD) require a text box to have a 100 character limit.
  • The test automator leaves the requirements discussion, goes back to his desk and writes tests that drive the UI to enter one character and assert that the UI shows a character count of 1.
  • The developer leaves the same discussion, goes back to his desk and writes code that shows 100 characters and counts down every time a character is entered.
  • The test fails, and a discussion ensues to figure out who understood the requirement correctly.

I have experienced many of these types of situations, quite often with the PM saying that neither the tester nor the developer understood them correctly. In other words, both developer and tester misunderstood or misinterpreted the requirements, and now both need to go and re-do or refactor their work to deliver what the PM really wanted.
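
This is exactly the kind of ambiguity that a concrete, agreed example would have removed. A minimal sketch of what the team might have distilled together (the counting-down behaviour and wording are illustrative assumptions, not the real requirement):

Feature: Comment text box character limit

  Scenario: Counter counts down as characters are entered
    Given I am typing in a text box with a 100 character limit
    When I enter 1 character
    Then the character counter should show 99 characters remaining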

(slide 3) Are you often testing at the end of the cycle?

Perhaps you are in a waterfall-like development lifecycle, or a fragile lifecycle?
Do the testers know ahead of time what they will test? Did they work that out from either a written requirements document or a requirements discussion at the beginning?

Or are the testers working with the developers, product managers and in our case interaction designers on a regular (daily?) basis to ensure we are always on the same page and on track to deliver what is really required?

(slide 4) Are you in an agile like environment?

In which case, do you all speak the same language? Do you have a domain-specific language that is understood and used by all?
I have been in many requirements discussions, story kickoffs or similar where it really seems like we are talking different languages.
In my current role we have a lot of domain-specific terms which are either overloaded (used to mean more than one thing depending on context) or mean different things to different people. We recently had a problem where one team used the term ‘system variables’ to mean a specific kind of data we store about a member of one of our insight communities, while another team wanted to use the same term to refer to data we capture and store about a survey respondent’s computer system (e.g. browser and version, screen resolution and browser locale).
As a test, why not ask 10 different people what a test plan is and see if you get 10 different answers.

(slide 5) Deadline approaching?

Does this mean you usually cut a few corners, rush your work, or ignore some aspects of your process that perhaps you don’t think provide good value for the time they take?
So, why don’t we have time to do it right but we have time to do it twice?
(I forget where I first heard that phrase but it is a really powerful one for me)
We often cut corners or rush to meet a deadline, knowing really that we will have to come back and ‘fix it up’ or pay down some ‘technical debt’ later. And of course the cost of that will be higher than the cost of doing it the first time round.

(slide 6) A well-known illustration of:

a) What the senior developer/designer designed
b) What got delivered
c) How it was installed at the customer site
d) What the customer really wanted

This is typically the result of some of the ways we have been working and the approaches we take.

(slide 7) So how can we be better?

We can adopt the test-first approach – combine the red-green-refactor pattern of TDD with behaviours to get BDD.
Defining a test first and then writing the code to pass that test provides many benefits:

  • Only write the code needed to pass the test (no waste)
  • Ensures the code is testable – it cannot pass the test otherwise – which requires observability etc.
  • The test effectively documents the code
  • Safety net – the test is added to the CI system, ensuring we never regress this code (if we do, the test fails) – future code refactoring is done with confidence, as we will know, as quickly as it takes to run these tests, if we have made a mistake

(slide 8) Where do I start?

Another way to describe the test-first pattern is the Acceptance Test Driven Development approach.
This fantastic diagram is borrowed from (add refs and links)
The four Ds
This follows the TDD red-green-refactor cycle as shown in the middle, i.e. it is iterative

(slide 9) Discuss

Ensure you have representation from all of the key roles to discuss the story or item of work. Typically we refer to the 3 amigos – Product Management (or a Business Analyst, or if possible the customer), Development and QA. At Vision Critical we also often have a 4th amigo – an Interaction Designer (IxD).
The discussion needs to unify the language, ensuring we are all discussing the same thing and have a common, shared understanding.
For example, ensure we don’t mix development terms with product or customer terms; try to define a domain-specific language (DSL) that we can all share and understand.

(slide 10) Distill

The discussion should then produce examples of the behaviour you want from the product or system. These examples are effectively how you will test that you have developed what was really needed.
Use a ubiquitous language and structure to define these: Given – When – Then.
This facilitates clear communication, as well as a structure that is easy to read and simple to follow.

(slide 11) Develop

First develop the automation that asserts the behaviours (automate the tests first)
Then develop the code (production code) to pass those tests

(slide 12) Demo

Demonstrate the working code using the automated tests
Review the behaviour specifications with the customer or product manager
Add the tests to your CI system (keep running the tests to ensure fast feedback and a consistent safety net of tests)
Time to celebrate, and perhaps retrospect on the story and capture anything you learnt and any ideas for improvement, so that you can apply them all to the next story

(slide 13) Repeat the cycle for all the stories, learning and improving as you go.

(slide 14) Results

Hopefully you will have delivered what the customer really wanted and gained some additional benefits:

  • Executable specifications that can always be trusted to be true (otherwise your tests will be failing)
  • Automated regression tests that provide a safety net and fast feedback on those regressions – try to have these tests run as frequently as possible
  • Testable and thus maintainable code – not only can you look back at these tests in 6 months and know how the code works, but you also know the code is testable, as it was written to pass tests

You will hopefully also feel proud of what you have achieved and will be recognised for that – if you are the only team doing this it will show!

(slide 15) What BDD is not

This all sounds great, so where do I get hold of this silver bullet or pink glittery unicorn?
Well, BDD is not one of those; in fact it takes a lot of work to do it well, but it is worth it.

(slide 16) How to get started?

There are a number of different BDD frameworks for the mainstream development languages; here are a few:

  • SpecFlow is for .NET
  • Cucumber is mostly for Ruby
  • JBehave is for Java
  • Behat is for PHP

(slide 17) Gherkin anyone?

As well as being a pickled cucumber …
Gherkin provides the common and ubiquitous language that facilitates the simple and clear communication of behaviour.
Here is an example of a feature, which contains a number of scenarios (tests for that feature)
The feature description here describes the feature and the context, in this case the problem the feature is trying to solve
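
The slide’s example is not reproduced here, but a hedged sketch of the shape of a feature and its description might look like this (the feature and wording are invented):

Feature: Project sign-off
  In order to avoid launching surveys that need rework
  As a product manager
  I want projects to be reviewed before they go live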

(slide 18) Background

Background is a special keyword in Gherkin
In this case I am showing an example from the Cucumber Book that uses the background to set up the test preconditions – the Given for the following scenario(s).
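
Since the slide is not reproduced here, this is a minimal sketch in the same spirit, rather than the book’s actual example:

Background:
  Given I have registered for an account
  And I am logged in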

(slide 19) Scenario

Here are the When and Then sections of this scenario
Very readable, understandable and clear
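
Again, the slide is not shown; continuing the hypothetical background above, the When and Then might read:

Scenario: View my account details
  When I visit my account page
  Then I should see my account details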

(slide 20) A question of style

A lot of folks (myself included) start writing scenarios in a similar way to how we would write test cases or code, by detailing all the steps that we need to execute in order to set up the product under test, as well as to perform the test and check the results.
This is an imperative style and it is not very readable, at least not when you are testing something less trivial than adding two whole numbers.
So we need to focus on a more declarative style and try to tell the story of the behaviours we want the product to have.
This means hiding all of the details that are not relevant to the behaviour and keeping only the details that are important to the behaviour or the intent of the test.

(slide 21) An imperative example

Lots of inconsequential detail here, which means that the intent of the test is lost in the noise. We specify the email address and password, but have to assume this means they are valid. Do we really need to know we clicked the login button? How does that help us understand whether the product behaves correctly when we provide valid login credentials?
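
The slide itself is not shown, but judging from the steps revisited in the DRY vs DAMP section below, the imperative example was along these lines (the final Then is my assumption):

Given I am on the login page
When I enter email as “[email protected]”
And password as “Password1”
And I click the login button
Then I should see the dashboard page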

(slide 22) A declarative example

Hopefully you can all see this is much more readable and very clearly talks about what is important to the behaviour.
This test is not about what makes a valid or invalid email address or password; it is about what happens when those are valid and the user is able to successfully login.
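
Again the slide is not reproduced; a sketch of what the declarative version might look like:

Given I am a registered user
When I login with valid credentials
Then I should see my dashboard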

(slide 23) DRY vs DAMP

Aim to tell a story (DAMP – Descriptive And Meaningful Phrases) rather than focusing on re-usability (DRY – Don’t Repeat Yourself).
An example here would be that we have steps like those in the imperative example:

Given I am on the login page
When I enter email as “[email protected]
And password as “Password1”
And I click the login button

Because these steps detail how I login, we can easily re-use the same steps throughout the scenarios to login before executing more steps that are designed to assert a new behaviour, for example:

Given I am on the login page
When I enter email as “[email protected]
And password as “Password1”
And I click the login button
And I click the Start New Project button on the dashboard page
Then I should see a blank project
And I should be able to edit the project

Instead of re-using these steps, this could have been written as:

Given I am on the dashboard page
When I start a new project
Then I should be able to edit my new project

(For this behaviour it is not important to know what steps you took to login or what exact credentials you used.)

(slide 24) Scenario Table Example

Use data tables to test with multiple values that would not read well if all written on one line of a Given, When or Then.
These enhance readability by keeping the data clear but separate from the declarative and meaningful phrases.
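
As an illustration, a hypothetical sketch of a scenario using a data table (the names and columns are invented):

Scenario: Dashboard lists my projects
  Given the following projects exist:
    | name  | owner | status |
    | Alpha | Sam   | draft  |
    | Beta  | Jo    | live   |
  When I view the dashboard
  Then I should see 2 projects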

(slide 26) Scenario Outline Example

Sometimes you need to test using effectively the same steps but with a variety of test inputs; a scenario outline helps to avoid repeating the steps with each different set of data values.
In this case each row of the table is essentially one Given – When – Then scenario, and they will be executed sequentially.
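
A hypothetical sketch of the pattern (the behaviour and inputs are invented):

Scenario Outline: Password strength feedback
  Given I am registering a new account
  When I enter the password “<password>”
  Then I should see the strength “<strength>”

  Examples:
    | password   | strength |
    | abc        | weak     |
    | Tr0ub4dor& | strong   |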

(slide 28) Hooks

These are really useful – they can simply call some code to set up or tear down your tests, and are controlled using methods called Before and After.

(slide 29) Tags

Use these to label your features and scenarios within features
You can then execute only those features or scenarios that are tagged a certain way
Or filter out tests with a different tag
We use tags to group tests by team and to run certain subsets of tests in a certain environment, and now we are trying to use tags as a way of recording and reporting test coverage, by labelling features and scenarios with the code area that they cover.
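
For example, a feature file might be tagged like this hypothetical sketch, and a runner such as Cucumber can then include or exclude tests by tag (e.g. cucumber --tags @smoke):

@team_red
Feature: Login

  @smoke
  Scenario: Successful login
    Given I am a registered user
    When I login with valid credentials
    Then I should see my dashboard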

And that wraps it up for the presentation part.

Test Coverage Outlines

This is a presentation I gave at a local test meetup group called VanQ.

PDF version of slides: Test Coverage Outlines

Intro

Test Coverage Outlines are just elaborate spreadsheets. They were originally inspired by Jonathan Kohl, as part of exploratory testing training at Sophos. A colleague (Jose Artiga) and I built on the initial inspiration and came up with the first version of the test coverage outline. This was then adopted by the teams at Sophos, and of course refined through use and practice.

(slide 8) Simple spreadsheets to help:

  • Structure & simplify your test planning
  • Inspire & structure your test design
  • Be test professionals and not just test execution machines
  • Collaborate on testing
  • Show clear and concise status at any point in your testing
  • Discuss risk in terms of planned vs actual coverage

(slide 9) An example coverage outline

  • Each white-background row is a test idea – a simple sentence or a few words to convey an idea you want to cover by testing
  • Try to keep these simple, like a heuristic that acts as a guide rather than a step-by-step procedure

(slide 10) Divide your test ideas up into sections

  • Example shows this as the black lines with white text
  • You can do this by functional area, test focus, heuristic etc (whatever makes sense for you in your context)

(slide 11) Divide your test ideas up into sheets

  • Again, divide by functional area per sheet, or perhaps copy your sheet to provide the same or similar coverage against different builds or versions of your software

(slide 12) Configurations – show your coverage against different configurations

  • The example given is a common one – browsers and OS – but it could equally be versions of hardware, software, mobile devices etc.

(slide 13) Test types/levels – show your coverage at different test levels or different testing types

  • Matrix your test ideas against different test levels, for example unit tests, integration tests, automated UI tests, manual tests, exploratory tests
  • Even if you are not familiar with the unit tests, you can suggest the developers gain some coverage of the test ideas with unit tests, and fill out the column with you

(slide 14) Prioritise your test ideas

  • I use simple (high, medium, low) priority levels to organise, sort and filter test ideas by priority
  • Make sure you are executing/covering in priority order

(slide 15) Prioritise your configurations

  • I use a simple left to right prioritisation for configurations, i.e. the highest priority configuration to cover is the left most column

(slides 16 & 17) Use colour and conditional formatting

  • I use simple colours and conditional formatting to make updates easy and to show status and priority clearly
  • Make sure you also use a colour (I use x and grey) to indicate tests that you are consciously not planning to cover

(slide 18) I use mind maps as a visual test design/planning aid

(slide 19) Start by outlining your test idea areas, and use the outline to inspire your test ideas

  • I sometimes use heuristics to structure and inspire my testing; for example, using a quality criteria heuristic like usability or performance, I can think about test ideas that come under each of those areas
  • Creating templates (see examples) for common types of testing focus can help inspire testers, as well as providing some base or consistent tests

(slide 20) Using test ideas rather than detailed test cases

  • enables and encourages variation, exploration and more ‘brain engaged’ testing
  • thus avoiding the pesticide paradox [Boris Beizer]

(slide 21) Collaboration

  • Using Google Docs or similar enables you to easily collaborate in real time with other testers/colleagues
  • Some of the easiest ways in which you can organise and facilitate collaboration are to allocate a config column, a test area or even whole tabs
  • You can see who is accessing/updating the sheet, and can see the status of tests others are covering in real time too

(slide 22) Use colour to provide very easy and quick to read status

  • You can just look at a tab and immediately see if you have any fails or blocked tests

(slide 23) Colour coding also makes it easy to see coverage in terms of planned vs actual

Example Coverage Outlines

Feel free to copy and adapt these for your own purposes; hopefully they will inspire you to refine the outlines and share your ideas back with me and others.

Learning from the mistakes our customers care about

5 whys

As mentioned in a previous post, I keep a close watch on customer defects. These are the issues that a customer cared about enough (or was sufficiently annoyed by) to contact us and tell us about.
I am focusing on the issues here, not the feature requests or the ‘how do I’s, though both can also be regarded as defects – in ‘failing to understand or predict the customer’s needs’ or ‘failing to deliver an intuitive product’ respectively.

Being a big fan of ‘prevention is better than cure’, I like to investigate the customer issues and perform a root cause analysis, or 5 whys, on the reasons each issue escaped our attention.
Yes, I refer to customer-reported issues as escaped defects, since they escaped our detection. It doesn’t matter how many stages are in your pipeline, how many automated tests you have at different levels, or even how good the teams are – there will be some issues that escape our attention.
Technically, I also regard issues discovered late in our pipeline – after story acceptance and as part of our release process – as escapes too, as well as any issues we happen to find in production (before a customer reports them).

There are lots of quotes around learning from failure, and being doomed to repeat your mistakes if you fail to do so. I believe, along with many others, that true learning only comes from failure and understanding the reasons for it. However, we should take care not to make the same mistake twice, as this indicates a failure to learn. So the reason for analysing these escaped defects is not to apportion blame or point fingers; it is to learn how we can prevent a similar class of issue in future. The preference here is to prevent the class of issue from ever being coded again. If that proves to be more expensive than the cost of the impact of the issues, then at least we can prevent the class of issue escaping our attention again.

I was introduced to Lean software development techniques via Mary and Tom Poppendieck, which led me to learn more about The Toyota Way, where I learnt the 5 whys technique. Prior to this I had been using other root causing techniques, or simply using my QA ability to ask difficult but relevant questions, to achieve the learning and expose the actions.
The 5 whys technique is just so simple that it makes it easy for anyone to participate as well as facilitate, meaning that anyone can do this – you don’t need to be in QA or have a background in problem solving or root cause analysis techniques.

So, what does it look like? Well, here is an example with some edits to remove any proprietary details (note: 5 is simply a guide, you can use more or fewer whys):

Problem statement: Service proxy was updated in C# provider code but not in consumer code

Why didn’t we catch this?

  • tests run in the consumer pipeline were not sufficient to expose the issue
  • tests were not full contract tests – nothing testing the contract between producer and consumer
  • no communication between the producer and consumer teams on any changes made to the interface

Why didn’t the consumer pipeline tests expose the issue?

  • because the test only exercised the simplest possible scenario, which was not affected by the change (the service call returned the minimum possible data)

Why were the contract tests insufficient?

  • contract testing is not very well understood by all teams concerned
  • tests were not reviewed by anyone except developers

Why wasn’t there communication between teams?

  • the producer of the service does not know who is consuming that service
  • tests didn’t relay information of endpoint changes
  • consumer tests were still passing (green)

Why didn’t the tests relay information of endpoint changes?

  • there were no tests asserting or checking the stability of the interface
  • the consumer was coded to de-serialise the entire response when really it only needed to check for ‘success’

Why was the consumer coded to de-serialise the entire response rather than just parse the value of interest?

  • because it was deemed easier to use a standard pattern to de-serialise entire response rather than write code to specifically look for just the value of interest

Some example actions that were taken as a result of this:

  1. Provide training in contract testing patterns
  2. Producer to add tests to notify producer team of interface change (trigger for investigation or communication of change)
  3. Consumer to provide contract tests for producer to run in producer pipeline to alert of breaking changes for consumer
  4. Audit all cross team interfaces/dependencies to negotiate and add any missing contract tests
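
To make action 3 a little more concrete, a consumer-provided contract check might be expressed as a scenario along these lines (the endpoint and fields here are hypothetical):

Scenario: Member endpoint honours the consumer contract
  Given the member service is running
  When the consumer requests GET /members/123
  Then the response status should be 200
  And the response should contain the fields id, name and email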

The Streetlight effect


This phenomenon is called the streetlight effect, after the old joke about the drunk searching for his lost keys under the streetlight because that is where the light is better. I often see this with test engineers, and I have caught myself doing it! I find myself testing or exploring some area of functionality because I am familiar with that area – in other words, I know how to test it. Or I catch myself spending most of my time focused on a certain type of testing, or looking for a certain type of bug, because I am comfortable in my skill, knowledge and experience in that type of testing – the light is better here!

Another way to characterise this is that you are staying in your comfort zone. So how do you avoid the streetlight effect or start to move out of your comfort zone?

Step 1 is recognising it. Try to catch yourself looking where the light is better. For example, spot when you are testing, or thinking about tests to run against a product, and challenge yourself to test something you wouldn’t normally test or to think about tests that you wouldn’t normally run. Even better, challenge your colleagues to spot when you are doing it (and offer to do the same for them).

For example, if you are the tester on an agile team and you usually test functionally from the UI, ask your colleagues to review your test cases (you are discussing them with the team anyway, right?) and to challenge you if you are only testing the story or feature from the UI. Or if you are a test automator and are familiar with automating tests by driving the UI, have your colleagues review your automation code (they are already doing that, right?) and challenge you to test at a lower-level interface in the system instead.

Step 2 is to develop the ability and skills required to shed some light on the areas of darkness. Or, put another way, to push yourself out of your comfort zone and into the learning zone, where you will be exposed to new things and can therefore develop new skills and experiences which will broaden your circle of light.

So if you normally test using the UI with a customer focus, try learning more about the back end code and thinking about new tests using that new-found knowledge, or try thinking about tests to run against an interface into a lower level of code.

I work with teams to broaden their test perspectives and to think about the product or area of code they are responsible for more systematically and holistically, challenging them to think about what happens at all levels in the code and all the places where things could go wrong. I then help them think about different ways we can test, or automate testing of, those areas that are most likely to be buggy – for example, using test tools or automation to create test data, or to inject it into a part of the system so we can effectively check how the system transforms or presents it later.

To be a great Quality Assurance engineer or a great tester you also need to be a continuous learner. Are you a genius already? Did you leave school, college or university with all the knowledge you would ever need?

There will always be something you can learn about the product you are helping to deliver with quality. Customers will teach you new ways in which they use the product (which can often surprise you). Your colleagues and other thought leaders in the domain and profession you work in will always provide you with new and different ways of looking at something or approaching what you do. So make sure you listen, learn and experiment, looking in places where the light is not so good.

So what will you learn next to increase your circle of light?