My Agile QA Manifesto and Testing Principles

My Agile QA Manifesto

With reference to the original Agile Manifesto, I present my thoughts on an extension for agile QA – an agile testing manifesto;

  • Prevention over goalkeeping
  • Risk based test coverage over systematic test coverage
  • Tester skill over test detail
  • Automation over manual (for checking/repetition)

While there is value in the items on the right, I value the items on the left more.

Testing Principles

And to follow that, a set of principles I try to follow myself and to instil in those who work with me;

  • Fail fast/provide fast feedback
  • Test at the lowest layer
  • Test first (TDD/BDD)
  • Risk based testing for efficiency
  • Focus on tester skill and domain knowledge
  • Drive for automation for repeated checking (regression)
  • Learn from your mistakes – don’t repeat them

Fork And Ambush

Fork and ambush is my shorthand for the outcome of non-collaborative work on a story or the implementation of a requirement. I used to see this all the time in more waterfall-like environments, but I am sad to say I still see and hear of it in more agile environments too.

The scenario is as follows;

Someone, usually the customer proxy (in my current company this is a Product Manager), provides a requirement; in our case a story along the lines of “As a … I would like … so that I can …”.
The developer then takes this story, goes back to his or her desk, and starts developing the code to deliver the functionality for the story.
The QA or tester takes this story, goes back to his or her desk, and starts thinking about test cases that should be executed against the story.

This is the fork (both going their separate ways to think about the story individually)

Some time later …

When the story is developed the developer notifies the tester and testing begins.

This is effectively the ‘ambush’ part.

The tester is often effectively trying to find fault with the developer's work by hunting for bugs in the developed code.
Often a ‘bug’ turns out to be a difference of interpretation of the story between the developer and the tester.


In the worst case the customer proxy (e.g. the Product Manager) comes along to defuse the argument and informs them that they are both wrong: what has been delivered is not what was required, and the tests are also incorrect.

So, how can this be better?

Use a more BDD-like approach, where the three amigos (Developer, Tester and Product Manager) discuss the requirement first.
Make sure each of them understands what is required (the scope of work), who it is for (who the customer is), and why it is important to them.
They confirm this understanding by collaboratively defining the set of tests that will be used to prove that what has been delivered is acceptable to the customer (what they needed, working in the way they need it to).
Then, if the developer and tester fork at all, they both have a clear understanding of what is required.
The developer can ensure the developed code passes the acceptance tests.
The tester can ensure that, in addition to passing the acceptance tests, the developed code does not do anything unexpected and conforms to any non-functional requirements that may also have been discussed.

The conclusion is then much more likely to be a successful delivery: the work will not be accepted if the acceptance tests do not pass, and the developer and tester are on the same page as the Product Manager, so all three can see how to work together towards the common goal.

Behaviour Driven Development – An Introduction

This is a presentation given by Marc Karbowiak and me at a local test meetup group called YVR Testing on 2nd April 2014.

PDF version of slides: BDD Intro

Intro

Disclaimers first: I am not an expert in Behaviour Driven Development (BDD); in fact I am just starting down this particular learning path. I have, however, been testing, and to a lesser extent automating tests, for many years. So I have learnt enough to know that this approach is a great one to try, as I can see how it will help address many of the issues we experience. In particular, it will clearly help prevent the types of issues that arise from misunderstandings, assumptions and ambiguity in our requirements.
Marc and I wanted to share some of our early experiences, and those of other, better folks who preceded us in learning BDD, as we feel strongly enough about this approach that we want to encourage and inspire others to learn and adopt BDD practices.

(slide 2) Setting the scene

We have probably all experienced ambiguous requirements, which means we have probably all experienced problems in our products as a result.
Some simple examples;

  • The Product Manager (PM) and Interaction Designer (IxD) require a text box to have a 100 character limit.
  • The test automator leaves the requirements discussion, goes back to his desk and writes tests that will drive the UI to enter one character and assert that the UI shows a character count of 1.
  • The developer leaves the same discussion, goes back to his desk and writes code which shows 100 and counts down every time a character is entered.
  • The test fails, and a discussion ensues to figure out who understood the requirement correctly.

I have experienced many of these situations, quite often with the PM saying that neither the tester nor the developer understood them correctly. In other words, both developer and tester misunderstood or misinterpreted the requirements, and now both need to go and re-do or refactor their work to deliver what the PM really wanted.

(slide 3) Are you often testing at the end of the cycle?

Perhaps you are in a waterfall-like development lifecycle, or a ‘fragile’ (fake agile) one?
Do the testers know ahead of time what they will test? Did they work that out from either a written requirements document or a requirements discussion at the beginning?

Or are the testers working with the developers, product managers and, in our case, interaction designers on a regular (daily?) basis to ensure we are always on the same page and on track to deliver what is really required?

(slide 4) Are you in an agile like environment?

In which case, do you all speak the same language? Do you have a domain specific language that is understood and used by all?
I have been in many requirements discussions, story kickoffs and similar where it really seems like we are talking different languages.
In my current role we have a lot of domain specific terms which are either overloaded (used to mean more than one thing depending on context) or mean different things to different people. We recently had a problem where one team used the term ‘system variables’ to mean a specific kind of data we store about a member of one of our insight communities, while another team wanted to use the same term to refer to data we capture and store about a survey respondent’s computer system (e.g. browser and version, screen resolution and browser locale).
As a test, why not ask 10 different people what a test plan is and see if you get 10 different answers.

(slide 5) Deadline approaching?

Does this mean you usually cut a few corners, rush your work, or ignore some aspects of your process that perhaps you don’t think provide good value for the time they take?
So, why don’t we have time to do it right, but we do have time to do it twice?
(I forget where I first heard that phrase but it is a really powerful one for me.)
We often cut corners or rush to meet a deadline, knowing really that we will have to come back and ‘fix it up’ or pay down some ‘technical debt’ later. And of course the cost of that will be higher than the cost of doing it right the first time round.

(slide 6) A well-known illustration of;

a) What the senior developer/designer designed
b) What got delivered
c) How it was installed at the customer site
d) What the customer really wanted

This is typically the result of some of the ways we have been working and the approaches we take.

(slide 7) So how can we be better?

We can adopt the test-first approach – combining the red-green-refactor pattern of TDD with behaviours to get BDD.
Defining a test first and then writing the code to pass that test provides many benefits;

  • You only write the code needed to pass the test (no waste)
  • It ensures the code is testable – it cannot pass the test otherwise – which requires observability etc.
  • The test effectively documents the code
  • Safety net – the test is added to the CI system, ensuring we never regress this code (if we do, the test fails) – future refactoring is done with confidence, as we will know, as quickly as these tests run, if we have made a mistake
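
To make the red-green-refactor loop concrete, here is a minimal test-first sketch in Ruby (minitest), echoing the 100 character limit example from earlier. The CharacterCounter class is hypothetical;

require "minitest/autorun"

# Written first: this test fails (red) until CharacterCounter below exists
class TestCharacterCounter < Minitest::Test
  def test_remaining_counts_down_from_the_limit
    counter = CharacterCounter.new(limit: 100)
    counter.add("a")
    assert_equal 99, counter.remaining
  end
end

# Then write just enough production code to make the test pass (green),
# and refactor afterwards with the test as a safety net
class CharacterCounter
  def initialize(limit:)
    @limit = limit
    @text = ""
  end

  def add(chars)
    @text += chars
  end

  def remaining
    @limit - @text.length
  end
end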

(slide 8) Where do I start?

Another way to describe the test-first pattern is the Acceptance Test Driven Development approach.
This fantastic diagram is borrowed from (add refs and links)
The four Ds
It follows the TDD red-green-refactor cycle, as shown in the middle, i.e. this is iterative

(slide 9) Discuss

Ensure you have representation from all of the key roles to discuss the story or item of work. Typically we refer to the three amigos – Product Management (or a Business Analyst, or if possible the customer), Development and QA. At Vision Critical we also often have a fourth amigo – an Interaction Designer (IxD).
The discussion needs to unify the language, ensuring we are all discussing the same thing and have a common, shared understanding.
For example, ensure we don’t mix development terms with product or customer terms; try to define a domain specific language (DSL) that we can all share and understand.

(slide 10) Distill

The discussion should then produce examples of the behaviour you want from the product or system. These examples are effectively how you will test that you have developed what was really needed.
Use a ubiquitous language and structure to define these – Given – When – Then.
This facilitates clear communication, as well as a structure that is easy to read and simple to follow.

(slide 11) Develop

First develop the automation that asserts the behaviours (automate the tests first)
Then develop the code (production code) to pass those tests

(slide 12) Demo

Demonstrate the working code using the automated tests
Review the behaviour specifications with the customer or product manager
Add the tests to your CI system (keep running the tests to ensure fast feedback and a consistent safety net of tests)
Time to celebrate, and perhaps retrospect on the story – capture anything you learnt and any ideas for improvement so that you can apply them to the next story

(slide 13) Repeat the cycle for all the stories, learning and improving as you go.

(slide 14) Results

Hopefully you will have delivered what the customer really wanted and gained some additional benefits;

  • Executable specifications that can always be trusted to be true (otherwise your tests will be failing)
  • Automated regression tests that provide a safety net and fast feedback on those regressions – try to have these tests run as frequently as possible
  • Testable and thus maintainable code – not only can you look back at these tests in 6 months and know how the code works, you also know the code is testable as it was written to pass tests

You will hopefully also feel proud of what you have achieved and will be recognised for it – if you are the only team doing this, it will show!

(slide 15) What BDD is not

This all sounds great, so where do I get hold of this silver bullet or pink glittery unicorn?
Well, BDD is not one of those; in fact it takes a lot of work to do it well, but it is worth it.

(slide 16) How to get started?

There are a number of different BDD frameworks for the mainstream development languages; here are a few;

  • SpecFlow is for .Net
  • Cucumber is mostly for Ruby
  • JBehave is for Java
  • Behat is for PHP

(slide 17) Gherkin anyone?

As well as being a pickled cucumber …
Gherkin provides the common, ubiquitous language that facilitates the simple and clear communication of behaviour.
Here is an example of a feature, which contains a number of scenarios (the tests for that feature).
The feature description describes the feature and its context – in this case the problem the feature is trying to solve.
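
The slide shows a worked example; here is a minimal sketch in the same spirit (the feature and scenario are hypothetical);

Feature: Password reset
  As a registered user
  I want to be able to reset a forgotten password
  So that I can regain access to my account

  Scenario: Request a reset link
    Given a registered user who has forgotten their password
    When they request a password reset
    Then they should receive a reset link by email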

(slide 18) Background

Background is a special keyword in Gherkin
In this case I am showing an example from the Cucumber Book that uses the background to set up the test preconditions – the Given for the following scenario(s)
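
The book’s exact example is on the slide; this is just a minimal sketch of the same shape, continuing the hypothetical password reset feature above;

Background:
  Given a registered user with a verified email address
  And the user has forgotten their password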

(slide 19) Scenario

Here are the When and Then sections of this scenario
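
Continuing the sketch, the scenario itself then needs only the When and Then;

Scenario: Request a reset link
  When the user requests a password reset
  Then they should receive a reset link by email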
Very readable, understandable and clear

(slide 20) A question of style

A lot of folks (myself included) start by writing scenarios in a similar way to how we would write test cases or code: detailing all the steps we need to execute in order to set up the product under test, as well as to perform the test and check the results.
This is an imperative style, and it is not very readable, at least not when you are testing something less trivial than adding two whole numbers.
So we need to focus on a more declarative style; try to tell the story of the behaviours we want the product to have.
This means hiding all of the details that are not relevant to the behaviour and keeping only the details that are important to the behaviour or the intent of the test.

(slide 21) An imperative example
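
The slide’s example is along the lines of the steps quoted in the DRY vs DAMP section below (the closing Then is an assumption);

Given I am on the login page
When I enter email as “user@example.com”
And password as “Password1”
And I click the login button
Then I should see the dashboard page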

There is lots of inconsequential detail here, which means the intent of the test is lost in the noise. We specify the email address and password, but have to assume this means they are valid. Do we really need to know we clicked the login button? How does that help us understand whether the product behaves correctly when we provide valid login credentials?

(slide 22) A declarative example
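
A sketch of a declarative version of the same login behaviour;

Given a user with a valid account
When the user logs in with valid credentials
Then they should see their dashboard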

Hopefully you can all see this is much more readable and talks very clearly about what is important to the behaviour.
This test is not about what makes a valid or invalid email address or password; it is about what happens when those are valid and the user is able to successfully log in.

(slide 23) DRY vs DAMP

Aim to tell a story (DAMP – Descriptive And Meaningful Phrases) rather than focusing on re-usability (DRY – Don’t Repeat Yourself)
An example here would be steps like those in the imperative example above;

Given I am on the login page
When I enter email as “user@example.com”
And password as “Password1”
And I click the login button

Because these steps detail how I log in, we can easily re-use them throughout the scenarios to log in before executing further steps designed to assert a new behaviour, for example;

Given I am on the login page
When I enter email as “user@example.com”
And password as “Password1”
And I click the login button
And I click the Start New Project button on the dashboard page
Then I should see a blank project
And I should be able to edit the project

Instead of re-using these steps, this could have been written as;

Given I am on the dashboard page (it is not important to this behaviour what steps you took to log in or what exact credentials you used)
When I start a new project
Then I should be able to edit my new project

(slide 24) Scenario Table Example

Use data tables to test with multiple values that would not read well if all written on one line of a Given, When or Then.
These enhance readability by keeping the data clear but separate from the declarative and meaningful phrases.
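
For example (an illustrative sketch – the step and columns are hypothetical);

Given the following registered users:
  | name  | email             | role   |
  | Alice | alice@example.com | admin  |
  | Bob   | bob@example.com   | member |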

(slide 26) Scenario Outline Example

Sometimes you effectively need to test using the same steps but with a variety of test inputs; a scenario outline helps to avoid repeating the steps for each different set of data values.
In this case each row of the Examples table is essentially one Given – When – Then scenario, and the rows are executed sequentially.
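
A minimal sketch (the password rules are hypothetical);

Scenario Outline: Password strength validation
  When a new user chooses the password "<password>"
  Then the password should be <outcome>

  Examples:
    | password   | outcome  |
    | Tr0ub4dor! | accepted |
    | 123        | rejected |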

(slide 28) Hooks

These are really useful; they can simply call some code to set up or tear down your tests, and are controlled using methods called Before and After
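
In Cucumber’s Ruby flavour, for example, hooks look roughly like this (a sketch – take_screenshot is a hypothetical helper);

# features/support/hooks.rb
Before do
  # Runs before each scenario, e.g. to set up test preconditions
end

After do |scenario|
  # Runs after each scenario; handy for tear down and diagnostics
  take_screenshot if scenario.failed?
end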

(slide 29) Tags

Use these to label your features and scenarios within features
You can then execute only those features or scenarios that are tagged a certain way
Or filter out tests with a different tag
We use tags to group tests by team and to run certain subsets of tests in certain environments; now we are also trying to use tags as a way of recording and reporting test coverage, by labelling features and scenarios with the code area they cover
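
For example (the tag names are illustrative);

@team_red @smoke
Feature: Login

  @regression
  Scenario: Successful login
    Given a user with a valid account
    When the user logs in with valid credentials
    Then they should see their dashboard

A tagged subset can then be run with something like: cucumber --tags @smoke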

And that wraps it up for the presentation part.

Test Coverage Outlines

This is a presentation I gave at a local test meetup group called VanQ.

PDF version of slides: Test Coverage Outlines

Intro

Test Coverage Outlines are just elaborate spreadsheets. They were originally inspired by Jonathan Kohl, as part of exploratory testing training at Sophos. A colleague (Jose Artiga) and I built on that initial inspiration and came up with the first version of the test coverage outline. This was then adopted by the teams at Sophos, and of course refined through use and practice.

(slide 8) Simple spreadsheets to help;

  • Structure & simplify your test planning
  • Inspire & structure your test design
  • Be test professionals and not just test execution machines
  • Collaborate on testing
  • Show clear and concise status at any point in your testing
  • Discuss risk in terms of planned vs actual coverage

(slide 9) An example coverage outline

  • Each white-background row is a test idea – a simple sentence or a few words to convey an idea you want to cover by testing
  • Try to keep these simple, like a heuristic that acts as a guide rather than a step-by-step procedure

(slide 10) Divide your test ideas up into sections

  • Example shows this as the black lines with white text
  • You can do this by functional area, test focus, heuristic etc (whatever makes sense for you in your context)

(slide 11) Divide your test ideas up into sheets

  • Again, divide by functional area per sheet, or perhaps copy a sheet to provide the same or similar coverage against different builds or versions of your software

(slide 12) Configurations – show your coverage against different configurations

  • The example given is a common one – browsers and OSes – but the columns could equally be versions of hardware, software, mobile devices etc. (a rough sketch follows)
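
A rough text sketch of that layout (configurations across the top, highest priority on the left, one status per cell – all names illustrative);

Test idea                     | Chrome/Win | Firefox/Win | Safari/Mac
------------------------------+------------+-------------+-----------
Login with valid credentials  | pass       | pass        |
Login with expired password   | fail       |             |
Password reset link received  | pass       |             | x (not planned)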

(slide 13) Test types/levels – show your coverage at different test levels or different testing types

  • Matrix your test ideas against different test levels, for example unit tests, integration tests, automated UI tests, manual tests, exploratory tests
  • Even if you are not familiar with the unit tests, you can suggest the developers gain some coverage of the test ideas with unit tests, and have them fill out that column with you

(slide 14) Prioritise your test ideas

  • I use simple (high, medium, low) priority levels to organise, sort and filter test ideas by priority
  • Make sure you are executing/covering in priority order

(slide 15) Prioritise your configurations

  • I use a simple left-to-right prioritisation for configurations, i.e. the highest priority configuration to cover is the leftmost column

(slides 16 & 17) Use colour and conditional formatting

  • I use simple colours and conditional formatting to make updates easy and to show status and priority clearly
  • Make sure you also use a marker and colour (I use an x and grey) to indicate tests that you are consciously not planning to cover

(slide 18) I use mind maps as a visual test design/planning aid

(slide 19) Start by outlining your test idea areas, use the outline to inspire your test ideas

  • I sometimes use heuristics to structure and inspire my testing; for example, using a quality criteria heuristic like usability or performance, I can think about test ideas that come under each of those areas
  • Creating templates (see examples) for common types of testing focus can help inspire testers as well as providing some base or consistent tests

(slide 20) Using test ideas rather than detailed test cases

  • enables and encourages variation, exploration and more ‘brain engaged’ testing
  • thus avoiding the pesticide paradox [Boris Beizer]

(slide 21) Collaboration

  • Using google docs or similar enables you to easily collaborate in real time with other testers/colleagues
  • One of the easiest ways to organise and facilitate collaboration is to allocate each person a config column, a test area, or even a whole tab
  • You can see who is accessing/updating the sheet, and can see the status of the tests others are covering in real time too

(slide 22) Use colour to provide very easy and quick to read status

  • You can just look at a tab and immediately see if you have any fails or blocked tests

(slide 23) Colour coding also makes it easy to see coverage in terms of planned vs actual

Example Coverage Outlines

Feel free to copy and adapt these for your own purposes; hopefully they will inspire you to refine the outlines and share your ideas back with me and others.

Learning from the mistakes our customers care about

(Image: 5 whys)

As mentioned in a previous post, I keep a close watch on customer defects. These are the issues that a customer cared about enough (or was sufficiently annoyed by) to contact us and tell us about.
I am focusing on the issues here, not the feature requests or the ‘how do I?’ questions, though both of those can also be regarded as defects: in ‘failing to understand or predict the customer’s needs’ and ‘failing to deliver an intuitive product’ respectively.

Being a big fan of ‘prevention is better than cure’, I like to investigate customer issues and perform a root cause analysis, or 5 whys, on the reasons each issue escaped our attention.
Yes, I refer to customer reported issues as escaped defects, since they escaped our detection. It doesn’t matter how many stages you have in your pipeline, how many automated tests at different levels, or even how good the teams are; there will be some issues that escape our attention.
Technically, I also regard issues discovered late in our pipeline (after story acceptance, as part of our release process) as escapes, as well as any issues we happen to find in production before a customer reports them.

There are lots of quotes about learning from failure, and being doomed to repeat your mistakes if you fail to do so. I believe, along with many others, that true learning only comes from failure and from understanding the reasons for it. However, we should take care not to make the same mistake twice, as this indicates a failure to learn. So the reason for analysing these escaped defects is not to apportion blame or point fingers; it is to learn how we can prevent a similar class of issue in future. The preference is to prevent the class of issue from ever being coded again. If that proves more expensive than the cost of the issues’ impact, then the aim is at least to prevent the class of issue escaping our attention again.

I was introduced to Lean software development techniques via Mary and Tom Poppendieck, which led me to learn more about The Toyota Way, where I learnt the 5 whys technique. Prior to this I had been using other root-causing techniques, or simply using my QA ability to ask difficult but relevant questions, to achieve the learning and expose the actions.
The 5 whys technique is just so simple that it is easy for anyone to participate in, as well as to facilitate, meaning anyone can do this – you don’t need to be in QA or have a background in problem solving or root cause analysis techniques.

So, what does it look like? Well, here is an example, with some edits to remove any proprietary details (note: 5 is simply a guide, you can use more or fewer whys);

Problem statement: Service proxy was updated in C# provider code but not in consumer code

Why didn’t we catch this?

  • tests run in the consumer pipeline were not sufficient to expose the issue
  • tests were not full contract tests – nothing testing the contract between producer and consumer
  • no communication between the producer and consumer teams on any changes made to the interface

Why didn’t the consumer pipeline tests expose the issue?

  • because the test only exercised the simplest possible scenario, which was not affected by the change (the service call returned the minimum possible data)

Why were the contract tests insufficient?

  • contract testing is not very well understood by all teams concerned
  • tests were not reviewed by anyone except developers

Why wasn’t there communication between teams?

  • the producer of the service does not know who is consuming that service
  • tests didn’t relay information of endpoint changes
  • consumer tests were still passing (green)

Why didn’t the tests relay information of endpoint changes?

  • there were no tests asserting or checking the stability of the interface
  • the consumer was coded to de-serialise the entire response when really it only needed to check for ‘success’

Why was the consumer coded to de-serialise the entire response rather than just parse the value of interest?

  • because it was deemed easier to use a standard pattern to de-serialise entire response rather than write code to specifically look for just the value of interest

Some example actions that were taken as a result of this;

  1. Provide training in contract testing patterns
  2. Producer to add tests to notify producer team of interface change (trigger for investigation or communication of change)
  3. Consumer to provide contract tests for producer to run in producer pipeline to alert of breaking changes for consumer
  4. Audit all cross team interfaces/dependencies to negotiate and add any missing contract tests

Why does QA Matter?

Software Quality Assurance matters because without it we face a world of buggy software and systems that often fail in ways that are harmful, or that incur some form of cost we would rather not pay. You only need to follow thedailywtf to see bad and often embarrassing software failures. Software Quality Assurance focuses on improving the quality of the end product delivered to customers by working to assure quality throughout the entire lifecycle. Testing represents only part of that lifecycle; as such, QA encompasses test but also goes beyond it. Quality Assurance, to me, is as much about defect prevention as it is about defect detection.

 

(Image: QA is about much more than testing)

 

It might help if first I explain my definition or interpretation of Quality Assurance as opposed to Test.

  • My definition of software testing – goalkeeping, the last defence between buggy code and the customer
  • My definition of software quality assurance – a focus on quality practices and processes across the entire development lifecycle

So, expanding on those definitions;

Testing is the art of exposing information (where that information is often a defect), and is typically performed by dedicated engineers on a built software product. (Yes, I realise I am painting with very broad brush strokes here, but my intention at this moment is simply to use the common interpretation or perception of test to illustrate the difference between it and quality assurance.) Testing tends to be focused on the design of tests and how they will be executed, often with metrics around numbers of test cases, defects found, functional areas covered etc. All of these are important in some way, but they represent only part of the quality picture.

QA, however, typically focuses on the processes and practices used throughout the entire development lifecycle, with the intention of assuring that these facilitate the delivery of a quality product. Thus quality assurance is as much about defect prevention as it is about defect detection; in other words, taking steps to prevent defects being coded, as well as trying to ensure any defects that are in the code are detected as early and efficiently (and thus cheaply) as possible. In terms of prevention, I am often heard to say “the cheapest defect to fix is the one that never gets coded”. (I will talk about some of the approaches and techniques I use to help prevent defects being coded in future posts.)

QA really starts at the beginning of the development lifecycle, typically with the idea, concept or requirement analysis phase. The focus at this stage is less about ensuring we have tests designed to cover each requirement, and more about ensuring the whole team fully understands the requirements: how they will benefit the customer, what sort of changes they will require to the existing product, what could go wrong, etc. The aim is to avoid coding the wrong thing (TDD or BDD anyone?) and to ensure no bad assumptions are being made (assumptions are at the root of many defects), whilst also preparing ourselves to test the delivery of the requirement throughout all the development lifecycle phases, ensuring the fastest feedback possible should we detect any failures.

QA also continues after the product has been shipped, analysing any customer-exposed defects, taking steps to learn from those escaped defects, and applying the lessons to the appropriate phases and processes of the development lifecycle. (I will talk about the approach I take for this in a future post.) As an example, we once learned that we missed a defect because we did not have the means to unit test a particular type of code interface. We used that knowledge to make a change (re-factor the code) to enable a mock to be built, which in turn enabled full unit testing of that interface, and thus ensured the future quality of that code in the cheapest and most efficient way. In another example, we learned that we missed a defect because we did not fully understand how the customer was utilising a part of the product. We applied what we learnt to the requirements analysis process to ensure we include that understanding in future changes to that area of the product, whilst also ensuring we added tests that used the product in the same way as the customer, to guard against future regressions. (I will talk about my method for learning from escaped defects, as well as metrics around those, in future posts.)
