A BDD worked example – login page

I have used this example as a workshop to introduce BDD to a wide variety of folks at different companies. I like this example because it is deceptively simple: everyone knows how to deliver a login page, right? The reality is that we all have different ideas about what should and shouldn’t be on a login page, how it should look, and so on. So it serves as a simple but very illustrative example of how a Behaviour Driven Development approach can really help to clarify requirements, and engage the thoughts, experiences, and knowledge of all the participants to ensure that what you deliver will be what was really desired. It also ensures the result will be both testable and tested, as the high-level acceptance tests are defined up front.

Introduce roles and abbreviations

First of all, I want to introduce the roles that will be part of the discussion, along with the abbreviations used for those roles in this example. Each role can then speak their part to show how the discussion might go.
  • PM – Product Manager, our proxy for the customer, bringing the ‘what the customer wants or needs’ definitions to the team
  • DL – Development Lead
  • IxD – Interaction Designer, bringing the UI look and feel, the usability, and the customer workflow understanding to the team. Helping to ensure we have a consistent style, content, and customer workflows.
  • QA – Quality Assurance person (either QAE or SET) who will be ensuring we deliver the story with high quality, building it right and building the right thing
  • Dev – Developer(s), responsible for the actual implementation of the story, the code that will provide the desired functionality
  • Implementation team – typically composed of a developer and a QA person, but can also include an Interaction Designer, or development or QA pairs
  • Amigos – the group of people required to analyse a story, typically the PM as the customer proxy plus the implementation team

Introducing the Story

The Login Page
Bring the story into ‘In Analysis’
What does the story look like at this point? (This is an example using a tool called Mingle)

The BDD discussion begins

As is fairly typical at this stage, the story does not contain a lot of detail and is kind of vague in its description.
We start the discussion:
  • The PM or Dev Lead presents the story
  • The 4 amigos (PM + implementation team (IxD, Dev, and QA)) discuss and ask clarifying questions to understand the story in detail, exposing and discussing any business risks as they go
PM/DL: This story is to deliver a login page. A fairly standard login page: username and password fields and a submit button. (The what.) This will be the login page for our administrators. (The who.) Once they log in here they will have access to the dashboard and all the administrator functionality. (The why.)
QA/Dev: Do we have a mockup?
IxD: Yep, it looks like this (I encourage mockups to be cheap, and for me nothing beats a whiteboard diagram for being cheap, flexible, and efficient);
QA/Dev: So is the button text ‘login’ or ‘submit’?
IxD: I think ‘login’ is more intuitive
QA/Dev: Is it a username or an email address?
IxD: I was assuming it was an email address
PM: Yep, we will need to use an email address, we will want this to work with our single sign-on feature coming later and that will use an email address
QA: Can I assume we will use our standard code for validating an email address?
Dev: Erm, do we have a standard email address validation code?
QA: Yes, I believe the architecture team has a regular expression they standardised on
QA: Do we want to provide any client-side validation of the password? Or should we just send it to the server for validation against the username? i.e. should we ensure it is at least 8 characters long, contains at least one special character and at least one upper case character?
PM: No, we will have checked that when we set the password at admin user creation time, or when they update it themselves. Let’s just have the server side validate it against the email address. Besides, if we provide guidance on how a password will be composed, then an attacker can update the dictionary they are using for brute forcing so that it follows the rules.
QA: How do we want to tell the user that either their email or password is not valid? Text on the page? Red? A popup? Do we want to clear the fields?
IxD: First I think we should have ghost text in the email address field to provide an example of a correctly formatted email address. For the error, I think we should have red text above both boxes, and we should leave the fields populated, let me provide an updated mockup;
QA: Do we want to show a different message for an invalid email address, i.e. one that fails email address validation rather than a check to see if that email address is a user in our system?
IxD: Yes, I think we should help the user to avoid typos, how about red text above the username field for this too. Here is an updated mockup;

QA: Do the error text and the text on the page need to be localised?
PM: Yes, we need to support the existing 14 languages for the Admin users
Dev: So, we should use the browser context to set the locale and display localised text if we support that locale and a fallback if we don’t?
PM: Yep, we might have a different setting later if we allow users to select a preferred locale that we store as part of their profile, but at login time we don’t know who they are so we should just use the browser locale.
QA: Cool, so the localised text will be supplied as part of the page render based on the initial request to the login page URL, including any text for error messages?
Dev: Yeah, that’s the way we usually do it, so we can just send error codes back and the client-side code can then render the appropriate message. Of course, the email address validation will be checked at both client and server, so we can provide quick feedback to the user if they don’t provide a valid email address format in that field, but also guard against someone hitting the server directly with a badly formatted email address.
QA: Nice, we should make sure we have unit tests covering the validation on both client and server then, I can add a single invalid test for each to show the error message (UI) and return code (server)
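(As an aside, here is a minimal sketch of what that shared server-side validation might look like. The regex, the error codes, and credential_store are all illustrative assumptions, not the architecture team’s actual standard.)

```python
import re

# Illustrative pattern only - in practice, use the architecture team's
# standardised regex so that client and server stay consistent.
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

# Error codes sent back to the client, which maps them to localised messages.
ERR_INVALID_EMAIL_FORMAT = "ERR_INVALID_EMAIL_FORMAT"
ERR_INVALID_CREDENTIALS = "ERR_INVALID_CREDENTIALS"

def validate_login_request(email, password, credential_store):
    """Server-side validation: returns an error code, or None on success."""
    # Guard against requests that bypass the client-side format check.
    if not EMAIL_PATTERN.match(email):
        return ERR_INVALID_EMAIL_FORMAT
    # Deliberately do not reveal whether the email or the password failed.
    if not credential_store.check(email, password):
        return ERR_INVALID_CREDENTIALS
    return None
```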
Dev: Should we have a ‘forgot password’ link and functionality?
PM: Yes, but I don’t think we have email functionality built yet, so we will defer that to a future story
QA: Should we have a timeout for responses from the server? i.e. how should we deal with the server being busy or unresponsive?
Dev: Yes, we should have a timeout value in the client code that will display a message to try again later. Do we have a mockup for that?
IxD: Agreed, let’s allow 10 seconds for the timeout and I think we should show a message to try again later if we timeout or if we get a 50X back from the server. Here is a mockup for how that text should be displayed;
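(A minimal sketch of that client-side handling, written in Python for brevity; the URL is hypothetical, and the error code is simply whatever the UI maps to the ‘try again later’ message.)

```python
import requests

LOGIN_URL = "https://example.com/api/login"  # hypothetical URL
TIMEOUT_SECONDS = 10  # as agreed in the discussion above

def attempt_login(email, password):
    """Returns a result the UI layer can map to a localised message."""
    try:
        response = requests.post(
            LOGIN_URL,
            json={"email": email, "password": password},
            timeout=TIMEOUT_SECONDS,
        )
    except requests.Timeout:
        return {"ok": False, "error": "ERR_TRY_AGAIN_LATER"}
    # Treat any 50X from the server the same as a timeout.
    if 500 <= response.status_code < 600:
        return {"ok": False, "error": "ERR_TRY_AGAIN_LATER"}
    return response.json()
```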
QA: Do we need to support logging in on mobile devices? i.e. should this page follow a responsive design pattern?
PM: Yes, we need to support tablets right now and may need to support phone devices in the future; if we go responsive now then both should work.
QA: But we will only need to test on tablets for now, right?
PM: Yep, we will add testing stories for phone device testing later if we need them.
IxD: Responsive should be easy enough, but I may need to think about the length of the fields and the text we will need to display, particularly in different languages.
QA: What about accessibility? Do we need to support a WCAG level for this?
PM: Hmm, well we should but I think we will defer that to a future story. Let’s try to keep it in mind so that we don’t have to re-design later
Dev: What about functionality to enable maintenance notifications? i.e. the ability to add text to inform admins of upcoming maintenance or outages?
PM: Again, I want to defer that to a future story, I will sync up with Production IT to understand the requirements for that
QA: Do we need to limit the number of attempts to log in, so we can avoid brute force security attacks?
PM: Hmm, yes I think we should allow 3 attempts and then lock the account for maybe 5 minutes?
IxD: Actually I think 5 attempts would be better
PM: OK let’s go with 5 attempts and 5 minutes wait time
Dev: Do we want to log each and every login attempt (both successful and unsuccessful) or just the ones that result in a lock on the account?
PM: Hmm, I think we may need to log all attempts along with success, failure or lock so that we can provide an audit log if the customer needs it or if we need to show anything for a security audit.
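(To make that concrete, here is one way the audit entries might be structured; the field names are assumptions, not a spec.)

```python
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("login.audit")

def record_login_attempt(email, outcome, source_ip):
    """Log every attempt; outcome is one of 'success', 'failure', 'locked'."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "email": email,
        "outcome": outcome,
        "source_ip": source_ip,
    }))
```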
QA: How do we want to show the lock message when 5 unsuccessful attempts have been made?
IxD: I am thinking red text again, but below the 2 boxes and to the left of the button this time, here is an updated mockup;
QA: How are we determining 5 attempts to log in? Attempts using the same email address? Coming from the same IP? Some form of session identifier, e.g. a cookie?
Dev: Well the simplest is to set a session id in a cookie when an attempt is made on the server side and then to count how many attempts are made with this session id
QA: I presume that means someone malicious could simply brute force by creating ‘cookie-less’ requests?
Dev: Yeah, maybe we need to think about that one some more or talk to the security team.
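(A naive sketch of the session-based counting Dev describes, which also makes the QA’s objection concrete: a client that never returns the cookie gets a fresh counter on every request. All names here are illustrative.)

```python
import time

MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 5 * 60

# session_id -> (failed_attempt_count, lock_expiry_epoch_seconds)
attempts = {}

def is_locked(session_id):
    _count, lock_expiry = attempts.get(session_id, (0, 0.0))
    return time.time() < lock_expiry

def record_failure(session_id):
    count, lock_expiry = attempts.get(session_id, (0, 0.0))
    count += 1
    if count >= MAX_ATTEMPTS:
        lock_expiry = time.time() + LOCKOUT_SECONDS
        count = 0  # reset the counter once the lock is applied
    attempts[session_id] = (count, lock_expiry)

# The weakness: a malicious client that simply never sends the cookie
# gets a brand new session_id (and a zero counter) on every request.
```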
QA: What should happen if the user attempts another login when we have locked them out?
…
QA: So based on all of that, what examples do we need to accept this story? I am thinking something like this;
Given a valid email address and password when I select login then I should be authenticated and taken to the dashboard page
Given an invalid email address or password when I select login then I should see a message indicating that my login attempt was unsuccessful
Given a badly formatted email address when I focus outside of the email address text field then I should see a message indicating that I have entered an incorrectly formatted email address
Given I am entering an invalid email address or password for the 5th time, when I select login then I should see a message indicating that I must wait 5 minutes before trying to log in again
Dev: Should we include an AT for the server busy/down error message too?
QA: Is that required for acceptance? It is not something the user is in control of or can directly impact (without removing their network connection)
PM: I agree, we should test for it but I don’t think we need to include that in the Acceptance Tests
PM: I want to be sure this will look and work well on a tablet, so can we make sure we test that?
QA: Sure, we can do desk checks on an iPad if you like? But we will automate the ATs using Selenium and test with our most popular customer browsers for the admin interface
PM: Great
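(A sketch of how the happy-path AT might be automated with Selenium; the URL and element locators are assumptions for illustration.)

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_valid_login_reaches_dashboard():
    """Given a valid email address and password, when I select login,
    then I should be authenticated and taken to the dashboard page."""
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")  # hypothetical URL
        driver.find_element(By.ID, "email").send_keys("valid.email@mydomain.com")
        driver.find_element(By.ID, "password").send_keys("Password1")
        driver.find_element(By.ID, "login").click()
        assert "/dashboard" in driver.current_url
    finally:
        driver.quit()
```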
IxD: I am a bit concerned with how the localised text will look, can we make sure we test that too?
QA: Sure, we will test that the locale gets set and falls back as expected, and we will do some basic checks with pseudo-loc to make sure we don’t have overlap, truncation, etc. But how about we include some different languages in the desk checks on the iPad to make sure you are OK with the look and feel?
IxD: Sounds good
Dev: What about an AT for the audit logging?
PM: This is not a formal requirement from our customers or security team yet, so I want it tested but it does not need to be an AT as it is not a must-have part of the specification.
QA: No need for any regression tests here as this is all new code and not dependent on anything else.
PM: Do you need a new environment to test with?
QA: We already have the pipeline setup so we can just deploy to that from the CI system and test there, so no, we should be ok.
PM: What do we think are the biggest risks?
QA: Well I think security is the biggest as this is a login page, but we will mitigate that with validation in the client and server side plus testing focused on circumventing security, including the lockout to prevent brute forcing. The next biggest is email validation, this is notoriously problematic as most email clients do not conform to the RFCs. We will mitigate this by using our standard email validation to be consistent and to have one place to change if customers complain. We can also monitor the audit logs to see if people regularly use different methods of commenting or ‘tagging’ their emails that we should allow for. I am not really concerned about performance (very little traffic between server and client) and we will do regular desk checks for usability and style including the error messages, localised text, and responsive design.
What does the story card look like now?
QA: We need to get more specific with our examples. We have captured the top-level ideas and behaviours, but we really want to provide concrete examples (specification by example) so that it is 100% clear how this will behave, and so we know how we will demonstrate this to you for acceptance. So how about;
Given I have entered valid.email@mydomain.com as the email address and Password1 (a valid password) as the password
When I login
Then I should be presented with the dashboard page

Given I have entered invalid.email@mydomain.com and Password1
When I login
Then I am presented with an error message in red text saying “Invalid email or password”

Given I have entered valid.email@notopleveldomain as the email address
When I change focus from the email field
Then I am presented with an error message in red text saying “invalid email address”

Given I am entering invalid credentials for the 5th time in a row
When I login
Then I am presented with an error message in red text saying “Too many failed login attempts, please wait 5 minutes before trying to login again”
PM: OK, we know the scope now and we have Acceptance Tests defined, so what do we think the size of this story is?
At this point, we have clarified and agreed on the scope, and have a common understanding of what ‘done’ looks like in the form of some high-level acceptance tests. It is reasonable to guesstimate the size of the story at this point. But note that we are much more likely to guess accurately once we have talked through the design in a bit more detail – the ‘how will we solve this need in code’ discussion.

Testing as a career

I am going to join a long list of people I consider thought leaders in our industry in a lament of how many people continue to see testing as a simple job, not a career, not a craft, not really requiring any skill at all.

But, like Lalitkumar Bhamare I felt I needed to talk about this, because …

I recently interviewed a candidate who had had two co-op experiences as a QA; he had been given requirements and design specs and asked to think of tests (purely black box) for these. He believed he was doing well, and that the team was working efficiently, when he could think of one or two cases in addition to the one obvious case confirming the requirement was met.

He found bugs, 2 or 3 a day, he said with some pride.

It was obvious he did not think this was hard, required any skill, or was a career that he wanted, but he felt trapped and compelled to apply for a QA role as that was what he had experience in. He really wanted to be a developer.

I explained he could be in a very different environment, where instead of being part of a waterfall-like cycle like this;

  • the customer proxy passes requirements to the tester and developer,
  • the developer develops code to meet the requirements,
  • the tester thinks up a few test cases per requirement (he told us he came up with about 2.5 on average),
  • the developer then hands the code (product) to the tester, who tests it by following the heavily prescribed test cases he just wrote (presumably without thinking much), finds a bug or two, records that in the bug tracking system and moves on to the next test case,
  • then the game of ping-pong begins as the tester throws the code (product) back to the developer to fix the bugs. The developer fixes the bugs and throws it back …

That instead, he could use his skill, (yes I did say the word skill), to help the customer proxy and the developer understand what he was thinking of testing before the coding started. That this might actually prevent the developer from missing the requirement to meet the other use case the tester had thought of. That the customer proxy may have an opinion on this and say, “no we don’t want that”, or perhaps, “yes we want to meet that need too, and now you have made me think of another one that we need to meet too!” (Catching a completely missing requirement)

He didn’t see it, couldn’t see himself doing that, couldn’t see a way that could work, at least not during the interview. To his credit he was thinking about it and it clearly confused him but he was not able to take this any further.

(I did also suggest that there was no reason for him not to apply for software development roles)

Many, many, others I have interviewed, some who have been ‘testing’, (I would say checking), for years, still have no thought that they could be better, that they could improve and develop their skills, that they could really help their team to avoid making so many costly mistakes and save their companies thousands if not millions in wasted efforts, time lost due to bug interrupts and context switches, re-builds, re-tests, whack-a-mole etc. In other words deliver real value and not just fill a role, warm a seat, expend some effort, etc.

A co-op student I worked with, (believe me I am not picking on co-ops at all, they have very little experience to draw on, but are also often not taught anything good about testing), visibly turned his nose up at doing some testing, saying, “I don’t want to just press buttons all day long”. This was his understanding and belief of what testing was. I am not entirely sure what his experience was or where he got that idea, but I tried to disabuse him of that incorrect understanding of the role, and once again felt myself working against an entrenched idea.

So, I call out to the learning establishments to do a better job of preparing these young minds for the real work of professional software engineering. That good software developers do a lot of thinking and a lot of testing as part of everything they do, and that doing this well takes a lot of skill, experience and learning. And that good software engineering teams and leaders recognize the need for people on their team that think differently and are skilled in the art of helping the team expose issues as early and as quickly as possible, to help them deliver the right solution first time more of the time. Without the need for drama, missed deadlines, long hours, late nights, re-design, context switching to fix bugs found in code developed weeks ago, etc. That this, (software testing), is a career, a valuable role, and an essential part of any high performance team. That it also takes a lot of skill, experience and learning to do it well.

I also call out to all the companies out there that insist on continuing to think of testing as simply, ‘using the product the way a customer will’, and hoping that will catch enough issues. Believing that in doing this they are ‘testing the quality into the product’! They are both wasting and losing a lot of money, as they have to pay the cost of finding defects late in their cycle. They also need large customer support teams, and in some cases teams of developers focused on maintaining, (fixing), their colleagues code.

These companies are responsible for perpetuating the myth that testing is basic, unskilled, and a simple role. That it is easy to automate what these testers do, or that we can simply outsource this work to countries where people’s time is cheaper.
They are also often responsible for poor quality products, that will often miss deadlines, and require expensive maintenance programs or extensive and expensive re-writes every few years.

And finally I call out to all the people in testing or quality related roles out there to learn to do better, be better and thus change the perceptions of everyone around them to see that testing is a skilled role, it is a craft and thus requires craftsmanship, and that it needs to be part of every team and will improve the quality of everything we do.

But it’s not all bad …

I have employed and had the pleasure of working with some fantastically skilled testers, both talented exploratory testers, and test automators. Sometimes a few rare gems that are able to combine great skill in both testing and the development of tools and automation to help check software.

I have also had the great pleasure of working with lots of great software engineers and leaders who really understand and seek the value of testing. Insisting that they have great testers to work with them, that they and their software developer colleagues need to think about and execute testing at all levels and that they encourage all to learn more about and be better at ensuring the quality of everything we do.

I have been inspired, guided and encouraged in my career by many people, but most of this has come from what I think are some of the thought leaders in this area. Here are some links to articles, books and people that can help you too;

I would also highly recommend Rapid Software Testing training; this course will be one of the best investments you can make in your testing career.

QA Recruitment – what do I look for?

I recently got asked what I look for when recruiting for QA/test positions. So I decided I should share my thoughts both with those of my colleagues who are recruiting, and also those who are looking and maybe wondering what they need to be able to share in an interview.

I have broken this out into a small number of attributes or qualities I tend to focus on:

Passion for testing?

Try to understand if they are passionate about what they do
  • ask questions about why/how they got into a QA role, or what keeps them in a QA role
  • you are looking for someone who cares about quality and recognizes that it takes skill, that they have some of that skill, and that they enjoy using it
  • ask them to tell you about an interesting bug they found and why it is interesting (was it challenging to expose, challenging for the team to resolve, an interesting chain-of-events type issue … is there genuine interest and passion in finding and solving problems here?)

Motivations

  • what do they enjoy most about their current role?
  • why do they want to leave?
  • what does this tell me about what motivates them and what they may be looking for?

Testing Basics

  • do they understand the basic test design techniques of equivalence class partitioning and boundary value analysis – keeping in mind they may not know the terms, they may just do this intuitively
  • ask example questions that have clear equivalence classes and boundary values, or get them to explain how they would think about the test cases for a situation like this (see the sketch after this list)
  • ask why they think of testing with those values, to see if they have ever identified that they are applying a technique or a pattern to these types of problems
  • you can also ask other basic questions, like what are some of the important points to include in a good defect report, or what they do when faced with ambiguous or unclear requirements
  • do they know any test heuristics?
  • do they perform exploratory testing? Can they describe what they do or give you an example? (Are they truly exploring or are they just doing ad-hoc and thinking they are the same?)
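As an example of the kind of answer you are hoping to hear, here is a sketch of equivalence class and boundary value cases for a hypothetical rule that valid ages are 18 to 65 inclusive (the rule and the values are purely illustrative):

```python
import pytest

def is_valid_age(age):
    """Hypothetical rule under test: valid ages are 18 to 65 inclusive."""
    return 18 <= age <= 65

# One value per equivalence class (below, inside, above the range),
# plus the boundary values and their nearest neighbours.
@pytest.mark.parametrize("age, expected", [
    (5, False),    # below-range class
    (17, False),   # just below the lower boundary
    (18, True),    # lower boundary
    (40, True),    # in-range class
    (65, True),    # upper boundary
    (66, False),   # just above the upper boundary
    (100, False),  # above-range class
])
def test_age_validation(age, expected):
    assert is_valid_age(age) == expected
```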

Teamwork

How well do they work with others
  • how big is their team – if they are the only QA then they probably don’t have anyone to bounce ideas off or tell them there is another/better way of doing something
  • are they all in the same office? – if yes how often are they talking with others on the team and why
  • if some/all are remote – how do they communicate and what problems do they have communicating
  • do they try to help developers resolve issues – ask them if they attempt to narrow down where, how or why a bug is occurring
  • do they try to help PM/BA define requirements – ask them how the requirements analysis/review happens and how they provide feedback or ask questions to clarify requirements
  • you are watching for any ‘us vs them’ type attitudes or ‘not my role/job’ type attitudes

Process basics

  • do they understand their current process? Can they describe it briefly and succinctly? – if not then they probably don’t understand it well enough to explain it clearly to a new teammate, or it is too complicated for them to efficiently work with and they don’t know how to improve it
  • do they do regression testing
  • do they get to choose what testing is done and in what order?
  • if yes, what do they do first and why?
  • if not, then do they understand how the decisions on what to test and in what order are made?
  • what happens when they find a bug in something they are testing? Do they stop and wait for a fix or pick up something else?
  • if they could change one thing about their current process in order to improve it, what would they change and why?

Automation – only applicable if they have some automation experience

  • what do they automate and why do they focus on automating that testing? You often get simple answers here, varying from ‘I automate what I am given/told to automate’ through to ‘I automate everything’ – you are trying to understand if they know that there is judgement required here: choices to be made about what to automate in order to improve the efficiency of the team
  • also common that QA automation is focused on UI – ask them if they have any experience of automating anything other than UI
  • do they run their automation regularly? In a CI system?
  • what problems have they had with automation – digging here to understand if they have built fragile automation, or if they have learnt from the basic mistakes and are experiencing more advanced problems, like how to efficiently manage test data or parallelize tests more effectively

Continuous learning

Lots of people say they do this but do they really? And what are they trying to learn?
  • ask them how they keep their skills current
  • ask them to tell you about something they have learned recently that they found particularly interesting or useful
  • ask them if they follow any QA/test/development folks blogs or twitter feeds and what they learn from that if they do

Don’t have time to do it right, but have time to do it twice?

Why don’t we have time to do it right, but somehow we do have time to do it twice?

My paraphrase of a quote by John Wooden (quote number 5)

How come we often think it is better to rush into something we don’t understand and hack at it than to take the time to understand what it is we are really trying to achieve and then think about how we will achieve that before we start coding?

Do we not learn from our mistakes?

Do we not see, measure or understand the cost of halting a developer, who is by now working on another story, to get them to switch context and think back to the rushed coding they did on the previous story? To get them to diagnose the root cause of the bug we just discovered? Or see, measure or understand the time it takes to diagnose, resolve, then rebuild and re-test this change through the entire pipeline? With a very real risk that, when the work is handed back to the tester, she will find another issue, as the fix was rushed and we did not have enough coverage in our pipeline of automated checks to discover the regression that was introduced.

I have seen this pattern throughout my life and have been guilty of it myself. So what do I do about it?

Well I try to discipline myself, but what I do for my teams is to use BDD to ensure we have a shared and common understanding of the story we are about to do, the changes we are about to make, the additions we are introducing. To ensure we all understand who needs these changes and why they are important to them. (Who the customer is and why they care.) Then we, (the three amigos – slide 9), will be able to agree some high-level examples of what ‘done’ looks like for this piece of work. These examples will be in the form of tests that will adequately specify, and thus prove, we have delivered what was required. We call these the acceptance tests. They are defined before any coding is started. Hence the ‘driven development’ part of BDD. Where possible these tests are automated and are required to pass, (via automation or manual checking), before the story is pulled by QA, (we use Kanban), for a final exploratory test.

We are not perfect at this, but it really does mean that we can get a story accepted more often than not on its first pass through the pipeline. If it doesn’t we all understand the work well enough to learn from the mistake(s) and improve.

VADER – a REST API test heuristic

Following on from the UNUSUAL PAGE post, I have also created a heuristic for REST APIs, along with a simple mnemonic, which I think is quite memorable for a certain group of sci-fi fans 😃

My organisation is currently implementing an API-first strategy, whereby we design and implement the API for any piece of functionality before developing any UI or consumer code for that interface. This provides us with the ability to separate concerns easily, improves testability, and is in line with the current trend for microservices.

As with the UNUSUAL PAGE mnemonic I realised that the original heuristic was not that memorable and thus my team were not able to easily call it to mind when in a meeting room, designing the next REST API with their team.

So, with a bit of rephrasing I came up with VADER, (Verbs, Authorization, Data, Errors, Responsiveness).

[Mindmap: REST API – VADER]

As with the previous heuristic, I have updated the coverage outline templates originally described and linked in a previous post.

Obviously not all of these branches or leaves will be applicable to your REST API and your context, and indeed the words I use here may mean different things to each of you, but that is sort of the point with a heuristic: it is a guide, not a formula; optional, not rigid.
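As a purely illustrative example (the endpoint, payloads, and expected status codes are all hypothetical), a quick tour of the five VADER dimensions against a REST resource might look like this:

```python
import time
import requests

BASE = "https://api.example.com/v1/users"  # hypothetical endpoint

def tour_vader(auth_token):
    headers = {"Authorization": f"Bearer {auth_token}"}

    # Verbs: does each supported HTTP method behave as designed, and are
    # unsupported methods rejected cleanly (e.g. 405) rather than with a 500?
    assert requests.delete(BASE, headers=headers).status_code in (403, 405)

    # Authorization: is the resource protected when no credentials are sent?
    assert requests.get(BASE).status_code == 401

    # Data: does a well-formed payload round-trip correctly?
    created = requests.post(BASE, json={"email": "a@b.com"}, headers=headers)
    assert created.status_code == 201

    # Errors: does malformed input produce a helpful 4XX, not a 500?
    bad = requests.post(BASE, json={"email": 42}, headers=headers)
    assert 400 <= bad.status_code < 500

    # Responsiveness: does the API answer within an acceptable time?
    start = time.monotonic()
    requests.get(BASE, headers=headers, timeout=5)
    assert time.monotonic() - start < 1.0
```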

Hopefully this will help and possibly inspire some of you to expand your thinking when you need to test a REST API or clarify the requirements around REST API design etc

Feel free to share back your own variations on this heuristic or even your own heuristics.

UNUSUAL PAGE – a Web UI test heuristic

I have been meaning to share this for a while now.

I have been inspired by, learned from and generally challenged to think more and better by some of the folks that I consider to be thought leaders in testing, namely; James Bach, Michael Bolton and Jonathan Kohl. These are amongst the best thinkers in the testing profession. They are also some of the best at sharing their knowledge, for which I am eternally grateful. I am in some small part trying to mimic them by sharing some of my thoughts and experiences here.

So this is a little overdue homage to these giants upon whose shoulders I am standing.

When trying to come up with ways to help my QA team think more broadly, differently and holistically about risks and tests for Web UI pages I realised that the mind map that I had developed over time for this purpose was not very easy to remember.
This was fine if you used my coverage outline template, (now updated to UNUSUAL PAGE), because that includes both the mindmap and the spreadsheet sections from the mindmap, thus no memory required.
But what if you were in a meeting room discussing the user workflow or code design of the latest UI change, or at the desk of the User Experience designer looking over some wireframes in preparation for a 3-amigo-style BDD discussion, (designed to ensure we all had a common, shared understanding of the requirements), or at a story kickoff where we wanted to think about design and code risks and tests to mitigate those, and you didn’t have a laptop in front of you with the template to hand? How would you mentally run through the different aspects to consider in the context of the work in front of you?

Thinking about how I normally expanded my thoughts around where things could/would go wrong and what sorts of things I should consider testing I realised I often used heuristics I learned from the folks mentioned above. These heuristics were normally memorized in the form of simple mnemonics. Looking again at my mindmap I realised I was not that far from a fairly easy to remember mnemonic, so with a little tweaking I came up with UNUSUAL PAGE (start with URL and go clockwise);

[Mindmap: UNUSUAL PAGE]
Obviously not all of these branches or leaves will be applicable to your page and your context, and indeed the words I use here may mean different things to each of you, but that is sort of the point with a heuristic: it is a guide, not a formula; optional, not rigid.

Hopefully this will help and possibly inspire some of you to expand your thinking when you need to test a UI page or clarify the requirements around Web UI design etc

Feel free to share back your own variations on this heuristic or even your own heuristics.
I will share some more that I have been practicing with my team.

Testing vs Checking

There has been a lot of discussion over the last couple of years about test automation and in particular the varying definitions of testing vs checking and how that applies to test automation.

I broadly agree there is a difference, here is my paraphrased understanding of each definition;

Testing – the art and science of conducting experiments and carefully observing the results, all the while making multiple evaluations against explicit and implicit expectations. A fundamentally human, (or manual if you prefer), exercise.

Checking – the deterministic evaluation of the outcome of an action or step such that a pass or fail is recorded.
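(To make the distinction concrete, here is a trivial example of a check: a single deterministic evaluation whose outcome is recorded as pass or fail, with no human judgement involved. The function is purely illustrative.)

```python
def total_price(unit_price, quantity):
    return unit_price * quantity

def test_total_price():
    # A check: one action, one deterministic expectation, and a recorded
    # pass/fail. No exploration or evaluation of implicit expectations.
    assert total_price(2.50, 4) == 10.0
```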

But there seems to be an underlying theme to most of these discussions, almost a fear. It is as if someone has threatened the existence of manual or human testing.

I do agree that there has been a general drive towards more automation of ‘tests’, and that this has been largely associated with the adoption of agile practices. I myself have encouraged, and in some cases demanded, more investment in, and thus more, automation of tests in companies I have worked for. However, I have also encouraged and hired for manual testing, and have coached and mentored folks to be better exploratory testers (what I call brain engaged testing).
So I don’t subscribe to the fear that manual testing is a thing of the past or an unnecessary overhead. Perhaps this is why I don’t share in what seems to be an attempt at a sharp delineation between automation and testing?

Like Michael Bolton, I do see automation as a tool and as something that supports testing.
I often use the phrase automation assisted testing, to refer to exploratory or other manual testing where the test setup or initial test data has been achieved using automated tools or scripts.

My preference is to develop automation code in a re-usable fashion, producing a library of re-usable code that is easy to ‘glue’ together in different ways such that different automated tests (or checks if you will) are achievable quickly and efficiently. But this approach also lends itself well to re-using these library ‘functions’ to assist with manual testing. If developed well then anyone with fairly basic coding skills should be able to combine some of these together in order to ‘drive’ a system under test to the point where you want to start your exploration or manual testing. Or as mentioned before, to prime the system under test with the exact data you want or need, in order to conduct the exploratory or manual testing you wish to execute next.
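Here is a small sketch of that style, with the obvious caveat that the ‘system’ and all the names are stand-ins for illustration:

```python
# Small, single-purpose building blocks...
def create_admin_user(system, email, password):
    system.setdefault("users", {})[email] = password

def login(system, email, password):
    return system.get("users", {}).get(email) == password

# ...glued together as an automated check:
def test_new_admin_can_login():
    system = {}  # stand-in for the real system under test
    create_admin_user(system, "admin@example.com", "Password1")
    assert login(system, "admin@example.com", "Password1")

# The same building blocks can be called interactively (e.g. from a REPL)
# to drive a real system to the exact state where you want to begin
# exploratory or manual testing.
```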

My Agile QA Manifesto and Testing Principles

My Agile QA Manifesto

With reference to the original Agile manifesto I present my thoughts on an extension for agile QA or an agile testing manifesto;

  • Prevention over goalkeeping
  • Risk based test coverage over systematic test coverage
  • Tester skill over test detail
  • Automation over manual (for checking/repetition)

While there is value in the items on the right, I value the items on the left more

Testing Principles

And to follow that, a set of principles I try to follow and try to instill into those that work with me;

  • Fail fast/provide fast feedback
  • Test at the lowest layer
  • Test first (TDD/BDD)
  • Risk based testing for efficiency
  • Focus on tester skill and domain knowledge
  • Drive for automation for repeated checking (regression)
  • Learn from your mistakes – don’t repeat them