
https://technology.blog.gov.uk/2015/03/04/creating-better-acceptance-criteria-for-user-stories/

Creating better acceptance criteria for user stories

Categories: GOV.UK Verify

Acceptance criteria (ACs) are a key part of user stories. They set the boundaries of what should be achieved - telling the developer when to stop, the QA how to test, and the product owner what to expect. Because they’re used by technical and non-technical people, ACs must use simple language and avoid specifying the implementation. This post looks at why we changed our approach to ACs on GOV.UK Verify and the effect it has had on the team.

How we used to do it
On Verify we used to define ACs as lists of statements that had to be satisfied before we could say a story was “done”. We often wrote these after the technical details of a story had been considered. As a result our ACs often looked a bit like this:

  1. Create the new page according to this example [link to prototype]
  2. Use the IDP name in the H1 (e.g. xyz couldn't sign you in)
  3. The 'Start again' button should lead to the Start page on the hub ".../start"
  4. The feedback button should link to the feedback page
  5. Amend the acceptance tests

These ACs are from a story that introduced a new page shown when a user returned from an identity provider after failing to sign in. While the ACs weren’t wrong, they didn’t provide much help for anyone working on the story: they didn’t set the boundaries for the developer or tell the QA how the story should be tested. Nor are they written in language a user would understand - and we should be speaking the user’s language to satisfy their needs.

A new way
We decided to write our ACs in a different style - ‘Given, When, Then’. These ACs specify a scenario that should hold true as a result of the story, and are written from the user’s perspective.

  • Given - the conditions assumed true at the start of this scenario
  • When - the event that triggers this scenario
  • Then - the outcome of this scenario

We can rewrite the ACs above in the ‘Given, When, Then’ format:

  Given I have unsuccessfully signed-in with an identity provider
  When I return to Verify
  Then I should see a page informing me I failed to sign in
  And I should see the name of the identity provider I attempted to sign in with

  Given I have satisfied the previous criterion
  When I click the ‘Start again’ button
  Then I should be back at the start page on Verify

  Given I have satisfied the previous criterion
  When I click the feedback button
  Then I should see the feedback form

This style of AC removes ambiguity: each criterion specifies a scenario clearly by describing what the user must do in order to experience the new functionality created by the story. They are also written from the user’s perspective, in plain English, ensuring everyone (technical and non-technical) knows what to expect. Putting the user, rather than someone technical, at the centre of the new behaviour is far better.
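
Because each criterion describes observable behaviour in the user’s own words, it also maps naturally onto an automated check. Here is a minimal sketch in Python (pytest style) of the first scenario; FakeHub and its methods are hypothetical stand-ins for a real page or browser driver, not part of Verify’s codebase:

  # A minimal sketch of the first scenario as an automated test (pytest style).
  # FakeHub is a hypothetical stand-in for the real Verify hub front end.
  class FakeHub:
      def __init__(self):
          self.page_text = ""
          self.idp_name = None

      def fail_sign_in_with(self, idp_name):
          # Record an unsuccessful sign-in attempt with a named provider
          self.idp_name = idp_name

      def return_to_verify(self):
          # Render the failure page the user sees on returning to the hub
          self.page_text = f"{self.idp_name} couldn't sign you in"

  def test_failed_sign_in_page_names_the_identity_provider():
      hub = FakeHub()
      # Given I have unsuccessfully signed-in with an identity provider
      hub.fail_sign_in_with("Example IDP")
      # When I return to Verify
      hub.return_to_verify()
      # Then I should see a page informing me I failed to sign in
      assert "couldn't sign you in" in hub.page_text
      # And I should see the name of the identity provider I used
      assert "Example IDP" in hub.page_text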

What we found
This approach had many benefits for people across the team, technical and non-technical:

  • Greater visibility - product owners and stakeholders are now better informed of upcoming functionality. Being open and using plain English means more conversations and shorter feedback loops.
  • Smoother QA process - edge cases are now better considered in ACs because they focus on user behaviour and its possible outcomes.
  • Shared language - product owners, developers, analysts and QAs now have conversations based on this format - increasing understanding and knowledge.

What shouldn’t be an acceptance criterion
We soon found that this new style of ACs leaves no room for components best dealt with in other areas of the story:

  • Technical details - leave them for the technical notes section of the card. If technical details appear in ACs, your story may be too technical to begin with.
  • Development practices - you don’t need to specify test coverage within ACs. It should be a standardised approach across the team.
  • The solution - don’t specify the solution up front - decide as late as possible! Specifying an approach before the problem has been considered by developers will lead to poor delivery.

A big improvement
The adoption of this style of acceptance criteria has increased the quality of our user stories by introducing consistency and greater consideration at the analysis stage. The shared language spoken across the team has resulted in increased conversations and shorter feedback loops - improving the way we deliver.


1 comment

  1. Comment by Paul Moffat:

    Using this type of formatting for acceptance tests should also mean that writing automated acceptance tests in tools (such as Cucumber) becomes much easier.
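
    The commenter’s point is worth illustrating. With behave, a Cucumber-style tool for Python, the ‘Then’ clause above can become a step definition almost verbatim - a minimal sketch, where context.page_text is a hypothetical attribute an earlier ‘When’ step would have set:

      # Step text lifted straight from the AC; context.page_text is a
      # hypothetical attribute set by an earlier 'When' step.
      from behave import then

      @then('I should see a page informing me I failed to sign in')
      def step_see_failure_page(context):
          assert "couldn't sign you in" in context.page_text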