Confessions of a QA in Dev


For a few years now, the prevailing development approach has been that engineers are responsible for the quality of their code and the solution, while QA staff are responsible for testing the quality of the product.
‘QA in Dev’ is an approach aimed at making all engineers responsible for the quality of the product as well as the code. What follows is my experience of working this way – I learned a lot, and our focus was always on trying to make things better.

Why take this approach?
The benefits of this are potentially a better product, with a shorter delivery time. I say potentially, because, like any process, when done wrong it could make things worse (slower and lower quality), so it’s important to stay on track.
a) Faster overall delivery times
Because the QA tasks are completed within the development of each ticket rather than in a separate phase afterwards, the hand-off between ‘dev’ and ‘test’ is removed and tickets are less likely to bounce back into development from a later test stage.

b) Improved quality
Because the QA tasks are part of the ‘dev’ delivery, reviewers of pull requests etc. should expect to see those tasks (i.e. a ticket cannot be progressed without the QA tasks being complete). This reduces the chance of the QA task being given less time (or removed altogether) due to delivery pressure etc. It also means the QA task is included in the ticket estimate, which further decreases the chance of QA tasks being reduced.
Because the engineers are now writing unit tests, integration tests and end-to-end / UI tests, we can truly develop a test ‘pyramid’. With most tests as early as possible (i.e. unit tests), the developer knows exactly what is covered by unit tests and can therefore reduce the number of UI tests required (although some overlap is a good thing).
This also encourages engineers to ‘shift tests left’ as much as possible: the more that is covered by unit tests, the faster the feedback on regression failures, and the less work the slower (and more fragile) UI tests have to do. For example, if a screen written as a React component has validation on text fields (phone number, name etc.), and that validation is well covered by unit tests, there is less (potentially no) need to cover the form validation in UI tests – the person writing those tests understands the entire ‘test stack’ and can see it is already covered. UI tests can then focus on what is not covered by unit / integration tests, such as user journeys through the application.
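As a minimal sketch of this idea (the validator and its rules are hypothetical, not taken from our project), extracting field validation into a plain function lets it be unit tested without rendering any UI:

```javascript
// Hypothetical validator extracted from a React form component so it
// can be unit tested without rendering the UI ('shift tests left').
function isValidPhoneNumber(value) {
  // Simplified rule for illustration: optional +44 prefix or a
  // leading 0, followed by 10 digits; spaces are ignored.
  return /^(?:\+44|0)\d{10}$/.test(value.replace(/\s+/g, ''));
}

// Fast, unit-level checks; the slower UI suite no longer needs to
// re-test these rules field by field.
console.assert(isValidPhoneNumber('07123 456789'));
console.assert(isValidPhoneNumber('+44 7123 456789'));
console.assert(!isValidPhoneNumber('12345'));
```

With rules like these proven at the unit level, the UI tests only need to exercise the journey that submits the form, not every validation branch.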

How do you bring QA into Dev?
Like most things in development, designing a model that brings QA into Dev is an ongoing journey. It’s important to continually re-evaluate and adjust the approach, rather than aiming for a final solution that gets everything right first time. Trying to do that almost certainly produces an inflexible ‘law’ – a model not designed to be adapted or driven with common sense – and it will lose ‘buy-in’ from engineers, which is essential for this to work.
With that in mind, here are some points to note following my experience so far:

  1. Be realistic and upfront: it’s unlikely that adding QA to Dev will reduce the work an engineer has to do for a ticket – the trade-off is (hopefully) a reduced chance of a ticket returning to development because of a bug found in UAT or Beta.
  2. Be honest:
    1. What works, works. What doesn’t work, doesn’t work.
    2. Develop a model that’s followed because it’s ‘right’ and has tangible benefit, not because it’s ‘law’.
  3. As far as possible, always include all engineers in any decision regarding the model.
  4. Expect the model to change: it will need to adapt to changes in project / company priorities, timescales, team members, tech leads, new philosophies etc.
  5. Model changes are not ‘done’ until the engineers have seen it and provided input / discussed. Encourage (justified) opinions.
  6. Start simple:
    1. Make small changes to current practice; develop a QA mindset within development before introducing too much.
    2. Begin at the beginning: ensure good code / solution reviews are happening, ensure TDD is being followed, add integration tests, then e2e UI tests and so on.
  7. Initially the model will change a lot:
    1. Feedback will highlight areas that aren’t providing benefit, or could be designed better etc.
    2. New test areas will be added (integration / e2e etc…)
  8. The model will change less frequently as time goes on, but never let it become stale – always review periodically.
  9. For many engineers this is different to the normal process flow, so out of habit – even with the best intentions – people will forget, cut corners, and race ahead. Accept that it’s just part of the journey.
  10. Depending on your technical ability, get involved in working on engineering tickets. With development work I’d recommend always ‘pairing’ initially to get to know the culture, but if you’re able, be as much a ‘developer’ on the team as possible – it helps to understand first-hand what the model is like to work with.

History of the model
Following is a brief history of the process model that is evolving in our current project (I’m using ‘me’ and ‘I’ to highlight the role of the ‘Quality Assistant’ here).

After a couple of initial meetings with the team to explain what the goal is and get feedback on ideas / past experiences / opinions, I proposed this:

The negative scenarios in the first step were intended to be written with no ‘solution’ in mind (i.e. before any development work had begun, just an understanding of the requirement).
We decided to meet once a week to give feedback on how things were going and refine the process, as well as to learn more about QA in general through discussions about testing practices, exercises and team ‘bug-hunts’ on the product we had so far.

Over time, we made the pull request review a more formal test opportunity and added more ‘customer-focused’ testing during the demo to the BA:

We found that negative test scenarios weren’t necessarily what we ended up testing, so we changed the focus to exploratory tests. However, we found that individuals regularly forgot to plan these after picking up a ticket (before the work), so we moved that task to be a team one during sprint planning and recorded on the tickets:

We then included e2e UI tests as part of the delivery (code, unit test, UI test if relevant). I wrote the setup for this (using wdio + cucumber etc.) to get us started, but let the engineers write the tests required to bring us up to speed with our current product. As far as possible I avoided writing any tests here, instead advising on and reviewing the automation code being written, with the occasional meeting to discuss ideas, concepts and good BDD practice, among other things.
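For illustration only (the feature and its steps are hypothetical, not from our backlog), the wdio + cucumber setup pairs a Gherkin feature like this with JavaScript step definitions, keeping scenarios at the journey level rather than re-testing field validation already covered by unit tests:

```gherkin
# Journey-level scenario: the steps describe user intent, and the
# wdio step definitions behind them drive the browser.
Feature: Customer registration
  Scenario: A new customer completes registration
    Given I am on the registration page
    When I submit valid registration details
    Then I am shown the registration confirmation page
```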

During discussions we realised we had not been following the initial step of creating exploratory test scenarios for a few sprints (breaking habits is hard!). However, by now we were at the point where:

  • Our unit tests had a very high coverage thanks to our ‘shift tests left’ approach;
  • The unit tests were well-written, well-reviewed and we had a high level of confidence in them; and
  • Our reviews were rigorous and we also believed in what we were merging into Master.

At this stage much of our effort and focus was aimed at producing good unit tests and following good practices (TDD, testable code etc.). Further, we were reliably delivering e2e UI tests with relevant tickets, running all tests as part of our CI process (and with ‘git hooks’ during commit and push) and the manual and ‘mob’ exploratory testing was still being done.
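As a rough sketch of the ‘git hooks’ part (the exact command is hypothetical – it depends on how the project’s scripts are set up), a pre-push hook might simply run the test suite and block the push on failure:

```shell
#!/bin/sh
# .git/hooks/pre-push (must be executable): run the tests before
# allowing the push; a non-zero exit code aborts the push.
npm test || exit 1
```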

This gave us confidence that it would be more advantageous to keep focusing on unit tests and reviews (including pulling the code and running it locally where possible), rather than spending time on the initial exploratory test planning. We therefore changed the plan a little to reflect this:

Continuing Development
It is worth noting that the final process described above couldn’t have been the first approach – it is the product of a journey, not just of developing the process, but also of developing us as a team to this point. It’s not the only solution, simply one that was working well for now – if our involvement in the project had continued, this process would have continued to grow and adapt.

My experience to this point shows that the process must remain subject to change if it is to stay relevant to shifting priorities, skill sets and team members. It should therefore never be considered ‘finished’.