How do you do QA?

Teams that aren’t big enough to have a dedicated QA person/team - how do you do it? Logging (like Sentry) and reactive fixes? Unit/regression tests only? Cypress/Playwright/similar? Devs/PM in charge of testing? What has been working for you and why?


we are a team of 4 engineers and me as lead PM - i do all QA… we have week-long iterations, tickets are due on Sunday nights, and QA is done over the next two days, where we send tickets back and forth until functionality and design are acceptable enough to be considered “complete”


We have a continuous delivery approach. When something is ready, PMs review the change in a staging environment and, once given the go, the developer merges and ships the change. Usually this is quick (within an hour or 2 of the dev being done) to avoid context switching for the dev.
Unexpected bugs are handled more reactively. We try to ascertain whether the bug could have been avoided by a better product review (in which case we bring it up in our weekly retro), or whether it wasn’t really preventable without adding too much overhead.
We have a bit of all the logging and testing you mentioned (unit tests, Sentry, etc.) to help identify bugs, as well as end-to-end testing of critical flows like payment.


Can I say out loud what others said implicitly? Even larger teams don’t need dedicated QA and lots of high-performing eng leaders prefer for engineers to handle their own testing.
In my experience, this yields better code and engineers who better understand the business drivers for what they’re building.


Respectfully really disagree with @RobMartin on this once you reach a certain scale. It is very contextual to the company/business model, and you need different testing approaches at different stages of the product life cycle. Good QA leadership is like a good dentist: you don’t know you need to go through all the pain until you find one :grimacing:


Well, I can’t really disagree with @Jesus. Every strong opinion I express should be loosely held! QA strategy should definitely depend on the context.


We agree on the early stage :sweat_smile:


I think I’d restate as: “What’s the problem the company is trying to solve with organizational structure?” Because org structure is truly your strongest tool for prioritization. @Jesus: Some of the things I’ve seen great teams do:

  1. Automated unit tests and integration tests
  2. Have engineers smoke test their own code
  3. Instrument the heck out of everything
  4. Engineers, not PMs, triage incoming bugs

I’d echo what @Rob said, but with close collaboration between the engineer owning the change and the PM to triage incoming bugs. However, even with a small team, if your product is on a large number of surfaces, in multiple languages and locales, etc., you’re going to want dedicated QA. I’ve been working closely with a small engineering team (3-5, depending on how you count manager and interns) on a product that is a web app and only in English. I set up a process like this:

  1. If no CX change (like query tuning, db changes, etc.) then engineers own all testing and QA
  2. If small change, PM and UX decide who will own testing and usually do it independently
  3. If a large change or new feature, PM and UX do initial QA, create test cases, and recruit stakeholders (typically marketing, CS, or ops) for additional testing. This has been a huge win for stakeholder management and engagement as well.

Testing, in the end, is often something no one really wants to do. Good testing requires more patience than the typical dev has, in my opinion. I’m all for automating as many tests as possible (browser-based testing included) as part of CI/CD pipelines, but there’s always something those tests don’t cover. I like the idea of making it the job of the engineers, because that is one way to get them more involved. On the other hand, testing is a huge time sink, and devs are already pressed to get a lot of stuff done in a short amount of time; good testing takes away from that. I fear that over time this leads to worse and worse outcomes if not addressed. @Nathan’s 3rd point makes sense to me. Testing is a way to get other stakeholders involved, too.


We’re also a smallish team without a QA person - not saying we’ve figured it out, but we’ve added a lot of the basics: bug/log monitoring, integration and E2E tests, etc. The one piece of advice I don’t see shared often, but which has really helped our velocity/stability, is keeping a manual QA checklist that the entire team can contribute to.

  • we prioritize items as high/med/low in terms of what to test. For a minor release we’ll only test high/medium impact things
  • when you’re trying to balance speed/quality, adding things to the manual QA checklist is way faster, but it’s a short-term solution
  • we’ll have engineers pick up items off the “high” bucket when they have capacity to keep building our test suite
  • this is super helpful for experimental features you want to ship fast and sometimes scrap
  • it’s also valuable to have the entire team contribute, since engineers have deeper insight into edge cases for, let’s say, the PM/EM to manually test

we also work with a manual list of test cases that cover the most critical user flows and try to cover the rest with automated tests as much as possible. we take turns in who is responsible for testing those manual cases and have made it a bit of a team sport. so the entire team is involved on a regular cadence and learns the most critical points where the app breaks. it’s not perfect but has worked okay for us in terms of cost/benefit


We automate key flows using Playwright (~20 of them) and then have unit tests to cover more specifics. We then use Sentry to react to live issues we missed. We use our own product throughout the day, which sometimes helps find issues too.


Thank you all for your insights and experiences.