Since I work on several products, managing my time is a struggle. Do you test features yourself before they’re released? How much time do you, as product managers, spend testing your features? If so, where do you find the time?
As product managers, we are constantly striving to improve our products and features. Testing is an integral part of that process, but it can be time-consuming and tedious. Fortunately, AI-powered testing assistants are now available to reduce the time spent on it: they can quickly run tests, analyze results, and surface insights that would otherwise take hours or days to produce manually.
Yes! And it consumes a lot of time and effort.
I worked as a product manager for four years at a 1,000-person firm with no QA team. When changes were made, we tested them on staging before approving them for final release, which made PMs the de facto QA for the organization.
I’ve actually seen folks devote a great deal of time and effort to QA. I did some research and found an awesome tool that lets PMs conduct QA considerably faster. I hope Jam will help you save a ton of time and effort on QA!
@Pankaj-Jain, this is very good, actually! I like the functionality and the ease of use. I’ll download this and forward it to our quality assurance team.
There is always some unanticipated requirement that was overlooked, so testing is important. Since it’s crucial to avoid errors in the vital flows, I make an effort to test all of them. For the non-critical flows, I do some spot checking and hand off the remainder to QA testers.
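For the critical flows, even a thin automated smoke test can back up that manual pass. Here’s a minimal sketch using Playwright in TypeScript; the staging URL, form labels, and credentials are all hypothetical placeholders, not a real setup:

```typescript
// smoke.spec.ts — a minimal critical-flow smoke test (hypothetical app).
import { test, expect } from '@playwright/test';

test('critical flow: log in and reach the dashboard', async ({ page }) => {
  // Placeholder staging URL and form labels — adapt to your own app.
  await page.goto('https://staging.example.com/login');
  await page.getByLabel('Email').fill('pm-smoke-test@example.com');
  await page.getByLabel('Password').fill('placeholder-password');
  await page.getByRole('button', { name: 'Log in' }).click();

  // The pass/fail signal: the core flow should land on the dashboard.
  await expect(page).toHaveURL(/\/dashboard/);
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```

Even a handful of tests like this, run against staging on every release, frees the manual effort for the haphazard exploratory testing of everything else.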
What if there’s something obviously wrong?
Every ticket that I or my PMs write states the major point: the core item should function. Occasionally, though, interactions cause something else to break.
If not, I speak with the engineer and possibly their manager: “Dear Eng: I’m looking at this ticket and it doesn’t work on this test account. What happened here?” After that, decisions get made. Perhaps the data or configuration is unique, in which case the developer needs more instruction on how the product works. If the ticket wasn’t clear, a conversation between the PM and the eng is in order (how did you estimate something you didn’t understand?). Or perhaps the eng simply did shoddy work, in which case you should speak with the EM after describing what happened. I also set expectations with the engineer involved: I expect the major component to work; if something doesn’t, don’t mark the ticket as complete, and if you have questions, ask the PM before marking something done.
QA is harder to evaluate here: the change certainly shouldn’t have passed, yet the developer is ultimately responsible for making x work.
I’d keep an eye on this over time, and if it persists, it’s time to be direct with the EM. I’d most likely sync with your manager at least monthly. And if those two are unable to resolve the issue, remember that the PM role is a dependent one: if you’re stuck with lousy developers, it’s time to find a new job.
Submit a ticket for it? Most likely the criteria were either absent or not clear enough. That, or some communication was missed when multiple engineers worked on the same product. It happens more often than not.
New features don’t ship without my review; I can’t imagine not prioritizing this, and I don’t want to miss out on new features that make my life easier. My developers should be able to iterate, build, and ship features quickly without sacrificing quality. Experimenting with a new technology or idea means a lot of disruption for our users as well as our own team, so it’s important to learn from those experiments fast and pivot quickly if needed, getting back on track with the project’s original goals. The hardest part about this is how much it affects the way I approach product development, marketing, and user acquisition in general.
Nothing ships without your personal intervention, and we prioritize experimentation and continuous learning. We decided to put the new features we’ve been working on for the last six months, such as chatbots, in their own category on our website, completely separate from our “regular” product offering. We’ve also chosen to work with only a few partners for now, so we can focus more resources on researching and understanding how people use chatbots. Our team always tries to experiment with new technology and business models when possible, with the goal of building something no one has built before. In that spirit, we recently ran a test with our friends at the South by Southwest Festival to see what would happen if we offered exclusive early registration to some SXSW attendees. The results were very positive: the average order size was $170, which suggests these customers are happy with their experience!
@JesusRojas, do you test the release just before it goes live, or do you ask that the feature be staged once it’s finished?
Do you do that for minor modifications to the app too, or only for the major ones? For example: when I click here, it should go into edit mode, and when I click away, it should exit it. Do you perform UAT on each change at that level of detail? And do you do it once a release has been prepared with all those changes, or during the QA phase for each individual change?
Test? No.
See? Yes. Even if it’s simply a demo, I make sure to view as much as I can to fully grasp what’s being created and confirm that it matches my expectations. The earlier in the process, the easier it is to change direction if necessary.
The QA role exists because you will never have enough time to test a product thoroughly in the traditional sense, but that doesn’t mean you shouldn’t ever examine and use your own creation.
There seem to be some fundamental issues with the dev and QA processes at my org, because features are passing QA when they shouldn’t be.
@FlaviaBergstein, productively show the instances where gaps were found and then push the team to slow down and do more thorough/better work.
Minimize rework. Don’t optimize shit velocity. Of course, easier said than done in a lot of orgs.
I thoroughly test and use the features and products I develop. I use the product as a user during off-hours and block out time for it throughout the work week. We have weekly check-ins, and I collaborate with the QA team on our QA plan. I write the first draft of the documentation and frequently run sales and customer-service demos for upsell training.
I’m sorry, but this has kind of blown my mind. You guys don’t use or test the items you make?