Hi there! We’re building a collaboration tool for product teams, and we’re thinking of creating a Growth team whose goal would be to run experiments in the product (new features, integrations & other projects) so we can maximize learnings before building those experiences “properly” in our main codebase.
We want to ship fast, so we’ll take shortcuts. One requirement from our tech lead is that the “growth” code doesn’t interfere with the main app codebase.
Do you have any recommendations for managing a separate codebase for growth experiments? Ideally we wouldn’t want to create a new environment for this; we’d work on the same data/environment so users can test features in real-life conditions for better learnings, but we’re flexible on other options!
Many thanks!
Acknowledging that I’m side-stepping your question: this seems to me like an artificial and unnecessary constraint. It’s precisely your tech lead’s responsibility to figure out how exactly to make this work. A less side-steppy approach would be for you and your team to look at low-code/no-code solutions to try out scrappy tests in real-life conditions. There are good reasons, aside from the constraint you’ve described here, to explore those tools if you’re in scrappy business-model-validation mode. But assuming that you’re not even in public beta yet, the constraint your tech lead is imposing feels like a really unnatural, legacy requirement for a company at your stage. (Caveat: I am not an engineer.)
Hey there - I would advise against this approach. You’re not building a Growth team with this description; you’re building a team that does product discovery… which should be what all product teams do (validate before over-investing).
I think it’ll marginalize the impact that a properly established growth team can have and set you up for long-term organizational problems.
I reacted the same way as others, with a very immediate “don’t do that.” Apart from the other points above, any successes and key learnings you do have would have to be (re)built in the “core” code. This sounds a lot like “if you want to experiment, go do it somewhere else and come back when you have something.”
+1 to what @DonovanOkang said - don’t even try to fight this battle over where this lives in the code. Treat this as discovery and validation work and everyone will be on a better path to success (and happiness).
Yeah, I’m with the others. I’ve worked with engineers who don’t like how growth experiments show up in codebases, but I haven’t seen a great way to sequester growth experiments while still enabling the team to operate effectively.
Rather than trying to isolate growth code, I’d instead focus on establishing a process for code cleanup after each experiment. Dead experiment code can linger in the codebase for a long time, because it’s easy to conclude an experiment, ramp it to 100%, and move on.
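To make the "dead experiment code" point concrete, here's a minimal, hypothetical sketch (not any specific feature-flag library; all names are made up) of how experiment code typically lands in a codebase, and what the cleanup step actually removes:

```python
# Hypothetical flag table. Ramped to 100%: the experiment is over, but the
# flag lookup and the losing branch below still live in the codebase until
# someone deliberately deletes them -- that deletion is the "cleanup" step.
EXPERIMENT_FLAGS = {
    "new_onboarding_checklist": 1.0,
}


def is_enabled(flag_name: str, user_bucket: float) -> bool:
    """True if the user's bucket (0.0-1.0) falls within the rollout fraction."""
    return user_bucket < EXPERIMENT_FLAGS.get(flag_name, 0.0)


def onboarding_view(user_bucket: float) -> str:
    if is_enabled("new_onboarding_checklist", user_bucket):
        return "checklist"   # the winning variant -- every user gets this now
    return "legacy_tour"     # dead branch once rollout hit 100%
```

The cleanup process is simply: once a flag sits at 100% (or 0%), delete the flag entry, the `is_enabled` check, and the losing branch, so `onboarding_view` returns the winner unconditionally.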
Really depends on the layout and specifics of your codebase, I think. In the past I’ve seen teams create a streaming read-only copy of the production data and build the growth content in a store adjacent to that copy, so the info can be strung together as needed for reporting and analytics.
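A rough sketch of that idea: growth code reads real production data through a read-only view and keeps its own writes in an adjacent store, with reporting joining the two. The classes here are in-memory stand-ins for illustration, not a real replication setup:

```python
class ReadOnlyError(Exception):
    pass


class PrimaryDB:
    """The main app's production store; only core code writes here."""
    def __init__(self):
        self.rows = {}


class ReplicaView:
    """Read-only window onto the primary (stands in for a streaming replica)."""
    def __init__(self, primary: PrimaryDB):
        self._primary = primary

    def read(self, key):
        return self._primary.rows.get(key)

    def write(self, key, value):
        raise ReadOnlyError("growth code must not mutate production data")


class GrowthStore:
    """Adjacent store for experiment data, kept out of the main app's tables."""
    def __init__(self):
        self.rows = {}


primary = PrimaryDB()
primary.rows["user:1"] = {"plan": "pro"}        # written by the core app

replica = ReplicaView(primary)
growth = GrowthStore()
growth.rows["user:1"] = {"saw_experiment": True}  # written by growth code

# Reporting strings the two together as needed:
report = {**replica.read("user:1"), **growth.rows["user:1"]}
```

The key property is that growth code can see real production data (for realistic experiments and analytics) but structurally cannot write to it.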
Guys, thank you so much, this is very helpful!
@MarcoSilva, @DonovanOkang, @KaranTrivedi, @Nathanendicott
It definitely seems like an artificial constraint, and tbh this is what I was also thinking, but not having much technical expertise, I was looking for a solution that would fit both parties. Your perspective helps a lot, thank you!
@Angela, thank you! We’ll be looking into that, could work!
I’m going to put my hat on as a tech lead for a moment here. The underlying question really is: “how much technical debt are you prepared to take on?” By technical debt I mean shortcomings you create consciously, like “will it handle 1,000 people at once?”, or more insidiously, “will everything still work when I make changes, or will we end up with a nightmarish whack-a-mole where every change seems to break unrelated things?”
Technical debt is a nightmare for tech leads: it’s like a mortgage you can’t see. You go fast now, but the ride might get quite bumpy later - and who is responsible for that bumpiness? Usually the tech lead.
The way I’ve navigated this before (and right now in my podcast note-taking app, knowcast.io) is for everyone to be really clear on the possible consequences. I segment work into three camps:
- Technical experiments. Assume this is code written to produce an insight and then thrown away. Never integrated anywhere. Usually trying to answer a specific question.
- Technical prototypes. High speed, low quality, likely discarded too, but there might be bits that can be refactored. This is where most of the code goes before we know if it’s a feature users even want.
- Product engineering. Solid. What we do when scaling. Low technical debt. I don’t see much point in doing any of this until we have product-customer fit (10-30 paying customers).
So as long as you’re all in agreement about the quality/speed parameters and the consequences of the resulting technical debt, you should be OK. It also gives the eng team an opportunity to sign up explicitly to that plan; some may not like the early-stage “hacking,” and they can leave and go to a scaleup.