“If you are not embarrassed by the first version of your product, you’ve launched too late.” - Reid Hoffman, LinkedIn/Greylock
This quote by Reid Hoffman, co-founder of LinkedIn, suggests that it’s better to launch a product early and iterate based on feedback, rather than waiting for it to be perfect. Launching early allows you to get feedback from your target audience, identify areas for improvement, and iterate quickly to make the product better.
This quote strikes me as a revelation since I have always waited for everything to be perfect or at least to reach a point where it is not humiliating.
This approach is often referred to as the “minimum viable product” (MVP) strategy, which involves launching a basic version of a product to test whether there is product-market fit, and then improving it based on customer feedback. This can help you validate your ideas and avoid investing a lot of time and resources into a product that may not be successful.
So, don’t be afraid to launch a product that’s not perfect, as long as it provides value to your customers and solves a problem they have. Embrace the iterative process and use feedback to make your product better over time.
This only applies to particular kinds of software startups, yet, like a ton of other unqualified business advice, people have been taking it as gospel since around 2006.
Absolutely agree with @Luisneilson. This advice is often framed as a general principle for startups, but it is especially relevant for software startups and startups in fast-moving industries where technology and customer needs are constantly evolving. The MVP approach lets those startups rapidly experiment and validate their ideas, which matters most when the ground is shifting that quickly.
However, it’s important to keep in mind that this advice may not be applicable to all types of startups or businesses. For example, in industries with longer product development cycles or where products have higher regulatory barriers, launching an MVP may not be feasible. Additionally, in businesses where brand reputation is a critical factor, launching an early and potentially buggy product could damage the brand.
It’s important to consider the unique context of your business and the industry you’re in, and tailor your approach accordingly. But in general, the MVP approach can be a valuable tool for startups to validate their ideas, get feedback, and iterate quickly.
Right. We would not be discussing Tesla right now if it had shipped even the Roadster, much less the Model S, with embarrassing bugs or incomplete, untested components. They would have deserved to fail, and they would have.
This advice COULD make sense for a freemium B2C SaaS product.
For practically every other market, it’s bad advice. Software development best practices, in particular, should be considered carefully before being carried over to physical products.
I completely agree with @BinaCampos. The MVP approach may not be appropriate for all types of businesses, and it’s important to consider the specific context and industry of your business before applying this principle.
For physical products, especially in industries where the quality of the product is a key factor in customer satisfaction and brand reputation, it’s important to ensure that the product is of high quality and meets safety and regulatory standards before launching.
In addition, for businesses that have a direct relationship with their customers, such as B2C SaaS products, the MVP approach can make sense as the risks and costs associated with a poorly received product are lower. But for businesses that operate in highly regulated industries, such as medical devices, or those with complex supply chains, the MVP approach may not be feasible.
Thus, it’s important to consider the unique context of your business and the industry you’re in, and tailor your approach accordingly. The MVP approach can be a valuable tool for startups, but it’s important to weigh the risks and benefits before deciding to use it.
I’m not sure I agree completely with that. I have one of the first Model 3s and there were a lot of embarrassing issues.
You couldn’t open the trunk while charging, the windshield wipers would always wipe a bucket of water right when you opened the door, the maps worked maybe half the time, and there were panel gaps and some trim that didn’t really fit properly. Autopilot hardly worked outside of the highway, the paint had swirls, and for a while you couldn’t use your phone as a key… you always had to pull out a card to open it and drive it.
So it definitely wasn’t perfect. It still isn’t. But it was and still is the best car out there, in my opinion. And it keeps getting better.
I think that’s the point of the sentiment. Release when it’s good, not when it’s perfect. Then iterate. I think that works in almost every market outside of the medical field. Heck, it’s even working with SpaceX.
Yes, this applies to environments of extreme uncertainty (a startup testing its value proposition) combined with extreme agility (a software product). I don’t know that anyone claimed otherwise… what people HEAR, however, is another story.
You should be careful not to read too much into this. Though perfection is the enemy, an unusable product and improper positioning are also failures.
Many have said to use an MVP to find P/M fit so that you can learn from faster iterations. I absolutely agree, but building an MVP is difficult because you have to make some hard decisions on what makes the cut vs. what doesn’t. Some basic principles I use for an MVP are:
Day 1 vs. day 2 features. For the MVP, focus on a subset of the day 1 features.
Single use case vs. multiple use cases. For the MVP, start with the single critical use case that aligns with your messaging.
For the MVP, invest in onboarding the attractive segment that is receptive to your messaging.
Ignore scale-related initiatives until you have reached stickiness with your first segment. For example, usability at scale, product performance at scale, and so on aren’t that important for an MVP.
Design a vertical slice that covers an end-to-end use case, such that the use case is usable, functional, delightful, and reliable.
Have tight feedback loops with your power users. Build in ways to submit feedback, and establish a triage and prioritization process for bugs, feature requests, etc.
Instrument to capture your success metrics and counter-metrics. Use tools like Amplitude to learn about your users and their behaviors.
Assuming you have proper instrumentation and tools, analyze your retention (and its flip side, churn): who is sticking around, who is leaving, when they leave, and what attributes (and actions) distinguish the users who stay. You’re continuously analyzing, synthesizing, and sharing your cohort analysis; a minimal sketch of that kind of analysis follows below.
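To make the instrumentation and retention points concrete, here is a minimal sketch of weekly cohort retention in Python with pandas. It assumes only that you can export raw events with a user_id and a timestamp from whatever analytics tool you use (Amplitude and similar tools can also compute this for you); the column names, the weekly granularity, and the tiny inline example data are illustrative assumptions, not anyone’s actual setup.

```python
# Minimal cohort retention sketch. Illustrative assumptions: an "events"
# table with user_id and timestamp columns, and weekly cohorts.
import pandas as pd

def weekly_retention(events: pd.DataFrame) -> pd.DataFrame:
    """Return a cohort x weeks-since-first-seen retention matrix (fractions)."""
    events = events.copy()
    events["week"] = events["timestamp"].dt.to_period("W")

    # A user's cohort is the week of their first observed event.
    cohorts = events.groupby("user_id")["week"].min().rename("cohort_week")
    events = events.join(cohorts, on="user_id")
    events["weeks_since_first"] = (events["week"] - events["cohort_week"]).apply(lambda d: d.n)

    # Count distinct active users per (cohort, weeks-since-first-seen) cell.
    active = (
        events.groupby(["cohort_week", "weeks_since_first"])["user_id"]
        .nunique()
        .unstack(fill_value=0)
    )

    # Divide each row by its week-0 size to get retention fractions.
    return active.div(active[0], axis=0)

if __name__ == "__main__":
    # Tiny made-up example, only to show the shape of the output.
    events = pd.DataFrame({
        "user_id": ["a", "a", "b", "b", "c"],
        "timestamp": pd.to_datetime([
            "2021-01-04", "2021-01-12", "2021-01-05", "2021-01-20", "2021-01-11",
        ]),
    })
    print(weekly_retention(events))
```

Reading the output row by row tells you, for each signup cohort, what fraction of users were still active N weeks later, which is exactly the who/when view of churn described above.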
Totally relate to this. I spent the past year working on a project. It was ‘ready’ in the fall, but the decision was made to rework major parts of it, delaying the launch into the middle of the Covid pandemic… It would have been so much better had we launched what we had in the fall.