There are differences between a minimum viable product (MVP) and a minimum marketable feature (MMF), but to understand those differences you need to understand the concepts themselves, not just know their names.
In this excerpt from Beyond Requirements, I describe both concepts, how they differ, and where they can be useful for product people.
Minimum Viable Product
Eric Ries introduced the concept of minimum viable product (MVP) in his writings on Lean Startup. This is the most straightforward description he provides (The Lean Startup, Chapter 6):
“A minimum viable product (MVP) helps entrepreneurs start the process of learning as quickly as possible. It is not necessarily the smallest product imaginable, though; it is simply the fastest way to get through the Build-Measure-Learn feedback loop with the minimum amount of effort.
Contrary to traditional product development, which usually involves a long, thoughtful incubation period and strives for product perfection, the goal of the MVP is to begin the process of learning, not end it. Unlike a prototype or concept test, an MVP is designed not just to answer product design or technical questions. Its goal is to test fundamental business hypotheses.”
The main purpose of MVPs as defined by Ries is to learn what customers find valuable, not necessarily to deliver value to customers. Additionally, you want to answer business as well as product design and technical questions. In the early stages you most likely focus on the business questions.
The MVP is a technique your team can use to carry your discovery activities forward through your delivery process. It’s a key component of the Build-Measure-Learn loop because it’s the thing your team builds in order to gather feedback and learn.
That’s all stated from the startup context. What value can the MVP concept bring to more established organizations?
Primarily, the MVP can be used to test solutions. You may find you get a lot of information from your customers about what they believe their needs are, and you may even pull some data from a variety of sources that tells you how your customers behave. At the end of the day the most effective way to see if your solution is going to satisfy your customers’ needs is to build it, try it, and see what happens.
The purpose of an iteration is to either learn or earn. Initially your team wants to learn about the problem and solution by doing things to validate assumptions, address risks, or make progress toward your desired outcome. Your team produces some output that, while not a complete solution, provides information for testing your hypothesis.
The benefit of this approach is that you can get feedback sooner because you don’t have to build a solution that’s conceptually complete, just enough that you can get feedback on it. It’s important when the intent of an iteration is learning that you have a specific question in mind that you are trying to answer.
What does an MVP look like when it’s used for feedback in an iteration setting? I recently worked with a team building new analytic capabilities for an organization that traded securities. The team wanted to incorporate a new source of data into an existing data warehouse, and one of the first things they needed to do was verify that they could successfully merge data from the new source into an existing source.
They selected a view of the data that represented a simple listing of securities but contained attributes from multiple sources. Instead of worrying about creating a pristine report or moving the data through all the various architectural layers, they chose to start by joining data from the new source to the existing data and querying it in Excel.
They were able to show the data they expected to see in the final report without spending a lot of time designing the report or building all of the background infrastructure perceived as necessary to automatically integrate the data in the long run.
Doing it in this manner let them immediately identify where they had some logic corrections to make, and they were able to find that out in a couple of weeks. Had they taken the typical approach to building up a data warehouse, they might not have uncovered the issues they found until months down the road, because they wouldn’t have been able to isolate where the problem occurred. In addition, they were able to quickly get meaningful feedback from their stakeholders about which attributes were really needed and specific rules on how to match data from the disparate systems.
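To make the shape of that first integration step concrete, here is a minimal sketch in plain Python. The tickers, attributes, and matching rule are all hypothetical; the point is simply that a quick join of the two sources immediately surfaces records the matching logic fails to handle, which is exactly the feedback the team was after.

```python
# Existing securities listing (hypothetical attributes)
existing = {
    "AAPL": {"name": "Apple"},
    "MSFT": {"name": "Microsoft"},
    "GOOG": {"name": "Alphabet"},
}

# New data source with additional attributes for some securities
new_source = {
    "AAPL": {"sector": "Tech"},
    "GOOG": {"sector": "Tech"},
    "TSLA": {"sector": "Auto"},
}

# Merge new attributes onto the existing listing; track mismatches
merged, unmatched = {}, []
for ticker, attrs in existing.items():
    row = dict(attrs)
    if ticker in new_source:
        row.update(new_source[ticker])
    else:
        unmatched.append(ticker)  # matching logic needs attention here
    merged[ticker] = row

print(unmatched)  # securities the new source could not enrich
```

Even this crude version answers the hypothesis under test ("can we match records across the two sources?") without any reporting infrastructure; the unmatched list is the learning the iteration was meant to produce.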
Your team may figure out that the only way to truly know whether an MVP will work is to let stakeholders use it in actual day-to-day business. In this case, the MVP may represent a more complete change than the version that was demoed at the end of an iteration.
In the preceding analytics example, this type of MVP happened when the team released one full-fledged report to its stakeholders.
Here, they wanted to find out if the reporting interface was helpful to the stakeholders and if the organization of the reports made sense.
This one report was still useful, but their customers realized real value when multiple reports were available. By delivering this one report first, the team learned a great deal about the stakeholders’ needs, and the stakeholders gained a better understanding of their needs and the reporting capabilities.
Either way, understanding the MVP idea keeps your team from feeling compelled to define too much information up front and encourages you to see delivery as a way to validate assumptions and test hypotheses.
The other important aspect of the MVP concept is the implied focus on speed. You seek the minimum viable product (or change to an asset) because it means you can get it done sooner, get it in front of stakeholders sooner, and get feedback sooner. All of that means that you aren’t wasting time heading down a road that leads nowhere; and if you are, you don’t waste nearly as much time on that dead end.
Minimum Marketable Features
Mark Denne and Dr. Jane Cleland-Huang introduced the minimum marketable feature (MMF) in their 2004 book Software by Numbers: Low-Risk, High-Return Development. The MMF provides a way to organize work based on how much value it delivers. Described in a little more detail:
- Minimum—the smallest possible group of features
- Marketable—provides significant value to the customer
- Feature—something that is observable to the user
Their ideal MMF is a small, self-contained feature that can be developed quickly and that delivers significant value to the user. Denne and Cleland-Huang define MMF in the context of a software product where the concept of marketability makes sense.
For internally focused efforts, the definition of MMF needs a slight adjustment. In this case, “marketable” means “provides significant value to the stakeholder,” where value is measured as progress toward a specific outcome. You still want to group the work you’re doing in terms of MMFs for your stakeholders to sequence, though you may not necessarily use the rigorous analysis suggested by Denne and Cleland-Huang to do so.
Another good tidbit of information from Denne and Cleland-Huang is that MMFs make the best unit of planning for releases. This idea is helpful when you need multiple iterations to build parts of a feature, but it doesn’t make sense to deliver those parts until you are able to combine them into a viable conceptual whole. It follows that features are appropriate planning units for releases, and user stories are appropriate planning units for iterations.
MVP vs MMF: Appropriate in Different Contexts
When deciding when to use MMF or MVP, consider your context and choose appropriately.
The MMF is the smallest set of functionality that provides significant value to a stakeholder, whereas MVP is the version of your product that lets your team complete the Build-Measure-Learn loop as quickly as possible with the least amount of effort.
In other words, MMF is about delivering value to customers, whereas MVP is about learning more about the ultimate product.
The MVP could range anywhere from not having any MMFs, to having a single MMF, to having several MMFs. They are not the same concepts, but both reinforce the idea that we should be seeking, in Alistair Cockburn’s words, “barely sufficient” functionality.
Both concepts are also often misinterpreted to mean “crap.”
Whatever functionality is delivered should work, should be supportable by the organization, and should be something that you would be proud to put your name on.
Both ideas are limited functionality that focuses on the core of what you are trying to accomplish, be that satisfying a particular need (MMF) or learning more about how you may be able to satisfy a particular need (MVP).
If You Remember Nothing Else
Minimum viable products are intended to get information.
Minimum marketable features are intended to capture value.
Does your organization use the terms MVP or MMF? Do you use those terms correctly? Share your experiences in the comments.
What does a successful MVP look like?
MVP is an acronym that can be used as a weapon. Everyone believes their vision is really the Minimum Viable Product. You can simply declare “it’s not really minimum” if you want to do less than your colleague’s vision, or declare “it’s not really viable” if you want to do more. But MVP is a key concept in product management; it’s the first step to creating software that actually has value. Yet for all the chat about MVPs, many people have never felt the tug of a successful MVP. Fifteen years ago Andy Bell founded a product development studio, Mint Digital, and they’ve launched at least 100 MVPs since then. Most were crashing failures. In this post he talks about three successful MVPs that stick in his mind.
A framework for Minimum Viable Products?
KC Karnes with CleverTap provides a description of Minimum Viable Product and suggests using the Eisenhower Matrix to help you decide what to include in your MVP. The post suggests considering the importance and urgency of each feature and using the resulting classification to determine whether to include the feature in your MVP.
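As a rough sketch of how that classification might work, here is a minimal example. The feature names, the importance/urgency scores, and the quadrant labels are all hypothetical; the only idea taken from the post is that each feature is placed in an Eisenhower Matrix quadrant and only the “important and urgent” quadrant makes it into the MVP.

```python
def eisenhower_quadrant(important: bool, urgent: bool) -> str:
    """Classify a candidate feature into an Eisenhower Matrix quadrant."""
    if important and urgent:
        return "do"        # include in the MVP
    if important:
        return "schedule"  # plan for a later release
    if urgent:
        return "delegate"  # hand off, simplify, or defer
    return "drop"          # leave out entirely

# Hypothetical features scored as (important, urgent) by the product team
features = {
    "user login": (True, True),
    "export to PDF": (True, False),
    "custom themes": (False, False),
}

mvp = [name for name, (imp, urg) in features.items()
       if eisenhower_quadrant(imp, urg) == "do"]
print(mvp)  # → ['user login']
```

Whether those binary scores are the right lens for features rather than tasks is a separate question, discussed below.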
I’m a little skeptical of that approach. The classification of importance and urgency seems better suited to analyzing tasks than features, and I’m not sure the idea translates directly. In keeping with the idea of using an MVP to learn something, the only prioritization I’ve typically done is to include the things that help us explore the hypothesis the MVP is meant to test.
I’m sharing this approach because I think it’s good to explore new ideas, even those that I’m a little skeptical of. If you’ve had the opportunity to use the Eisenhower Matrix to determine what to include in your MVP, I’d love to hear how it turned out for you.