Minimum Viable Product: Good Idea, Poor Execution


There’s a process for software development called Minimum Viable Product, better known as MVP. When executed correctly, it helps teams mitigate risk by keeping the focus on core functionality and avoiding unnecessary features. You take limited releases and get them in front of your users so they can test the viability of an idea. Sounds good, right? But what happens when this process is overprescribed, and you find yourself on this path for everything you do? If you’re testing truly unique ideas, then MVP makes sense. Truly unique is truly rare, though. What I have seen launched in the name of MVP are immature versions of products that were developed to compete with functionally mature products. And many things can go wrong.

If a user has invested in your product by taking the time to learn about it, downloading it, installing it, and exploring it, only to discover that something they wanted to do is completely absent, you have violated their expectations. You have wasted their time, and people will judge you negatively for it.

Once a judgment is made, the belief may persist long after it is disconfirmed. Years ago, Hyundai made really bad cars, though I can’t remember the details that made them so bad. What I can remember is that my family had a negative experience with a Hyundai more than 20 years ago, and the memory of my dissatisfaction has outlasted more recent information to the contrary. I am still unsure about Hyundais even though I have read nothing but good reviews about them lately. My negative experience has turned into a negative bias. I was done with Hyundai because they screwed us once, and I don’t care if they make better cars now.

MVP makes the assumption that users understand that digital products will evolve over time and get better. But the average person doesn’t think that way. Amazon.com is constantly evolving and releasing updates, yet I have talked to people outside the industry who are surprised that Amazon has more than one designer, or any designers at all because, “Isn’t the site already designed?” The here and now is when most people make a judgment.

My first engagement with a product forms the strongest impression I will ever have of it. Apple knows this. Consider the care they put into packaging. As you unbox that shiny new iPhone, you are taking ownership. Your first experience is with packaging that is beautifully crafted, because Apple wants to amplify the strong emotional response that accompanies taking ownership. They know you will tell other people how amazing it is, and in turn, those people will go buy an iPhone. Conversely, ever buy something at the store, take it home, and unwrap it only to find out it’s missing some of the pieces? How does that make you feel?

The half-baked version of your software called MVP will lack beauty, craftsmanship and delight. Those attributes make for great experiences that people care about and come back to. They count because they communicate quality and much more. Why would we ever subject a user to incomplete work? If we are doing our jobs and focusing on how to solve real problems, we should forget about the idea of an MVP. There are a myriad of ways to test a product before going live that will allow you to build and validate ideas. Sure, work on the core features first, but keep going. If you push the product out too soon, know there can be material damage from an incomplete experience. And we should be striving for nothing less than a great experience.

Land of Confusion


Have you ever been told your design is confusing? Personally, I hear that from time to time. Someone reviews your work, and since they didn’t understand it for whatever reason, they think no other human could possibly understand it. It’s imprecise feedback, but they are entitled to their own opinion, and opinions aren’t wrong. Or are they? To begin, it’s more judgment than opinion. I often wonder how someone who has never designed an experience goes about judging design. Regardless of whom it’s coming from, design is easily misjudged. Sadly, that leads to misdirection, poorly designed experiences and, at best, wasted time. You have to understand the situation before you can address the problem. The first step is to understand what the word “confusion” means.

As a word, “confusion” seems to stand in for a lot of other issues. For me, though, I see confusion on a scale of difficulty in cognition ranging from minor hesitation to a fully broken experience. Walking into a closed door because you thought it was open would be confusion. Jiggling the handle because you don’t know if the door is locked is a lack of clarity. Fixing an unclear experience can be as easy as changing a word in a label. Fixing a confusing experience could have you scrapping your design altogether. Still, you’ll hear the word as a catch-all for any friction in the user experience. Your work could very well be confusing, but it could also be a matter of perspective.

How is the critic viewing your work? More than once, I have had someone look over my shoulder at a design and tell me what’s not working. Being told a button is too small by someone standing six feet away from my screen is undoubtedly an incorrect judgment. I once had a VP who refused to view any mobile design unless it was on a mobile device. Seems obvious, but I have been in meetings where mobile designs were judged while projected on a wall at 20X their actual size.

Now consider the difference between the process of design and the actual experience. In the design process, we sometimes look at printouts of different screens that are tacked up on the wall. This is not the user experience. It’s a replica, an approximation of what the user will experience. Some things are lost in translation. Compare an architectural drawing to what it’s like walking through a real structure. The user experience is nothing short of what the user is doing while interacting with your actual site or application. We have certain tools to create a vision. We use static mocks, wireframes, and prototypes. Occasionally with these tools, the context and continuity are misread. That confusion is a result of the inherent inadequacies of the tools we use.

I have seen people perplexed by how to navigate a four-way stop. Does that mean a four-way stop is confusing? More likely, the confusion is the result of a distracted driver, one who isn’t in the mindset to navigate it properly. A user can be in the wrong mindset, as can the person judging the experience. But when designing, you have to assume some level of focus from the user. You cannot account for when they are off in dreamland. Designers attempt to remove as much friction from an experience as possible, but nothing is truly effortless. A user has a task. They are attempting to accomplish that task. I have been told a design was questionable because it took a couple of seconds for someone to find the thing they wanted to click on. In this case, it was a tertiary action, something most would do infrequently. To give that action more immediate discoverability, it would have to compete with the primary action that most users needed frequently. What’s funny is, they complained about a task they were still able to complete in a reasonable amount of time.

The hardest issues to deal with are cognitive biases. This is the curve-ball judgment I fear the most. A cognitive bias can easily lead to an error in judgment based on one’s own perception. To illustrate a bias of my own, I can tell you about the video of the twirling models that plays automatically when a customer clicks a garment on Myhabit.com. Having been a Flash animator in the advertising industry, I was drawing on an experience that proved to be bad for the user. Principally, it was having unexpected movement on a page when the user first landed on it. On Myhabit.com, I argued against having video automatically play. I thought it should be user-initiated, allowing them to acclimate to the page layout first. The counterargument was that the video was a key differentiator, and that we should lead with a “delighter” feature. But based on my heuristic, it wasn’t going to do well. Fortunately, we agreed to test it with a large internal audience. We led with the auto-play video and waited for negative feedback. No one voiced a concern. Then we released the product and waited again for negative feedback. Nothing. I was happy we tested it first, and happy I was proven wrong, because I learned a very tricky lesson about how your own experiences can mislead you. My perception of a confusing experience was proven false. Unless all factors are exactly the same, you cannot predict with a high level of certainty what will be successful. Your biases may lead you in the right direction, but they can just as easily distort rational thinking. The process for good judgment should lead us away from our own biases. Getting past them requires listening to other people’s concerns, seriously questioning yourself and, above all, user testing.

The perception of a confusing experience needs to be dissected. There may be a legitimate concern. Take it seriously, but question it. Often, it’s less of an issue than what someone believes. As a designer, you control how a design is viewed and how it is communicated. If you take the time to appropriately frame the experience with the right amount of context, there are a lot of problems you can avoid. For the issues that communication can’t handle, do user testing. Get a broader perspective, and base your decisions on broad and deep information. Be aware that the one person you prove wrong may be yourself.

Requiem for a Metaphor

We use metaphors when what we have to explain may not be obvious. Given the complex nature of software, metaphors are used regularly. Every time you jump on your computer, you are looking at metaphors in the form of icons. If you click on a folder icon, it’s not taking you to a folder; it’s taking you to a set of information. The folder icon is a metaphor for the organization of your documents. It is both a graphic element and a tool of information architecture used for better understanding. But there is a problem sneaking up on us: most of our metaphors are based in analog.

Many things no longer need to look like what they do. A television used to need a giant tube, which dictated its size and shape. Over time it got flatter. And then a lot flatter. Now a television has become a slim black box with a screen. Telephones have had a similar evolution. Once they were large devices with a distinct look dictated by their analog nature. Now they’ve changed.

The convergence of devices on your average smart phone breaks metaphors like never before. It’s a phone, but looks nothing like the phones we grew up with. It’s a camera, but looks nothing like the cameras we grew up with. It runs applications, but looks nothing like the computers we grew up with. Given the newer technologies in development, your phone will soon become your wallet too. And for all it does, what does it look like? A slim black box with a screen. With this evolution we have lost the inherent visual cues that analog gave us. This affects how we communicate in the digital space.

We are comfortable using icons as a base metaphor for information on Web sites and operating systems. Look at the Windows OS. Your information and documents are subdivided into folders. But if you look at how documents are increasingly digital, at some point you realize the metaphor of a folder may no longer be a logical reference.

If there are no paper documents, then there is no need for folders to place them in. At some point in the future, this will happen. With smart phones and digital readers, it may not be as far off as you think. The folder metaphor as we now know it will become an antiquated reference to how we used to do things.

The number of cell phone subscriptions will hit 5 billion this year. I would also argue the average icon used to represent a phone is outdated. If you do a Google search on phone icons you will see an overwhelmingly large number of images that have little to do with your average phone experience.

Holding on to these analog references makes little sense. Sometimes it’s the language that doesn’t add up. Tapes and VCRs have a rewind button, but so do your DVD player and DVR. It was called rewind because you were physically winding tape back to the first reel. The term fast forward is still relevant, but we should ditch rewind and call it fast back. Other analog references are less innocuous.

The accepted QWERTY keyboard layout is rooted in analog. Other keyboard layouts have arguably been shown to be more efficient. But when the computer keyboard was invented, its designers wanted something familiar and easy for people who used mechanical typewriters. Now we’re stuck with it.

I am certainly not being nostalgic. I don’t want to be stuck with the old. I don’t want to cater to analog. I say good riddance, but it forces new questions to be asked. Abstraction is the removal of details. Digital has hastened the abstraction of these objects we create for ourselves. Fundamentally, it becomes a communication challenge. I see it as an opportunity for new language and new metaphors, but digital metaphors.

Ideas Are Meaningless

A good idea can come from anywhere. People say it all the time because it is true. But, an idea by itself is utterly worthless without someone who knows how to take that idea from dream to reality.

Ever watch the television show House? Each week, Dr. House solves the latest medical mystery through an obscure idea that seems to come at him from anywhere, or anyone. The “a-ha” moment that reveals itself does nothing more than point the way. It takes the doctor’s creative genius to recognize the idea when he sees it. And it takes his experience to make the idea a reality and actually solve the problem. Understanding the value of an idea is the first part of good creative direction. The second part is having the know-how to make it a reality. We are surrounded by ideas all the time. Recognizing value in a single thought floating in a sea of ideas takes not only creative intelligence, but experience.

Benjamin Franklin said, “At twenty years of age, the will reigns; at thirty, the wit; and at forty, the judgment.” As a young designer, I came upon good ideas by spawning hundreds of bad ones. I would push myself like I was at the gym. Just 4 more ideas, just 3 more ideas, 2 more, okay last ooonnne. Done. Now in my thirties, I feel like good ideas happen more quickly. By comparison, they are certainly more clever. The difference is, I now see ideas through the scope of my experiences.

We see a lot of good ideas with poor execution on the internet. Actually, some studies have shown that people watching YouTube are turned off by high production value. I don’t know. Maybe free content needs to look like it’s free. I always ask, “Could it be better?” How would you improve it? How could you make it more meaningful? In looking back at my blog, it’s no big surprise to me why some entries do better than others. It’s not only the quality of the idea, but the quality of the execution that counts.

Crowdsourcing uses submissions from an online community to solve a particular problem. This is probably the most literal interpretation of how a good idea can come from anywhere. The Pepsi Refresh project is one of the most popular expressions of crowdsourcing. This project outsources not only the generation of the idea, but the judgment of it too, through online voting. I feel like crowdsourcing is an interesting way to generate ideas. I am always willing to listen to anyone, but it’s the judgment aspect that I have trouble with. In the end, I feel like average judgment equals average success.

A lot of CEOs make unpopular decisions, only to become wildly successful. Sure, they could have done what everyone expected them to do. And that would have sufficed. But recognizing the value of a good idea, even when everyone else doesn’t, is a hard road to follow. You’re stamped with a lot of nasty labels until you prove your detractors wrong. The flip side is that, sometimes, unpopular decisions are just bad decisions. But that’s what makes it interesting.

Everyone has the right to their own opinions and ideas. In a professional setting, they have the right to express those ideas, or at least should be able to express them without fear of reprisal. But who makes the final call? There always seems to be some confusion surrounding opinions.

In the mind of a creative professional, there is a real separation between professional creative evaluation and personal opinion. Having an opinion does not qualify your average person for anything more than having an opinion. A creative director’s opinion reflects years of training. Sometimes it’s based on intuition, other times it’s easier to explain. They push to understand both failures and successes. This becomes the basis of a trained professional’s decision. A personal opinion is about what you like; a professional opinion is about what an audience will like. Sometimes it’s the same, other times it isn’t.

Creative evaluation does not come from having an idea or two. It’s about having thousands of ideas, and executing hundreds of them over years. A creative professional at an advanced level knows how to prioritize, evaluate and judge what is better. The process of raising an idea can sometimes be anything but lucid. It’s tricky to know when you’ve arrived at the right solution, but experience teaches you judgment. And that judgment is what gives meaning to an idea.