
Richard White: Thank you. Last one of the day. How are we doing? A quick show of hands here, how many people here are in product? How many folks are not in product? How many folks don't like raising their hands? Okay, good. So, this is... There, over there. Okay, slightly different talk, actually. I'm gonna talk about how to make data-informed product decisions. Really, what I'm gonna talk about is how to de-risk product decisions, 'cause that's how I think about how you use data to do it. I'm gonna go through a lot of stuff up here. This is like a 45-minute talk condensed down into about 25, so I'm gonna go a little bit fast. If you wanna ask questions, I'll be around afterwards at the whole happy hour thing, and we can talk about it. Let's jump right in. 90% of product management is building the right things in the right order. Do we agree with that, for the most part? Cool. Anyone know who said this, by the way?

Audience 1: You.

RW: Yeah. Do you know who this quote came from?

A1: I said you.

RW: You got it right. It came from me. I made it up about three nights ago. It feels pretty accurate, though. So, I wanna talk about how we do that. Most companies I talk to... We work with a lot of product teams; we help them gather customer feedback, we help them kind of de-risk their roadmap, if you will. What's the first thing we do when trying to prioritize a roadmap? Well, one, we have to define what our goals are. What is the goal for the roadmap? Are we trying to increase the win rate? Are we trying to increase the ASP? Are we trying to reduce churn? Whatever it is, and oftentimes there are multiple goals. You generally have OKRs, you have some kind of quarterly goals that you want to look at.

RW: Step two, start looking at features or solutions, or sometimes even do this on problems as opposed to features. But let's start scoring the things we could build against our rubric of whatever our goals are. There's a bunch of talks about this, about how to estimate impact. I actually like this guy, Bruce McCarthy, who's actually launching a book about roadmapping. So, you just score everything zero, one, or two: it will have no impact on the goal, it will have some impact, or it will have a lot of impact. Anything beyond that, you'll find, is really suspect.

RW: So, we don't rate this one to 10. Let's just call it zero, one, or two. You can do whatever you want, but that's where we're gonna start. You can guess what we're gonna do next. We're gonna estimate effort, engineering effort. Again, it doesn't really matter what numbers you use, it's more the fact that you're using some at all. In this case, we often do t-shirt sizes: small, medium, large, and we like to basically double the number at each step. So, I'm not a fan of one, two, three, because usually a large engineering effort is not 3x a small one, it's orders of magnitude more. So, I like using one, two, four. You can use whatever numbers you want. Again, the numbers don't matter, it's consistency.

RW: Next thing we do, estimate confidence. How confident are we in either the effort estimate, but really, especially, in that goal estimate? Really try to measure confidence on the goals. Effort, generally, I think as PMs we can get decently right, or once we shortlist, we can take it to engineering and get it very right. But we're asking, "What's our confidence interval on the goal?" So, you can guess where we're going with this, which is, we do some maths: goal A plus goal B, times confidence, divided by effort, equals a score, and we stack rank based upon this. How many folks here do something similar to this on a quarterly or biannual level with some of their features or functionality? Cool.
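[Editor's note: a minimal sketch of that scoring arithmetic in Python. The feature names, impact scores, and weights below are invented for illustration; only the formula itself comes from the talk.]

```python
# Hypothetical scoring sketch: impact on each goal is 0/1/2, effort uses the
# 1/2/4 t-shirt scale, confidence is an estimate between 0 and 1.
features = [
    # name,       goal_a, goal_b, confidence, effort
    ("Feature A", 2,      1,      0.8,        2),
    ("Feature B", 1,      0,      0.9,        1),
    ("Feature C", 2,      2,      0.4,        4),
]

def score(goal_a, goal_b, confidence, effort):
    # (goal A + goal B) * confidence / effort
    return (goal_a + goal_b) * confidence / effort

# Stack rank by the score, highest first.
for name, *estimates in sorted(features, key=lambda f: score(*f[1:]), reverse=True):
    print(f"{name}: {score(*estimates):.2f}")
```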

RW: Pretty common, and we'll talk more. There's a lot of ancillary benefits to that. I like to talk about how Zynga did this pretty effectively. Their goals were revenue lift or referral lift. All they cared about was, "If I add this new horse stable to FarmVille, will that increase the number of people that get referred into FarmVille, or increase the amount of money we make off of it?" It's fantastic. I don't know if you've ever heard this story, but they would literally look at 20 different things. They would do a lot of analysis to figure out what exactly the impact of these things would be. They would stack rank these 20 things, they would go build the top three things, and within two weeks, they would know, "Hey, we thought it was gonna lift revenue by 0.3%. It lifted it by 0.25%." To the nines, they knew exactly how to optimize that machine.

RW: So, I love this model just for the framework, because it worked so well for them. The challenge for us... How many of you here are B2B companies? Okay. How many of you... Let me ask another question. How many of you think you have as much behavioral data on your users as Zynga does? Nah, I don't think so. So, the challenge is, with online gaming, if I don't like the game, I don't play the game. If I don't like your software, sometimes I'm still using your software. I'm forced to. In B2B, people are using it day in and day out. Why do they churn? Well, 'cause they hated it day in and day out. So, some companies are gonna be 100% behavioral when they do this analysis to inform these impacts and these estimates. I find that most of us live somewhere over here. We do look at behavioral data, it helps us tell where there might be smoke, but a lot of this stuff ends up being qualitative. It ends up being feedback from customers, sales teams, surveys, you name it. So it's a mix of both, and that's the way we have to look at it. So, if I go back to this, what's our biggest challenge with this exercise?

RW: Where the hell do we pull these numbers from? We just kinda pull these out of our ass and said, "Here's what the impact is." It gets really fun when you start having multiple PMs do this. We've got five PMs on the team or 10 PMs, each one's gonna score their own set of features and then we're gonna globally stack rank them. How do you think that's gonna go? Oh, well my thing is like super important, it's gonna really affect this goal. So it's really tough sometimes to figure out. How do we figure out? I mean it's a great exercise, it's certainly better than doing nothing. But the next question comes up, how do we know that... How do we have some validation that these numbers are at all representative? And so what I want to talk about is how we can use feedback to de-risk that. So let's suspend disbelief for a second and let's assume you had a database that was everything your customers had said about your product. Assume you had that, we'll talk about how you get that in a minute, but let's just assume you have this database of all the things.

RW: Well, if you had the database of all the things that people have said about you, the thing you'd want to do, and what we do, is associate them to the things we're considering putting on the roadmap. So let's go ahead and connect the customer feedback; let's see how many customers have asked for something that we're evaluating. I obviously also want to look at it by percentages: what percentage of our feedback is related to this thing, instead of just raw numbers there. So, why do we do this? Well, the first way we de-risk this is: do the asks back up the estimates? For example, what if we see something where we think there's gonna be a really big impact on some goal, we have a high confidence in it, but we have very little customer feedback to support that? That doesn't mean it's wrong, but it certainly means we should ask a question of the product team. How are we so confident in this thing? Should confidence be lower? Should the estimate be lower? Or is it accurate? In this case, in my contrived example, we've added in our CRM data. Again, imagine this mythical database has all your feedback data and has your CRM data about spend.

RW: Okay. Well, now we know the answer, okay? I asked the PM, "Why are we so confident this will have a big impact on goal B?" He said, "Oh, that's because our enterprise customers have asked us for this. That's why we're really confident about that." So it's only three folks, but they represent $300,000 of revenue. Which brings me to the next thing: are we overstating the impact? Because a common thing that happens to us as product people is we get a lot of selection bias; we hear a lot from our top three accounts, top 10 accounts. We sometimes have trouble building the case for why we should actually be paying attention to some functionality that doesn't benefit the top three accounts but benefits 30 small accounts, which in aggregate are more. So we like to do this exercise too, because we often find that, "Okay, we have whole big customers on something, but if you look at this list, the aggregate amount of customer dollars is actually the smallest on that one." So I might go to my product team and say, "Okay, maybe your confidence is right, it'll have an impact, but I don't think it'll have that much of an impact, and is it really gonna have more impact than the rest of these things that have more dollars behind them in aggregate?" So that's kind of the second way we de-risk that.
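[Editor's note: a small illustrative roll-up of asks and dollars per candidate feature, along the lines described above. All records and numbers are hypothetical.]

```python
from collections import defaultdict

# Invented records: (customer, feature they asked for, annual spend from CRM).
feedback = [
    ("Acme",    "Feature B", 100_000),
    ("Globex",  "Feature B", 150_000),
    ("Initech", "Feature B",  50_000),
    ("Hooli",   "Feature A",   2_000),
    # ...imagine 30 more small accounts asking for Feature A here
]

asks = defaultdict(int)     # how many customers asked for each feature
dollars = defaultdict(int)  # how much revenue those customers represent

for customer, feature, spend in feedback:
    asks[feature] += 1
    dollars[feature] += spend

for feature in asks:
    print(f"{feature}: {asks[feature]} asks, ${dollars[feature]:,} behind them")
```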

RW: The next way to look at this is [09:19] ____ real goal? So say one of our goals is actually to reduce churn. So instead of goal A, we have "reduce churn." Well, the thing we would look at is, let's not just line up all the customers that have asked for this thing. Let's line up all of our detractor customers and all of our churned customers, and their dollar amounts, and see what we get. And so again, this helps us de-risk it even further. Now I look at this and say, "Gosh, we think feature C is going to reduce churn, and we think it's gonna reduce churn more than A, but we have much more demand for A from the churned customers and detractor customers." Again, this doesn't mean you're wrong, it just means this is something for us to investigate and get comfortable with, and de-risk, because that's what prioritization is about to me. We make our best guess and we use all the data at our disposal to try to de-risk that. So this is what we do, and this is what we talk our best customers through doing. So I'd look at this: "Okay, maybe we need to reduce the estimates on our reduce-churn goal for feature C, and maybe we need to up it for feature A."
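[Editor's note: a minimal sketch of segmenting that demand by the goal, assuming each record carries a hypothetical NPS bucket and churn flag.]

```python
from collections import Counter

# Invented records: (customer, feature asked for, nps_bucket, churned)
feedback = [
    ("Acme",    "Feature A", "detractor", True),
    ("Globex",  "Feature C", "promoter",  False),
    ("Initech", "Feature A", "detractor", False),
]

# For a reduce-churn goal, only count asks from churned or detractor customers.
at_risk = [rec for rec in feedback if rec[3] or rec[2] == "detractor"]
demand = Counter(rec[1] for rec in at_risk)

print(demand.most_common())  # here: more at-risk demand for Feature A than C
```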

RW: And finally, the last way we de-risk this is we go in and say, "Okay, cool, we believe our estimates, we've gone back and checked them, we feel like they check out, we're gonna go build C." What's the first thing we do when the executive team comes and says, or we just decide, "Okay, we're gonna go build X"? The first thing I do, and our team does, is say, "We gotta find 10-20 people that actually want X." Can we go talk to them and validate that that is actually what they wanted, and that this mock-up our design team made actually looks like something they want? And so that's the other way we de-risk this with feedback. It speeds our time to go do the research and find out, "Do we need to build this thing? Is this thing actually even solving the right problem? Was that the actual problem in the first place?"

RW: How many people do something like this? Cool. I wanna talk to you later. It's really hard work. So let's talk about how we get this mythical database, because I think this all sounds good in theory, but it generally feels like there's a lot of work you have to do to get to this point, and we're product people, we don't have a lot of time to do stuff. So let's talk about how we build this machine to basically be able to line up with that rubric we had before. There's two channels I want to talk about. My goal is that I wanna hear feedback from 100% of our customers, if possible. To do that, I need to tap into multiple channels. And the two channels I look at are basically direct feedback from users, and indirect feedback from sales, support, success, your customer-facing teams.

RW: Now both these channels are important, we've found. You can't just do one or the other. We've actually found internally we get about 53% of our feedback directly from customers and 47% through internal teams. We're UserVoice, we have a brand built on getting feedback in public from people. If we are getting 50% from internal teams, I guarantee you, you will get that much or higher. It's not possible for us to go completely direct. You can see a lot of it is from support, [12:31] ____ sales, success, and these numbers actually scale pretty linearly with the size of those teams.

RW: What's really interesting about this, and we've found this for other folks, especially for ourselves: different people give feedback through different channels. So the other reason why both channels matter to me is they're kinda mutually exclusive. And it kinda makes sense. Some people like to watch videos to learn about stuff, some people like to read the blog post. Some people don't wanna talk to your customer teams, they just wanna drop their feedback over there. And some people are on your white-glove plan, and they can't be bothered to go to an idea portal or a survey and fill out anything. They're just gonna tell their rep on the phone.

RW: So we gotta do both channels. So let's talk about real quickly how do we do both channels well and how we often get it wrong. So getting feedback from users. No matter how you do it. NPS, idea boards, in-app widgets, usually the key failing point is low response rates. We don't get enough people give us feedback. I think most of our companies... Feedback to me is like this dirty word. It means useless anecdotes. It means statistical noise. And it's because we [13:37] ____ noise. We don't hear from enough people. So let's talk about why we don't hear from enough people. One is generally there's a lack of awareness. Most customers still think you don't give a shit about their feedback. That's changing, that attitude is changing a lot, but for many years every website on the world had a link that said 'feedback.' It was in the footer, you clicked on it, it took you to a form that went to nowhere.

RW: So we've trained people for a long time that we don't care about your feedback, and so I'm not gonna go out of my way to give it. Second thing is if we do have a way to give feedback, we often make it a lot of work to give it. And lastly, and this one's the one we miss a lot, which is "what's in it for me?" Why would I even take the time to give you feedback? Even if I was aware that you wanted it, even if you did make it easy to use, how do I do that?

RW: Have you ever seen the flying surveys? It's like you go to a website, and it's like, "Take a survey." How many people have ever filled one of those out? Have you ever clicked on that thing before? Okay. Have you ever finished filling that thing out? I'm in this industry and I'm always like, "Ugh, I should go see how we're doing things today," and I click on it and it's like, "Do you have 10 minutes to take a survey?" and I'm like, "No, I was reading something about my fantasy football team. I don't have time to take your fricking survey." So, what's the problem with this, right? It doesn't stem from lack of awareness, it's flying across the screen. I'm very aware they want my feedback. However, there's too much work to give it, and there's no "What's in it for me?" You'd like my feedback? I don't care. What do I get out of this? Even NPS. I've talked to a lot of people about NPS 'cause... How many people here do NPS? How many people are goaled on NPS? Yeah, okay.

RW: The scoring thing's fine. At least, especially if you do a pop-up or an email, we're aware that you want the feedback. It's pretty easy to give, at least to give the score. The problem we found is people don't ever fill in the 'why.' There's no value in it for me: "Why am I bothering to tell you why I gave you a five? 'Cause I don't think you're gonna follow up on it." Let's see if I've put this in here... Yeah, I did. Okay, I'll [15:38] ____.

RW: So let's talk about what we need to do. One, we got to promote that we're listening. We actually worked with Stack Overflow, many years ago. We're familiar with Stack Overflow, correct? And we invented like these little feedback tabs to put on people's websites. And we just did that to try to break people's mental model that this company here cares about your feedback. The company that got the most product feedback, almost ever, was Stack Overflow. And you know what they did? They put red text at the top of their website, it said "We want your feedback on what we should build next" with a link, "click here." That's all they did. They didn't integrate stuff, they didn't pop up stuff, they just put red text at the top and people came through. So promote that we're listening. Promote that we're active. And again there's something in there, we want your feedback and what we're gonna do with it. We're gonna use it to inform what we build.

RW: Next, we want to make it a single question, if possible. These 10-minute surveys are not gonna work. And we need to explain what the user gets in return. Again, I'm going to mention our product, but there's a lot of things that can do this. The nice thing about when people go to a UserVoice forum or other idea boards is that you can see, "Oh gosh, I can see whether the company is responding." We actually had a Stanford PhD look at an anonymized set of data, and they found that when companies actually respond to the feedback, and users can see that they've responded to the feedback, users are more likely to give it. Makes sense, right? If I see a thing with a bunch of feedback nothing's been done on, why am I going to pile on? So we promote: we want your feedback, here's where to give it, here's what we're gonna use it for, it's gonna inform our decision-making. And try to make it a single-question survey. This is also... You can't possibly read that. There's this guy named Sean Cramer who does a really good talk about voice-of-the-customer programs at Atlassian, and now he's at Amazon.

RW: And like I said, the problem with NPS is there's nothing in it for the user of NPS. What they would do is, when they get NPS results, all of them, but especially ones where people were like a detractor or something, and they didn't say why they were unhappy, they send this followup email that says... I'll have to paraphrase it. It basically says "We really value your feedback. We use it to inform our product decisions... " It explains basically the process. We classify everything to reliability, usability, or functionality, it's reviewed by our product team. Basically, it's this big write-up about "We actually pay attention to stuff, thank you for filling it out." They find just sending this means, next time around, more people actually filled out the why. And not only that, even without doing anything actually, their NPS scores went up. Just by saying we actually do listen to this.

RW: I'll tell you that level. That works for a while if you don't actually listen to it then you get this rebound downward approach. So that's feedback from internal teams. So we say what we mostly get wrong is we don't promote enough. We don't say we want to listen. We don't tell people, here is what we are going to do with it, and I think that's scary to us as product people because we're afraid what if we wanna do what they want us to do and we often don't and that's a whole 'nother talk I can get into. But after this, we can, if you're curious about that, I can tell you about how to get around that, but we shouldn't be afraid of that because it's gonna help us de-risk things. Let's talk about internal teams. How many people have a regular meeting with a sales or support or success team about "Hey, what are you guys hearing on the front lines?" I see this a lot. I see it typically manifest itself in a couple of ways, but the predominant way is we meet once a week, once a month, once a quarter. We bring a spreadsheet of here's our top 10 asks and then we'll check back in next month and you will tell us if you have done them and we as product people don't generally like this.

RW: Why don't we like this? One, I don't know how to compare the number three thing from sales from the number one thing from support. Which one, like I have no idea. Two, I don't know that I really trust it because it is just a list of feature names. I don't know who said it. I don't know what they actually said. Sales is notorious for this. Well, we need all these features and if I go talk to them. No, that's not what they need. So, it's hard to trust, and it's pretty time consuming and frustrating for everyone. We did a survey a couple of years ago, and this won't surprise you but every team wants to influence product. Partially for their own self-interest. Sometimes I want to be able to close more deals. Customer success is gold on retention. They want you to improve the product. But eventually this process really, if you can't aggregate the data, if you can't make sense of it, if you don't really trust it, this is kind of a political meeting than really an effective meeting.

RW: So, I'll tell you what we have done and what I have seen some other people do is either we have a tool for this or you can build your own. This can build something where they can give you feedback at any time. However, there is a couple of rules to giving product feedback. One, we have to know who it came from. You have to tell us it came from user X. Two, you have to show us what they said or at least paraphrase what they said. If it's phone call, what did they actually say in their own words? If it's an email, you highlight what they wrote. We have PMs, sometimes when we do customer interviews, after the customer interview, we'll do a write-up, highlight here is exactly what they said and then we will actually have them associate it to the feature they think will fix this and we like that, we like the last part because it saves us some time of organization. So we put this all together, we put this all together, we end up with a system where whether they come directly to us through a survey, NPS or an idea board or they come through a sales support team, we have this data structure.

RW: We have a user, we have exactly what they said, their primary source feedback, we have ideas. Though ideas to me are always kind of grist for the mill of we are going to mine into this and find out what the problem is behind this. So I don't necessarily trust, the success team says, we've got 200 people that said the box should be red. That's fabulous. We are not gonna make the box red. We are going to what we believe is that those 200 people have one or two of the same problems and so we will go ask them to go find that out but that's fine. You have done a lot of good work for us. We needed to know that those 200 people had this problem. So we push all of this onto our teams and by putting it into this data structure, I can now merge it from different sources so sales, support, direct from customer. I can now put it into a single database. We then have a separate step above that which I call features though I have to think of them as solutions. So now we map things, we think about build. We go in there. We look at the top 10 ideas of our detractor turned customers. Cool. Let me go line up solutions to all of these because sometimes a solution will affect multiple ideas.

RW: Multiple problems can be solved with a single feature you might build but that's my space as a PM. That's where I translate and I write them in my own language, and finally, we will hook this into the customer data. Because we know this is why we always want to know who the end user is. So we can eventually go back and pull in the demographic data, CRM spends data, you name it, whatever I may want a segment on, I do it. This is the data structure we found. Whether this is what we have in our product. We have also known people to build this with Salesforce custom objects, you name it. Buildings come in data structure, making sure you've got basically a standard protocol for how those teams give you feedback. We don't want that monthly meeting. You want feedback. Flag it. Send it into us through this mechanism. That allows us to get to where we are. We've actually got about, I think, it's 65% of our customers we get feedback from at any given time and that's pretty good given that a lot of our, we have a big swath of we've got really high-end customers and we've got a bunch of smaller customers that are probably, if I cut this into people that pay us more than $100 a month, that number is probably closer to 95, 96 and the PM team does less work.

RW: So just to wrap up here, what's nice about this is if I had this database, it helps me really inform and de-risk the product roadmap when I go through that exercise of scoring and ranking functionality. It helps me increase internal confidence in the roadmap. One of the things about doing that framework methodology and then especially if you could do the framework net to customer asks, it helps you when you are going out to these other teams, sales, supports, success and saying, we have a system. It's pretty consistent and this is the number one thing we hear from team's external product is they get really upset that product feel... Feels like product just kinda this on everything and just saying "Hey, we've got a standard system. Here's how it works." Inspires a lot of internal confidence which is something not to be discounted because having your support team, your success team, your sales team think you building the right thing makes your job a lot better.

RW: Focus at PM time, like I said, we did another survey that found 20% to 25% of people's time is what I call building the spreadsheet, talking to those teams, merging things, keeping track of what people ask for. That's not a great use of your time. When you grow up to be an enterprise company, usually you shove that off to a, you make a new team that their only job is to do that, I'm guessing based upon this conference, we're not probably at the point where we have a whole team that just spends all day churning that data, it ends up falling on us; we don't want that. We want to push that work off to either the end users to organize it for us, or the customer-facing teams, who are pretty incentivized [24:38] ____ organized, to organize it for us. Support people want to get this out of support cubes, they want to close the ticket. Customer success want to tell you about it because they want the thing to get built so they can have retention so they can hit their goal.

RW: And the last thing I'd say is, it helps you develop more evangelist customers. And this is the other thing, not to discount too much, I firmly believe five, 10 years from now, all this will be pretty unsurprising. Right now though, when you close the loop with the customer that gave you feedback six months ago, nine months ago and say, "Remember that thing you asked about? Well, it's now in beta, would you like to join the beta? Or we gotta mock-up of it, would you like to see it?" Or we launched it. People go bananas over that. I was at a meeting at Adobe a couple weeks ago, and they were showing when they push out updates and say this thing is completed, they get all tweets being like, "Oh my gosh, my thing got built." That's not to be discounted. I ran through a bunch of stuff, We'll have the slides online, we've got a couple ebooks, webinars, videos and stuff that goes more into all these topics but I just wanna fly through a bunch of the stuff. Thank you guys very much.

[applause]

Moderator: Alright. That essentially concludes Empower. Thank you all again for coming to our inaugural event. We will definitely be hosting this again next year. I'm super excited. Hope to see all of you back. Without further ado, please feel free to enjoy the cocktails and the networking hour. It's in the same room that lunch was in, just around the corner to your left. Thanks.

[applause]