Guy Yalif

Founder and CEO of Intellimize

Artificial Intelligence for B2B PMs: How to Get Started and Examples of Practical Applications from MarTech

Guy Yalif: Hello, my name's Guy Yalif, I'm the CEO of Intellimize. For the last few years we've been helping growth teams better connect with their customers by using AI to improve their online engagement, which has driven a bunch of revenue and a bunch of what they wanted out of their products. For me, that journey began a long time ago when I coded a rules-based expert system. Can you hear me? Is the mic a little bit turned off? Uh, let's see. A little on the top...

Guy Yalif: ...orange on the top. There we go. My bad. Okay. Um, so for me that journey began a long time ago when I coded a rules-based expert system to design airplanes in college. Since then I've spent 10 years as a product manager and 10 years as a marketing guy serving growth teams across industries at Twitter, BrightRoll and Yahoo. Today I want to help make you better consumers of AI by helping you poke holes in what you're hearing, get more out of your investments and, uh, take better approaches when you're making AI investments. We believe that AI can help each one of us advance our careers, deliver more for our companies and get more done. And so we'll spend the next 30 minutes talking about, practically, what are the different kinds of AI that we should know about as product people, uh, and a couple of examples of how people have put this to use in day-to-day life. So please jump in, ask questions anytime. So, anybody have a sense of how you would define AI?

Speaker 1: [inaudible]

Guy Yalif: It learns, yeah, that is a great definition, in particular for machine learning. Most would say AI is a computer doing something we would consider intelligent, and it's all around us. I mean, these companies today helped us figure out how to get here faster, entertained us, helped us avoid reading spam email. Uh, it is pervasive in our personal lives, and it turns out it's particularly good at a few things that are really helpful to us in our professional lives. Anybody have a guess what these four numbers are? They each illustrate something AI is really good at. AI is really good at managing a lot at once. Four and a half billion is the number of versions of a page. A growth professional got up in front of a conference a year ago and said, hey, I'm managing that number at the same time using AI.

Guy Yalif: AI is also really good at accelerating learning. Our average customer last year personalized flows that would have taken them 25 years had they used traditional A/B testing. It's also good at, unlike us humans who get tired, fallible and distracted, listening and reacting 24/7. So four hours is the amount of time it took one of our customers to see different experiences showing on their website after their traffic changed. And one is the size of the segment to which AI can help us personalize. It can act with superhuman levels of precision, enabling us to deliver the one-to-one experiences that we've been talking about for a long time. And I want to talk about how that happens. Now, AI is really good at these things. It's not so good at some other things. I'm going to pull examples mostly from, uh, personalizing site experiences for growth professionals,

Guy Yalif: Cause that's happens to be what my company is doing. But these apply much more broadly across what we're doing as pms and as growth professionals. But if you're thinking about optimizing a site, you need to do three things. One, you need to have customer intimacy. You need to empathize with your customers and understand a day in their life too. Gotta be creative. Got to come up with a bunch of ideas of things you want to put in front of your prospects to get them to become a customer. And three, you need to manage a lot at once. You need to manage a bunch of experiments. AI is not particularly good at human empathy and human creativity, at least not yet. And so we think the best outcomes happen when you pair human intuition, creativity, and empathy with machines that can do a great job managing a whole lot at once and accelerating what we're doing.

Guy Yalif: And you can do that today. I'll pull examples from marketing, but they, uh, are things I think we can all learn from throughout the marketing funnel. So our brethren in paid media have been using AI out in the open for 10 years, right? When Google says, hey, give me five search ads, not one, it's using AI to figure out what's the right one to show each individual visitor, then it feeds the winners and starves the losers automatically. Persado at the top of the funnel is another example, a company trying to create more engaging email subject lines. It's trying to be creative automatically. You go mid-funnel, there are companies like MadKudu and Infer that are creating batch models to try to predict, hey, I've never seen this company before, are they likely to be a high-value prospect? Or are they actually a low-value prospect and I should keep them away from sales and point them to self-serve? Even further down the funnel, you've got companies like Drift and Intercom, which are enabling growth professionals to create playbooks, rules that they will then go execute to interact with prospects, enabling live selling.

Guy Yalif: Now, we'll draw, as I mentioned, examples from websites. The principles will apply more broadly. And I'd like to talk about the different kinds of AI that are out there. We'll talk about distinctions that I think matter to us as PMs, practically, day to day. We're not going to go, uh, super deep or into any of the math, but rather focus on the intuition of what matters. So at least on websites, folks begin without AI at all, but they run an A/B test. We're all familiar with them, right? You give half your traffic to one thing, half your traffic to another, you wait until you reach statistically significant differences, which happens about 20% of the time, you pick a winner and you go code it on the site. It's great because you're using data to decide what you're showing. But then we often ask ourselves, is everyone really the same?
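A rough sketch of the statistics behind the A/B test he describes, as a two-proportion z-test; the traffic and conversion numbers are invented for illustration:

```python
# Split traffic two ways, then check whether the conversion rates
# differ significantly. All numbers here are made up.
from math import sqrt
from scipy.stats import norm

def ab_test(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-proportion z-test: is variant B's rate different from A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)             # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))                 # two-sided
    return p_value, p_value < alpha

p_value, significant = ab_test(conv_a=200, n_a=5000, conv_b=250, n_b=5000)
print(f"p = {p_value:.4f}, significant: {significant}")
```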

Guy Yalif: And so we want to do something a level further, and often we'll turn to rules. We'll create if-this-then-that statements like this one, where, you know, if somebody's in San Francisco, I'll show them the Northern California promotion. It's great because you're not treating everyone the same way, which we often don't want to do in our products, right? We want them to have personalized experiences. And often, when you're hearing folks talk about AI, they're really doing this, they're doing these if-this-then-that statements. You see it all the time in marketing automation like Marketo. You see it in the chat bots I was talking about before. It's pervasive and, uh, it gets people wanting more. They say, look, I got more of the engagement and revenue I want because I created these rules. So they create more of them. They say, look, if somebody is in this particular situation or context, or has exhibited these behaviors, I'm going to show them this message or open up this feature to them or give them this experience.
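A toy sketch of that if-this-then-that personalization; the rule conditions and promo names are hypothetical, not Intellimize's:

```python
# First matching rule wins; fall back to a default when nothing matches.
def pick_promo(visitor):
    rules = [
        (lambda v: v.get("city") == "San Francisco", "norcal_promo"),
        (lambda v: v.get("visits", 0) > 3,           "loyalty_promo"),
    ]
    for condition, promo in rules:
        if condition(visitor):
            return promo
    return "default_promo"

print(pick_promo({"city": "San Francisco"}))  # -> norcal_promo
```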

Guy Yalif: And often we'll say, great, that worked, I want even more. And so you end up with a huge number of rules. When my cofounders were personalizing the Yahoo homepage, if they had wanted to create a rule for every city in the U.S., they'd have had the daunting task of creating 35,000 of them. It's not tenable at that scale, but it is particularly useful in certain situations. I'd suggest you think about rules, one, when you've got a hard business rule. Perhaps as a marketer you've got a promo in a certain area of the country, or as a PM you've got a feature that you can only enable for a certain group of folks. And two, if you've got machine learning scientists on your team, this is a great way to go check whether there is lift in something. This is a good way to, you know, in half a day, try something out, rather than them spending three months building something that you're not sure is going to work. And it's often implemented as an A/B test with an audience, or with rules-based personalization, or with an expert system, which is really just a bunch of if-then statements organized effectively. This happens to be the one I wrote in college. Any questions so far?

Speaker 3: Yeah,

Guy Yalif: So, intuitively, I think machine learning folks would say that rules-based systems are formally a branch of AI. But I think what we're all used to calling AI is machine learning, where a machine is learning on its own.

Speaker 3: Question. [inaudible]

Guy Yalif: I realize I'm going to repeat the question just so we can get it on the audio. So the question was, would you call it heuristics when you're using rules-based, uh, AI? Absolutely. In fact, the expert system I showed you before was specifically built to do that. When you're designing an airplane, there are a bunch of rules, and there are also a bunch of things you learn from experience. Experienced airplane designers know these rules of thumb, and you can encapsulate those in rules. So yes, I think it's a great way to think about it. Question?

Speaker 3: [inaudible]

Guy Yalif: What's the difference between fuzzy logic and AI?

Speaker 3: [inaudible] pretty bad.

Guy Yalif: I think you're spot on in asking questions, saying, look, people are slathering AI on everything in their marketing. They're saying everything is AI. Like the question about rules-based systems, you know, they would say, hey, that is AI. If you can hold that question for one more slide, I'm going to talk about what I believe are the distinctions in the machine learning tree, the hierarchy that's relevant for us. Transparently, fuzzy logic isn't one of the ones I had on my radar, and maybe we can map it to one of these as we go through. Okay. Machine learning is where a machine is teaching itself. One, you've got a bunch of data, and you need to be thoughtful about that data. Some algorithms require Google-scale data in order to work. Some other algorithms require data that's really expensive for you to go get. Be thoughtful about that

Guy Yalif: as you pick the approach you're going to take. Two, you then have a model that's being trained, and you want to pick the right model, the right approach, for the problem you're trying to solve; we'll talk about that next. You know, the model MadKudu is using to predict lead scores is very different than the model you would use to do A/B testing on a website. And third, you want to then create a prediction and put that into production. You want to use that to change what your users or prospects are experiencing, and you want a system that will help you do that quickly, where those learnings are being put out into production regularly. Now let's get to your question about the different kinds of problems we might solve and some of the kinds of algorithms that would help us do that.

Guy Yalif: To build some intuition, I'm going to draw from the world of marketing. You can draw direct parallels to the world of user experiences and product. So on the left I'm going to talk about different kinds of problems, and on the right some classic kinds of machine learning algorithms that are often used to solve them. Please interrupt with questions throughout. So, if you're trying to predict something that is continuous, like a lead score, like how much I should charge for something, where there's an infinite number of values, right? You're trying to predict something between one and 10 where any number in between is an option. Linear regression is the most typical way to approach that, and a lot of the machine learning we see is doing that. It's trying to fit a model to predict a number like, um, housing price, right? I might have data with the square footage of a house and the price at which it sold, and I'm going to train a model to predict price based on square footage. It's a continuous number. If instead you've got discrete values, like I want to predict if somebody's a high lead score or a low lead score, or I want to predict is this email spam or not spam, then classification algorithms, logistic regression being the most common, are the approach most often used. They try to classify between two or multiple different classes. Make sense?
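A minimal sketch of the two supervised problems he contrasts, using scikit-learn; the tiny square-footage and spam datasets are invented:

```python
# Continuous target -> linear regression; discrete target -> logistic
# regression (classification). All data below is made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Predict a continuous number: price from square footage.
sqft  = np.array([[800], [1200], [1500], [2000], [2600]])
price = np.array([150_000, 220_000, 280_000, 370_000, 480_000])
reg = LinearRegression().fit(sqft, price)
print(reg.predict([[1800]]))            # estimated price for 1800 sqft

# Predict a discrete class: spam (1) vs. not spam (0) from two toy features.
X = np.array([[0.9, 12], [0.1, 2], [0.8, 30], [0.2, 1], [0.7, 9]])
y = np.array([1, 0, 1, 0, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.85, 20]]))        # predicted class
print(clf.predict_proba([[0.85, 20]]))  # class probabilities
```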

Guy Yalif: Third, there's a whole group of recommendation algorithms, particularly relevant to us as PMs, where you might want to know, hey, what's the right product for me to show, what's the right content for me to show? Some of these are based on, hey, the two of you behave the same, so you might like the same things. There's a whole other approach where you might say these two products are similar, so if you like this product, you might like that product. So you generally can come at it from the user-behavior end or from what I know about the product, and depending on the data you have and, uh, the problem you're trying to solve, you pick how to solve it, including when you've never seen a user or a product before. So those are recommendation algorithms.
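A small sketch of the item-based flavor he mentions ("these two products are similar"), scoring products by cosine similarity over an invented ratings matrix:

```python
# Item-based collaborative filtering: recommend unrated products most
# similar to the products this user already liked. Ratings are invented.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Rows = users, columns = products; 0 means "hasn't rated".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
])

item_sim = cosine_similarity(ratings.T)   # product-to-product similarity

def recommend(user_idx, k=2):
    user = ratings[user_idx]
    scores = item_sim @ user              # weight similarity by user's ratings
    scores[user > 0] = -np.inf            # don't re-recommend rated items
    return np.argsort(scores)[::-1][:k]

print(recommend(0))   # products user 0 hasn't rated, best first
```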

Guy Yalif: Next, there's a whole group that gets a whole lot of press nowadays. When you've got Google-scale data, you can do things like try to understand speech, try to recognize images, try to write an email subject line. And often deep learning shows up here. Deep learning is a fancy word for neural networks that have a bunch of layers in them. They take an awful lot of data to train, and then they can do magical things when you can train them that way, or you can train one to go solve chess. Um, so speech and image recognition is a whole class of problems where you will often see deep learning neural networks.
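A tiny sketch of the many-layered neural network idea on a toy digit-recognition task; nothing remotely Google-scale, just enough to show the shape of it:

```python
# A small multilayer network on scikit-learn's 8x8 digit images.
# Real deep learning stacks many more layers and needs far more data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # 8x8 grayscale digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```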

Guy Yalif: Then, sometimes you want to know more about your customer base. You've got a bunch of data about your customers, and the goal isn't, hey, can I predict a certain output, like that housing price. It's not that at all. Here you're trying to find patterns. You're saying, find me the clusters in my data, find me the clusters in my customer segments. Clustering algorithms are particularly well suited to that kind of problem. Couple more to share. There are times where you want to detect when something's unusual, an outlier. Some of the early work in this kind of machine learning was done with, um, aircraft engines. People wanted to predict, hey, when is the engine on that jet going to fail? I want to pull a whole bunch of data about the engine, and I don't know which piece of data matters, but I want to predict that. It's often used in products to predict fraud, you know, is this signup real or is it fake? The core question this algorithm is asking is, here's everything I know about the examples that have been normal; is this new example like that, or is it an anomaly, is it different? That's the intuition around this one.
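A short sketch of that anomaly-detection intuition, training an isolation forest on invented "normal" sensor readings and asking whether new readings look like them:

```python
# Train only on examples that look normal, then flag new examples that
# don't resemble them. The sensor readings are invented stand-ins for
# his jet-engine example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_readings = rng.normal(loc=[100, 0.5], scale=[5, 0.05], size=(500, 2))

detector = IsolationForest(random_state=0).fit(normal_readings)

new_readings = np.array([[101, 0.52],    # looks like the training data
                         [160, 0.90]])   # looks nothing like it
print(detector.predict(new_readings))    # 1 = normal, -1 = anomaly
```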

Guy Yalif: And lastly, there's a different approach called reinforcement learning that is suited to problems with a few distinguishing characteristics. Feel free to take a picture, though you may want to wait a minute, because I'm going to add two more things to this slide. Um, reinforcement learning is particularly suited to problems like chess or go or checkers, where, one, you only get to learn about the moves that you make. If I'm playing chess and I make this move with my piece, I'll get to learn the outcome of that move. I don't get to learn anything about all the other moves I might have made instead. That leads to part two of reinforcement learning: you have a trade-off between exploring, should I make one of those other moves, and exploiting, should I do the thing that, based on what I know now, is the best thing for me to do to win this chess game.
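A minimal sketch of that explore/exploit trade-off as an epsilon-greedy bandit: you only observe the payoff of the option you actually choose. The payoff rates are made up and hidden from the learner:

```python
# Epsilon-greedy: mostly exploit the best observed arm, occasionally
# explore a random one. You only learn about the arm you pull.
import random

true_rates = [0.02, 0.05, 0.04]          # hidden conversion rate per arm
pulls = [0, 0, 0]
wins  = [0, 0, 0]
epsilon = 0.1                            # fraction of time spent exploring

random.seed(0)
for _ in range(10_000):
    if random.random() < epsilon:        # explore: try a random arm
        arm = random.randrange(3)
    else:                                # exploit: best observed rate so far
        arm = max(range(3), key=lambda a: wins[a] / pulls[a] if pulls[a] else 0)
    pulls[arm] += 1
    wins[arm]  += random.random() < true_rates[arm]   # observe only this arm

print(pulls)   # most traffic should concentrate on the best arm
```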

Guy Yalif: And now you can take a picture if you want; um, feel free to anytime, but this is the complete slide. The ones at the top are called supervised learning, because there is a particular outcome you want to predict, and your training data has that outcome in it. Like, if you're going to predict housing prices, you're fed a bunch of data where you know the square footage and you know the housing price, so you know the outcome, and you're trying to match data to that. Unsupervised learning does not have in the training data the particular thing you're trying to predict, right? We talked about a very unstructured problem here: what are the clusters in the data? I don't know what they should look like. It's that kind of problem. And reinforcement learning is characterized by the things I described before; it has elements of both in it. I'm going to pause there for a second. Questions?

Speaker 1: You mentioned [inaudible]

Guy Yalif: It was a mistake to mention it here. I mentioned it because sometimes this is implemented with neural networks, and I've talked about neural networks up here. It really does fall down here, not up there. Thank you for asking.

Speaker 1: So where does having a discrete outcome that can be measured come into play? I understand there's a difference between supervised learning and unsupervised learning there, but just as a layperson, I would assume that you need to have some kind of outcome to be able to train any kind of system.

Guy Yalif: The question is, don't you need a discrete outcome to train any kind of system? The answer is yes, but the way you frame the outcome will be totally different between a supervised and an unsupervised learning problem. All these approaches, when they're going through that loop we described before, are saying, here's some measure of goodness, and I want to minimize how wrong I am. So in supervised learning, the measure of goodness is, I needed to predict a certain value, a certain thing; did I predict it well? In unsupervised learning, I'll pick on clustering for a minute. You say, I want to create clusters so that the error, the badness, is as small as possible. So imagine I had a whole bunch of dots here on a two-dimensional grid.

Guy Yalif: If I picked cluster centers over here, the center of the cluster would be really far away from each of these points. If these were the two sets of dots and I picked centers right in the middle of each, now the distance between the center of the cluster and the dots is less. That's the intuition. So you're minimizing this error, and here you would say the error is how far off each piece of data I have is from what I'm predicting is the center of its cluster. So there is something to minimize. It's just not part of the training data, because the training data didn't come with which cluster each point should be in. Whereas here, it did come with that, right? You did know the housing price. Does that make sense?
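A short sketch of that clustering objective; k-means places centers to minimize exactly the summed distance he describes, on two invented blobs of dots:

```python
# k-means minimizes the total squared distance from each dot to its
# cluster center. The two blobs of dots are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
dots = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
                  rng.normal([5, 5], 0.5, (50, 2))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(dots)
print(km.cluster_centers_)   # should land near (0,0) and (5,5)
print(km.inertia_)           # the "badness": sum of squared distances
```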

Guy Yalif: Cool. Other questions?

Speaker 1: [inaudible]

Guy Yalif: The question is, how, in an organization, do you get people to believe this really works and have them making decisions based on it? I'm repeating it just for the camera. Um, we have run into that, and we have often gone with data. So often the machine learning scientist will break up the training data they have into two parts. You may ask them to break it up into three. If I get tactical for a minute: you'll have the part on which you're training the model; you're using that over and over again. You'll have the part on which you check how far off you were on the thing you're trying to predict, so you avoid a certain class of problems called overfitting and underfitting. Then you'll have a third bucket of data, and that's the one you'll show to your stakeholders, because you can say, I trained the model over here.

Guy Yalif: Now here's this stuff the model has never seen before; look at how good it is at predicting on it. You'll use the data itself to say, hey look, this really does work, and invite them to bring their skepticism. Invite them to say, okay, well then let's try it out over the next period of time and see, with this data we never even had before, how well it does. In particular, if you can do it in parallel with what you're already doing, you can say, look, the stuff we were doing produced this much value, and this new thing produced this much value over the same period of time. That sort of head-to-head is one of the techniques I have found useful. On the other hand, it often is less useful to try to walk somebody through the math, depending on who they are.
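A minimal sketch of the three-bucket split he describes; the 60/20/20 proportions are an arbitrary illustrative choice:

```python
# Train on one bucket, tune on a second, and keep a third the model has
# never seen to show stakeholders. X and y are placeholders for whatever
# features and outcomes you have.
from sklearn.model_selection import train_test_split

def three_way_split(X, y, seed=0):
    # 60% train / 20% validation / 20% stakeholder holdout.
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=0.4, random_state=seed)
    X_val, X_hold, y_val, y_hold = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_hold, y_hold)
```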

Guy Yalif: So this is a nice clean tree, a nice clean table. Reasonably experienced machine learning people will disagree on this. Most would say it looks reasonable; some will quibble with one part here or there. Um, but there is no sort of universal truth on this, and frankly it took me a whole bunch of iterations to get it to simplify to this, because people describe it in so many different overlapping ways. Okay. Practical, real life is not this clean at all, and I want to walk you through an example of that. On the left you see the machine learning problems we talked about on the right before; I just slid them over to the left. In practical real life, people will often not use one of these on its own. I want to walk you through two examples, totally realistic ones. Let's say I wanted to recommend some articles for you to read, and I had this whole set of articles I could choose from.

Guy Yalif: Chances are I am not just going to do content recommendation on its own. First, I might write a classifier to say, look, this piece of news, is it offensive or not offensive? I might take the result of that and then say, look, is this popular or not popular, because popularity is a real predictor of engagement, and I might want some mix between those. I then might do collaborative filtering, to say, look, you and you tend to like the same articles, and you like these articles, so you might like them too, but only on the output of the previous step. And finally, I might have a rule in place because I don't want them all to be from one category of content, so I might say, hey, I've got a rule: no more than two per category. Totally realistic, and it takes a bunch of experience to know to string these together.
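A sketch of that chained flow; `is_offensive`, `popularity` and `affinity` are hypothetical stubs standing in for the learned components he names, not a real API:

```python
# Stage 1: classifier filter. Stage 2: popularity/affinity blend.
# Stage 3: business rule. The three helpers are stubs for illustration.
def is_offensive(article):
    return article.get("offensive", False)

def popularity(article):
    return article.get("popularity", 0.0)

def affinity(user, article):
    return 0.0  # stub for a collaborative-filtering score

def recommend_articles(articles, user, max_per_category=2):
    safe = [a for a in articles if not is_offensive(a)]        # stage 1
    scored = sorted(safe, reverse=True,                        # stage 2
                    key=lambda a: 0.4 * popularity(a) + 0.6 * affinity(user, a))
    picks, per_cat = [], {}                                    # stage 3: rule
    for a in scored:
        c = a["category"]
        if per_cat.get(c, 0) < max_per_category:
            picks.append(a)
            per_cat[c] = per_cat.get(c, 0) + 1
    return picks

print(recommend_articles(
    [{"category": "tech", "popularity": 0.9},
     {"category": "tech", "popularity": 0.8},
     {"category": "tech", "popularity": 0.7},
     {"category": "sports", "popularity": 0.6, "offensive": True}],
    user=None))
```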

Guy Yalif: And so when you're working with machine learning scientists, working with one who has experience in your space can be really valuable. It can save you months and quarters worth of work pursuing paths that are less likely to lead to positive results. Another example: I want to predict some score, I don't know, maybe it's the housing price, maybe it's some other score, and I, the machine learning engineer, need to tune a whole bunch of parameters sitting in the background; they're often called hyperparameters. Now, I may have experience in this space and know what range these parameters should be in, but I may also just need to try a bunch. So I may try five different versions of the model, right? Each with different hyperparameters in it. Now what do I do when a new example comes in?

Guy Yalif: I could, one, say I'm going to pick the model that is most confident. So let's say a new example comes up, and regression two says, I am super confident that I've got the right answer. Great, so I'm going to use regression two for that example, and I could do that over and over again. A whole other completely valid approach would be to say, you know what, I'm going to spread the wealth. I'm going to say 80% of the time I'm going to take regression two, 20% of the time regression three, 30% of the time regression four. Ignore the fact that that adds up to over a hundred, but you know, I could set that up in advance. This is what machine learning people deal with all the time, because it's not clear in advance which approach would have been better. Make sense? Questions?
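A sketch of the two selection strategies he describes; `StubModel` is a hypothetical stand-in for a trained regression:

```python
# Pick the most confident model per example, or sample among models
# with preset weights ("spread the wealth").
import random

class StubModel:
    """Hypothetical wrapper standing in for a real trained regression."""
    def __init__(self, value, conf):
        self.value, self.conf = value, conf
    def predict(self, x):
        return self.value
    def confidence(self, x):
        return self.conf

models = [StubModel(1.0, 0.6), StubModel(2.0, 0.9), StubModel(3.0, 0.7)]

def predict_most_confident(x):
    return max(models, key=lambda m: m.confidence(x)).predict(x)

def predict_spread_the_wealth(x, weights=(80, 20, 30)):
    # random.choices normalizes weights, so his 80/20/30 that sums past
    # 100% still works as relative proportions.
    return random.choices(models, weights=weights)[0].predict(x)

print(predict_most_confident(None))      # -> 2.0, the most confident model
print(predict_spread_the_wealth(None))   # sampled by weight
```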

Guy Yalif: I'm going to walk through an example of a customer using this in real life, to tie back to some of the strengths I shared about machine learning earlier. These will be in the particular space my company happens to work in, because that's where I've got examples from, but you can map these to those algorithms we saw before. So, Stella & Dot happens to be a business that empowers women to run their own businesses selling jewelry and clothing, in person, hosting events, and then straight-up e-commerce on a website. They worked with us across all three. This is some of the work they did in the e-commerce funnel. They started at the bottom of the funnel and they tried a bunch of ideas. They tried, in fact, so many ideas on this page that when you multiply them all out, there were more than 400 versions of this cart page running at once.

Guy Yalif: Which one ended up having the most impact? To everyone's surprise, it was this: in this context, emotionally affirmative language at the moment of purchase. This looks great on you, great choice, you've got great taste was the most impactful thing to do. We did complex product recommendations down below, similar products; but this had the biggest impact. No one could have predicted that. The machine did a nice job of finding it: a true 52% lift. Okay. Then they went up funnel a little bit to the product detail page, tried a bunch of ideas, had more than 300 versions of this page running. The idea that mattered the most, that had the biggest impact: having this purchase widget scroll down with their buyer while she scrolled through a whole bunch of images; it used to scroll off the page. Okay, great, a [inaudible] percent lift. Then they went all the way up funnel, across their entire site.

Guy Yalif: This is a view of their homepage. They tried 25 different versions of the text in this bar, which is present on every page, and to everyone's surprise it drove 400% lift in engagement on their site. They would never have been able to do this in a practical amount of time with their lean marketing team. They had more than 700 versions of pages running for one part of one flow for one audience. For them, AI was a great lever. It helped them do a whole lot more. It's like having an army of analysts sitting in the background.

Guy Yalif: I'm going to jump ahead because I see we're short on time, and I'll put the three approaches we talked about before in context. If you're going to do things manually, you can be data-driven and create a great experience for everyone, but you have to keep an eye on your tests and you've got to come up with new ones. Uh, if you want to create great experiences for different segments of your audience, you can use rules, and some would say, hey, yes, that is AI; most personalization vendors saying they do AI do this. Great, you can create a different experience for each part of your audience. And if you want, you can use machine learning to create the right experience for each unique visitor automatically. I'd invite you to pick the right approach for you based on what we were talking about before. And so I would humbly suggest to you, AI can help you accelerate your career, get more done, and help deliver more product engagement and more revenue for your company. And I invite you to use some of what we talked about here today to go do that at home.

Guy Yalif: And now I'll open it up for questions.

Speaker 3: Um, what's your favorite or best book for learning the details of AI? The problem I have is that most books are either about how it's going to affect the economy or they're hardcore code. For someone who wants to learn the details, like you have in your slides, but maybe isn't a stats PhD.

Guy Yalif: The question was, what's a good book for somebody who doesn't want to go deep into the stats but wants the intuition to learn about machine learning? I don't have a good answer, unfortunately. Um, for me, a lot of this very transparently came from my cofounders, who learned a lot of very expensive lessons. Oh, actually, I've got one for you. If you have the time, it'll take maybe a similar amount of time to a book. Andrew Ng is a famous machine learning guy. He founded a couple of companies, he worked at Baidu. Um, very nice guy too. He taught a CS course at Stanford and put the entire thing online. So if you've got a few hours, you can go to YouTube and get it for free, or you can go to Coursera and see another version of it. Andrew Ng, N-G is his last name; look for his computer science course. I actually found the way he described it super helpful. He'll cover the math, but he also will show you the intuition, so you can get either or both. It will take a few hours, but I highly recommend it.

Guy Yalif: Yup. Question, and then I'll come back.

Speaker 3: [inaudible] analyze?

Guy Yalif: The question was, in the Stella & Dot example where there were 700 versions of pages, did the AI create the ideas or did it analyze the results? In our case, for the problem we're trying to solve, the AI is analyzing the results and then deciding what to show. It's actually making the call about what to show each individual visitor. Transparently, as a company, we thought maybe the AI should actually come up with the ideas, too. And when we talked to a hundred potential customers, they said, that's great, it'll come up with them, and then, wait a minute, hold on, you're going to change my website with a machine? Okay, now I want control, I want approvals, I want this additional stuff. And then we talked to them more and they said, I actually have a lot of ideas I'm frustrated that I've never been able to try. So we split it: the creativity and the empathy are human, and the analyzing of results and deciding what to show are well suited to the machine, in our use case. Your use case may be different. Make sense? Yep. Cool. You're welcome. Question?

Guy Yalif: The question was, if you ran 400 versions of a page, how on earth was there enough sample size to gain insights with significance? So that happens to be part of how we're using machine learning. Uh, we are using it to learn more from every single impression. With traditional A/B testing or multivariate testing, to pick an example, you'd be taking your traffic and dividing it 400 ways. We're not doing that. We are intentionally showing winners more often, starving losers of traffic, and using AI to tease out the impact of each individual thing we're showing on a page. For example, if we had a customer that did five headlines, five images and five calls to action, you have 125 combinations. You may not have shown all 125, but you can learn something about all 125. When, you know, headline one, image one, call to action one is shown, and then headline one,

Guy Yalif: image one, call to action two is shown, I can still learn about headline one and image one together, regardless of what's shown in the call to action. I see I'm not giving you a fully satisfying answer. The AI does allow us to learn about things we're not showing, because you can get partial matches and tease out the impact using machine learning. It's very specific to our particular approach. It's not that we violated the laws of stats; we didn't rewrite them. We just chose to solve a different problem using machine learning. If you want to talk more, I'm happy to afterwards. Other questions, please.
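A rough sketch of the partial-match idea: encode each headline, image and call to action as its own feature, and a model can credit components individually from whatever combinations were shown. This illustrates the general technique, not Intellimize's actual algorithm; the shown/converted log is invented:

```python
# Learn per-component effects from combination data, then score a
# combination that may never have been shown directly.
import numpy as np
from sklearn.linear_model import LogisticRegression

N = 5  # 5 headlines, 5 images, 5 CTAs -> 125 combinations

def one_hot(combo):
    # (headline, image, cta) ids -> 15 indicator features, one per element
    vec = np.zeros(3 * N)
    for slot, choice in enumerate(combo):
        vec[slot * N + choice] = 1.0
    return vec

# Invented log of which combinations were shown and whether each converted.
rng = np.random.default_rng(0)
combos = rng.integers(0, N, size=(400, 3))
y = rng.integers(0, 2, size=400)          # placeholder outcomes

X = np.array([one_hot(c) for c in combos])
model = LogisticRegression().fit(X, y)

# Predicted conversion probability for a possibly never-shown combination.
print(model.predict_proba([one_hot((0, 0, 1))])[0, 1])
```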

Speaker 3: This is about [inaudible] model. How are you [inaudible]

Guy Yalif: The question is, how do you evaluate whether a model is degrading over time, and how do you correct for that? I think, um, experienced machine learning folks will tell you, exactly to your point, you always need a holdback group. You always need a group that is seeing what was there before. In my particular company, we happen to do that all the time. And when you are working in an environment where somebody says, I put the algorithm in and now we're using it all the time, in my humble opinion, that should scare you some. You should have some holdback groups so you can see a comparison over time. And maybe that comparison is against the last ML algorithm that you thought was great, and now you're testing the next one. But there should be some comparison, so you can have an apples-to-apples, contemporaneous view. Because if you're doing some before-and-after analysis, there are so many other things that may be coloring the results. Make sense? One last question; I think we're basically at time.
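A minimal sketch of a stable holdback assignment; the 10% fraction is an arbitrary illustrative choice:

```python
# Keep a slice of traffic on the old experience so the model's lift is
# always measured contemporaneously, never before-vs-after.
import hashlib

def assignment(visitor_id, holdback_pct=10):
    # Stable hash so a visitor keeps the same bucket across sessions.
    bucket = int(hashlib.md5(visitor_id.encode()).hexdigest(), 16) % 100
    return "holdback" if bucket < holdback_pct else "model"

print(assignment("visitor-123"))
```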

Speaker 3: Question: are there pools of third-party data we can draw on to train a model? Big companies have a monopoly on the data for deep learning. Um, how can we democratize AI?

Guy Yalif: I have wondered the same thing a bunch. The question was, have you run into pools of data that you could use to train a model? Um, two parts to the answer. One, I have found none, and we thought about that a lot early on in the company's life, too. I came to appreciate what my cofounders already appreciated, which is that you need data that's very bespoke to your problem. In my company, we need data very specific to human interaction with websites, given the things visitors might see, and that's one set of data. Somebody trying to do the housing price thing needs a totally different set of data, right? They need to scrape a thousand MLS pages. Um, and so I share the frustration, and I think you're very right to be thinking about it.

Guy Yalif: Because I think too often, I know in my PM life and also in my marketing life, I'd be like, just go get this data. And then when you really dig into it, you realize somebody is going to spend three months getting it, or $50,000 getting it, and there may be a different way to think about the problem that avoids needing that data at all. It's not a super satisfying answer, but I think practically it's where we are today. Thanks for the questions. Uh, if you want, ping me, uh, @gyalif on Twitter or guy@intellimize.com, and I'll be around for questions after. Thanks for the time.

Speaker 3: [inaudible].

