Episode #9: When is a company ready to start a Conversion Optimization program?

January 17, 2023

Audio only:

What's this episode about?

In this episode, we dive deep into what is needed for a company to launch a successful CRO program. We discuss everything from stats to tech to internal politics that all contribute to the outcome of your company's experimentation efforts.

No video this time, unfortunately. The recording got messed up because of some technical difficulties, so you'll have to do without seeing us in person.

Don't like videos? Read the conversation here:

[00:00:00] Ryan: The reality is it's always kind of messy, and I think everybody who does CRO for any length of time will have these weird situations where you get a winning test and it just doesn't get implemented.

I've had a high-volume website where we got around a 15% improvement at the last checkout step, and three or four months later we're checking in with them: so, have you implemented that change after the test? And nothing has happened, because somebody doesn't like it for some non-data-informed reason.


[00:00:35] Gerda: Hey, everyone. We wanted to talk today about how to know if your company is ready to start a CRO/experimentation program. We work with a lot of different companies, and some of them already have really fleshed-out CRO and UX teams.

Some of them only have one person who [00:01:00] is the main point of contact, working with an agency or consultants. So whatever you want to do and whatever way you wanna go about it, we hope that some of the points we talk about here today are going to be helpful. But this episode is geared more towards people who are just thinking about starting out and don't really have a program yet.

[00:01:22] Ryan: Yeah. Some stuff might apply to other situations too, but mostly towards people who are just thinking of getting started.

[00:01:33] Gerda: Yeah. I think the first and biggest thing to get aligned on, within your company and your team and with yourself, is the short- versus long-term mindset. It seems that a CRO project will always go sideways if people are geared towards short-term wins and wanna see really epic results within the first month, and don't really account for all the [00:02:00] cost and resources that have to go into research and everything else that's needed to even get to the point where you start to see a return on investment.

[00:02:13] Ryan: Yeah, a lot of it ties into how CRO is framed as well. Part of the fault for even having to make this point lies with some people in the CRO industry who market their services as: short term, we're gonna get you the wins, we're gonna increase your revenue within a certain time period, something like that.

And this is a kind of hot topic in the CRO industry as it's maturing: the value a CRO program can bring actually extends well beyond the potential revenue lift you get from implementing winning tests. [00:03:00] The flip side of that, for example, is that losing tests bring you a lot of value as well, because you're saving money by not rolling out changes that have been proven to be detrimental. So if you're only looking at the short-term wins, you're gonna miss that. But there's even more value that CRO can bring in the longer term, beyond just winning or losing tests and what you learn from them.

And it's kind of a mindset shift for the whole organization, or at least the parts of it involved in experimentation, where you're transitioning from operating based on people's opinions to operating based on validated data from multiple sources. So if you're too focused on how much revenue you can get out of this CRO program in month one or month two, that's gonna shift the incentives a little bit, and you're gonna miss out on a lot more potential in the long run.

[00:03:57] Gerda: Yeah, and I mean, that all sounds really good in [00:04:00] theory, but a losing test is still a losing test, right? It has almost this meaning attached to it that you lost out on something. So I guess the bigger question of how to deal with it in real life is: how do you get to that mindset where you see the losses as a valuable thing as well?

Especially when you're starting a completely new program and trying to convince everyone involved that it's going to be worth it, and then the first results you see are not what you expected, or not those gains that you wanted?

[00:04:35] Ryan: Yeah. I've actually seen projects where everyone was doing a pretty good job of getting everybody into it.

And they would do a thing like running a test, especially one that has multiple variations, not just a straight A/B test. The example I'm thinking of was headlines on the homepage: basically playing with what the value proposition should be in the hero area of the homepage.

And so as a way to get some buy-in, what the company did is had people place bets, not real bets with money or anything, just for fun, on what they thought the winning headline would be. And then they ran the test to completion and it was just inconclusive. There was no clear winner.

You know, that's not helping the situation. Everyone's getting all excited about something, and then they just get all disappointed that there's not actually even a winning test, or a clear loser, or anything really. It's just: well, we don't know.

[00:05:31] Gerda: So how does that tie into getting to that mindset where you see value in the program, even if you don't get only solid wins?

[00:05:43] Ryan: Yeah. And it's also kind of hard, us sitting here saying: don't have a short-term mindset. Like, how's that gonna change anything, really? So I think part of it is, if you're the person that's considering this, like, should we hire an [00:06:00] agency, should we start a CRO program?

Then you have a bit of a duty to do some research and understand a bit more how CRO and experimentation can be done at a high level. How are the A players doing this? How do they think about it? How do they approach it? So that you have your own expectations that are better aligned with the actual potential of what you're thinking about doing.

And then if you are, like us, a consultant or in an agency setting, a lot of times you might just have to juggle both of those things at once. Recognize that people are gonna be attached to that revenue in the short term, and try to get some tests going quickly that have good potential for impact, so that you can do both at once. You can say: here's a winning test, we're providing a return already. And also be educating the client as you go on the other benefits, so that when there's a losing test you can show them: okay, here's how much money we just saved you by testing [00:07:00] this. And then also be looking at other benefits of the program as you go.

[00:07:06] Gerda: So what do you think are the other benefits that people should understand in the early stages of starting a program?

[00:07:15] Ryan: Well, I think it's mostly what I alluded to: making the transition away from decisions about the website being made through a hierarchical system, or by people within their specialties just having a strong opinion. Like the UX people having their ideas about what constitutes good UX based on whatever best practices they learned in training and in school. And then that potentially being vetoed by a more senior person who just says: no, we gotta do it this way, because this is how we always did it.

Or because: I think this is a good way to go. Those kinds of little [00:08:00] power struggles every time you're trying to change something on the website can definitely hold you back and cause a lot of friction and political tension in the organization. So just having a more objective way to approach these decisions can make it so much easier on everybody. Whoever this person is that's so sure their idea is gonna work: well, test it then, and let's actually see if it brings the results you're expecting.

And if you're that confident, then you shouldn't be worried about this.

[00:08:29] Gerda: I mean, that's also very easy to say sitting here, because you say this makes things easier for everyone. But the reality is that the person on top who thinks they're right is not gonna see it like that, at first at least.

I've seen cases as well where the person's idea in that position gets overturned by the test, and they don't care anyway. They will find reasons to convince you that you [00:09:00] did the test wrong, or that the statistics were, I don't know, skewed, or the testing tool was broken. They will try to find reasons to argue that their original idea was still correct, because they're so biased towards it, right?

[00:09:17] Ryan: Yeah. And getting testing involved isn't gonna solve all of your problems.


[00:09:22] Gerda: It's not gonna fix organizational problems. It's not gonna change people's personalities in that sense.

[00:09:26] Ryan: Yeah. But what it can do is give these people more ammo in terms of how to approach these conversations with, you know, the senior people who keep overturning decisions and stuff.

Like, if it's basically your opinion against their opinion, you have basically nothing. But the more data you have to show them that your perspective is correct, or whatever the conflict is about, the better.

But yeah, the reality is it's always kind of messy, and I think everybody who does CRO for any length of time will have these weird situations [00:10:00] where you get a winning test and it just doesn't get implemented. I've had a high-volume website where we got around a 15% improvement at the last checkout step, and three or four months later we're checking in with them: so, have you implemented that change after the test? And nothing has happened, because somebody doesn't like it for some non-data-informed reason.

[00:10:24] Gerda: Yeah. And I guess all this is a good segue to the next point we wanted to make, which is that for the program you wanna start to be successful, you need a pretty clear structure around ownership: who is involved and responsible for which step, and especially who makes the last call on launching a test, stopping it, implementing, sharing results, all that kind of stuff.

[00:10:54] Ryan: Yeah, definitely. One way to frame that: the absolute best, [00:11:00] most successful programs I've been involved with, on either the agency or consulting side, were with clients where our point of contact actually had the authority to make those decisions. To actually say: yes, this design is approved, you can move it to development. Once development is done, the test gets launched. Once there's a favorable result, it gets implemented.

And I've had projects where the opposite was true, where a test is done with development and then somebody says: oh, we want to tweak this aspect of the design. So you've spent all these resources, and you have to go back a couple of steps, revise the design, and restart the development to change it.

And when you don't have that kind of clarity of role and authority for the program, it can really slow things down and get in the way of actually achieving positive results.

[00:11:58] Gerda: So yeah, it's almost like there's [00:12:00] this huge balance between wanting to have a lot of stakeholders involved and on board with the program, but at the same time, to push things through the pipeline, you need very clear authority over who is able to actually make that last call.

[00:12:18] Ryan: Yeah, and I think you can address that by choosing at what points of the process you get more stakeholders involved.

I would say the beginning and the end. Make it really easy for almost anybody to contribute a test idea or some research findings. But the middle parts, deciding whether to move ahead with a test, prioritizing tests, getting them through all the pipeline steps, should be under somebody's authority, so they don't have to listen to a whole committee at every stage of the way.

And then, once tests are over, share the results broadly. Figure out lots of ways to get the information to the people who are interested, in a way that's ideal for them to consume.

[00:13:07] Gerda: So one thing is, if you're building this new team and program within your own organization and trying to hire people internally, that is of course really complex as well. But when you introduce an external agency or external consultants, that adds a layer of complexity to the program. On one hand you get the outsourced skills and whatever help is needed to get the program launched, but for that to succeed you need, even more, a clear authority figure from the company side: a main point of contact who has the authority to make these decisions and to grant access to your agency, so they have everything they need to get up to speed. Because that is also a really huge bottleneck for starting the [00:14:00] program that we've seen, where it takes us months to gain access to analytics or whatever, because nobody knows whose job it is to grant that access within the company.

[00:14:11] Ryan: Right, yeah, exactly. In that sense, having an external agency or consultant involved can also help crystallize that focus, because it's a pretty big initiative and it's expensive. Bringing in outside CRO help is not cheap. So that investment can almost provide a bit of support for the idea that: hey, we're spending a lot of money on this, we need to take this seriously. And in order to do that, we need to give somebody the tools and authority they need to be an effective point person to manage the process of interacting with this agency from the internal team's side.

[00:15:00] Gerda: So the third point we wanted to make about starting a completely new CRO program is, I think, almost the most basic one: you have to understand whether you have enough volume, traffic and conversions, for A/B testing, or whether you should focus more on qualitative research if you don't have that traffic. Because you don't want to be putting resources into testing tools and developers for launching A/B tests when there's no way you can actually reach any kind of significance on those tests.

[00:15:32] Ryan: Right, yeah, exactly. And this consideration depends a lot on a bit of an understanding of statistics, like: what does it even mean for a test to have significance or not have significance?

Some people don't quite understand this, and I even get into heated debates on LinkedIn about this exact topic, where people think that an insignificant test [00:16:00] is still okay: if you get a test with a 3% lift and it's not significant, oh well, it's still a small win.

And I'd encourage anybody who thinks that way to just run an A/A test, because you'll see these random fluctuations. If you run an A/A test, you'll get a result with a positive lift, even 5%. And sometimes you'll run an A/A test and the lift will even reach significance, say 95% significance. If that happens once in 20 times, that's completely valid as far as the statistics go. That's basically what the statistics mean: at 95% confidence, one in 20 tests will show a significant result even if there's no difference at all. It will give you a false positive just because of random fluctuation.
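
To make Ryan's one-in-twenty point concrete, here's a quick simulation sketch (not from the episode; the function name and numbers are just illustrative). It runs many A/A tests where both arms share the identical true conversion rate, applies a standard two-proportion z-test at 95% confidence, and counts how often a "winner" is declared anyway:

```python
import math
import random

random.seed(42)

def aa_test_is_significant(n_per_arm, true_rate, z_crit=1.96):
    """Run one simulated A/A test: both arms have the identical true
    conversion rate, then apply a two-sided two-proportion z-test at 95%."""
    conv_a = sum(random.random() < true_rate for _ in range(n_per_arm))
    conv_b = sum(random.random() < true_rate for _ in range(n_per_arm))
    p_pool = (conv_a + conv_b) / (2 * n_per_arm)
    se = math.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
    if se == 0:
        return False
    z = abs(conv_a - conv_b) / n_per_arm / se
    return z > z_crit

# 500 A/A tests on identical pages at a 5% true conversion rate
trials = 500
false_positives = sum(aa_test_is_significant(2_000, 0.05) for _ in range(trials))
print(f"{false_positives / trials:.1%} of A/A tests looked 'significant'")
```

With enough simulated tests, the share of "significant" A/A results lands around 5%, which is exactly the one-in-twenty that a 95% confidence level implies.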

So how do you avoid that? Make sure you have enough traffic, and have at least somebody who can advise you [00:17:00] on whether that's the case and what sort of effect you're gonna need. Because usually, low-traffic testing means you're gonna need a 25, 30, even 40% lift just to be able to tell whether a result is real or just random variation in the numbers.

And if that's the case, you're gonna have to knock it out of the park on pretty much every test. Otherwise it becomes this really demoralizing situation where everybody gets excited, puts a bunch of work into the experiment, and then the end result, like my earlier example, is just inconclusive: we just don't know. A couple of those happening is fine, but if that's happening on almost every test you run, then any support you had for this testing program is gonna evaporate pretty quickly.
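
Ryan's point about low-traffic sites needing huge lifts can be sanity-checked with the textbook two-proportion sample-size approximation (again just an illustration; the function name and the 2% base rate are our assumptions, not numbers from the episode):

```python
import math

def visitors_per_arm(base_rate, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a relative lift
    with 95% confidence (two-sided) and 80% power, using the normal
    approximation for a two-proportion test."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# The lift you hope to detect dictates how much traffic the test needs;
# small lifts get expensive fast at a 2% base conversion rate.
for lift in (0.03, 0.10, 0.30):
    print(f"{lift:.0%} relative lift -> {visitors_per_arm(0.02, lift):,} visitors per arm")
```

With these assumptions, detecting a 30% lift takes on the order of ten thousand visitors per variant, while a 3% lift takes closer to a million, which is why modest wins are effectively invisible to low-traffic tests.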

[00:17:47] Gerda: Yeah. And that's definitely not to say that if you don't have that type of volume to run these tests you can't do CRO. You definitely can. There are lots of other ways, [00:18:00] like doing surveys and talking to customers; there's a lot you can still do. It's just a matter of where to focus your energy at that point.

[00:18:09] Ryan: Yeah, definitely. Because the crux of a successful CRO program is in that research anyway: identifying real customer problems and solving them. And if you don't have the volume to run actual A/B tests, there's still plenty you can do to understand your customers and figure out ways to address those issues. The difference is you just have to implement the changes and keep an eye on things. The one thing I would say not to do, which typically comes up, is when people say: okay, we don't have enough volume for A/B tests, so instead let's just look at when the change happened and look at the numbers after that.

And of course you should always be keeping an eye on things when you're making changes; this type of before-and-after comparison is a kind of time-series analysis. [00:19:00] But the problem with that approach is that you really don't know what sort of external factor could be causing the change. From one week to the next your traffic mix could change, there could be some big holiday, or any number of external factors could affect the results you see. So you just can't have that much faith in that type of analysis.
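
The traffic-mix problem Ryan describes can be shown with a toy simulation (hypothetical segment names and rates, not numbers from the episode): the site never changes between the two weeks, only the share of returning visitors shifts, yet a naive before/after comparison reports a healthy "lift":

```python
import random

random.seed(7)

# Two visitor segments with different true conversion rates; the site
# itself never changes between the "before" and "after" weeks.
RATES = {"returning": 0.06, "new": 0.02}

def week_conversion_rate(n_visitors, share_returning):
    """Simulate one week of traffic and return the observed conversion rate."""
    conversions = 0
    for _ in range(n_visitors):
        segment = "returning" if random.random() < share_returning else "new"
        conversions += random.random() < RATES[segment]
    return conversions / n_visitors

before = week_conversion_rate(50_000, share_returning=0.30)  # week before the "change"
after = week_conversion_rate(50_000, share_returning=0.45)   # week after the "change"

# The apparent lift comes entirely from the shifting traffic mix,
# not from anything done to the site.
print(f"before: {before:.2%}  after: {after:.2%}  apparent lift: {after / before - 1:+.1%}")
```

Since returning visitors convert at a higher rate, merely shifting their share from 30% to 45% inflates the blended conversion rate, and a before/after reading would happily credit whatever change happened to ship that week.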

[00:19:24] Gerda: Yeah. So, I mean, there's a lot that goes into it, and this is kind of a high-level, really superficial assessment, of course. But the main point we would recommend you think about before even starting something is: do you have your organization in a state where people are willing to invest in getting long-term results rather than short, quick wins?

[00:19:50] Ryan: Yep. Having the right mindset about what the purpose of experimentation even is and what the benefits are.

[00:19:57] Gerda: Yeah. Then also have the structure in [00:20:00] place: everyone knows what the pipeline looks like, who's in charge of which step, who gets to make the last call on what happens to a test, and so on. And the third one: make sure you know how much traffic you have and whether you should be doing A/B testing or focusing on other kinds of research, and make sure you have a grasp of basic statistics, or have someone on your team or in your network who can advise you on these topics, because they're pretty crucial for running tests.

[00:20:38] Ryan: Yep, exactly. We're not trying to scare anybody off of starting a CRO program; we just wanted to make you aware of some of the potential minefields, so that you can learn from our experience working on lots and lots of programs with lots of different types and sizes of companies. Especially because the situation can be a lot different in an enterprise-level organization versus [00:21:00] something on the startup side of things, where you have buy-in automatically because the CEO is the one pushing for it.

[00:21:14] Gerda: Yeah. So thanks for listening. If you need help with starting a CRO program, you can find us on LinkedIn or on our website. Everything's linked everywhere, so reach out.

More Koalatative Show:
all episodes