After years of building and scaling experimentation programs, I've realized that most organizations fall into one of three maturity categories: beginner, intermediate, and advanced.
I could invent some fun new terminology for these to get more clicks (if you're new here, let me tell you: the CRO community loves little more than arguing over semantics and terminology), but at the end of the day, that's what it is.
You can certainly find more "scientific" reports and assessments on this topic that dissect these buckets with far more granularity, backed by formal research, but for my brain it was easiest to conceptualize them as three simple categories based on lived experience.
Here they are, with their characteristics and challenges summarized in three simple lists:
The beginner bucket:
The team is learning to set up their first A/B test, which immediately reveals a lot of tech debt and a general lack of systems and processes for launching or improving anything on the website
There's no clear owner or vision for the website; different marketers are competing over real estate on the site daily
There is somebody on the team whom we call the "Internal CRO Champion". This person is usually a mid-level marketer tasked with finding an agency to help build a testing program. They fully believe in the cause, yet they usually don't have the authority in the organization to push things forward enough to make significant progress.
(The best-case scenario in the beginner category is when the CRO champion is one of the founders)
Extremely long fulfillment process for implementing anything on the website
Lack of buy-in from upper management - experimentation is seen as a cost or inconvenience rather than a necessary growth strategy
There's a need to build all customer research systems from scratch and figure out how to convert those findings into experiments
Often marketing and web have to report to sales
Analytics is 95% tracking errors and 5% user behavior
The intermediate bucket:
A testing tool is installed and a team is running a test or two every once in a while
Tests are usually fuelled by internal teams' heuristics and personal opinions rather than customer data
Tests are often run on "bad stats" - a lack of understanding of general experiment statistics
When to stop, how and why to do MDE calculations, sample sizes that are off, significance vs. p-values misunderstood, tracking not set up correctly, and so on (see the sample-size sketch after these lists)
Session-based analysis is performed rather than user-based analysis
Using visual editors of testing tools to "build" tests
Relying on testing tool numbers to analyze tests
Analytics is 70% tracking errors and 30% user behavior
The advanced bucket:
A dedicated team with autonomy is running experiments consistently across the site every month
Most ideas are backed by clear customer research findings
There is a clear test approval and prioritization framework in place
Test data is sent to a data warehouse from a testing tool and segment analysis is performed consistently
Test reports are shared across the organization and there is leadership buy-in, with ideas shared across departments
User-based analysis is performed instead of session-based analysis
Experiments are coded by dedicated AB test developers
A clear process is implemented and dedicated resources are allocated for getting tests through design, development, and QA
Good understanding of basic experiment stats like sample size, but the team usually needs help with deeper analysis - for example, how to approach "flat" tests
Struggles with getting the test velocity up and keeping the internal team trained in all relevant areas
The other main struggle is usually internal politics and interdepartmental gatekeeping of information
One of the most common scenarios is a fear of sharing access to certain data streams or necessary marketing platforms, usually under some pretext of security - and not only with external consultants, but within the internal marketing teams as well
Most of the analytics setup is reliable but needs some work to clean up and maintain
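To make the "bad stats" item in the intermediate list concrete, here's a minimal sketch of the kind of pre-test math that teams often skip: estimating how many users each variant needs before a test can reliably detect a given lift. The baseline conversion rate, MDE, significance level, and power below are made-up assumptions purely for illustration, not numbers from any specific tool or client.

```python
# Minimal sketch: required sample size per variant for a two-proportion test.
# All inputs (baseline rate, MDE, alpha, power) are illustrative assumptions.
from scipy.stats import norm

baseline = 0.030          # assumed baseline conversion rate (3%)
mde = 0.10                # minimum detectable effect: +10% relative lift
alpha = 0.05              # significance level (two-sided)
power = 0.80              # desired statistical power

p1 = baseline
p2 = baseline * (1 + mde)

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

# Standard normal-approximation formula for two independent proportions
n_per_variant = ((z_alpha + z_beta) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2

print(f"~{int(round(n_per_variant)):,} users needed per variant")
# With these assumptions: roughly 53,000 users per variant before the test
# can reliably detect a 10% relative lift on a 3% baseline.
```

Running this kind of calculation before launch answers the "when to stop" and "is this sample size even realistic" questions up front, instead of arguing about p-values after the fact.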
The elements of the third bucket almost imply the existence of a utopian fourth bucket where none of these issues exist and everything is perfectly optimized until the end of time. Well, I have not encountered that bucket yet. The nature of this work means, by definition, that it is never done, and new challenges will arise as the organization keeps growing.
I've worked with companies across all three of these buckets, and it's been proven time and time again that the hardest bucket for external consultants to work with is the beginners. This absolutely does not mean that beginners shouldn't or can't do CRO and build experimentation programs; it just means that a certain amount of internal process-building and growing pains needs to happen before bringing in outside help.
Consultants and fractional teams can certainly help uncover holes in the process, but if leadership is convinced that experimentation is more of a nuisance than anything else, there's not much that can be done from the outside in. Culture starts from the top down, always. As mentioned in the list above, the best outcomes in the beginner bucket happen when the service is bought and overseen (at least in the beginning) by one of the founders who is also actively involved in the marketing work.
I'd say the group with the most to gain, and very quickly for that matter, is the intermediates. They've usually figured out some of the building blocks for the program, have the traffic, and just need some direction to make everything work together consistently.
In the intermediate phase, it's not yet that important to be extremely diligent and strict about test statistics; the focus should be on making sure you have the infrastructure, team, and skills to perform consistent research and then maintain a testing program.
Once you've been running at least 2-5 tests a month for several months and you feel your processes are in a strong place, it's time to focus on getting that velocity up. What tends to hinder velocity in the enterprise-level companies that reach this stage is the understanding of statistics. Planning your tests ahead of time, understanding what kind of effect you can realistically hope to detect, and deciding where you can run tests simultaneously and how many per month become more and more important. This is where an external consultant can come in super handy: someone to look at things with fresh eyes, identify your strengths and gaps, offer additional training for the team, and help you make sense of the data from your many tests.
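As a rough illustration of that kind of planning, here's a small sketch that turns traffic into a reality check: given an assumed amount of weekly traffic per variant and an assumed baseline conversion rate (both made-up numbers), it estimates the smallest relative lift a test could plausibly detect over different run lengths.

```python
# Rough planning sketch: given how much traffic a page actually gets, what relative
# lift can a test realistically detect in a fixed number of weeks?
# Traffic, baseline rate, alpha, and power below are illustrative assumptions.
from scipy.stats import norm

weekly_users_per_variant = 8_000   # assumed traffic allocated to each variant per week
baseline = 0.030                   # assumed baseline conversion rate
alpha, power = 0.05, 0.80

z = norm.ppf(1 - alpha / 2) + norm.ppf(power)

for weeks in (2, 4, 6, 8):
    n = weekly_users_per_variant * weeks
    # Normal-approximation for the smallest absolute difference detectable
    # with n users per variant (using the baseline variance for both arms).
    abs_mde = z * (2 * baseline * (1 - baseline) / n) ** 0.5
    rel_mde = abs_mde / baseline
    print(f"{weeks} weeks (~{n:,} users/variant): detectable lift ~ {rel_mde:.0%} relative")
```

Numbers like these make the velocity conversation concrete: they show whether a page has enough traffic to justify a test at all, and how many simultaneous tests a month can realistically support.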