Experimentation ROI: How To Think Beyond Uplift
Ian Freed was devastated. He was at the helm of a product experiment that cost his company $170 million. He had built a team of 1,000 employees and launched one of the most anticipated products of 2014. But the product was termed a ‘fiasco’ and a ‘debacle’ by the media and was shelved within 13 months.
The CEO here is Jeff Bezos, and the failed product was the Fire Phone.
Download Free: A/B Testing Guide
Before I narrate the consequences of this expensive, failed experiment, let’s take a pause and understand what ROI means.
What is ROI?
Simply put, ROI, or return on investment, tells you whether you’re getting your money’s worth from your marketing initiatives.
When the ROI goes out of control, it should set alarm bells ringing.
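The definition above boils down to a one-line formula. Here is a minimal sketch, with hypothetical spend and gain figures chosen purely for illustration:

```python
def roi(gain_from_investment: float, cost_of_investment: float) -> float:
    """Return on investment as a ratio: (gain - cost) / cost."""
    return (gain_from_investment - cost_of_investment) / cost_of_investment

# Hypothetical experimentation program: $50k spent, $65k in attributable gains.
print(f"ROI: {roi(65_000, 50_000):.0%}")  # → ROI: 30%
```

The hard part, as we’ll see, isn’t the arithmetic; it’s knowing what number to put in the “gain” slot.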
Coming back to our story, I would be losing sleep over the Fire Phone if I were Jeff Bezos. Why wasn’t he? The answer, and I quote him, came in a 2018 letter to Amazon’s shareholders:
“As a company grows, everything needs to scale, including the size of your failed experiments. If the size of your failures isn’t growing, you’re not going to be inventing at a size that can actually move the needle.”
Great story, right?
Unfortunately though, if your manager isn’t Bezos, and your company doesn’t have deep pockets like Amazon, you’re most likely in trouble.
Your manager might ask you questions like “Did the variation win?” or “Did this experiment move the needle enough to put it in my town hall presentation?”. If the answer to these questions is NO, the experimentation exercise is deemed futile, or as having no ROI.
Why so, you ask? This is because there is so much literature around us which declares uplift (in conversion rates or revenues) as the only metric to measure the success of experimentation. Everyone expects an uplift from an experiment, and then labels that uplift as either ‘present’ or ‘absent’ – there is no in-between.
This approach reduces experimentation to a point in a checklist which companies want to tick off. This is because they see experimentation as the goose, which will suddenly lay golden eggs (read: uplift).
The truth is that though the industry has conditioned us to think this way, this should not be the reasoning behind any experiment.
Why do we experiment?
Essentially, experiments help in achieving the following objectives:
1. Making better decisions
Read that again. It’s not ‘correct’ or ‘right’; it’s making ‘better’ decisions. Experimentation aids data-driven decision making, which is better than deciding on gut or instinct. Without an experiment, there is no way to objectively reject an idea.
2. Reducing ambiguities and risk of losing business
It’s always better to test your hypotheses than to do a full rollout of your new website or product features. This, in turn, minimizes the risk of impacting all your current business metrics – whether it’s the conversion rate on your website or feature adoption rates.
3. Prioritizing and learning what works and what does not
Building an experimentation roadmap helps you prioritize and differentiate between the must-haves and good-to-haves. Some experiments may not give you an uplift but will provide you with tonnes of learnings for your next one.
The problem arises when ‘HiPPOs’ (the Highest Paid Person’s Opinion) decide which experiment to run first, and their primary expectation is a revenue uplift. The fundamental purpose of experimentation, making better decisions, goes out the window.
Is expecting revenue uplift so wrong?
Imagine a situation where you, as a CRO practitioner, have been running an experimentation program for a quarter or six months, with multiple experiments across different pages of your website. Most of these tests have yielded statistically significant winners, and you are elated.
However, when you look at the overall conversion rate of your website, it is almost the same as a quarter before. What do you tell your manager? How do you justify the investments you made in the CRO product which you championed to buy? Does it mean your optimization initiatives are worthless? (Story of your life? Shoot me an email or tag me on Twitter if I nailed it.)
Imagine the same situation – with a quarter spent in testing and continuous optimization – but this time your overall revenue has shot up by 3%. You are elated as this is your moment. But as soon as you enter your CEO’s cabin or Zoom room, you see the head of SEO, head of performance marketing, and others already showing how they are responsible for this uplift.
It makes you wonder and doubt how you can isolate the impact of the CRO experiments from other variables and proclaim this as a victory for you and your team.
(Sidenote: we organized a webinar to try and unknot the mystery behind attribution challenges with experimentation.)
It is hard to accurately calculate the ROI of your experimentation program because of issues in forecasting, dealing with multiple variables, and running multiple experiments. CRO practitioners deal in averages, and what might seem an impact ‘wave’ for a specific segment of users might just be a ripple in the ocean when you look at global goals.
And you’re not alone. About 53% of CRO professionals cannot calculate the ROI of experimentation.
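The “wave versus ripple” problem is easy to see with a back-of-the-envelope calculation. All the numbers below are hypothetical, picked only to show how a statistically significant lift in one segment can all but vanish at the global level:

```python
# Hypothetical scenario: a variant wins big for one segment,
# but that segment is only 5% of total traffic.
segment_share = 0.05     # the tested segment's share of all visitors
segment_cr = 0.040       # that segment's baseline conversion rate
segment_lift = 0.10      # +10% relative uplift (the "wave")
rest_cr = 0.030          # conversion rate of the other 95% of visitors

global_before = segment_share * segment_cr + (1 - segment_share) * rest_cr
global_after = (segment_share * segment_cr * (1 + segment_lift)
                + (1 - segment_share) * rest_cr)

print(f"global CR before: {global_before:.2%}")  # → global CR before: 3.05%
print(f"global CR after:  {global_after:.2%}")   # → global CR after:  3.07%
```

A 10% segment-level win moves the sitewide conversion rate by two hundredths of a percentage point – a ripple your manager may never notice in the quarterly dashboard.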
Instead of putting resources behind chasing that one ROI number, I would advise using those resources to increase the testing velocity. This is because the ROI on experimentation is amplified by the customer insights you gain with each experiment you perform. That’s what Bezos did with Ian Freed.
The voice recognition software of the Fire Phone could follow commands and fetch information from the cloud. This feature was talked about and loved by customers. It made Bezos curious, and he put Freed on a project to build a team and technology to respond to voice commands. Four months later, Echo was born. Jeff Bezos looked back at the incident and remarked:
“While the Fire Phone was a failure, we were able to take our learnings (as well as the developers) and accelerate our efforts building Echo and Alexa.”
So what is the impact of experimentation?
1. Innovation/Breakthrough
Think about the players who have disrupted industries – Netflix, Amazon, Booking, etc. The things that they have in common are a company-wide culture of experimentation and unmatched testing velocity that allows them to fail fast and move on to the next innovation.
“If you have to kiss a lot of frogs to find a prince, find more frogs and kiss them faster and faster.” ~Mike Moran
2. Reducing the risk of new launches
Most people are hesitant to test out bold customer experience (CX) changes because of the effort they put into making these changes. After all, what if the change leads to a loss of business? Experimentation is the tool that mitigates the risk of wasting time and money by enabling ship/no-ship decisions, while at the same time inspiring people not to be daunted by testing bold changes.
3. The North Star of all experimentation should be customer experience
Ultimately, every experiment is taking you closer to the CX best suited for your end customer. If there is a winner, you know what your customer likes, and if the experiment has no winners, it suggests your customers prefer the status quo.
So, your mission should be to provide the best CX because it will ultimately impact your bottom line. Focus on your long-term vision; a few banners and pop-ups might give you an uplift in the short run, but they might hamper the CX and hit your revenues in the long term.
Conclusion
To sum up, we need to reimagine experimentation ROI beyond just the revenue impact. Don’t ignore the money it makes for you, but don’t make that a priority. The compass of your experimentation efforts should move from ‘experimentation for better revenue/conversions’ to ‘experimentation for better decisions.’
Set up your CRO teams as the learning hubs for your business. The main goal for these teams should be to provide customer intelligence to anyone who asks for it. Aiming for higher velocity will move the key metrics for you faster and deliver growth; endless analyses of the atomic impact of each experiment you run won’t.
PS: Watch this webinar if you’re interested in further understanding the value of experiments that fail in getting revenue uplifts.