Key Takeaways
- Embrace a culture of experimentation across your entire company, even when initial ideas don't work out as expected, as with the slider example from the session.
- Personalization can be effective, but it's important to find the right balance to avoid coming off as creepy. Using company names instead of personal names was found to be more acceptable.
- Use data to back up your strategies and win over skeptics within your company. For instance, the personalization experiment led to improvements in demo and free trial CTAs.
- Start small with new strategies, prove their effectiveness, and then expand them throughout the site.
- Tie the results of your experiments back to your company goals and financial outcomes. Attribution is crucial to understanding the return on investment from your experiments.
Summary of the session
The webinar, led by Sarah Fruy, former VP of Marketing at Linqia, focuses on the importance of adaptable, data-driven marketing strategies. Sarah shares her experience changing the North Star Metric as the goals of the business evolved, emphasizing the need to understand the customer journey and what actually drives results. She also discusses the role of experimentation in validating assumptions and formulating hypotheses.
The host facilitated an engaging Q&A session, addressing questions such as how to eliminate assumptions when formulating hypotheses. Attendees were assured that the recording and presentation would be shared after the webinar.
Webinar Video
Top questions asked by the audience
In your opinion, does testing increase CPA for paid media?
- by Maria

Does it increase CPA for paid media? I would argue it can actually reduce your cost per acquisition, because you reduce wasted media spend. I've managed demand gen programs in the past, and I required my team, any time we put a new campaign out there, to be testing something and learning from it, whether that's the right creative, the right messaging, or whatever. Testing can also serve as a litmus test, because flat results are interesting too: if you try something new and get a flat test, it means you're not rocking the boat. So not having a loser is sometimes also a winner, depending on what you're trying to do. But in my experience, testing has driven down my cost per acquisition for demand gen.
Have you ever used experimentation results to prove your boss wrong? How do you structure the results so you can take them to senior managers and make your case effectively?
- by Peter

So I think it's really about making sure you structure your experiment well. Like I said, I had an executive who was passionate about a certain data point in our marketing strategy and never wanted to rock the boat on it. If you have a hypothesis that would challenge that, frame it up and document it. I'll give you a real example so we can speak to this. At my company we used to do a live demo, and it was ingrained in the culture that we had to have the live demo, because people needed to ask questions, and if they couldn't ask questions, they weren't going to move through the sales process; it was a really complicated SaaS product at the time. And I thought, well, I don't always want to talk to somebody when I'm in the buying cycle, so let me challenge this: what happens if I do a recorded version of the demo? We ran an A/B test with a live demo form and a recorded demo form, and you could pick which one you wanted. The recorded demo blew away the submissions for our live demo, by something like 60%. You don't always get results that outsized, but it was a really big win for the organization. Then we iterated on that success: the recorded demo was an hour, so what if we did it in 15 minutes? In 8 minutes? I was able to go back to the team and say we don't need to do a live demo every single week; we can do these recordings and keep optimizing them, and that led to a whole resource center of on-demand learnings. So this one point, where a group of people in the organization were saying, "Hey, this is the best way to do it, we don't want to change it," and I challenged it, actually unlocked a whole new training program. It really comes down to being able to run the experiment and prove whether the person is right. There's always that chance, and then you can at least validate their opinion. But if they're wrong, having an experiment you can document, package up, and bring back to management lets you start to change behaviors around something that may be out of date or incorrect for your business.
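To put a number like that 60% win in context, here is a minimal sketch of how relative lift between two variants can be computed. The visitor and submission counts are hypothetical, chosen only so the arithmetic lands near the figure mentioned above.

```python
# Hypothetical A/B test results for the live vs. recorded demo forms.
# These counts are invented for illustration, not data from the webinar.
variants = {
    "live_demo":     {"visitors": 5000, "submissions": 150},
    "recorded_demo": {"visitors": 5000, "submissions": 240},
}

def conversion_rate(v: dict) -> float:
    return v["submissions"] / v["visitors"]

control = conversion_rate(variants["live_demo"])
treatment = conversion_rate(variants["recorded_demo"])

# Relative lift: how much better the challenger converts than the control.
lift = (treatment - control) / control
print(f"live: {control:.2%}  recorded: {treatment:.2%}  lift: {lift:+.0%}")
# -> live: 3.00%  recorded: 4.80%  lift: +60%
```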
Did you gate the demo? What was the follow-up?
- by Jessica Miller

Yes, we gated the demo, because in terms of our lead gen program it was one of our highest-performing lead gen assets. That said, we had ungated versions that we would send out as well, so that was another thing we were testing. Gating versus ungating assets, whether it's an ebook, a white paper, or a demo, is a really important question nowadays. But at our company, even though we were seeing strong results, when we ungated the demo it was only for people we already knew, because we didn't want to jeopardize the massive volume of leads it generated; it was too important to the business. Sometimes you have to weigh things beyond just the experimentation results: we felt that losing those leads was too risky, so we didn't ungate it at that point in time.
How long do we need to test in order to determine whether an experiment was a failure or not?
- by Diana Gonzalez

Stat sig: statistical significance. Sometimes I would cut a test early because it just looked like it was tanking, so volatility is something you need to pay attention to as you watch an experiment. Is the test leveling out? Early on, maybe before you reach stat sig, you can see the test flattening out and the results getting more even; if the volatility is still bouncing around, you might want to wait. Generally, this will be part of your experimentation program guidelines: what is your threshold for statistical significance? It doesn't need to be 100%; it might be 80%. You'll have to weigh that risk with your team and decide what you feel comfortable with. At one point we wanted to increase the velocity of our tests, and we lowered the statistical significance threshold for my team as a way to do that. So there are different levers you can pull, but again, having consensus with your team on what you feel comfortable with when making decisions is important.
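As a rough sketch of what "reaching your statistical significance threshold" can mean in practice, here is a two-proportion z-test with a configurable confidence level. The counts and the 80% threshold are illustrative assumptions, not figures from the session.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: challenger B vs. control A.
z, p = two_proportion_z_test(conv_a=150, n_a=5000, conv_b=240, n_b=5000)

# The threshold is a team decision: 0.95 is conventional, while 0.80
# trades rigor for velocity, as described above.
CONFIDENCE = 0.80
print(f"z = {z:.2f}, p = {p:.4f}, significant: {(1 - p) >= CONFIDENCE}")
```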
Can you give me an example of a small test you have run versus a bigger, more involved experiment?
- by Travis

Sure. A smaller test: on the homepage, where you saw that "get started" button, we changed the copy on it many, many times. Should it be a demo CTA? A free trial CTA? A contact-us CTA? That's really simple; we're just changing the copy. I can go in and do that myself; I don't need a big team to mock something up or come up with a whole workflow plan. To me, that's a low-effort, easy test to run. Something more complicated would be our forms. Forms are really important for conversion marketing, so when we changed the language and the layout of a form page, I would need a designer to come up with a new concept. Sometimes it was changing the design while the form stayed the same; sometimes we were changing the form fields. As you get into all those buttons and variations, it can be a lot more complicated from a design perspective, because you need to mock everything up with the different labels, design guidelines, and so on. So it's really about looking at the effort: what resources do you need? Do you need to pull in a product marketer, a copywriter, or a designer? How much development work will it require? Changing copy is really easy, but changing your forms will require a developer to actually build out the new form so you can test it, and that takes a lot more effort. Those would be my two examples of a small test versus a bigger one.
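For a sense of how mechanically small a copy test can be, here is a minimal sketch of deterministic variant bucketing for a homepage CTA. The variant strings and the hashing approach are assumptions for illustration, not the team's actual implementation.

```python
import hashlib

# Hypothetical copy variants for the homepage "get started" button.
CTA_VARIANTS = ["Get a demo", "Start your free trial", "Contact us"]

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor so they always see the same copy."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return CTA_VARIANTS[int(digest, 16) % len(CTA_VARIANTS)]

print(assign_variant("visitor-42"))  # stable across page loads
```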
What is the typical overarching North Star goal of experimentation in marketing?
- by Pascal

I wouldn't say there's one North Star Metric that works for every organization; it really needs to be specific to your business. At Pantheon, our North Star Metric for the experimentation program changed over time based on the goals of the business. The last one we had before I left was to increase the number of hand-raisers on our website. We defined hand-raisers as people who engaged with us on chat, contacted us through the Contact Us form, or called our phone line directly. We wanted to increase the volume of people raising their hands because, when we looked at the data, people who reached out to us directly through those channels and said, "Hey, I want to speak with you," were more likely to convert to paid customers. The more we could get people to engage with us directly in those formats, the more likely we were to win their business. So that became our North Star Metric: increase the volume of hand-raiser interactions. At other times in the business it was different. When I first joined, our demo form was treated as the most important transaction on the whole website, and everything had to map to getting more demo fills. Over time we realized that wasn't always the most important path; when we looked at the data, we saw a lot of people coming to us through our pricing page, and that became another really important metric. So the North Star Metric changed over time. It's really about taking a deep dive and mapping out the customer journey on your website. What actually drives the results you're trying to achieve? Is it talking to a salesperson? Is it a click to cart? Maybe you want people to register for your webinars because those are really important to your business. You need to figure out, from your customer journey, what is going to achieve the best results, and then anchor on that as your North Star Metric. But don't have too many; it should be one metric, because you need to be able to deprioritize: "This experiment didn't help me get more hand-raisers. It's a great experiment, so I'll put it in the backlog, but right now we're trying to get people to reach out through our contact channels, and this isn't in service of that." That's how you prioritize work that serves your North Star Metric.
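As a small illustration of how a hand-raiser metric like this might be counted from an event log, here is a minimal sketch; the channel names and events are invented for the example.

```python
# Hypothetical event log; channel names are assumptions for illustration.
HAND_RAISE_CHANNELS = {"chat", "contact_us_form", "phone_call"}

events = [
    {"visitor": "a", "channel": "chat"},
    {"visitor": "b", "channel": "pricing_page_view"},
    {"visitor": "c", "channel": "phone_call"},
    {"visitor": "a", "channel": "contact_us_form"},
]

# North Star Metric: volume of hand-raise interactions.
hand_raises = [e for e in events if e["channel"] in HAND_RAISE_CHANNELS]
print(f"hand-raise interactions: {len(hand_raises)}")                      # 3
print(f"unique hand-raisers: {len({e['visitor'] for e in hand_raises})}")  # 2
```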
If you eliminate assumptions 100%, how else can a hypothesis be formulated?
- by Pascal

So you're trying to prove out a concept that you believe is true. The way I would go about that: if you have surveys or other data from your website, you can leverage experimentation to prove something to be true even when you don't yet have the actual result. Let me give an example. In that second section of our homepage, we talked to a bunch of people and looked at our scroll rates, and we realized people weren't moving past the first part of the site. When we did user interviews, the thing that came up most often was that visitors thought they had already hit the bottom of the page; that's why they weren't scrolling down. So we had qualitative insight, and then we changed the layout to push people down the page, turning that data point on its head and changing the experience to drive the results we wanted. I hope that answers the question. Pulling in data, forming your hypothesis around other pieces of information, and proving it out on your website is a way to formulate hypotheses without basing them on assumptions. Maybe you have a demand gen campaign where one title or call to action is working better, and you bring it over to the website. It might behave differently because it's a different environment: that CTA on your banner ad might work really well out in the wild where your campaigns run, but you bring it in-house to your website and it doesn't work. Then we're back to the assumption problem: I had a data point, this content works really well over here, but it isn't working over there. That's why we experiment: to validate things that are working somewhere else, or to understand that different environments yield different results. And people adapt as things change over time; your experiments will decay. You might have an outlier campaign that does really well, but you can't say it's a winner forever. Say we change one of our buttons to pink, and engagement goes way up because it's so shocking to see a pink button on the website. Eventually people get used to seeing that pink button, so maybe you need to change it to blue next time. That's why you need to be constantly testing: results decay over time, and you have to keep things fresh to keep people engaged and continue improving results for your business.
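To make the decay point concrete, here is a minimal sketch that tracks a winning variant's lift over its baseline week by week; every number is invented for illustration.

```python
# Hypothetical weekly conversion rates for a winning variant after launch,
# against an assumed pre-experiment baseline. All numbers are invented.
weekly_rates = [0.062, 0.058, 0.051, 0.047, 0.044, 0.043]
baseline = 0.040

for week, rate in enumerate(weekly_rates, start=1):
    lift = (rate - baseline) / baseline
    print(f"week {week}: {rate:.1%} conversion, {lift:+.0%} lift vs. baseline")
# The lift shrinks from +55% toward +8%: a cue to refresh the test.
```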
Transcription
Disclaimer: Please be aware that the content below is computer-generated, so kindly excuse any errors or shortcomings.