Episode 15:

Why customer research beats best practice CRO tactics

About The Episode

Today's episode was a real eye opener!

This week I chatted to Oliver Palmer. Oliver has over 10 years of experience in eComm and is an advisor and consultant for eCommerce leaders seeking to accelerate growth with better customer research.

Now if any of you know me, you'll know I love to give a good list of tactics to implement to make a site perform better. But today - I got seriously challenged on that by Oliver - who made me realise that 'best practice' can only go so far. What's really needed is research - to really understand who your customer is, why they are on your site and what they REALLY want.

We dive deep into the ways you do this - with lots of examples of how you can do this 'at home' on your eComm site.

Ready to scale your eComm Store?

My agency (Webtopia) and coaching programmes have grown over 80 eCommerce businesses - and generated over $20 million in revenue for our clients in the last 12 months.


Let us transform you into the successful, impactful eCommerce brand you deserve to be.


At Webtopia, we turn purpose-driven product businesses just like yours into profitable brands using the power of Facebook™, Instagram™, Google™, TikTok™ and Klaviyo™.


But we don't just 'run ads' - we skyrocket your growth.


Full Episode Transcript

Hello and welcome back everyone. So today's episode, I'll admit, was a real eye-opener for me. This week I chatted to Oliver Palmer. Now Oliver has over 10 years' experience in e-comm, so he's probably been in e-comm specifically longer than me. And he advises and consults e-comm leaders, fast-growing e-comm brands and some really big enterprise-level e-comm brands as well on how to accelerate their growth with better customer research.

Now, if any of you know me, you know I'm a fan of a bullet-point list of 10 tactics you can implement to increase your conversion rate. Now, these tactics are based on my experience in the industry and what we're seeing with our clients, and they've definitely been known to work.

However, Oliver in this episode really challenged me on this kind of idea of a generic bullet-pointed list that works the same for everyone. And he challenged me by saying that what really works in moving the needle on your conversion rate is diving in deep to understand your customer: surveying them, interviewing them, watching what they're doing on the site, looking at the data, really understanding what might be blocking them from converting, why they've come to your site in the first place, what they're trying to achieve.

And therefore what you can change on the site to make it convert in a much, much stronger way. This research could also help you in your product development; it could help you with your ads and your email marketing, and it's sadly an area that's massively neglected. So let's dive in. We'll go through lots of examples of real-life applications of these ideas so that you can go away with some strategies you can follow to implement some of these same ideas in your business.

Let's dive in.

Hi Oliver and welcome to the podcast. It's great to have you here. Hi Jessie. Good to be here. Yeah. Awesome. So my first question to you is, why are you so against best practice advice, and what's your alternative? Every website is so different, and I can see, over the course of many years, that when they're in a particular industry, sites just become exactly like each other. Everybody's copying what everybody else does and no one really knows what works, but they just sort of have an idea that something one of their competitors is doing is the right thing to do. And it's often not. And I think the same thing is the case with sort of generic best practice.

I think a far more productive thing to do is just actually do some research. It always surprises me the lengths that people will go to to avoid actually partaking in user research. Yeah, I mean, it's something I bang on about a lot with the people that I coach and our clients: you need to talk to your customers and find out why they're coming to you, what's driving them to make a purchase, what it is they love about the product.

But doing that in a systematic way is really important. And it's interesting having that lens on when you then make decisions about how to optimize your site. So how do you go about that process? So the first thing that I do with any client is we have a really structured interview that lasts about an hour and a half. I ask them a lot of questions about their business, their goals, you know, try and really get some good context about what they're doing.

Then I ask for a login to their analytics, and also their ad platforms, and I go away for two or three weeks and just dig through everything to try and reconcile the two pictures. And, you know, often what you find is that just by not being in the business, you have a perspective that everybody who's in it every day doesn't have.

Yeah. So people are always really surprised by the things that I come back with. So I just try and take a very macro look at everything that's going on, and I like to supplement that with user research as well. You know, talking to a small sample of customers, putting an intercept on a client's website and getting people who don't purchase to opt in for research and finding out why they're not buying. Do you use a particular tool for that? Do you have any recommendations about what kind of questions to ask at that point? Yeah, I sort of grudgingly use Fne, which I think is a little bit too expensive for what it is, and it's a bit kind of overblown as a tool.

Mm-hmm. You could just use a survey tool really, to get people to opt in. Typically it'll just be where they're at, why they came to the site on that occasion, finding out how often they visit, what else they do. You know, I did some work recently with a wine shop in London, and they have an abysmally low conversion rate. No one knew why that was the case. Turns out, when we talked to a lot of the people that visit their site, they love wine. They love seeing what's new.

They love the in-store experience. And so they visit the website a lot, never intending to buy. And, you know, these are the sorts of things that you often need to talk to people to discover: what that particular segment is, how you can cater to them and how you can measure it as well.

So what's the action from that particular example? So you discover that people are coming to the site for information purposes, really. How did you then change the situation so that the conversion rate is where it needs to be for that brand? Well, we found some really interesting stuff in that research.

So this is a shop that specializes in natural wine. They're based in East London. Most of their customers are in East London, even their online customers. Yeah. And, you know, if you pay any attention to the wine space, you see that wine subscription club type things have proliferated over the last couple of years.

Like every man and his dog offers one. These guys have never gone down that path because it didn't feel very authentic to them. They offered these really curated mixed packs of wine, which they felt was, you know, a more genuine offering.

And one of the fascinating things we discovered through this research was that people thought those mixed packs, which they put a lot of love and care into, were them trying to offload dodgy stock that no one wanted, sort of a lucky dip as it were.

What we discovered, sort of inversely, is that people really trust wine subscriptions in a way that absolutely was unknown to us, and that they really value wine subscriptions because it makes it easier for them to buy. So what we discovered in that research was that people are coming to the site because they love wine and they love seeing what's there. And they value the in-store experience.

But one of the reasons we found that they value the in-store experience is because the online experience isn't good enough: wine is too hard, it's impenetrable. You go into a wine shop and, you know, unless you are a sort of rare expert who knows exactly what they want, you'll typically talk to somebody and say, well, I like this and I like this.

What do you recommend? And so the action we took from this was: we need to bring more of the in-store experience online. So adding live chat; they're experimenting with doing in-store video chat; and we're just about to launch a kind of wine subscription. We're calling it more of a, you know, wine consultation: a one-on-one consultation just like you would have with somebody in store.

And then, you know, deciding what your budget is, and they send you wine every month. Because we know that there are these people who love the product, but it's just harder for them to buy online. So rather than, you know, purchasing only when they come to London, or trekking across from South London to East London, we can sort of bring the store to them and really capitalize on the site and that traffic.

That's fascinating. And I think that's the thing, isn't it? Like people have got more comfortable online during the pandemic. But actually we need to mimic as much as possible that human experience. I think brands that like expect people to just come off an ad and buy straight away, especially when it's hyper competitive as it is now, they're not gonna survive.

They need to figure out what makes them different, and how they bring whatever their unique brand points are to the online experience and make it more enriching. Awesome. So in that example, you asked questions on the website and you did interviews as well, in order to come up with a hypothesis about what would improve the problem that you were solving, which was the conversion rate.

So once you've got some hypotheses, you then do some experiments and use data to decide whether those hypotheses are right or wrong. Do you want to talk us through a bit more of that process?

Very early in my career, I was a sort of in-house e-commerce manager at what sounds, in retrospect, like a completely ridiculous startup, which started around 2007. It was a magazine superstore; it still kind of exists, though it doesn't really sell magazines anymore. It was founded by a dude who left McKinsey, raised a lot of money, then went back to McKinsey.

But yeah, so I was the first sort of in-house e-comm manager there, and along the way I discovered a tool called UserTesting.com, which you're probably familiar with.

It still exists, but it's a lot more enterprise-focused now. Back then you could give them something like $50 and they would get somebody to complete tasks on your website for you. And so you'd do five of these. You could, you know, select a sort of demographic group; at that time it was only Americans.

And I would select the sort of lowest-income, least-educated Americans, in the hope of, you know, finding the biggest usability issues. And you would get back a video with a screencast of what they did on your site, and they would narrate their thoughts aloud. There are a lot of tools like this out there now, in the sort of genre of unmoderated remote user testing.

This, I think, was fairly new. And we ran these sorts of tests and we said, okay, can you find a magazine that you like, go through the purchase process right to the end, don't enter any credit card details. And five out of five people added the product to their cart, and they saw a message: you'll receive your first magazine in eight to 12 weeks.

Now that's standard for the magazine industry because, a, it's antiquated; there are, you know, dot matrix printers in warehouses running very old systems, so it's pretty slow to take orders. Also, some magazines come out once a month, some come out every two months.

Depending on the cycle. So back then we made a really conscious choice to be really transparent about it. We thought: what would Amazon do? Amazon would be really transparent. They'd tell you it would be eight to 12 weeks before you get your first order. Every single person looked at that and said, eight to 12 weeks? That's a joke.

No way am I buying from this site. And we had a look at all of our competitors; no one else was saying that. We weren't A/B testing then, but, very trepidatiously, we removed it and kept a close eye on, you know, complaint volumes and so on. Complaints didn't go up, but conversion went up by a significant amount.

And that's just the kind of thing you would never realize if you didn't, you know, get some user eyes on your product. Yeah. So that kind of real-user, on-your-site testing is something that you would definitely recommend a brand do. What sort of level of traffic do they need to have to be experimenting like that?

Or does it not matter? Could they do that very early on? Well, I think everybody should be doing that sort of user research no matter what their traffic is, really. If you don't have the traffic, you can get it from elsewhere. You can use tools like, you know, UsabilityHub, and there are other sites that will allow you to test even a prototype before you've built something.

You can test a landing page in design form. You can recruit from existing panels. You could run ads to recruit. There are so many ways that you could do it; I think just the sooner the better. And in terms of your question before about when people should start experimenting, I don't think anybody should be experimenting before they really feel like they've exhausted user research.

I think they're gonna get so much more value out of that process. Yeah. Okay. So user research is the first piece in the journey towards optimizing your conversion rate, and that's before you should think about doing experiments. A hundred percent. It doesn't sound as cool or, you know, interesting, but I think it's just so much more valuable, and then it naturally feeds into experimentation as well.

All of the hypotheses that you develop out of your user research can feed into your experimentation. Yeah. And so that kind of live user research, doing things on the website, that's to improve the UX so that they convert, is that gonna help with things like the positioning of the product, the messaging, how you explain the value to people as well?

Yeah, I mean, I think it depends on how you frame the research. So when I run these sessions, we'll typically have an outcome in mind. You know, it might be somebody that wants to expand into a new market, or it might be a particular question in mind.

They might wanna discover why conversion's very low, or they might have some theories. And that will feed into the sort of exercises that you run with people. So at a bare minimum, I think, to scoop up UX issues: asking people to go through the process of purchasing a product, watching them and getting them to verbalize their thoughts as they go through it, that's gonna help scoop up a lot of usability issues.

But additionally, it depends on what other questions you ask them and what other sorts of insights you find. I mean, with that wine shop example, we very much got some insights around the positioning of their mixed pack products and how that relates to subscriptions, because we were keen to understand that.

So we framed those questions in the research. Yeah. Okay, interesting. So how do you respond to, like, e-comm brand owners who wanna apply this kind of cookie-cutter best-practice approach? And I will put my hand up and say I'm guilty of giving this kind of advice to people that I help.

So we know there's kind of a playbook to e-commerce. We can see really successful, high-scale e-comm brands, particularly in the US, who are doing things very similarly to each other. You know, they have a Shopify site, they have lots of social proof, they have a certain type of product page.

The checkout's very standardized. You know, there are just certain elements that I would always look for on a site to make sure that it is convincing to the user and it looks trustworthy and is gonna convert. So what's your answer to that? Like, am I going about it the wrong way by advising them to follow the formula?

Or is it about following the formula as a starting point and then iterating on it? How do you approach that? The formulas definitely exist for a reason, and certain best practices exist for a reason. I think another way of thinking about best practice in e-commerce is simply as design patterns.

What design patterns do users expect? Where do they expect the add-to-cart button to be? Yeah. How do they expect the checkout flow to work? I think it absolutely makes sense for there to be, you know, some standardized thinking about that. I think Baymard does a great job of identifying, you know, what those best practices are.

And I think they're a great starting point. But my... Who's that, sorry? The Baymard Institute. Okay. They release these amazing reports on different aspects of e-commerce sites.

So it might be flyout menus or product pages or selecting sizes and colors for apparel sites, and so on. And they do a lot of research and compare a lot of different sites. And yeah, it's a great resource; definitely worth checking out. Interesting. But my feeling is that, you know, best practice is a great place to start, but I don't think you should end there.

I think best practice plus validation is a great place to be. Mm-hmm. Yeah. And ultimately you could have the most perfectly cookie-cutter Shopify site in the world, but if the offer itself, how the value is being presented and packaged, doesn't resonate with the customer, then it still won't convert.

And I think that's what you described with the wine business. Everything was probably all there on their site, but people weren't converting, because what they were being offered wasn't solving the problem that they had with buying wine. And you figured out how to understand that problem and then how to package the offer in a way that solves their problem and beats their objections.

I think it's the same with the magazine example, you know, the eight to 12 weeks delivery. It's just something we would never have been aware of. Another example I often like to cite when talking about the benefit of user research is from when I was working at EE. One of the first things that I did when I started there was to run some remote unmoderated usability tests.

I had seen that insurance sales online, so handset insurance, phone insurance, were very low, almost nonexistent. And I sort of dug around within the business and found out that in store they were considerably higher. And I thought, well, that's interesting.

Let's have a look at this. And so I ran a test with the British site then called WhatUsersDo: put five users through it and said, find a handset that you want, add insurance to it and proceed to the checkout. One of those people, I think, managed to find insurance.

There was one guy, he was in his sort of seventies or eighties, this old northern guy. He spent about an hour, maybe two hours, recording his screen. He was searching on forums, he was Googling; he was determined not to give up without finding insurance. And what I discovered from this, and this sort of validated my hunch, was that insurance was too hard to find.

And the reason it was too hard to find was that in the telecommunications business, you know, they're selling a commodity product, so everything's dressed up in brand. And so EE at that time didn't call their insurance product insurance; they called it Clone Phone, because it also included, you know, data backup.

They give you a new phone with all your old stuff on it. So there was an accordion in the checkout that said Clone Phone, but nobody clicked on it, because it wasn't called insurance. Why would you? They just didn't know what it was.

So how do you go about running a good experiment that gets you the data and information you need to really move the needle on conversion rate?

One of the most important things when running a good A/B test is to be very systematic in how you actually design the experiment.

You know, I often see hypotheses that say things like: by increasing the size of the hero banner, we expect to increase engagement. Which, on the face of it, kind of doesn't sound awful.

It's like, okay, yeah, right, if you're not really thinking about it: all right, that's good. But when you come to analyze that experiment, you're in just an absolute world of pain. You say, well, what's engagement? And you end up chasing your tail looking for metrics which support your existing hypothesis.

Mm-hmm. So what is engagement? Is it conversion? Is it clicks? If I'm really desperate, is it bounce rate or time on site? It's an absolute nightmare. So you need a really strict method of forming your hypothesis. There's a guy called Craig Sullivan; he's on Twitter as Optimise or Die.

He's a great thinker in the optimization space. He's been doing a lot of work on this over the years, and he's got this series of Medium articles which he calls his hypothesis kit. And it's quite formulaic; it offers guardrails, really. You know, I talked before about the necessity of discovering concepts elsewhere and then validating them in experimentation.

Mm-hmm. He bakes that into the hypothesis format. So the hypothesis format starts out with something like: because of the thing that we observed in this research, we are going to do this. My format differs slightly from his, but I just boil it down very simply to: by doing X, we expect to see Y, as measured by Z.

Yeah. And that gives you a really simple framework for analysis. When you come to see whether that test succeeded or failed, you're just looking at Z: what did Z do? Craig adds another layer onto it as well, which says: which will benefit the business because of whatever. Yeah. Which is another thing that I think people fail on.
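To make that format concrete, here's a minimal sketch (an editorial illustration, not something from the episode or from Craig Sullivan's actual kit) of writing a hypothesis down as "by doing X, we expect to see Y, as measured by Z", with the primary metric fixed before the test launches; the example values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One experiment hypothesis, written down before the test launches."""
    observation: str       # the research finding that motivated the change
    change: str            # X: what we are going to do
    expected_effect: str   # Y: what we expect to happen
    primary_metric: str    # Z: the single metric the test is judged on
    business_benefit: str  # the extra layer: why it matters commercially

wine_club = Hypothesis(
    observation="Interviewees said buying wine online feels impenetrable without in-store advice",
    change="Offer a one-on-one wine consultation subscription on the site",
    expected_effect="More repeat online purchases from existing in-store fans",
    primary_metric="online repeat purchase rate",
    business_benefit="Captures demand from customers who currently only buy in store",
)

# At analysis time you only look at the primary metric -- no hunting for
# 'engagement' proxies after the fact.
print(f"By doing: {wine_club.change}")
print(f"We expect: {wine_club.expected_effect}, as measured by {wine_club.primary_metric}")
```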

You know, oftentimes people in experimentation get a little bit caught up in the fact that they can run, you know, umpteen variations of something and find out. A good example of this is, some years ago I was doing work with a large office supply store here in Australia, and they said, well, we need to increase our email sign-ups.

We've got this target, we need to get another 500,000 emails by the end of the year, or something like that. So I said, right, okay, well, I guess what we're gonna do is, you know, we know that email popups work. We know that they're kind of annoying. Let's try that for a start, and we'll see how that impacts conversion and other metrics, we'll see how it performs, and we can try some other placements as well.

But for whatever reason, these guys got really caught up on the idea that, because they could experiment, they wanted to try everything. They said, but what if we put one in the footer? What if we put one, you know, on this page, and here and here and here? And this is a trap that people often fall into, where they think that because they can experiment, because they have this sort of, you know, scientific lab of a website, they should test everything.

And it's just theoretical nonsense, you know? You need to start with something that you practically think is going to have a good business impact. There's no point in testing everything.

So in that situation, you would say: what's the most likely thing that's gonna move the needle? We know that popups work because we've got data on that from somewhere else. Therefore, we're gonna do a popup and see if it moves the needle. And then if it does, and if it also impacts positively on the business, if these new emails we're getting are actually serving the goal of giving us more money or whatever it is they're measuring, then we would experiment with additional places to collect the email address and keep iterating.

Until we've reached the peak point. As opposed to changing loads of different places all over the website and then seeing if, as a whole, that moves the needle, right? Running a mega monster experiment that will take forever to achieve significance. You know, I think what you said there about being iterative, that's the point.

I really feel like experimentation is kind of the ultimate extension of agile. You know, agile is just about building something quickly, quickly listening to your customers and using that feedback to iterate and move on to the next thing. Yeah. When done properly, experimentation is just a very systematized way of running an agile program of work.

Yeah. So back to the wine example, because you had some quite generalized observations from the users: what they want and what's holding them back. So how did you turn that learning, that general information, into specific things that you're gonna change and test for that client?

Well, yeah, it's a process, writing up research. You know, I typically use five users. You're probably familiar with that famous finding from Jakob Nielsen: five users will find, you know, around 85% of usability problems.

Yeah. You know, anything's better than nothing; one is better than none. But yeah, five is a nice round number and, you know, you will get there. There's enough of a sample that you'll get some disparate views, but then it really is a process to take the research and distill some meaning from it.
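For context, the arithmetic behind that five-user rule of thumb (an editorial aside, not from the episode) comes from Nielsen and Landauer's model: the share of usability problems found by n testers is roughly 1 - (1 - L)^n, where L is the chance a single user hits a given problem, around 0.31 in their original studies. A quick check:

```python
# Nielsen/Landauer rule of thumb: proportion of usability problems found by
# n test users, assuming each user independently surfaces a given problem
# with probability L (~0.31 in their original studies).
def problems_found(n: int, L: float = 0.31) -> float:
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10):
    print(f"{n:>2} users -> ~{problems_found(n):.0%} of problems")
# Five users comes out at roughly 85%, which is where the famous figure comes from.
```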

It's a fairly specialized and involved process to actually draw out those themes and frame them as actionable next steps. But it's just a matter of listening and synthesizing. Mm-hmm. So you do the research. Then from that you have the hypotheses. Then you have the plan of action in terms of what you're gonna test.

Then you get your data back based on the criteria that you've predefined prior to the test. And if it's successful, then you continue to build and iterate on it. Yeah, I think that's exactly what I should have said, Jessie. That makes a lot of sense. Cool. Okay. So what advice do you have for e-comm brands that are trying to fit this in amongst everything else they're doing?

You know, how much of a priority should it be, and how should they think about what's gonna give them the biggest wins the most quickly? Mm. I mean, I'm just, you know, evangelistic about this. I think everybody, no matter how busy they are, should be doing user research. There are so many wins to be had there, and, as I mentioned earlier, people are so resistant to doing it. And I can sort of see why; it's a little bit uncomfortable sometimes sitting down and hearing hard truths from your users about the website that you've put a lot of love and energy into.

Yeah. There's a great quote from Google's longtime analytics evangelist Avinash Kaushik that used to be on UserTesting.com, and it went something like this: run some tests, cry, make the changes, make a million bucks. Yeah. And I think you just gotta go through the pain.

But I think that's one of the reasons why people don't like talking to users. I think also part of the appeal, I find, is that people want to do A/B testing because it seems, I don't know, less messy. You know, you just put a tag on the site, make a change, and you get back some results. Sitting down and talking to people, understanding their needs and wants and desires, and uncovering the flaws of your site...

It's a messy process, and yeah, I think it can take a particular type of person to be able to do that as well. Mm-hmm. You need to be able to listen, you have to be sort of empathetic, and, you know, this is a mistake that people make.

It's really important to get somebody to do the research who's not gonna take offense and who's gonna be able to get users to actually give you their best insights.

There's a great book by the team at Conversion Rate Experts called Making Websites Win. And one of the tips they have, which I always use: you know, if you were responsible for building a wireframe or designing a website or whatever, people can often sense that and they don't want to give you good feedback.

And the Conversion Rate Experts guys say: make a point of saying we didn't design this. The ruder you are about it, the better. And they even say, make a point of saying that you're actually a little bit angry with the agency that designed it, and then they're really gonna let rip. Because you're all in it together then, like... Yeah.

Against the enemy. That's so interesting. It's just a slightly different frame, isn't it? Yeah. Wow, this is really food for thought; you've really got me thinking. Yeah, like changing the button color and running an A/B test hardly ever yields a clear result anyway. And probably the reason is that you've not gone deeper into the fundamental reasons why someone would wanna click the button in the first place.

I mean, I have run button tests. I have to hold my hand up: more or less the first ever test I ran was a button test, and like an idiot, I thought I had a statistically significant result because the tool told me I had a statistically significant result. And that's something that happened for a long time.

You know, statistics has been really misused in experimentation over the years, and it has led to these ridiculous case studies that don't exist so much anymore, but for a long time were really promoted by agencies and software vendors that said: yeah, we changed a button and we got a 20% conversion uplift.

I mean, it just didn't happen. And if it did happen, you were fixing something that was obviously broken; the button didn't have enough contrast or something in the first place. Yeah, something really obvious. But more likely you misinterpreted the stats, you ran a very low sample size and you didn't truly get a statistically significant result.

Another thing I caution anybody that's really keen on getting into A/B testing about is just to make sure they're rigorous in their statistical planning before they start. When you create the hypothesis, go into a stats calculator and decide what your minimum detectable effect is. You can punch in your traffic numbers, punch in your baseline conversion rate and say, if 5% is a good uplift for you, punch in those numbers and it will say: we think it will take six months, or 20 years, or infinity, to get that result.

So you can just save a lot of pain upfront by doing that planning, and decrease your risk of seeing a false positive. Yeah. Interesting.
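As a concrete illustration of that planning step (an editorial sketch of the standard two-proportion sample-size formula, not a specific tool Oliver mentions; the traffic and baseline figures are made up), here's roughly what a stats calculator does under the hood:

```python
from math import ceil, sqrt
from scipy.stats import norm

def visitors_per_variant(baseline, mde_relative, alpha=0.05, power=0.8):
    """Approximate sample size per variant for a two-proportion z-test.

    baseline     -- current conversion rate, e.g. 0.02 for 2%
    mde_relative -- minimum detectable effect as a relative uplift, e.g. 0.05 for +5%
    """
    p1 = baseline
    p2 = baseline * (1 + mde_relative)
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # statistical power
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Hypothetical shop: 2% baseline conversion, hoping to detect a 5% relative uplift,
# with 50,000 visitors a month split evenly across two variants.
n = visitors_per_variant(baseline=0.02, mde_relative=0.05)
monthly_per_variant = 50_000 / 2
print(f"~{n:,} visitors per variant -> ~{n / monthly_per_variant:.1f} months to run")
```

With those made-up numbers it comes out at roughly 300,000 visitors per variant, around a year of traffic, which is exactly the "six months, 20 years, infinity" warning in practice.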

Yeah. So my background is in enterprise-scale experimentation; I started doing this in the UK about 10 years ago. One of the things that I typically tell my clients is that they really shouldn't be focusing so much on experimentation. I think there are absolutely benefits in it at a certain scale.

If it's never been optimized, if no one's ever done the research, you fix those things and then nothing happens for a while. And I think there are really diminishing returns there unless you are very large. I did work for many years with a department store in Australia called Kmart. Quite different to the American Kmart, as in not bankrupt; very successful.

But a lot of the best experimentation I did for them was really helping not to increase conversion rate, but to validate investment decisions and to help them, you know, work out whether they should be doing more or less of something.

And so this is the kind of luxurious space you get into when you have, you know, millions and millions of visitors every month. You can run small experiments, where a 1% conversion lift for them is, you know, many millions of dollars. And when you've got that sort of scale, you can work to very low tolerances and see what happens with a small lift, but you can also do other things.

So we ran a long stretch of experiments where we just tried not doing stuff. An example of this: somebody in their merchandising team would always coordinate with marketing to get these images that they put on category pages, these sort of lifestyle-type images of people wearing their clothes, enjoying their products. And it was a nightmare for them to coordinate, because of very high stock turnover.

Their compliance team would say that they couldn't show a product in there which wasn't in stock anymore. They had to work with marketing, they had to take the images down, they had to pay for them, and they had always done this under the idea that it was good for conversion. Somebody just said we should do this.

You know, people like seeing lifestyle images in context, right? That's gonna be good for conversion. And we worked out that one of the merchandisers was spending 40 hours a month on this, all told. So we just tried not doing it. And nothing happened. We put a few million people through the experiment and there was no impact on any of the metrics that we tracked.

So you said something interesting there about not doing experiments.

After a certain point there are diminishing returns. Can you talk us through when it is a good idea to do experiments and when it's not? At what point in a business's evolution should they start experimenting, and when should they stop? I'm a real advocate for user research, and the best experimentation is always grounded in research.

Mm-hmm. People often make the mistake of using experimentation for discovery, so they'll come up with what they think is a really clever idea and then they'll test it. I used to do this as well. I thought, this is great, I've got all these clever ideas, the conversion rate's gonna go up so much, and I'll test it and I'll be able to prove how clever I am. And every single one of them would fail.

And, you know, over the years I've just learnt that that always happens because it's not actually a great idea; it's just something I thought of. I think most business ideas aren't great. Most business ideas don't actually have a positive impact.

You know, Kahneman and Tversky talk about this a lot. Daniel Kahneman talks about it in Thinking, Fast and Slow, when he tracks, you know, investment fund managers over a long time and realizes that it's basically just chance, and they're being rewarded for what looks like skill. I saw a stat the other day: 12%.

Only 12% of investment fund managers outperform the index. Right. So, what is that? 88% are underperforming the index. Yeah. And I think that's true in all business, but for a long time we haven't really been able to measure whether things have an impact. And I think people also aren't really incentivized to measure everything.

And when you have this unique situation, as we do now, where you can measure the minutiae of what happens on your website, you realize that very few things actually move the needle, and it's always surprising what they are, and they rarely come from within your own brain. So I think you greatly increase your chances of success if you test concepts that are drawn from research data and then simply validate them at scale.

So where can we follow you? Yeah, well, I blog at oliverpalmer.com intermittently, and I'm also on LinkedIn. But yeah, the best place to find me is oliverpalmer.com, and there are links to everywhere there.

Awesome. Well, I'll share all those links in the show notes, and thanks so much for coming along. I got a ton of value and I'm sure our listeners did too. Great. Thank you very much, Jessie.

© Jessie Healy 2024. All Rights Reserved.