Every start-up works on knowing who its target is, so it doesn't waste time on prospects that aren't in its target market. So how do you find your target market? One of the most common answers today is a category of web site pages that goes by a lot of different names.
- Landing Pages
- Squeeze Pages
- Video Pages
- Sales Pages
- Long Sales Letters
As a WordPress user, you know there are great solutions out there, both themes and plugins, that will help you create these pages. Each of them will help you collect leads and evaluate your product-market fit. So if you've started down this path, you've likely heard that the hallmark of doing this right is something called A/B testing. In case you don't know what A/B testing is, I'm going to walk through it real quick before highlighting what not to do.
What is A/B Testing?
A/B testing is really a simple concept. It means that instead of creating a single offer for your target audience, you create multiple alternatives. These alternatives are tested alongside your original so you can see which one performs best. That's it. It's a contest, really. If you're selling a book (like I am), you could create one version of a sales page and then make another version. When you look at the results from each, you'd know which one was better. Another name for this is split testing.
How does A/B Testing Work?
The way it works is a little more complicated than my initial definition, but I promise not to scare you. If you use a plugin like Premise, you create a page and then duplicate it. You then adjust the second version. When users request that URL (/my-sales-page), the plugin serves the different versions to visitors, rotating through them. Plugins like Premise work with Google, but there are others that collect the data directly on their own. Here's one that does it directly, and it's free. Both end up giving you the data that lets you decide which version you want to use after your split test.
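If you're curious what that rotation looks like under the hood, here's a minimal sketch in Python. This is not Premise's actual code; the variant names and functions are hypothetical, and it simply illustrates the idea of cycling new visitors through the versions while keeping returning visitors on the version they already saw.

```python
import itertools

# Illustrative variant names; a real plugin would store these with the page.
variants = ["original", "alternative-a", "alternative-b"]
rotation = itertools.cycle(variants)

def serve_variant(visitor_id, assignments):
    """Give each new visitor the next variant in rotation, and show
    returning visitors the same variant they saw before (so their
    experience, and your data, stays consistent)."""
    if visitor_id not in assignments:
        assignments[visitor_id] = next(rotation)
    return assignments[visitor_id]

assignments = {}
print(serve_variant("visitor-1", assignments))  # original
print(serve_variant("visitor-2", assignments))  # alternative-a
print(serve_variant("visitor-1", assignments))  # original (sticky)
```

The "sticky" assignment is the important design choice: if the same visitor saw a different page on every visit, you couldn't tell which version actually drove their decision.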
How can you use A/B Testing?
So how can you use this? Well, imagine you have 500 prospects that you want to send an email offer to. Instead of sending it to all 500, you send it to 50, split across your page versions. Then you go back and see which page worked better, and that's the one you keep and send to the remaining 450 prospects.
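That pilot-then-rollout step can be sketched in a few lines. The numbers below are made up for illustration (25 prospects per version out of the 50-person pilot), and `pick_winner` is a hypothetical helper, not part of any plugin:

```python
# Hypothetical results from the 50-prospect pilot described above:
# 25 prospects saw each version, and we count who converted.
pilot_results = {
    "version-a": {"sent": 25, "conversions": 2},
    "version-b": {"sent": 25, "conversions": 6},
}

def pick_winner(results):
    """Return the variant with the highest conversion rate."""
    return max(results, key=lambda v: results[v]["conversions"] / results[v]["sent"])

winner = pick_winner(pilot_results)
print(winner)  # version-b
remaining_prospects = 450  # the rest of the list gets only the winning page
```

With such small samples the "winner" can be noise, which is one more reason the variants should differ in ways that matter, as the next section argues.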
What Not to Do
Take a look at these guys, and imagine that you want to buy a car from one of them. I want you to look closely at each one. Two of them are Matt; the other two are Bill. You're going to buy a $35,000 car. So let me ask you a question: Does it matter that his hair has a red tint to it? Does it matter whether his tie is more blue or more grey?
Nope. Not one bit.
But this is what people do when they split test. They create a landing page and then they change the color of the button, as if that were going to prompt an audience to jump off the fence and start purchasing. If they don't change the color of the button, they change the color of the heading, or the title text.
But in reality, all four of these images are basically the same, aren't they? So if I asked you to pick who you would prefer to buy from, your choice would likely be random.
So in a nutshell, here's how NOT to do split testing: Do not just change your colors and text.
How to Think about A/B Testing Right
The best way to create alternatives is to create alternate theories of the prospect, which is why I think of split testing as hypothesis testing. Let's look at a simple example.
Imagine you're thinking about booking a vacation. If I were creating a landing page for you that was focused on collecting prospects for a cruise line, I might have three different hypotheses:
- The client cares most about price. They want a deal.
- The client cares about destination – and wants something exotic.
- The client has a specific time when they can cruise and wants to know what's available.
You can see how all three of these theories of intent would shape the title and the rest of the text on the landing page. But I wouldn't be changing the text just for the sake of it; every part of the page would be guided and driven by a hypothesis. In the end, the color of the button may be irrelevant (except that you want it to stand out). But showcasing a list of deals would work great for price-sensitive visitors and fall flat with everyone else. A showcase of distant, exotic locations would motivate some, but not others. And an active calendar control would be used by some, while others might just leave the page.
Now, when you review your results, what will you see? You'll see the percentage of prospects motivated by price, by location, and by a specific timeline. And based on that, if only the first page had stellar results, you might launch with that single offer.
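Reviewing the results of a hypothesis-driven test like this is just comparing conversion rates per theory. The visit and signup counts below are invented for the cruise example, and the 10% launch threshold is an arbitrary bar you'd set for your own business:

```python
# Hypothetical results: one landing page per theory of the prospect.
results = {
    "price":       {"visits": 200, "signups": 30},
    "destination": {"visits": 200, "signups": 12},
    "date":        {"visits": 200, "signups": 4},
}

for hypothesis, r in results.items():
    rate = 100 * r["signups"] / r["visits"]
    print(f"{hypothesis}: {rate:.1f}% converted")

# Launch only the offers that clear your minimum bar, e.g. 10%:
launch = [h for h, r in results.items() if r["signups"] / r["visits"] >= 0.10]
print(launch)  # ['price']
```

Notice what you learn here that a button-color test could never tell you: not just which page won, but why, because each page stands in for a theory about what the prospect wants.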
Does that make sense? Is that what you're doing?