Yeah, I know. It's an unfortunate reality. But running a message test with fewer than 1,000 contacts is a waste of time. You'll put in the effort to develop your email versions, but you won't be able to trust the results at the end of the test. Here's why...
A message test works by taking a sample, or a small portion of randomly selected contacts, from an audience and breaking that sample up into test groups. Each email version gets a test group containing the same number of contacts.
Looking at the illustration above, you can see that with an audience of 100 contacts, a sample of 10 randomly selected contacts divides into two test groups of 5 contacts each for a simple A/B test. This simply isn't enough data to choose a winner with any degree of confidence.
"But wait," you say, "version A in the illustration above has 3 opens, which is one more than version B. Doesn't that mean it's the winner?"
That one extra open could have been an accidental slip of the finger while the contact was scrolling through their inbox, or maybe their laptop was momentarily hijacked by a curious toddler who just wanted to see what happens when you press a button.
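If you want to see just how noisy groups of 5 are, here's a quick simulation sketch (not part of Motiva, just an illustration): it runs thousands of pretend A/B tests where both versions have the exact same true open rate, then counts how often one version still "beats" the other by at least one open purely by chance.

```python
import random

def chance_of_false_winner(group_size, open_rate, trials=100_000, seed=42):
    """Simulate A/B tests where both versions have the SAME true open
    rate, and count how often one version still 'wins' by at least
    one open anyway, purely by luck."""
    rng = random.Random(seed)
    false_wins = 0
    for _ in range(trials):
        opens_a = sum(rng.random() < open_rate for _ in range(group_size))
        opens_b = sum(rng.random() < open_rate for _ in range(group_size))
        if opens_a != opens_b:  # a "winner" appears even though the versions are identical
            false_wins += 1
    return false_wins / trials

# With 5 contacts per group and a 50% open rate, identical versions
# still produce an apparent "winner" roughly three times out of four.
print(chance_of_false_winner(group_size=5, open_rate=0.5))
```

In other words, with test groups that small, a one-open lead tells you almost nothing about which version is actually better.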
Side Note: the best way to prank a marketer is to open and click every email but never fill out a form.
You need enough data to account for these accidents and to confirm that an email version truly drives higher engagement. That means more contacts, so you can calculate average engagement across several tests and reach a high degree of confidence. Motiva does all of this for you automatically, as long as you have enough contacts.
Here's a cheat sheet to help you determine how many contacts you need to run a test, depending on the number of email versions. You need at least the minimum, but you don't need the best case for the test to be successful.
Bottom line, you don't need a ton of contacts, but you do need at least the minimums to make it work.