Why Your Open Rates Have Plummeted (and What You Can Do About It)

Got the 22A Auto-Open Blues? Eloqua's update in response to Apple's Mail Privacy Protection should force marketers to rethink their strategy.

For Open rates, it was the best of times, then it was the worst of times.

Apple announced its Mail Privacy Protection in the summer of 2021 and began rolling it out to its customers in the fall with updates to both macOS and iOS. For Apple users who use the native mail applications (regardless of their actual email service provider), Mail Privacy Protection automatically scans all incoming mail, then sends back an artificial open signal to the sender.

As expected, many of our customers saw a spike in open rates following Apple's MPP rollout last year.

Eloqua responded with its 22A update, which separates auto-opens from actual human opens. Auto-opens no longer appear in Eloqua campaign data; they surface only as an Insight metric.

However, if Eloqua registers a human open or click-through on a message, an Open is inferred and added back to the campaign data.

Considering all the other auto-opens that contributed to artificially high open rates over the years, reported Open rates have gone from slightly inflated (with enterprise auto-opens) to massively inflated (with Apple's MPP) back down to something closer to actual numbers. 

Our customers weren't simply back to open rate baselines that matched pre-MPP numbers. Reported Open rates were now much lower than they had ever been.

Ouch.

But consider it a bit of tough love.

Open rate as a metric has been unreliable for many years, and it was only getting worse. It's just that now there's no hiding from it.

Also, note that Eloqua identifies auto-opens by looking for Apple's user agent string, "Mozilla/5.0." Unfortunately, not all email scanners use this string (though Gmail's image prefetch does), which means some auto-opens are likely still slipping into your reported Open rate data.
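To make the mechanics concrete, here is a minimal sketch of user-agent-based auto-open filtering, the same idea Eloqua applies. All field names and the record format are illustrative assumptions, not Eloqua's actual export schema:

```python
# Hypothetical sketch: flag likely auto-opens by exact user agent match.
# Apple MPP (and Gmail's image prefetch) report a bare "Mozilla/5.0",
# while real browsers append platform details after that prefix, so we
# match the whole string rather than just the prefix.

KNOWN_PROXY_AGENTS = {"Mozilla/5.0"}

def is_likely_auto_open(user_agent: str) -> bool:
    """Return True when the open's user agent exactly matches a known proxy."""
    return user_agent.strip() in KNOWN_PROXY_AGENTS

# Illustrative open events (not a real Eloqua payload).
opens = [
    {"email": "a@example.com", "user_agent": "Mozilla/5.0"},
    {"email": "b@example.com", "user_agent": "Outlook-iOS/2.0"},
]

human_opens = [o for o in opens if not is_likely_auto_open(o["user_agent"])]
print(len(human_opens))  # 1 — only the Outlook open survives the filter
```

Note the exact-match comparison: a human open from Safari also *starts* with "Mozilla/5.0" but carries platform details after it, so a prefix match would wrongly discard real engagement.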

So what now?

Now it's more important than ever not to treat Open rates as a trustworthy engagement metric. That's not to say Opens should be ignored altogether; they can still serve as a general indicator for things like message testing. But it's time to expand your horizons.

For example, if you have any Decision Steps based on Opens, first ask yourself if that still makes sense for your campaign goals. Keep in mind that Apple Mail users can comprise up to 80% of your audience.

If you are still going to stick with Opens in your Decision Steps, be sure to include a significant Wait step in between (we generally recommend two days to be safe). Without one, you'll further erode the quality of your campaigns.

Also, it's probably best not to use Open rates for segmentation. As polluted as Open signals are now, it's only going to worsen.

What are reliable metrics?

If you haven't already, now is a great time to focus on other user activities, like Click-throughs, web sessions, app sessions, and purchases. CTR, in particular, will help provide a more accurate picture of engagement since human clicks tell Eloqua that someone opened the message, and that gets registered in the Open rate data.
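As a quick illustration of why CTR is the sturdier number, here is a small sketch comparing the two rates. The counts are made-up example figures, and the function names are ours, not Eloqua's:

```python
# Illustrative only: a mail scanner can fabricate an open, but it
# doesn't click your links, so CTR reflects real human engagement.

def click_through_rate(unique_clicks: int, delivered: int) -> float:
    """Unique clicks divided by delivered messages (0.0 if none delivered)."""
    return unique_clicks / delivered if delivered else 0.0

# Hypothetical campaign numbers.
delivered = 10_000
unique_clicks = 420
reported_opens = 3_800  # inflated by auto-opens; treat with suspicion

ctr = click_through_rate(unique_clicks, delivered)
print(f"Reported open rate: {reported_opens / delivered:.1%}")  # 38.0%
print(f"CTR: {ctr:.1%}")                                        # 4.2%
```

The open rate here may be padded by scanners, while every click in the CTR required a human. That's also why a click lets Eloqua infer a genuine Open and add it back to the campaign data.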

You'll also want to pace your messaging. Don't rely too heavily on any one email. Instead, take your prospects on a journey, with sends spread out to allow for other interactions that signal engagement (click-throughs, web sessions, etc.).

Focus on the content of your emails more than just the elements used to entice prospects to open the message (like subject lines and sender personas).

You'll also want to keep an eye on the auto-opens metric in Insight. It will indicate how much of your audience uses MPP or another email scanner. You'll need to know the scope of the problem.

We designed Motiva to mitigate the issue of unreliable Open signals from the start, long before Apple announced MPP. We assumed there would be polluted Open signals in every batch and designed our Message Testing optimizer to filter out those results.

What about Send Time Optimization? Doesn't that rely on Open Data?

Many STO platforms do rely on Open data to determine optimal send times. But this approach was always overly simplistic, offering a false sense of security that these last few months of unfiltered MPP data have only reinforced.

After all, if the STO is only looking at Open data, and email auto-scanners send back signals just after a message is received, the STO is tricked into thinking it's performing well when, in fact, it's not doing anything.

Again, Motiva has an advantage here, too, since we already assumed Open data is polluted. Our STO goes beyond opens, incorporating other signals into its predictive modeling.

With auto-opens now being filtered out, you may need to rethink your campaign logic. Likewise, how you measure success may need revamping, both in the metrics you track and in the baselines you measure against.

Still, for those marketers willing to take on the challenges, you'll see higher engagement translating into a more robust bottom line.

Ready to see how Motiva can work for you? Pick a time to see a demo!
