“We know impressions are inaccurate, but at least they’re directionally correct.”
“We know impressions are inaccurate, but we divide them to be more realistic.”
“We know impressions are inaccurate, but no one is forcing us to change.”
“We know impressions are inaccurate, but.” It’s a common and persistent refrain in the PR industry. Years of having no alternative metrics created a status quo of measuring the potential, rather than the actual.
Unfortunately, impressions are not directionally correct at the article level: highly trafficked outlets don’t always get higher article readership (i.e. unique visitors to an article page) than lower-traffic outlets. And dividing monthly impressions by 7 or 30 days is not a realistic assessment of how content performs: on the same day in the same publication, one article can get one million readers while another gets one thousand.
You might say Memo has a refrain of its own: “Impressions are not just inaccurate, they’re misleading.” We’ve published data that shows how impressions distort share of voice and how impressions overlook important outlets. But mechanically, why is this?
Our team analyzes readership data every day. We want to illustrate exactly why impressions obscure the insights that are so glaringly obvious with readership.
Actual readership among a publication’s articles is highly variable
We pulled an entire month’s worth of content published on an outlet that receives roughly 30 million unique monthly visitors. Each box represents an article, and the size of the box represents that article’s readership, i.e. the number of unique visitors to the article in the first 7 days of publication.
The most-read article that month received over 2000x more visitors than the least-read article.
There is brand coverage that completely hit it out of the park, and there is coverage that could benefit from further amplification.
There are article topics that tend to fall on the upper left corner of that graph, and topics that tend to fall on the lower right corner.
There are takedown pieces that blew up, and takedowns that barely made a splash.
Article-level readership provides a wealth of information about earned media performance and strategy. So what about impressions?
Impressions (wrongly) report that every article performs the same
We can visualize the same ~800 articles with potential impressions instead of actual readership. This is what we see:
Did we get a lot of eyeballs on our product press? Does this outlet get high readership on our industry’s news? Is this negative story worth a spokesperson response? Potential reach doesn’t help us answer any of these questions, but readership does.
It’s the difference between a clippings report where every article has the same performance metric (left) versus a readership report where you know exactly how many people saw the coverage (right):
Sure, the report that tallies up to 225 million potential impressions looks impressive. But with 258 million adults in the US, business leaders know it’s a bogus figure.
Lower-UVM publications can get more article readership than higher-UVM publications
The monthly unique visitors (UVM) on a site can be a helpful proxy for publisher authority when, for example, trying to understand the landscape or build an initial media list. But impressions are a terrible proxy for article performance, even across different publications.
Every day we see outlets with relatively low monthly visitors publish articles that receive higher readership than content on relatively high-traffic outlets. (A Memo report further examines this trend.)
In fact, one of the first things new Memo users say is “I can’t believe how many people read our placement on [insert niche outlet].”
To illustrate, here is the same publication visualized above next to a second publication with approximately 75 million monthly unique visitors:
The higher-UVM outlet published the most-read articles of the two publications. But hundreds of articles on the lower-UVM outlet received more readership than content on the higher-UVM outlet.
We’ve now illustrated that impressions are 1) not directionally correct, and 2) not a realistic assessment of article performance, no matter how you slice them.
Still, if no one is forcing the issue, why change?
Comms has become more entwined with marketing and business strategy. Its measurement will be too.
One of the biggest PR measurement trends to emerge this year is that Communications teams are working more closely with Marketing and other business functions. With this seat at the table, however, come expectations of more rigorous measurement.
Our team has worked with some of the earliest adopters of readership data. The Comms groups that embraced this change a year or two ago are already operating at a different level. They’re more strategic with media relations. They’re better equipped to handle crisis stories. They’re giving earned media its due credit in the broader marketing mix.
No longer misled by the false impression (pardon our pun) that content performs uniformly on a publication, they’re making better business decisions.