PR measurement is broken. Desperate for a way to measure value and impact, the industry settled on impressions and volume decades ago. In every other corner of business, metrics advanced. Yet measuring public relations remained staggeringly stagnant. From impressions, we derived reach. From volume, we derived share of voice. The problem is that all of these metrics rest on so many assumptions that they are built on a crumbling foundation.
Jim Pierpoint, a risk manager, media researcher, and lecturer at NC State University, published a series of research papers on media monitoring and PR measurement. He unpacks the evolution of volume-based metrics and reach, and why none of it adds up to an accurate calculation of impact. In this post, we recap some of Pierpoint’s findings along with some of our own.
The foundation of PR measurement is antiquated.
Two professors from UNC Chapel Hill conducted a study during the 1968 presidential election (published in 1972) to understand how mass media sets the public agenda. They surveyed a sample of 100 Chapel Hill voters and found a near-perfect correlation between what those voters considered the most important issues of the campaign and the issues covered by the press. This study, “The Agenda-Setting Function of Mass Media,” birthed the foundation of PR measurement: the media sets the agenda for the public, so news volume is a measure of public opinion.
Over time, other dimensions were added to the concept of content analysis: tonality, potential reach based on a publication’s monthly unique visitors, social engagement with an article, and so on. But at its core, this 1968 study (based on 100 people, with no causality established) told the industry that volume was a sufficient proxy for measuring reach, when we now know that this is not true.
Not all mentions are equal.
All comms people know that a feature in The New York Times is not the same as a mention in a trade publication, and vice versa. Yet sheer volume metrics count every mention as equal. Perhaps the next evolution of clip books and volume counts is share of voice: simply comparing your volume against the competition’s. Here’s the thing: it’s still just volume. Sometimes (most of the time), more isn’t better. It’s just more.
In an effort to measure impact rather than volume, the industry converged on measuring PR in terms of circulation, unique visitors, and viewership. But counting circulation numbers for each piece of coverage is fundamentally flawed. If you secure three pieces of coverage in a publication that attracts 6 million unique visitors monthly, you multiply monthly unique visitors by the number of articles published and report 18 million potential impressions. Truly astronomical numbers derived from potential reach, none of which are rooted in reality.
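To make that arithmetic concrete, here is a minimal sketch of the imputation traditional tools perform. The publication numbers are the hypothetical example from above, not real data:

```python
# Illustrative sketch of how "potential reach" gets imputed.
# Numbers match the hypothetical example in the text, not real data.

monthly_unique_visitors = 6_000_000  # publication's reported UVM
articles_secured = 3                 # pieces of coverage that month

# The flawed assumption: every monthly visitor is presumed to read
# every article, so reach is simply multiplied per article.
imputed_reach = monthly_unique_visitors * articles_secured
print(f"{imputed_reach:,} potential readers")  # 18,000,000
```

Three articles in one publication instantly "reach" triple that publication's entire audience, which is exactly the inflation described above.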
Does every article or mention of your brand get read by all monthly visitors? Maybe? Hopefully? Probably not.
Traditional media monitoring tools tried to solve for this by adding social listening to account for the added amplification of social channels. But as we’ve shown, social engagement with an article is not correlated with that article’s readership.
Potential reach metrics are exploding beyond reality.
It is unrealistic to think that everyone who visits a publication in a month reads every article on every visit, but that’s how traditional media monitoring tools estimate reach. According to the media monitoring estimates cited in Pierpoint’s research, “reach” for one company’s press coverage exceeded 250 million potential readers on 33 different days over the past 12 months. In other words, imputed reach exceeded the total adult population of the United States on 33 different days. On 15 of those days, the imputed reach exceeded 300 million, and on the biggest news day, it exceeded 600 million.
“According to the media monitoring data, not only did every American adult see news about this company on every one of those days, but for that data point to be accurate, we each saw the news coverage more than once.”
– Jim Pierpoint, in Proof of Concept: Reach does not equal Readership
Pierpoint points out that the already inflated reach numbers based on monthly visitors, compounded by the growing number of media programs (from three network news broadcasts to dozens of cable and online news programs), amount to an explosion in potential reach.
In six regression analyses comparing the potential reach of published news about companies (aka impressions or UVMs) with the actual readership of that news (aka unique visitors to an article), Pierpoint found correlations as low as 0.40 (where 1.0 is a perfect positive correlation, and a “strong” correlation is about 0.7).

To stress-test what correlations ranging from 0.40 to 0.68 mean for using potential reach as a proxy for actual readership, Pierpoint ran “Top 10” tests: he compared the top 10 news days by potential reach against the top 10 news days by actual readership. For the technology company mentioned earlier, only 2 days overlapped (an 80% error rate), and across companies error rates ranged from 20% to 90% – so…highly inconsistent.
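The overlap arithmetic behind that test is simple enough to sketch. The dates below are hypothetical; only the calculation mirrors the “Top 10” comparison:

```python
# Sketch of the "Top 10" stress test. Dates are hypothetical; the
# 2-day overlap mirrors the 80% error rate cited in the text.

top10_by_potential_reach = {"Mar 03", "Apr 11", "May 20", "Jun 01", "Jun 15",
                            "Jul 08", "Aug 22", "Sep 09", "Oct 14", "Nov 30"}
top10_by_actual_readers = {"Jan 17", "Feb 02", "Apr 11", "May 05", "Jun 15",
                           "Jul 19", "Aug 01", "Sep 27", "Oct 03", "Dec 12"}

# Days that rank top-10 on BOTH measures; everything else is a miss.
overlap = top10_by_potential_reach & top10_by_actual_readers
error_rate = 1 - len(overlap) / 10
print(f"{len(overlap)} days overlap -> {error_rate:.0%} error rate")
```

If potential reach were a reliable proxy, the two lists would largely agree; an 80% miss rate means the “biggest” days by reach are mostly not the days people actually read the coverage.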
At this point, potential reach is so far removed from reality, it should be a crime to call it reach.
Redefining readership and staying rooted in reality.
Measuring potential reach and views will cause PR inflation. The correlation between potential reach and actual reach is statistically unreliable, which makes it a risky business metric. Pierpoint’s primary takeaway is that “to generate actionable insights, we need to measure both news that was published and news people actually read.”
Readership, impressions, and reach are used interchangeably by tools that thrive on confusion and ambiguity. Potential reach does not mean readership. Impressions do not equal readership. Competitive volume comparisons do not give an accurate picture of the competitive landscape. Readership means the number of people who actually read the articles mentioning your brand. Period. And that’s how you measure impact.