Traditional PR metrics like impressions or UVMs grossly overestimate impact. You know it. I know it. The whole industry knows it. But…they make your coverage look good and your big wins look even better.
We took a step back and wondered: do UVMs actually give you anything beyond putting a bow on coverage reports? We analyzed readership data (unique visitors to news coverage), sheer coverage volume, and UVMs to see whether they yield comparable insights. Keep reading to see what we found.
Readership varies greatly from article to article and reporter to reporter, even within the same publication.
UVMs suggest that the 9,546 articles on Generative AI published in the past seven months reached 467 billion people, roughly 60x the population of the planet, and vastly more than the 130 million people who actually read those articles in aggregate. When we plotted readership against UVMs in a scatter plot, it became clear that there is no relationship between an article's UVMs and how many people actually read it.
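To see why aggregate UVMs balloon past any plausible audience, here's a minimal sketch of the arithmetic. The numbers below are made up purely for illustration (they are not Memo's data): each article inherits its outlet's entire monthly unique-visitor count, so summing UVMs across articles counts the same audiences over and over.

```python
# Hypothetical per-article figures, for illustration only.
# uvms[i]  = monthly unique visitors of the outlet that ran article i
# reads[i] = people who actually read article i
uvms = [120_000_000, 80_000_000, 80_000_000, 5_000_000, 1_000_000]
reads = [15_000, 400_000, 2_000, 30_000, 50_000]

# "Potential reach" the traditional way: sum the outlet UVM for every article.
aggregate_uvm = sum(uvms)    # 286,000,000
actual_readers = sum(reads)  # 497,000
inflation = aggregate_uvm / actual_readers

print(f"Aggregate UVMs: {aggregate_uvm:,}")
print(f"Actual readers: {actual_readers:,}")
print(f"UVMs overstate reach by ~{inflation:,.0f}x")
```

Five articles across three outlets already produce a "reach" hundreds of times larger than the real readership, which is the same mechanism that lets 9,546 articles "reach" 467 billion people.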
The lack of relationship between readership and UVMs is due in part to the fact that readership varies greatly from article to article and reporter to reporter, even within the same publication. Here are a few examples of the variation within publications:
Readership helps us understand which angles and topics are resonating most with consumers and where they pay attention.
As you can see, average UVMs make the specific topics within AI look fairly even, as though all 10 topics have similar impact. Looking at average readership, however, you can clearly see which topics attract the most consumer attention.
Furthermore, because readership varies so much between articles and reporters within the same publication, ranking outlets by readership tells a very different story than ranking them by aggregate UVMs. When you're planning a big announcement or deciding where to give an exclusive, choosing based on the right ranking can significantly change your impact.
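The two rankings can be sketched in a few lines. The outlet names and figures below are hypothetical, chosen only to show how sorting the same outlets by the two metrics produces different orderings:

```python
# Hypothetical outlet-level figures, for illustration only.
outlets = [
    {"outlet": "Outlet A", "uvms": 90_000_000, "readership": 40_000},
    {"outlet": "Outlet B", "uvms": 30_000_000, "readership": 250_000},
    {"outlet": "Outlet C", "uvms": 60_000_000, "readership": 120_000},
]

# Rank the same outlets two ways: by outlet-level UVMs, and by how many
# people actually read their coverage of the topic.
by_uvms = [o["outlet"] for o in sorted(outlets, key=lambda o: o["uvms"], reverse=True)]
by_readership = [o["outlet"] for o in sorted(outlets, key=lambda o: o["readership"], reverse=True)]

print("Ranked by UVMs:      ", by_uvms)        # Outlet A first
print("Ranked by readership:", by_readership)  # Outlet B first
```

In this toy data, the outlet with the biggest overall site traffic is last in actual readership of the topic, which is exactly the kind of reversal that matters when picking a home for an exclusive.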
UVMs even dilute sentiment breakdowns and obscure how attention shifts over time.
Without readership, there would be no way of knowing how much attention readers pay to negative AI coverage. Readership data reveals that people read negative news almost as much as positive coverage, which may change your messaging (or even your overall strategy).
Looking at attention over time, aggregate UVMs rarely dipped below 12 billion. Readership, however, reveals much more dramatic spikes, so you can see clearly when people's attention peaks. There are plenty of use cases where this could be helpful to a comms team, including one with high stakes: a crisis.
TL;DR
Readership varies so greatly from article to article and reporter to reporter that relying on UVMs alone can be grossly misleading, both for measuring success and for informing future strategy. Sentiment breakdowns, top topics, top publications, and even trends over time all tell different stories. In every case, readership paints a more accurate, far more actionable picture than UVMs.
Methodology
Memo analyzed 9,546 articles about Generative AI published Jan. 1-Aug. 4, 2024 across 64 national news, consumer, and trade outlets. UVMs were pulled from LexisNexis.
See more Memo research in our Resource Center.
