Readership in a crisis: 4 ways PR teams use Memo for Crisis Communications

In today’s hyperconnected world, crises can erupt in an instant. A single tweet, memo, or decision can ignite a firestorm of negative press.

Other times, it’s a slow burn. You hear murmurings of a reporter working on a piece, you get the call for a statement, then you wait.

Either way, a crisis news cycle can mean late nights and lost weekends fielding emails from the C-suite, panicked over the negative stories, and desperate to put out the fire. But just how big are the flames? Are they getting bigger, or starting to die down? Is there even much of a fire at all?

Knowing exactly how many people are reading coverage during a negative news cycle can answer those questions and feel like a lifesaver. Article readership data (i.e. unique visitors to an article) brings weekend-saving clarity and direction for crisis communications and rapid response. Below are 4 ways Memo customers incorporate readership into crisis comms.

To learn more about how accurate readership can uncover the true impact of a crisis, check out Memo’s approach to comms measurement.

1) Verify the extent of a crisis story with article readership

The Comms teams I’ve spoken to all say something similar: “We know when a crisis story is bad, and we know when it’s nothing, but we don’t know about everything in the middle.”

Just because a national outlet like the New York Times or Forbes published a negative article about your brand doesn’t mean everyone will read it, regardless of what your CEO might fear. For example, these three headlines (in alphabetical order) are from the same publication. One has 2,000 readers, another 200,000, and another 2,000,000 readers:*

“FedEx driver dumped packages at least six times in ‘debacle’”
“Jury awards woman Walmart accused of shoplifting $2.1 million”
“Outages at Slack, other websites paralyze businesses”

This 1,000x differential in article readership is not unusual. (To learn why, see our report “3 graphs that illustrate the problem with PR impressions.”)

At the very least, having article readership readily available when a story breaks can be, as someone I spoke to once put it, “a chill pill for my CEO.” If the story isn’t gaining traction, responding might only create more noise and awareness than the story generated on its own.

And at its best, readership provides crucial guidance for managing a crisis after a story breaks, which brings me to my next point:

2) Form a response and allocate resources based on the outlets and angles fueling the fire

When formulating a response in a crisis news cycle, it helps to know what to respond to. In some cases, this could be what’s getting the most attention.

For example, let’s take Starbucks. For all the positive articles the brand receives about the return of the pumpkin spice latte this season, there lately seem to be just as many (if not more) about its baristas moving to unionize. When it comes to hot-button issues, the spin on a story can create a narrative with a life of its own. Take a look at the following headlines:

“Starbucks CEO to unionizing baristas: ‘Why don’t you go somewhere else?’” (New York Post)
“Starbucks Just Fired a Union Organizer for Allegedly Breaking a Sink” (Vice)
“Starbucks weighing better benefits but says they could exclude union workers” (CNBC)

All of these articles came from the same news cycle only a few days apart, but there’s an 8x difference in readership between the least-read and most-read headline.* This data reveals which narratives resonate most with the public, and could help Starbucks target and prioritize a response plan. Should the rapid response team recommend a clarifying statement from the CEO? Or talk to HR about the alleged sink incident? Or get a spokesperson out to CNBC? 

3) Benchmark readership on a crisis internally and against competitors

Comparing the extent of a crisis news cycle against others like it helps communications teams create a benchmark that contextualizes the severity. Put another way, it tells you how bad is bad.

I’ve seen Memo customers do this in a couple of ways. In some cases, they’ll compare readership on a recently concluded news cycle to past crisis events. This allows teams to, for example, track whether a response helped contain an issue faster than in previous cycles.

In other cases, they’ll look at crises weathered by competitors or industry comps. Just as share of readership on proactive press shows the initiatives working for your brand and competition, readership share on negative press can reveal which brands are getting hit hardest. For industry-wide crises (e.g. big tech antitrust, cryptocurrency sell-offs, etc.), this type of readership benchmarking also contextualizes how your company is faring compared to competitors.

4) Identify crisis news readership trends to better equip your team in the future

The first rule of crisis comms is actually talking about crisis comms. Plan for a crisis in advance. Errant tweets, leaked memos, and unpopular decisions will happen. Understanding how past news cycles have played out – the trajectory over time, the readership on spokespeople responses, what outlets and reporters had the biggest impact – can help crisis communications teams anticipate their needs.

As an example, Memo’s insights team found that for one brand’s recent negative news cycle, 79% of readership was driven by articles published within the first three days. The average readership on each article published after that three-day window slowly declined each day. Given the recurring nature of this type of story, the Comms team can operate with clearly defined timing parameters in the future.
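As a rough sketch of how a team might run that kind of check on exported coverage data (the article list, field layout, and numbers below are hypothetical, for illustration only; this is not Memo data or Memo’s API), the calculation itself is straightforward:

```python
from datetime import date

# Hypothetical export of a negative news cycle: (publish_date, readers) per article.
# These figures are invented for illustration; they are not Memo data.
articles = [
    (date(2022, 9, 1), 120_000),
    (date(2022, 9, 2), 85_000),
    (date(2022, 9, 3), 40_000),
    (date(2022, 9, 5), 15_000),
    (date(2022, 9, 8), 5_000),
]

def share_in_first_days(articles, window_days=3):
    """Percentage of the cycle's total readership from articles published in the first N days."""
    start = min(pub_date for pub_date, _ in articles)
    early = sum(readers for pub_date, readers in articles if (pub_date - start).days < window_days)
    total = sum(readers for _, readers in articles)
    return early / total * 100

print(f"{share_in_first_days(articles):.0f}% of readership came from the first three days")
```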

To learn more about how accurate readership can uncover the true impact of a crisis, check out Memo’s approach to comms measurement.

*Due to contractual obligations, Memo cannot publicly release our publications’ article-level unique visitor data, so I use differentials and anonymized publications where appropriate.

Share of Voice is a broken metric. Here’s how we fix it.

Key takeaways:

  • Traditional SOV measurement is not just inaccurate; it’s misleading 
  • Share of Readership reveals what’s actually working for your brand and the competition
  • Below we walk through a real-world example of SOV vs Share of Readership

Share of Voice (SOV) is one of the most common metrics that PR & Comms teams use to benchmark the quantity and quality of their coverage against their competitive set. 

A major goal in looking at a brand’s SOV is to figure out your company’s position in the market. By understanding which competitors are succeeding, how they’re doing it, and where, you can identify and act on gaps in strategy.

But even important calculations like SOV aren’t immune to “garbage in, garbage out.” Inputs need to be accurate and transparent in order to ensure an accurate representation of the competitive landscape.

And regarding that representation of the competitive landscape: What does it mean for a competitor to win? Are they getting written about in more publications? In publications with a higher potential reach? Or is there something else?

The fundamental issue with the traditional methodology for SOV is that teams are still relying on PR estimates like potential reach and volume of mentions as the determining factors to analyze success and their positions in the market. But as I see day in and day out at Memo, not all press is created equal, and content can perform drastically differently within the same publication. Let’s take a look:

How Share of Voice in the press is traditionally calculated

Legacy media monitoring tools like Meltwater and Cision typically allow you to calculate Share of Voice in two ways:

1) Volume of press = ([# of mentions for your brand] / [# of mentions for your brand + competitors]) x 100

2) Potential Reach = ([your brand’s total potential reach] / [aggregate potential reach of your brand + competitors]) x 100

While method #1 gives your team a general understanding of how often your brand is written about versus your competitors, and method #2 gives your team a sense of the average prominence of the pubs writing about you versus your competitors, both are missing a critically important aspect of coverage performance: What’s working for my competitors? How many people are actually reading these articles?
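Expressed as code, the two traditional calculations are simple ratios. This is just a minimal sketch of the formulas above, not how any particular monitoring tool implements them:

```python
def sov_by_volume(brand_mentions, competitor_mentions):
    """Method #1: your brand's share of total mention volume, as a percentage."""
    return brand_mentions / (brand_mentions + competitor_mentions) * 100

def sov_by_potential_reach(brand_reach, competitor_reach):
    """Method #2: your brand's share of aggregate potential reach (impressions/UVMs)."""
    return brand_reach / (brand_reach + competitor_reach) * 100
```

Share of Readership, covered next, uses the same ratio; it simply swaps actual readers in for mentions or potential reach.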

Rethinking Share of Voice for 2022 (and beyond)

To accurately assess how you stack up against your competitors, look at Share of Readership (SOR), which gives you a competitive benchmark grounded in reality, and better identifies opportunities to insert your brand in the conversations that resonate most in your industry.  

Let’s look at a real-world example comparing SOV to SOR between two major brands during June 2022: Hulu and Netflix. (Full disclosure: While this is actual data exported directly from a competitive report in Memo’s dashboard, I’ve changed the time period and names of the companies for this article to respect customer privacy.)

Tracking from a media list of the top 400 publications in the US, here is the breakdown in coverage:

  • Hulu mentions: 708 articles, 10.54 billion impressions
  • Netflix mentions: 694 articles, 10.13 billion impressions

Using method #1, Hulu and Netflix have a share of 50.5% and 49.5% of the coverage respectively – about a 50/50 split.

Using method #2, Hulu’s and Netflix’s shares of impressions come out to 51% and 49% – again, about a 50/50 split.

So if I’m Hulu or Netflix, there’s not a whole lot to take away from SOV analyses based on clip counts or impressions, other than to keep chipping away at the competition by doing more of what we’re doing already. This is where treating all press equally, even if from the same publication, can mask critical indicators of competitive performance.

Here’s why: The combined 708 articles Hulu was mentioned in had a total of 6,221,717 readers. Netflix’s 694 articles, however, were read a total of 12,019,205 times.

Netflix has a Share of Readership of about 66% whereas Hulu has a SOR of about 34%, a much different outcome for the month of June, and a jumping-off point to deeper insights: what press led Netflix to capture more interest from readers? How can Netflix reinforce its dominance? Where are the relevant topics for Hulu to insert itself next month? 
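To make the arithmetic concrete, here’s a small sketch that reproduces all three shares from the figures quoted above (the same ratio each time, just with different inputs):

```python
# Figures from the June example above (brand names anonymized in the original data).
hulu    = {"articles": 708, "impressions": 10.54e9, "readers": 6_221_717}
netflix = {"articles": 694, "impressions": 10.13e9, "readers": 12_019_205}

def share(a, b):
    """a's percentage share of the combined total a + b."""
    return a / (a + b) * 100

print(share(hulu["articles"], netflix["articles"]))        # method #1: ~50.5% vs ~49.5%
print(share(hulu["impressions"], netflix["impressions"]))  # method #2: ~51% vs ~49%
print(share(hulu["readers"], netflix["readers"]))          # SOR:       ~34% vs ~66%
```

Identical inputs produce a near 50/50 split by volume and reach, but roughly a 2:1 split by readership.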

Impressions and clip counts don’t just miss; they mislead

Based on traditional SOV analyses, Hulu and Netflix would have both come to incorrect conclusions about where they sit in the competitive landscape.

And it’s easy to see why traditional SOV would mislead them: they’re fairly similar competitors who get written about with similar frequency in similar publications. Treating each article equally (method #1) or treating each article within a publication equally (method #2) completely ignores the variation in how readers respond to different content. (We published an entire report on readership vs UVMs here.)

If your goal is measuring how consumers are engaging with you versus the competition, shouldn’t success and SOV be defined by the number of people that are actually being reached?

Understanding your competitors’ success can no longer just involve understanding where and how many times they are getting mentioned in the press. Readership finally allows brands to dig deeper and see what’s truly working in the competitive landscape. Will you seize the opportunity?

To learn more about how accurate readership can uncover how you’re really measuring up against the competition, check out Memo’s approach to comms measurement.