What on Earth is Going On with Agency Campaign Measurement?

turbinelabs October 31, 2018
Reading Time: 4 minutes

Just about every week, a brand will forward us a campaign, event, or crisis performance report generated by their PR firm or other agency. The purpose of doing so is to have our team review or, in a growing number of cases, audit for accuracy. And over the last year, we’ve noticed some alarming trends.

I won’t mince words. The amount of manipulated and irrelevant data being used to generate campaign performance and crisis reporting is staggering. We’re seeing enough (and clearly brands are, too) to believe this is not a random trend. Personally, I’m frustrated enough on behalf of brands to want to understand whether what we’re seeing is unintentional or by design, isolated or epidemic.

“If you torture the data long enough, it will confess.” — British economist Ronald Coase

Agencies often have the challenging dual role of being both strategic advisors and content creators. Because of the astonishing growth of available data, and the relative ease with which it can be collected and analyzed, campaign results can be easily manipulated or inflated. Imagine if an agency has the ability to lay down the strategic foundation of a campaign, create and disseminate the campaign content, and then measure its performance. Is it in the realm of possibility that, whether purposefully, inadvertently, or subconsciously, results could be presented in a manner favorable to the agency? It’s akin to my fourth grader grading his own homework, on a test where he wrote the questions, in a class he made up in his playroom. In today’s information-overloaded environment, this is clearly a risk for brands.

“Tell the truth, but make it fascinating.” — agency pioneer David Ogilvy

David Ogilvy clearly didn’t foresee the emergence of social platforms and the power of Boolean queries when he uttered those words over 50 years ago. He also couldn’t have foreseen the amount of pressure brands would ultimately place on their agencies to deliver accurate, data-backed performance reports on their campaigns. Today, brands don’t need the truth to be fascinating. Sometimes, when the data warrants it, the truth needs to hurt.

Today’s media and social monitoring tools are incredibly sophisticated. That sophistication, combined with massive data availability, makes manipulation of results possible — whether it be unintentional or egregious. Within even a moderately complex Boolean query, in any social listening or media monitoring platform, a minor change can substantially alter the perceived performance of an entire campaign. In other words, it’s feasible that one person can change the trajectory of executive decisions involving millions of invested dollars with a single Boolean operator. Think about that.
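To make the point concrete, here is a minimal sketch of how a single Boolean clause can swing reported mention volume. The mentions, brand names, and query logic below are all invented for illustration; real listening platforms evaluate far more elaborate Boolean queries over millions of documents, but the sensitivity is the same.

```python
# Hypothetical illustration: one extra OR clause doubles "mention volume".
# All data and query terms here are invented for the example.

mentions = [
    "BrandX launch event draws huge crowds",
    "BrandX recall raises safety questions",
    "Industry roundup: BrandX, BrandY, and BrandZ compared",
    "Opinion: why BrandX's campaign fell flat",
    "BrandX sponsorship announced at trade show",
]

def matches(text, required_terms, any_terms):
    """Tiny stand-in for a Boolean query: all required_terms AND any of any_terms."""
    t = text.lower()
    return all(term in t for term in required_terms) and any(term in t for term in any_terms)

# Query A: BrandX AND (launch OR sponsorship)
narrow = [m for m in mentions if matches(m, ["brandx"], ["launch", "sponsorship"])]

# Query B: same query with the OR clause widened by two terms
wide = [m for m in mentions if matches(m, ["brandx"], ["launch", "sponsorship", "recall", "flat"])]

print(len(narrow), len(wide))  # -> 2 4: the wider clause doubles reported volume
```

The campaign content never changed; only the query did. Anyone presenting the second number as "campaign mentions" without disclosing the query is, in effect, choosing the result.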

Photo by Matthew Brodeur on Unsplash

Moreover, brand keywords or queries entered into any listening or monitoring platform can deliver dozens (or thousands) of articles that technically contain the keyword, but are, in a practical sense, meaningless. How? A keyword can appear in comments, web tickers, sidebars, popups, related stories, redirected sites, buried in listicles, or one time in the last line of an article. Is it true that the keyword appeared on the web page? Sure. Does that content actually have an impact on the brand, positive or negative? Not even close.
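The gap between "the keyword appears on the page" and "the page is actually about the brand" can be sketched in a few lines. The pages and the relevance rule below are invented for illustration; a real audit weighs placement, prominence, and context far more carefully.

```python
# Hypothetical sketch: raw keyword hits vs. substantive mentions.
# The pages and the relevance heuristic are invented for this example.

pages = [
    {"headline": "BrandX unveils new product line",
     "body": "BrandX announced three products. BrandX executives said more are coming.",
     "sidebar": ""},
    {"headline": "Ten gadgets to watch this fall",
     "body": "Item 9 on our list: a BrandX accessory.",
     "sidebar": ""},
    {"headline": "Market news roundup",
     "body": "Stocks rose broadly on Tuesday.",
     "sidebar": "Trending now: BrandX, BrandY"},
]

def is_substantive(page, keyword, min_body_hits=2):
    """Count a mention only if the keyword is in the headline or appears
    repeatedly in the body -- not just once in a sidebar, ticker, or listicle."""
    kw = keyword.lower()
    in_headline = kw in page["headline"].lower()
    body_hits = page["body"].lower().count(kw)
    return in_headline or body_hits >= min_body_hits

raw_hits = [p for p in pages if "brandx" in str(p).lower()]
real_hits = [p for p in pages if is_substantive(p, "BrandX")]

print(len(raw_hits), len(real_hits))  # -> 3 1: three keyword hits, one real mention
```

A monitoring platform counting raw hits would report three mentions here; only one page is genuinely about the brand.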

Too many false positives are making their way into reporting relied upon by executives to make critical decisions. In fact, in the audits we’ve conducted, false positives have constituted as much as 45% of the total mention volume included in a campaign performance summary.

How many brands have the time to audit thousands of mentions or links to confirm that nearly one out of every two mentions is irrelevant, non-existent, or so low-impact it shouldn’t be counted? I don’t know any brands that have either the time or the resources.

The tools are not the problem. Media and social monitoring and visualization tools have opened up a new world of access and engagement that has, in large part, changed how brands engage with their customers, track their competitors, and measure reputation. To be sure, monitoring tools are under no obligation to deliver truthful and ethical outputs. It’s up to the people who use them to do that.

The problem is less about the measurement tools and more about the motivations behind the measurement output. If a brand is paying an agency to perform, there’s tremendous economic and reputational pressure on the agency to do just that: perform. At the same time, reporting and insights work is big business that can fill lots of retainer hours. The incentive is amplified by agency executives eager to blend high-margin manual work with declining, or at-risk, strategy and content work.

Additionally, the volume of data has grown exponentially, buoyed by massive volumes of irrelevant, misleading, fake, and nefarious content. That, in combination with tools powerful enough to allow people to “torture the data into confession,” means brands can be, and often are, being misled.

No campaign can be perfect, just as no source, tool, or individual is perfect. In this business, there is always going to be a margin of error. What we’re seeing, however, is beyond a reasonable margin of error.

To be clear, while I believe there is risk in anyone “grading their own homework,” I am hopeful that the vast majority of PR and ad agencies perform to high ethical standards and are not engaging in purposeful manipulation of performance metrics. But more than any other time in history, it’s just too easy to make slight changes or omissions at the data collection, filtering, or analysis level that make results look better (or less bad) than they actually are.

There is no internal factor that impacts corporate performance more than the decisions made by its executives. More and more, corporate decisions are driven by data and insights. Every time I see a brand make a gaffe or execute a campaign or initiative that falls flat, I wonder to myself what the measurement presentation looked like, who produced the results… and who graded them. You should too.