A Quick, Budget-Friendly Way to Answer the Question: Is This a Good Marketing Campaign?

By find_me_datascientist | March 9, 2016

In the fast-paced startup environment, quick-and-dirty analytics go a long way. For a demand generation marketer, a pressing (and recurring) question is, "Is this a good campaign?"

In this brief case study, we’ll detail arguably the simplest, most budget-friendly way of answering this question using a dataset from a popular Silicon Valley giant (that shall remain nameless), and some basic high school-level statistics.

The Main Idea

Let’s say you want to maximize your company’s exposure. With countless campaign options, ranging from posting articles on LinkedIn to filming promotional videos, it’s difficult to know which strategy will reel in the biggest fish. One might argue that simply investing an equal portion of your budget in each campaign option is the wisest choice, as it broadens exposure across a variety of platforms. While this could be the case, it’s more than likely that certain platforms exhibit a higher return on investment (ROI) than others. This is where analytics comes in.

How It Works

For simplicity, let’s say that your goal is to maximize the number of daily page views you receive on your official website. Similarly, let’s assume you have the following campaign options at your disposal:

  1. Promotional Video
  2. Mass Email
  3. Direct Mail
  4. Internal Blog Post
  5. External Blog Post
  6. Facebook Status Update
  7. LinkedIn Meme Post
  8. Tweet

The first step in this analytics project is to generate some data by running a few experiments. Specifically, you’ll launch each of the campaigns above several times, spaced a few days apart, recording how many page views you receive on your website the day before and the day after you run the campaign.

The reason you run each campaign several times is simple: you don’t want to draw a conclusion from a single observation. Doing so would leave your conclusions susceptible to factors you can’t control (like whether Anne Hathaway happens to be hosting the Oscars the one time you post on Facebook). In fact, you want as many observations as possible in order to draw a robust conclusion. Just keep in mind that if you are emailing your database or running targeted campaigns, you should segment your audience so you are not hitting the same people over and over again.

With this in mind, we proceed with our experiments, generating data that looks something like the following.

[Table: page views recorded the day before and the day after each run of each campaign]

Next, we simply subtract the views before the experiment from the views after the experiment to get an idea of how each campaign affects page views. Then we average these numbers for each campaign and divide by the views before.

[Formula: campaign effect = average(views after - views before) / views before]

The reason we divide by the views before is to get a percentage instead of a raw number, which makes comparing campaigns much easier. If this percentage is positive, we have evidence that the associated campaign increases page views on average. Conversely, if this percentage is negative, then the campaign, on average, decreases our total page views.
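To make this concrete, here is a minimal Python sketch of the calculation. The page-view counts are made up for illustration, and the `average_effect` helper is an invented name; it reads the procedure above as "compute the percentage change for each run, then average."

```python
# Hypothetical page views recorded the day before and the day after
# three runs of two of the campaigns (numbers invented for illustration).
views = {
    "External Blog Post": {"before": [120, 95, 110], "after": [260, 180, 210]},
    "Mass Email":         {"before": [130, 105, 140], "after": [70, 50, 60]},
}

def average_effect(before, after):
    """Average percentage change in page views across runs of one campaign."""
    changes = [(a - b) / b for b, a in zip(before, after)]
    return sum(changes) / len(changes)

for campaign, runs in views.items():
    effect = average_effect(runs["before"], runs["after"])
    print(f"{campaign}: {effect:+.0%}")  # e.g. "External Blog Post: +99%"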

We’re not done yet. Recall from your basic high-school statistics course the concept of standard error. The idea is that even though we have a number (in this case a percentage), we can’t be certain it’s actually the effect of the campaign; rather, it’s simply what we observed in our experiments. And since the world is a very complicated place, we can’t assume that our experiments were perfect; we have to accept that there’s room for error and uncertainty. So, to be statistically sound about our findings, we’ll calculate the standard error for each campaign option, which gives us a range above and below our estimate (sort of like a cushion). Then we’ll use these standard errors to construct a confidence interval for the true campaign effect (a quick sketch of the computation follows) and display the intervals side by side in a graphic like the one below.
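Here is a minimal extension of the sketch above, assuming SciPy is available for the Student's t critical value. The `confidence_interval` helper is again an invented name, and the per-run changes fed into it are the hypothetical external blog post numbers from the earlier snippet.

```python
import statistics

from scipy import stats  # for the Student's t critical value

def confidence_interval(changes, confidence=0.70):
    """Confidence interval for the mean percentage change across runs."""
    n = len(changes)
    mean = statistics.mean(changes)
    se = statistics.stdev(changes) / n ** 0.5  # standard error of the mean
    t_crit = stats.t.ppf((1 + confidence) / 2, df=n - 1)
    return mean - t_crit * se, mean + t_crit * se

# Per-run percentage changes for the hypothetical external blog post runs.
low, high = confidence_interval([1.17, 0.89, 0.91])
print(f"70% CI: {low:+.0%} to {high:+.0%}")
```

With only a handful of runs, the t distribution is a safer choice than the normal; either way, the interval is only as trustworthy as the experiments behind it.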

[Chart: 70% confidence intervals for each campaign's effect on page views, color-coded green, red, or gray]

The graph reads something like "We are 70% sure that external blog posts increase web traffic by 101%, give or take 90%," or "We are 70% sure that mass emails decrease web traffic by an average of 52%, plus or minus 44%." The color-coding follows the confidence intervals: if our 70% confidence interval is entirely positive, we color the interval green; if it is entirely negative, we color it red; and if the interval spans both positive and negative values (the effect could go either way), we color it gray. The black bars mark the center point of each interval.
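The color assignment itself reduces to a couple of comparisons against zero. A minimal sketch, where the first two interval endpoints come from the examples quoted above and the third is made up:

```python
def interval_color(low, high):
    """Green if the whole interval is positive, red if it is entirely
    negative, gray if it straddles zero."""
    if low > 0:
        return "green"
    if high < 0:
        return "red"
    return "gray"

print(interval_color(0.11, 1.91))    # external blog post: 101% +/- 90% -> green
print(interval_color(-0.96, -0.08))  # mass email: -52% +/- 44% -> red
print(interval_color(-0.20, 0.35))   # straddles zero -> gray (made-up example)
```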

Why It Matters

From this experiment (without using any marketing technology) we’ve found that certain tools, like blog posts and promotional videos, tend to result in increased exposure. We’ve also found that mass email, on average, can hurt our total page views. Finally, we don’t really have evidence that Facebook posts, direct mail, or tweets are effective marketing tools, but we also can’t claim they hurt us.

Now, this case study was detailed using a Silicon Valley company’s marketing data, so these findings don’t necessarily apply to your organization. Why? Because your company posts on Facebook and Twitter in a different way than they do, and the same goes for mass email, promotional videos, etc. Not to mention you probably have a different audience. However, take this as an example of how you might look at program metrics to determine ROI.
