Big Data, Big Solutions: Understanding Prospects Based on Twitter Data

February 19, 2016

As a data scientist, I’m constantly thinking about problems, solutions, and patterns. And as a data scientist at a predictive marketing company, I’m constantly thinking about how to solve the problems of the marketer. There is so much data out there to work with: a marketer’s CRM, DMP data, social media; the list goes on.

Something that keeps coming up for me lately is how powerful social media can be for B2B companies. But not in the sense of how we commonly think about it: social capital, building relationships, and building up a brand. All of these are incredibly important to a marketing team, but what about market sentiment? Isn’t that important too? People are on social media talking about conferences, products, experiences; everything is fair game, really. What if marketers could automatically and consistently scan social media to get a feel for customer sentiment?

With a bit of creative thinking and some data from social platforms like Twitter and LinkedIn, we can leverage data science to quantify customer sentiment in the same way the US population quantifies presidential performance using approval ratings. And yes, we are going to talk data science here, but I’m going to make it simple and fun. I promise.

The Main Idea

For clarity of exposition, let’s focus on IBM in particular. Our goal is to somehow quantify consumer sentiment: to develop a metric (i.e., just one number) that will tell us how people feel about IBM on any given day. A great place to find speculation, criticism, and public opinion about almost any topic is Twitter. If we can teach a computer how to distinguish negative Tweets from positive ones, we can quickly gain an idea of how the public feels about IBM without having to read a single Tweet ourselves.

How it Works

The basic building block of any predictive model is data: using Twitter’s API, we’ll download the most recent Tweets containing the word or hashtag “IBM.” Each Tweet is at most 140 characters long and can contain words, hashtags, and emoticons. Since computers learn by example, we need to label each Tweet as positive or negative. This is called supervising the computer’s learning process.
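Concretely, the download step might look something like the sketch below using the tweepy library. (tweepy, the credential placeholders, and the hand-labeling step are my illustrative assumptions here, not code from the original post.)

```python
import tweepy

# Authenticate against Twitter's API (credentials are placeholders).
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

# Pull the most recent English-language Tweets mentioning IBM.
tweets = [status.text for status in
          tweepy.Cursor(api.search, q="IBM", lang="en").items(500)]

# Supervised learning needs labels. In practice these are assigned by
# hand (or come from a pre-labeled corpus): 1 = positive, 0 = negative.
# labels = [1, 0, 1, ...]  # one label per Tweet
```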


We’ll also break the Tweets up, word by word, and record how frequently we see each word. This allows our computer to look for certain keyword patterns in each Tweet.
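In code, this word-counting step is essentially one line with scikit-learn’s CountVectorizer. (The token pattern below is an illustrative choice that splits on whitespace so emoticons like “:)” survive as tokens; it is one of many reasonable options.)

```python
from sklearn.feature_extraction.text import CountVectorizer

# Turn each Tweet into a vector of word counts (a "bag of words").
# Splitting on whitespace keeps emoticons like ":)" intact.
vectorizer = CountVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(tweets)  # one row per Tweet, one column per word

print(vectorizer.get_feature_names()[:10])  # a peek at the vocabulary
```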


Our dataset—the information we’ll feed our computer so it can learn to read Tweets—will end up looking something like this:

[Figure: a sample of the labeled dataset; one row per Tweet, one column per word count, plus a positive/negative label]

Now that our data is ready to go, we can begin training our computer to read Tweets. We’ll use a simple and popular linear classifier: logistic regression. The training process is quite simple: our computer will read every Tweet in our dataset and look for key distinctions between the words contained in positive and negative Tweets.
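In scikit-learn terms, training is just a couple of lines; a minimal sketch, assuming the X matrix and labels list from the snippets above:

```python
from sklearn.linear_model import LogisticRegression

# Fit a logistic regression classifier on the labeled word counts.
model = LogisticRegression()
model.fit(X, labels)  # X: word counts per Tweet; labels: 1 = positive, 0 = negative
```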

It will then identify the keywords that are most telling of the user’s sentiment. In this example, my computer found that positive Tweets about IBM tend to contain words like “:)” and “computer”, while words like “Microsoft” and “weird” were associated with negative sentiment. The computer also found that words like “hey” and “you” don’t really tell us much at all about how the user feels, since these words are used in both positive and negative Tweets.
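With a linear model these keywords are easy to read off: each word gets a weight, and the weight’s sign and size tell us which way (and how strongly) the word pulls. A sketch, assuming the vectorizer and model from above:

```python
import numpy as np

# Each column of X is a word; model.coef_ holds one weight per word.
words = np.array(vectorizer.get_feature_names())
weights = model.coef_[0]

order = np.argsort(weights)
print("most negative words:", words[order[:10]])   # e.g. "Microsoft", "weird"
print("most positive words:", words[order[-10:]])  # e.g. ":)", "computer"
# Words with weights near zero ("hey", "you") carry little signal.
```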

After the training process is complete, we can look back and see how accurately our computer distinguishes between positive and negative Tweets. In this hypothetical example, I got my computer to about 82% accuracy (i.e., roughly 8 out of 10 times, it correctly guessed whether a Tweet was positive or negative). Not bad for a model as simple as logistic regression. Now, all we have to do is feed our computer a Tweet and it will tell us, based on the words used, whether the user is saying something positive or negative about IBM.
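To get an honest accuracy number, we hold out some labeled Tweets the model never sees during training and check its guesses against the true labels. Something like this (again a sketch, not the post’s original code, and the new Tweet is made up):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Train on 80% of the labeled Tweets; measure accuracy on the held-out 20%.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))  # e.g. ~0.82

# Classify a brand-new Tweet.
new_tweet = vectorizer.transform(["Watson is amazing :)"])
print("positive" if model.predict(new_tweet)[0] == 1 else "negative")
```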


Why This Matters

On any given day, we can tell our computer to visit Twitter and read every Tweet posted about IBM; in return, it will tell us (with 82% accuracy) what percentage of those Tweets were positive. Come on, you have to admit that’s cool!
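That daily number is just the fraction of the day’s Tweets the model classifies as positive; a sketch, assuming the trained model and vectorizer from earlier:

```python
def approval_rating(day_tweets, vectorizer, model):
    """Fraction of a day's Tweets classified as positive (0.0 to 1.0)."""
    X_day = vectorizer.transform(day_tweets)
    return model.predict(X_day).mean()  # predictions are 0s and 1s

# e.g. approval_rating(todays_ibm_tweets, vectorizer, model) -> 0.64
```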

In many ways, this percentage can be taken as an approval rating for IBM: a way to measure how successful marketing campaigns are, and how the public feels about IBM in general. Since there are roughly 500M Tweets each day, a technology like this saves a substantial amount of time while providing incredible insights for decision makers. Consider the following brief demonstration.

IBM recently released a public API for the Watson supercomputer—an artificially intelligent machine that can interact with humans through speech and natural language processing. Let’s take a look at the behavior of three different metrics around the time the API was released:

[Figure: IBM’s stock price, Google search volume, and the Twitter approval rating in the weeks around the Watson API release]

While the stock price tells us a story about how investors received this product release, and the number of Google searches informs us about product buzz, our approval rating metric obtained from Twitter captures something a bit more intrinsic: consumers warming up to the product over time.

This is one particular example of how text classification can be used to inform us in a business setting. Certainly, this technology has a much broader set of applications, from teaching your computer how to read your girlfriend’s impossible text messages to teaching it how to process Federal Reserve press releases for information on future interest rate increases.

See, that wasn’t so bad, right?! By using data, marketers can get more precise in how they speak to customers, how they roll out products, how they plan events, and more.


Disclaimer: The tweets used in this article and the stock prices have been altered so as to maintain the privacy and anonymity of individuals and entities discussed therein.
