BrandSavant

Gaining Insight From Social Media Data

Help Me Set My Career Back Five Years

by Tom Webster on February 26, 2013


Yes, BrandSavant readers, you have a chance to set my career back by five years–maybe more–by enabling a flood of personally embarrassing content.

On April 15th (tax day!), my beautiful wife Tamsen and I are going to run our very first marathon–and it’s a big one. We have decided to run the Boston Marathon on behalf of the Boys and Girls Clubs of Dorchester, an organization dedicated to providing inspiration, resources, and (most importantly) the power of possibility to some kids who could really use a hand here in the Boston area. As parents ourselves, we know that the most important thing you can give a child is the belief in their own potential to realize their dreams. Knowing that there are kids out there who don’t–yet–believe that is a powerful motivator.

We have some ambitious fundraising goals, so if you’ve ever wanted to show your appreciation for the sporadically posted and occasionally tolerable content here at BrandSavant, I’d love it if you could support these kids with a donation–whatever you can manage–at our fundraising page here.

My wife has written a very compelling appeal on that page for the many good reasons this worthy organization could use your help. I, however, have chosen to take the low road. To that end, if you look at the various levels of donation to the right of the page, you’ll see the various ways that I will debase myself to inspire your donations. My mullet-festooned high school picture? Check. Re-enacting the entire “Battle Dance” scene from Pat Benatar’s “Love is a Battlefield” video? Check. My Boz Scaggs impersonation, for YouTube to see? Check check double check.

I’ll stop here, before it gets worse. Please donate if you can, what you can. My video camera stands at the ready. MAKE IT HAPPEN. Donate today!



The Survey Is Dead

by Tom Webster on February 25, 2013


I came across a couple of things this weekend that *almost* inspired a rant. However, ranting doesn’t agree with me in the long term, so I slept on it, came up with a good link-bait-y title, and wrote this instead.

The first thing I came across was the results of a survey a BrandSavant reader sent me that showed social media platform usage across younger demos. When I saw that their results disagreed *violently* with some of the data put out there by Edison, Pew, Nielsen and other reputable sources, I looked into the methodology. Turns out, it was from a new survey tool that trades survey question responses for access to content across a publishing network–and I’m *not* talking about Google Consumer Surveys, which has a robust and growing network and continues to iterate.

No, this is a different tool, and a different “publishers network” (and, hence, a different sample base) from Google’s. Here’s what I could learn about the sample base for this survey, and for other surveys offered on this platform, expressed in Danish: ∅.

The FAQ offered no guidance on even basic demographics, save for a line that indicated they use “inferred demographics” based on IP addresses, which seems like a lot of work when you could just *ask* me how old I am, considering I am taking your survey in the first place. But I digress.

The tool is not important here, so I’m not linking to it. But the existence of this tool, which does indeed provide “information” quickly and cheaply, is symptomatic of something larger, which I do want to address. But I wasn’t sure about how I would address it until I came across a couple of blog posts this weekend that I nearly left intemperate responses to. I’m glad I didn’t.

I read two posts this weekend from social media marketers that disparaged survey data in favor of real-time social media data. The people who wrote these posts are smart people. So I’m not linking to the posts, because I’m not making this about the posts (or their authors). Instead, I want to talk about the sentiment behind these posts: that survey data is “artificial,” while social media data is unfiltered and “real.”

The fact is, neither source of data is perfect. Here’s a simple question that no social media monitoring tool alone can answer: what percentage of a brand’s customers or prospects tweet, or read tweets, about that brand? That’s a pretty basic question that a survey can answer that makes all of that social media monitoring data comprehensible.

Pitting survey and server data against each other is a ridiculous false choice. Smart market researchers use *both* to generate insights on behalf of their brand or client. Survey research generates insights in a controlled setting. Social media monitoring generates insights in an uncontrolled setting. As a professional market researcher, I use the latter to generate language and ideas, but I use the former to *test* them scientifically. In short, I use them both. I *need* both and I’m glad to have all of these things in my arsenal.

Put another way, both surveys and social media monitoring are tools in my utility belt. Positing that listening will kill asking is like saying hammers will kill wrenches. What discourages me about these posts, and the data generated by the survey tool I mentioned above, is this: both “tools” generate research data. I do not dispute that. But here is a distinction with a difference:

There is a wide chasm between “research” and “market research.”

Unfiltered social media data that doesn’t address the caveats of social media research gives you: research. Putting survey questions to a sample you can’t characterize gives you: research.

When a client asks me to find the features and benefits that financial services professionals want to see in an investment portal? I need to give them market research.

Unfiltered DIY questionnaires and raw social media monitoring give you information about the people. Market research researches your target market. It offers actionable insights based upon the data we can gather–both through social media and questionnaires–about a sample of the people you want to reach.

A penultimate thought about what the Internet has done to research: we have never had more access to bulk data than we have today. In a sense, the two items that inspired this post work at opposite ends of the same continuum. While social media monitoring gives us more data about how people react to brands in the wild than we’ve ever had, the spate of DIY research tools gives us more access to directed “answers” than we have ever had. But quantity is not quality. The metric tonnage of data we can mine from Twitter, and the quantity of data thrown off by the DIY “polling” apps of the world, are all the by-products of great tools that can certainly provide research, but need work to make the leap to providing market research.

If you can’t answer within statistically acceptable parameters who tweeted about your brand, or who answered your question, you have research. If you *can,* you have market research. That’s the hard fact.
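Those “statistically acceptable parameters” have concrete math behind them. As a rough sketch (the figures below are invented for illustration, not drawn from any study mentioned in this post), here is the standard 95%-confidence margin of error for a proportion measured from a simple random sample:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical example: 400 respondents, 22% say they have tweeted
# about the brand.
moe = margin_of_error(0.22, 400)
print(f"22% +/- {moe:.1%}")  # roughly +/- 4.1 points
```

The point of a characterized sample is exactly that this kind of bound can be computed at all; with an unknown sample base, no such statement is possible.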

Finally, a thought to those who continue to believe that surveys have “stopped working” due to the Internet, mobile phones, and other factors: on Election Day in 2012 we fielded over 100,000 surveys in one day across 50 states to provide the sole, lasting record of who voted and why, and to give the major U.S. news networks decision support to project the results of nearly 100 races, from the Presidential election on down to various Senate and House races and assorted referenda. No inaccurate projections were made. It wasn’t magic; it was science, which still–refreshingly–works just fine.

The Internet has given my profession an enormously powerful set of tools to work with, including social media monitoring and quick Internet surveys. But the Internet didn’t kill the survey. It made the survey better.



“The Usual Caveats” of Social Media Research

by Tom Webster on February 19, 2013


Not for the first time today, a piece of research was passed along to me by someone warning that it was social media research, “so the usual caveats apply.”

This phrase is starting to become meaningless to me. Does it have meaning to you? When I look back through various studies I’ve seen that have been derived from social media clickstream data, I often see that they’ve been shared with “the usual caveats.” To me, that phrase has become the new “interesting”: that thing you say about a piece of data when you don’t know what it means.

Here’s the thing about the usual caveats: if you do the work, those caveats are quantifiable. When someone denigrates TV ratings research, for example, because those ratings are based upon a sample of a few hundred viewers who keep diaries/electronic records of what they watch, I’m quick to remind people that this sampling methodology is predictable, operates within known parameters, and is based upon a methodology accredited by the Media Rating Council, an independent body established to ensure the validity, reliability and effectiveness of those estimates. And by the way, estimates aren’t guesses.

But when people look at social media research and accept it “with the usual caveats,” without knowing what those caveats actually are, they risk doing great violence to the truth and certainly some damage to their brand, if not their psyche. To accept this is to ignore the fact that *no research* is generally better than *awful research.* After all, doing *no* research means that you have a chance of making a poor decision (or no decision); relying on *crap* research guarantees that bad decision.

So, the next time you review, endorse or pass along a piece of social media research with “the usual caveats,” consider what some of those caveats might actually be. For the most part, they all concern the same thing: how representative of (and therefore projectable to) some known population that data is. And note: social media data need not be representative, if it is predictably, quantifiably and/or repeatably non-representative. So, what are my caveats? Here are a few:

1. What specific sources comprise the data, and in what proportions? (BTW, this is where most social media monitoring and research platforms fall flat, and I’m only on #1).

2. Is the data weighted to some known quantity, or did it come straight from the “spigot”?

3. Can we calibrate the data using some known secondary research?

4. Can we calibrate the data with primary research?

5. Can we track the conversational history of the data sources to gauge sample quality?

6. Can I pull a random sample of raw data to manually test against automated descriptive statistics?

7. What do I know about the customers of the brand in question who do not tweet about this product?
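Caveat #2 above, weighting, is the most mechanical of these, and worth a quick sketch. Below is a minimal, hypothetical example of post-stratification: rescaling a raw sample so its age mix matches a known population distribution. Every number here is invented for illustration; real weighting schemes use many more cells and a real benchmark such as census data.

```python
# Hypothetical post-stratification sketch. All figures are invented.
sample_counts = {"18-34": 500, "35-54": 300, "55+": 200}        # raw sample
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # known benchmark

n = sum(sample_counts.values())
weights = {
    group: population_share[group] / (count / n)
    for group, count in sample_counts.items()
}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# A weight above 1 means that group was under-represented in the raw
# ("straight from the spigot") sample relative to the population.
```

Data that cannot even be described well enough to build a table like `population_share` is data that cannot be weighted, which is exactly the problem with an uncharacterizable sample base.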

Now, do you need to know the answers to all of these things to judge the quality of a piece of social media research? No. There is no such thing as perfect information, whether that data comes from surveys or servers. That’s why these are caveats. These are the things you need to know that you know or don’t know before you know what you know. Or, as one of America’s greatest poets once said,

The Unknown

As we know,

There are known knowns.

There are things we know we know.

We also know

There are known unknowns.

That is to say

We know there are some things

We do not know.

But there are also unknown unknowns,

The ones we don’t know

We don’t know.

I pass this “poem” along with the usual caveats.



Some Big News For Podcasting

February 12, 2013

My compadres at Edison and I are very excited to announce that we are now providing podcast measurement ratings and [...]

Why We Brand

February 4, 2013

I’m not the only one who has said this, but the GoDaddy ad during the 2013 Super Bowl was *appallingly* [...]

The Forgotten Power of Audio

February 4, 2013

This is, we are told, the era of video. Of images. Of YouTube and Instagram and Vine and .gifs. I [...]

Waystations

January 30, 2013

Apple is an enormously successful company, and as such has spurred a cottage industry in business writing as we all [...]

The Facebook Flea Market

January 2, 2013

Today I logged into Facebook and was greeted with the ad bar below: You know it. You’ve seen it. Facebook [...]

What The Mayans Can Teach Us About Social Media And Marketing

December 21, 2012

Yeah, ummm…I got nothing. Happy Holidays, and I’ll see you in this space in 2013. Thanks for reading my stuff. [...]

The Election’s Most Important Insights For Marketers Part Two: The Crystal Ball

December 11, 2012

The myth of the “crystal ball” is not a new topic for this blog, but the election brought it back [...]

