Why Google SERP CTR studies are a waste of time

June 20, 2011 – 8:09 pm

Historically, one of the most quoted, used and abused SEO metrics has been ranking. It is a cornerstone of reports sent to clients, one of the first things we check for clients’ sites, and usually the cause of alarmed phone calls when rankings drop. Over time, SEOs have learned to steer away from rankings as the major reported metric, although I have yet to find someone in the industry who doesn’t check them.


Another exercise we like to engage in is predicting how much traffic ranking in the top 10 will bring to a website. It quickly became obvious that not every position in the top 10 brings the same amount of traffic. It is self-evident that positions 1-3 will bring more traffic than positions 7-10, but how much more? The effort involved in pushing a site from #5 to #2 can be significantly larger than the effort to bring it from #50 to #10, so are the extra traffic and conversions worth the effort?

To answer this, people started investigating how traffic divides across the top 10 positions. Several methods were used in these investigations, the two most prominent being eye-tracking and averaging click data per position, taken from analytics or from data (mistakenly) released by search engines and ISPs. I have reviewed a number of such studies and their predictions varied: the first 3 positions get somewhere in the neighbourhood of 60-80% of clicks, while the rest is divided over positions #4-#10. You can already see the high degree of variability in these studies – 60-80% is a pretty wide range.

All of that sounds well and good on paper, and the numbers can be turned into pretty charts. More importantly, we have numbers that we can translate into business goals, and all clients like to hear that: “if we reach #3, we will get X visitors, and if we put in more effort (AKA increase the SEO budget), we can get to the #1 position, which will mean X+Y visitors, conversions, whatnot”.

However, when one starts looking at the numbers out in the wild, the picture that emerges is significantly different. The numbers do anything but fall into precalculated percentage brackets, and the division by percentages fluctuates so wildly that pretty soon you find yourself tossing your predictions into the bin; all you can say is that improving your rankings will get you more traffic by an unknown factor. This has not escaped the original researchers of CTR distribution: they are constantly trying to improve the accuracy of their findings by focusing the research on similar types of queries (informational, transactional, brand, etc.) or similar types of searchers, but there is a much greater number of parameters that influence the spread of clicks over different positions. I will try to outline a number of such parameters, as I encountered them with some of our own or our clients’ websites:

Universal search (local, shopping, images, video, etc.)

The introduction of different types of results in SERPs was Google’s attempt to cover as many user intentions as possible. This is particularly true of short search queries: when someone searches for [flowers], it is hard to tell whether they want pictures of flowers, instructions on how to care for flowers, places to buy flowers or time-lapse videos of blooming flowers. Therefore, Google tries to include as many types of results as possible, hoping to provide something for everyone. iProspect’s 2008 study shows that 36%, 31% and 17% of users click on Image, News and Video results respectively, which means these results will seriously skew the CTR distribution over the SERP. This is particularly true of queries with local intent, where Google Places results are featured prominently. Compare the following 3 SERPs (click to enlarge):

[Screenshots: three SERPs showing different mixes of universal search results]

There is no chance that the #3 position gets the same percentage of clicks on each of these SERPs, and the same holds true for every other position. Since the serving of universal search results depends on geo-specific (IP geolocation) and personalized (location, search history) parameters, there is no way to accurately predict how the clicks will distribute over the different positions in SERPs.
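To make that concrete, here is a minimal sketch (Python, purely illustrative) of how a fixed per-position CTR table falls apart once a universal block absorbs part of the clicks. The #1, #3 and #4 baseline figures reuse numbers quoted elsewhere in this post; the remaining positions and the 30% share taken by the universal block are hypothetical assumptions, not measured data.

```python
# Illustrative sketch only. Baseline CTRs for #1, #3 and #4 are the figures quoted
# in this post (42%, 8.5%, 6.06%); all other values, and the share of clicks taken
# by a universal block, are hypothetical assumptions.

baseline_ctr = [0.42, 0.12, 0.085, 0.0606, 0.05, 0.04, 0.035, 0.03, 0.025, 0.02]

def adjusted_ctr(baseline, universal_share):
    """Rescale organic CTRs assuming a universal block takes a fixed share of clicks
    and the organic listings split the remainder in their original proportions."""
    remaining = 1.0 - universal_share
    total = sum(baseline)
    return [round(ctr / total * remaining, 3) for ctr in baseline]

# An Image/News/Video block that captures 30% of clicks knocks position #1
# from a predicted 42% down to roughly 33%.
print(adjusted_ctr(baseline_ctr, universal_share=0.30))
```

Even this toy model moves every position well outside the studies’ brackets, and in reality the share taken by universal results varies per query, per location and per user, which is exactly the problem.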

Search intent

Different people mean different things when searching, and the shorter the query, the wider the scope of possible intentions. We can be pretty sure that someone searching for [online pizza order NY] is looking to order dinner tonight, but a person searching for [pizza] may be searching for a recipe, in which case a pizza joint with an online ordering website at #1 will not get the recipe-seeking clicks; those searchers will either perform a second search or browse deeper into the SERP and on to the 2nd and 3rd pages of results. If the majority of searchers are looking for a recipe, then the pizza vendor at #1 will most definitely not get 40-50% of the clicks. We have seen this happen with one of our clients, ranked at #1 for a single-word keyphrase. The Google AdWords Keyword Tool predicts 110,000 exact-match monthly searches in the US. During the month of May, while ranked #1 on Google.com for that keyword, the site received about 7,000 US organic visits. That is a 6.36% CTR at position #1. Google Webmaster Tools shows a 10% CTR at #1 for that keyword. Compare that to the 50.9% CTR that the eye-tracking study from 2008 predicts for the #1 position, which means a site at #1 should be receiving around 56K monthly visits, and you get a feel for the gap between prediction and reality.

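For anyone who wants to check the arithmetic, the gap between the study’s prediction and the observed traffic works out like this (a quick back-of-the-envelope calculation using only the numbers quoted above):

```python
# Prediction vs. reality for the single-word keyword described above.
monthly_searches = 110_000     # AdWords Keyword Tool, exact match, US
actual_visits = 7_000          # US organic visits in May while ranked #1

actual_ctr = actual_visits / monthly_searches          # ~6.36%
predicted_ctr = 0.509                                  # #1 CTR per the 2008 eye-tracking study
predicted_visits = monthly_searches * predicted_ctr    # ~56,000 visits

print(f"actual CTR at #1: {actual_ctr:.2%}")                                         # 6.36%
print(f"predicted monthly visits: {predicted_visits:,.0f}")                          # 55,990
print(f"prediction overshoots reality by {predicted_visits / actual_visits:.0f}x")   # ~8x
```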

The problem with guessing the intent behind single-word queries is that it cannot be done before reaching positions that generate significant traffic, and that task can be enormous. Low CTRs at the first position also call into question the whole perceived market value of single-keyword exact-match domains – the traffic expected from reaching top positions for the EMD keyword should take into account how hard it is to predict searcher intent.

One possible benefit of ranking for single-word queries is appearing in SERPs even if your result is not clicked. If your Title is unique and memorable, there is a higher chance that people will click through from the SERP of a follow-up query if they have already seen your site in the first one. For example, if you rank for [widgets] and your title says “The Cheapest Blue Widgets in the World!”, then visitors who perform a second search for [blue widgets] are more likely to click on your site, even if you are in a lower position (provided that your title is not pushed below the fold by Local, Video, Image or other Onebox results). Yahoo has a nice keyword research tool that shows the most common previous and next queries around your keyword of choice (I think it is only relevant for US searches).

[Screenshot: Yahoo keyword research tool showing previous and next search queries]

Brands outranking you

According to one of the CTR studies, dropping from #3 to #4 should cost you about 30% of your traffic (if the 3rd place gets 8.5% and the 4th place gets 6.06% of total traffic), but is that always the case? Let’s see how a drop from #3 to #4 and then back to #3 played out on one of the SERPs:

[Screenshot: organic traffic for the keyword as the site dropped from #3 to #4 and returned to #3]

The SERP for this keyword has neither local results nor universal search (images, videos, news). Pay attention to the difference in numbers between the original #3 traffic and the later #3 traffic. The initial drop to #4 cost this site 76% of the original traffic (instead of the 30% the CTR studies predict), and the return to #3 brought back only about 47% of the traffic the site got when it was at #3 originally. So what happened? The site got outranked by a famous brand. Seeing the brand name in the title and URL of the #1 listing enticed searchers to click through to that listing in much higher numbers than the 42% that the study predicts. Again, it is not only where you are ranked, it is also who is ranked with you that influences your CTR percentages. My guess is that even when a big, well-known brand doesn’t outrank you but is located just below your listing, your CTR percentages will be negatively affected. It seems Google’s preference for brands is grounded in actual user behaviour. It would be interesting to see whether there are any significant differences in bounce rates from brand listings too.
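As a sanity check on this example, here is the predicted loss from the study’s figures next to what the site actually experienced:

```python
# Predicted vs. observed traffic loss for the #3 -> #4 drop described above.
ctr_pos3, ctr_pos4 = 0.085, 0.0606      # per-position CTRs quoted from the study

predicted_loss = 1 - ctr_pos4 / ctr_pos3    # ~29%, the "about 30%" the study implies
observed_loss = 0.76                        # what this site actually lost

print(f"predicted loss: {predicted_loss:.0%}")    # 29%
print(f"observed loss:  {observed_loss:.0%}")     # 76%
```

The difference between the two is entirely down to who took over the #1 spot, something no averaged CTR curve can account for.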

Optimized Listing

There are many ways to make your listing on a SERP more attractive to clicks. The linked article outlines the most important ones; however, I would like to add a few points to three of those tactics:

  • Meta Description – In an eye-tracking study performed by a team including, among others, Google’s Laura Granka (published in the book Passive Eye Tracking, in the chapter “Eye Monitoring in Online Search”), it was found that the snippet captured most of the searchers’ attention (43% of eye fixations), followed by the Title and the URL (30% and 21% of fixations respectively). The remaining 5% was distributed over the other elements of the listing, such as the Cached Page link. This finding shows the importance of controlling the content and appearance of your snippet for your main targeted keywords, as an eye-catching snippet can tip the CTR scales in your favour and increase the number of clicks you get relative to your site’s ranking.

  • Rich Snippets – the above figure of 43% of attention dedicated to the snippet probably becomes even higher when we talk about rich snippets. Google and other search engines have been investing a lot of effort into encouraging an added layer of classification on the data presented on web pages, and if the information is marked up in an accepted way, it is presented directly in the snippet in SERPs. Check out the following SERP:
    [Screenshot: SERP listing showing review stars in the snippet]
    The review stars shown in the snippet itself make this listing more attractive than its neighbours and probably increase its CTR far beyond the few percent that position #6 would get according to CTR studies. A similar situation can be observed with other microformatted data, such as event dates, recipes, airplane tickets, etc. Rich snippets will become even more widespread if and when Google starts accepting websites that implement Schema.org tagging.

  • Optimized Titles – adding a call to action to the Title, or making it stand out in some other way, can increase a listing’s CTR beyond what you would usually get from your ranking alone. This strategy, however, has become less influential since Google started deciding what the best Title for your page should be, based on, among other things, the anchor text of incoming links. As these changes are applied at Google’s discretion, it becomes very hard to predict how many people will click through to your site.

These are only some of the reasons that CTR percentages can differ greatly from the values predicted by eye-tracking or other studies. It is important to remember that as search engines index and incorporate more and more types of results into SERPs, it will only get harder to predict the CTR per position. Even averaging the figures over a large number of results gives us a number that is not at all useful for predicting the traffic volume each position will receive.

(Image courtesy of iamyung)

Posted in General, Google, SEO

32 Responses to “Why Google SERP CTR studies are a waste of time”

  2. Excellent post – it’s great to see someone bring some proof to the table on why estimating traffic is a waste of time.

    I’ve never needed proof… common sense told me that the degree to which query spaces are affected by user intent and the highly varying quality of SERP results makes this an exercise in futility, and definitely not something I want to use in a pitch… it can make you look like a fool or a genius… I have a better idea after watching a query space and figuring out some of the variables that can affect it.

    To your 6% CTR point: the quality of products/sites/brands in the paid search above the organic results also makes broad statements about CTR useless, as most of those studies were done before Google started camouflaging the PPC results directly above the organic.

    By Terry Van Horne on Jun 20, 2011

  3. Hey, quite a few very good and interesting points at the beginning of your article, which read as fluid and useful.
    If I were you I would have omitted the last bit with the optimisation suggestions, just because they are always the same.

    By Andrea Moro on Jun 20, 2011

  4. Branko,

    Thank you for taking the time to put in the legwork on this one. Personally, I find it laughable that people make such claims. I’ve had to help more than a couple clients understand that it’s bogus “data”.

    By Alan Bleiweiss on Jun 21, 2011

  5. Thanks for the thorough post – it clarifies many things I long suspected but didn’t want to spend the time to investigate myself.

    By Rick Bucich on Jun 21, 2011

  6. Good extraction of knowledge! I agree – I used to focus on two-word terms, and after seeing my pages on the first page and looking at search volume estimates, CTRs did not increase as much as I thought they should. Long tails with focused intent are much better for CTRs and conversions. This is especially true for broad keyword terms found within larger eCommerce ontologies.

    By Scott on Jun 21, 2011

  7. Another major factor I’ve observed, and suspect makes CTR forecasting on many keywords total folly, is Instant + Suggest. Sometimes as you enter a shorter root keyword, Suggest will spit out a longer string of words that might be enjoying a spike in popularity. Instant will then generate a SERP that serves the longer-tail keyword without many even realizing it, potentially undermining the search volume forecasts (which are also dodgy as hell) and any visibility you earned for the broader (shorter) keyword. Go ahead and calculate a CTR on that.

    By Minchala on Jun 21, 2011

  8. Excellent article!
    Thanks for sharing this information.

    In my experience the CTR information in Google Webmaster Tools is often incorrect.

    Greetings from Italy

    By Francesco on Jun 21, 2011

  9. Very interesting findings. I’m glad to see that long tail is still our best friend. I was discussing this with a colleague and we both agreed that’s how “normal” people search. In the “SEO” industry, and as people who work online, we forget to look at results from the client’s perspective and how the user actually searches. Or, god forbid, that there are other variables that can affect your results. Thanks

    By Gabriella Sannino on Jun 21, 2011

  10. Brilliant post !

    I agree that you can’t predict a CTR for a rank and a keyword, as you don’t know what the person is really looking for (in the case of a single-word query).

    This is why in SEO it is essential to focus on keywords that drive conversions rather than keywords (single query words) that drive traffic, because those are too generic!

    By davidseo on Jun 21, 2011

  11. excellent summary, thanks for putting all that together and linking the findings together.

    I agree, WMT data should not be trusted as such, rather used in relative ways.

    I also agree that while position is good, it’s not king, especially if you start talking conversion as well as CTR…

    all the best

    By KBSD on Jun 21, 2011

  12. Brilliant article! Glad you’ve put all these arguments together. It’s a very controversial topic and causes many ‘battles’ among SEO folks… The SERP landscape changes so rapidly, and so many things can influence the way results on Google are displayed on a given day; add to it the user intent and it really is hard to predict what users will click on! I’ve seen examples of increased ranking bringing less traffic and the other way round. I think using CTR data for forecasting nowadays can put you in a dangerous position.

    By Jowita Blakjovi on Jun 21, 2011

  13. Great study and good to see some actual data (unlike most articles that say you should look at CTR).

    We’ve seen noticeable increases just from changing a Meta title and description, which would clearly have poached CTR % from a competitor.

    The best way to tell why you’re losing clicks for a term in the same result is just to look at the SERPs. Are other ads better? That could be why.

    By Koozai Mike on Jun 21, 2011

  14. I think this is an excellent post, although I do not agree 100% that volumes are a waste of time, as Hitwise is the best tool I have used for looking at trends, market share and click volumes across certain periods of time etc. (there are lots of options). Google Trends is also useful, but the Google keyword tool numbers are way out, which does make predictions using that tool a waste of time.

    By @kfovargue on Jun 21, 2011

  15. Awesome post. Great point about the connection between single query keywords and follow-on intent queries.

    It seems to me the caveat is this…if you’re a high-ranking brand for certain types of intent queries, you can build a relatively accurate model for similar types of intent queries. For example, if I have a site that ranks well for “find an apartment in Austin/Dallas/Houston,” I should be able to build a relatively accurate model that shows the value of improving rank performance for other cities. I’d expect more variability for sites like job search sites and e-commerce sites, but it seems like you could still build a model that’s predictive enough to be valuable in many cases.

    Seems like for this to work/make sense, you need two things: 1) it has to be focused on baskets of very similar intent keywords, and 2) it has to be a site with a TON of similar intent keywords and enough ranking history to model (otherwise, the value that comes from building a model isn’t worth the cost).

    interested in hearing your thoughts..

    By Paul May on Jun 21, 2011

  16. Thanks all for the great comments. Here are some responses:

    @kfovargue: I have yet to see one of these tools give somewhat useful predictions. They are either PPC oriented (Spyfu, SEMRush) or completely and utterly off the mark with their numbers (Compete, for example). I have actually seen Compete underestimate traffic by a factor of 3 and overestimate by a factor of 9. Totally useless, and I have heard others have had similar experiences. I suggest you compare the reports of these tools to your website analytics numbers and see how close they are and what CTR % you see.

    @Paul May: in principle I agree with you. That is why i wrote:
    “We can be pretty sure that someone searching for [online pizza order NY] is looking to order dinner tonight, but person searching for [pizza] may be searching for a recipe, in which case a pizza joint with online ordering website located at #1 will not get the recipe seeking clicks, which will either perform a second search or browse deeper into the SERP and on to 2nd, 3rd page of results.”
    However, there is a limit to these predictions. One of them is the brands ranking around you. What if one of your local competitors starts a TV campaign that totally increases their brand awareness? They could “steal” a big portion of clicks even without outranking you. What if Google decides to enter that vertical (as they did with mortgages, hotel prices, credit cards, flights, etc.)? Suddenly you are competing with a megabrand whose listing is usually presented above the fold in a click-attracting manner. What if your SERP starts showing product ads or images? The number of changes Google can inflict on the main gateway to your site is endless, so the CTR of today may be totally different from the CTR of tomorrow.

    By neyne on Jun 21, 2011

  17. I really enjoyed this article. I found it particularly interesting that the appearance of the major brand had such a great effect on the click through rate. Thanks so much for taking the time to lay this out in such a clear way.

    By Marjory - MediaWhiz on Jun 21, 2011

  18. Glad someone finally came out and said all of this. SEOs have known this for a long time, especially if you work at an agency and review many sites.

    Now we just need to figure out how to not have clients FORCE us into making predictions on how long it will take to get ranking position “X” and how much traffic to expect from it!

    By Miguel @organicseoconsultant.com on Jun 21, 2011

  19. Well Miguel, let’s hope this post will help you convince the clients.

    By neyne on Jun 21, 2011

  20. Great post. In particular, I would add that B2B companies selling high ticket items and services should pay even LESS attention to these figures. It’s likely RFPs or RFQs will go out to 5 or 6 of the top ranking companies in the SERPs, unless they completely botch the sales call.

    I stopped worrying about that metric myself almost a full year ago. Without any negative impact.

    By Kevin on Jun 21, 2011

  21. Thanks for posting this, Branko.

    Some great insights here.

    By Glen Allsopp on Jun 22, 2011

  22. Brilliant blog post. I’ve said it before: SEO is now a bit like online anthropology. Just understanding the technical and algorithmic areas of the search engines is a waste; understanding how people search and interact with the SERPs is the future.

    By Lisa Myers on Jun 22, 2011

  23. Excellent article, absolutely correct. It’s very difficult to predict the actual CTR on every link, but the idea of implementing rich snippets is excellent.

    By Quantum Mutual Fund on Jun 22, 2011

  24. Great info, thank you. I wish the study had included SERPs with image and video thumbnails, since they are known to get a significant CTR no matter where they are on the 1st page.

    By Dave on Jun 22, 2011

  25. Superb article.

    SERPs are definitely becoming less test-friendly as they feature richer types of results: video, product, news, etc. And that’s without considering how, and to what extent, social search will be personalizing results.

    I think it would also be interesting to see how different generations of users behave when searching. Lots of big questions that still need to be tackled (and that could be broken down by age) include: a) do users still look at ads, and b) how do the different types of search results perform across generations?

    Back to the original point, I hope advertisers and clients happen upon your post!

    By Alvin on Jun 28, 2011


  27. I think the best way to do a SERP CTR study would be to target one keyword and capture all the rankings on SERPs (be it Video, Images or News – I understand that we can’t be in the news all the time, but still, to some extent…) and then measure the CTRs of these different webpages from analytics. Then it would be interesting to understand user behaviour and hopefully get approximate clues, which could then be presumed for clusters of keywords related to the targeted one…

    By Kumar on Jul 19, 2011

  28. I agree that it’s a waste of time to say with certainty, but it could help drive decision making if you make sure that positions don’t equal guarantees.

    It should be possible to take the studies that have been done, estimate the traffic in the Google keyword tool, find an average for each position according to the past research and find a probability using a t-distribution. But even that would depend heavily on the appearance of universal listings, maps and, as you mentioned, better known brands in the listings along with you.

    clients/executives just need to understand that the SERPs are a living, breathing organism. they adapt, adjust and change daily and the impact of those changes fluctuates. for big companies with thousands to millions of keywords it’s difficult to track the constantly changing impact, but estimating it still holds some value.

    By joe on Aug 2, 2011

  29. Thank you for your valuable information about SERPs. It’s really informative for me. I found it particularly interesting that the appearance of the major brand had such a great effect on the click-through rate.
