The Shatzkin Files

Doing SEO right requires research into the audience, not maximum knowledge of the book

Posted by Mike Shatzkin on February 17, 2015 at 9:06 am


There is a core point that Pete McCarthy made clear to us when we first started working with him on digital marketing challenges a year or so ago, which, critical though it is, seems extremely difficult for publishers to take on board.

For all our careers, descriptive copy — catalog copy, title information sheets, press releases — about any book was written by somebody who really knew the book. That normally meant it was drafted by a junior editor or marketer who had read every word of the manuscript, and perhaps even worked on developing it.

But in today’s world, where the most important job of descriptive copy is to make the book “discoverable” through search to the person likely to buy it, it must be written with knowledge of the potential audiences, and that knowledge can only be gathered through research.

The reason for this change is not hard for anybody to understand. Almost all publisher-generated copy until the past ten years was intended for B2B intermediaries: buyers at accounts, book reviewers and editors, or librarians. It was their job to translate an accurate description of what was in the book for their audiences. Most consumers never saw publisher-generated copy unless they were browsing a shelf and chose to pick up a book and read its flap or cover copy, which usually differs only slightly from the B2B copy.

And whether or not consumers today see publisher-generated copy on a product page, search engines do, and consumers are increasingly driven by what search engines tell them. Writing copy without knowledge of the potential audiences, the language they use, how often specific search terms arise, what those terms signal about consumer intent, and the other people, places, and things (let alone books!) competing for those terms, is not going to achieve the desired results for discovery, no matter how accurately and eloquently the book’s content is described.
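To make the research step concrete: here is a minimal sketch of the kind of search-term comparison Pete advocates, using pytrends, an unofficial Python client for Google Trends. The candidate phrases and the book they describe are hypothetical; the point is simply that you test the language readers actually use before writing the copy.

```python
# A minimal sketch of comparing candidate search phrases before writing
# descriptive copy. Uses pytrends, an unofficial Google Trends client
# (pip install pytrends). The phrases below are invented for illustration.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)

# Phrases a potential reader might actually type, not phrases that
# merely describe what is in the book.
candidates = [
    "books set in paris",
    "paris historical fiction",
    "french revolution novel",
]

pytrends.build_payload(kw_list=candidates, timeframe="today 12-m")
interest = pytrends.interest_over_time()

# Rank the phrases by average search interest over the past year.
for phrase in sorted(candidates, key=lambda p: -interest[p].mean()):
    print(f"{phrase}: {interest[phrase].mean():.1f}")
```

None of this replaces the interpretive work, but it shows why the research is a separate task from knowing the book.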

Even if the logic is fully absorbed and appreciated, the challenge for most publishers to change their process for creating descriptive copy is substantial. We’ve now replaced “knowledge of the book”, which was routinely gained through work that takes place before the copy is needed, with “research into the audience”, a separate task that can take a couple of hours or more and requires a dedicated effort.

(A parenthetical point here: if that audience research were done before the book was completely written, it could inform what content would sell best, not just what descriptive copy would be most readily discovered. That’s where publishers have to go in the long run, which suggests that editorial staff needs to learn the audience research techniques as urgently as marketers do. And we will add the massive understatement that knowing what this research would tell you can be extremely helpful in gauging the true potential audience for a book or author, which in turn would influence what you’d calculate to be a sensible advance.)

The research exercise we’re suggesting as a prerequisite doesn’t just take time: it takes knowledge and skill, as does applying what is learned to the copy. Even if the knowledge were there and distributed across all the people who write descriptive copy today — and there is no publisher on the planet in which it is — the time required for the research would tax the resources of any house.

And that’s before we get to the distractions that can make publishers forget the core point, and they are plentiful.

The most recent one we’re aware of surfaced twice: two weeks ago in a piece by Porter Anderson, and then again last week in an article in Publishing Perspectives which featured the new tech-driven book deconstruction and analytical capabilities developed by an ebook distributor called Trajectory. Trajectory acquired the assets of another auto-analysis engine, described in the piece as a “book discovery site”, called Small Demons.

What Small Demons and now Trajectory do — somewhat like BookLamp, which was acquired by Apple — is use natural language processing and semantic indexing to identify characteristics of the book that can be discerned by examining the writing. Small Demons seemed to focus on proper nouns, so it could find all the books that had action taking place in Paris. Trajectory and BookLamp focused as well on writing style, sentiment analysis, and story construction.
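For illustration, the proper-noun indexing described above is easy to approximate with off-the-shelf tools. Here is a rough sketch using the spaCy NLP library; to be clear, this is a generic stand-in, not the actual Small Demons or Trajectory pipeline, neither of which was ever published.

```python
# A rough approximation of Small Demons-style proper-noun indexing,
# using spaCy (pip install spacy; python -m spacy download en_core_web_sm).
# Generic illustration only: the real Small Demons/Trajectory pipelines
# were never made public.
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")
nlp.max_length = 2_000_000  # full manuscripts can exceed the default limit

text = open("manuscript.txt").read()  # hypothetical input file
doc = nlp(text)

# Count place names (GPE = geopolitical entity) so a catalog can answer
# queries like "every book with action taking place in Paris".
places = Counter(ent.text for ent in doc.ents if ent.label_ == "GPE")
print(places.most_common(5))
```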

The logic is that if you like books set in wealthy suburbs with handsome 34-year-old male protagonists who break four hearts before falling hopelessly in love, who speak eloquently with the frequent use of five-dollar words, and who then get chased by bad guys until the heroine comes to the rescue in the last chapter, we can find them for you.
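The matching itself is not mysterious. A generic sketch of the underlying idea (TF-IDF cosine similarity over the text, which is the textbook approach, not any vendor’s proprietary method, and with invented toy books) looks like this:

```python
# A generic illustration of "books like this one" matching via TF-IDF
# cosine similarity (pip install scikit-learn). Textbook technique only,
# not BookLamp's or Trajectory's actual method; the books are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: the full text (or a summary) of each book.
books = {
    "Book A": "A wealthy suburb, a handsome hero, four broken hearts, a chase.",
    "Book B": "A quiet village mystery featuring an aging detective.",
    "Book C": "A wealthy suburb, broken hearts, and a last-minute rescue.",
}

titles = list(books)
matrix = TfidfVectorizer(stop_words="english").fit_transform(books.values())
scores = cosine_similarity(matrix)[0]  # similarity of every book to Book A

# Rank the other books by textual similarity to Book A.
for title, score in sorted(zip(titles, scores), key=lambda t: -t[1])[1:]:
    print(f"{title}: {score:.2f}")
```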

Even before I met Pete McCarthy, it seemed dubious to me that the kinds of similarities these analyses could document really predicted what a person would want to read based on what they’d read before. This logic would only make sense if the objective were to recommend a “next book” to a reader, assuming they liked what they were reading and wanted their next book to provide a similar experience. (There clearly are readers like this and they are very visible in fiction genres, but I’m quite skeptical that most readers are like this.)

But if the point of the analysis is to create copy, off-page keywords, or even “comps” that will promote “discovery”, and you buy Pete McCarthy’s premise that delivering solid SEO (search engine optimization) depends primarily on “understanding audiences”, then it is clear that calling this kind of analysis a tool to aid “discovery” is a massive misnomer that mostly leads to a wild goose chase.

In fact, it is doubling down on the very thing the industry needs to rethink. It is not nearly as important to develop a deeper tech-assisted understanding of “the book” as it is to do research into the audience. And analyzing a book’s text doesn’t deliver that understanding.

The promise of BookLamp, Small Demons, and, presumably, Trajectory is that they can deliver an analysis that requires little or no staff time because they use sophisticated technology. And the main barrier to wider adoption of Pete McCarthy’s SEO techniques is that they require research that, even using the best tools, will take two to four hours of human investigation before the first word of copy can be written.

If you’re looking for books that are similar in style and content, the tech can help you and you should use it. But if what you want is to make your book pop in the searches of likely readers, you can’t dodge the work. And finding a book that is similar in writing style, pacing, and story construction really won’t help you at all.

“Discovery” is often discussed by publishers as though it were a problem consumers consciously have. I don’t think they do. My own unproven paradigm is that there are people who are always reading a book and people who are not. The former group knows perfectly well how to find books, and using search is part of many of their arsenals. For the latter group, the books tend to find them rather than the other way around, but today the best way for a book to find them would be when they’re searching for something else and a book would be a relevant return for the query. In either case, publishers have a vested interest in showing up for the right searches for the right people at the right times.

The Logical Marketing Agency we’ve built around Pete’s knowledge of digital marketing offers a variety of ways to help publishers with this challenge, including both having us do the audience analysis for particular books and delivering training seminars that can teach a publisher’s staff what it needs to know.


Tags: Apple, BookLamp, Logical Marketing Agency, Pete McCarthy, Small Demons, Trajectory
Posted in Authors, Direct response, General Trade Publishing, Marketing, New Models, Supply-Chain

No, the Big Five are not a cartel and it really ignores reality to label them as one

Posted by Mike Shatzkin on January 29, 2015 at 1:15 pm


One of the best-attended breakout sessions of Digital Book World 2015 was the discussion called “Should Amazon Be Constrained, and Can they Be?”, which shared the very last slot on the two-day program. That conversation was moderated by veteran New Yorker journalist Ken Auletta, and included Annie Lowrey of New York Magazine, thriller author Barry Eisler, and Barry Lynn of the New America Foundation.

It turns out that the two Barrys, who have pretty much diametrically opposed positions on Amazon (Lynn wants them investigated by the DoJ as a competition-stifling monopoly; Eisler casts them, for the most part, as the heroes of the book business’s digital transition), have a common position on the Big Five publishers. They refer to them as a “cartel”. Eisler is sneeringly dismissive of “New York”, which he invokes the way Republicans of the 1980s invoked “Moscow”: as an obvious pejorative. He appears befuddled by how anybody interested in the well-being of authors and the reading public could take the side of these publishers, who maintain high prices for books, contract with authors to pay them smaller percentages of sales than Amazon does (either through Amazon’s own publishing operations or through its self-publishing options), and notoriously reject a very high percentage of the authors who come to them for deals.

Perhaps because the focus was Amazon, perhaps because Eisler was both emphatic and entertaining in his roasting of the publishing establishment, and perhaps because the facts to defend them are not well known, neither moderator Auletta nor panelist Lowrey challenged the big-publisher baiting from Eisler, with which Lynn mostly agreed.

It was just as well that I wasn’t on the panel. I am not certain that Amazon can or should be constrained, but I am damn sure that the Big Five publishers are not villains, and they are certainly not a cartel. They do seem to be extremely poor defenders of their own virtue, but they are doing yeoman work maintaining the value in the old publishing model — for themselves and for authors — while adjusting to changes in their ecosystem that require them to develop strong B2C capabilities even as they maintain their traditional B2B model, the death of which has been greatly exaggerated. If I’d been on that stage, the discussion of Amazon would have been diverted when the trashing of the big publishers began.

I took the step of confirming in an email exchange my recollection of the counts in Eisler’s very entertaining, persuasive, and unchallenged indictment of the big publishers.

1. Their basic contract terms are all the same, which at the time he seemed to be suggesting demonstrated collusion, but which in our subsequent exchange he clarified he interprets as evidence of “asymmetrical market power and a lack of meaningful competition”;

2. They pay too low royalties on ebooks, which he also attributes to their “asymmetrical power” and “an implicit recognition that publishers come out ahead if they don’t compete on digital royalties”;

3. They only pay royalties twice a year, rather than more frequently or more promptly, which Eisler also attributes to a lack of competition;

4. The term of big publisher contracts is normally “life of copyright”, which Eisler calls “forever terms”; and

5. They reject a lot of authors. Here Eisler clarifies that this is not an “indictment, just an axiom”. I agree when he applauds self-publishing for creating a better world where “readers have more to choose from”. But we quickly part company again because he characterizes self-publishing as freeing us from a world where “an incestuous cartel” makes “virtually all the decisions about what tiny fraction of books readers will ever have a meaningful opportunity to learn of and read”.

In our exchange, Eisler expressed the belief that “the only reason people have been okay with this is that the Big Five are ‘my people’”. So they get a pass which he likens to what conservatives gave George Bush or liberals give Barack Obama. (In another point of disagreement between us, Eisler seems to find very little difference between the Democratic and Republican parties. I guess that is some people’s way of saying “nonpartisan”. What it says to me is “not discerning”.) And Eisler finds it “interesting” that the publishing revolution has “people decry” Amazon for “doing, or often only for potentially one day doing, the very things that are the definition of the Big Five.” (I have problems with this too, because none of the big publishers has a dominant market share selling books online and ebooks. In other words, Amazon and the publishers really aren’t comparable. Check back with me if any of the big publishers builds — or buys — a market-leading retailer.)

I’m going to plead “no contest” to the charge that the Big Five are “my people”, which I hope won’t discredit my arguments any more than the fact that Eisler is an Amazon-published author discredits his. But the cartoon picture of publishing in Eisler’s reviled “New York”, where a small group of extremely like-minded people apply their narrow views to effectively restrict what people read, is a massive distortion of reality. Let me try to set the record straight about this world so many of my friends inhabit and with which I’ve been interacting for the better part of five decades.

First of all, the Big Five have plenty of competition: from each other, as well as from smaller niche publishers who may not be “big” but certainly aren’t “small”. (That is why the big ones so often buy the smaller ones — they add scale and simultaneously bring heterogeneous talent in-house.) They are all quite aware of the authors housed elsewhere among them who might be wooable. In fact, since we started doing our Logical Marketing work, we have done several jobs that were big author audits commissioned by publishers who wanted to steal the author, not by the house that presently has them signed. Eisler explicitly resisted accusing the publishers of “collusion”, but he does accuse them of “not competing” with each other. That accusation is simply not supported by the facts. Nobody who has spent any time talking to people who work in big houses could possibly get the impression that they don’t compete.

(In fact, a friend of mine just moved from one big house to another. He is explicitly persona non grata at his prior employer. Now, in this case, I think the house that lost him is behaving childishly, but it certainly underscores the fact that they believe they are in intense competition and now this one-time colleague has gone over to “the other side”.)

But the big flaw in Eisler’s logic is the same one that dooms Hugh Howey’s “Author Earnings” project to irrelevance: the assumption that the per-copy royalty terms and rights splits are the most important elements of publishing contracts. In fact, they’re not: those terms matter in 20 percent or fewer of the agented author contracts with the Big Five. Why? Because the agents get the publishers to pay advances that don’t earn out!
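A hypothetical worked example (the numbers are invented for illustration, not drawn from any actual contract) shows how an unearned advance pushes the author’s effective rate far above the stipulated one:

```python
# Invented numbers illustrating how an unearned advance raises the
# author's effective royalty rate above the stipulated contract rate.
list_price = 28.00          # hardcover list price
stipulated_royalty = 0.15   # 15% of list, the standard big-house hardcover rate
publisher_share = 0.50      # assume the publisher nets about half of list
copies_sold = 20_000
advance = 150_000

earned = copies_sold * list_price * stipulated_royalty          # $84,000
publisher_revenue = copies_sold * list_price * publisher_share  # $280,000

# Advances are not refundable: the author keeps whichever is larger.
author_take = max(advance, earned)
print(f"Effective royalty rate: {author_take / publisher_revenue:.0%}")
# Prints 54%, far above the stipulated 15% of list.
```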

In fact, three different big houses have told me what they calculated the percentage of their revenues paid to authors amounted to. We could call that the true royalty rate. The three numbers were 36, 40, and 42 percent. That includes what they paid for sales of paperbacks, all of which carry “stipulated” royalties of well under 10 percent of the cover price (and therefore below 20 percent of revenue).

Take that on board. Big publishers are paying 40 percent of their revenue to authors! That leaves them 60 percent to pay everything else: overheads, manufacturing, and profits! Compare that to the margin Amazon has even if they pay a 35 percent digital royalty, or compare it to what anybody else has in any other business after paying to acquire the raw material for what they sell. If there were really an “asymmetrical” power equation favoring publishers, you’d think they could acquire the author contracts for a bit less, wouldn’t you?

Not only were the authors’ collective royalty rates much higher than the contracts stipulated, the authors got most of that money in advance, eliminating their risk. The only contracts on which the royalty terms matter are those that do earn out (and, arguably, those that come close). For all the others, most of Eisler’s list of complaints is irrelevant. And, for the record, I have never heard an author complain about that show of confidence, the work that follows in helping him or her reach an audience (which benefits all involved), or the cash upfront.

More frequent accounting doesn’t matter if you aren’t owed any money. And if the solution to “forever” contracts were that you could buy your way out by paying back what you got in advances that your book didn’t “earn”, how many authors would do that?

But, in fact, agented authors don’t have forever contracts; agents have for years been negotiating performance clauses that publishers must meet to keep rights. And, on top of that, no author in the US can possibly have a “forever” contract, because the copyright law that took effect in 1978 requires the publisher to revert rights to the copyright holder after 35 years on request. Agents tell me this has been resulting in additional “advances” for re-upped books for the past couple of years. Note: this is the law. No publisher disputes it. But the “forever contract” argument ignores it.

But, even beyond that, the negative characterization of Big Five New York publishing is terribly unfair.

First of all, the standard terms in big house contracts are almost always more generous than the terms in smaller publisher contracts. Few — if any — of the smaller ones pay a hardcover royalty as high as 15 percent of list. Although higher digital royalties can sometimes be found, usually those come from publishers who have little capacity to deliver print sales, so digital royalties are all you’re going to get. (That might be okay for a romance novel, where a big majority of sales could be digital. It would be a disaster for the author of just about anything except genre fiction.) And some smaller publishers actually pay less than 25 percent for digital royalties.

So the Big Five terms are generally better and they routinely pay agented authors advances that no other publisher would attempt to match.

But, beyond that, the idea that they are a “cartel” (a characterization enthusiastically seconded by Amazon critic Barry Lynn after it was introduced by Amazon supporter Eisler) is really preposterous. In fact, the Big Five are, to varying degrees, federations of imprints that even compete internally for books, sometimes to the extent that they will bid against each other when an agent conducts an auction. And it would appear from Eisler’s pre-Amazon publishing history that he himself has, in fact, been the beneficiary of bidding competition among major houses.

The internal-to-the-house competition occurs because of the way big publishers are organized. It has been understood for decades that some aspects of a publisher’s operation benefit from scale and size and other functions must remain small. In general, publishers deliver accounting, manufacturing, and sales as centralized functions and editorial acquisition and development, packaging and design, and marketing as localized capabilities housed within the imprints. The power of imprints, which are individual editorial units, varies, but it is generally the case that they have autonomy over their acquisitions and must “compete” internally for the centralized services.

The digital transition is definitely straining that organizational structure. Having the by-title P&L responsibilities distributed makes it more difficult for houses to organize cross-imprint initiatives for everything from direct sales to audience-centric (vertical- or subject-oriented) marketing. Having multiple imprints that all contain “general” lists i
