I never know what to make of books organized in this pseudo-glossary style, where the only organization is alphabetical-by-topic-title. It seems like an abdication of the author’s responsibility to impose meaningful structure. It also makes the reviewer’s task slightly artificial – no doubt the encouraged mode of reading is dipping and sampling (perhaps with odd free moments in the smallest room of your house), but of course as a reviewer I felt obligated to read it cover-to-cover. So how will my experience match up with yours?
With that said, The ABC of SEO is surprisingly meaty, and rewards cover-to-cover reading too. It’s really a set of small essays, each of which is organized nicely on its own, and is surprisingly dense with technical info. It becomes increasingly clear that the glossary use-case is a fiction (would you really turn to this book for definitions of terms like “Competition”?), but it supports browsing through the table of contents for the topics you care about. Before the alphabetically-organized portion, George gives a good and balanced overview of the search-engine ecosystem, and the roles that engines, publishers, SEOs, and advertisers play in it.
The book really shines in the longer in-depth technical entries, where George explains details of crawlers, webservers, log files, and so on. Entries I thought particularly strong or compelling (in alphabetical order, naturally 🙂): Altavista, Anchor Text, Banning, Black Hat SEO, Content Targeted Advertising, In-Bound Links, Keywords, Misspellings, Robots and Spiders, and (especially) Traffic Analysis.
I like the point that George makes at several points in the text: no contract for services has been signed between SEOs (or webmasters) and the engines. Webmasters have no obligation to abide by rules set up by the search engines; engines have no obligation to include or rank a given site, and are entirely within their rights to exclude sites if they feel it improves user experience. I like this because it clarifies a debate that often gets a little hysterical and/or moralistic in both directions.
The book has a copyright of 2005, which (in this fast-moving domain) dates it slightly – it’s clear from the text, for example, that MSN was just launching its own in-house web search engine at the time of writing (with some entries written before, some after). Also, in general, it has to be said that the book focuses much more on Google than on the other major engines, including a lot of focus on PageRank itself. Although there are entries for both Y! Search and MSN, you’ll find nothing particularly detailed there on aspects of those engines not shared by Google.
George also cautions against some practices that might confuse search-engine crawlers, without naming specific engines that might be confused. He’s right in general that _some_ engines have had problems with these constructs, especially in the past. But it’s worth clarifying that Yahoo!’s crawler in particular has no problem with the following:
o Framesets. Whether or not framing is good web design, the crawler can snarf up the framed components into one bundle, without problem.
o Dynamic URLs. In general, it’s good to minimize the number of arguments after the ‘?’ in the URL, particularly when the arguments don’t affect the content and cause the same content to have many URLs (as with session IDs). But the crawler does not have any inherent problem with such URLs.
o Invalid HTML. No engine that I know of will discard your pages just because the HTML is badly formed (e.g. has start tags without corresponding end tags).
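The session-ID point is worth making concrete: when the same page is reachable under many URLs that differ only in a session argument, a crawler has to collapse them to one canonical form or it will index duplicates. Here is a minimal sketch of that kind of normalization in Python; the parameter names and the function are illustrative assumptions, not taken from the book, and real engines use far more elaborate heuristics.

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical set of session-style argument names; real crawlers
# use their own (much larger) heuristics for spotting these.
SESSION_PARAMS = {"sessionid", "sid", "phpsessid", "jsessionid"}

def canonicalize(url: str) -> str:
    """Drop session-ID query arguments so that URLs serving the same
    content collapse to a single canonical URL."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in SESSION_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

# Two URLs differing only in session ID normalize to the same thing.
a = canonicalize("http://example.com/page?item=42&sessionid=abc123")
b = canonicalize("http://example.com/page?item=42&sessionid=zzz999")
```

Both calls yield `http://example.com/page?item=42`, which is exactly the deduplication a crawler needs; the fewer content-irrelevant arguments a URL carries, the less such guesswork the engine has to do.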
Overall, The ABC of SEO gets an enthusiastic thumbs-up for detail, accuracy, and sensible advice.