Posted by Justin Briggs

One frustrating aspect of link building is not knowing the value of a link. Although experience, and some data, can make you better at link valuation, it is impossible to know to what degree a link may be helping you. It’s hard to know if a link is even helping at all. Search engines do not count all links; they reduce the value of many that they do count, and they use factors related to your links to further suppress the value that’s left over. This is all done to improve relevancy and spam detection.

Understanding the basics of link-based spam detection can improve your understanding of link valuation and help you understand how search engines approach the problem of spam detection, which can lead to better link building practices.
I’d like to talk about a few interesting link spam analysis concepts that search engines may use to evaluate your backlink profile. 
I don’t work at a search engine, so I can make no concrete claims about how search engines evaluate links. Engines may use some, or none, of the techniques in this post. They also certainly use more (and more sophisticated) techniques than I can cover in this post. However, I spend a lot of time reading through papers and patents, so I thought it’d be worth sharing some of the interesting techniques.

#1 Truncated PageRank

[Image: Truncated PageRank vs. PageRank]
The basics of Truncated PageRank are covered in the paper Link-Based Characterization and Detection of Web Spam. Truncated PageRank is a calculation that removes the direct “link juice” contribution provided by the first level(s) of links. A page boosted by naïve methods (such as article marketing) receives a large portion of its PageRank value directly from that first layer, whereas a link from a well-linked-to page carries “link juice” contributions from additional levels. Spam pages will likely show a Truncated PageRank that is significantly lower than their PageRank, so the ratio of Truncated PageRank to PageRank can be a signal of the spamminess of a link profile.
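As a toy illustration (not any engine's actual implementation), PageRank can be expanded as a power series over hop counts, and Truncated PageRank simply drops the first T terms of that series. The graph, damping factor, and T below are all hypothetical:

```python
# Toy sketch of Truncated PageRank (not any engine's real implementation).
# PageRank can be written as a power series over path lengths:
#   PR = (1 - d)/N * sum_{t>=0} d^t * M^t * 1
# Truncated PageRank drops the first T terms, removing the value
# contributed directly by the first level(s) of links.

def pagerank_terms(links, d=0.85, steps=20):
    """Yield the per-hop PageRank contribution for every node.

    links: dict mapping each page to the list of pages it links TO.
    """
    nodes = sorted({n for n in links} | {m for vs in links.values() for m in vs})
    idx = {v: i for i, v in enumerate(nodes)}
    term = [(1.0 - d) / len(nodes)] * len(nodes)  # t = 0 contribution
    for _ in range(steps):
        yield dict(zip(nodes, term))
        nxt = [0.0] * len(nodes)
        for src, outs in links.items():
            if outs:
                share = d * term[idx[src]] / len(outs)
                for dst in outs:
                    nxt[idx[dst]] += share
        term = nxt

def truncated_pagerank(links, T=2, d=0.85, steps=20):
    """Return (PageRank, Truncated PageRank) accumulated over `steps` hops."""
    pr, tpr = {}, {}
    for t, term in enumerate(pagerank_terms(links, d, steps)):
        for v, c in term.items():
            pr[v] = pr.get(v, 0.0) + c
            if t >= T:
                tpr[v] = tpr.get(v, 0.0) + c
    return pr, tpr
```

A page fed almost entirely by first-hop links (the article-marketing pattern) keeps nearly none of its score once the first levels are truncated, so a low Truncated-to-full ratio is the suspicious signal.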

#2 Owned / Accessible Contributions

Links can be grouped into three general buckets.
  1. Links from owned content – Links from pages for which search engines have determined some level of ownership (via well-connected co-citation, IP, whois, etc.)
  2. Links from accessible content – Links from non-owned content that is easily accessible to add links (blogs, forums, article directories, guest books, etc.)
  3. Links from inaccessible content – Links from independent sources.
A link from any one of these sources is neither good nor bad. Links from owned content, via networks and relationships, are perfectly natural. However, a link from inaccessible content could be a paid link, so that bucket doesn’t mean a link is inherently good. Still, knowing which bucket a link falls into can change its valuation.
[Image: owned vs. independent link contributions]
This type of analysis can show a distinct difference between two link profiles, all other factors being equal. The first site is primarily supported by links from content it directly controls or can gain access to, while the second has earned links from a substantially larger percentage of unique, independent sources. All things being equal, the second site is less likely to be spam.

#3 Relative Mass

Relative Mass accounts for the percent distribution of a profile across certain types of links. The pie charts illustrate the concept of relative mass.
[Image: relative mass pie charts]
Relative Mass is discussed more broadly in the paper Link Spam Detection Based on Mass Estimation. Relative Mass analysis can define a threshold at which a page is determined “spam”. In the image above, the red circles have been identified as spam. The target page now has a portion of value attributed to it via “spam” sites. If this value of contribution exceeds a potential threshold, this page could have its rankings suppressed or the value passed through these links minimized. The example above is fairly binary, but there is often a large gradient between not spam and spam.
This type of analysis can be applied to tactics as well, such as the distribution of links from comments, directories, articles, hijacked sources, owned pages, paid links, etc. The algorithm may provide a certain degree of “forgiveness” before a tactic’s relative mass contribution exceeds an acceptable level.
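A minimal sketch of the relative-mass idea, assuming we already have per-source value contributions and spam labels; the 0.5 threshold is invented purely for illustration:

```python
# Toy "relative mass" check (threshold is hypothetical): estimate what
# fraction of a page's inbound value comes from sources already labeled spam.

def spam_mass(contributions, spam_labels, threshold=0.5):
    """contributions: dict source -> value passed to the target page.
    spam_labels: dict source -> True if that source is believed to be spam.
    Returns (spam ratio, whether it exceeds the threshold)."""
    total = sum(contributions.values())
    spam = sum(v for s, v in contributions.items() if spam_labels.get(s))
    ratio = spam / total if total else 0.0
    return ratio, ratio > threshold
```

In practice the response need not be binary: a page over the threshold might simply have the value flowing through those links minimized rather than being labeled spam outright.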

#4 Counting Supporters / Speeds to Nodes

Another method of valuing links is by counting supporters and the speed of discovery of those nodes (and the point at which this discovery peaks).
[Image: counting supporters by hops]
A histogram distribution of supporting nodes by hops can demonstrate the differences between spam and high quality sites. 
[Image: supporters histogram]
Well-connected sites grow in supporters more rapidly than spam sites, and spam sites are likely to peak earlier. Spam supporter counts grow rapidly and then decay quickly as you move away from the target node. This distribution can help signify that a site is using spammy link building practices. Because spam networks have a higher degree of clustering, the same domains repeat from hop to hop, which makes spam profiles bottleneck faster than non-spam profiles.
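The supporter-counting idea can be sketched as a breadth-first search over reversed links, recording how many new supporters appear at each hop (toy graph; function name and hop limit are hypothetical):

```python
# Sketch: count newly discovered supporters at each hop back from a target
# page, by walking the link graph in reverse. A spammy profile tends to
# peak early and bottleneck; a well-connected one keeps growing.

def supporters_by_hop(inlinks, target, max_hops=5):
    """inlinks: dict page -> list of pages linking TO it.
    Returns a list: number of new supporters discovered at each hop."""
    seen = {target}
    frontier = [target]
    counts = []
    for _ in range(max_hops):
        nxt = []
        for page in frontier:
            for src in inlinks.get(page, []):
                if src not in seen:   # count each supporter only once
                    seen.add(src)
                    nxt.append(src)
        counts.append(len(nxt))
        frontier = nxt
    return counts
```

The resulting histogram is exactly the kind of distribution described above: the hop at which discovery peaks, and how fast it decays, are the signals.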
Protip: I think this is one reason that domain diversity and unique linking root domains are well correlated with rankings. I don’t think the relationship is as naïve as counting linking domains, but analyses like supporter counting, as well as Truncated PageRank, would make receiving links from a larger set of diverse domains better correlated with rankings.

#5 TrustRank, Anti-TrustRank, SpamRank, etc.

The model of TrustRank has been written about several times before and is the basis of metrics like mozTrust. The basic premise is that seed nodes can have both Trust and Spam scores, which can be passed through links. The closer you are to the seed set, the more likely you are to be whatever that seed set was defined as: being close to spam makes you more likely to be spam, and being close to trust makes you more likely to be trusted. These values can be judged both inbound and outbound.
I won’t go into much more detail than that, because you can read about it in previous posts, but it comes down to four simple rules.
  • Get links from trusted content.
  • Don’t get links from spam content.
  • Link to trusted content.
  • Don’t link to spam content.
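A rough sketch of trust propagation in the TrustRank spirit: an ordinary PageRank iteration whose teleportation is biased entirely toward the trusted seed set, so trust decays with distance from the seeds. Graph, seed choice, and parameters are hypothetical; an "anti-trust" score works the same way with spam seeds and reversed links.

```python
# Sketch of TrustRank-style propagation (hypothetical graph and parameters).
# Identical to PageRank except the random jump lands only on trusted seeds,
# so score leaks outward from the seed set and fades with each hop.

def trustrank(links, seeds, d=0.85, iters=50):
    """links: dict page -> list of pages it links to; seeds: trusted pages."""
    nodes = sorted({n for n in links}
                   | {m for vs in links.values() for m in vs}
                   | set(seeds))
    bias = {v: (1.0 / len(seeds) if v in seeds else 0.0) for v in nodes}
    score = dict(bias)
    for _ in range(iters):
        nxt = {v: (1.0 - d) * bias[v] for v in nodes}  # teleport to seeds only
        for src, outs in links.items():
            if outs:
                share = d * score[src] / len(outs)
                for dst in outs:
                    nxt[dst] += share
        score = nxt
    return score
```

A page the seed set cannot reach ends up with zero trust, which is the formal version of rule one above: get links from trusted content.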
This type of analysis can also turn SEO forums against spammers. A search engine can crawl links out of top SEO forums to create a seed set of domains on which to perform analysis. Tinfoil hat time….

#6 Anchor Text vs. Time

Monitoring anchor text over time can give interesting insights that could detect potential manipulation. Let’s look at an example of how a preowned domain that was purchased for link value (and spam) might appear with this type of analysis.
[Image: anchor text over time]
This domain has a historical record of acquiring anchor text, including both branded and non-branded targeted terms. Then suddenly that rate drops, and after a time a sudden influx of anchor text never seen before starts to come in. This type of anchor text analysis, in combination with orthogonal spam detection approaches, can help detect the point at which ownership changed. Links prior to that point can then be evaluated differently.
This type of analysis, plus some other very interesting stuff, is discussed in the Google paper Document Scoring Based on Link-Based Criteria.
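One simple way to operationalize this (a sketch, not the method from the paper) is to track, per time period, what share of inbound anchor text has never been seen before; a late spike in never-seen anchors is the suspicious pattern described above:

```python
# Sketch: per-period share of anchor text never seen in earlier periods.
# A long-lived domain shows mostly familiar anchors; a sudden period where
# nearly every anchor is brand new suggests a change of ownership or intent.

def new_anchor_share(history):
    """history: list of (period, [anchor texts]) in chronological order.
    Returns [(period, share of anchors never seen before)]."""
    seen = set()
    shares = []
    for period, anchors in history:
        new = [a for a in anchors if a not in seen]
        shares.append((period, len(new) / len(anchors) if anchors else 0.0))
        seen.update(anchors)
    return shares
```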

#7 Link Growth Thresholds

Sites with rapid link growth could have the impact dampened by applying a threshold of value that can be gained within a unit time. Corroborating signals can help determine if a spike is from a real event or viral content, as opposed to link manipulation.
[Image: link growth thresholds]
Link value gained faster than the assigned threshold allows can be discounted. A more paced, natural growth profile is less likely to break the threshold. You can find more information about historical analysis in the paper Information Retrieval Based on Historical Data.
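A crude sketch of such a threshold, assuming we can count new links per unit time; the cap value is invented for illustration, and a real system would presumably adjust it using corroborating signals (news coverage, viral content, etc.):

```python
# Sketch: credit at most `cap` new links in any one period, so a sudden
# spike contributes little more than steady growth would (cap is made up).

def capped_link_value(new_links_per_period, cap=100):
    return sum(min(n, cap) for n in new_links_per_period)
```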

#8 Robust PageRank

Robust PageRank works by calculating PageRank without the highest contributing nodes.
[Image: Robust PageRank]
In the image above, the two strongest links were turned off, effectively reducing the PageRank of the node. Strong sites tend to have robust profiles and do not depend heavily on a few strong sources (such as links from link farms) to maintain a high PageRank. Calculating Robust PageRank is one way the impact of over-influential nodes can be reduced. You can read more about it in the paper Robust PageRank and Locally Computable Spam Detection Features.
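The core of the idea can be sketched in a few lines: score a node from its inbound contributions while ignoring the k largest (k and the contribution values here are hypothetical):

```python
# Sketch: "robust" score of a node = sum of its inbound PageRank
# contributions with the k strongest contributors turned off. A node
# propped up by a couple of heavy links collapses; a diverse profile barely
# moves (k is a made-up parameter).

def robust_score(contributions, k=2):
    """contributions: dict source -> value contributed to the node."""
    vals = sorted(contributions.values(), reverse=True)
    return sum(vals[k:])
```

Comparing the robust score to the full score plays the same role as the Truncated-to-full PageRank ratio: a large gap flags dependence on a few over-influential sources.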

#9 PageRank Variance

The uniformity of PageRank contribution to a node can be used to evaluate spam. Natural link profiles are likely to have a stronger variance in PageRank contribution. Spam profiles tend to be more uniform.
[Image: PageRank variance]
So if you use a tool, marketplace, or service to order 15 PR 4 links with a specific anchor text, those links will have a low variance in PR, which makes this sort of practice easy to detect.
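A toy illustration of the uniformity signal, with made-up contribution profiles and a made-up variance cutoff:

```python
# Sketch: flag a link profile whose PageRank contributions are suspiciously
# uniform. The sample data and the 0.01 cutoff are invented for illustration.
from statistics import pvariance

bought = [0.40] * 15                                   # 15 near-identical "PR 4" links
earned = [1.3, 0.05, 0.6, 0.02, 2.1, 0.3, 0.09, 0.7]  # naturally varied profile

def uniformity_flag(contributions, min_variance=0.01):
    """True if the contributions are too uniform to look natural."""
    return pvariance(contributions) < min_variance
```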

#10 Diminishing Returns

One way to minimize the value of a tactic is to create diminishing marginal returns on specific types of links. This is easiest to see with sitewide links, such as blogroll links or footer paid links. At one time, link popularity, in volume, was a strong factor, which led to sitewides carrying a disproportionate amount of value.
[Image: diminishing returns on links from one domain]
The first link from a domain carries the first vote, and additional links from that domain continue to increase its total contribution, but only to a point. Eventually, further links from the same domain experience diminishing returns. Going from 1 link to 3 links from a domain has more of an effect than going from 101 links to 103.
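The curve described above can be sketched with any concave function; a logarithm is used here purely for illustration, not because any engine is known to use one:

```python
# Sketch of diminishing returns per domain: total value grows with the log
# of the link count rather than linearly (the curve shape is hypothetical).
import math

def domain_value(num_links, per_link=1.0):
    return per_link * math.log1p(num_links)
```

On this curve the 1-to-3 jump is worth roughly ln(2), while the 101-to-103 jump is worth almost nothing, matching the intuition above.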
Protip: Although it’s easiest to see this with sitewide links, I think of most link building tactics in this fashion. In addition to ideas like relative mass, where you don’t want one thing to dominate, I feel tactics lose traction over time. It is not likely you can earn strong rankings on a limited number of tactics, because many manual tactics tend to hit a point of diminishing returns (sometimes it may be algorithmic, other times it may be due to diminishing returns in the competitive advantage). It’s best to avoid one-dimensional link building.

Link Spam Algorithms

All spam analysis algorithms have some percentage of accuracy and some level of false positives. Through the combination of these detection methods, search engines can maximize the accuracy and minimize the false positives.
Web spam analysis can tolerate more false positives than email spam detection, because a pushed-down result usually has multiple acceptable alternatives to replace it. Email spam detection is binary in nature (inbox or spam box); web search is not. In addition, search engines don’t have to create binary labels of “spam” or “not spam” to effectively improve search results. Using analyses such as those discussed in this post, search engines can simply dampen rankings and minimize effects.
These analysis techniques are also designed to decrease the ROI of specific tactics, which makes spamming harder and more expensive. The goal of this post is not to stress about which links work and which don’t, because it’s hard to know. The goal is to demonstrate some of the problem-solving approaches used by search engines and how they affect your tactics.


Location-based check-in service Foursquare and local deals site Groupon are partnering up to offer real-time daily deals, Foursquare confirmed today. Several deals have already gone live in Chicago, and deals all over the U.S. and Canada should be live by Sunday.


Google has announced their new Page Speed Service. In essence, it’s a combination of proxy servers, Content Delivery Networks (CDN), and web page optimizers which Google states will produce speed gains of 25-60% for most websites.

The service is being offered to a limited set of web developers at no cost. After the trial period, Page Speed will be released to everyone and, although there are no details, “pricing will be competitive” (source: Official Google Code blog).

To use the service, it’s simply a matter of registering and adding a new DNS CNAME record to your domain. As well as providing a gzipped proxy server for static files, the service can also rewrite your pages for web performance best-practices:

  • CSS files can be combined, minimized, and moved to the HTML head
  • JavaScript files can be combined and minimized using Google’s Closure Compiler
  • Images can be scaled and optimized

All features are optional so you can, for example, disable the Closure Compiler if it breaks your JavaScript code.
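For illustration, the DNS change might look like the zone-file entry below; the domain and the CNAME target are placeholders, not Google's actual endpoint:

```
; route www traffic through the Page Speed proxy (target hostname is hypothetical)
www.example.com.  3600  IN  CNAME  pagespeed-proxy.example.net.
```

Because only a CNAME can be repointed this way, the service works on subdomains like "www" but not on bare domains, as noted below.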

Google provides a page test comparison service; it estimated that this site’s home page would enjoy a 13% speed increase. I suspect that’s primarily owing to JavaScript file concatenation.

Tremendous or Troublesome?

Depending on the price, the Page Speed Service could be ideal for inefficient static pages running on slow servers. It may be more cost-effective than spending money on further development or hosting.

Unfortunately, there are a few downsides:

  • Bare domains are not supported, i.e. you must use the “www” form of your domain rather than the bare domain. That’s a shame; I’ve been dropping the “www” from my sites.
  • HTTPS pages are not supported.
  • Flash, streamed audio, streamed video and files over 50MB are not supported.
  • POST requests greater than 2MB are not supported.
  • You’re unlikely to experience significant speed gains on web applications running server-side code.
  • Domains hosted on Blogger, Google Sites or Google App Engine are not supported.

Speaking as a web developer, the service makes me slightly uncomfortable. Like many, I ensure my sites are optimized by combining files, minimizing the code, reducing HTTP requests and using CDNs where possible. For Page Speed to be attractive, I wouldn’t want to lose control, configuration would have to be easy, I wouldn’t want my code to be rewritten, and the price would have to be cheaper than upgraded hosting.

Risk is another factor which needs to be assessed. Will Page Speed offer additional redundancy, or introduce an extra point of failure? I suspect it will depend on the quantity of static vs. generated content on your website.

Finally, are you willing to hand your website keys to Google? Their services are more reliable than most, but this is a new product which could experience teething problems. Conspiracy theorists will also see this as another step toward Google’s global domination. Google Search considers page speed factors, so could the company become an all-powerful web host which undermines sites not using its network?

Technically, Google Page Speed is an amazing solution which should boost download speeds for most sites, especially those which are inefficiently coded. However, I’m not convinced many good web developers will adopt it. And would bad developers understand the service or care enough to recommend it?

Time will tell if Google’s Page Speed Service is a success. Please let us know your opinions…

A last-minute rewrite of the bill expands the information that commercial internet providers are required to store to include customers’ names, addresses, phone numbers, credit card numbers, bank account numbers and temporarily-assigned IP addresses.

Posted by caseyhen

Want to avoid the next Panda update and improve your website’s quality? This week Will Critchlow from Distilled joins Rand to discuss an amazing idea of Will’s to help those who are having problems with Panda and others who want to avoid future updates. Feel free to leave your thoughts on his idea and anything you might do to avoid Panda.


Video Transcription

Rand: Howdy, SEOmoz fans. Welcome to a very special edition of Whiteboard Friday. I am joined today by Will Critchlow, founder and Director of Distilled, now in three cities – New York, Seattle, London. My God, 36 or 37 people at Distilled?

Will: That’s right. Yeah, it’s very exciting.

Rand: Absolutely amazing. Congratulations on all the success.

Will: Thank you.

Rand: Will, despite the success that Distilled is having, there are a lot of people on the Web who have been suffering lately.

Will: It’s been painful.

Rand: Yeah. What we’re talking about today is this brilliant idea that you came up with, which is essentially to replicate Google’s Panda questionnaire, send it out to people, and help them essentially improve their sites: make suggestions for management, for content producers, content creators, for people on the Web to improve their sites through the same sort of search signals that Panda’s getting.

Will: That’s right. I would say actually the core thing of this, what I was trying to do, is persuade management. This isn’t necessarily about things that we as Internet marketers don’t know. We could just look at the site and tell people this, but that doesn’t persuade a boss or a client necessarily. So a big part of this was about persuasion as well.

So, background, I guess, people probably know, but Google gave this questionnaire to a bunch, I think they used students mainly, to assess a bunch of websites, then ran machine learning algorithms over the top of that so that they could algorithmically determine the answer.

Rand: Take a bunch of metrics from maybe user and usage data, from possibly linked data, although it doesn’t feel like linked data, but certainly onsite analysis, social signals, whatever they’ve got. Run these over these pages that had been marked as good or bad, classified in some way by Panda questionnaire takers, and then produce results that would push down the bad ones, push up the good ones, and we have Panda, which changed 12% of search results in the U.S.

Will: Yeah, something like that.

Rand: And possibly more.

Will: And repeatedly now, right? Panda two point whatever and so forth. So, yeah, and of course, we don’t know exactly what questions Google asked, but . . .

Rand: Did you try to find out?

Will: Obviously. No luck yet. I’ll let you know if I do. But there’s a load of hints. In fact, Google themselves have released a lot of these questions.

Rand: That’s true. They talked about it in the Wired article.

Will: They did. There have been some that have come out on Search Engine Land I think as well. There have been some that have come out on Twitter. People have referred to different kinds of questions.

Rand: Interesting. So you took these and aggregated them.

Will: Yeah. So I just tried to pull . . . I actually ignored quite a chunk that I found because they were hard to turn into questions that I could phrase well for the kinds of people I knew I was going to be sending this questionnaire to. Maybe I’ll write some more about that in the accompanying notes.

Rand: Okay.

Will: I basically ended up with some of these questions that were easy to have yes/no answers for anybody. I could just send it to a URL and say, "Yes or no?"

Rand: Huh, interesting. So, basically, I have a list of page level and domain level questions that I ask my survey takers here. I put this into a survey, and I send people through some sort of system. We’ll talk about Mechanical Turk in a second. Then, essentially, they’ll grade my pages for me. I can have dozens of people do this, and then I can show it to management and say, "See, people don’t think this is high enough quality. This isn’t going to get past the Panda filter. You’re in jeopardy."

Will: That’s right. The first time I actually did this, because I wasn’t really sure whether this was going to be persuasive or useful even, so I did it through a questionnaire I got together and sent it to a small number of people and got really high agreement. Out of the 20 people I sent the questionnaire to, for most questions you’d either see complete disagreement, complete disarray, basically people saying don’t know, or you’d see 18 out of 20 saying yes or 18 out of 20 saying no.

Rand: Wow.

Will: With those kind of numbers, you don’t need to ask 100 people or 1,000 people.

Rand: Right. That’s statistically valid.

Will: This is looking like people think this.

Rand: People think this article contains obvious errors.

Will: Right. Exactly. So I felt like straight away that was quite compelling to me. So I just put it into a couple of charts in a deck, took it into the client meeting, and they practically redesigned that "catch me" page in that meeting because the head of marketing and the CEO were like okay, yeah.

Rand: That’s fantastic. So let’s share with people some of these questions.

Will: And they’re simple, right, dead simple.

Rand: So what are the page level ones?

Will: Page level, what I would do is typically find a page of content, a decent, good page of content on the site, and Google may well have done this differently, but all I did was say find a recent, good, well presented, nothing desperately wrong with it versus the rest of the content on the site. So I’m not trying to find a broken page. I’m just trying to say here’s a page.

Rand: Give me something average and representative.

Will: Right. So, from SEOmoz, I would pick a recent blog post, for example.

Rand: Okay, great.

Will: Then I would ask these questions. The answers were: yes, no, don’t know.

Rand: Gotcha.

Will: That’s what I gave people. Would you trust the information presented here?

Rand: Makes tons of sense.

Will: It’s straightforward.

Rand: Easy.

Will: Is this article written by an expert? That is deliberately, vaguely worded, I think, because it’s not saying are you certain this article’s written by an expert? But equally, it doesn’t say do you think this article . . . people can interpret that in different ways, but what was interesting was, again, high agreement.

Rand: Wow.

Will: So people would either say yes, I think it is. Or if there’s no avatar, there’s no name, there’s no . . . they’re like I don’t know.

Rand: I don’t know.

Will: And we’d see that a lot.

Rand: Interesting.

Will: Does this article have obvious errors? And I actually haven’t found very many things where people say yes to this.

Rand: Gotcha. And this doesn’t necessarily mean grammatical errors, logical errors.

Will: Again, it’s open to interpretation. As I understand it, so was Google’s. There are some of these that could be very easily detected algorithmically. If you’re talking spelling mistakes, obviously, they can catch those. But here, where we’re talking about they’re going to run machine learning, it could be much broader. It could be formatting mistakes. It could be . . .

Rand: Or this could be used in concert with other questions where they say, boy, it’s on the verge and they said obvious errors. It’s a bad one.

Will: Exactly.

Rand: Okay.

Will: Does the article provide original content or information? A very similar one. Now, as SEOs, we might interpret this as content, right?

Rand: But a normal survey taker is probably going to think to themselves, are they saying something that no one has said before on this topic?

Will: Yeah, or even just, "Do I get the sense that this has been written for this site rather than just cribbed from somewhere?"

Rand: Right.

Will: And that may just be a gut feel.

Rand: So this is really going to hurt the Mahalos out there who just aggregate information.

Will: You would hope so, yeah. Does this article contain insightful analysis? Again, quite vague, quite open, but quite a lot of agreement on it. Would you consider bookmarking this page? I think this is a fascinating question.

Rand: That’s a beautiful one.

Will: Obviously, again, here I was sending these to a random set of people, again which, as I understand it, is very similar to what Google did. They didn’t take domain experts.

Rand: Ah, okay.

Will: As I understand it. They took students, so smart people, I guess.

Rand: Right, right.

Will: But if it’s a medical site, these weren’t doctors. They weren’t whatever. I guess some people would answer no to this question because they’re just not interested in it.

Rand: Sure.

Will: You send an SEOmoz page to somebody who’s just not . . .

Rand: But if no one considers bookmarking a page, not even consider it, that’s . . .

Will: Again, I think the consider phrasing is quite useful here, and people did seem to get the gist, because they’ve answered all of the questions by this point. I would send the whole set to one person as well. They kind of get what we’re asking. Are there excessive adverts on this page? I love this question.

Tom actually was one of the guys, he was speculating early on that this was one of the factors. He built a custom search engine, I think, of domains that had been hit by the first Panda update, and then was like, "These guys are all loaded with adverts. Is that maybe a signal?" We believe it is, and this is one of the ones that management just . . . so this was the one where I presented a thing that said 90% of people who see your site trust it. They believe that it’s written by experts, it’s quality content, but then I showed 75% of people who hit your category pages think there are too many adverts, too much advertising.

Rand: It’s a phenomenal way to get someone to buy in when they say, "Hey, our site is just fine. It’s not excessive. There’s tons of websites on the Internet that do this."

Will: Yeah.

Rand: And you can say, "Let’s not argue about opinions."

Will: Yes.

Rand: "Let’s look at the data."

Will: Exactly. And finally, would you expect to see this article in print?

Rand: This is my absolute favorite question, I’ve got to say, on this list. Just brilliant. I wish everyone would ask that of everything that they put on the Internet.

Will: So you have a chart that you published recently that was the excessive returns from exceptional content.

Rand: Yeah, yeah.

Will: Good content is . . .

Rand: Mediocre at this point in terms of value.

Will: And good is good, but exceptional actually has its exponential. I think that’s a question that really gets it.

Rand: What’s great about this is that all of the things that Google hates about content farms, all of the things that users hate about not just content farms but content producers who are low quality, who are thin, who aren’t adding value, you would never say yes to that.

Will: What magazine is going to go through this effort?

Rand: Forget it. Yeah. But you can also imagine that lots of great pieces, lots of authentic, good blog posts, good visuals, yeah, that could totally be in a magazine.

Will: Absolutely. I should mention that I think there’s some caveats in here. You shouldn’t just take this blindly and say, "I want to score 8 out of 8 on this." There’s no reason to think that a category page should necessarily be capable of appearing in print.

Rand: Or bookmarked where the . . .

Will: Yes, exactly. Understand what you’re trying to get out of this, which is data to persuade people with, typically, I think.

Rand: Love it, love it. So, last set of questions here. We’ve got some at the domain level, just a few.

Will: Which are similar and again, so the process, sometimes I would send people to the home page and ask them these questions. Sometimes I would send them to the same page as here. Sometimes it would be a category page or just kind of a normal page on the site.

Rand: Right, to give them a sense of the site.

Will: Yeah. Obviously, they can browse around. So the instructions for this are answer if you have an immediate impression or if you need to take some time and look around the site.

Rand: Go do that.

Will: Yeah. Would you give this site your credit card details? Obviously, there are some kinds of sites this doesn’t apply to, but if you’re trying to take payment, then it’s kind of important.

Rand: A little bit, a little bit, just a touch.

Will: There’s obvious overlaps with all of this, with conversion rate optimization, right? This specific example, "Would you trust medical information from this site," is one that I’ve seen Google refer to.

Rand: Yeah, I saw that.

Will: They talk about it a lot because I think it’s the classic rebuttal to bad content. Would you want bad medical content around you? Yeah, okay. Obviously, again only applies if you’re . . .

Rand: You can swap out medical information with whatever type is . . .

Will: Actually, I would just say, "Would you trust information from this site?" And just say, "Would you trust it?"

Rand: If we were using it on moz, we might say, "Would you trust web marketing information? Would you trust SEO information? Would you trust analytics information?"

Will: Are these guys domain experts in your opinion? This is almost the same thing. Would you recognize this site as an authority? This again has so much in it, because if you send somebody to, no matter what the website is, they’re probably going to say yes because of the brand.

Rand: Right.

Will: If you send somebody to a website they’ve never heard of, a lot of this comes down to design.

Rand: Yes. Well, I think this one comes down to . . .

Will: I think an awful lot of it does.

Rand: A lot of this comes down to design, and authority is really branding familiarity. Have I heard of this site? Does it seem legitimate? So I might get to a great blog like StuntDouble, and I might think to myself, I’m not very familiar with the world of web marketing. I haven’t heard of StuntDouble, so I don’t recognize him as an authority, but yeah, I would probably trust SEO information from this site. It looks good, seems authentic, the provider’s decent.

Will: Yeah.

Rand: So there’s kind of that balance.

Will: Again, it’s very hard to know what people are thinking when they’re answering these questions, but the degree of agreement is . . .

Rand: Is where you get something. So let’s talk about Mechanical Turk, just to end this up. You take these questions and put them through a process using Mechanical Turk.

Will: So I actually used a tool that’s essentially a little bit like Google Docs spreadsheets, but it has an interface with Mechanical Turk. So you can just literally put the column headings as the questions. Then, in each row you have the page that you want somebody to go to, the input, if you like.

Rand: The URL field.

Will: So, and then you select how many rows you want, click submit to Mechanical Turk, and it creates a task on Mechanical Turk for each row independently.

Rand: Wow. So it’s just easy as pie.

Will: Yeah, it’s dead simple. This whole thing, putting together the questionnaire and gathering it the first time, took me 20 minutes.

Rand: Wow.

Will: I paid $0.50 an answer, which is probably slightly more than I would have had to, but I wanted answers quickly. I said, "I need them returned in an hour," and I said, "I want you to maybe have a quick look around the website, not just gut feel. Have a quick look around." I did it for 20, got it back in an hour, cost me 10 bucks.

Rand: My God, this is the most dirt cheap form of market research for improving your website that I can think of.

Will: It’s simple but it’s effective.

Rand: It’s amazing, absolutely amazing. Wow. I hope lots of people adopt this philosophy. I hope, Will, you’ll jump into the Q&A if people have questions about this process.

Will: I will. I will post some extra information, yeah, definitely.

Rand: Excellent. And thank you so much for joining us.

Will: Anytime.

Rand: And thanks to all of you. We’ll see you again next week for another edition of Whiteboard Friday. Take care.

Will: Bye.

Video transcription by


Google has just released a new tool that will help webmasters speed up their page load time.

Google’s new Page Speed Service takes many of the optimizations outlined in the company’s Page Speed Online API and applies them to sites automatically.

It’s a turnkey online service that automatically takes care of the optimizations by rewriting pages and delivering them to users using Google’s servers.

The tool works by having users point the CNAME for their URL at Google’s own servers. From there, Google can do the optimizations and rewrite pages as needed.
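To illustrate the setup described above, the site owner replaces the record for their hostname with a CNAME pointing at Google. A hypothetical zone-file change might look like this (all hostnames are placeholders; Google supplies the actual target when access is granted):

```
; example.com zone file (illustrative only)
; Before: www points directly at the site's own server
; www.example.com.   IN  A      203.0.113.10
; After: www is a CNAME to the Google-assigned hostname
www.example.com.     IN  CNAME  assigned-by-google.example.net.
```

Once the CNAME resolves to Google’s servers, requests flow through Google, which rewrites and serves the optimized pages.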

On the Google Code blog, Google says that it has seen speed improvements from 25% to 60% on some sites. Google has a gallery and a comparison test that users can try themselves.

Right now, the tool is only available to a limited set of webmasters, but you can request access by filling out this form. Google says that pricing will be competitive.

It’s rare that Google rolls out plans for a pay service, but this is a case where we think it makes sense. Would you be interested in using Google’s services to automatically optimize your website page load?


We’re introducing Hotel Finder, a new experimental search tool specifically designed to help you find that perfect hotel. Google Hotel Finder makes it easy to narrow down the options and figure out where to stay: to help you figure out where the action is, Hotel Finder shines a “tourist spotlight” on the most visited areas of U.S. cities.

If Local Search were a building, it would look something like this…

The 3 Pillars of Local Search Reviews

(photo from the Telegraph)

There are new things being added, subtracted, re-arranged, re-named, left unfinished, and stolen all the time in the industry. We have seen the rise of deals, check-ins, mainstream local results in SERPs, review fights between the giants, and countless other methods of connecting local businesses to customers.

Because of this, the local search industry has developed the phenomenon of “storm chasers”: those who change their strategy often, run to every new update, and try to be the first to find the “secret” that will cash out quickly until the next big storm comes.


This past week Google Places made a few cosmetic changes which “disapparated” citations from the Places page and stopped counting and highlighting third-party reviews. This is what the search results look like now:


And here, in the Places page review section, only Google reviews are showing and third-party sites have been relegated to links below.


The citations have already been discussed, but I wanted to put in my two bits. A storm chaser would translate this update as a mad dash for Google Places reviews. They would forget third-party sites (Yelp, Citysearch, etc.) and make sure that when their listings are compared with competitors’ on Google Places, they would have the most reviews. Instead, it would be wise to focus on the 3 pillars of local search reviews. The pillars never change, as they hold up the entire review strategy for the local search industry.

One: Diversity


Google has put ranking weight on the number of different sites with business reviews. Now is that reason enough to diversify? If not, how about the countless issues that Google has had with misplacing reviews? More importantly, Google Places is not the only way that people find information about a business.

The Yelp mobile app in the App Store currently has 105,969 reviews, while Google Places has 2,373. Also, some industries have review-specific sites (think Urbanspoon) that provide a great gathering system for the food industry. A site like ’s reviews can be left with a simple Facebook login, its reviews feed to both Bing and Google (now as a link), and it also has an impressive number of people searching on its network as a stand-alone site.


  • Incorporate review links on your website (I generally like to have 4 different portals) to cover all your bases.


  • Ask your Facebook friends to leave you reviews on CitySearch


  • If you are in a popular Yelp industry or place, then print out and highlight some of your reviews in your place of business.


  • If you have an email list, find all the Gmail users and send an email asking for reviews on Google Places. If they have a Yahoo address, send them a link to Yahoo Local. And if they have a Hotmail address, then send them an invite for Gmail 😉

The key is to make sure you don’t just do one thing to get reviews. Incorporate a lot of methods, but make sure you have a list of sites that you want to promote reviews on, and try to guide people there. Most users have one they prefer among the top 4-5 review sites.
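The email-list tip above amounts to segmenting a mailing list by provider and sending each group to the review site they are most likely to use. A minimal Python sketch of that segmentation (the domain-to-destination mapping, including the Hotmail joke, is purely illustrative):

```python
from collections import defaultdict

# Hypothetical mapping from email domain to the review destination
# you would point that group toward.
DESTINATIONS = {
    "gmail.com": "Google Places",
    "yahoo.com": "Yahoo Local",
    "hotmail.com": "Gmail invite",  # per the joke above
}

def segment_by_provider(emails):
    """Group email addresses by the review destination for their domain."""
    groups = defaultdict(list)
    for email in emails:
        domain = email.rsplit("@", 1)[-1].lower()
        groups[DESTINATIONS.get(domain, "other")].append(email)
    return dict(groups)

if __name__ == "__main__":
    sample = ["a@gmail.com", "b@yahoo.com", "c@hotmail.com", "d@aol.com"]
    for destination, addresses in segment_by_provider(sample).items():
        print(destination, addresses)
```

Each group can then get a campaign email with the matching review link, rather than one generic blast.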

Two: Consistency


Like most forms of user-generated content, reviews are extremely difficult to work into your daily business. Most companies I see get really excited for a while and gather a plethora of reviews, then go months without asking for them. Here are a few reasons to have a steady flow of reviews…

  • It looks natural. Getting reviews in large batches all at once looks extremely spammy.
  • People look at the dates of reviews. If they see recent reviews, the trust level goes up substantially when they judge business quality.
  • You are constantly getting feedback on your product or service.


  • Create a review pamphlet that sits in a prominent place in your business and can be passed out in bags or with receipts.
  • Include links to review portals in your emails as part of your signature.
  • Include review information on invoices or receipts.
  • Set up a calendar with a different review campaign each month (with alarm notices so you don’t forget)

The key is to simply make the decision that reviews are important to your business, and not give up or give in. Nobody in your space will be able to compete long term against a consistent strategy.

Three: Reliability


Contrary to popular belief, reliability centers on bad and mediocre reviews. While most business owners would do anything to have a 5-star business, a normal business cannot avoid criticism. Nothing raises a flag for users more than a plethora of 5-star reviews with no mention of 4s, 3s, or 2s. So don’t be afraid to let anyone and everyone leave a review. An employee might make a mistake, or you might come up against a mean old nag. A perfect business isn’t one with perfect reviews; it’s one that deals with its reviews and feedback perfectly. As long as you promote leaving reviews, you should end up with a very accurate portrayal of your business. Feedback can help you progress in areas of need.

Comments from the owners/managers on reviews are also very important. They give a business a voice in response to negative and positive feedback alike. By responding, you show that you are real.

Don’t buy reviews. But if I can’t stop you, then make sure the reviews are a mix of feedback: mostly good, sometimes mediocre, and once in a while bad.


  • Highlight one review a month on posters in your business. People who come in will see you are actively monitoring and promoting reviews, and will trust what they see and read.
  • Sign up for a paid service like , which will notify you any time a review is published so that you can act accordingly.
  • If you want to take a chance, highlight both positive and not-so-positive reviews on a testimonials page, and explain what you have done to remedy the problem.
  • Have a history of reviews. When I read hotel reviews where there are hundreds or thousands of reviews, I know there is no way they can all be fabricated. When I see a business with 2-3 reviews… I wonder.

Build the Pillars

It might be a lot easier to have a storm-chaser-focused strategy. People do it all the time with links and citations, but those are not seen and read by your customers the way a review is. If you do one thing right in local, make it your reviews. Build on a strong and diverse platform that will allow you to roll with the changes with ease, because your strategy will be based on pillars.

Check out the SEO Tools guide at Search Engine Journal.


As this is a search business column, I thought I would move this month’s post to the furthest end of the search spectrum with a post on linkbait. It’s one of the areas of SEO where there’s sometimes a reluctance by clients to fully commit, primarily because it is a speculative activity. In my experience, clients are appreciative of transparent SEO delivery, and increasingly see it as a strategic and established marketing discipline. That said, considering the results it can deliver for the investment, linkbait is still clearly under-utilised in digital marketing strategies, and I know many are eager for this to change.

MEC Interaction recently had a full-time, freelance linkbaiter, Danny Ashton, come in to our offices to provide our link-builders and SEO account managers with some inspiration and understanding around linkbait campaigns for big and small brands.  I thought it might be helpful to share a little insight on what got Danny into this creative game and tips he can share with SEJ readers.  So here goes…

What first got you into linkbait?

It was at my first proper SEO conference that I met a character called Lyndon Antcliff, whom I knew already from his blog CornwallSEO. Prior to this conference I had been reading a lot about linkbait but didn’t quite know how to learn the skills to become a master. After a number of beers and mediocre tapas, Lyndon suggested I join his linkbait training course, which was starting the following week.

After a chat with my boss, I started the course and literally spent hours a day trying new linkbait and chatting regularly with Lyndon. I had quite a few successes with some of my “test” linkbait, and it was like a light bulb went off in my head; plus, I had never had so much fun doing “SEO.”

Who has been your biggest influence in linkbait and why do you love him so?

The now-infamous “13 Year Old Steals Dad’s Credit Card to Buy Hookers” really opened my eyes to the sheer power of linkbait. This article also led me to become aware of Lyndon, who later became my mentor and helped me immensely. Learning from a master gave me the confidence to try out ideas I would never dare on my own.

Intrinsic to successful linkbait is a great deal of creativity; what is your process for spurring it on?

Getting away from the computer is the only way to get great ideas. Being plugged in and reading other people’s stuff always creates an incentive to just rehash old ideas. My most successful baits have always been completely unique and could only have been thought of once I disconnected from the grid.

I also like to travel; being out of your comfort zone allows you to think of ideas you would never come up with in rainy Manchester! I could also mention meditation, and though I don’t want to get all new age, it really does help to get the creative juices flowing.

What are typical metrics to measure the success of linkbait, beyond the obvious?

Links are the usual metric, but I also like to include the interaction and social signals that a bait has generated. I have had baits create hundreds of thoughtful, intelligent comments, which can improve the breadth and quality of the article. Links are still number one, but a high level of comments and social signals also brings some great SEO benefits, even if they are not my primary goal.

What were the results of your most effective linkbait campaign?  What was key to this success?

Sadly, I can’t talk about linkbaits for clients, but I do sometimes test my linkbait skills on some of my own sites, and one I am particularly pleased with is a piece I did for an air purifier review site!

I had literally tried every idea to bait this niche, but nothing was really taking hold, and it didn’t help that the domain had two dashes!

The idea came to me while I was playing basketball outside, looking at my garden plants. After thinking of the idea of air-purifying plants, I searched JSTOR (as my housemate was still a student) to find out whether there had been any scientific studies. Luckily for me there had been quite a few, and I spent the next few hours collating the research and putting the idea together.

After I pushed this article to a few key sites, it got picked up by some major authority sites such as Gizmodo and Unplggd, and I cracked open the champagne. The article also got picked up on StumbleUpon, and the links just started pouring in from all parts of the web. To date the linkbait has received over 55,000 visitors, still gets links every few days, and ranks highly for my target keywords.

What is the process for how you go about selling linkbait to clients?

I’ll be honest: I rarely sell linkbait, as most of my clients come through word of mouth and already know what I am capable of. That said, having a good collection of successful examples and providing good ideas really helps to put the client at ease.

What 3 top tools do you use every day that people interested in linkbait should use?

My number one productivity tool, which most of you will be aware of, is Microsoft Live Writer. Before finding Live Writer, we used to load up images through the WordPress admin, which is a total pain when publishing baits on a regular basis.

I am not sure if you would define these as tools, but try to read as many offline books as possible. Spending time only reading online restricts your vision of what is possible.

I also think you should invest in a small notepad and pen, as many of your best ideas will come to you when you are away from your computer.

Another tip would be to look at the major blogs in your niche, use Open Site Explorer, and check the Top Pages tab to see the type of content that attracts links. Regularly review that content and try to understand why it works.

If you find creativity a chore, then look for good ideas that have been executed badly. Turn a top 3 tips list into a “15 Must-Know Tips Before You Die” list…

[Editor’s comment: I know from the consulting session that there are many more tools he could have shared here, but I guess he’s keeping a few key tools up his sleeve for now…]

What one final BIG tip can you share with SEJ readers to spur them onto linkbait greatness?

When you have completed your linkbait article, leave it alone for a day and then spend another few hours trying to make it awesome. Getting an article to an exceptional level can skyrocket the number of links it will gain. There are plenty of good articles published daily, so you need your article to stand out from the crowd by being exceptional.


Interview with Linkbait Insider, Danny Ashton

WebmasterWorld’s hot topics and discussions you may have missed in the last few days. (Subscription required)