February 18, 2016

6 best practices for creating a spiderable link structure for search crawlers

Links are the bedrock of the World Wide Web. Search engines rely on links to rank websites, and search algorithms depend heavily on the link graph created by human editors.
The quality of a site, and ultimately its chances of appearing in the SERPs, is determined to a large extent by the search spiders that crawl these sites, picking up signals about who links to them. Each link serves as a citation and a positive signal for the site being linked to.

This means your website needs to be search friendly so that crawlers can spider it. However, many site owners obfuscate their site navigation, and with it the link structure, to such an extent that search crawlers cannot find the links. This limits spider accessibility and hurts SERP rankings.

Described below are some of the best practices for creating a link structure for your website. Each of these factors affects a crawler's ability to spider your site.

1) Links in submission-required forms: Search spiders cannot fill in and submit forms, so content that is reachable only after a form submission is invisible to search engines.
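To illustrate why form-gated content stays hidden, here is a minimal sketch using Python's standard-library `HTMLParser`. The page markup and URLs are invented for illustration: a basic crawler discovers URLs only from `<a href>` tags, so the form's search results are never found.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects hrefs from <a> tags, the way a basic crawler discovers URLs.
    Form actions are deliberately ignored: a spider will not fill in and
    submit a form, so anything behind it stays undiscovered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

page = """
<a href="/products">Products</a>
<form action="/search" method="post">
  <input name="q"><input type="submit">
</form>
"""

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # only /products is discovered; /search results are not
```

The fix is to expose the same content through plain crawlable anchors, for example category and sitemap pages that link to everything the form can reach.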

2) Links in hard-to-parse JavaScript: If you use JavaScript for links, you may find that search engines either do not crawl the embedded links or give them very little weight. (In June 2014, Google announced enhanced crawling of JavaScript and CSS. To review how your site may render, go to Google Search Console > Crawl > Fetch as Google; you need to log in to Google Webmaster Tools.)
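The point can be sketched with Python's standard-library `HTMLParser`, which, like a non-rendering crawler, parses the raw markup without executing scripts. The markup below is invented for illustration: the parser treats everything inside `<script>` as opaque data, so the anchor written by JavaScript at runtime is never seen.

```python
from html.parser import HTMLParser

class StaticLinkExtractor(HTMLParser):
    """Finds only anchors present in the static HTML; <script> bodies are
    treated as raw data, mimicking a crawler that does not run JavaScript."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

page = """
<a href="/about">About</a>
<script>
  // This link only exists after the browser executes the script.
  document.write('<a href="/hidden-deal">Deals</a>');
</script>
"""

extractor = StaticLinkExtractor()
extractor.feed(page)
print(extractor.links)  # only /about; /hidden-deal requires JS execution
```

Rendering crawlers have improved since, but a plain HTML anchor is still the only link format every spider is guaranteed to follow.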

3) Links in Flash, Java, and other plugins: Links embedded inside Flash, Java applets, and other plugins are invisible to search engines. In theory the engines are making progress in detecting links within Flash, but don't rely too heavily on this.

4) Links in PowerPoint and PDF files: These are no different from Flash, Java, and other plugins. Search engines sometimes report links seen in PDFs and PowerPoint files, but it's not yet clear how much weight they carry.

5) Avoid linking to pages blocked by "nofollow" or robots.txt: If your link points to a page blocked by the meta robots tag or robots.txt, or the link itself carries rel="nofollow", it is almost the same as a dead link. These mechanisms prevent search crawlers from passing PageRank to the linked pages and stop the link from serving as a citation for the other website.
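You can check whether a target URL is blocked before linking to it. Here is a minimal sketch using Python's standard-library `urllib.robotparser`; the robots.txt rules and example.com URLs are hypothetical, chosen only to show the check.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt contents for the site you want to link to.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = RobotFileParser()
rp.parse(rules)

# A link into /private/ points at an uncrawlable page, so it passes no value.
print(rp.can_fetch("*", "https://example.com/private/deals.html"))  # False
print(rp.can_fetch("*", "https://example.com/public/page.html"))    # True
```

In practice you would load the live file with `rp.set_url(".../robots.txt")` and `rp.read()`; checking rel="nofollow" is a separate step done by inspecting the anchor's attributes in the page markup.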

6) Links on pages with hundreds of links: Google's guidelines on linking (according to Matt Cutts) state that its crawler may stop spidering a page with more than 100 links, though this is just an indicative number. Limiting a page to roughly 100-200 links will help ensure that its crawlability is not affected.
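Auditing a page against that guideline is easy to automate. Below is a minimal sketch, again with Python's standard-library `HTMLParser`; the sample page and the 200-link threshold are illustrative, taken from the upper end of the range suggested above.

```python
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Counts <a href> anchors so a page can be audited against the
    ~100-200 links-per-page guideline discussed above."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a" and dict(attrs).get("href"):
            self.count += 1

# A synthetic page containing 250 links.
page = '<a href="/item">item</a>' * 250

counter = LinkCounter()
counter.feed(page)

LIMIT = 200  # upper end of the suggested range
if counter.count > LIMIT:
    print(f"{counter.count} links - consider splitting or pruning this page")
```

Running this over your key landing pages flags the ones worth splitting into smaller hub pages.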