
Showing posts with label Search Marketing. Show all posts

March 24, 2016

how metasearch engines differ from general search engines

how metasearch engines work: how they differ from traditional search engines

Metasearch engines are search aggregators: they extract search results from several different search engines and present the user with the top-ranked sites. A metasearch engine does not maintain an index of its own; instead it extracts and combines results from several other search engines and displays them in a familiar SERP format.

One of the biggest advantages metasearch engines have is that they combine the best results from many search engines and display the best matches for the user's query.
The more results that can be seen in one place, the better it is for users. Two of the biggest metasearch engines are Dogpile and DuckDuckGo.

While Dogpile pulls results from Yahoo and Google into one place, DuckDuckGo gives you an overview of search results without building a personalised profile from your search history. This amounts to a filtering approach over the indexed data gathered from the other search engine sources. Ads appear at the top and bottom, with organic results (called "web results") in the middle. DuckDuckGo's biggest proposition is that it is the best-known search engine that does not track user behavior.

After pulling the necessary data from well-known search engines, the metasearch engine retains the top rankings from each separate engine and presents the user with the top-ranked sites. This is different from running a ranking algorithm over an index of one's own to find the best sites for a keyword.
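The merging step described above can be sketched in a few lines of Python. This is a toy illustration, not any real engine's algorithm: the Borda-style scoring and the engine result lists are assumptions made for the example.

```python
# Toy metasearch merge: combine ranked result lists from several engines.
# A result at position i in a list of n results earns n - i points,
# so items ranked highly by multiple engines float to the top.
def merge_results(result_lists, top_n=10):
    scores = {}
    for results in result_lists:
        n = len(results)
        for i, url in enumerate(results):
            scores[url] = scores.get(url, 0) + (n - i)
    # Highest combined score first, like a metasearch SERP
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

engine_a = ["site1.com", "site2.com", "site3.com"]  # hypothetical results
engine_b = ["site2.com", "site1.com", "site4.com"]
print(merge_results([engine_a, engine_b]))
```

Real metasearch engines also deduplicate URLs, normalize titles, and weight the underlying engines differently, but the basic idea is this kind of rank combination.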

March 23, 2016

5 most popular kinds of search engine spam

How to spot search engine spam: 5 tell-tale signs of spam impacting search engines

Search engine spam refers to attempts to deceive search engines into overriding existing ranking best practices by emphasizing a set of criteria that, under ideal conditions, does not deserve to rank at all.

In this post we discuss the most popular kinds of search engine spam and how to recognize them. Never use these techniques, no matter how tempted you are: doing so will only get your site blacklisted.

HIDDEN LINKS: white text or links on a white background render the text invisible to users unless it is highlighted with the mouse. Spammers fill these invisible areas with keywords or hyperlinks that spiders can read and count as relevant.

TEXT LINKS HIDDEN BY A LAYER: one of the tricks most used by black-hat SEO webmasters is to use CSS to hide spiderable content under a page layer, where it is invisible to the naked eye and cannot be revealed even by highlighting the page.

DOORWAY PAGES: doorway pages are web pages built to meet the specific algorithmic requirements of various search engines and are not meant to be shown to ordinary users. In short, doorway pages do not earn their rankings; they deceive the search engines into ranking them by design, the intention being to spam the search engine index so the page appears high in the SERPs. When a user clicks one, he or she is automatically redirected to another site or to another page within the same site.

UNCLICKABLE LINKS: creating a link that uses only a single 1x1-pixel image as an anchor, uses the period of a sentence as an anchor, or has no anchor at all. For users there is nothing to click, but the search engine can still follow the link.

CLOAKING: in cloaking, the content shown to search engines and the version shown to users' browsers are different. Spammers may cloak by IP address (information used to determine where the requesting computer or server is located) or by user agent (the HTTP header describing whether a person or a search robot is requesting the page). When a visitor is identified as a search spider, a server-side script delivers a different version of the web page, whose content differs from what the human searcher's browser would see.
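A minimal sketch of how user-agent cloaking works server-side, shown purely to illustrate the technique (serving spiders different content violates search engine guidelines, and the bot substrings below are assumptions for the example):

```python
# Cloaking illustration: branch on the User-Agent header.
# Real spammers key off IP ranges as well; this only checks the UA string.
BOT_SIGNATURES = ("googlebot", "bingbot", "slurp")  # assumed spider markers

def is_search_spider(user_agent):
    ua = user_agent.lower()
    return any(sig in ua for sig in BOT_SIGNATURES)

def select_page(user_agent):
    # A cloaking script would serve keyword-stuffed markup to spiders
    # and ordinary markup to everyone else.
    if is_search_spider(user_agent):
        return "spider_version.html"
    return "user_version.html"

print(select_page("Mozilla/5.0 (compatible; Googlebot/2.1)"))
```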

February 21, 2016

5 tips on how to make your content management system seo friendly

5 ways to ensure SEO benefits when deciding on your content management system

When looking to publish a website, many webmasters wonder whether the choice of CMS plays a role in SEO and how to fine-tune a CMS to make it SEO-friendly.
The truth is that the CMS does play a huge role in SEO. The top three CMSs happen to be Joomla, Drupal and WordPress, of which WordPress has the largest market share.
Let's take a look at the basic things to keep in mind when choosing a CMS, and how to ensure its functionality supports your search visibility.

TITLE TAG CUSTOMIZATION: a search-engine-friendly CMS has to let you customize title tags per URL, not only at the page level but also through rules for groups of pages. Sites that run on Blogger and WordPress often use the date in the URL, e.g. xyz.wordpress/post/21-02-2016. This is SEO-unfriendly and should be avoided: replace the date in the URL with the post title. The biggest issue many CMSs have is the inability to customize title tags to match the URL or the theme of the post.
For example, if you have a site on cameras at www.a1cameras4, and your CMS only allows titles that always start with your domain name, followed by a colon, followed by the article you post, you are on the brink of an SEO disaster.

Continuing the example above: if your CMS only lets you create titles that start with your website name, so that a title like "A1 cameras for you" repeats for every URL and post, you are treading dangerously. You should be able to give each URL its own customized title and meta tags.
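As a sketch of the date-vs-title URL point above, here is a minimal slug generator. It is a simplified illustration; real CMSs also handle duplicate slugs, stop words and non-ASCII titles.

```python
import re

def slugify(title):
    """Turn a post title into a URL slug, e.g. for /posts/<slug>."""
    slug = title.lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse runs of other chars
    return slug.strip("-")

print(slugify("Top 10 Cameras Under $500"))  # top-10-cameras-under-500
```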

PAGINATION CONTROLS: pagination can be the bane of a website's search rankings, so control it by including more items per page and more contextually relevant anchor text. Instead of "next" or "previous page" at the bottom, use titles like "more eCommerce news" or "latest trends in online marketing".

301 FUNCTIONALITY: many CMSs lack this critical feature, which plays a crucial role in redirecting content when necessary. A 301 permanent redirect tells search crawlers to treat the non-www and www versions as the same URL, passing the link benefits and link juice to a single canonical URL. Use a 301 when you move to a new domain, or have a newer version of a page and wish to pass the search benefits to it, thereby preserving the equity earned by the older version. This also helps you dodge keyword cannibalization.
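As a sketch of the non-www to www consolidation, here is a minimal WSGI middleware. It is illustrative only: in practice you would usually configure this at the web server, and the host names here are hypothetical.

```python
def redirect_to_www(app):
    """Wrap a WSGI app so non-www hosts get a 301 to the www host."""
    def middleware(environ, start_response):
        host = environ.get("HTTP_HOST", "")
        if host and not host.startswith("www."):
            location = "http://www." + host + environ.get("PATH_INFO", "/")
            # 301 = permanent: crawlers consolidate link equity on the target
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]
        return app(environ, start_response)
    return middleware

def site(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

app = redirect_to_www(site)

# Quick check without running a server:
captured = {}
def fake_start_response(status, headers):
    captured["status"], captured["headers"] = status, headers

app({"HTTP_HOST": "example.com", "PATH_INFO": "/page"}, fake_start_response)
print(captured["status"])  # 301 Moved Permanently
```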

IMAGE HANDLING AND ALT ATTRIBUTES: the alt attribute is a must-have feature; it serves as the anchor text when you use an image link. (Remember that in terms of search preference, text links are more advisable than image links.) If you do use image links, ensure the CMS supports alt attributes, which help search engines understand the relevance of your image's content. Images in CMS navigational elements should preferably use CSS image replacement rather than mere alt attributes.

STATIC CACHING OPTIONS: static caching is a must for any CMS you are considering. Many CMSs now offer caching options, which make perfect sense when a page receives consistently high traffic from social or news portals. A bulky CMS often makes extraneous database connections, which can increase load and overwhelm the server if caching is not in place, and the resulting downtime can cost you potential inbound links.

MULTILEVEL CATEGORIZATION STRUCTURE: if your CMS does not allow you to nest subcategories under categories, and further subcategories under those, rethink your CMS options. This limitation will keep you from exploiting your site structure and internal hierarchical linking.

META NOINDEX FOR LOW-VALUE PAGES: even if you use rel="nofollow" on your internal links, other sites might still link to those pages, and some low-value pages might rank ahead of the pages you intend to optimize. Check whether your CMS lets you apply a meta noindex to low-value pages such as About Us, Contact Us, or FAQs.
This is a better way to handle low-value pages that you do not want showing up in the SERPs.

why should you stop commenting on blogs

Many site owners use spam tactics, building bots that crawl the web looking for open forums and blogs where it is easy to add a comment and get a backlink to their website. These bots leave behind automated comments bordering on spam, most of which are unrelated to the content of the site in question.
The great majority of these comments are neutralized by the rel="nofollow" attribute or deleted by the blog software's content management tools; the spammers do not care, however, because they operate at a huge scale.

Reason number two to avoid commenting across blogs and forums is that, most of the time, the comments have no relevance to the topic. If you do comment, make sure you stick to the context and stay relevant. Sites like Quora, and SEO forums like webmastertools, host user-generated content that is editorially very sound. While blog and forum owners might use the rel="nofollow" tag, as we saw in the previous post, search engines can still judge the quality of the post and of the site linked to; nofollow only stops the link juice passing to the linked page, not its indexing.

LINK FARMS AND THE NUMBER OF LINKS: by the nature of blogs and forums, just about anyone can leave a comment, irrespective of relevance or topicality. Over time, forums and blogs acquire huge numbers of links, and search engines come to view those links as irrelevant or ignore them altogether. Worse, a webmaster with many related sites might ask for a link back in exchange for links from ten other sites he owns. All these links become interlinked, and your site might end up part of a "link farm", which Google penalizes very heavily.

February 20, 2016

5 facts about the rel=nofollow attribute to keep in mind before optimizing your search

In 2005, the three major search engines, Yahoo!, Microsoft (MSN) and Google, agreed to support an initiative to reduce the effectiveness of automated link spam.
Unlike the meta robots version of nofollow, the new directive was employed as an attribute within an <a> (link) tag to indicate that the linking site does not vouch for the quality of the linked page.

In short, the rel="nofollow" attribute was intended to tell search spiders not to pass link juice to the third-party page being linked to. Originally this made it possible to stop the automated links splashed liberally across blog comments, forums and other user-generated-content sites from fooling the search engines into crawling them and passing on the usual ranking benefits.

In due course it was seen that many website owners used content from other sites but applied rel="nofollow" to stop link juice flowing to the linked page. Google's guidelines, however, say that only paid links, or links obtained through dubious methods, should carry the nofollow attribute; when linking to a site that is editorially good, you should not be using rel="nofollow".

Please note that although rel="nofollow" instructs search crawlers not to pass on the linking benefits, it does not stop the linked page from being indexed (despite the lack of semantic logic).
You can implement a nofollow link as follows (the destination URL here is a placeholder): <a href="http://example.com" rel="nofollow">example</a>

In 2009, Matt Cutts wrote a post suggesting that the link juice associated with a nofollowed link is discarded rather than reallocated. In theory you can still use rel="nofollow" as many times as you want, but using it on internal links no longer brings the kind of benefit webmasters and SEOs once expected from it.

One word of caution: using nofollow too many times across external links can get a site flagged as over-optimized. A rough rule of thumb is to nofollow around seven out of every ten outbound links; for content you take from third parties, though, do not use rel="nofollow" on sites that are editorially seen as very strong.

February 18, 2016

5 best practices for creating a spiderable link structure for search crawlers

Links are the bedrock of the World Wide Web. Search engines rely on links to rank websites, and search algorithms depend heavily on the link graph created by human editors.
The quality of a site, and ultimately its chances of appearing in the SERPs, is determined to a large extent by the search spiders that crawl these sites, picking up signals about who links to it. Each link is used as a citation and a positive signal for the site that is linked to.

This means your website needs to be search-friendly enough to let the crawlers spider it. However, many site owners obfuscate their site navigation, and with it the link structure, to such an extent that the search crawlers cannot find the links; this limits spider accessibility and thus hurts SERP rankings.

Described below are some best practices for creating a crawlable link structure for your website. Each of these factors affects the crawlers' ability to spider your site.

1) Links in submission-required forms: search spiders cannot submit forms, so content or links accessible only via a form are invisible to search engines.

2) Links in hard-to-parse JavaScript: if you use JavaScript for links, you may find that search engines either do not crawl them or give the embedded links very little weight. (In June 2014, Google announced enhanced crawling of JavaScript and CSS. To review how your site may render, go to Google Search Console > Crawl > Fetch as Google; you need to log in to Google Webmaster Tools.)

3) Links in Flash, Java and other plug-ins: links embedded inside Java applets and other plug-ins are invisible to the search engines. In theory the engines are making progress in detecting links within Flash, but don't rely too heavily on this.

4) Links in PowerPoint and PDF files are no different from those in Flash, Java and other plug-ins. Search engines sometimes report links seen in PDFs and PowerPoint files, but it is not yet clear how much they count.

5) Avoid linking to pages blocked by meta robots, rel="nofollow" or robots.txt. A link pointing to a blocked page is almost a dead link: these mechanisms prevent search crawlers from passing PageRank to the pages linked from there, and they strip the link of its ability to serve as a citation for other websites.

6) Links on pages with hundreds of links: Google's linking guidelines (according to Matt Cutts) stated that its crawler may stop spidering a page with more than 100 links, though this is just an indicative number. Limiting a web page to roughly 100-200 links will ensure its crawlability is not affected.
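To check a page against the 100-200-links guideline above, you can count anchors with Python's standard-library HTML parser. A small sketch (the sample markup is made up):

```python
from html.parser import HTMLParser

class LinkCounter(HTMLParser):
    """Count <a> tags that actually carry an href (followable links)."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a" and any(name == "href" for name, _ in attrs):
            self.count += 1

counter = LinkCounter()
counter.feed('<p><a href="/a">one</a> <a href="/b">two</a> <a name="x">no href</a></p>')
print(counter.count)  # 2
```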

February 15, 2016

the highest cost per click keyword list in bing


"The biggest cost-per-click keywords in Bing"
Lawyers, attorneys and structured settlements are among the top 10 most expensive keywords in Bing.

Worldwide, indexed pages by Google stand at 50 billion versus Bing's 5 billion.


Bing vs Google's Indexed Pages

The size of the World Wide Web: Google's index vs Bing's. The chart compares the number of indexed pages over the last five years, the last three years, and the last six months. Google has indexed over 50 billion web pages between 2014 and 15 February 2016.
The saffron color shows the number of pages indexed by Bing since 2014, which stands at 5 billion as of 16 February 2016. The Dutch indexed web contains at least 241.41 million pages (as of Monday, 15 February 2016).

Number of Pages Indexed by google in the last 3 months

Tool to check worldwide indexed pages by search engines in real time

5 linking tricks that search engines hate

The 5 linking tricks that search engines hate: before you create your SEO strategy, you need to know which linking practices Google prefers and which it considers very low quality. We take a look at best practices for creating organic links, how to identify quality links, and how to tell whether you have unknowingly linked to a spam website.

WHICH SITES ARE LINK SPAM OR LINK FARMS: one of the most common and biggest mistakes a site owner commits is linking to spam sites or link farms. You might do this unknowingly, but the repercussions are huge, and you might kiss SEO goodbye for the next six months to a year.
Many webmasters falsely think that any link is good for their search rankings. This thinking is completely flawed, as the linking guidelines below explain.

1) What are link farms: link farms arise when you link out to sites that interlink a dozen other sites. Suppose a webmaster asks you to link back to his sites while he does the same for you: you fall prey to a vicious cycle of interlinked sites that do nothing but link to one another.
This is a problem because that webmaster might run ten sites originating from the same IP, which is easily picked up by Google. Any search engine, Google included, can easily determine whether these sites offer readers something of value or were created just to pass link juice and rank better. By associating with such a site, you run the risk of being penalized or downgraded.

2) Not having relevant links: every link is judged on 1) relevance, 2) topicality and 3) who links to you. A site on pet food linking to a movie review does not make sense. Concentrate on your niche topic and create relevant content; the relevant backlinks will follow.

3) How to research high-quality, relevant links: the best way to get good links is to find your competitors' backlinks and solicit links from those same sites. Finding them is easy: use the link: operator in Google search (for example, link:competitor.com, substituting the domain you want to research).
The reason this tends to surface non-spam links is that Google has already penalized the spammy sites and stopped indexing them, so they will not show up.

4) Checking the quality of your backlinks and finding out whether your site has been marked as spam: there are many tools and sites that help you gauge the quality of your backlinks and check whether your site has been linked to for spam.

Backlink sites/tools

1) Yahoo Site Explorer (one of the best)
2) Majestic SEO

Tools to check spam sites: many such sites will tell you whether your links point to spam pages, or rate the quality of your pages and your links. (Do not take them too seriously, however; they are prone to error and can only make an estimated guess.)


February 14, 2016

the 5 biggest server and hosting issues that affect search results

Thankfully, only a handful of hosting and server issues impact search optimization. When overlooked, however, they spiral into big problems, so it is worthwhile to understand the hosting and server issues that can negate the effect of a well-optimized site.

Here are among the biggest web hosting and server issues which impact search visibility

1) Server timeouts: if a search engine makes a request that isn't served within the bot's limited time, your pages might not be included in the Google index at all, and they will without doubt rank poorly for your given keywords (as no indexable content is found).

2) Slow response times: response times are extremely important for your search visibility, because slow pages give crawlers a tough time. If the response time is high, the spider is less likely to keep waiting for your site to load; crawlers visit your site for a limited time, and failing to serve your pages during that window will negatively impact your site's SEO.

3) Shared IP addresses: the basic concerns are speed and the risk of being associated with spammy sites originating from the same IP, which can make it look as if you are part of a link-spam neighborhood.

4) Bot detection and handling: some system administrators and webmasters go overboard with protection and restrict access to files if a single visitor makes more than a certain number of requests in a given time frame. This only serves to limit the spider's ability to crawl and will affect your search rankings.

5) Server geography: this is not a major search issue, but search engines do consider the location of the server when determining whether a site's content is relevant from a local search perspective. According to Google, around 40% of searches carry local intent.

February 12, 2016

how to measure the value of a link: 5 key metrics

One of the most frequently asked questions is how to measure the value of a link. Among the many factors, here are the key metrics that Google and Bing treat as the holy grail, to give you an idea of how link value is measured.

  • Linking page rank: where does the linking page rank for the term or phrase you wish to rank for? A link from a page that ranks #1 for the keyword "digital cameras below $100 in Nevada" is probably worth almost ten times a link from a page that ranks 20th.

  • Link authority: a link from an authoritative site is worth perhaps ten times one from just another site. For example, if you have a news site, an inbound link from CNN is considered far more important than one from just another news site.
  • The number of links on a page: a page with 10 outbound links is worth more as a link source than a page with 100. The reason is simple: every link passes a share of the page's link juice to the site it points to. A page with 100 links has to share its juice among 100 external sites, whereas a page with 10 links shares it among only 10.

  • Paid links: a site with paid links loses its ability and credibility to pass link juice, as most search engine crawlers can now distinguish paid links from organic ones. In addition, Google may penalize a site retroactively for buying links.

  • PageRank: although the PageRank factor has diminished over the years, webmasters still consider the PR of the linking site. A PR of 0 out of 10 might mean either that the site is new and Google has yet to index it, or that the site has been penalized. See the sixth criterion below.

  • Inlinks to the page or domain: it is useful to check the number of inbound links for the domain or web page you wish to get a link from. While this is not a very important metric, a higher number of incoming links to that page suggests the site commands a leadership position in its niche and is a gold standard compared with just any other site with 4-14 inbound links.

  • Nofollow links: be aware of "nofollow" links. A nofollow link means Google will not let your inbound link pass link juice, because the webmaster has instructed crawlers not to treat the link as authoritative by using rel="nofollow".
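The "number of links on a page" metric above is, at heart, simple division. A back-of-envelope illustration (the uniform split is a simplification of how PageRank actually flows):

```python
def juice_per_link(page_value, outbound_links):
    """Share of a page's link value passed by each outbound link,
    assuming an even split (a simplification)."""
    return page_value / outbound_links

print(juice_per_link(1.0, 10))   # each of 10 links passes 0.1
print(juice_per_link(1.0, 100))  # each of 100 links passes only 0.01
```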

February 8, 2016

when to use a subdomain vs a domain


When to use a subdomain instead of a domain 

  • The debate between a domain and a subdomain has gone on for years. What most people do not understand is that creating a subdomain has little significance unless you are launching a totally different product line, with totally new content, and would like a catchy subdomain for it.
  • One good reason for using a subdomain is that it can look more authoritative to users. However, take into account that it is much less expensive to use a subfolder, and accept slightly less panache, than it is to build that perception through branding and advertising.
  • A subfolder will work 99% of the time. Keeping content on a single root domain and single subdomain gives the maximum SEO benefit, as the engines pass on all the positive metrics, including the backlinks and the PageRank earned by the site.
  • Subdomains are definitely not worth the time, and not a popular choice, if SEO is a prime concern. A subdomain may inherit the ranking benefits of the root domain it is hosted under, but it does not always do so. Subdomains can be used when keyword usage in the domain name is of critical importance; for example, a microsite on its own subdomain can pull in quality search traffic for a specific term like "toyota trucks".

How to display different content to search engines and visitors

A variety of strategies are used to segment content delivery. The idea is to serve content that is not meant for search engines in an unspiderable format (placing text in images, Flash files and plug-ins).

However, don't use these formats for cloaking; use them only when they bring substantial benefit to users. If you want to show content to the search engines that you don't want visitors to see, you can use CSS formatting (preferably not display:none, as the engines may have filters to track this).
Keep in mind that search engines are very wary of webmasters using such tactics. Use cloaking only if it brings substantial user benefits.

Tactics to show different content to search engines and users. Robots.txt files: this file is located at the root level of your domain, and you can use it to 1) prevent crawlers from accessing non-public parts of your website, 2) block search engines from accessing index scripts, utilities or other types of code, and 3) enable auto-discovery of XML sitemaps.

The robots.txt file must reside in the root directory, and the filename must be entirely lowercase; any other format is not valid for search engines.

Syntax of the robots.txt file: the basic syntax of robots.txt is fairly simple. You specify a robot name, such as Googlebot, and an action. Some of the major actions you can specify are:

Disallow: use this to tell Googlebot not to crawl certain parts of your website.
Noindex: use this to tell the bots not to index certain pages in the SERPs (handy when you wish to hide duplicate-content pages on your site).

Here's an example of a robots.txt file:

User-agent: Googlebot
Disallow:

User-agent: msnbot
# Block the tmp and logs directories
Disallow: /tmp/
Disallow: /logs/

The hash symbol (#) may be used for comments within a robots.txt file; everything after the # on that line is ignored.

One additional problem webmasters run into is when they have SSL installed, so that pages may be served via both HTTP and HTTPS.
A robots.txt file fetched over HTTP will not guide the engines' crawl behavior for pages served over HTTPS; for that you need an additional robots.txt file at the HTTPS root. So if you want to allow crawling of all pages served from your HTTPS server, you would implement the following:

For HTTP:
User-agent: *
Disallow:

For HTTPS:
User-agent: *
Disallow:
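A quick way to sanity-check rules like the ones above is Python's standard-library robots.txt parser (the file contents and URLs here are hypothetical):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: Googlebot",
    "Disallow: /tmp/",
    "",
    "User-agent: *",
    "Disallow:",  # empty Disallow means allow everything
])

# Googlebot is blocked from /tmp/, every other bot is allowed
print(rp.can_fetch("Googlebot", "https://example.com/tmp/file.html"))  # False
print(rp.can_fetch("OtherBot", "https://example.com/tmp/file.html"))   # True
```

This is handy for confirming that a rule really blocks (or allows) the paths you intend before a crawler ever sees the file.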