

Stumbling Instead of Crawling

You are probably thinking chiefly of your human visitors when you set up your website’s navigation, as well you should. But certain kinds of navigation structures will trip up spiders, making it less likely that those visitors will find your site in the first place. As an added bonus, many of the changes that make it easier for a spider to find your content will also make it easier for visitors to navigate your site.

It’s worth keeping in mind, by the way, that you might not want spiders to be able to index everything on your site. If you own a site with content that users pay a fee to access, you probably don’t want Googlebot to grab that content and show it to anyone who enters the right keywords. There are ways to deliberately block spiders from such content; in keeping with the rest of this article, which is intended mainly as an introduction, they will only be mentioned briefly here.

Dynamic URLs are one of the biggest stumbling blocks for search engine spiders; pages with two or more dynamic parameters, in particular, will give a spider fits. You know a dynamic URL when you see it: it is usually littered with "garbage" such as question marks, equal signs, ampersands (&) and percent signs. Spiders tread carefully around such URLs because strings of parameters can generate an effectively endless supply of near-duplicate pages. Dynamic pages are great for human users, who usually reach them by setting certain parameters on a page. For example, typing a zip code into a box at weather.com returns a page describing the weather for a particular area of the US – with a dynamic URL as the page location.
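If your server supports URL rewriting, one common remedy is to present spiders with static-looking addresses while still serving dynamic content behind the scenes. The sketch below assumes an Apache server with mod_rewrite enabled; the weather.php script and its zip parameter are hypothetical stand-ins for your own application.

    # Hypothetical .htaccess rule: serve the clean URL /weather/90210
    # from the dynamic script /weather.php?zip=90210, so spiders never
    # see the question mark or equal sign.
    RewriteEngine On
    RewriteRule ^weather/([0-9]{5})$ /weather.php?zip=$1 [L]

The visitor (and the spider) sees only /weather/90210, which looks like an ordinary static page.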

There are other ways in which spiders dislike complexity. For example, a page with more than 100 unique links to other pages on the same site can overwhelm a spider at a glance, and it may not follow every link. If you are trying to build a site map, there are better ways to organize it than one enormous page of links.

Pages that are buried more than three clicks from your website’s home page also might not be crawled. Spiders do not like to go that deep. For that matter, many humans can get “lost” on a website with that many levels of links if there is not some kind of navigational guidance.

Pages that require a "Session ID" or cookie to enable navigation also might not be spidered. Spiders are not browsers, and do not have the same capabilities. They may not be able to retain these forms of identification.
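As an illustration, consider a PHP site configured to pass its session ID through the URL when a visitor will not accept cookies (the page name and ID below are invented). A spider that cannot hold a cookie gets a fresh ID on every visit, so the same page shows up under an endless series of addresses:

    http://www.example.com/products.php?PHPSESSID=9f3c1a2b7d8e4f60

At best the spider ignores such URLs; at worst the engine indexes many near-identical copies of one page.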

Another stumbling block for spiders is pages that are split into "frames". Many web designers like frames, which let them keep page navigation in one place even while a user scrolls through content. But spiders find framed pages confusing: to them, content is content, and they have no way of knowing which of the framed pages should appear in the search results. Frankly, many users do not like frames either; rather than providing a cleaner interface, framed pages often look cluttered.
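If you are committed to frames, the traditional workaround is a <noframes> section containing ordinary HTML links that spiders (and browsers that cannot display frames) can follow. A minimal sketch, with hypothetical file names:

    <frameset cols="25%,75%">
      <frame src="nav.html" name="navigation">
      <frame src="content.html" name="main">
      <noframes>
        <body>
          <p><a href="nav.html">Navigation</a> |
             <a href="content.html">Main content</a></p>
        </body>
      </noframes>
    </frameset>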

Most of the stumbling blocks above are ones you may have accidentally put in the way of spiders. This next set of stumbling blocks includes some that website owners might use on purpose to block a search engine spider. While I mentioned one of the most obvious reasons for blocking a spider above (content that users must pay to see), there are certainly others: the content itself might be free, but should not be easily available to everyone, for example.

Pages that can be accessed only after filling out a form and hitting “Submit” might as well be closed doors to spiders, which cannot push buttons or type. Likewise, pages reachable only through a drop-down menu might not be spidered, and the same holds true for documents that can only be reached via a search box.
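If a drop-down "jump menu" is the only route to some of your pages, one simple fix is to repeat those destinations as plain links nearby, so a spider still has something to follow. A rough sketch, with invented page names:

    <!-- A JavaScript jump menu: spiders cannot operate it -->
    <select onchange="if (this.value) window.location = this.value;">
      <option value="">Choose a page...</option>
      <option value="/widgets.html">Widgets</option>
      <option value="/gadgets.html">Gadgets</option>
    </select>

    <!-- The same destinations as plain HTML links a spider can follow -->
    <p><a href="/widgets.html">Widgets</a> | <a href="/gadgets.html">Gadgets</a></p>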

Documents that you block on purpose will usually not be spidered; you can block them with a robots meta tag or a robots.txt file.
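Both mechanisms are short enough to show in full. A robots.txt file sits at the root of your site and lists what spiders should skip; a robots meta tag goes in the <head> of an individual page. The /private/ directory below is just a placeholder:

    # robots.txt, placed at http://www.example.com/robots.txt
    User-agent: *
    Disallow: /private/

    <!-- Or, on a single page, inside the <head> element: -->
    <meta name="robots" content="noindex, nofollow">

Keep in mind that robots.txt is a request, not a lock: well-behaved spiders honor it, but it will not by itself protect paid content from a determined visitor.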

Pages that require a login block search engine spiders. Remember the “spiders can’t type” observation above. Just how are they going to log in to get to the page?

Finally, I would like to make a special note of pages that redirect before showing content. Not only will that not get your page indexed, it could get your site banned. Search engines refer to this tactic as "cloaking" or "bait-and-switch". You can check Google's guidelines for webmasters (http://www.google.com/intl/en/webmasters/guidelines.html) if you have any questions about what is considered legitimate and what isn’t.
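The classic offender is a page whose only job is to bounce visitors somewhere else the moment it loads, such as a zero-second meta refresh. A sketch of the pattern to avoid (the destination URL is invented):

    <!-- A page that redirects before showing any content; search
         engines may treat this as bait-and-switch -->
    <meta http-equiv="refresh" content="0;url=http://www.example.com/real-page.html">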

Now that you know what will make spiders choke, how do you encourage them to go where you want them to? The key is to provide direct HTML links to each page you want the spiders to visit. Also, give them a shallow pool to play in. Spiders usually start on your home page; if any part of your site cannot be accessed from there, chances are the spider won’t see it. This is where use of a site map can be invaluable.
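A site map does not need to be fancy. A single page of ordinary HTML links, itself linked from the home page, hands a spider a direct path to every section of your site. A bare-bones sketch with hypothetical page names:

    <h1>Site Map</h1>
    <ul>
      <li><a href="/products.html">Products</a></li>
      <li><a href="/services.html">Services</a></li>
      <li><a href="/articles/index.html">Articles</a></li>
      <li><a href="/contact.html">Contact Us</a></li>
    </ul>

Because each of these pages sits one click from the home page, nothing is buried beyond the depth spiders are willing to crawl.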
