Between 75% and 98.8% of visitors to Web sites come from searches made at search engines. If you're going to get high levels of traffic - and hence the levels of ROI you're looking for - it's very important that the search engines can access all the information on your Web site.

Do the search engines know about all of your pages?
You can find out which pages on your site the search engines know about by using a special search. If you search for 'site:' followed by your Web site address, the search engine will tell you all of the pages on your Web site it knows about.

For example, search for site:webpositioningcentre.co.uk in Google, Yahoo or MSN Search, and each will tell you how many pages it knows about.
If the search engines haven't found some of the pages on your Web site, it is probably because they are having trouble spidering them. ('Spidering' is when a search engine uses an automated robot to read your Web pages.)
Spiders work by starting off on a page which has been linked to by another Web site, or which has been submitted to a search engine. They then read and follow any links they find on that page, gradually working their way through your whole Web site.

At least, that's the theory.
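That crawl can be sketched as a simple breadth-first search over pages and links. This is only an illustration - the page names and HTML below are invented, and real spiders fetch pages over HTTP rather than from memory:

```javascript
// Hypothetical in-memory "site": page name -> HTML. A real spider
// would fetch each page over HTTP instead.
const pages = {
  "index.html": '<a href="about.html">About</a> <a href="contact.html">Contact</a>',
  "about.html": '<a href="index.html">Home</a>',
  "contact.html": '<a href="hidden.html">Special offers</a>',
  "orphan.html": "<p>No page links here, so a spider never finds it.</p>",
};

// Pull the href out of every standard <a> tag on a page.
function extractLinks(html) {
  const links = [];
  const re = /<a\s[^>]*href="([^"]+)"/gi;
  let match;
  while ((match = re.exec(html)) !== null) {
    links.push(match[1]);
  }
  return links;
}

// Breadth-first crawl: read each page, follow every plain HTML
// link found, and keep going until no new pages turn up.
function spider(start) {
  const seen = new Set([start]);
  const queue = [start];
  while (queue.length > 0) {
    const page = queue.shift();
    for (const link of extractLinks(pages[page] || "")) {
      if (!seen.has(link)) {
        seen.add(link);
        queue.push(link);
      }
    }
  }
  return seen;
}
```

Note that a page nothing links to - like orphan.html in this sketch - is never discovered at all, which is exactly how pages end up missing from a search engine's index.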
The problem is, it's easy to confuse the spiders - especially as they are designed to be wary of following certain kinds of link.
Links which confuse spiders
If your links are within a large chunk of JavaScript code, a spider may not be able to find them, and will not be able to follow the links to your other pages.
This can happen if you have 'rollovers' as your navigation - for instance, pictures that change colour or appearance when you hover your mouse pointer over them. The JavaScript code that makes this happen can be convoluted enough for the spiders to ignore it rather than try to find links inside it.
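As an illustration, here is the kind of rollover navigation that can hide a link from a spider. The page and image names are invented for the example:

```html
<!-- The real destination only exists inside JavaScript: there is
     no ordinary href for a spider to follow. -->
<a href="javascript:void(0)"
   onclick="window.location.href='products.html'">
  <img src="nav-products.gif"
       onmouseover="this.src='nav-products-on.gif'"
       onmouseout="this.src='nav-products.gif'"
       alt="Products">
</a>
```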
If you think your rollovers are blocking your site from being spidered, you will need to talk to your Web designers about changing the code into a 'clean link' - a standard HTML link, with no extra code around it - which is much easier for the spiders to follow.
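A minimal sketch of that fix, again with hypothetical file names: the destination moves into a plain href that a spider can follow, while the rollover effect stays in the image's own event handlers:

```html
<!-- A 'clean link': a standard HTML link with the destination in
     the href itself, so a spider can follow it. The rollover
     still works exactly as before. -->
<a href="products.html">
  <img src="nav-products.gif"
       onmouseover="this.src='nav-products-on.gif'"
       onmouseout="this.src='nav-products.gif'"
       alt="Products">
</a>
```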