Site Maps: A Force to be Reckoned With
Written by Kristy Meghreblian
Another important component of search engine optimization is the use of site maps. If you want visitors -- and search engine spiders -- to find every page on your Web site, a site map can be your biggest ally, especially if you have a lot of content on your site (and if you've been reading all the advice on our site, you should know by now that the more content you have, the better your chances are for a top ranking).
So, what is a site map? Basically, it is a navigation tool. It lets visitors know what information you have, how it is organized, where it is located with respect to other information, and how to get to that information in the fewest clicks possible. A good site map is more than a hyperlinked index, which only provides the user with an alphabetical list of terms.
Site maps also provide plenty of nutritious spider food for the search engine robots that crawl your site and eventually index it. Once a robot reaches the site map, it can visit every page on your entire site, because all of that information is clearly laid out on that one page. However, for your site map to work most effectively, you must include a link to it in the navigation on every page of your site.
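By way of illustration, here is a minimal sketch of what such a site map page might look like for a small, hypothetical site; every section name, page name, and URL below is invented for the example.

<!-- sitemap.html: a minimal, hypothetical site map page.
     All section names, page names, and URLs are invented examples. -->
<html>
<head>
  <title>Site Map - Example.com</title>
</head>
<body>
  <h1>Site Map</h1>
  <h2>Products</h2>
  <ul>
    <li><a href="/products/widgets.html">Widgets</a></li>
    <li><a href="/products/gadgets.html">Gadgets</a></li>
  </ul>
  <h2>Articles</h2>
  <ul>
    <li><a href="/articles/seo-basics.html">SEO Basics</a></li>
    <li><a href="/articles/site-maps.html">Why Site Maps Matter</a></li>
  </ul>
  <h2>About</h2>
  <ul>
    <li><a href="/about/contact.html">Contact Us</a></li>
  </ul>
</body>
</html>

A plain hyperlinked page like this is all a spider needs: once it reaches sitemap.html from any page's navigation, every other page is one click away.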
How Can Search Engines Help You with Your Business?
Written by Dmitry Antonoff, Irina Ponomareva
What Are Search Engines?

Most of us regularly face the problem of searching the web. Nowadays, the global network is one of the most important sources of information there is, its main goal being to make information easily accessible. That's where the main problem arises: how do you find what you need among all those innumerable terabytes of data? The World Wide Web is overloaded with material related to the diverse interests and activities of the human beings who inhabit the globe. How can you tell what a site is devoted to without visiting it? Besides, the number of resources grew as quickly as the Internet itself, and many of them closely resembled each other (and still do). This situation necessitated finding a reliable (and at the same time fast) way to simplify the search process; otherwise there would be absolutely no point to the World Wide Web. So the development and deployment of the first search engines closely followed the birth of the World Wide Web.

How It All Began

At the start, search engines developed quite rapidly. The "grandfather" of all modern search engines was Archie, launched in 1990, the creation of Alan Emtage, a student at McGill University, Montreal. Three years later, the University of Nevada System Computing Services deployed Veronica. These search engines created databases and collected information on files existing in the global network. But they were soon overwhelmed by the fast growth of the net, and others stepped forward. World Wide Web Wanderer was the first automated Internet robot, whereas ALIWEB, launched in the autumn of 1993, was the first rough model of a modern web directory, filled in by site owners or editors. At about the same time, the first 'spiders' appeared: JumpStation, World Wide Web Worm, and Repository-Based Software Engineering, starting a new era of World Wide Web search. Google and Yahoo are two of their better-known descendants. (See http://galaxy.com/info/history2.html.)

Search Engines Today

Modern web search services are divided into two main groups:

• search engines; and
• directories.

Search engines automatically 'crawl' web pages (by following hyperlinks) and store copies of them in an index, so that they can generate a list of resources according to users' requests (see 'How Search Engines Work', below). Directories are compiled by site owners or directory editors (in other words, humans) according to categories. In truth, most modern web search services combine the two systems to produce their results.

How Search Engines Work

All search engines consist of three main parts:

• the spider (or worm);
• the index; and
• the search algorithm.

The first of these, the spider (or worm), continuously 'crawls' web space, following links that lead both within the limits of a website and to completely different websites. The spider 'reads' each page's content and passes the data to the index. The index, the second part of a search engine, is a storage area for spidered web pages, and it can be of huge magnitude (Google's index, for example, is said to contain three billion pages). The third part of a search engine is the most sophisticated: the search algorithm, a very complicated mechanism that sorts an immense database within a few seconds and produces the results list. Presented as a web page (or, more often, a series of pages), the list contains links to resources that match the user's query (i.e., relevant resources). The most relevant ones (as the search engine sees it) appear nearer the top of the list, and they are the ones most likely to be clicked by the search engine's user. A site owner should therefore pay close attention to the site's relevancy to the keywords people are expected to use to find it.
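To make this three-part structure concrete, here is a toy sketch in Python: a miniature spider, index, and relevancy ranking over a handful of hypothetical pages. It illustrates the division of labour described above, not how any real search engine is implemented; all URLs and page text are invented.

import re

# Toy "web": a few hypothetical pages, each with text and outgoing links.
PAGES = {
    "http://example.com/": {
        "text": "Welcome to our widget store. We sell quality widgets.",
        "links": ["http://example.com/widgets", "http://example.com/about"],
    },
    "http://example.com/widgets": {
        "text": "Widgets, widgets, widgets: the best widgets on the web.",
        "links": ["http://example.com/"],
    },
    "http://example.com/about": {
        "text": "About our company and its history.",
        "links": [],
    },
}

def spider(start_url):
    """Part 1: the spider. Follows links from page to page and
    returns every page it can reach."""
    seen, queue = set(), [start_url]
    while queue:
        url = queue.pop()
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        queue.extend(PAGES[url]["links"])
    return seen

def build_index(urls):
    """Part 2: the index. Maps each word to the pages containing it,
    with a count of how often it appears (an inverted index)."""
    index = {}
    for url in urls:
        for word in re.findall(r"[a-z]+", PAGES[url]["text"].lower()):
            index.setdefault(word, {}).setdefault(url, 0)
            index[word][url] += 1
    return index

def search(index, query):
    """Part 3: the search algorithm. Here, a crude relevancy score:
    pages that mention the query words more often rank higher."""
    scores = {}
    for word in query.lower().split():
        for url, count in index.get(word, {}).items():
            scores[url] = scores.get(url, 0) + count
    return sorted(scores, key=scores.get, reverse=True)

index = build_index(spider("http://example.com/"))
print(search(index, "widgets"))  # the widget-heavy page ranks first

Running it lists the widget-heavy page first, because this crude scoring simply counts keyword occurrences; real engines weigh many more signals, but the spider/index/algorithm division is the same.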
(See also: http://www.searchenginewatch.com/webmasters/article.php/2168031.)

The relevancy calculation algorithm is unique to every search engine, and it is a trade secret, kept hidden from the public. However, there are some common principles, which are discussed below. (See also: http://www.searchenginewatch.com/webmasters/article.php/2167961.)

What to Do to Have Your Web Site Found through Search Engines

There are some simple rules for making your resource relevant enough to be ranked in the top 10 by the majority of search engines.

Rule 1: Work on the body copy

A search engine determines the topic of your site by judging the textual information (or content) of every page. Of course, it cannot comprehend content the way humans do, but this is not critical. It is much more important to include keywords, which the programme finds and compares with users' queries. The more often you use your targeted keywords, the better your page will rank when a search on those keywords is made. You can increase the relevancy of your targeted keywords still further by including them in the HTML title of your page (the <title> tag) and in the subheaders (the <h1> to <h6> tags), as in the sketch below.
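As a minimal sketch of that advice, here is a hypothetical page targeting the invented keyword phrase "handmade widgets", with the phrase placed in the body copy, the title, and the subheaders; all names and copy are made up for the example.

<!-- A hypothetical page targeting the keyword phrase "handmade widgets". -->
<html>
<head>
  <!-- Targeted keywords in the HTML title -->
  <title>Handmade Widgets - Quality Handmade Widgets for Every Budget</title>
</head>
<body>
  <!-- ...and repeated in the subheaders and body copy -->
  <h1>Handmade Widgets</h1>
  <p>Our handmade widgets are crafted one at a time by skilled artisans.</p>
  <h2>Why Choose Handmade Widgets?</h2>
  <p>Every handmade widget we sell is inspected by hand before shipping.</p>
</body>
</html>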