The Knowing - Search engines know by crawling. What they know goes far beyond what is commonly perceived by most users, webmasters and SEOs. While the vast storehouse we call the Internet provides billions upon billions of pages of data for search engines to know, they also pick up more than that. Search engines know a number of different methods for storing data, presenting data, prioritizing data and, of course, ways of tricking the engines themselves.
While search engine spiders are crawling the web, they are grabbing the stores of data that exist and sending them back to the datacenters, where that information is processed through the existing algorithms and spam filters and each document attains a ranking based on the engine's current understanding of the way the Internet, and the documents contained within it, work.
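To make that crawl-and-store cycle concrete, here is a minimal, purely illustrative sketch of what a spider does: fetch a document, record it in an index, discover its links, and queue them for the next visit. The tiny in-memory "web" (the `PAGES` dictionary and its example URLs) is a stand-in invented for this sketch; a real engine fetches over HTTP and processes documents through far more sophisticated pipelines.

```python
from html.parser import HTMLParser
from collections import deque

# A tiny in-memory "web" standing in for real pages (hypothetical data).
PAGES = {
    "http://a.example/": '<a href="http://b.example/">B</a> welcome page',
    "http://b.example/": '<a href="http://a.example/">A</a> about links',
}

class LinkParser(HTMLParser):
    """Collects the href of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def crawl(start_url):
    """Breadth-first crawl: fetch, store the document, queue new links."""
    index = {}                      # url -> raw document: our "datacenter"
    queue = deque([start_url])
    while queue:
        url = queue.popleft()
        if url in index or url not in PAGES:
            continue                # already seen, or outside our tiny web
        html = PAGES[url]           # a real spider would fetch over HTTP
        index[url] = html
        parser = LinkParser()
        parser.feed(html)
        queue.extend(parser.links)  # newly discovered documents to visit
    return index

index = crawl("http://a.example/")
print(sorted(index))    # both pages discovered from a single seed URL
```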
Similar to the way we process a newspaper article based on our current understanding of the world, search engines process and rank documents based on what they understand to be true about the way documents are organized on the Internet.
The Learning - Once it is understood that search engines rank documents based on a specific understanding of the way the Internet functions, it follows that a search engine must have the ability to "learn": it must be able to read new document types and technologies, and its algorithms must change as new understandings of how the Internet functions are uncovered.
Aside from needing the ability to properly spider documents stored in newer technologies, search engines must also have the ability to detect and accurately penalize spam, and to accurately rank websites based on new understandings of the way documents are organized and links arranged. Areas where search engines must learn on an ongoing basis include, but are most certainly not limited to:
- Understanding the relevancy of the content between sites where a link is found
- Attaining the ability to view content in documents built with newer technologies such as databases, Flash, etc.
- Understanding the various methods used to hide text, links, etc. in order to penalize sites engaging in these tactics
- Learning from current results, and any shortcomings in them, what tweaks to the current algorithms or what additional considerations must be taken into account to improve the relevancy of future results.
The learning of a search engine generally comes from the uber-geeks the engines hire and from the engines' users. Once a factor is taken into account and programmed into the algorithm, it then moves into the "knowing" category until the next round of updates.
How This Helps in SEO
This is the point at which you may be asking yourself, "This is all well and good, but exactly how does this help ME?" An understanding of how search engines function, how they learn, and how they live is one of the most important understandings you can have when optimizing a website. It will ensure that you don't simply apply random tricks in the hope that you've listened to the right person in the forums that day, but rather that you consider what the search engine is trying to do and whether a given tactic fits with the long-term goals of the engine.
For a while, keyword density spamming was all the rage among less ethical SEOs, as was building networks of websites that link together in order to boost link popularity. Neither of these tactics works today. Why? They do not fit with the long-term goals of the search engine. Search engines, like humans, want to survive. If the results they provide are poor, the engine will die a slow but steady death, and so they evolve.
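Keyword density is simply the fraction of a page's words that are a given keyword, and it is easy to see why stuffing was easy to do and, eventually, easy to detect. The sketch below (with made-up example text) shows the basic arithmetic; real engines use far more nuanced signals than a raw ratio.

```python
def keyword_density(text, keyword):
    """Fraction of the words in `text` that are the given keyword."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

# Hypothetical example text: natural copy vs. keyword-stuffed copy.
natural = "We sell handmade oak furniture and restore antique oak pieces."
stuffed = "oak furniture oak furniture oak furniture oak oak oak furniture"

print(round(keyword_density(natural, "oak"), 2))   # 0.2  - plausible density
print(round(keyword_density(stuffed, "oak"), 2))   # 0.6  - implausibly high
```

A ratio like 0.6 is a statistical outlier compared to naturally written text, which is exactly the kind of pattern a filter can learn to flag.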
When considering any tactic you must ask: does this fit with the long-term goals of the engine? Does this tactic in general serve to provide better results for the largest number of searches? If the answer is yes, then the tactic is sound.
For example, the overall relevancy of your website (i.e. does the majority of your content focus on a single subject?) has become more important over the past year or so. Does this help the searcher? The searcher will find more content on the subject they searched for on larger sites with larger amounts of related content, and thus this shift does help the searcher overall. A tactic that includes the addition of more content to your site is thus a solid one, as it helps build the overall relevancy of your website and gives the visitor more, and more current, information at their disposal once they get there.
Another example is link building. Reciprocal links are becoming less relevant, and reciprocal links between unrelated sites are virtually irrelevant. If you are engaging in reciprocal link building, ensure that the sites you link to are related to your site's content. If I were a search engine, I would want to know that a site in my results also provided links to other related sites, increasing the chance that the searcher will find the information they are looking for one way or another, without having to switch to a different search engine.
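One simple way to picture how an engine might judge whether two linked sites are "related" is to compare their vocabularies. The sketch below uses cosine similarity over raw word counts, with invented example text; it is a toy illustration of the idea, not how any engine actually scores link relatedness.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a, text_b):
    """Cosine similarity between simple word-count vectors of two texts."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical page text from three sites.
gardening = "organic gardening tips compost soil seeds planting guide"
seeds     = "heirloom seeds planting soil compost organic vegetables"
casino    = "online casino poker blackjack slots jackpot bonus"

print(round(cosine_similarity(gardening, seeds), 2))   # 0.67 - related sites
print(round(cosine_similarity(gardening, casino), 2))  # 0.0  - unrelated sites
```

A reciprocal link between the gardening and seed sites shares plenty of vocabulary and plausibly helps the searcher; a gardening-to-casino swap shares none, and an engine that has learned this distinction can discount it accordingly.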
In short, think ahead. Understand that search engines are organic beings that will continue to evolve. Feed them when they visit your site and they will return often and reward your efforts. Use unethical tactics and you may hold a good position for a while, but in the end, if you do not use tactics that provide for good overall results, you will not hold your position for long. They will learn.
Dave Davies is the CEO of Beanstalk Search Engine Positioning. He has been optimizing and ranking websites for over four years and has a solid history of success. Beanstalk is happy to offer guaranteed search engine positioning services to its clients.