Google Flaws and Fixes
Written by Martin Winer
There have been many criticisms recently of Google and its PageRank algorithm. First, to the criticisms of Google itself. Has anyone noticed that Google currently indexes some 4 billion web pages? Very few of us have had time to wade through Google's white paper, but those who did noticed that 4 billion is Google's upper limit on its ability to address, or identify and rank, web pages. Basically, Google is full. I noticed the problem when my site, www.rankyouragent.com, wasn't getting visited very often and wasn't being indexed properly. If you have a new site, Google appears to be very slow in adding it and in noticing new sites that link to you. Don't take my word for it; feel free to Google search "google broken" (and feel free to note the irony).

Technical problems aside, there is a broader algorithmic problem inherent to the PageRank algorithm. Its basic tenet is that a link from one site to another is a vote for that site. Regrettably, that approach is only valid in an internet where the algorithm isn't public knowledge. Of course, everyone and their grandmother knows how Google PageRank works, and as a result we see link spam. Competitors to www.rankyouragent.com such as http://www.mostreferred.com/ have created thousands of different domain names that point back to their bread-and-butter main page. They list 6,100 backward links, some of which come from similar web domains such as newfoundland.mostreferred.ca/. This link spam is insurmountable for a new site with anything less than a gigantic promotion budget, and as a result many people are denied the chance to view a competitor's website.
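To make the link-as-vote idea concrete, here is a minimal sketch of the published PageRank recurrence in Python. The graph, domain names, damping factor and iteration count are illustrative assumptions, not Google's actual data or code; the sketch only shows why dozens of throwaway domains pointing at one page can swamp a new site that has a single genuine link.

```python
# Minimal PageRank sketch: each link is a "vote", and a page splits its
# score among the pages it links to. All names below are hypothetical.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}

    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, targets in links.items():
            if not targets:
                continue
            share = rank[page] / len(targets)  # vote split among outlinks
            for target in targets:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# Hypothetical link-spam scenario: 50 throwaway domains all point at one
# "main" page, while a new site receives just one honest inbound link.
graph = {f"spam{i}.example": ["mainpage.example"] for i in range(50)}
graph["honest-referrer.example"] = ["newsite.example"]

scores = pagerank(graph)
print(scores["mainpage.example"], scores["newsite.example"])
```

Running this, the spam-boosted main page ends up with a far higher score than the honestly linked new site, which is exactly the imbalance the article describes.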
New Revolution in Search Engines
Written by Omair Aasim
Aiming to provide users with the best search results, free website submission and unbiased website ranking, ObjectSearch.com has launched an open source search engine based on Nutch.org's search. ObjectSearch looks to solve problems related to search result manipulation and information overload.

ObjectSearch claims that its open source approach provides an "alternative to commercial web search engines. Only open source search results can be fully trusted to be without bias." This premise goes against the major search engines and the methods by which they rank search results. "All existing major search engines have proprietary ranking formulas, and will not explain why a given page ranks as it does. Additionally, some search engines determine which sites to index based on payments, rather than on the merits of the sites themselves. Objects Search, on the other hand, has nothing to hide and no motive to bias its results or its crawler in any way other than to try to give each user the best results possible."

Each result that appears on ObjectSearch's results page contains four different links: a cached link, which displays the page as ObjectSearch downloaded it; an explanation link, which describes how the site received its ranking; an anchor link, which shows a list of the incoming anchors that have been indexed for the page in question; and finally a plain text link, which displays the plain text version of the page that Objects Search downloaded.