The Metaphors of the Net - Part I

Written by Sam Vaknin


I. The Genetic Blueprint

A decade after the invention of the World Wide Web, Tim Berners-Lee is promoting the "Semantic Web". The Internet hitherto is a repository of digital content. It has a rudimentary inventory system and very crude data location services. As a sad result, most of the content is invisible and inaccessible. Moreover, the Internet manipulates strings of symbols, not logical or semantic propositions. In other words, the Net compares values but does not know the meaning of the values it thus manipulates. It is unable to interpret strings, to infer new facts, to deduce, induce, derive, or otherwise comprehend what it is doing. In short, it does not understand language. Run an ambiguous term by any search engine and these shortcomings become painfully evident. This lack of understanding of the semantic foundations of its raw material (data, information) prevents applications and databases from sharing resources and feeding each other. The Internet is discrete, not continuous. It resembles an archipelago, with users hopping from island to island in a frantic search for relevancy.

Even visionaries like Berners-Lee do not contemplate an "intelligent Web". They are simply proposing to let users, content creators, and web developers assign descriptive meta-tags ("name of hotel") to fields, or to strings of symbols ("Hilton"). These meta-tags (arranged in semantic and relational "ontologies" - lists of meta-tags, their meanings, and how they relate to each other) will be read by various applications and allow them to process the associated strings of symbols correctly (place the word "Hilton" in your address book under "hotels"). This will make information retrieval more efficient and reliable, and the information retrieved is bound to be more relevant and amenable to higher-level processing (statistics, the development of heuristic rules, etc.). The shift is from HTML (whose tags are concerned with visual appearances and content indexing) to languages such as the DARPA Agent Markup Language, OIL (Ontology Inference Layer or Ontology Interchange Language), or even XML (whose tags are concerned with content taxonomy, document structure, and semantics). This would bring the Internet closer to the classic library card catalogue.
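To make the mechanism concrete, here is a minimal sketch of the idea. It is not a real Semantic Web API; the tag names and the toy "ontology" are invented for illustration. The point is only that an application consults the meta-tag, not the bare string, to decide what "Hilton" is:

```python
# A toy sketch, not an actual Semantic Web standard: a tiny "ontology"
# maps descriptive meta-tags to categories, so an application can file
# the string "Hilton" under "hotels" instead of guessing from the string.

ontology = {
    "name of hotel": "hotels",
    "name of person": "people",
}

address_book = {}

def file_entry(meta_tag, value):
    """Route a tagged string into the right category using the ontology."""
    category = ontology[meta_tag]
    address_book.setdefault(category, []).append(value)

file_entry("name of hotel", "Hilton")
print(address_book)  # {'hotels': ['Hilton']}
```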

Even in its current, pre-semantic, hyperlink-dependent phase, the Internet brings to mind Richard Dawkins' seminal work "The Selfish Gene" (OUP, 1976). This would be doubly true for the Semantic Web.

Dawkins suggested generalizing the principle of natural selection to a law of the survival of the stable. "A stable thing is a collection of atoms which is permanent enough or common enough to deserve a name". He then proceeded to describe the emergence of "Replicators" - molecules which created copies of themselves. The Replicators that survived in the competition for scarce raw materials were characterized by high longevity, fecundity, and copying-fidelity. Replicators (now known as "genes") constructed "survival machines" (organisms) to shield them from the vagaries of an ever-harsher environment.

This is very reminiscent of the Internet. The "stable things" are HTML-coded web pages. They are replicators - they create copies of themselves every time their "web address" (URL) is clicked. The HTML coding of a web page can be thought of as "genetic material". It contains all the information needed to reproduce the page. And, exactly as in nature, the higher the longevity, fecundity (measured in links to the web page from other web sites), and copying-fidelity of the HTML code - the higher its chances to survive (as a web page).
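The analogy can even be made quantitative, at least as a caricature. The following sketch scores a page on the three replicator traits; the inputs and the combination rule are assumptions invented for this example, not any real ranking formula:

```python
# An illustrative sketch only: scoring a web page on the three replicator
# traits named above. The inputs and weights are invented for this example.

def replicator_fitness(days_online, inbound_links, copy_errors):
    longevity = days_online / 365.0        # how long the page has survived
    fecundity = inbound_links              # links to the page from other sites
    fidelity = 1.0 / (1.0 + copy_errors)   # 1.0 = every copy renders identically
    return longevity + fecundity * fidelity

# A long-lived, widely linked, faithfully copied page scores highest.
print(replicator_fitness(days_online=730, inbound_links=40, copy_errors=0))
```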

Are Reciprocal Links Dead?

Written by Tom Henricks


Is reciprocal linking dead? I just read an article at SitePro News that really rings my bell. It was written by Mike Banks Valentine. He is obviously a very successful webmaster, so one must pay attention to his words, and his most recent article really mirrors my thoughts. Having just finished building a new website for my personal use, I have been looking for optimization improvements.

The current rage is “reciprocal linking,” so I went about searching for some reciprocal links. To my dismay, I discovered websites that contained lists of links in directories that appear nearly useless as far as pointing potential customers to my site. I completed arrangements for link swaps with a few, only to go back and find it impossible to locate my link. Caught up in the frenzy, I had begun to get drawn into this game. Then I took a step backward. This craziness can’t be adding anything of value to any of these websites. Should I participate anymore? I think not. However, it was reading Mike’s words this morning that brought home the point that had been nagging away at me, sitting in some little corner of my head. This can’t be a good thing, and it can’t hold up as a relevant marker for search engine ranking. What I am seeing is just too ridiculous. I am an amateur webmaster by world standards, but I don’t like to be a “gimmick man.” I would really like my website to stand on its own two feet and be judged by what you are reading. This links thing appeared to be a farce to me. I’m not into a bunch of “flash,” and I won’t rely on gimmicks.
