Bright Planet, Deep Web
Written by Sam Vaknin
www.allwatchers.com and www.allreaders.com are web sites in the sense that a file is downloaded to the user's browser when he or she surfs to these addresses. But that is where the similarity ends. These web pages are front-ends, gates to underlying databases. The databases contain records regarding the plots, themes, characters and other features of, respectively, movies and books. Every user query generates a unique web page whose contents are determined by the query parameters. The number of singular pages thus capable of being generated is mind-boggling. Search engines operate on the same principle - vary the search parameters slightly and totally new pages are generated. It is a dynamic, user-responsive and chimerical sort of web.

These are good examples of what www.brightplanet.com calls the "Deep Web" (previously, and inaccurately, described as the "Unknown or Invisible Internet"). They believe that the Deep Web is 500 times the size of the "Surface Internet" (a portion of which is spidered by traditional search engines). This translates to c. 7,500 terabytes of data (versus 19 terabytes in the whole known web, excluding the databases of the search engines themselves) - or 550 billion documents organized in 100,000 deep web sites. By comparison, Google, the most comprehensive search engine ever, stores 1.4 billion documents in its immense caches at www.google.com.

The natural inclination to dismiss these pages of data as mere re-arrangements of the same information is wrong. Actually, this underground ocean of covert intelligence is often more valuable than the information freely available or easily accessible on the surface. Hence the ability of c. 5% of these databases to charge their users subscription and membership fees. The average deep web site receives 50% more traffic than a typical surface site and is much more linked to by other sites. Yet it is transparent to classic search engines and little known to the surfing public.
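The front-end-plus-database pattern described above can be sketched in a few lines. The sketch below is a minimal illustration only, not the actual code behind any of the sites named here; the movie table, its contents, and the `render_page` function are invented for the example.

```python
# Minimal sketch of a "deep web" front-end: one static address whose
# content is generated per-query from an underlying database. The movie
# records and function names are hypothetical, for illustration only.
import sqlite3

def build_db():
    """Create an in-memory database standing in for the site's back-end."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE movies (title TEXT, theme TEXT)")
    conn.executemany(
        "INSERT INTO movies VALUES (?, ?)",
        [("Metropolis", "dystopia"), ("Solaris", "memory"),
         ("Gattaca", "dystopia")],
    )
    return conn

def render_page(conn, theme):
    """Each distinct query parameter yields a distinct, dynamically
    generated page that exists only for the duration of the request."""
    rows = conn.execute(
        "SELECT title FROM movies WHERE theme = ? ORDER BY title", (theme,)
    ).fetchall()
    titles = ", ".join(title for (title,) in rows)
    return f"<html><body><h1>Theme: {theme}</h1><p>{titles}</p></body></html>"

conn = build_db()
page = render_page(conn, "dystopia")
```

A crawler that only follows static links never sees these pages: there is no fixed document to index, only a generator that produces a new one for every parameter value.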
It was only a matter of time before someone came up with a search technology to tap these depths (www.completeplanet.com).
The Polyglottal Internet
Written by Sam Vaknin
http://www.everymail.com/

The Internet started off as a purely American phenomenon and seemed to perpetuate the fast-emerging dominance of the English language. A negligible minority of web sites were in other languages. Software applications were chauvinistically ill-prepared (and still are) to deal with anything but English. And the vast majority of net users were residents of the two North American colossi, chiefly the USA.

All this started to change rapidly about two years ago. Early this year, the number of American users of the Net was surpassed by the swelling tide of European and Japanese ones. Non-English web sites are proliferating as well. The advent of the wireless Internet - more widespread outside the USA - is likely to strengthen this unmistakable trend. By 2005, certain analysts expect non-English speakers to make up to 70% of all netizens.

This fragmentation of a hitherto unprecedentedly homogeneous market presents both opportunities and costs. It is much more expensive to market in ten languages than it is in one. Everything - from e-mail to supply chains - has to be re-tooled or customized.

It is easy to translate text in cyberspace. Various automated, web-based, and free applications (such as Babylon or Travlang) cater to the needs of the casual user who doesn't mind the quality of the end result. Virtually every search engine, portal and directory offers access to these or similar services. But straightforward translation is only one kind of solution to the Tower of Babel that the Internet is bound to become.