Friday, January 2, 2009

Deep Web Research for 2009

Each year Marcus Zillman, director of the Virtual Private Library, posts an exhaustive guide of links to "deep web" research resources. You can find it here on LLRX.

No web search engine, whether Google, Yahoo, or any other, can crawl everything on the web. The "deep web," a/k/a the "invisible web" or "hidden web," is the content that search engine crawlers cannot reach: pages generated on the fly from databases, content sitting behind search forms or logins, and sites that block spiders. Zillman estimates that traditional search engines can find about 20 billion web pages, while roughly 1 trillion more lie beyond their reach.
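To see why crawlers miss so much, it helps to remember what a crawler actually does: it starts from known pages and follows hyperlinks. Anything reachable only by typing into a search box or logging in never shows up. Here is a minimal Python sketch of that link-following loop (this is an illustration, not how Google or Yahoo actually crawl; the seed URL and page limit are placeholders):

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags -- the only doors a crawler can open."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, max_pages=10):
    """Breadth-first crawl that follows hyperlinks only.

    Pages reachable solely through form submissions, logins, or
    database queries never enter the frontier -- that unreachable
    content is the "deep web" described above.
    """
    frontier, seen = [seed], set()
    while frontier and len(seen) < max_pages:
        url = frontier.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue  # unreachable or blocked page; skip it
        parser = LinkExtractor()
        parser.feed(html)
        frontier.extend(urljoin(url, link) for link in parser.links)
    return seen

if __name__ == "__main__":
    print(crawl("https://example.com"))

The crawler's whole universe is whatever its link graph touches; a database of court opinions behind a query form contributes exactly zero links to that graph, which is why directories of deep-web resources like Zillman's are compiled by hand.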

If you feel like you're missing something when you're searching the web, you are.
