- Web harvesting
Web harvesting is an implementation of a Web crawler that uses human expertise or machine guidance to direct the crawler to URLs which compose a specialized collection or set of knowledge. Web harvesting can be thought of as focused or directed Web crawling.

Purpose
Web harvesting allows Web-based search and retrieval applications, commonly referred to as search engines, to index content that is pertinent to the audience for which the harvest is intended. Such content is thus virtually integrated and made searchable as a separate Web application. General-purpose search engines, such as Google and Yahoo!, index all possible links they encounter from the origin of their crawl. In contrast, search engines based on Web harvesting index only the URLs to which they are directed. This implementation strategy creates a searchable application that is faster, owing to the reduced size of the index, and that provides higher-quality, more selective results, since the indexed URLs are pre-filtered for the topic or domain of interest. In effect, harvesting makes otherwise isolated islands of information searchable as if they were an integrated whole.

Process
Web harvesting begins by identifying and specifying, as input to a computer program, a list of URLs that define a specialized collection or set of knowledge. The computer program then downloads this list of URLs. Embedded hyperlinks that are encountered can be either followed or ignored, depending on human or machine guidance. A key differentiation between Web harvesting and general-purpose Web crawlers is that for Web harvesting, the crawl depth is defined in advance and the crawl need not recursively follow URLs until all links have been exhausted. The downloaded content is then indexed by the search engine application and offered to information customers as a searchable Web application. Information customers can then access and search the Web application and follow hyperlinks to the original URLs that meet their search criteria.
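The process above can be sketched as a small breadth-first harvester. This is a minimal illustration, not a production crawler: the `fetch` callable, the `should_follow` filter, and the example URLs are all hypothetical stand-ins for the "human or machine guidance" the article describes.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags in a downloaded page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def harvest(seed_urls, fetch, max_depth=1, should_follow=lambda url: True):
    """Download each seed URL, then follow embedded hyperlinks only up to
    max_depth and only when should_follow() approves them - the two points
    where a focused harvest diverges from a general-purpose crawl."""
    seen = set(seed_urls)
    queue = deque((url, 0) for url in seed_urls)
    pages = {}  # url -> raw content, ready to hand to an indexer
    while queue:
        url, depth = queue.popleft()
        html = fetch(url)  # fetch is injected; e.g. an HTTP client
        if html is None:
            continue
        pages[url] = html
        if depth >= max_depth:
            continue  # crawl depth is bounded, links are not exhausted
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            link = urljoin(url, href)
            if link not in seen and should_follow(link):
                seen.add(link)
                queue.append((link, depth + 1))
    return pages
```

With a depth limit of 1 and a filter that admits only one domain, the harvester downloads the seed and its direct in-domain links, ignoring everything further afield.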
Focused web harvesting
Focused web harvesting is similar to targeted web crawling. Instead of letting a general-purpose crawler harvest the web, the mechanism works under certain pre-defined conditions that specify the information to collect [L.T. Handoko, "A new approach for scientific data dissemination in developing countries: a case of Indonesia", Proceedings of the UN/ESA/NASA Workshops on Basic Space Science and the International Heliophysical Year, [http://arxiv.org/abs/0711.2842 arXiv:0711.2842] (2007).] [Z. Akbar and L.T. Handoko, "A Simple Mechanism for Focused Web-harvesting", Proceedings of the International Conference on Advanced Computational Intelligence and Its Applications, [http://arxiv.org/abs/0809.0723 arXiv:0809.0723] (2008).]. In particular, this mechanism is intended to realize an indirect data integration. The first implementation of this kind of data integration can be found at the Indonesian Scientific Index (ISI) [ [http://www.isi.lipi.go.id Indonesian Scientific Index] ], which integrates all information related to science and technology in Indonesia.

References
See also
* Web crawler
* Search engine
Wikimedia Foundation. 2010.