Abstract

Crawlers are software programs that traverse the internet and retrieve web pages by following hyperlinks. Given the enormous number of websites, traditional web crawlers cannot retrieve relevant pages effectively. To address this problem, focused crawlers use semantic web technologies to analyze the semantics of hyperlinks and web documents. A focused crawler is a special-purpose search engine that selectively seeks out pages relevant to a predefined set of topics rather than exploring all regions of the web. The main characteristic of focused crawling is that the crawler does not collect all web pages, but selects and retrieves only the relevant ones. The central problem is therefore how to retrieve the maximal set of relevant, high-quality pages. To address this problem, we have designed a focused crawler that calculates the relevancy of each block in a web page. Blocks are partitioned by the VIPS algorithm, and page relevancy is computed as the sum of the relevancy scores of all blocks on the page. The crawler also calculates a URL score to identify whether a URL is relevant to a specific topic.
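The scoring scheme described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: it assumes blocks have already been extracted (VIPS itself is a visual segmentation algorithm and is not reimplemented here), and it uses a simple term-overlap relevancy measure in place of whatever formula the paper employs. All function names and the threshold-free scores are illustrative assumptions.

```python
from collections import Counter
import math


def block_relevancy(text, topic_terms):
    """Illustrative relevancy of one block: topic-term hits in the
    block, normalized by the block's term-vector length.
    (Assumed formula; the abstract does not specify one.)"""
    counts = Counter(text.lower().split())
    hits = sum(counts[t] for t in topic_terms)
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return hits / norm


def page_relevancy(blocks, topic_terms):
    # Page score = sum of all block relevancy scores, as in the abstract.
    return sum(block_relevancy(b, topic_terms) for b in blocks)


def url_score(url, topic_terms):
    # Crude URL scoring: count topic terms among URL path tokens.
    # (Hypothetical; the paper's URL-scoring method may differ.)
    tokens = url.lower().replace("/", " ").replace("-", " ").split()
    return sum(1 for t in tokens if t in topic_terms)


# Example: two content blocks and one navigation block from a page.
blocks = [
    "solar energy panels efficiency",   # relevant block
    "site navigation links",            # boilerplate block
    "solar cell research results",      # relevant block
]
topic = {"solar", "energy", "cell"}
print(page_relevancy(blocks, topic))
print(url_score("http://example.com/solar-energy/news", topic))
```

A crawler built this way would fetch a page, segment it into blocks, sum the block scores to rank the page, and use the URL score to decide which outgoing links are worth following.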
