Abstract
Search engines and indices were created to help people find information amongst the rapidly increasing number of World Wide Web (WWW) pages. The search engines automatically visit and index pages so that they can return good matches for their users' queries. The way this indexing is done varies from engine to engine, and the detail is usually secret, although the strategy is sometimes made public in general terms. The search engines' aim is to return relevant pages quickly. On the other hand, the author of a Web page has a vested interest in it rating highly, for appropriate queries, on as many search engines as possible. Some authors have an interest in their pages rating well for a great many types of query; spamming has come to the Web. We treat modelling the workings of WWW search engines as an inductive inference problem. A training set of data is collected, consisting of pages returned in response to typical queries. Decision trees are used as the model class for the search engines' selection criteria, although this is not to say that search engines actually contain decision trees. A machine learning program is used to infer a decision tree for each search engine, with an information-theory criterion used to direct the inference and to prevent over-fitting.
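As a rough illustration of the approach described above, the sketch below infers a decision tree from a toy training set of pages labelled by whether a search engine ranked them highly. The feature names, the toy data, and the use of scikit-learn are assumptions made for illustration only; the paper's information-theory criterion is approximated here by an entropy splitting criterion plus a small cost-complexity penalty to limit over-fitting.

```python
# Minimal sketch (not the paper's implementation): fit a decision tree that
# models a search engine's selection criteria from labelled example pages.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-page features: [query terms in title, query-term frequency
# in body, number of inbound links]; label 1 = page was ranked highly.
X = [
    [1, 12, 30], [1,  8,  5], [0,  2,  1], [1, 20, 50],
    [0,  1,  0], [0,  3, 10], [1,  9,  2], [0,  0,  4],
]
y = [1, 1, 0, 1, 0, 0, 1, 0]

# Entropy-based splits and cost-complexity pruning stand in for the
# information-theoretic criterion that directs inference and prevents
# over-fitting in the paper.
tree = DecisionTreeClassifier(criterion="entropy", ccp_alpha=0.01)
tree.fit(X, y)

# Print the inferred selection rules in readable form.
print(export_text(tree, feature_names=["title_match", "term_freq", "inlinks"]))
```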