Abstract

Large sets of Web page links, colinks, or URLs sometimes need to be counted or otherwise summarized by researchers to analyze Web growth or publishing. Computing professionals also use them to evaluate Web sites or optimize search engines. Despite the apparently simple nature of these types of data, many different summarization methods have been used in the past, and some may not have been optimal. This article proposes a generic lexical framework that unifies and extends existing methods through abstract notions of link lists and URL lists. The approach is built upon decomposing URLs into lexical segments, such as domain names, and systematically characterizing the counting options available. In addition, recommendations for choosing a counting method are inferred from a very general set of theoretical research assumptions. The article also offers practical advice for analyzing raw data from search engines.
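The core idea of decomposing URLs into lexical segments and counting at different levels can be sketched as follows. This is an illustrative assumption about what such a framework might look like, not the article's definitive scheme; the segment hierarchy (full URL, directory, domain) and the function names are hypothetical.

```python
from urllib.parse import urlsplit

def lexical_segments(url):
    """Decompose a URL into nested lexical levels: full URL, directory, domain.
    The three-level hierarchy here is an illustrative assumption, not the
    article's exact framework."""
    parts = urlsplit(url)
    domain = parts.netloc.lower()
    # Directory = domain plus the path up to (and including) the last slash.
    directory = domain + parts.path.rsplit("/", 1)[0] + "/"
    return {"url": url, "directory": directory, "domain": domain}

def count_unique(urls, level):
    """Count distinct values in a URL list at the chosen lexical level,
    e.g. counting one hit per domain rather than per page."""
    return len({lexical_segments(u)[level] for u in urls})

urls = [
    "http://example.org/a/page1.html",
    "http://example.org/a/page2.html",
    "http://example.org/b/page1.html",
    "http://other.example.com/index.html",
]
print(count_unique(urls, "url"))        # 4 distinct URLs
print(count_unique(urls, "directory"))  # 3 distinct directories
print(count_unique(urls, "domain"))     # 2 distinct domains
```

Choosing the counting level changes the result for the same raw data, which is why, as the abstract notes, different summarization methods can give different pictures of the same set of links.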
