Abstract

An empirical investigation of the role of documents in relevance judgments is reported. Abstracts previously judged relevant, partially relevant, and nonrelevant to each of 61 questions were compared to see whether textual differences could be found that might reasonably account for the rating differences. The comparison yielded fairly clear-cut characterizations of relevant and of partially relevant abstracts in each case. These characterizations were found to be expressible largely as meaningful co-occurrences of terms closely related to the question. It is suggested that the textual bases of user choices may be more understandable than has been supposed.
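The abstract does not specify how the term co-occurrences were identified, so the following is only a minimal illustrative sketch of the general idea: counting, per sentence, how often terms drawn from the question appear together in an abstract. The function name, tokenization rule, and toy texts below are all hypothetical, not the authors' procedure.

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and keep only alphabetic tokens."""
    return re.findall(r"[a-z]+", text.lower())

def question_term_cooccurrence(question, abstract):
    """Count sentence-level co-occurrences of question terms in an abstract.

    Returns a Counter mapping each pair of question terms to the number
    of sentences in the abstract in which both terms appear.
    """
    q_terms = set(tokenize(question))
    pair_counts = Counter()
    for sentence in re.split(r"[.!?]+", abstract):
        present = sorted(q_terms & set(tokenize(sentence)))
        for i, a in enumerate(present):
            for b in present[i + 1:]:
                pair_counts[(a, b)] += 1
    return pair_counts

# Hypothetical toy example: a relevant abstract should show more
# co-occurrences of question terms than a nonrelevant one.
question = "indexing methods for chemical literature"
relevant = ("Indexing of the chemical literature is surveyed. "
            "Several indexing methods for chemical abstracts are compared.")
nonrelevant = ("A general survey of library administration. "
               "Budgeting methods are discussed.")

print(question_term_cooccurrence(question, relevant))
print(question_term_cooccurrence(question, nonrelevant))
```

On the toy data, the relevant abstract produces co-occurring pairs such as ("chemical", "indexing") while the nonrelevant one produces none, which is the kind of textual contrast the study reports between rating categories.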
