Abstract

This article attempts to develop a measure of what we call “judicial responsiveness,” which, roughly stated, concerns the extent to which judicial opinions reflect the arguments made by the parties in their briefs. We applied two methods of automated content analysis to the briefs and opinion in each of a set of 30 cases decided by the First Circuit, measuring similarity on the basis of word counts and citation percentages. We then compared the results of those methods to the results of manual coding of the same documents. The existence of statistically significant correlations among the measures supports the conclusion that our automated methodologies serve as a valid means of assessing responsiveness. We argue that these investigations can inform a range of scholarly debates, including efforts to assess judicial quality and the influence of ideology on judging, as well as debates over specific components of the judicial process, such as the use of unpublished opinions.
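The abstract does not specify the exact computations behind the two automated measures, so the following is only a minimal sketch of one plausible implementation: cosine similarity over word-frequency vectors for the word-count measure, the share of an opinion's citations that also appear in a brief for the citation measure, and a Pearson correlation for the comparison with manual codes. All function names, the tokenizing pattern, and the sample numbers are hypothetical, not taken from the article.

```python
import re
import statistics
from collections import Counter
from math import sqrt

def word_count_similarity(brief: str, opinion: str) -> float:
    """Cosine similarity between the word-frequency vectors of two texts."""
    a = Counter(re.findall(r"[a-z']+", brief.lower()))
    b = Counter(re.findall(r"[a-z']+", opinion.lower()))
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def citation_overlap(brief_cites: set[str], opinion_cites: set[str]) -> float:
    """Percentage of the opinion's citations that also appear in the brief."""
    if not opinion_cites:
        return 0.0
    return 100.0 * len(brief_cites & opinion_cites) / len(opinion_cites)

# Validation step: correlate automated scores with manual codes for the
# same cases; a statistically significant Pearson r would support the
# claim that the automated measures track hand-coded responsiveness.
auto_scores = [0.62, 0.48, 0.71, 0.55]   # hypothetical automated similarity scores
manual_codes = [4, 3, 5, 3]              # hypothetical manual responsiveness ratings
r = statistics.correlation(auto_scores, manual_codes)
print(f"Pearson r = {r:.2f}")
```

In this sketch, each of the 30 cases would yield one automated score per measure and one manual code, and the validity claim rests on the correlations among those per-case series.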
