Abstract

The aim of this study was to evaluate the intraobserver and interobserver reliability, and the main influencing factors, of the King, Lenke, and PUMC classification systems for idiopathic scoliosis (IS) using radiographs that had not been premeasured. Fifty-six patients with IS (9 male, 47 female; mean age 15.4 years, range 11-18 years) underwent preoperative radiography, including standing full-length posteroanterior and lateral films of the spine and left and right supine side-bending radiographs. Four orthopedic surgeons independently read the films, measured the curves, and classified each case according to the King, Lenke, and PUMC systems. Two weeks later, the radiographs were reordered and the four surgeons repeated the measurements and classifications. The average percentage of intraobserver and interobserver agreement was calculated, and Kappa coefficients were used to determine intraobserver and interobserver reliability.

The King classification demonstrated fair interobserver reliability and excellent intraobserver reliability: mean interobserver agreement was 65.8% (Kappa = 0.542), and mean intraobserver agreement was 82.6% (Kappa = 0.767). The main source of disagreement was distinguishing King type II from type III; another was the assessment of King type V. The complete Lenke classification demonstrated poor reliability, with mean interobserver agreement of 50.0% (Kappa = 0.438) and mean intraobserver agreement of 47.0% (Kappa = 0.402). When examined separately, the three components of the Lenke classification showed fair interobserver and intraobserver reliability. The main sources of disagreement were judging whether a structural upper thoracic curve was present and assigning the sagittal thoracic modifier. The PUMC classification demonstrated excellent intraobserver and interobserver reliability at the type level, with mean interobserver agreement of 87.8% (Kappa = 0.757) and mean intraobserver agreement of 92.9% (Kappa = 0.958). Its subtypes demonstrated fair agreement, with mean interobserver agreement of 70.2% (Kappa = 0.629) and mean intraobserver agreement of 74.1% (Kappa = 0.674). The main sources of disagreement were the definition of a curve and variability in Cobb angle measurement.

The King and PUMC classification systems showed higher interobserver and intraobserver reliability than the Lenke classification. The reliability of all three systems is influenced by multiple factors: judgment of an upper thoracic curve, variability in Cobb angle measurement, and the relationship of the center sacral vertical line to the curve apex are common sources of disagreement across the three classifications.
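
The abstract reports agreement as a percentage plus a Kappa coefficient but does not show the computation. As a point of reference, the sketch below illustrates how Cohen's kappa and percent agreement between two observers can be computed; the ratings are hypothetical and the code is an illustration only, not the study's actual data or analysis software.

```python
from collections import Counter


def percent_agreement(ratings_a, ratings_b):
    """Proportion of cases on which the two observers assign the same type."""
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)


def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two observers."""
    n = len(ratings_a)
    p_o = percent_agreement(ratings_a, ratings_b)  # observed agreement
    count_a = Counter(ratings_a)
    count_b = Counter(ratings_b)
    categories = set(count_a) | set(count_b)
    # Expected agreement if each observer classified at random
    # according to their own marginal frequencies.
    p_e = sum((count_a[c] / n) * (count_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)


# Hypothetical King-type assignments for 10 curves by two observers.
obs1 = ["II", "III", "II", "IV", "V", "II", "III", "I", "II", "III"]
obs2 = ["II", "II", "II", "IV", "V", "III", "III", "I", "II", "III"]

print(f"Agreement: {percent_agreement(obs1, obs2):.1%}")  # 80.0%
print(f"Kappa:     {cohens_kappa(obs1, obs2):.3f}")       # ~0.722
```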
