Abstract

The objective of this pilot study was to assess the inter-rater reliability of a newly developed A3 Quality Assessment (QA) rubric for evaluating the quality of completed Plan-Do-Study-Act (PDSA) projects that used an A3 Thinking Tool (A3) for problem solving. One A3 was independently reviewed by 7 PDSA experts using the rubric's 5 main levels and 22 sublevels. The evaluations were compared, coded for agreement, and analyzed statistically. Fleiss' kappa was computed to test inter-rater reliability among the experts across the 5 main levels and 22 sublevels. Preliminary results suggest that the A3 QA rubric meets reliability criteria, with a moderate level of agreement beyond chance alone (κ = 0.44), and that it can be used to measure progress in problem-solving abilities developed through PDSA cycles. Additional verification testing is needed across multiple A3 improvement projects completed using a variety of A3 Thinking templates.
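
As a minimal illustration of the agreement statistic reported above, the sketch below computes Fleiss' kappa from a matrix of rating counts (items × categories, with a fixed number of raters per item). The function name and the toy ratings matrix are illustrative assumptions, not the study's data.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (N items x k categories) matrix of rating
    counts, assuming the same number of raters scored every item."""
    counts = np.asarray(counts, dtype=float)
    N = counts.shape[0]
    n = counts[0].sum()                        # raters per item
    p_j = counts.sum(axis=0) / (N * n)         # overall category proportions
    # Per-item observed agreement among the n raters
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()                         # mean observed agreement
    P_e = np.square(p_j).sum()                 # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical toy data: 5 rubric items, each scored by 7 raters
# into one of 3 quality categories (each row sums to 7 raters).
ratings = [[7, 0, 0],
           [6, 1, 0],
           [5, 2, 0],
           [2, 5, 0],
           [1, 5, 1]]
print(f"{fleiss_kappa(ratings):.2f}")  # 0.30 for this toy matrix
```

For real analyses, statsmodels offers an equivalent implementation (statsmodels.stats.inter_rater.fleiss_kappa) that can be used to cross-check a hand-rolled version like this one.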
