DIGEST is a validated, open-source method to grade the severity of pharyngeal dysphagia from the modified barium swallow (MBS) study. Dissemination and implementation of DIGEST are rising, making it critical to understand reliability and facilitators of accurate implementation among users. The aim was to assess reliability of the tool among speech-language pathology (SLP) raters practicing at multiple sites before and after review of a DIGEST training manual and to evaluate confidence in DIGEST use pre- and post-training. Thirty-two SLPs from 5 sites participated in a blinded longitudinal DIGEST rating study. Raters were provided a standardized training set of MBS studies (n = 19). Initial SLP ratings (round 1, R1) were followed by a 2-4 week break before raters rated a re-keyed MBS set (round 2, R2). A minimum 4-8 week wash-out period then preceded self-study of the DIGEST training manual, which was followed by a final rating (round 3, R3) and a post-manual survey. Baseline reliability (R1) of overall DIGEST was on average k = 0.70, reflecting agreement in the substantial range. Seventy-five percent of raters (24/32) demonstrated reliability ≥ 0.61 in the substantial to almost perfect range prior to training. Inter-rater reliability significantly improved from R1 to R3 after review of the DIGEST manual, with the largest change in DIGEST-Efficiency (mean change: overall DIGEST k = 0.04, p = 0.009; DIGEST-Safety k = 0.07, p = 0.03; DIGEST-Efficiency k = 0.14, p = 0.009). Although DIGEST reliability at baseline was adequate in the majority of raters, self-study of the DIGEST training manual significantly improved inter-rater reliability and rater confidence using the DIGEST method, particularly when assigning the DIGEST-Efficiency grade. These early data show promise that provider training may aid fidelity of DIGEST implementation among SLP clinical users with varying DIGEST experience.