Speaking skill assessment is attracting growing interest in the field of language assessment. The literature has highlighted rater reliability in scoring speaking performances as a persistent challenge, owing to the subjective nature of human judgment. This study explored the influence of rater training on rater reliability in the assessment of a spoken task. A qualitative research design was used, and semi-structured interviews were employed to collect the data. A total of 21 secondary school teachers participated in the study; they were raters trained to assess an oral English interaction test. Data were analyzed using thematic content analysis, which yielded three main categories: the importance of rater training, the effects of rater training on rater reliability, and the improvement of rater training. The results show that rater training is essential before any rating takes place, and that its effects include, among others, maintaining rating consistency, familiarizing raters with the test task, and clarifying the grading criteria. Suggestions for improving rater training sessions concerned the length, frequency, and quality of the training.