This paper develops a Machine Learning (ML) model to classify the sentiment of review aspects in peer review text. Reviewers use review aspects as paper quality indicators, such as motivation, originality, clarity, soundness, substance, replicability, meaningful comparison, and summary, during the review process. The proposed model addresses critiques of the existing peer review process, including the high volume of submitted papers, the limited pool of reviewers, and reviewer bias. This paper uses citation functions, which represent an author's motivation for citing previous research, as the main predictor. Specifically, the predictor comprises citing-sentence features representing the citation-function scheme, regular-sentence features representing the citation-function scheme for non-citation sentences, and reference-based features representing the source of the citation. This paper utilizes a dataset of papers from the International Conference on Learning Representations (ICLR) 2017-2020, which includes sentiment labels (positive or negative) for all review aspects. Our experiments combining XGBoost, oversampling, and hyper-parameter optimization revealed that not all review aspects can be effectively estimated by the ML model. The highest result was achieved when predicting Replicability sentiment, with 97.74% accuracy. The model also achieved accuracies of 94.03% for Motivation and 93.93% for Meaningful Comparison. However, it was less effective on Originality and Substance (85.21% and 79.94%) and performed worst on Clarity and Soundness, with accuracies of 61.22% and 61.11%, respectively. The combined predictor was the best for 5 review aspects, while the other 2 aspects were most effectively estimated by the regular-sentence and reference-based predictors.
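The pipeline summarized above (a gradient-boosted classifier trained on class-balanced data with a hyper-parameter search) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the features and labels are synthetic stand-ins for the citation-function features, the oversampling is naive random duplication of the minority class (a stand-in for methods such as SMOTE), and scikit-learn's `GradientBoostingClassifier` is used as a stand-in for XGBoost so the example is self-contained.

```python
# Sketch of the abstract's pipeline: boosting + oversampling + hyper-parameter
# search. All data, feature shapes, and the parameter grid are illustrative.
import numpy as np
from collections import Counter
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # stand-in for citation-function features
y = (rng.random(200) < 0.8).astype(int)  # imbalanced sentiment labels (1=positive)

# Naive random oversampling: duplicate minority-class rows until balanced.
counts = Counter(y)
minority = min(counts, key=counts.get)
need = max(counts.values()) - counts[minority]
extra = rng.choice(np.flatnonzero(y == minority), size=need, replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])

# Small hyper-parameter grid search with cross-validated accuracy.
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    {"max_depth": [2, 3], "n_estimators": [50, 100]},
    cv=3,
    scoring="accuracy",
)
grid.fit(X_bal, y_bal)
```

In the paper itself, one such model would be fitted per review aspect, which is why per-aspect accuracies (e.g. Replicability vs. Clarity) can differ so widely.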