With the expanding influence of the Internet, an increasing number of people rely on viewer reviews when deciding whether to watch a movie or TV series. However, the prevalence of manipulated "navy" (paid shill) reviews, which companies employ to inflate their products' reputations, poses a significant challenge. While numerous studies have analyzed film and drama reviews, a notable gap remains in distinguishing genuine audience feedback from deceptive reviews. This article evaluates a model's capacity to differentiate authentic audience comments from navy reviews, examines the difficulties the model encounters when assessing comments, and highlights the disparities between model-generated judgments and human assessments. The article first collects a large set of comments of different types, annotates these data, and then uses them to train and fine-tune a BERT model. Finally, the results are analyzed to identify the causes of the observed behavior. The model classifies comments with an accuracy of approximately 71.08%, which is relatively accurate and stable. However, the model still struggles with comments containing emojis and emoticons, and additional data is needed to support judgments on comments about different movies or dramas. The dataset itself also has limitations: because the data is manually annotated, the labels may be influenced by the annotators' subjectivity, which can lead to inaccurate judgments.
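The abstract does not include the training setup itself; the following is a minimal sketch of how such a BERT fine-tuning pipeline for binary review classification could look, using the Hugging Face transformers and datasets libraries. The file name reviews.csv, its text/label columns, the bert-base-chinese checkpoint, and all hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch: fine-tuning BERT to classify comments as genuine (0)
# or navy/shill (1). Assumptions (not from the paper): a hypothetical
# annotated file "reviews.csv" with "text" and "label" columns, and
# bert-base-chinese as the backbone.
import numpy as np
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

dataset = load_dataset("csv", data_files="reviews.csv")["train"]
dataset = dataset.train_test_split(test_size=0.2, seed=42)

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")

def tokenize(batch):
    # Truncate/pad each comment to BERT's fixed-length input format.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=2)  # 0 = genuine, 1 = navy review

def compute_metrics(eval_pred):
    # Accuracy, the metric the paper reports (~71.08%).
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": float((preds == labels).mean())}

args = TrainingArguments(
    output_dir="bert-navy-review",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
    eval_strategy="epoch",  # "evaluation_strategy" in older versions
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["test"],
                  compute_metrics=compute_metrics)
trainer.train()
print(trainer.evaluate())
```

Evaluating on a held-out split, as in this sketch, is what yields an accuracy figure comparable to the one reported; the paper's actual split, checkpoint, and hyperparameters may differ.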