Abstract

Machine learning research on detecting political bias in news articles has boomed in recent years, yet there is still no widely accepted, effective word embedding technique for bias detection. This paper explores the connection between political bias and word embedding models and identifies factors to consider when selecting and developing word embedding techniques. To this end, three classic word embedding models are compared experimentally. We observe that contextual meaning loses effectiveness in this task; in contrast, word frequency is the most relevant feature for predicting media bias. The experiments also reveal a distinctive accuracy distribution produced by Random Forest: it shows a clear accuracy advantage when predicting left-biased articles, which may relate to features not yet discovered.
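The pipeline the abstract describes, frequency-based text features feeding a Random Forest bias classifier, can be sketched as follows. This is a minimal illustration under assumed details, not the paper's actual implementation: the tiny corpus, its "left"/"right" labels, and the TF-IDF representation standing in for a frequency-based embedding are all placeholders.

```python
# Illustrative sketch (not the paper's pipeline): frequency-based features
# (TF-IDF here) feeding a Random Forest classifier for media-bias prediction.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder corpus and labels; a real study would use labeled news articles.
articles = [
    "the administration expands social programs and labor protections",
    "unions rally for higher wages and universal healthcare coverage",
    "tax cuts and deregulation will unleash private sector growth",
    "the market, not government mandates, should set energy policy",
]
labels = ["left", "left", "right", "right"]

# Frequency-based representation: each article becomes a TF-IDF vector.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(articles)

# Random Forest classifier, the model family named in the abstract.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, labels)

# Predict the bias label of an unseen sentence (illustrative only).
test = vectorizer.transform(["workers demand stronger labor protections"])
print(clf.predict(test)[0])
```

A per-class accuracy breakdown on held-out data (e.g. via `sklearn.metrics.classification_report`) is what would surface the kind of left-biased accuracy advantage the abstract reports.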
