Abstract

Because manual sentiment analysis of written texts is labor-intensive, techniques for automatic sentiment analysis have been widely studied. However, compared to manual sentiment analysis, the accuracy of automatic systems ranges only from low to medium. In this study, we address a sentiment analysis problem through crowdsourcing, a problem-solving approach that uses the cognitive power of people to achieve specific computational goals. Crowdsourcing is implemented through an online platform, which can be either paid or volunteer-based. We deploy crowdsourcing applications on both types of platform to classify teaching evaluation comments from students, and we compare the results produced by crowdsourcing, manual sentiment analysis, and an existing automatic sentiment analysis system. Our findings show that crowdsourced sentiment analysis on both paid and volunteer-based platforms is considerably more accurate than the automatic sentiment analysis algorithm but still falls short of the accuracy of the manual method. Future work could explore whether increasing the size of the crowd improves accuracy.
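
As a minimal, hypothetical sketch of the kind of comparison described above (the comments, labels, and three-worker crowd below are illustrative assumptions, not the study's data), crowd answers can be aggregated by majority vote and each method's output scored against manually assigned gold labels:

```python
from collections import Counter

def majority_vote(labels):
    """Return the most common label among the crowd's answers for one item."""
    return Counter(labels).most_common(1)[0][0]

def accuracy(predicted, gold):
    """Fraction of items where the predicted label matches the gold label."""
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

# Three hypothetical crowd workers label each teaching-evaluation comment.
crowd_answers = [
    ["positive", "positive", "neutral"],   # comment 1
    ["negative", "negative", "negative"],  # comment 2
    ["neutral", "positive", "neutral"],    # comment 3
]
gold = ["positive", "negative", "neutral"]       # manual (gold) labels
automatic = ["neutral", "negative", "positive"]  # automatic system output

crowd = [majority_vote(answers) for answers in crowd_answers]
print("crowd accuracy:    ", accuracy(crowd, gold))      # 1.0
print("automatic accuracy:", accuracy(automatic, gold))  # ~0.33
```

Under this scheme, enlarging the crowd per item simply lengthens each list in crowd_answers, which is one way the effect of crowd size on accuracy could be explored.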
