Human-bot symbiosis and misinformation propagation: Exploring the mechanisms of social bot participation from the perspective of emotional contagion
- Research Article
- 10.2196/39984
- Dec 16, 2022
- JMIR Rehabilitation and Assistive Technologies
Background: A critical gap in our knowledge about social media is whether we can alleviate accessibility barriers and challenges for individuals with traumatic brain injury (TBI) to improve their social participation and health. To do this, we need real-time information about these barriers and challenges to design appropriate aids. Objective: The aim of this study was to characterize the ways people with TBI accessed and used social media websites and to understand the unique challenges they faced. Methods: We invited 8 adults with moderate to severe TBI to log onto their own Facebook page and use it as they regularly would while thinking aloud. Their comments were recorded and transcribed for qualitative analysis. We first analyzed participants' utterances using a priori coding based on a framework proposed by Meshi et al to classify adults' motives for accessing social media. We next used an open coding method to understand the challenges that people with TBI faced while using Facebook. In other words, we analyzed participants' needs for using Facebook and then identified Facebook features that made it challenging for them to meet those needs. Results: Participants used all categories of codes in the framework by Meshi et al and provided detailed feedback about the Facebook user interface. A priori coding revealed 2 dimensions that characterized participants' Facebook use: whether they were active or passive about posting and self-disclosure on Facebook, and their familiarity and fluency in using Facebook. The open coding analysis revealed 6 types of challenges reported by participants with TBI: difficulty with language production and interpretation, attention and information overload, perceptions of negativity and emotional contagion, insufficient guidance for using Facebook, concerns about web-based scams and fraud, and general accessibility concerns. Conclusions: Results showed that individuals with TBI used Facebook for the same reasons typical adults do, suggesting that it can help increase social communication and reduce isolation and loneliness. Participants also identified barriers, and we propose modifications that could improve access for individuals with brain injury. On the basis of the identified functions and challenges, we conclude by proposing design ideas for social media support tools that can promote more active use of social media sites by adults with TBI.
- Research Article
- 10.1016/j.ipm.2022.103197
- Nov 25, 2022
- Information Processing & Management
Network distribution and sentiment interaction: Information diffusion mechanisms between social bots and human users on social media
- Research Article
- 10.1177/14604582251381175
- Oct 1, 2025
- Health Informatics Journal
Objectives: During the early phase of the COVID-19 outbreak, misinformation spread rapidly, hindering effective health communication and fueling xenophobic violence. The politicization of health issues, along with the manipulation by social bots and astroturfing accounts, posed significant challenges. This study aims to investigate how misinformation spreads through social media, involving malicious actors like trolls and bots, and explores emotional contagion during public health crises. Methods: Using a computational methodology that combines semantic modeling, social network analysis, bot identification, emotion analysis, and time series analysis, the study analyzed over 700,000 tweets from February to July 2020. Results: The findings reveal that inauthentic actors amplified negative emotions, particularly among news and political actors, while positive emotions were less prominent. Astroturfing accounts acted as key nodes, perpetuating negative emotional contagion. Conclusion: This study provides a framework for monitoring emotional responses in public health crises, with findings applicable beyond COVID-19 to other public health emergencies.
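To make the methodology concrete, here is a minimal Python sketch of one step such a pipeline implies: scoring tweet emotion and aggregating it into a daily time series split by bot and human authors. The toy keyword lexicon, the `is_bot` flag, and all function names are illustrative assumptions; the study itself relies on dedicated emotion-analysis and bot-identification tools.

```python
# Sketch: score tweet emotions and aggregate them into a daily time series,
# split by bot vs. human authors. Lexicon and flags are illustrative only.
from collections import defaultdict
from datetime import date

# Toy emotion lexicon (assumption; real work would use a validated resource).
NEGATIVE = {"fear", "hoax", "blame", "outrage"}
POSITIVE = {"hope", "recover", "support", "thanks"}

def emotion_score(text: str) -> int:
    """Crude polarity: +1 per positive keyword, -1 per negative keyword."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def daily_emotion_series(tweets):
    """Mean emotion per day, separately for bot and human accounts.

    `tweets` is an iterable of (day: date, text: str, is_bot: bool) tuples.
    Returns {(day, is_bot): mean_score}.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for day, text, is_bot in tweets:
        key = (day, is_bot)
        sums[key] += emotion_score(text)
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

if __name__ == "__main__":
    sample = [
        (date(2020, 2, 1), "fear and blame everywhere, total hoax", True),
        (date(2020, 2, 1), "hope we recover soon, thanks to nurses", False),
    ]
    print(daily_emotion_series(sample))
```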
- Research Article
- 10.32628/ijsrst52411222
- Apr 2, 2024
- International Journal of Scientific Research in Science and Technology
Malicious social bots generate fake tweets and automate their social relationships, either by pretending to be followers or by creating multiple fake accounts with malicious activities. Moreover, malicious social bots post shortened malicious URLs in tweets to redirect the requests of online social networking participants to malicious servers. Hence, distinguishing malicious social bots from legitimate users is one of the most important tasks in the Twitter network. To detect malicious social bots, extracting URL-based features (such as URL redirection, frequency of shared URLs, and spam content in URLs) consumes less time than extracting social graph-based features (which rely on the social interactions of users). Furthermore, malicious social bots cannot easily manipulate URL redirection chains. In this article, a learning automata-based malicious social bot detection (LA-MSBD) algorithm is proposed by integrating a trust computation model with URL-based features for identifying trustworthy participants (users) in the Twitter network. The proposed trust computation model contains two parameters, namely, direct trust and indirect trust. The direct trust is derived from Bayes' theorem, and the indirect trust is derived from the Dempster-Shafer theory (DST) to determine the trustworthiness of each participant accurately. Finally, the user tweet data are visualized as bar and pie charts, and experimental results show improved performance of the system.
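Since this abstract singles out URL redirection as a cheap, hard-to-manipulate feature, the following hedged Python sketch shows how the length of a redirection chain might be measured. The function name is hypothetical, not from the paper; it relies on the `requests` library's redirect history.

```python
# Sketch of one URL-based feature the abstract mentions: the length of a
# shortened URL's redirection chain. Function name is illustrative only.
import requests

def redirect_chain_length(url: str, timeout: float = 5.0) -> int:
    """Follow a URL and count the redirects taken to reach the final page."""
    try:
        resp = requests.get(url, timeout=timeout, allow_redirects=True)
        return len(resp.history)  # each redirect hop appears once in .history
    except requests.RequestException:
        return -1  # unreachable; treat as a missing feature value

# Example (commented out because it needs network access); a URL shortener
# typically yields a chain length of at least 1:
# print(redirect_chain_length("https://bit.ly/example"))
```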
- Research Article
- 10.1109/tcss.2020.2992223
- Aug 1, 2020
- IEEE Transactions on Computational Social Systems
Malicious social bots generate fake tweets and automate their social relationships, either by pretending to be followers or by creating multiple fake accounts with malicious activities. Moreover, malicious social bots post shortened malicious URLs in tweets to redirect the requests of online social networking participants to malicious servers. Hence, distinguishing malicious social bots from legitimate users is one of the most important tasks in the Twitter network. To detect malicious social bots, extracting URL-based features (such as URL redirection, frequency of shared URLs, and spam content in URLs) consumes less time than extracting social graph-based features (which rely on the social interactions of users). Furthermore, malicious social bots cannot easily manipulate URL redirection chains. In this article, a learning automata-based malicious social bot detection (LA-MSBD) algorithm is proposed by integrating a trust computation model with URL-based features to identify trustworthy participants (users) in the Twitter network. The proposed trust computation model contains two parameters, namely, direct trust and indirect trust. The direct trust is derived from Bayes' theorem, and the indirect trust is derived from the Dempster-Shafer theory (DST) to determine the trustworthiness of each participant accurately. Experiments have been performed on two Twitter data sets, and the results illustrate that the proposed algorithm improves precision, recall, F-measure, and accuracy compared with existing MSBD approaches.
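As a rough illustration of the two trust parameters described above, here is a minimal Python sketch: direct trust via Bayes' theorem over a user's URL features, and indirect trust via a simple Dempster-Shafer combination of two neighbors' opinions. All probabilities, mass assignments, and function names are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of combining "direct" and "indirect" trust as the LA-MSBD description
# suggests. Numbers and function names are illustrative assumptions.

def direct_trust(p_feat_given_legit, p_feat_given_bot, prior_legit=0.5):
    """Bayes' theorem: P(legitimate | observed URL features)."""
    num = p_feat_given_legit * prior_legit
    den = num + p_feat_given_bot * (1.0 - prior_legit)
    return num / den if den else 0.0

def dempster_combine(m1, m2):
    """Combine two mass functions over {legit, bot} plus an 'unknown' mass.

    Each m is a tuple (mass_legit, mass_bot, mass_unknown) summing to 1.
    """
    conflict = m1[0] * m2[1] + m1[1] * m2[0]
    k = 1.0 - conflict  # normalization after discarding conflicting mass
    legit = (m1[0] * m2[0] + m1[0] * m2[2] + m1[2] * m2[0]) / k
    bot = (m1[1] * m2[1] + m1[1] * m2[2] + m1[2] * m2[1]) / k
    return legit, bot, max(0.0, 1.0 - legit - bot)

# Example: fuse two neighbors' opinions, then average with direct trust.
dt = direct_trust(p_feat_given_legit=0.8, p_feat_given_bot=0.3)
it = dempster_combine((0.6, 0.2, 0.2), (0.5, 0.3, 0.2))[0]
print(f"direct={dt:.2f}, indirect={it:.2f}, overall={(dt + it) / 2:.2f}")
```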