Abstract
To properly capture the complexity of influence propagation phenomena in real-world contexts, such as viral marketing and misinformation spread, information diffusion models should fulfill a number of requirements. These include accounting for dynamic aspects of the propagation (e.g., latency, time horizon), dealing with multiple cascades of information that might occur competitively, accounting for the contingencies that lead a user to switch between alternative information items, and leveraging trust/distrust in users’ relationships and its influence on their decisions. To the best of our knowledge, no diffusion model unifying all of the above requirements has been developed so far. In this work, we address this challenge and propose a novel class of diffusion models, inspired by the classic linear threshold model, which are designed to deal with trust-aware, non-competitive as well as competitive time-varying propagation scenarios. Our theoretical inspection of the proposed models unveils important relations with existing linear threshold models for which it is known whether monotonicity and submodularity hold for the corresponding activation function. We also propose strategies for selecting the initial spreaders of the propagation process, for both non-competitive and competitive influence propagation tasks, whose goal is to mimic contexts of misinformation spread. Our extensive experimental evaluation, conducted on publicly available networks and including comparison with competing methods, provides evidence of the meaningfulness and uniqueness of our models.
Highlights
Since its early applications in viral marketing, the development of information diffusion models and their embedding in optimization methods has provided effective support for a variety of influence propagation problems. Due to the shrinking boundary between real and online/virtual social life (Bessi et al. 2014), along with the unlimited misinformation spots over the Web, e.g., fake news (Kumar et al. 2016; Kim et al. 2018), deciding whether a source of information is reliable or not has become a delicate task.
We propose a novel class of diffusion models, named Friend-Foe Dynamic Linear Threshold Models (F2DLT)
The reduction of the non-competitive Friend-Foe Dynamic Linear Threshold Model (nC-F2DLT) to the Linear Threshold Model with quiescence time (LTqt) is meaningful, since the two models are proved to be equivalent, as we report in the following theoretical result.
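To make the LTqt side of this equivalence concrete, the following is a minimal, simplified sketch of a discrete-time linear threshold diffusion with quiescence time: a node whose in-neighbors' cumulative influence weight reaches its threshold does not start spreading immediately, but only after a per-node quiescence delay. The function name, data layout, and scheduling details below are our own illustrative assumptions, not the paper's exact formulation.

```python
def simulate_ltqt(graph, weights, thresholds, qtime, seeds, horizon):
    """Discrete-time Linear Threshold model with quiescence time (sketch).

    graph:      dict node -> list of in-neighbors (potential influencers)
    weights:    dict (u, v) -> influence weight of u on v (per node, sums <= 1)
    thresholds: dict node -> activation threshold in [0, 1]
    qtime:      dict node -> quiescence delay before a triggered node
                becomes an active spreader
    seeds:      set of initially active nodes
    horizon:    number of discrete time steps to simulate
    Returns the set of active nodes at the end of the horizon.
    """
    active = set(seeds)     # nodes already spreading influence
    quiescent = {}          # triggered node -> time step at which it turns active
    for t in range(1, horizon + 1):
        # promote quiescent nodes whose delay has elapsed
        for v, t_on in list(quiescent.items()):
            if t >= t_on:
                active.add(v)
                del quiescent[v]
        # threshold check for nodes that are neither active nor quiescent
        for v in graph:
            if v in active or v in quiescent:
                continue
            mass = sum(weights.get((u, v), 0.0)
                       for u in graph[v] if u in active)
            if mass >= thresholds[v]:
                # triggered: enters quiescence, spreads only after the delay
                quiescent[v] = t + qtime[v]
    return active
```

With all quiescence delays set to zero, this collapses to the classic linear threshold process; the delays only postpone when a triggered node starts exerting influence, which is what distinguishes LTqt from the plain LT model.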
Summary
A few studies on the spreading of fake news and hoaxes (Metaxas and Mustafaraj 2010; Mustafaraj and Metaxas 2017) argued that people become more likely to be deceived by a spreading information item as assessing the reliability and trustworthiness of the source generating and/or sharing that item becomes harder. Within this view, one side effect is the tendency of users to access information from like-minded sources (Koutra et al. 2015) and, at the same time, to be trapped inside information bubbles, favoring network polarization phenomena (Garimella et al. 2017). There is no guarantee about the effectiveness of such corrections: on the contrary, highlighting a fake news item may even produce a backfire effect, i.e., drive users’ attention towards the misleading piece of information.