How Far Can Online Trust Go? Analysing False Information Sharing Behaviour on Social Media

Similar Papers
  • Front Matter
  • Cited by 29
  • 10.1016/j.jtcvs.2015.10.016
Using social media effectively in a surgical practice
  • Oct 22, 2015
  • The Journal of Thoracic and Cardiovascular Surgery
  • Mara B Antonoff

  • Research Article
  • Cited by 39
  • 10.29252/jech.7.2.65
Global Challenge of Health Communication: Infodemia in the Coronavirus Disease (COVID-19) Pandemic
  • May 1, 2020
  • Journal of Education and Community Health
  • Hamid Allahverdipour

  • Research Article
  • Cited by 18
  • 10.3390/su15064753
Systematic Review of Misinformation in Social and Online Media for the Development of an Analytical Framework for Agri-Food Sector
  • Mar 7, 2023
  • Sustainability
  • Ataharul Chowdhury + 3 more

The ubiquity of social and online media networks and the credulousness of online communities, coupled with limited accountability, pose a risk of mis-, dis-, and mal-information (mis-dis-malinformation): the intentional or unintentional spread of false, misleading, or genuine information used to cause harm, related to agri-food topics. However, agri-food mis-dis-malinformation in social media and online digital agricultural communities of practice (CoPs) remains underexplored. There is also a limited theoretical and conceptual foundation for understanding mis-dis-malinformation topics in the agri-food sectors. The study aims to review the mis-dis-malinformation literature and offer a framework to help understand agri-food mis-dis-malinformation in social media and online CoPs. This paper performs a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). The review shows that many disciplines, particularly communication, social media studies, computer science, health studies, political science and journalism, are increasingly engaging with mis-dis-malinformation research. The review generates a framework based on six thematic categories for holistically understanding and assessing agri-food mis-dis-malinformation in social and online media communities: mis-dis-malinformation characterization, source identification, diffusion mechanisms, stakeholder impacts, detection tactics, and mis-dis-malinformation curtailment and countermeasures. The paper contributes to advancing the emerging literature on ‘controversial topics’, ‘misinformation’, and ‘information integrity’ in virtual agri-food advisory services. This is the first attempt to systematically analyze and incorporate experience from diverse fields of mis-dis-malinformation research, and it will inform future scholarly work on facilitating conversations and advisory efforts in the agri-food sector.

  • Research Article
  • Cited by 15
  • 10.3389/fdata.2021.623794
Social Media Big Data: The Good, The Bad, and the Ugly (Un)truths.
  • Jun 1, 2021
  • Frontiers in Big Data
  • Alton M K Chew + 1 more

  • Research Article
  • Cited by 2
  • 10.1016/j.pedhc.2022.04.002
Knowledge, Attitudes, and Beliefs of Pediatric Health Care Workers: Understanding the Response to COVID-19.
  • Apr 6, 2022
  • Journal of Pediatric Health Care
  • Emily Gibbons + 5 more

  • Research Article
  • Cited by 152
  • 10.1111/acem.14036
Information and Disinformation: Social Media in the COVID-19 Crisis.
  • Jun 24, 2020
  • Academic Emergency Medicine
  • Michael Gottlieb + 1 more

The novel coronavirus disease of 2019 (COVID-19) is a global pandemic with over 4.7 million cases and 316,000 deaths worldwide [1]. Social media, defined as “electronic communication through which users create online communities to share information, ideas, personal messages, and other content” [2], has played an important role during the COVID-19 pandemic. In fact, social media usage among the public has previously been demonstrated to increase significantly during natural disasters and crises [3]. However, it is important to consider the benefits and limitations of this medium.

  • Research Article
  • Cited by 1
  • 10.5204/mcj.2892
Conspiracy
  • Mar 17, 2022
  • M/C Journal
  • Naomi Smith + 3 more

  • Research Article
  • Cited by 11
  • 10.2196/51127
Perceptions of Health Misinformation on Social Media: Cross-Sectional Survey Study.
  • Apr 30, 2024
  • JMIR infodemiology
  • Anna Gaysynsky + 3 more

Health misinformation on social media can negatively affect knowledge, attitudes, and behaviors, undermining clinical care and public health efforts. Therefore, it is vital to better understand the public's experience with health misinformation on social media. The goal of this analysis was to examine perceptions of the social media information environment and identify associations between health misinformation perceptions and health communication behaviors among US adults.

Analyses used data from the 2022 Health Information National Trends Survey (N=6252). Weighted unadjusted proportions described respondents' perceptions of the amount of false or misleading health information on social media ("perceived misinformation amount") and how difficult it is to discern true from false information on social media ("perceived discernment difficulty"). Weighted multivariable logistic regressions examined (1) associations of sociodemographic characteristics and subjective literacy measures with misinformation perceptions and (2) relationships between misinformation perceptions and health communication behaviors (ie, sharing personal or general health information on social media and using social media information in health decisions or in discussions with health care providers).

Over one-third of social media users (35.61%) perceived high levels of health misinformation, and approximately two-thirds (66.56%) reported high perceived discernment difficulty. Odds of perceiving high amounts of misinformation were lower among non-Hispanic Black/African American (adjusted odds ratio [aOR] 0.407, 95% CI 0.282-0.587) and Hispanic (aOR 0.610, 95% CI 0.449-0.831) individuals compared to White individuals. Those with lower subjective health literacy were less likely to report high perceived misinformation amount (aOR 0.602, 95% CI 0.374-0.970), whereas those with lower subjective digital literacy were more likely to report high perceived misinformation amount (aOR 1.775, 95% CI 1.400-2.251). Compared to White individuals, Hispanic individuals had lower odds of reporting high discernment difficulty (aOR 0.620, 95% CI 0.462-0.831). Those with lower subjective digital literacy (aOR 1.873, 95% CI 1.478-2.374) or numeracy (aOR 1.465, 95% CI 1.047-2.049) were more likely to report high discernment difficulty. High perceived misinformation amount was associated with lower odds of sharing general health information on social media (aOR 0.742, 95% CI 0.568-0.968), using social media information to make health decisions (aOR 0.273, 95% CI 0.156-0.479), and using social media information in discussions with health care providers (aOR 0.460, 95% CI 0.323-0.655). High perceived discernment difficulty was associated with higher odds of using social media information in health decisions (aOR 1.724, 95% CI 1.208-2.460) and health care provider discussions (aOR 1.389, 95% CI 1.035-1.864).

Perceptions of high health misinformation prevalence and discernment difficulty are widespread among social media users, and each has unique associations with sociodemographic characteristics, literacy, and health communication behaviors. These insights can help inform future health communication interventions.
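As a concrete illustration of the kind of weighted logistic regression described above, here is a minimal Python sketch. The file name, column names, and model specification are hypothetical stand-ins, not the actual HINTS 2022 variables, and a full design-based analysis would also use the survey's replicate weights:

```python
# Minimal sketch of a survey-weighted logistic regression in the spirit of the
# analysis above. All file and column names are hypothetical placeholders,
# not the real HINTS 2022 variable names.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

hints = pd.read_csv("hints2022.csv")  # assumed local extract of the survey data

# Odds of perceiving a high amount of health misinformation on social media,
# from sociodemographics and subjective literacy measures.
model = smf.glm(
    "perceived_misinfo_high ~ C(race_ethnicity) + C(health_literacy_low)"
    " + C(digital_literacy_low)",
    data=hints,
    family=sm.families.Binomial(),
    # Survey weight; correct design-based standard errors would additionally
    # require the survey's jackknife replicate weights.
    freq_weights=np.asarray(hints["person_weight"]),
).fit()

print(np.exp(model.params))  # exponentiated coefficients = adjusted odds ratios
```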

  • Book Chapter
  • Cited by 1
  • 10.1093/acrefore/9780190625979.013.957
Social Media and Finance
  • Nov 20, 2024
  • J Anthony Cookson + 2 more

Social media platforms have become a common source of information about financial markets. There is a growing academic literature on financial social media—mainly focused on StockTwits, Twitter, Seeking Alpha, and Reddit’s WallStreetBets—and the related areas of social economics and information economics. Social media has been publicly linked to major market events, such as the GameStop short squeeze and the run on Silicon Valley Bank, further intensifying interest in the topic. Two major strands of research on financial social media have emerged: research that uses social media as a lens to examine more general economic behavior and research that directly investigates the effects of social media on financial markets. The social media as a lens approach uses social media platforms to study investor beliefs, investment approaches, and information acquisition that are typically impossible to observe in other data environments. The second strand of social media research investigates social media’s effects on the information environment of financial markets. While most research projects combine these two approaches, they often lean towards one or the other. Studies using social media as a lens have provided insights into questions about how investors update their beliefs, the measurement and consequences of investor disagreement, and even how social networks transmit information. Research on the effects of social media on financial markets has examined whether there is useful information in social media for financial markets, how social media affects information efficiency, and whether social media disseminates incomplete or false information. Many open questions remain, especially as new social media formats emerge.

  • Research Article
  • Cited by 9
  • 10.5204/mcj.2862
Burden of the Beast
  • Mar 17, 2022
  • M/C Journal
  • Bronwyn Fredericks + 6 more

  • Research Article
  • Cited by 45
  • 10.2200/s00496ed1v01y201304dmk007
Provenance Data in Social Media
  • May 24, 2013
  • Synthesis Lectures on Data Mining and Knowledge Discovery
  • Geoffrey Barbier + 3 more

Social media shatters the barrier to communicate anytime anywhere for people of all walks of life. The publicly available, virtually free information in social media poses a new challenge to consumers who have to discern whether a piece of information published in social media is reliable. For example, it can be difficult to understand the motivations behind a statement passed from one user to another, without knowing the person who originated the message. Additionally, false information can be propagated through social media, resulting in embarrassment or irreversible damages. Provenance data associated with a social media statement can help dispel rumors, clarify opinions, and confirm facts. However, provenance data about social media statements is not readily available to users today. Currently, providing this data to users requires changing the social media infrastructure or offering subscription services. Taking advantage of social media features, research in this nascent field spearheads the search for a way to provide provenance data to social media users, thus leveraging social media itself by mining it for the provenance data. Searching for provenance data reveals an interesting problem space requiring the development and application of new metrics in order to provide meaningful provenance data to social media users. This lecture reviews the current research on information provenance, explores exciting research opportunities to address pressing needs, and shows how data mining can enable a social media user to make informed judgements about statements published in social media.

  • Research Article
  • 10.5530/ijper.52.2.23
Social Media: A Double Edged Sword for Accessing Health Care Information
  • Apr 1, 2018
  • Indian Journal of Pharmaceutical Education and Research
  • Asha Sasidharan + 1 more

Technology has improved the speed and ease of communication across various platforms, and recent improvements in social media (SM) have dramatically changed the way communication happens. The success of Facebook, Twitter, LinkedIn, WhatsApp, and blog sites illustrates how people use different media to communicate through digital technology and social media. Access to information, including healthcare information, via SM is on the rise owing to its ease and accessibility. Because people can share their views and experiences, voice opinions, and share knowledge on different aspects of healthcare using social media, patients, caregivers and health professionals have started accessing health-related information more frequently through SM. However, SM is inherently prone to misinterpretation, misinformation and misrepresentation when not used properly. Patients are increasingly turning to the internet and online sources for health information. Poor quality of information, damage to professional image, breaches of patient privacy, violation of patient-healthcare professional boundaries, licensing issues and legal issues have been identified by various researchers as challenges to the use of SM for health information. Because advances in technology have made it easy to access information using SM, it can be a double-edged sword. This narrative review highlights the advantages and pitfalls of using SM to access healthcare information. The objective of the current study is to understand the use of SM for accessing healthcare information, based on empirical evidence drawn from studies carried out in both developed and developing countries. Studies show that correct, timely and authentic information on various aspects of healthcare should be shared through SM, whereas incorrect and misleading health information on SM can be harmful, if not catastrophic. Governments and apex bodies involved in healthcare regulation should both regulate SM use for health information and maintain their own SM presence.

  • Research Article
  • Cited by 23
  • 10.1108/qmr-02-2017-0063
Lies, brands and social media
  • Jan 14, 2019
  • Qualitative Market Research: An International Journal
  • Tracy Tuten + 1 more

Purpose: The purpose of this study is to illustrate the influence of media coverage and sentiment about brands on user-generated content amplification and opinions expressed in social media.

Design/methodology/approach: This study used a mixed-method approach, using a brand situation as a case example, including sentiment analysis of social media conversations and sentiment analysis of media coverage. This study tracks the diffusion of a false claim about the brand via online media coverage, subsequent spreading of the false claim via social media and the resulting impact on sentiment toward the brand.

Findings: The findings illustrate the influence of digital mass communication sources on the subsequent spread of information about a brand via social media channels and the impact of the social spread of false claims on brand sentiment. This study illustrates the value of social media listening and sentiment analysis for brands as an ongoing business practice.

Research limitations/implications: While it has long been known that media coverage is in part subsequently diffused through individual sharing, this study reveals the potential for media sentiment to influence sentiment toward a brand. It also illustrates the potential harm brands face when false information is spread via media coverage and subsequently through social media posts and conversations. How brands can most effectively correct false brand beliefs and recover from negative sentiment related to false claims is an area for future research.

Practical implications: This study suggests that brands are wise to use sentiment analysis as part of their evaluation of earned media coverage from news organizations and to use social listening as an alert system and sentiment analysis to assess impact on attitudes toward the brand. These steps should become part of a brand’s social media management process.

Social implications: Media are presumed to be impartial reporters of news and information. However, this study illustrated that the sentiment expressed in media coverage about a brand can be measured and diffused beyond the publications’ initial reach via social media. Advertising positioned as news must be labeled as “advertorial” to ensure that those exposed to the message understand that the message is not impartial. News organizations may inadvertently publish false claims and relay information with sentiment that is then carried via social media along with the information itself. Negative information about a brand may be more sensational and, thus, prone to social sharing, no matter how well the findings are researched or sourced.

Originality/value: The value of the study is its illustration of how false information and media sentiment spread via social media can ultimately affect consumer sentiment and attitude toward the brand. This study also explains the research process for social scraping and sentiment analysis.
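A first pass at the kind of sentiment analysis this study describes can be prototyped with an off-the-shelf lexicon model. The sketch below uses NLTK's VADER on invented example posts; the paper does not say which tooling the authors used, so this choice is an assumption for illustration only:

```python
# Minimal brand-sentiment sketch using NLTK's VADER lexicon model.
# The example posts are invented; the paper does not specify its tooling.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

posts = [
    "Just read that the brand's product contains banned chemicals. Never buying again!",
    "The recall story was debunked, the product is perfectly safe.",
]

for post in posts:
    # polarity_scores returns neg/neu/pos proportions plus a compound
    # score in [-1, 1]; tracking the compound score over time is a simple
    # proxy for brand sentiment drift.
    scores = sia.polarity_scores(post)
    print(f"{scores['compound']:+.2f}  {post}")
```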

  • Peer Review Report
  • 10.7554/elife.85767.sa0
Editor's evaluation: Changing the incentive structure of social media platforms to halt the spread of misinformation
  • Feb 19, 2023
  • Claire M Gillan

Abstract

The powerful allure of social media platforms has been attributed to the human need for social rewards. Here, we demonstrate that the spread of misinformation on such platforms is facilitated by existing social ‘carrots’ (e.g., ‘likes’) and ‘sticks’ (e.g., ‘dislikes’) that are dissociated from the veracity of the information shared. Testing 951 participants over six experiments, we show that a slight change to the incentive structure of social media platforms, such that social rewards and punishments are contingent on information veracity, produces a considerable increase in the discernment of shared information: namely, an increase in the proportion of true information shared relative to the proportion of false information shared. Computational modeling (i.e., drift-diffusion models) revealed the underlying mechanism of this effect is associated with an increase in the weight participants assign to evidence consistent with discerning behavior. The results offer evidence for an intervention that could be adopted to reduce misinformation spread, which in turn could reduce violence, vaccine hesitancy and political polarization, without reducing engagement.

Editor's evaluation

This important paper outlines a novel method for reducing the spread of misinformation on social media platforms. A compelling series of experiments and replications support the main claims, which could have significant real-world societal impact.

eLife digest

In recent years, the amount of untrue information, or ‘misinformation’, shared online has increased rapidly. This can have profound effects on society and has been linked to violence, political extremism, and resistance to climate action. One reason for the spread of misinformation is the lack of incentives for users to share true content and avoid sharing false content. People tend to select actions that they believe will lead to positive feedback (‘carrots’) and try to avoid actions that lead to negative feedback (‘sticks’). On most social media sites, these carrots and sticks come in the form of ‘like’ and ‘dislike’ reactions, respectively. Stories that users think will attract ‘likes’ are most likely to be shared with other users. However, because the number of likes a post receives is not representative of how accurate it is, users share information even if they suspect it may not be accurate. As a result, misinformation can spread rapidly.

Measures aimed at slowing the spread of misinformation have been introduced to some social media sites, such as removing a few virulent spreaders of falsities and flagging misleading content. However, measures that change the incentive structure of sites so that positive and negative feedback is based on the trustworthiness of the information have not yet been explored. To test this approach, Globig et al. set up a simulated social media site that included ‘trust’ and ‘distrust’ buttons, as well as the usual ‘like’ and ‘dislike’ options. The site featured up to one hundred news stories, half of which were untrue. More than 900 participants viewed the news posts and could react using the new buttons as well as repost the stories.
The experiment showed that participants used the ‘trust’ and ‘distrust’ buttons to differentiate between true and false posts more than the other options. As a result, to receive more ‘trust’ responses and fewer ‘distrust’ responses from other users, participants were more likely to repost true stories than false ones. This led to a large reduction in the amount of misinformation being spread. Computational modeling revealed that the participants were paying more attention to how reliable a news story appeared to be when deciding whether to repost it. Globig et al. showed that adding buttons to highlight the trustworthiness of posts on social media sites reduces the spread of misinformation, without reducing user engagement. This measure could be easily incorporated into existing social media sites and could have a positive impact on issues that are often fuelled by misinformation, such as vaccine hesitancy and resistance to climate action.

Introduction

In recent years, the spread of misinformation online has skyrocketed, increasing polarization, racism and resistance to climate action and vaccines (Barreto et al., 2021; Rapp and Salovich, 2018; Tsfati et al., 2020; Van Bavel et al., 2021). Existing measures to halt the spread, such as flagging posts, have had limited impact (e.g., Chan et al., 2017; Grady et al., 2021; Lees et al., 2022). We hypothesize that the spread of misinformation on social media platforms is facilitated by the existing incentive structure of those platforms, where social rewards (in the form of ‘likes’ and ‘shares’) are dissociated from the veracity of the information (Figure 1a, left panel; Sharot, 2021). The rationale for this hypothesis is as follows: users can discern true from false content to a reasonable degree (Allen et al., 2021; Pennycook and Rand, 2019). Yet, because misinformation generates no fewer retweets and ‘likes’ than reliable information (Lazer et al., 2018; Vosoughi et al., 2018), and online behavior conforms to a reinforcement-learning model by which users are reacting to social rewards (Lindström et al., 2021; Brady et al., 2021), users have little reason to use their discernment to guide their sharing behavior. Thus, people will share misinformation even when they do not trust it (Pennycook et al., 2021; Ren et al., 2021).

Figure 1. Theoretical framework. (a) The current incentive structure (blue) is such that the veracity of shared information is dissociated from rewards (‘carrots’) and punishments (‘sticks’). That is, true information and misinformation may lead to a roughly equal number of rewards and punishments. An optimal incentive structure (orange) is such that sharing true information is rewarded with more ‘carrots’ than sharing misinformation, which in turn is penalized with more ‘sticks’ than true information. To create an optimal environment, an intervention is needed by which the number of rewards and punishments is directly associated with the veracity of information. (b) We test one such possible intervention (Experiment 1). In particular, we allow people to engage with posts using ‘trust’ reaction buttons and ‘distrust’ reaction buttons (orange). The rationale is that they will use these reactions to discern true from false information more so than ‘like’ and ‘dislike’ reaction buttons.
(c) As a result, to obtain a greater number of ‘trust’ carrots and a smaller number of ‘distrust’ sticks in response to a post, people in the optimal environment (orange) will share more true information than misinformation compared to those in the suboptimal environment, which includes no feedback at all (gray), and those in an environment where the association between veracity of information and number of carrots and sticks is weak (blue). This second step is tested in Experiments 2 and 3.

To halt the spread, an incentive structure is needed where ‘carrots’ and ‘sticks’ are directly associated with accuracy (Figure 1a, right panel; Sharot, 2021). Such a system will work with the natural human tendency to select actions that lead to the greatest reward and avoid those that lead to punishment (Skinner, 1966). Scientists have tested different strategies to reduce the spread of misinformation, including educating people about fake news (Guess et al., 2020; Traberg et al., 2022), using a prompt to direct attention to accuracy (Kozyreva et al., 2020; Pennycook et al., 2021; Pennycook et al., 2020) and limiting how widely a post can be shared (Jackson et al., 2022). Surprisingly, possible interventions in which the incentive structure of social media platforms is altered to reduce misinformation have been overlooked. Here, we test the efficacy of such a structure by slightly altering the engagement options offered to users. Specifically, we add an option to react to posts using ‘trust’ and ‘distrust’ buttons (Figure 1b). We selected these buttons because trust by definition is related to veracity – it is defined as ‘a firm belief in the reliability, truth, ability, or strength of someone or something’ (Oxford Dictionary). We hypothesize that (1) people will use the ‘trust’ and ‘distrust’ buttons to discern true from false information more so than the commonly existing engagement options (such as a ‘like’ button; Figure 1b, top panel). By ‘discernment’ we mean that true posts will receive more ‘trust’ reactions than ‘distrust’ reactions and vice versa for false posts. This will create an environment in which rewards (‘trusts’) and punishments (‘distrusts’) are more directly associated with the veracity of information. Thus, (2) when exposed to this environment, users will start sharing more true information and less false information in order to obtain more ‘trust’ carrots and fewer ‘distrust’ sticks (Figure 1b, bottom panel). The new feedback options could both reinforce user behavior that generates trustworthy material and signal to others that the post is dependable. We also test environments in which participants receive only ‘trusts’ (a different number of trusts for different posts) or only ‘distrusts’ (a different number of distrusts for different posts) to examine if and how the impact of small vs. large positive feedback (‘trust’) on discernment differs from the impact of small vs. large negative feedback (‘distrust’). It has been proposed that the possibility of reward is more likely to reinforce action than the possibility of punishment, while the possibility of punishment is more likely to reinforce inaction (Guitart-Masip et al., 2014; Guitart-Masip et al., 2011; Guitart-Masip et al., 2012). This may translate to a large number of ‘trusts’ selectively increasing sharing of true information without decreasing sharing of misinformation, and vice versa for a large number of ‘distrusts’.
Further, being mindful of potential differences in sharing behavior across political parties (e.g., Grinberg et al., 2019; Guess et al., 2020), we test participants from both sides of the political divide. To that end, over six experiments 951 participants engaged in simulated social media platforms where they encountered true and false information. In Experiment 1, we examined whether participants would use ‘trust’ and ‘distrust’ buttons to discern true from false information more so than existing ‘like’ and ‘dislike’ buttons (Figure 1b; replication: Experiment 4). In Experiments 2 and 3, we tested whether new groups of participants would share more true than false information in social media platforms that introduce real ‘trust’ and ‘distrust’ feedback from other participants (Figure 1c; replication: Experiments 5 and 6). The intuition is that ‘trust’ and ‘distrust’ reactions will naturally be used to indicate veracity and thus provide a reward structure contingent on accuracy, thereby reducing the sharing of misinformation and generating a healthier information ecosystem. Using computational modeling we provide insights into the specific mechanism by which our intervention improves sharing discernment.

Results

Participants use ‘trust’ and ‘distrust’ buttons to discern true from false information (Experiment 1)

In a first step, we examined whether participants used ‘trust’ and ‘distrust’ reactions to discern true from false information more so than ‘like’ and ‘dislike’ reactions. In Experiment 1, participants saw 100 news posts taken from the fact-checking website Politifact (https://www.politifact.com; see Figure 2). Half of the posts were true, and half were false. Participants were given the opportunity to react to each post using ‘like’, ‘dislike’, ‘trust’, and ‘distrust’ reaction buttons. They could select as many buttons as they wished or none at all (skip). Five participants were excluded according to pre-determined criteria (see Materials and methods for details). Thus, 106 participants (52 Democrats, 54 Republicans, Mage = 40.745, SDage ± 14.479; female = 54, male = 52) were included in the analysis. See Figure 2—figure supplement 1 for full instructions.

Figure 2. Task (Experiment 1). Participants observed a series of 100 posts in random order (50 true, 50 false). Their task was to react using one or more of the ‘like’, ‘dislike’, ‘trust’, or ‘distrust’ buttons or to skip. The task was self-paced.

We then examined whether participants used the different reaction buttons to discern true from false information. Discernment was calculated as follows, such that high numbers always indicate better discernment:

For ‘like’: Discernment = Prop_likes(true) − Prop_likes(false)
For ‘dislike’: Discernment = Prop_dislikes(false) − Prop_dislikes(true)
For ‘trust’: Discernment = Prop_trusts(true) − Prop_trusts(false)
For ‘distrust’: Discernment = Prop_distrusts(false) − Prop_distrusts(true)

with Prop indicating the proportion of posts, out of all true posts or out of all false posts as indicated, that received the given reaction.
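These discernment scores are simple proportion differences and are easy to compute directly. A minimal pandas sketch for the ‘trust’ case follows; the one-row-per-(participant, post) data layout and the column names are assumptions for illustration, not the authors' code:

```python
# Sketch of the per-participant discernment score for the 'trust' reaction.
# Assumed layout: one row per (participant, post) with a boolean 'post_is_true'
# and a 0/1 'reacted_trust' column. Not the authors' actual analysis code.
import pandas as pd

def trust_discernment(df: pd.DataFrame) -> pd.Series:
    """Prop_trusts(true) - Prop_trusts(false), computed per participant."""
    by_truth = (
        df.groupby(["participant", "post_is_true"])["reacted_trust"]
        .mean()      # proportion of posts of each veracity that got a 'trust'
        .unstack()   # columns: False, True
    )
    return by_truth[True] - by_truth[False]

# Example with two participants and four posts each:
log = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "post_is_true": [True, True, False, False] * 2,
    "reacted_trust": [1, 1, 0, 0, 1, 0, 1, 1],
})
print(trust_discernment(log))  # participant 1: 1.0; participant 2: 0.5 - 1.0 = -0.5
```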
These discernment scores were calculated for each participant separately and then entered into a 2 (type of reaction: ‘trust’ and ‘distrust’/‘like’ and ‘dislike’) by 2 (valence of reaction: positive, i.e., ‘like’, ‘trust’/negative, i.e., ‘dislike’, ‘distrust’) within-subject analysis of variance (ANOVA). Political orientation was also added as a between-subject factor (Republican/Democrat), allowing for an interaction of political orientation and type of reaction to assess whether participants with differing political beliefs used the reaction buttons in different ways.

The results reveal that participants’ use of ‘(Dis)Trust’ reaction buttons (M = 0.127; SE = 0.007) was more discerning than their use of ‘(Dis)Like’ reaction buttons (M = 0.047; SE = 0.005; F(1,104) = 95.832, p < 0.001, partial η2 = 0.48, Figure 3). We additionally observed an effect of valence (F(1,105) = 17.33, p < 0.001, partial η2 = 0.14), with negatively valenced reaction buttons (‘dislike’ and ‘distrust’, M = 0.095, SE = 0.007) being used in a more discerning manner than positively valenced reaction buttons (‘like’ and ‘trust’, M = 0.087, SE = 0.005), and an effect of political orientation (F(1,104) = 25.262, p < 0.001, partial η2 = 0.2), with Democrats (M = 0.115, SE = 0.007) being more discerning than Republicans (M = 0.06, SE = 0.005). There was also an interaction of type of reaction and political orientation (F(1,104) = 24.084, p < 0.001, partial η2 = 0.19), characterized by Democrats showing greater discernment than Republicans in their use of ‘(Dis)Trust’ reaction buttons (F(1,104) = 33.592, p < 0.001, partial η2 = 0.24), but not in their use of ‘(Dis)Like’ reaction buttons (F(1,104) = 2.255, p = 0.136, partial η2 = 0.02). Importantly, however, both Democrats (F(1,51) = 93.376, p < 0.001, partial η2 = 0.65) and Republicans (F(1,53) = 14.715, p < 0.001, partial η2 = 0.22) used the ‘(Dis)Trust’ reaction buttons in a more discerning manner than the ‘(Dis)Like’ reaction buttons.

Figure 3. Participants use ‘trust’ and ‘distrust’ reactions to discern true from false information. ‘Distrust’ and ‘trust’ reactions were used in a more discerning manner than ‘like’ and ‘dislike’ reactions. The y axis shows discernment between true and false posts: for positive reactions (‘likes’ and ‘trusts’), discernment is equal to the proportion of positive reactions for true information minus false information, and vice versa for negative reactions (‘dislikes’ and ‘distrusts’). The x axis shows reaction options. Data are plotted as box plots for each reaction button, in which horizontal lines indicate median values, boxes indicate the 25/75% interquartile range and whiskers indicate 1.5 × the interquartile range. The diamond indicates the mean discernment per reaction. Individuals’ mean discernment data are shown separately as gray dots. Symbols above each box plot indicate significance level compared to 0 using a t-test. N = 106, ***p < 0.001.

One-sample t-tests against zero further revealed that participants’ use of each reaction button discerned true from false information (‘like’: M = 0.06; SE = 0.006; t(105) = 10.483, p < 0.001, Cohen’s d = 1.018; ‘trust’: M = 0.099; SE = 0.01; t(105) = 9.744, p < 0.001, Cohen’s d = 0.946; ‘dislike’: M = 0.034; SE = 0.007; t(105) = 4.76, p < 0.001, Cohen’s d = 0.462; ‘distrust’: M = 0.156; SE = 0.01; t(105) = 15.872, p < 0.001, Cohen’s d = 1.542). Thus far, we have shown that participants use ‘(Dis)Trust’ reaction buttons in a more discerning manner than ‘(Dis)Like’ reaction buttons. As social media platforms care about overall engagement, not only its quality, we also examined how frequently participants used the different reaction buttons.
An ANOVA with the same specifications as above was conducted, but this time submitting frequency of reaction as the dependent variable. We found that participants used ‘(Dis)Trust’ reaction buttons more often than ‘(Dis)Like’ reaction buttons (percentage use of reaction out of all trials: ‘trust’: M = 28.057%; ‘distrust’: M = 34.085%; ‘like’: M = 18.604%; ‘dislike’: M = 23.745%; F(1,104) = 36.672, p < 0.001, partial η2 = 0.26). In addition, negative reaction buttons (‘distrust’ and ‘dislike’: M = 28.915%, SE = 1.177) were used more frequently than positive reaction buttons (‘trust’ and ‘like’: M = 23.33%, SE = 1.133; F(1,105) = 16.96, p < 0.001, partial η2 = 0.07). No other effect was significant. Interestingly, we also found that participants who skipped more posts were less discerning (R = −0.414, p < 0.001). Together, the results show that the new reaction options increase engagement. The results hold when controlling for demographics, when not including political orientation in the analysis, and when allowing for an interaction between type of reaction and valence (see Supplementary files 1 and 2). The results also replicate in an independent sample (Experiment 4, see Materials and methods for details; and Figure 3—figure supplement 1, Supplementary file 3).

‘Trust’ and ‘distrust’ incentives improve discernment in sharing behavior (Experiment 2)

Thus far, we have shown that participants use ‘(Dis)Trust’ reaction buttons in a more discerning manner than ‘(Dis)Like’ reaction buttons. Thus, an environment which offers ‘(Dis)Trust’ feedback is one where the number of ‘carrots’ (in the form of ‘trusts’) and the number of ‘sticks’ (in the form of ‘distrusts’) are directly associated with the veracity of the posts. It then follows that submitting participants to such an environment will increase their sharing of true information (to receive ‘trusts’) and reduce their sharing of misinformation (to avoid ‘distrusts’). To test this, we ran a second experiment. A new group of participants (N = 320) were recruited to engage in a simulated social media platform. They observed the same 100 posts (50 true, 50 false) shown to the participants in Experiment 1, but this time, instead of reacting to the posts, they could either share each post or skip it (see Figure 4 and Figure 4—figure supplements 1 and 2 for full instructions). They were told that if they chose to share a post, it would be shared to their feed such that the other participants would be able to see the post and would then be able to react to it in real time (feedback). Depending on the environment participants were in, which varied between subjects, they could receive feedback in the form of the number of users who (1) ‘disliked’, (2) ‘liked’, (3) ‘distrusted’, or (4) ‘trusted’ their posts. We also included a (5) baseline condition, in which participants received no feedback. If participants selected to skip, they would observe a white screen asking them to click continue. Data of 32 participants were not analyzed according to pre-determined criteria (see Materials and methods for details). Two hundred and eighty-eight participants (146 Democrats, 142 Republicans, Mage = 38.073, SDage ± 13.683; female = 147, male = 141) were included in the analysis (see Materials and methods for details). Sharing discernment was calculated as:

Discernment = Prop_reposts(true) − Prop_reposts(false)
Figure 4. Task (Experiments 2 and 3). In Experiment 2, on each of 100 trials participants observed a post (50 true, 50 false) and then chose whether to share it or skip. They were told that if they chose to share a post, it would be shared to their feed such that other participants would be able to see the post and react to it in real time (feedback). Depending on the environment participants were in, they could observe the number of (1) ‘dislikes’, (2) ‘likes’, (3) ‘distrusts’, or (4) ‘trusts’ their post received; in the baseline environment, participants received no feedback. Experiment 3 was similar to Experiment 2, except that participants could observe the number of both ‘dislikes’ and ‘likes’, both ‘distrusts’ and ‘trusts’, or no feedback, and the posts were ones to which Republicans and Democrats had reacted similarly with the ‘trust’ button in Experiment 1.

Discernment was calculated for each participant by subtracting the proportion of sharing false information from the proportion of sharing true information; greater discernment indicates greater sharing of true than false information. These scores were entered into a between-subject ANOVA with type of feedback (‘trust’ and ‘distrust’/‘like’ and ‘dislike’), valence of feedback (positive/negative) and political orientation as factors. We observed an effect of type of feedback, such that participants shared more true information relative to false information in the ‘(Dis)Trust’ environments than in the ‘(Dis)Like’ environments or the no-feedback environment (Figure 5). In particular, participants who received ‘trust’ feedback were more discerning in their sharing behavior than those who received ‘like’ feedback, and participants who received ‘distrust’ feedback were more discerning than those who received ‘dislike’ feedback. We further observed an interaction between type of feedback and political orientation: Democrats were more discerning than Republicans overall, but this gap was smaller among those who received ‘(Dis)Trust’ feedback than among those who received ‘(Dis)Like’ feedback or no feedback. No other effects were significant. Engagement, measured as the proportion of posts shared out of all trials, did not differ across environments (see Supplementary file 4 for results for true and false posts separately).

Figure 5. Changing the incentive structure of social media environments improves discernment of information shared. (a) Participants in an environment where ‘(Dis)Trust’ feedback was introduced shared more true information relative to false information than participants in an environment where only ‘(Dis)Like’ feedback was available, or no feedback at all. The y axis shows discernment, that is, the proportion of true posts shared minus the proportion of false posts shared; the x axis shows the group environment (type of feedback). (b) This held across the posts included in the experiment: plotting each post’s discernment in the ‘(Dis)Trust’ environment (y axis) against its discernment in the ‘(Dis)Like’ environment (x axis) shows the posts above the identity line, indicating that discernment is greater in an environment that offers ‘(Dis)Trust’ feedback. (c) Experiment 3 showed the same results as Experiment 2. Data are plotted as box plots, in which horizontal lines indicate median values, boxes indicate the 25/75% interquartile range and whiskers indicate 1.5 × the interquartile range. The diamond indicates mean discernment. Individuals’ mean discernment data are shown separately as gray dots. Symbols above each box plot indicate significance level compared to 0 using a t-test. ***p < 0.001.

Results hold when controlling for demographics, when not including political orientation in the analysis, and when allowing for an interaction between type of reaction and valence (see Supplementary files 5 and 6). Results replicate in an independent sample (see Materials and methods for details, and Figure 5—figure supplement 1).

Note that participants in Experiment 2 chose whether to share content or skip, and then observed the reactions of other participants to their post (presented as if in real time, though drawn from the reactions of participants in Experiment 1). Each participant in Experiment 2 observed only one type of feedback. Consider those who observed only ‘distrusts’: for any given post, true or false, some users will distrust it, but true posts will receive fewer ‘distrusts’ than false posts. It is thus the number of ‘distrusts’ per post that carries the signal. Participants sensitive to this signal will post more true posts and fewer false posts; if they were instead sensitive only to the total number of ‘distrusts’ received, they would simply skip on every trial. Participants do not do this, because sharing is in and of itself rewarding. The results therefore indicate that participants respond to the number of ‘distrusts’ per post, not to the total number of ‘distrusts’ over all posts. The same rationale holds for participants who only observed ‘trusts’: they receive more ‘trusts’ for true than for false posts, and it is the number of ‘trusts’ per post that is associated with veracity, leading them to post more true posts and fewer false posts in order to increase the number of ‘trusts’ per post. The same logic applies to ‘likes’ and ‘dislikes’, except that those reactions are less associated with veracity and thus impact discernment less.

The posts included in the experiment covered a range of topics, and the effect of the ‘(Dis)Trust’ environment on discernment was observed regardless of content. Thus far, our results show that changing the incentive structure of social media platforms, by associating the number of ‘carrots’ and ‘sticks’ with information veracity, could be a way to reduce the spread of misinformation. If such feedback improves discernment in sharing, it is possible that it may in turn improve belief accuracy; to test this, we asked participants at the end of the experiment to indicate how accurate they believed each post to be.
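The ‘computational modeling’ mentioned in the abstract above refers to drift-diffusion models, in which noisy evidence accumulates over time until it crosses a decision boundary, and the drift rate captures how heavily evidence is weighted. A toy simulation (invented parameters and labels, not the authors' model code) illustrates how a larger drift rate yields more ‘discerning’ boundary crossings:

```python
# Toy drift-diffusion model (DDM): noisy evidence accumulates with drift rate v
# until it crosses +a (treated here as the discerning choice) or -a.
# Parameters and labels are illustrative assumptions, not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

def ddm_trial(v: float, a: float = 1.0, dt: float = 0.001, sigma: float = 1.0):
    """Simulate one trial; return (hit_upper_boundary, decision_time)."""
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x >= a, t

# A larger drift rate (more weight assigned to evidence consistent with
# discerning behavior, as the paper reports for the '(Dis)Trust'
# environments) produces a higher proportion of discerning choices:
for v in (0.5, 2.0):
    hits = [ddm_trial(v)[0] for _ in range(2000)]
    print(f"v = {v}: P(discerning choice) ~ {np.mean(hits):.2f}")
```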

  • Research Article
  • Cited by 1
  • 10.2196/56761
Age Variation Among US Adults’ Social Media Experiences and Beliefs About Who Is Responsible for Reducing Health-Related Falsehoods: Secondary Analysis of a National Survey
  • Nov 27, 2024
  • JMIR Aging
  • Prathyusha Galinkala + 2 more

Background: We live in a digital age where social media has become an essential part of people’s lives. It is also one of the leading platforms responsible for spreading health-related falsehoods. This study explores who adults of different age groups perceive as responsible for reducing health-related falsehoods on social media.

Objective: Despite growing concern over older adults’ exposure to false health information on social media, little research examines their beliefs on how to address the problem. This study examines how the age of US adults is associated with their reported experiences with health-related falsehoods on social media and their beliefs about who should be tasked with reducing such falsehoods.

Methods: This study is a secondary analysis of data from the 2022 Health Information National Trends Survey, a nationally representative survey of US adults (18 years and older). Multivariable logistic regressions estimated how a respondent’s age was associated with their self-reported social media use, their difficulty detecting health-related falsehoods on social media, their discussion of health information found on social media with medical providers, and their beliefs regarding who should be responsible for reducing health-related falsehoods on social media. Regression estimates were adjusted for respondents’ sociodemographic and health characteristics.

Results: Daily social media use decreased with respondents’ age. Respondents aged 50-64 years (b=0.515, P=.01) and 65-74 years (b=0.697, P=.002) were more likely than respondents aged 18-34 years to report they strongly agree that it is difficult for them to detect health-related falsehoods on social media. Compared to younger adults, older adults (65-74 years: b=0.818, P=.002; 75 years and older: b=1.058, P<.001) were more likely to believe medical providers should be responsible for reducing online falsehoods.

Conclusions: In addition to ongoing efforts by social media platforms to detect and remove falsehoods, the findings suggest medical providers should be tasked with discrediting health-related falsehoods on social media for older adults. However, time during the clinical visit is limited. Future research is needed to discover new approaches and tools tailored to older adults to assist with filtering and discrediting health-related falsehoods on social media.
