Abstract

The powerful allure of social media platforms has been attributed to the human need for social rewards. Here, we demonstrate that the spread of misinformation on such platforms is facilitated by existing social ‘carrots’ (e.g., ‘likes’) and ‘sticks’ (e.g., ‘dislikes’) that are dissociated from the veracity of the information shared. Testing 951 participants over six experiments, we show that a slight change to the incentive structure of social media platforms, such that social rewards and punishments are contingent on information veracity, produces a considerable increase in the discernment of shared information. Namely, an increase in the proportion of true information shared relative to the proportion of false information shared. Computational modeling (i.e., drift-diffusion models) revealed the underlying mechanism of this effect is associated with an increase in the weight participants assign to evidence consistent with discerning behavior. The results offer evidence for an intervention that could be adopted to reduce misinformation spread, which in turn could reduce violence, vaccine hesitancy and political polarization, without reducing engagement.

Editor's evaluation

This important paper outlines a novel method for reducing the spread of misinformation on social media platforms. A compelling series of experiments and replications support the main claims, which could have significant real-world societal impact.

https://doi.org/10.7554/eLife.85767.sa0

eLife digest

In recent years, the amount of untrue information, or ‘misinformation’, shared online has increased rapidly. This can have profound effects on society and has been linked to violence, political extremism, and resistance to climate action. One reason for the spread of misinformation is the lack of incentives for users to share true content and avoid sharing false content. People tend to select actions that they believe will lead to positive feedback (‘carrots’) and try to avoid actions that lead to negative feedback (‘sticks’). On most social media sites, these carrots and sticks come in the form of ‘like’ and ‘dislike’ reactions, respectively. Stories that users think will attract ‘likes’ are most likely to be shared with other users. However, because the number of likes a post receives is not representative of how accurate it is, users share information even if they suspect it may not be accurate. As a result, misinformation can spread rapidly.

Measures aimed at slowing the spread of misinformation have been introduced to some social media sites, such as removing a few virulent spreaders of falsities and flagging misleading content. However, measures that change the incentive structure of sites so that positive and negative feedback is based on the trustworthiness of the information have not yet been explored. To test this approach, Globig et al. set up a simulated social media site that included ‘trust’ and ‘distrust’ buttons, as well as the usual ‘like’ and ‘dislike’ options. The site featured up to one hundred news stories, half of which were untrue. More than 900 participants viewed the news posts and could react using the new buttons as well as repost the stories.
The experiment showed that participants used the ‘trust’ and ‘distrust’ buttons to differentiate between true and false posts more than the other options. As a result, to receive more ‘trust’ responses and fewer ‘distrust’ responses from other users, participants were more likely to repost true stories than false ones. This led to a large reduction in the amount of misinformation being spread. Computational modeling revealed that the participants were paying more attention to how reliable a news story appeared to be when deciding whether to repost it.

Globig et al. showed that adding buttons to highlight the trustworthiness of posts on social media sites reduces the spread of misinformation without reducing user engagement. This measure could be easily incorporated into existing social media sites and could have a positive impact on issues that are often fuelled by misinformation, such as vaccine hesitancy and resistance to climate action.

Introduction

In recent years, the spread of misinformation online has skyrocketed, increasing polarization, racism and resistance to climate action and vaccines (Barreto et al., 2021; Rapp and Salovich, 2018; Tsfati et al., 2020; Van Bavel et al., 2021). Existing measures to halt the spread, such as flagging posts, have had limited impact (e.g., Chan et al., 2017; Grady et al., 2021; Lees et al., 2022). We hypothesize that the spread of misinformation on social media platforms is facilitated by the existing incentive structure of those platforms, where social rewards (in the form of ‘likes’ and ‘shares’) are dissociated from the veracity of the information (Figure 1a, left panel; Sharot, 2021). The rationale for this hypothesis is as follows: users can discern true from false content to a reasonable degree (Allen et al., 2021; Pennycook and Rand, 2019). Yet, because misinformation generates no fewer retweets and ‘likes’ than reliable information (Lazer et al., 2018; Vosoughi et al., 2018), and because online behavior conforms to a reinforcement-learning model by which users react to social rewards (Lindström et al., 2021; Brady et al., 2021), users have little reason to use their discernment to guide their sharing behavior. Thus, people will share misinformation even when they do not trust it (Pennycook et al., 2021; Ren et al., 2021).

Figure 1. Theoretical framework. (a) The current incentive structure (blue) is such that the veracity of shared information is dissociated from rewards (‘carrots’) and punishments (‘sticks’). That is, true information and misinformation may lead to a roughly equal number of rewards and punishments. An optimal incentive structure (orange) is such that sharing true information is rewarded with more ‘carrots’ than sharing misinformation, which in turn is penalized with more ‘sticks’ than true information. To create an optimal environment, an intervention is needed by which the number of rewards and punishments is directly associated with the veracity of information. (b) We test one such possible intervention (Experiment 1). In particular, we allow people to engage with posts using ‘trust’ and ‘distrust’ reaction buttons (orange). The rationale is that they will use these reactions to discern true from false information more so than ‘like’ and ‘dislike’ reaction buttons.
(c) As a result, to obtain a greater number of ‘trust’ carrots and a smaller number of ‘distrust’ sticks in response to a post, people in the optimal environment (orange) will share more true information than misinformation compared to those in the suboptimal environment, which includes no feedback at all (gray), and those in an environment where the association between the veracity of information and the number of carrots and sticks is weak (blue). This second step is tested in Experiments 2 and 3.

To halt the spread, an incentive structure is needed where ‘carrots’ and ‘sticks’ are directly associated with accuracy (Figure 1a, right panel; Sharot, 2021). Such a system will work with the natural human tendency to select actions that lead to the greatest reward and avoid those that lead to punishment (Skinner, 1966). Scientists have tested different strategies to reduce the spread of misinformation, including educating people about fake news (Guess et al., 2020; Traberg et al., 2022), using a prompt to direct attention to accuracy (Kozyreva et al., 2020; Pennycook et al., 2021; Pennycook et al., 2020), and limiting how widely a post can be shared (Jackson et al., 2022). Surprisingly, possible interventions in which the incentive structure of social media platforms is altered to reduce misinformation have been overlooked.

Here, we test the efficacy of such a structure by slightly altering the engagement options offered to users. Specifically, we add the option to react to posts using ‘trust’ and ‘distrust’ buttons (Figure 1b). We selected these buttons because trust is by definition related to veracity – it is defined as ‘a firm belief in the reliability, truth, ability, or strength of someone or something’ (Oxford Dictionary). We hypothesize that (1) people will use the ‘trust’ and ‘distrust’ buttons to discern true from false information more so than the commonly existing engagement options (such as a ‘like’ button; Figure 1b, top panel). By ‘discernment’ we mean that true posts will receive more ‘trust’ reactions than ‘distrust’ reactions and vice versa for false posts. This will create an environment in which rewards (‘trusts’) and punishments (‘distrusts’) are more directly associated with the veracity of information. Thus, (2) when exposed to this environment, users will start sharing more true information and less false information in order to obtain more ‘trust’ carrots and fewer ‘distrust’ sticks (Figure 1b, bottom panel). The new feedback options could both reinforce user behavior that generates trustworthy material and signal to others that a post is dependable.

We also test environments in which participants receive only ‘trusts’ (a different number of trusts for different posts) or only ‘distrusts’ (a different number of distrusts for different posts) to examine whether and how the impact of small vs large positive feedback (‘trust’) on discernment differs from the impact of small vs large negative feedback (‘distrust’). It has been proposed that the possibility of reward is more likely to reinforce action than the possibility of punishment, while the possibility of punishment is more likely to reinforce inaction (Guitart-Masip et al., 2014; Guitart-Masip et al., 2011; Guitart-Masip et al., 2012). This may translate to a large number of ‘trusts’ selectively increasing the sharing of true information without decreasing the sharing of misinformation, and vice versa for a large number of ‘distrusts’.
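To make the hypothesized incentive structure concrete, the toy calculation below (not part of the study; all feedback rates and the choice rule are invented for illustration) shows how a sharer who is sensitive to expected net social feedback would become more discerning when ‘trusts’ and ‘distrusts’ track veracity than when ‘likes’ and ‘dislikes’ do not.

```python
# Toy illustration (not the authors' model): expected net social feedback for sharing
# a post under a veracity-contingent incentive structure ('trust'/'distrust') versus
# one only weakly tied to veracity ('like'/'dislike'). All numbers are invented.
import math

# Hypothetical expected (positive, negative) reactions per shared post, by veracity.
FEEDBACK = {
    'trust_distrust': {'true': (30, -5),  'false': (8, -25)},   # strongly veracity-contingent
    'like_dislike':   {'true': (20, -10), 'false': (18, -11)},  # weakly veracity-contingent
}
SHARING_BONUS = 5  # sharing itself is assumed rewarding (cf. Tamir and Mitchell, 2012)

def share_probability(environment: str, veracity: str, noise: float = 10.0) -> float:
    """Probability of sharing under a logistic choice rule over expected net feedback."""
    pos, neg = FEEDBACK[environment][veracity]
    expected_payoff = pos + neg + SHARING_BONUS  # carrots minus sticks, plus sharing bonus
    return 1.0 / (1.0 + math.exp(-expected_payoff / noise))

for env in FEEDBACK:
    p_true, p_false = share_probability(env, 'true'), share_probability(env, 'false')
    print(f"{env}: P(share|true)={p_true:.2f}  P(share|false)={p_false:.2f}  "
          f"discernment={p_true - p_false:.2f}")
```

Under these illustrative numbers, the veracity-contingent environment yields much higher sharing discernment than the weakly contingent one, which is the pattern the experiments below test empirically.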
Further, being mindful of potential differences in sharing behavior across political parties (e.g., Grinberg et al., 2019; Guess et al., 2020), we tested participants from both sides of the political divide. To that end, over six experiments 951 participants engaged in simulated social media platforms where they encountered true and false information. In Experiment 1, we examined whether participants would use ‘trust’ and ‘distrust’ buttons to discern true from false information more so than existing ‘like’ and ‘dislike’ buttons (Figure 1b; replication: Experiment 4). In Experiments 2 and 3, we tested whether new groups of participants would share more true than false information in social media platforms that introduce real ‘trust’ and ‘distrust’ feedback from other participants (Figure 1c; replications: Experiments 5 and 6). The intuition is that ‘trust’ and ‘distrust’ reactions will naturally be used to indicate veracity and thus provide a reward structure contingent on accuracy, thereby reducing the sharing of misinformation and generating a healthier information ecosystem. Using computational modeling, we provide insights into the specific mechanism by which our intervention improves sharing discernment.

Results

Participants use ‘trust’ and ‘distrust’ buttons to discern true from false information (Experiment 1)

In a first step, we examined whether participants used ‘trust’ and ‘distrust’ reactions to discern true from false information more so than ‘like’ and ‘dislike’ reactions. In Experiment 1, participants saw 100 news posts taken from the fact-checking website Politifact (https://www.politifact.com; see Figure 2). Half of the posts were true, and half were false. Participants were given the opportunity to react to each post using ‘like’, ‘dislike’, ‘trust’, and ‘distrust’ reaction buttons. They could select as many buttons as they wished or none at all (skip). Five participants were excluded according to pre-determined criteria (see Materials and methods for details). Thus, 106 participants (52 Democrats, 54 Republicans, Mage = 40.745, SDage ± 14.479; female = 54, male = 52) were included in the analysis. See Figure 2—figure supplement 1 for full instructions.

Figure 2 (with 1 supplement). Task (Experiment 1). Participants observed a series of 100 posts in random order (50 true, 50 false). Their task was to react using one or more of the ‘like’, ‘dislike’, ‘trust’, or ‘distrust’ buttons or to skip. The task was self-paced.

We then examined whether participants used the different reaction buttons to discern true from false information. Discernment was calculated as follows, such that higher numbers always indicate better discernment:

For ‘like’: Discernment_like = Prop_likes(true) − Prop_likes(false)

For ‘dislike’: Discernment_dislike = Prop_dislikes(false) − Prop_dislikes(true)

For ‘trust’: Discernment_trust = Prop_trusts(true) − Prop_trusts(false)

For ‘distrust’: Discernment_distrust = Prop_distrusts(false) − Prop_distrusts(true)

with Prop indicating the proportion of that reaction out of all true posts, or out of all false posts, as indicated. These discernment scores were calculated for each participant separately and then entered into a 2 (type of reaction: ‘trust’ and ‘distrust’ vs ‘like’ and ‘dislike’) by 2 (valence of reaction: positive, i.e., ‘like’ and ‘trust’, vs negative, i.e., ‘dislike’ and ‘distrust’) within-subject analysis of variance (ANOVA).
Political orientation was also added as a between-subject factor (Republican/Democrat), allowing for an interaction of political orientation and type of reaction to assess whether participants with differing political beliefs used the reaction buttons in different ways.

The results reveal that participants’ use of ‘(Dis)Trust’ reaction buttons (M = 0.127, SE = 0.007) was more discerning than their use of ‘(Dis)Like’ reaction buttons (M = 0.047, SE = 0.005; F(1,104) = 95.832, p < 0.001, partial η² = 0.48; Figure 3). We additionally observed an effect of valence (F(1,105) = 17.33, p < 0.001, partial η² = 0.14), with negatively valenced reaction buttons (‘dislike’ and ‘distrust’, M = 0.095, SE = 0.007) being used in a more discerning manner than positively valenced reaction buttons (‘like’ and ‘trust’, M = 0.087, SE = 0.005), and an effect of political orientation (F(1,104) = 25.262, p < 0.001, partial η² = 0.2), with Democrats (M = 0.115, SE = 0.007) being more discerning than Republicans (M = 0.06, SE = 0.005). There was also an interaction of type of reaction and political orientation (F(1,104) = 24.084, p < 0.001, partial η² = 0.19), which was characterized by Democrats showing greater discernment than Republicans in their use of ‘(Dis)Trust’ reaction buttons (F(1,104) = 33.592, p < 0.001, partial η² = 0.24), but not in their use of ‘(Dis)Like’ reaction buttons (F(1,104) = 2.255, p = 0.136, partial η² = 0.02). Importantly, however, both Democrats (F(1,51) = 93.376, p < 0.001, partial η² = 0.65) and Republicans (F(1,53) = 14.715, p < 0.001, partial η² = 0.22) used the ‘(Dis)Trust’ reaction buttons in a more discerning manner than the ‘(Dis)Like’ reaction buttons.

Figure 3 (with 1 supplement). Participants use ‘trust’ and ‘distrust’ reactions to discern true from false information. ‘Distrust’ and ‘trust’ reactions were used in a more discerning manner than ‘like’ and ‘dislike’ reactions. The Y axis shows discernment between true and false posts. For positive reactions (‘likes’ and ‘trusts’), discernment is equal to the proportion of positive reactions for true information minus false information, and vice versa for negative reactions (‘dislikes’ and ‘distrusts’). The X axis shows reaction options. Data are plotted as box plots for each reaction button, in which horizontal lines indicate median values, boxes indicate the 25/75% interquartile range, and whiskers indicate 1.5 × the interquartile range. The diamond shape indicates the mean discernment per reaction. Individuals’ mean discernment data are shown separately as gray dots. Symbols above each box plot indicate the significance level compared to 0 using a t-test. N = 106, ***p < 0.001.

One-sample t-tests against zero further revealed that participants’ use of each reaction button discerned true from false information (‘like’: M = 0.06, SE = 0.006, t(105) = 10.483, p < 0.001, Cohen’s d = 1.018; ‘trust’: M = 0.099, SE = 0.01, t(105) = 9.744, p < 0.001, Cohen’s d = 0.946; ‘dislike’: M = 0.034, SE = 0.007, t(105) = 4.76, p < 0.001, Cohen’s d = 0.462; ‘distrust’: M = 0.156, SE = 0.01, t(105) = 15.872, p < 0.001, Cohen’s d = 1.542).

Thus far, we have shown that participants use ‘(Dis)Trust’ reaction buttons in a more discerning manner than ‘(Dis)Like’ reaction buttons. As social media platforms care about overall engagement, not only its quality, we also examined how frequently participants used the different reaction buttons.
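To make these two measures concrete, the sketch below illustrates how per-participant discernment and button-use frequency could be computed from raw reaction data. This is not the authors' analysis code; the data layout, column names, and the paired t-test shown in place of the full mixed ANOVA are illustrative assumptions.

```python
# Minimal sketch (not the authors' analysis code) of the two Experiment 1 measures:
# per-participant discernment for each reaction button, and frequency of button use.
# Assumes a long-format DataFrame with one row per participant x post, boolean columns
# for each reaction ('like', 'dislike', 'trust', 'distrust'), and a 'veracity' column
# coded 'true' / 'false'; these column names are illustrative assumptions.
import pandas as pd
from scipy import stats

REACTIONS = ['like', 'dislike', 'trust', 'distrust']

def discernment_scores(df: pd.DataFrame) -> pd.DataFrame:
    """One row per participant, one discernment score per reaction button."""
    # Proportion of true (respectively false) posts that received each reaction.
    props = (df.groupby(['participant', 'veracity'])[REACTIONS]
               .mean()
               .unstack('veracity'))          # columns: (reaction, veracity)
    out = pd.DataFrame(index=props.index)
    # Positive reactions: proportion on true posts minus proportion on false posts.
    out['like'] = props[('like', 'true')] - props[('like', 'false')]
    out['trust'] = props[('trust', 'true')] - props[('trust', 'false')]
    # Negative reactions: proportion on false posts minus proportion on true posts.
    out['dislike'] = props[('dislike', 'false')] - props[('dislike', 'true')]
    out['distrust'] = props[('distrust', 'false')] - props[('distrust', 'true')]
    return out

def usage_frequency(df: pd.DataFrame) -> pd.DataFrame:
    """Percentage of trials on which each reaction button was used, per participant."""
    return df.groupby('participant')[REACTIONS].mean() * 100

# Key within-subject contrast, shown here as a paired t-test for simplicity
# (the paper reports a 2 x 2 ANOVA with political orientation as a between-subject factor):
# scores = discernment_scores(reactions_df)          # reactions_df is hypothetical
# trust_based = scores[['trust', 'distrust']].mean(axis=1)
# like_based = scores[['like', 'dislike']].mean(axis=1)
# t, p = stats.ttest_rel(trust_based, like_based)
```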
An ANOVA with the same specifications as above was conducted, but this time with frequency of reaction as the dependent variable. We found that participants used ‘(Dis)Trust’ reaction buttons more often than ‘(Dis)Like’ reaction buttons (percentage use of reaction out of all trials: ‘trust’: M = 28.057%; ‘distrust’: M = 34.085%; ‘like’: M = 18.604%; ‘dislike’: M = 23.745%; F(1,104) = 36.672, p < 0.001, partial η² = 0.26). In addition, negative reaction buttons (‘distrust’ and ‘dislike’: M = 28.915%, SE = 1.177) were used more frequently than positive reaction buttons (‘trust’ and ‘like’: M = 23.33%, SE = 1.133; F(1,105) = 16.96, p < 0.001, partial η² = 0.07). No other effect was significant. Interestingly, we also found that participants who skipped more posts were less discerning (R = −0.414, p < 0.001). Together, the results show that the new reaction options increase engagement. The results hold when controlling for demographics, when not including political orientation in the analysis, and when allowing for an interaction between type of reaction and valence (see Supplementary files 1 and 2). The results also replicate in an independent sample (Experiment 4; see Materials and methods for details, Figure 3—figure supplement 1, and Supplementary file 3).

‘Trust’ and ‘distrust’ incentives improve discernment in sharing behavior (Experiment 2)

Thus far, we have shown that participants use ‘(Dis)Trust’ reaction buttons in a more discerning manner than ‘(Dis)Like’ reaction buttons. Thus, an environment which offers ‘(Dis)Trust’ feedback is one where the number of ‘carrots’ (in the form of ‘trusts’) and the number of ‘sticks’ (in the form of ‘distrusts’) is directly associated with the veracity of the posts. It then follows that placing participants in such an environment should increase their sharing of true information (to receive ‘trusts’) and reduce their sharing of misinformation (to avoid ‘distrusts’).

To test this, we ran a second experiment. A new group of participants (N = 320) was recruited to engage in a simulated social media platform. They observed the same 100 posts (50 true, 50 false) shown to the participants in Experiment 1, but this time, instead of reacting to the posts, they could either share each post or skip it (see Figure 4 and Figure 4—figure supplements 1 and 2 for full instructions). They were told that if they chose to share a post, it would be shared to their feed such that other participants would be able to see the post and would then be able to react to it in real time (feedback). Depending on the environment participants were in, which varied between subjects, they could receive feedback in the form of the number of users who (1) ‘disliked’, (2) ‘liked’, (3) ‘distrusted’, or (4) ‘trusted’ their posts. We also included a (5) baseline condition, in which participants received no feedback. If participants selected to skip, they observed a white screen asking them to click continue. Data from 32 participants were not analyzed according to pre-determined criteria (see Materials and methods for details). Two hundred and eighty-eight participants (146 Democrats, 142 Republicans, Mage = 38.073, SDage ± 13.683; female = 147, male = 141) were included in the analysis.

Figure 4 (with 2 supplements). Task. In Experiment 2, on each of 100 trials participants observed a post (50 true, 50 false content).
They then chose whether to share it or skip (self-paced). They were told that if they chose to share a post, it would be shared to their feed such that other participants would be able to see the post and react to it in real time (feedback). Depending on the environment participants were in, they could observe the number of (1) ‘dislikes’ (N = 45), (2) ‘likes’ (N = 89), (3) ‘distrusts’ (N = 49), or (4) ‘trusts’ (N = 46) their post received. The feedback was in fact the number of reactions gathered from Experiment 1, though participants believed the reactions occurred in real time, as indicated by a rotating cogwheel (1 s). Once the feedback appeared, participants could click continue. If participants selected to skip, they observed a white screen asking them to click continue (self-paced). In the Baseline environment (N = 59), participants received no feedback. Experiment 3 was identical to Experiment 2 with two distinctions: (1) depending on the environment participants were in, they could observe the number of (i) both ‘dislikes’ and ‘likes’ (N = 128), (ii) both ‘distrusts’ and ‘trusts’ (N = 137), or (iii) no feedback (Baseline, N = 126); (2) in Experiment 3, we selected 40 posts (20 true, 20 false) to which Republicans and Democrats had on average reacted similarly using the ‘trust’ button in Experiment 1.

Discernment was calculated for each participant by subtracting the proportion of false information shared from the proportion of true information shared:

Discernment = Prop_reposts(true) − Prop_reposts(false)

High discernment indicates greater sharing of true than false information. These scores were submitted to a between-subject ANOVA with type of feedback (‘trust’ and ‘distrust’ / ‘like’ and ‘dislike’ / Baseline), valence (positive, i.e., ‘like’, ‘trust’ / negative, i.e., ‘dislike’, ‘distrust’ / neutral, i.e., no feedback) and political orientation (Republican/Democrat) as factors. We also allowed for an interaction of political orientation and type of feedback.

We observed an effect of type of feedback (F(1,281) = 15.2, p < 0.001, partial η² = 0.051), such that participants shared relatively more true than false information in the ‘(Dis)Trust’ environments (M = 0.18, SE = 0.018) than in the ‘(Dis)Like’ environments (M = 0.085, SE = 0.019; F(1,225) = 14.249, p < 0.001, partial η² = 0.06) and the Baseline environment (M = 0.084, SE = 0.025; F(1,150) = 10.906, p = 0.001, partial η² = 0.068; Figure 5a). Moreover, participants who received ‘trust’ feedback (M = 0.176, SE = 0.026) were more discerning in their sharing behavior than those who received ‘like’ feedback (M = 0.081, SE = 0.021; F(1,131) = 10.084, p = 0.002, partial η² = 0.071). Those who received ‘distrust’ feedback (M = 0.175, SE = 0.026) were more discerning than those who received ‘dislike’ feedback (M = 0.092, SE = 0.039; F(1,90) = 5.003, p = 0.028, partial η² = 0.053). We further observed a trend interaction between type of feedback and political orientation (F(1,281) = 2.939, p = 0.055, partial η² = 0.02). While Democrats (M = 0.213, SE = 0.014) were generally more discerning than Republicans (M = 0.017, SE = 0.016; F(1,281) = 77.392, p < 0.001, partial η² = 0.216), this difference was smaller in those who received ‘(Dis)Trust’ feedback (M = 0.082, SE = 0.034) compared to those who received ‘(Dis)Like’ feedback (M = 0.23, SE = 0.03; F(1,224) = 4.879, p = 0.028, partial η² = 0.021) and, by trend, smaller than in those who received no feedback (M = 0.229, SE = 0.045; F(1,149) = 3.774, p = 0.054, partial η² = 0.025).
There was no difference between the latter two (F(1,188) = 0.00, p = 0.988, partial η² = 0.00). No other effects were significant. Overall engagement, measured as the percentage of posts shared out of all trials, did not differ across environments (F(1,281) = 1.218, p = 0.271, partial η² = 0.004; mean % of posts shared out of all trials: Baseline = 27.712%; Dislike = 35.889%; Like = 33.258%; Distrust = 32.51%; Trust = 30.435%; see Supplementary file 4 for means for true and false posts).

Figure 5 (with 1 supplement). Altering the incentive structure of social media environments increases discernment of information shared. (a) Participants (N = 288) operating in an environment where ‘(Dis)Trust’ feedback was introduced shared more true information relative to false information than participants operating in an environment where only ‘(Dis)Like’ feedback was available, or no feedback at all (Baseline). The Y axis shows discernment, that is, the proportion of true posts shared minus the proportion of false posts shared. The X axis shows the group environment (type of feedback). (b) This was the case regardless of the topic of the post (politics, science, health, environment, society, other). Bubble size corresponds to the number of posts included in the study. The diagonal dashed line indicates the point of equivalence, where discernment is equal across the ‘(Dis)Like’ and ‘(Dis)Trust’ environments. As can be seen, all circles are above the dashed line, indicating that in all cases discernment is greater in an environment that offers ‘(Dis)Trust’ feedback. The Y axis shows discernment in the ‘(Dis)Trust’ environment, the X axis shows discernment in the ‘(Dis)Like’ environment. (c) Experiment 3 (N = 391) showed the same results as Experiment 2. Data are plotted as box plots for each environment, in which horizontal lines indicate median values, boxes indicate the 25/75% interquartile range, and whiskers indicate 1.5 × the interquartile range. The diamond shape indicates the mean discernment per environment. Individuals’ mean discernment data are shown separately as gray dots; symbols above each box plot indicate the significance level compared to 0 using a t-test. ***p < 0.001, **p < 0.01.

Results hold when controlling for demographics, when not including political orientation in the analysis, and when allowing for an interaction between type of reaction and valence (see Supplementary files 5 and 6). Results replicate in an independent sample (Experiment 5; see Materials and methods for details, Figure 5—figure supplement 1, and Supplementary file 7).

To recap, participants in Experiment 2 decided whether to share content or skip. They then observed the reactions of other participants to their post (they believed this was happening in real time, but for simplicity we fed them reactions of participants from Experiment 1). Each participant in Experiment 2 observed only one type of feedback, for example, only ‘distrusts’. How is it that observing ‘distrusts’ alone increases discernment? The rationale behind this design is that for any given post, true or false, some users will distrust the post. However, true posts will receive fewer ‘distrusts’ than false posts. It is the number of ‘distrusts’ per post that matters. Participants are motivated to minimize the average number of ‘distrusts’ they receive. To achieve this, they should post more true posts and fewer false posts. Of course, if participants were simply trying to minimize the total number of ‘distrusts’, they would just skip on every trial. Participants do not do that, however, potentially because sharing in and of itself is rewarding (Tamir and Mitchell, 2012). The results indicate that participants are sensitive to the number of ‘distrusts’ per post, not just to the total number of ‘distrusts’ over all posts.

The same rationale holds for the participants who only observed ‘trusts’. They received more ‘trusts’ for true than false posts; it is the magnitude of ‘trusts’ that is associated with veracity. This motivates participants to post more true posts and fewer false posts in order to maximize the average number of ‘trusts’ per post. Of course, if participants were simply trying to maximize the total number of ‘trusts’, they would just share on every trial. Participants do not do that, however, which indicates that they are sensitive to the number of ‘trusts’ per post, not just the total number over all posts. Any user of social media platforms can relate to this; when posting a tweet, for example, many people will be disappointed with only a handful of ‘hearts’. The user’s goal is to maximize positive feedback per post. The same rationale as above holds for ‘likes’ and ‘dislikes’, except that those are less associated with veracity and thus impact discernment less.

The posts included in the experiment covered a range of topics including politics, science, health, environment, and society. As observed in Figure 5b, the effect of the ‘(Dis)Trust’ environment on discernment is observed regardless of content type. Thus far, our results show that changing the incentive structure of social media platforms by coupling the number of ‘carrots’ and ‘sticks’ with information veracity could be a valuable tool to reduce the spread of misinformation.

If feedback promotes discernment in sharing behavior, it is plausible that it may in turn improve belief accuracy. To test this, we asked participants at the end of the experiment to indicate how accurate they thought each post was on a scale from inaccurate (0) to accurate (100). Participants’ error in estimating whether a post was true or false was calculated as follows: for false posts, error was equal to the participant’s accuracy rating, and for true posts it was equal to 100 minus their rating.
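As a concrete illustration, this error measure can be computed as in the sketch below (not the authors' code; the data layout and column names are assumptions): ratings run from 0 (inaccurate) to 100 (accurate), so error equals the rating itself for false posts and 100 minus the rating for true posts.

```python
# Sketch of the belief-error measure (illustrative; column names are assumptions):
# 'rating' is the 0-100 accuracy judgment and 'is_true' flags whether the post was true.
import pandas as pd

def estimation_error(ratings: pd.DataFrame) -> pd.Series:
    """Per-post error in judging veracity: rating for false posts, 100 - rating for true posts."""
    return ratings.apply(
        lambda row: row['rating'] if not row['is_true'] else 100 - row['rating'],
        axis=1,
    )

# Example: mean error per participant could then be compared across feedback environments.
# df['error'] = estimation_error(df)          # df is hypothetical
# mean_error = df.groupby('participant')['error'].mean()
```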
