Many network scientists have investigated how to mitigate or remove false information propagated in social networks. False information falls into two broad categories: disinformation and misinformation. Disinformation is false information knowingly shared with malicious intent; misinformation, in contrast, is false information shared unwittingly, without malicious intent. Most existing methods for mitigating or removing false information in networks concentrate on finding a set of seed nodes (or agents) to treat, selected by their network characteristics (e.g., centrality), with the aim of disseminating correct information as efficiently as possible. However, little work has examined the role of uncertainty in how agents form their opinions. Uncertainty-aware agents can form different opinions, and eventually different beliefs, about true or false information, resulting in different patterns of information diffusion in networks. In this work, we leverage an opinion model called Subjective Logic (SL), which explicitly represents uncertainty: an opinion is defined as a combination of belief, disbelief, and uncertainty, where the level of uncertainty is naturally interpreted as a person's confidence in the given belief or disbelief. However, SL considers only the uncertainty derived from a lack of information (i.e., ignorance), not uncertainty arising from other causes, such as conflicting evidence. In the era of Big Data, where we are flooded with information, conflicting information can increase uncertainty (i.e., ambiguity) and may affect opinions more strongly than a lack of information (i.e., ignorance). To enhance SL's capability to deal with ambiguity as well as ignorance, we propose an SL-based opinion model whose uncertainty captures both causes.
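The SL opinion structure described above can be sketched in a few lines. The sketch below is illustrative only: the class name, field names, and the default base rate of 0.5 are our assumptions for exposition, not the paper's implementation. In standard Subjective Logic, an opinion satisfies belief + disbelief + uncertainty = 1, and a base rate (prior) projects the uncertainty mass onto an expected probability.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """A Subjective Logic opinion (illustrative sketch).

    Invariant: b + d + u == 1. The projected (expected) probability
    is b + a * u, where a is the base rate (prior belief).
    """
    b: float          # belief mass
    d: float          # disbelief mass
    u: float          # uncertainty mass
    a: float = 0.5    # base rate; assumed default for this sketch

    def __post_init__(self):
        assert abs(self.b + self.d + self.u - 1.0) < 1e-9, "b + d + u must equal 1"

    def expected(self) -> float:
        """Project uncertainty onto belief via the base rate."""
        return self.b + self.a * self.u

# A mostly ignorant agent: uncertainty dominates, so the expected
# probability falls back toward the base rate.
vacuous = Opinion(b=0.1, d=0.1, u=0.8)
print(vacuous.expected())  # 0.1 + 0.5 * 0.8 = 0.5
```

The invariant makes the "confidence" reading concrete: as `u` shrinks, the expected probability is driven by accumulated belief or disbelief rather than the prior.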
By developing a variant of the Susceptible-Infected-Recovered (SIR) epidemic model that changes an agent's status based on the state of its opinion, we capture the evolution of agents' opinions over time. We analyze and discuss critical changes in network outcomes under varying values of key design parameters, including the frequency ratio of true to false information propagation, the centrality metrics used to select seed false informers and true informers, an opinion decay factor, the strength of agents' prior beliefs, and the percentage of true informers. We validated the proposed opinion model in both synthetic network environments and realistic network environments that consider a real network topology, user behaviors, and the quality of news articles. The proposed opinion model and the corresponding strategies for dealing with false information are applicable to combating the spread of fake news on various social media platforms (e.g., Facebook).
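One way to picture an opinion-driven SIR variant is a mapping from an agent's expected belief in a false claim to an epidemic status. The function below is a minimal sketch under our own assumptions: the threshold value and the exact status semantics are illustrative, not the paper's rules.

```python
def agent_status(expected_belief_in_false_claim: float,
                 threshold: float = 0.6) -> str:
    """Map an agent's opinion to an SIR-style status (illustrative thresholds).

    Susceptible: still uncertain/undecided about the claim.
    Infected:    believes the false claim strongly enough to spread it.
    Recovered:   disbelieves the claim, e.g., after receiving true information.
    """
    if expected_belief_in_false_claim >= threshold:
        return "infected"
    elif expected_belief_in_false_claim <= 1.0 - threshold:
        return "recovered"
    return "susceptible"

# A believing agent spreads the claim; a disbelieving one does not;
# an uncertain one remains open to influence from either side.
print(agent_status(0.9))  # infected
print(agent_status(0.2))  # recovered
print(agent_status(0.5))  # susceptible
```

Under such a mapping, parameters like the opinion decay factor or the ratio of true to false propagation shift agents' expected beliefs over time, and thereby move them between epidemic compartments.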