Misinformation has dire implications for both public welfare and the operational aims of user-generated content platforms. As a result, platforms have adopted various content moderation policies aimed at decreasing the volume and impact of misinformation. However, implementing new platform policies risks decreasing user contributions and alienating core users, and evidence regarding the efficacy of such policies is mixed. Herein, we empirically assess a prominence reduction policy applied to a problematic group with high levels of misinformation. The goal of this policy is to reduce the visibility of misinformation on the platform rather than to delete misinformation or ban users. The results show that while prominence reduction diminishes misinformation dissemination within the focal group, it also causes misinformation to spill over into topically related spaces. This spillover is short-lived and driven primarily by a small set of problematic users; because misinformation does not spread contagiously within these external groups, the spillover diminishes over time. Finally, we find that prominence reduction has no impact on non-misinformation contributions on the studied platform. These findings have important implications for platform operations and offer managers useful recommendations for effectively reducing the spread of misinformation.