In recent years, hate speech has become a growing concern on social networks and other virtual media because of its intensity and its relationship with violent acts against members of protected groups. Due to the huge amounts of user-generated content, great effort has been made to develop automatic tools to aid the analysis and moderation of this speech, at least in its most threatening forms. One of the limitations of current approaches to automatic hate speech detection is the lack of context; most studies and resources focus on isolated messages, without considering any type of conversational context or even the topic being discussed. This severely restricts the information available for determining whether a post on a social network should be tagged as hateful or not. In this work, we assess the impact of adding contextual information to the hate speech detection task. In particular, we study a Twitter subdomain consisting of replies to posts by news outlets, which provides a natural environment for contextualized hate speech detection. We collected a novel corpus in the Rioplatense dialectal variety of Spanish focusing on hate speech associated with the COVID-19 pandemic, and manually annotated it using carefully designed guidelines. Our classification experiments using state-of-the-art transformer-based machine learning techniques show evidence that adding contextual information improves the performance of hate speech detection for two proposed tasks (binary and multi-label prediction), increasing their Macro F1 by 4.2 and 5.5 points, respectively. These results highlight the importance of exploiting contextual information for the task of hate speech detection. We make our code, models, and corpus available for further research.
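The contextualized setup described above can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the separator token and the label set are assumptions chosen for the example, and a real system would feed the combined string to a transformer tokenizer and classifier.

```python
# Sketch: combining a reply with its conversational context (the news post
# it responds to) into one input string, plus multi-hot label encoding for
# a multi-label task. SEP and LABELS are illustrative assumptions.

SEP = "[SEP]"  # assumed BERT-style separator token

# Hypothetical label set for the multi-label prediction task
LABELS = ["WOMEN", "LGBTI", "RACISM", "CLASS", "POLITICS", "DISABLED"]


def build_input(context_post: str, reply: str) -> str:
    """Prepend the news post (context) to the reply, separated by SEP."""
    return f"{context_post} {SEP} {reply}"


def encode_labels(active: set) -> list:
    """Multi-hot vector over LABELS for multi-label classification."""
    return [1 if label in active else 0 for label in LABELS]
```

In practice, most transformer libraries accept the context and reply as a sentence pair directly, inserting the separator automatically; the point is only that the model sees both the reply and the post it answers.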