ABSTRACT

Chatbots offer substantial potential benefits, yet concerns persist about users directing inappropriate, offensive language at them. This research investigated how user characteristics influence verbally aggressive behaviour towards social chatbots. Using a mixed-methods study, we examined individual characteristics including personal dispositions, offensive language patterns, academic majors, and prior experience with conversational agents. Findings from a ten-day field experiment, in which 33 participants used a real-world Telegram-based chatbot app, revealed that users' anthropomorphism, computer-related major, and gender significantly affect their moral emotions and their evaluations of the chatbot's capabilities. Moreover, using offensive language towards the chatbot degraded users' perceptions of its abilities, helpfulness, and likability. These findings call for ongoing monitoring and effective handling of users' offensive language in their interactions with a chatbot. They also underscore the importance of incorporating diverse perspectives into chatbot design to address biases and offensive utterances.