Discord, originally created for the gamer community, is now also used by hobby groups and communities for shared learning. A downside, however, is the gamer culture that comes with it: the rude and toxic language common in gaming communities should be avoided in study-group communities. Meanwhile, the built-in facilities for minimizing harsh and toxic words are limited to word filters, which can be tricked so that toxic messages still reach the chat room. This can trigger conflict and interfere with shared learning activities. This paper proposes an information-assistance chatbot that can answer questions and prevent conflict by detecting toxic sentences using NLP (Natural Language Processing) pre-processing and text classification, allowing the chatbot to block toxic sentences somewhat more accurately than the word-filter feature alone. The chatbot is also able to determine the toxicity level of a conversation and select the corresponding punishment: warning, suspending, or, in the most severe cases, expelling the sender. In addition, by observing the message frequency of senders flagged as toxic, the chatbot can determine when a conflict occurs. The results show that the chatbot works well at answering questions and detecting toxicity, including punishing toxic senders, with a 10% error rate in conflict detection and a 30% error rate in answering questions. The 30% error consists of false positives, where the chatbot produced answers to questions it should not have answered.
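The pipeline described above (pre-processing, toxicity scoring, and escalating punishment) can be sketched as follows. This is a minimal illustration only: the toxic-word lexicon, the score thresholds, and the lexicon-based scorer are all hypothetical stand-ins for the paper's trained text classifier.

```python
import re

# Hypothetical toxic lexicon -- the paper's actual system uses a trained
# text classifier; this keyword scorer only stands in for it.
TOXIC_WORDS = {"idiot", "stupid", "trash"}

def preprocess(text):
    # NLP pre-processing: lowercase, strip punctuation, tokenize on whitespace
    cleaned = re.sub(r"[^\w\s]", "", text.lower())
    return cleaned.split()

def toxicity_score(text):
    # Fraction of tokens found in the toxic lexicon (stand-in for the classifier)
    tokens = preprocess(text)
    if not tokens:
        return 0.0
    return sum(t in TOXIC_WORDS for t in tokens) / len(tokens)

def punishment(score):
    # Map the toxicity level to the escalating actions described in the paper
    # (thresholds are assumptions, not taken from the paper)
    if score >= 0.6:
        return "expel"
    if score >= 0.3:
        return "suspend"
    if score > 0.0:
        return "warn"
    return "none"
```

For example, a mildly toxic message yields a low score and only a warning, while a message composed mostly of toxic words crosses the higher threshold and triggers expulsion.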