ABSTRACT

In this paper, we present findings from a pilot study that used content moderators from Brainly, a social learning Q&A platform, to assess the quality of answers. We argue that Brainly users who actively moderate content may have a better contextual understanding of how users interact with one another through question-answering activities, and of which answers are more likely to be relevant and appropriate to a question in the context of Brainly. The findings indicate that helpfulness, informativeness, and relevance are the factors with the greatest impact on answer quality. Further content analysis also identified two new criteria: 1) descriptiveness – evaluating how well answers provide descriptive summaries through detailed and additional information, and 2) explicitness – clearly constructing answers to reduce vagueness about what information answerers intend to provide to satisfy an asker's need.