In human–robot interaction studies, trust is often defined as a process whereby a trustor makes themselves vulnerable to a trustee. The role of vulnerability, however, is frequently overlooked in this process, even though it could play an important part in gaining and maintaining trust between users and robots. To better understand how vulnerability affects human–robot trust, we first reviewed the literature to create a conceptual model of vulnerability comprising four vulnerability categories. We then performed a meta-analysis, first assessing the overall effect of the included variables on trust. The results showed that, overall, the variables investigated in our sample of studies have a positive impact on trust. We then conducted two multilevel moderator analyses to assess the effect of vulnerability on trust: (1) an intercept model that considers the relationships among our vulnerability categories and (2) a non-intercept model that treats each vulnerability category as an independent predictor. Only Model 2 was significant, suggesting that to build trust effectively, research should focus on improving robot performance in situations where users are unsure how reliable the robot will be. As our vulnerability variable is derived from studies of human–robot interaction and researcher reflections on the different risks involved, we relate our findings to these domains and suggest avenues for future research.