Social computing platforms facilitate interpersonal harms that span online and physical realms, such as sexual violence between online daters and sexual grooming through social media. Risk detection AI has emerged as an approach to preventing such harms; however, a myopic focus on computational performance has been criticized in the HCI literature for failing to consider how users should interact with risk detection AI to stay safe. In this paper, we report an interview study with women-identifying online daters (n=20) about how they envision interacting with risk detection AI and how risk detection models can be designed to support such interactions. To this end, we engaged participants in model-building exercises in which they designed their own risk detection models. Findings show that women anticipate interacting with risk detection AI to augment, rather than replace, their personal risk assessment strategies. Likewise, they designed risk detection models to amplify their subjective and admittedly biased indicators of risk. Design implications include the potential of personalizable risk detection models, as well as ethical concerns about perpetuating problematic stereotypes associated with risk.