Open-ended survey questions can enable researchers to gain insights beyond more commonly used closed-ended question formats by allowing respondents to provide information in their own words and with few constraints. Open-ended web probes are also increasingly used to inform the design and evaluation of survey questions. However, open-ended questions are more susceptible to insufficient or irrelevant responses, which can be burdensome and time-consuming to identify and remove manually, often resulting in underuse of open-ended questions and, when they are used, potential inclusion of poor-quality data. To address these challenges, we developed and publicly released the Semi-Automated Nonresponse Detection for Survey text (SANDS) model, an item nonresponse detection approach that categorizes open-ended text data as valid or likely nonresponse. SANDS is based on a Bidirectional Encoder Representations from Transformers (BERT) model, fine-tuned using Simple Contrastive Learning of Sentence Embeddings (SimCSE) and targeted human coding. The approach is powerful in that it uses sentence-level natural language processing, whereas existing nonresponse detection approaches have relied exclusively on rules or regular expressions, or on bag-of-words methods that tend to perform poorly on the short texts, typos, and uncommon words prevalent in open-text survey data. This paper presents the development of SANDS and a quantitative evaluation of its performance and potential bias, using open-text responses from a series of web probes as case studies. Overall, the SANDS model performed well in identifying a dataset of likely valid responses suitable for quantitative or qualitative analysis, particularly on health-related data. Developed for generalizable use and publicly accessible, the SANDS model can greatly improve the efficiency of identifying inadequate and irrelevant open-text responses, offering expanded opportunities for using open-text data to inform question design and improve survey data quality.
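As a rough illustration of the embedding-based classification idea described above (a minimal sketch, not the released SANDS pipeline), the following example embeds short open-text responses with a publicly available SimCSE-fine-tuned BERT checkpoint and trains a simple classifier to separate valid responses from likely nonresponse. The checkpoint name, seed labels, and example responses are illustrative assumptions, not artifacts of the SANDS study.

```python
# Sketch only: SimCSE-style sentence embeddings + a small supervised classifier
# to flag likely item nonresponse. Checkpoint name and labels are assumptions.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "princeton-nlp/sup-simcse-bert-base-uncased"  # assumed public SimCSE checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def embed(texts):
    """Return [CLS]-token sentence embeddings for a list of short responses."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0, :].numpy()  # CLS pooling, as in SimCSE

# Tiny hand-labeled seed set: 1 = valid response, 0 = likely nonresponse.
train_texts = [
    "I could not afford my blood pressure medication",
    "asdfgh",
    "idk",
    "The question did not apply to me",
]
train_labels = [1, 0, 0, 1]

clf = LogisticRegression(max_iter=1000).fit(embed(train_texts), train_labels)

new_responses = ["My pharmacy was out of stock", "n/a"]
print(clf.predict(embed(new_responses)))  # e.g., [1 0]
```

In practice, a fine-tuned sentence encoder and a much larger human-coded training set, as described in the paper, would replace this toy seed set; the sketch is only meant to show why sentence-level embeddings handle short, typo-laden responses better than rules or bag-of-words features.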