ChatGPT is a conversational large language model with the potential to revolutionize knowledge acquisition. However, the impact of this technology on the quality of education is still unknown given the risks and concerns surrounding ChatGPT use, making it necessary to assess the usability and acceptability of this promising tool. Because ChatGPT is an innovative technology, the intention to use it can be studied in the context of the Technology Acceptance Model (TAM). This study aimed to develop and validate a TAM-based survey instrument that could be employed to examine the successful integration and use of ChatGPT in healthcare education.

The survey tool was created based on the TAM framework and comprised 13 items for participants who had heard of ChatGPT but had not used it and 23 items for participants who had used ChatGPT. Using a convenience sampling approach, the survey link was circulated electronically among university students during February-March 2023. Exploratory factor analysis (EFA) was used to assess the construct validity of the survey instrument.

The final sample comprised 458 respondents, with a median age of 20 years and a majority of undergraduate students (n=442, 96.5%). Only 109 respondents (23.9%) had heard of ChatGPT prior to participation, and only 55 (11.3%) self-reported ChatGPT use before the study. EFA of the attitude and usage scales showed significant Bartlett's tests of sphericity (P<.001) and adequate Kaiser-Meyer-Olkin measures (0.823 for the attitude scale and 0.702 for the usage scale), confirming the factorability of the correlation matrices. The EFA showed that three constructs explained a cumulative 69.3% of the variance in the attitude scale; these subscales represented perceived risks, attitude to technology/social influence, and anxiety.
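The factorability checks reported above (Bartlett's test of sphericity and the Kaiser-Meyer-Olkin measure) follow standard formulas. A minimal sketch of both, using simulated single-factor data rather than the study's dataset; the function names are illustrative, not from the study:

```python
import numpy as np
from scipy.stats import chi2


def bartlett_sphericity(data):
    """Bartlett's test that the correlation matrix is an identity matrix.

    A significant result (small p-value) indicates the items are
    intercorrelated enough for factor analysis.
    """
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    dof = p * (p - 1) / 2
    return stat, chi2.sf(stat, dof)


def kmo_measure(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy.

    Compares squared correlations to squared partial (anti-image)
    correlations; values above ~0.6 are conventionally adequate.
    """
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    d = np.sqrt(np.diag(inv))
    partial = -inv / np.outer(d, d)       # partial correlations
    np.fill_diagonal(partial, 0.0)
    off_diag = corr - np.eye(corr.shape[0])
    return (off_diag**2).sum() / ((off_diag**2).sum() + (partial**2).sum())


# Simulated responses: 6 items loading on one common factor
rng = np.random.default_rng(0)
factor = rng.normal(size=(500, 1))
data = factor + 0.5 * rng.normal(size=(500, 6))

stat, p = bartlett_sphericity(data)   # p should be far below .001 here
kmo = kmo_measure(data)               # high KMO for strongly correlated items
```

On real survey data, these checks would precede the EFA itself, exactly as the abstract describes.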
For the ChatGPT usage scale, EFA showed that four constructs explained a cumulative 72.0% of the variance in the data: perceived usefulness, perceived risks, perceived ease of use, and behavior/cognitive factors. All of the ChatGPT attitude and usage subscales showed good reliability, with Cronbach alpha values >0.78 for all the deduced subscales.

The TAME-ChatGPT instrument demonstrated good reliability, validity, and usefulness in assessing attitudes towards ChatGPT among healthcare students. The findings highlighted the importance of considering risk perceptions, usefulness, ease of use, attitudes towards technology, and behavioral factors when adopting ChatGPT as a tool in healthcare education. This information can aid stakeholders in creating strategies to support the optimal and ethical use of ChatGPT and in identifying potential challenges hindering its successful implementation. Future research is recommended to guide the effective adoption of ChatGPT in healthcare education.
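The internal-consistency criterion reported above (Cronbach alpha >0.78 per subscale) can be computed directly from an item-score matrix. A minimal sketch, assuming a respondents-by-items array of Likert scores; the data shown are toy values, not the study's:

```python
import numpy as np


def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


# Toy check: four perfectly consistent items yield alpha = 1
scores = np.tile(np.arange(1, 6, dtype=float).reshape(-1, 1), (1, 4))
alpha = cronbach_alpha(scores)
```

In a validation workflow like the one described, alpha would be computed once per deduced subscale, on the columns belonging to that subscale only.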