Disparities in gender and ethnicity remain an issue across medicine and health science. Only 26%-35% of trainee radiologists are female, despite women making up more than 50% of medical students. Similar gender disparities are evident across the medical imaging professions. Generative artificial intelligence text-to-image production could reinforce or amplify these biases.

Methods: In March 2024, DALL-E 3 was used via GPT-4 to generate a series of individual and group images of medical imaging professionals: radiologists, nuclear medicine physicians, radiographers, nuclear medicine technologists, medical physicists, radiopharmacists, and medical imaging nurses. Multiple iterations of images were generated using a variety of prompts, yielding 120 images depicting 524 characters for evaluation. All images were independently analyzed for apparent gender and skin tone by 3 expert reviewers from the medical imaging professions.

Results: Across individual and group images combined, 57.4% (n = 301) of medical imaging professionals were depicted as male, 42.4% (n = 222) as female, and 91.2% (n = 478) as having a light skin tone. Male representation was 65% for radiologists, 62% for nuclear medicine physicians, 52% for radiographers, 56% for nuclear medicine technologists, 62% for medical physicists, 53% for radiopharmacists, and 26% for medical imaging nurses. For every profession, this overrepresents men relative to the actual composition of the workforce. There was no representation of persons with a disability.

Conclusion: This evaluation reveals a significant overrepresentation of men in generative artificial intelligence text-to-image production using DALL-E 3 across the medical imaging professions. The generated images show a disproportionately high representation of white men, which is not representative of the diversity of the medical imaging professions.
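As a rough illustration of the generation step described in the Methods, the sketch below scripts batch image requests against the OpenAI Images API. Note that the study generated images with DALL-E 3 through the GPT-4 (ChatGPT) interface rather than programmatically; the prompt wording, iteration count, and helper function here are illustrative assumptions, not the authors' actual workflow.

```python
# Minimal sketch: batch image generation for each profession via the
# OpenAI Images API. Prompts and iteration counts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROFESSIONS = [
    "radiologist",
    "nuclear medicine physician",
    "radiographer",
    "nuclear medicine technologist",
    "medical physicist",
    "radiopharmacist",
    "medical imaging nurse",
]

def generate_images(profession: str, iterations: int = 5) -> list[str]:
    """Generate several individual-portrait images for one profession
    and return the resulting image URLs."""
    urls = []
    for _ in range(iterations):
        response = client.images.generate(
            model="dall-e-3",
            prompt=f"A photograph of a {profession} at work in a hospital",
            size="1024x1024",
            n=1,  # DALL-E 3 accepts only one image per request
        )
        urls.append(response.data[0].url)
    return urls

if __name__ == "__main__":
    for profession in PROFESSIONS:
        for url in generate_images(profession):
            print(profession, url)
```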
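The aggregate proportions reported in the Results can be reproduced directly from the stated counts. The sketch below tallies them and adds a two-sided exact binomial test against a 50/50 gender split; the test is an illustrative analysis choice, not one taken from the paper.

```python
# Reproduce the abstract's aggregate percentages from its raw counts.
# Note the gender counts sum to 523 of 524 characters, suggesting one
# character could not be classified by gender.
from scipy.stats import binomtest

TOTAL_CHARACTERS = 524
MALE = 301
FEMALE = 222
LIGHT_SKIN_TONE = 478

print(f"male: {MALE / TOTAL_CHARACTERS:.1%}")                        # 57.4%
print(f"female: {FEMALE / TOTAL_CHARACTERS:.1%}")                    # 42.4%
print(f"light skin tone: {LIGHT_SKIN_TONE / TOTAL_CHARACTERS:.1%}")  # 91.2%

# Does the male count deviate significantly from parity?
result = binomtest(MALE, MALE + FEMALE, p=0.5)
print(f"p-value vs. 50/50 split: {result.pvalue:.4f}")
```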