Introduction: The use of artificial intelligence (AI), including neural networks and deep learning, has rapidly transformed many industries. One recent application of AI that has attracted the attention of physicians and the general public alike is ChatGPT, a chatbot developed and released by OpenAI Inc. in November 2022. We challenged ChatGPT to sit both parts of the Fellowship of the Royal College of Surgeons (FRCS) Urology examination to assess its performance and determine its eligibility for the title.
Methods: We prepared a set of 100 multiple-choice questions (MCQs) for Part 1 and 8 viva scenarios for Part 2. We chose ChatGPT 3.5, the free version available online, and documented its responses during the simulation.
Results: In the Part 1 MCQs, Chat Generative Pre-trained Transformer (ChatGPT) answered some questions correctly, with an overall score of 35%. In the Part 2 viva, its performance improved somewhat: it demonstrated a degree of proficiency by offering detailed explanations and reasoning, achieving an average score in some stations. Nevertheless, ChatGPT's performance raised concerns about the accuracy of its answers.
Conclusion: AI is undoubtedly an asset that can be used in daily practice, yet it is crucial to consider concerns about the accuracy of the data it provides, including AI hallucination. ChatGPT failed to earn the FRCS Urology title this time. As these tools are still in the early stages of development, further research is necessary to comprehensively understand their limitations and capabilities.
Level of evidence: Not applicable