RACS ASC 2025
Using ChatGPT as a Tool for Surgical Education and Training – A Pilot Through Practice GSSE Exam
Poster

Presentation Description

Institution: Nepean Hospital - NSW, Australia

Purpose: Artificial intelligence (AI) has emerged as a transformative tool in medical education. While prior studies have evaluated AI’s performance on global certification exams, such as the Plastic Surgery In-Training Exam and the European Hand Certification Exam, its potential for assisting in preparation for the General Surgical Science Examination (GSSE) remains unexamined. This study investigates the utility of ChatGPT-4.0 as an adjunct resource by evaluating its performance on a practice GSSE exam. Methodology: GSSE questions were sourced from the question bank covering Anatomy, Pathology and Physiology. These questions were submitted to ChatGPT in isolated chat sessions to account for its adaptive learning within a conversation. Results: The AI’s performance was evaluated across three attempts per category. Anatomy scores were 15/20, 13/20, and 12/20, reflecting challenges in addressing complex anatomical topics. Pathology demonstrated stronger outcomes with consistent scores of 16/20, 19/20, and 16/20, showcasing its effectiveness in conceptual understanding. Physiology yielded robust results, with scores of 17/20, 19/20, and 15/20, highlighting ChatGPT’s capability in handling physiological queries. Conclusion: ChatGPT-4.0 shows promise as a supplementary educational tool for surgical trainees, particularly for reinforcing theoretical concepts and facilitating repetitive practice. However, its limitations, including inaccuracies in nuanced topics and occasional "hallucination" of answers without citations, warrant caution in its application. Further research is needed to explore its broader utility and long-term impact on surgical education and training outcomes.
Presenters
Authors

Dr Sarah Huang