Recent developments in conversational AI offer advanced features that may support studying. However, these tools are not yet sufficiently developed to replace established learning methods or the human dimension of group learning.
AI is advancing rapidly and may become a standard component of postgraduate training in the future. Its integration will necessitate careful attention to ethical considerations (Boudi et al., Reference Boudi, Boudi, Chan and Boudi2024), including compliance with GDPR, as well as adaptations to the structure of postgraduate education.
In this letter, I describe my personal experience and reflections on using a hybrid approach to prepare for postgraduate psychiatry membership examinations. This approach integrated traditional study groups, structured supervision, and peer learning with the novel use of conversational artificial intelligence (AI).
Psychiatric training continues to evolve alongside rapid advances in technology and shifting educational expectations. As a psychiatry trainee in Ireland, I sought a more flexible, personalised, and effective approach to preparing for the Objective Structured Clinical Examinations (OSCEs). These assessments, central to the membership process with the College of Psychiatrists of Ireland, closely resemble the Clinical Assessment of Skills and Competencies (CASC) conducted by the Royal College of Psychiatrists in the UK. These exams demand more than textbook recall – they require diagnostic precision (guided by ICD-11 or DSM-5), concise and compassionate communication, risk formulation, and evidence-based management under strict time pressure. For trainees balancing clinical duties, family obligations, and – in some cases – language barriers, traditional study models can be difficult to sustain.
To address these limitations, I experimented with a hybrid model of AI-assisted learning alongside traditional group study. I used the paid version of ChatGPT-4o, which enabled sustained, high-quality interaction by offering extended memory, faster processing, customisable prompts, elevated usage limits, and enhanced stability (Öncü et al., Reference Öncü, Torun and Ülkü2025). At the outset, I explained what an OSCE is and the specific format I wanted to follow: seven-minute stations, with ChatGPT always assuming the role of the patient and me as the candidate (doctor). I requested structured feedback after each interaction, covering key elements such as time management, clinical structure, areas for improvement, and strengths.
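In condensed form, the set-up instruction resembled the following (paraphrased here for illustration rather than reproduced verbatim): “We will practise OSCE stations. Each station lasts seven minutes. You always play the patient described in the scenario; I am the candidate (doctor). Stay in role until the station ends, then give structured feedback on my time management, clinical structure, strengths, and areas for improvement.”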
The model complied reliably, offering both simulated responses and structured feedback, for example, “Your risk formulation was strong, but you didn’t ask about protective factors.” This allowed for rapid, targeted improvement.
While there is no one-size-fits-all approach – since effectiveness depends on the user’s expertise and objectives – I found ChatGPT to be highly adaptable. It was able to simulate OSCE stations accurately with minimal instruction. For example, a simple prompt like: “You are a 42-year-old woman presenting to the emergency department with depressive symptoms, psychotic features, and suicidal ideation” elicited a realistic and engaging scenario without the need for extensive context or explanation (Cross et al., Reference Cross, Kayalackakom, Robinson, Vaughans, Sebastian and Hood2025).
In addition, I asked ChatGPT to help generate mnemonics to support memory retention of complex criteria. We used standard ones like DIGFAST (Distractibility, Impulsivity, Grandiosity, Flight of ideas, Activity increase, Sleep deficit, Talkative) for mania and created helpful acronyms such as A MEGA PICS (Anhedonia, Mood low, Energy low, Guilt, Appetite low, Psychomotor agitation/retardation, Ideation suicidal, Concentration low, Sleep reduced/increased) for depression. Importantly, I used ChatGPT to assist in translating clinical language into patient-friendly terms. For instance, “nephrogenic diabetes insipidus” was reframed as “a possible effect on the kidneys that might make you feel thirsty or need to pass urine more often.” This helped build confidence and fluency in communicating clearly and compassionately. As a non-native English speaker, I found the AI particularly helpful for refining fluency, pronunciation, and consultation style.
The pressure-free environment allowed for repetition, improving confidence and the verbal structuring of complex tasks such as risk assessment, safety planning, and psychoeducation (e.g. explaining clozapine treatment). Prompted by ChatGPT, I repeated each explanation until fluent, creating a looped learning experience. Much of this study occurred while walking through Phoenix Park in Dublin, during lunch breaks, or commuting. Traditional solo study often felt cognitively taxing because of isolation, lack of immediate feedback, and the monotony of rote repetition. By contrast, AI-based interaction provided dynamic dialogue, instant clarification, and continuous variation in phrasing and prompts. This transformed repetition from a draining task into an engaging process, sustaining attention and motivation over long periods. Studying in natural environments further enhanced focus and reduced fatigue – an effect rarely achieved when sitting at a desk staring at a screen or notes – and aligns with findings in cognitive psychology suggesting that natural light and movement can improve memory and attention. Educationally, dialogue-based rehearsal, active recall with feedback, and self-directed building on prior knowledge align with the constructivist and situated learning models described by Vygotsky (Reference Vygotsky1978) and Bruner.
Some limitations emerged. ChatGPT occasionally cited epidemiological figures that conflicted with global estimates – for instance, reporting ADHD prevalence as 15% without clarifying that this figure referred to US data. Careful prompt refinement was required to obtain useful responses: I learned to give clearer prompts, explain exam expectations, and specify grading domains – such as empathy, rapport, mental state structure, and risk assessment. When instructions were unclear, ChatGPT sometimes returned confusing outputs, such as role-playing both patient and doctor simultaneously or slipping into monologue. I was not only learning from the tool – I was also actively teaching and calibrating it (Adarkwah et al., Reference Adarkwah, Badu, Osei, Adu-Gyamfi, Odame and Käthe2025). The benefit was that, once trained, it could retain that knowledge and support repeated, tailored practice.
Functional issues also arose: it could not read PDFs or interpret screenshots of clinical station scenarios from OSCE/CASC textbooks. Although pasting actor instructions from online versions of these books usually worked, there were occasional failures, likely due to server load or input length limits.
Based on my experience, I would not recommend ChatGPT to learners without strong foundational knowledge of the topic. Without the ability to independently verify or correct inconsistencies, there is a genuine risk of being misled.
In conclusion, this letter offers a reflective model of how conversational AI – when used intentionally and critically – can support psychiatry trainees preparing for clinical exams. It enhances flexibility, enables repeated rehearsal, and simulates clinical conversations in a cognitively sustainable way. While not a substitute for supervision or peer learning, ChatGPT proved to be a valuable aid in preparing for postgraduate psychiatry clinical examinations. This approach is easily replicable and adaptable across specialties, training levels, and geographic settings, allowing it to be tailored to diverse needs.
Financial support
The author received no financial support for this correspondence.
Competing interests
The author declares no competing interests.
Ethical standards
This Letter to the Editor did not require ethical approval.