Sir,
Re: Generative AI Large Language Models and Reassurance-Seeking in Obsessive–Compulsive Disorder.
I write to highlight – and prompt reflection on – the evolving relationship between generative artificial intelligence (AI), particularly large language models (LLMs) such as ChatGPT, and obsessive–compulsive disorder (OCD). While research in this area is still emerging, there is growing recognition among clinicians and academics that LLMs are facilitating novel patterns of compulsive reassurance-seeking and checking which merit consideration.
Classically, reassurance-seeking in OCD has involved repeated appeals to family, friends, or health professionals in a vain pursuit of certainty. With the advent of the internet over the past three decades, a digital analogue has emerged in the form of the Google search bar. “Cyberchondria” – a pattern of repeated internet searches for medical information that ultimately results in distress rather than reassurance – has become familiar across clinical settings (Doherty-Torstrick et al., 2016; Starcevic & Berle, 2013). Although this phenomenon is more closely tied to health anxiety, it exemplifies a broader trend of online reassurance-seeking, which has also been observed in OCD – especially among individuals with more severe symptoms – where such behaviours reinforce the disorder’s pathological cycle (Parsons, 2025; Parsons & Alden, 2022).
The arrival of LLMs represents a further evolution. Unlike static webpages or the inherently finite patience of human interlocutors, ChatGPT and its competitors provide seemingly limitless courteous and adaptive responses. Each reply shifts subtly in response to the nuances of the user’s question, creating a sense of personalised engagement, while their conversational yet authoritative tone fosters a human-like credibility (Parsons, 2025). Available round the clock and devoid of the social cues that typically signal “enough is enough”, these models may unintentionally give rise to cycles of compulsive reassurance-seeking that would have been self-limiting in bygone years (Golden & Aboujaoude, 2024).
As AI tools become embedded in daily life, their potential to reinforce compulsive reassurance-seeking grows increasingly relevant to clinical practice. Clinicians may consider screening for LLM use during OCD assessments or discussing healthy ways of interacting with AI tools during psychoeducation. With early research into AI’s role in OCD diagnosis and treatment now underway, a healthy scepticism is as important as ever to ensure these technologies support, rather than impede, our patients’ recoveries (Golden & Aboujaoude, 2024; Kim et al., 2025).
Yours sincerely,
Dr Dara Friars,
Psychiatry Registrar, Cluain Mhuire Mental Health Service, Blackrock, Co. Dublin
Dr Grainne Flynn,
Consultant Psychiatrist, Cluain Mhuire Mental Health Service, Blackrock, Co. Dublin
Funding statement
The authors have no funding to declare.
Competing interests
The authors have no conflicts of interest to declare.