
Navigating artificial intelligence's role in suicide prevention: balancing innovation with ethical vigilance

Published online by Cambridge University Press:  29 October 2025

Satish Suhas*
Affiliation:
Department of Psychiatry, National Institute of Mental Health and Neuro Sciences, Bengaluru, India
Guru S. Gowda
Affiliation:
Department of Psychiatry, National Institute of Mental Health and Neuro Sciences, Bengaluru, India
Krishna Prasad Muliyala
Affiliation:
Department of Psychiatry, National Institute of Mental Health and Neuro Sciences, Bengaluru, India
Venkata Senthil Kumar Reddi
Affiliation:
Department of Psychiatry, National Institute of Mental Health and Neuro Sciences, Bengaluru, India
Correspondence: Satish Suhas. Email: suhasedu@yahoo.in, drsuhas@nimhans.ac.in

Type: Letter

Copyright: © The Author(s), 2025. Published by Cambridge University Press on behalf of the Royal College of Psychiatrists

The recent discovery of a robot civil servant, ‘Robot Supervisor’, found unresponsive at the bottom of a stairwell in South Korea and termed a robot suicide, raises profound philosophical and technological questions.1 This robot supervisor was a fully integrated member of the city council, complete with civil service identification and regular working hours. Its final moments preceding the ‘suicide’ were described as ‘circling in one spot as if something was there’, evoking parallels with human behaviours associated with severe distress.1 The jury is still out on whether this ‘death’ was an accident or a suicide. However, the notion of robots experiencing despair and contemplating suicide challenges our understanding of consciousness and artificial intelligence. Although animals have also been known to end their lives in ways that superficially resemble human suicide, such deaths are generally understood as instinctual, stress-related or environmentally influenced actions rather than a deliberate, conscious choice to end life.2 Given the cogent argument against anthropomorphism, it could be speculated that suicide is a uniquely human trait.3 Suicide represents one of the most significant challenges faced by mental healthcare today.4 One could therefore speculate that the pinnacle of artificial intelligence sentience, informed by human behaviour, could also integrate similar cognitions and behaviours.

The concept of deeper thinking related to suicide is not unprecedented in philosophy and speculative fiction. This narrative resonates with Albert Camus's argument in The Myth of Sisyphus,5 which recognises the link between the absurdity of existence and deep despair. Frank Herbert's early works also provocatively examine suicide or insanity as artificial intelligence's eventual doomed fate.6 This perspective has likewise been explored in mainstream Hollywood films such as Star Wars and Terminator, as well as in the more contemporary Netflix series Love, Death & Robots,7 which takes up a similar theme in the episode ‘Zima Blue’.

The impulse to self-terminate or regress could reflect an existential awareness that transcends programmed behaviour, indicating a deeper understanding, and rejection, of one's own existence. This forces us to confront some unsettling questions: if artificial intelligence sentience can assimilate nuanced views on suicide, what implications does this have for the use of artificial intelligence in suicide prevention? The potential for sentient artificial intelligence systems to revolutionise mental health interventions by identifying at-risk individuals and providing timely, personalised assistance is immense.8 Conversely, the possibility of artificial intelligence systems themselves developing a form of sentience that includes the capability for non-programmed suicidal ideation9 raises significant ethical and practical challenges for both the development of artificial intelligence and its use in suicide prevention.

Data availability

Data sharing is not applicable to this correspondence.

Author contributions

S.S. conceptualised the manuscript, drafted the initial version and coordinated revisions. G.S.G. contributed to refining the manuscript and provided critical feedback during revisions. K.P.M. assisted in shaping the content and contributed to the final editing. V.S.K.R. provided expert guidance and reviewed the final manuscript for approval. All authors have approved the final manuscript.

Funding

This study received no specific grant from any funding agency, or commercial or not-for-profit sectors.

Declaration of interest

We declare no competing interests. The authors alone are responsible for the views expressed in this correspondence, and they do not necessarily represent the views, decisions or policies of the institutions with which they are affiliated.

References

1. Tech Desk. South Korea sees its first ‘robot suicide’: here's what caused it. The Times of India, 9 July 2024 (https://timesofindia.indiatimes.com/technology/tech-news/south-korea-sees-its-first-robot-sucide-heres-what-caused-it/articleshow/111517391.cms [accessed 9 July 2024]).
2. Preti, A. Animal model and neurobiology of suicide. Prog Neuropsychopharmacol Biol Psychiatry 2011; 35: 818–30.
3. deCatanzaro, D. Human suicide: a biological perspective. Behav Brain Sci 1980; 3: 265–72.
4. Hawton, K, Pirkis, J. Suicide prevention: reflections on progress over the past decade. Lancet Psychiatry 2024; 11: 472–80.
5. Camus, A. The Myth of Sisyphus and Other Essays. Grapevine India, 2022.
6. Herbert, F. Destination: Void (reprint edn). WordFire Press, 2012.
7. Love, Death & Robots. Netflix, 2019 (https://www.netflix.com/title/80174608 [accessed 9 July 2024]).
8. Kirtley, OJ, van Mens, K, Hoogendoorn, M, Kapur, N, de Beurs, D. Translating promise into practice: a review of machine learning in suicide research and prevention. Lancet Psychiatry 2022; 9: 243–52.
9. Pascual, MG. When the algorithm encourages you to commit suicide. EL PAÍS English, 2023 (https://english.elpais.com/science-tech/2023-11-20/when-the-algorithm-encourages-you-to-commit-suicide.html [accessed 21 July 2024]).