
Challenges, Biases, and Solutions in Using Large Language Models Like ChatGPT for Public Health Communication and Crisis Management

Published online by Cambridge University Press: 02 December 2025

Usha Rana
Affiliation: School of Gender and Development Studies (SOGDS), Indira Gandhi National Open University, New Delhi, India

Rupender Singh*
Affiliation: Department of Computer and Information Engineering, Khalifa University, Abu Dhabi, United Arab Emirates

*Corresponding author: Rupender Singh; Email: rupender.singh@ku.ac.ae

Type: Letter to the Editor

Copyright: © The Author(s), 2025. Published by Cambridge University Press on behalf of Society for Disaster Medicine and Public Health, Inc.

Dear Editor,

In many Asian countries, public health communication faces unique challenges due to the region's diverse cultures, languages, and varying levels of digital infrastructure. The rapid spread of misinformation during health crises, such as the COVID-19 pandemic, highlighted the need for effective communication tools that can reach vast and heterogeneous populations. Large language models (LLMs) like ChatGPT offer a promising way to meet these challenges by providing accurate, timely, and accessible information across different languages and cultural contexts. In countries like India, China, and Indonesia, where internet penetration varies significantly between urban and rural areas, the deployment of LLMs can help bridge communication gaps. These models can be fine-tuned for local languages and dialects, ensuring that public health messages are comprehensible and culturally appropriate. Moreover, in densely populated regions, where misinformation can spread rapidly through social media, LLMs can help counter false narratives by generating reliable, context-specific information that resonates with the local population.1,2

Enhancing Public Health Communication

One of the most promising applications of LLMs in public health is their ability to combat misinformation, which has become increasingly prevalent during health crises.3 Misinformation can spread rapidly through social media and other online platforms, leading to public confusion and mistrust. LLMs like ChatGPT can be trained to identify and counteract false information by generating accurate, clear explanations that the general public can easily understand. These models can be deployed across multiple platforms, ensuring that consistent and reliable information is available where it is most needed. Moreover, LLMs can assist public health organizations in creating tailored communication strategies that address the specific concerns of different populations.4 For example, during a pandemic, an LLM could generate messages that are culturally sensitive and linguistically appropriate, thereby improving engagement and compliance with public health guidelines. The ability of LLMs to generate content in multiple languages and adapt to various cultural contexts makes them valuable tools for global health communication.
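
To illustrate, the sketch below shows how such tailored counter-messaging might be orchestrated in code. It is a minimal example, not a deployed system: the `call_llm` function is a hypothetical stand-in for any chat-completion API, stubbed here so the script runs without network access or credentials.

```python
# Sketch: generating language- and culture-specific corrections to a false
# health claim. `call_llm` is a hypothetical placeholder for a real LLM call.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a chat-completion endpoint)."""
    return f"[model output for prompt: {prompt[:60]}...]"

def tailored_health_message(claim: str, language: str, audience: str) -> str:
    """Ask the model to rebut a false claim in plain, local-language terms."""
    prompt = (
        f"A false claim is circulating: '{claim}'. "
        f"Write a short, respectful correction in {language} "
        f"for {audience}, citing only well-established health guidance."
    )
    return call_llm(prompt)

if __name__ == "__main__":
    msg = tailored_health_message(
        claim="Drinking hot water cures COVID-19",
        language="Hindi",
        audience="rural readers with low health literacy",
    )
    print(msg)
```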

Supporting Crisis Management

In addition to enhancing communication, LLMs can play a critical role in crisis management by aiding in the processing and analysis of vast amounts of data. During a public health emergency, decision-makers are often inundated with information from multiple sources, including government reports, scientific research, and real-time data from health care providers. LLMs can assist in filtering this information, highlighting the most relevant data points and providing summaries that facilitate quicker and more informed decision-making.5,6 For instance, during the early stages of the COVID-19 pandemic, health authorities had to rapidly synthesize information about the virus, its transmission, and effective mitigation strategies. LLMs could have been employed to automate the review of emerging research, identifying key findings and trends that would inform public health responses. Furthermore, these models can be used to simulate various crisis scenarios, providing valuable insights into potential outcomes and helping to prepare more effective response strategies.
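
A simple triage loop of this kind might look like the following sketch. The summarizer and relevance score are deliberately crude stand-ins (truncation and keyword counting); a real pipeline would substitute LLM calls and validated ranking criteria.

```python
# Sketch: triaging a stream of crisis documents by scoring relevance and
# summarizing the top items, so human decision-makers read the most
# pertinent material first. All heuristics here are illustrative assumptions.

def summarize(text: str, max_words: int = 40) -> str:
    """Stand-in for an LLM summarization call; truncates as a placeholder."""
    words = text.split()
    return " ".join(words[:max_words]) + ("..." if len(words) > max_words else "")

def relevance_score(text: str, keywords: list[str]) -> int:
    """Crude keyword hit count; a deployed system might use an LLM or classifier."""
    lower = text.lower()
    return sum(lower.count(k) for k in keywords)

def triage(documents: list[str], keywords: list[str], top_n: int = 3) -> list[str]:
    """Rank documents by relevance and return summaries of the top matches."""
    ranked = sorted(documents, key=lambda d: relevance_score(d, keywords), reverse=True)
    return [summarize(d) for d in ranked[:top_n]]

if __name__ == "__main__":
    docs = [
        "New study reports household transmission rates of the virus...",
        "Local sports results from the weekend league...",
        "Hospital capacity update: ICU occupancy rising in three districts...",
    ]
    for s in triage(docs, keywords=["transmission", "icu", "virus"]):
        print("-", s)
```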

Challenges and Ethical Considerations

While the potential benefits of LLMs in public health are substantial, their use also raises important challenges and ethical concerns. One of the primary challenges is the potential for bias. LLMs are trained on large datasets that may contain inherent biases, which can be reflected in the model's outputs. This could lead to the dissemination of skewed information or the reinforcement of harmful stereotypes, particularly in communications meant to reach vulnerable or marginalized populations. To address these concerns, it is essential to implement rigorous oversight and validation processes when deploying LLMs in public health contexts. This includes ensuring that the data used to train these models are representative and that outputs are regularly reviewed by human experts to identify and correct biases.7 Additionally, transparency in how these models are used and how decisions are made is crucial to maintaining public trust.

Another practical consideration is the digital divide. While LLMs can enhance communication, their benefits may not be evenly distributed if certain populations lack access to the necessary digital infrastructure.7,8 Ensuring that the deployment of these technologies does not exacerbate existing inequalities is a critical aspect of their implementation.
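
One concrete form such oversight could take is a human-in-the-loop gate that holds AI-drafted messages for expert review before release, as in the sketch below. The screening terms are illustrative assumptions only; a real audit would rely on validated criteria and trained reviewers.

```python
# Sketch: a human-in-the-loop gate that flags AI-drafted messages for expert
# review before publication. The flag list is an illustrative assumption.

from dataclasses import dataclass, field

FLAG_TERMS = ["always", "never", "those people", "cure"]  # assumed screening terms

@dataclass
class Draft:
    text: str
    flags: list[str] = field(default_factory=list)
    approved: bool = False

def screen(draft: Draft) -> Draft:
    """Attach flags for a human reviewer; flagged text is never auto-published."""
    draft.flags = [t for t in FLAG_TERMS if t in draft.text.lower()]
    return draft

def review_queue(drafts: list[str]) -> list[Draft]:
    """Pre-approve only unflagged drafts; flagged ones await expert sign-off."""
    screened = [screen(Draft(text=d)) for d in drafts]
    for d in screened:
        d.approved = not d.flags
    return screened

if __name__ == "__main__":
    for d in review_queue(["Vaccines never have side effects.",
                           "Wash hands often and ventilate indoor spaces."]):
        print(d.approved, d.flags, "|", d.text)
```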

In addition to these risks, the misuse of LLMs to manipulate public opinion is a serious concern. Malicious actors may use these tools to spread false or misleading information that appears credible, and during a health crisis this can weaken trust in public health messages and create panic. Transparency is therefore key: public health agencies must clearly state when they use AI-generated content and must review such messages carefully before release. Education matters as well; people should understand how these models work and what their limits are. Clear rules and ongoing review are needed to move toward responsible use of these tools in public health settings.

Political and economic competition also shapes how LLMs are developed and used. Countries and companies may design models that reflect their values or advance their interests, which can lead to biased outputs, selective messaging, or false narratives. During a public health crisis, these risks become more dangerous: they can weaken trust, create confusion, and mislead vulnerable populations. Some challenges listed in Table 1, such as bias, lack of transparency, and cultural insensitivity, may worsen under such conditions. Preventing this requires stronger international oversight. Regulations should focus not only on accuracy but also on fairness and accountability, and independent review and global cooperation are essential. Public health communication must be protected from political influence; LLMs should support public welfare, not private or national agendas.

Table 1. Key challenges in implementing large language models in public health communication and crisis management, with proposed solutions

Proposing Regulations for Responsible Use

Given the potential risks and challenges associated with the use of LLMs, it is crucial to establish clear regulations that govern their deployment in public health settings. These regulations should address the ethical use of AI, focusing on issues such as data privacy, bias mitigation, and the accountability of AI-generated content.9 One approach could be the development of a regulatory framework that includes mandatory impact assessments for AI tools used in public health.10 These assessments would evaluate the potential social, ethical, and economic impacts of deploying LLMs, ensuring that the benefits outweigh the risks. Additionally, the framework could mandate regular audits of AI systems to monitor their performance and adherence to ethical standards. Furthermore, there should be guidelines on the transparency of AI-generated content. Public health organizations using LLMs should disclose when information is generated by AI and provide mechanisms for public feedback. This transparency would help build trust and allow for the continuous improvement of AI tools.
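
As a thought experiment, such disclosure could be as simple as a machine-readable provenance record attached to each published message, as sketched below. The field names and URL are illustrative assumptions, not an established standard.

```python
# Sketch: attaching a machine-readable provenance record to every published
# message, so audiences and auditors can see that content was AI-assisted.
# Field names are illustrative assumptions, not an established standard.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Disclosure:
    message_id: str
    model_name: str          # which LLM drafted the text
    human_reviewed: bool     # whether an expert approved it before release
    feedback_url: str        # channel for public corrections
    generated_at: str

def disclose(message_id: str, model_name: str, reviewed: bool) -> str:
    """Serialize a provenance record for publication alongside the message."""
    record = Disclosure(
        message_id=message_id,
        model_name=model_name,
        human_reviewed=reviewed,
        feedback_url="https://example.org/feedback",  # placeholder URL
        generated_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record), indent=2)

if __name__ == "__main__":
    print(disclose("msg-0001", "example-llm-v1", reviewed=True))
```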

Future Prospects

Looking ahead, the integration of LLMs like ChatGPT into public health systems offers exciting possibilities for improving both communication and crisis management. As these models continue to evolve, their ability to provide real-time, personalized information will likely become even more sophisticated, allowing for more dynamic and responsive public health strategies. However, realizing the full potential of LLMs in public health will require a collaborative approach that brings together technologists, public health experts, ethicists, and policymakers. By working together, these stakeholders can ensure that LLMs are developed and deployed in ways that maximize their benefits while minimizing risks, ultimately leading to a more informed and resilient public health system.

It is also important to compare the performance of different LLMs in real-world settings. Models such as ChatGPT, DeepSeek, Claude, and Med-PaLM are trained in different ways and can respond differently to the same questions; their outputs may vary in clarity, accuracy, tone, or cultural alignment. Future research should test how these models perform on identical health-related prompts. Such head-to-head comparisons would help identify strengths and weaknesses, guide public health organizations toward the most appropriate model for their specific needs, and improve public trust in AI-generated information.
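
A minimal harness for such comparisons might look like the following sketch, which runs the same prompts through several models and collects the outputs side by side for expert rating. The model callables here are stubs with placeholder names; in practice each would wrap a real API client.

```python
# Sketch: running identical health prompts through several models and
# collecting outputs side by side for expert review. Model names are
# placeholders, not real API clients.

from typing import Callable

def fake_model(name: str) -> Callable[[str], str]:
    """Factory for stub models; replace with real API wrappers in practice."""
    return lambda prompt: f"[{name}] answer to: {prompt}"

MODELS: dict[str, Callable[[str], str]] = {
    "model_a": fake_model("model_a"),
    "model_b": fake_model("model_b"),
}

PROMPTS = [
    "Explain measles vaccination to a parent in plain language.",
    "What should households do during a boil-water advisory?",
]

def compare(models: dict, prompts: list[str]) -> dict[str, dict[str, str]]:
    """Return {prompt: {model: output}} for side-by-side expert rating."""
    return {p: {name: fn(p) for name, fn in models.items()} for p in prompts}

if __name__ == "__main__":
    for prompt, outputs in compare(MODELS, PROMPTS).items():
        print(prompt)
        for name, out in outputs.items():
            print("  ", name, "->", out)
```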

Author contribution

Usha Rana: Conceptualization, literature review, ethical analysis, and drafting of the manuscript; Rupender Singh: Conceptualization, technical content development, manuscript revision, and final approval.

Funding statement

No external funding was received for this study.

Competing interests

The authors declare that they have no financial, professional, or personal conflicts of interest that could have influenced the development or publication of this manuscript.

Ethical standard

The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

References

1. De Angelis L, Baglivo F, Arzilli G, et al. ChatGPT and the rise of large language models: the new AI-driven infodemic threat in public health. Front Public Health. 2023. doi:10.3389/fpubh.2023.1166120
2. Tian S, Jin Q, Yeganova L, et al. Opportunities and challenges for ChatGPT and large language models in biomedicine and health. arXiv preprint. Published online 2023. doi:10.48550/arXiv.2306.10070
3. Tuccori M, Convertino I, Ferraro S, et al. The impact of the COVID-19 "Infodemic" on drug-utilization behaviors: implications for pharmacovigilance. Drug Saf. 2020;43:699–709. doi:10.1007/s40264-020-00965-w
4. Casigliani V, de Nard F, de Vita E, et al. Too much information, too little evidence: is waste in research fueling the COVID-19 infodemic? BMJ. 2020;370:m2672. doi:10.1136/bmj.m2672
5. Zarocostas J. How to fight an infodemic. Lancet. 2020;395:676. doi:10.1016/S0140-6736(20)30461-X
6. Schwartz IS, Boulware DR, Lee TC. Hydroxychloroquine for COVID-19: the curtains close on a comedy of errors. Lancet Reg Health Am. 2022;11:100268. doi:10.1016/j.lana.2022.100268
7. Gordijn B, Ten Have H. ChatGPT: evolution or revolution? Med Health Care Philos. 2023;26:1–2. doi:10.1007/s11019-023-10136-0
8. McBride E, Arden MA, Chater A, et al. The impact of COVID-19 on health behaviour, well-being, and long-term physical health. Br J Health Psychol. 2021;26:259–270. doi:10.1111/bjhp.12520
9. Rothkopf DJ. When the buzz bites back. Washington Post. May 11, 2003. Accessed March 25, 2025. https://www.washingtonpost.com/archive/opinions/2003/05/11/when-the-buzz-bites-back/bc8cd84f-cab6-4648-bf58-0277261af6cd/
10. Langguth J, Pogorzelski M. Nonhuman "authors" and implications for the integrity of scientific publication and medical knowledge. JAMA. 2023;329:637–639. doi:10.1001/jama.2023.1344