Published online by Cambridge University Press: 30 June 2025
Amid the proliferation of large language model (LLM) based Artificial Intelligence (AI) products such as ChatGPT and Gemini, and their increasing use in professional communication training, researchers, including applied linguists, have cautioned that these products (re)produce cultural stereotypes owing to their training data. However, little is understood about how humans navigate the assumptions and biases present in the responses of these LLM-powered systems, or about the role humans themselves play in perpetuating stereotypes during interactions with LLMs. In this article, we use Sequential-Categorial Analysis, which combines Conversation Analysis and Membership Categorization Analysis, to analyze simulated interactions between a human physiotherapist and three LLM-powered chatbot patients of Chinese, Australian, and Indian cultural backgrounds. Together with analysis of information elicited from the LLM chatbots and the human physiotherapist after each interaction, our analysis demonstrates that users of LLM-powered systems are highly susceptible to becoming interactionally entrenched in culturally essentialized narratives. Drawing on the concepts of interactional instinct and interactional entrenchment, we argue that, whilst human–AI interaction may be instinctively prosocial, LLM users need to develop Critical Interactional Competence for human–AI interaction through appropriate and targeted training and intervention, especially when LLM-powered tools are used in professional communication training programs.