
11 - Challenges of Generative AI on Human–AI Interaction and Collaboration

Published online by Cambridge University Press:  19 September 2025

Dan Wu, Wuhan University, China
Shaobo Liang, Wuhan University, China

Summary

As generative AI technologies continue to advance at a rapid pace, they are fundamentally transforming the dynamics of human–AI interaction and collaboration, a phenomenon that was once relegated to the realm of science fiction. These developments not only present unprecedented opportunities but also introduce a range of complex challenges. Key factors such as trust, transparency, and cultural sensitivity have emerged as essential considerations in the successful adoption and efficacy of these systems. Furthermore, the intricate balance between human and AI contributions, the optimization of algorithms to accommodate diverse user needs, and the ethical implications of AI’s role in society pose significant challenges that require careful navigation. This chapter will delve into these multifaceted issues, analyzing both user-level concerns and the underlying technical and psychological dynamics that are critical to fostering effective human–AI interaction and collaboration.

Publisher: Cambridge University Press
Print publication year: 2025

Access options

Get access to the full version of this content by using one of the access options below. (Log in options will check for institutional or personal access. Content may require purchase if you do not have access.)

Book purchase

Temporarily unavailable

