Population Health

October 9, 2024

Initiative-funded project studies how culturally sensitive communication styles affect health AI

The capabilities of conversational artificial intelligence (AI) have improved dramatically in healthcare settings, with the potential to promote mental health and to deliver Cognitive Behavioral Therapy. However, current AI-based chatbots lack the understanding of cultural differences needed to support communication with patients and ensure positive health outcomes. Communication styles vary across cultures and play a significant role in how patients respond to physicians' recommendations.

In spring 2023, an interdisciplinary team of UW researchers received a Tier 1 pilot grant from the UW's Population Health Initiative to investigate how enhancing AI health chatbots could improve communication with patients from multicultural backgrounds. The study aims to build a foundation for advancing generative AI-based chatbots that can consider different cultural backgrounds and, as a result, increase the likelihood of positive health outcomes.

"There is a lot of research about AI being biased, and that's mostly related to the content it learned from facts and data, but I think that kind of bias is slightly different from the cultural sensitivity that we're trying to enhance here," said Liao, co-investigator and assistant professor of Communication.

"With this project, we're trying to understand this tacit side of communication, like cultural sensitivity, and discover the ways in which it relates to the bigger question of equity in AI. While we need to address the biased content of AI, we also know that as AI becomes more and more communicative, there are these hidden implicit biases, like communication styles based on culture," said Liao.

The project's aims begin with evaluating the communication styles of providers, clients from different cultural backgrounds, and AI-based chatbots, followed by the creation of several therapy bot prototypes that will be tested by providing evidence-based psychotherapy to a diverse group of family caregivers. The results from prototype testing will offer an understanding of whether individuals from different cultural backgrounds prefer distinct communication styles and whether matching these styles improves user engagement and outcomes.

Xie, a biomedical informatics and medical education PhD student in the School of Medicine, explained the significance of communication styles in healthcare by citing a study in which Chinese American patients complied at a higher rate with physician recommendations when spoken to in a calm and authoritative manner.

"I think that this opens the question of whether people want to feel like their chatbot therapist is human or just a chatbot, which could eliminate an alliance between the patient and the robot," said Xie.

Yuwen, co-investigator and associate professor of Nursing & Healthcare Leadership at UW Tacoma, reiterated the project's aim of discovering how cultures influence AI's effectiveness in delivering mental health care.

"The whole goal of our project is not to generalize cultures, but to understand if there are things that are relevant to culture that extend beyond race and ethnicity," said Yuwen. "In our team, we have been exploring two cultures, Latinos and Chinese, and their acceptance of how a chatbot-based solution could help them. So far, we have not found a universal rejection of AI, but those who are reserved about AI use may have challenges with physical or mental barriers like illiteracy or vision loss."

The project began in July 2024 and will last for seven months, with hopes of guiding future intervention optimization in mental health care and creating a path towards the design of a future trial that further examines the impact of different communication styles on clinical outcomes in chatbot-delivered psychotherapy.

Currently, the project's team is creating several therapy bot prototypes with different communication styles and expects to move into usability testing in November. Hwang, lead co-investigator and associate professor of information systems at the Foster School of Business, discussed the challenges encountered early on and how the project has adjusted as a result.

"The very first step of our project was the literature review, which was very broad and reviewed AI communication style and non-AI communication style across different cultures, in healthcare and not in healthcare," said Hwang. "This step was a little challenging for us in terms of figuring out which literature to focus on, but we moved on and identified the key dimensions we needed to focus on in terms of the communication styles."

Baik, an information systems and operations management PhD student in the Foster School of Business, noted the unexpected benefits of working with an interdisciplinary team of researchers and how this collaboration has led to a more enriching project experience.

"I noticed how we all have different expertise, and it fascinated me how much I learned from others and how this expanded my horizons," said Baik. "Collaborating exclusively within a single academic discipline might have directed our research down a narrower path. However, the diverse input we are receiving is making our study more comprehensive and rigorous."

The project's research team hopes to receive funding from organizations like the National Institutes of Health so that it can expand on previous work to improve the use of AI in biomedical and behavioral research.

"The biggest challenge for us is that once we look into it, there's so much more we can research, so we're definitely working towards other grants to continue supporting this project," said Yuwen. "This work is valuable and will have huge implications for anyone who might be building a chatbot for communication while bringing that cultural sensitivity piece to light."