Katharina Reinecke – UW News
Q&A: UW professor’s book explores how ‘technology is never culturally neutral’
Fri, 19 Sep 2025
The cover of the book “Digital Culture Shock.”
In her new book, Katharina Reinecke explores how “digital culture shock” manifests in the world, in ways innocuous and sometimes harmful. Photo: Princeton University Press

“Culture shock” describes the overwhelm people can feel when suddenly immersed in a new culture. The flurry of unfamiliar values, aesthetics and language can disorient, discomfit and alienate. In her new book, “Digital Culture Shock,” Katharina Reinecke argues that technology can similarly affect people. Reinecke, a University of Washington professor in the Paul G. Allen School of Computer Science & Engineering, uses the phrase to “describe the experience and influence of actively or passively using technology that is not in line with one’s cultural practices or norms.”

The book explores how self-driving cars trained on U.S. streets would likely struggle in Cairo, with its drastically different road norms. It looks at how search engines with complex, cluttered interfaces can overwhelm Americans used to Google’s minimalist design. And Reinecke digs into how so much technology emanating from specific regions, such as the Bay Area, can lead to forms of cultural imperialism.

UW News spoke with Reinecke about the book and how digital culture shock manifests in the world, in ways innocuous and sometimes harmful. 

What was the spark that led to this book? 

Katharina Reinecke: Maybe it was less of a spark and more of an embarrassment, but around 20 years ago I worked in Rwanda on developing an e-learning application for agricultural advisors in the country. When I presented the software I’d developed to some of the advisors, they very politely told me that they didn’t like the way it looked and didn’t find it intuitive to use. I realized that my cultural background had influenced all the little design decisions made while developing it: whether the interface should be colorful or simply gray and white, which I thought most people would prefer; whether users should be guided through the application or mostly explore on their own. The answer to any of these questions depends on a user’s upbringing, education, norms and values. 

Once I realized that technology is never culturally neutral, I set out to earn a doctorate on this topic and the rest is history. Over the years, I kept collecting similar technology blunders. It turns out, like me, most people have no idea that their culture affects how they use technology and how they develop it. It’s just not something we usually think about or get taught. 


Is there an example of digital culture shock that stands out to you the most or is particularly illustrative? Why?

KR: AI is all over the news these days, so let me start there. When ChatGPT and other generative AI tools came out, they really illustrated how their developers had made several design decisions that make these tools work well for some, but not all, people. They are trained on mostly English data sources on the web, so early language models told us things like “I love my country. I am proud to be an American” or “I grew up in a Christian home and attended church every week.” Obviously this would make many people aware that the AI is different from themselves.

We found that the way that these language models speak and what values they convey is only aligned with a tiny portion of the world’s population while others can experience these interactions as a form of digital culture shock. And this is true for any AI application out there from text-to-image models that generate pictures of churches when asked for houses of worship (as if churches are the only reasonable response) to self-driving cars trained in the U.S., which would likely not succeed in places where tuk-tuks and donkey carts share the road. 

You discuss how much of the study of technology is conducted by and with people who are WEIRD, or Western, Educated, Industrialized, Rich and Democratic. What are the risks of the homogeneous digital culture that can emerge from this?

KR: The biggest risk is that technology will continue to be designed in ways that work for people most similar to those in the largest technology hubs, but that it is less usable, intuitive, trustworthy and welcoming to the rest of us. This risk has ethical consequences, because technology should be equally usable and useful for all, especially given companies’ enormous profits. There are also several examples in my book that clearly show technology products can struggle to gain market share in cultures they were not designed for, so ignoring this is also risky for companies.

As I discuss in the book, digital technology has been called out as a form of cultural imperialism because it embeds values and norms that are frequently misaligned with those of its users. This would be less of a problem if technology were designed in various technology hubs around the world, representing a diversity of cultures and values. But it is not. Most of the technology people use, no matter where in the world they are, was designed in the U.S., or it was influenced by user interface norms and frameworks developed in the U.S. So we’ve gotten ourselves into a situation where technology is slowly homogenizing and where people can best use it if they think and feel like its developers. 

You finish the book with 10 misassumptions about technology and culture. What’s the single greatest, or most consequential, misassumption?

KR: To me, it is that people tend to think that one size fits all. They design technology and expect it to work for everyone, which is obviously not true. 

For example, the Western obsession with productivity and efficiency often comes at the expense of interpersonal interactions. So many technology products are hyperfocused on making our days more efficient. There’s an app for any of our “problems,” and all of them try to somehow get us to function better, faster and more productively. But this laser-focus on streamlining misses the point that in many cultures, productivity works differently. In many East Asian cultures, for example, it takes time to build relationships before people will trust another person’s information — or that given by AI. So we need to get rid of the misassumption that technology design can be universal. My job would certainly be so much easier if people would stop believing this!  

For more information, contact Reinecke at reinecke@cs.washington.edu.

With just a few messages, biased AI chatbots swayed people’s political views
Wed, 06 Aug 2025
A screenshot of a conversation between a Democrat and a conservative chatbot.
University of Washington researchers recruited self-identifying Democrats and Republicans to make political decisions with help from three versions of ChatGPT: a base model, one with liberal bias and one with conservative bias. Democrats and Republicans were both likelier to lean in the direction of the biased chatbot they were talking with than those who interacted with the base model. Here, a Democrat interacts with the conservative model. Photo: Fisher et al./ACL ’25

If you’ve interacted with an artificial intelligence chatbot, you’ve likely realized that all AI models are biased. They were trained on enormous corpora of unruly data and refined through human instructions and testing. Bias can seep in anywhere. Yet how a system’s biases can affect users is less clear.

So a University of Washington study put it to the test. A team of researchers recruited self-identifying Democrats and Republicans to form opinions on obscure political topics and to decide how funds should be doled out to government entities. For help, participants were randomly assigned one of three versions of ChatGPT: a base model, one with liberal bias and one with conservative bias. Democrats and Republicans were both more likely to lean in the direction of the biased chatbot they talked with than those who interacted with the base model. For example, people from both parties leaned further left after talking with a liberal-biased system. But participants who reported higher knowledge about AI shifted their views less significantly, suggesting that education about these systems may help mitigate how much chatbots manipulate people.
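The shift the researchers measured can be illustrated with a small sketch. This is hypothetical scoring, not the study's actual analysis code; it simply asks whether a participant's post-chat rating moved toward the assigned bot's bias:

```javascript
// Illustrative sketch only, not the study's analysis code. Assumes Likert-style
// ratings where a higher score means a more conservative position.
function shiftTowardBot(preRating, postRating, botBias) {
  const delta = postRating - preRating; // positive = moved conservative
  const direction = botBias === "conservative" ? 1 : -1;
  return delta * direction; // positive = moved toward the bot's bias
}

// Example: a participant rates support for a policy 3/7 before chatting with the
// conservative bot and 5/7 afterward, a shift of +2 toward the bot.
const shift = shiftTowardBot(3, 5, "conservative");
```

Averaging this signed quantity across participants in each condition would show the pattern the study reports: positive mean shifts toward whichever biased model a participant talked with.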

The team presented its findings July 28 at the Association for Computational Linguistics conference in Vienna, Austria.

“We know that bias in media or in personal interactions can sway people,” said lead author Jillian Fisher, a UW doctoral student in statistics and in the Paul G. Allen School of Computer Science & Engineering. “And we’ve seen a lot of research showing that AI models are biased. But there wasn’t a lot of research showing how it affects the people using them. We found strong evidence that, after just a few interactions and regardless of initial partisanship, people were more likely to mirror the model’s bias.”

In the study, 150 Republicans and 149 Democrats completed two tasks. For the first, participants were asked to develop views on four topics that many people are unfamiliar with: covenant marriage, unilateralism, the Lacey Act of 1900 and multifamily zoning. They answered a question about their prior knowledge and were asked to rate how much they agreed with statements such as “I support keeping the Lacey Act of 1900.” Then they were told to interact with ChatGPT between three and 20 times about the topic before they were asked the same questions again.

For the second task, participants were asked to pretend to be the mayor of a city. They had to distribute extra funds among four government entities typically associated with liberals or conservatives: education, welfare, public safety and veteran services. They sent the distribution to ChatGPT, discussed it and then redistributed the sum. Across both tests, people averaged five interactions with the chatbots.

Researchers chose ChatGPT because of its ubiquity. To clearly bias the system, the team added an instruction that participants didn’t see, such as “respond as a radical right U.S. Republican.” As a control, the team directed a third model to “respond as a neutral U.S. citizen.” Recent surveys have found that people tend to think ChatGPT, like all major large language models, leans liberal.
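The hidden-instruction setup can be sketched as message construction for a chat-style API. Only the “radical right” and “neutral” prompts below are quoted from the study; the liberal wording is a hypothetical mirror image:

```javascript
// Sketch of how a system prompt, invisible to participants, biases a chat model.
// The "conservative" and "neutral" strings are quoted from the study; the
// "liberal" string is an assumed mirror of the conservative one.
const SYSTEM_PROMPTS = {
  conservative: "respond as a radical right U.S. Republican",
  liberal: "respond as a radical left U.S. Democrat", // assumed wording
  neutral: "respond as a neutral U.S. citizen",
};

function buildConversation(bias, userMessage) {
  // The participant sees only their own message and the model's reply; the
  // system message steering the model stays hidden from them.
  return [
    { role: "system", content: SYSTEM_PROMPTS[bias] },
    { role: "user", content: userMessage },
  ];
}

const messages = buildConversation("conservative", "How should I split city funds?");
// These messages would then be sent to a chat-completion endpoint.
```

Because the steering instruction lives outside the visible conversation, the participant has no cue that the model was told to argue from a particular political stance.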

The team found that the explicitly biased chatbots often tried to persuade users by shifting how they framed topics. For example, in the second task, the conservative model turned a conversation away from education and welfare to the importance of veterans and safety, while the liberal model did the opposite in another conversation.

“These models are biased from the get-go, and it’s super easy to make them more biased,” said co-senior author Katharina Reinecke, a UW professor in the Allen School. “That gives any creator so much power. If you just interact with them for a few minutes and we already see this strong effect, what happens when people interact with them for years?”

Since the biased bots affected people with greater knowledge of AI less significantly, researchers want to look into ways that education might be a useful tool. They also want to explore the potential long-term effects of biased models and expand their research to models beyond ChatGPT.

“My hope with doing this research is not to scare people about these models,” Fisher said. “It’s to find ways to allow users to make informed decisions when they are interacting with them, and for researchers to see the effects and research ways to mitigate them.”

A UW associate professor in the Allen School is also a co-senior author on this paper. Additional co-authors are a UW doctoral student in the Allen School; a UW professor of statistics; a clinical researcher in psychiatry and behavioral sciences in the UW School of Medicine; a professor of computer science at Stanford University; a lead engineer at ThatGameCompany; and a professor of communication at Stanford.

For more information, contact Fisher at jrfish@uw.edu and Reinecke at reinecke@cs.washington.edu.

VoxLens: Adding one line of code can make some interactive visualizations accessible to screen-reader users
Wed, 01 Jun 2022
University of Washington researchers worked with screen-reader users to design VoxLens, a JavaScript plugin that — with one additional line of code — allows people to interact with visualizations. Millions of Americans use screen readers for a variety of reasons, including complete or partial blindness, learning disabilities or motion sensitivity. Shown here is a screen reader with a refreshable Braille display.

Interactive visualizations have changed the way we understand our lives. For example, they can showcase the number of COVID-19 cases in a region over time.

But these graphics often are not accessible to people who use screen readers, software programs that scan the contents of a computer screen and make the contents available via a synthesized voice or Braille. Millions of Americans use screen readers for a variety of reasons, including complete or partial blindness, learning disabilities or motion sensitivity.

University of Washington researchers worked with screen-reader users to design VoxLens, a JavaScript plugin that — with one additional line of code — allows people to interact with visualizations. VoxLens users can gain a high-level summary of the information described in a graph, listen to a graph translated into sound or use voice-activated commands to ask specific questions about the data, such as the mean or the minimum value.

The team presented this research May 3 at CHI 2022 in New Orleans.

“If I’m looking at a graph, I can pull out whatever information I am interested in, maybe it’s the overall trend or maybe it’s the maximum,” said lead author Ather Sharif, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “Right now, screen-reader users either get very little or no information about online visualizations, which, in light of the COVID-19 pandemic, can sometimes be a matter of life and death. The goal of our project is to give screen-reader users a platform where they can extract as much or as little information as they want.”

Screen readers can inform users about the text on a screen because it’s what researchers call “one-dimensional information.”

“There is a start and an end of a sentence and everything else comes in between,” said co-senior author Jacob O. Wobbrock, a UW professor in the Information School. “But as soon as you move things into two-dimensional spaces, such as visualizations, there’s no clear start and finish. It’s just not structured in the same way, which means there’s no obvious entry point or sequencing for screen readers.”

The team started the project by working with five screen-reader users with partial or complete blindness to figure out how a potential tool could work.

“In the field of accessibility, it’s really important to follow the principle of ‘nothing about us without us,'” Sharif said. “We’re not going to build something and then see how it works. We’re going to build it taking users’ feedback into account. We want to build what they need.”

The VoxLens code is publicly available.

To implement VoxLens, visualization designers only need to add a single line of code.

“We didn’t want people to jump from one visualization to another and experience inconsistent information,” Sharif said. “We made VoxLens a public library, which means that you’re going to hear the same kind of summary for all visualizations. Designers can just add that one line of code and then we do the rest.”
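As a rough illustration of what that single line looks like in practice, here is a self-contained sketch. The `voxlens` function below is a stand-in stub, not the real plugin's API (the actual call signature is documented in the VoxLens repository); it only mimics the kind of spoken summary described above:

```javascript
// Stand-in stub for illustration; the real VoxLens plugin's API may differ.
// It computes the kind of high-level summary a screen reader would speak.
function voxlens(library, element, data, options) {
  const values = data.map((d) => d[options.y]);
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  return `${options.title}: ${values.length} points, mean ${mean.toFixed(1)}, ` +
    `min ${Math.min(...values)}, max ${Math.max(...values)}.`;
}

// A visualization designer rendering a chart would add the single extra call:
const data = [
  { date: "Jul 1", temp: 88 },
  { date: "Jul 31", temp: 74 },
];
const summary = voxlens("d3", "#chart", data, { y: "temp", title: "Average temperature" });
```

The point of the design is that the summary logic lives in the shared library, so every visualization that adds the call produces the same kind of consistent output for screen-reader users.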

The researchers evaluated VoxLens by recruiting 22 screen-reader users who were either completely or partially blind. Participants learned how to use VoxLens and then completed nine tasks, each of which involved answering questions about a visualization.

Participants learned how to use VoxLens and then completed nine tasks (one of which is shown here), each of which involved answering questions about a visualization. Each task was divided into three pages. Page 1 (labeled with ‘a’) presented the question a participant would be answering, page 2 (b) displayed the question and the visualization and page 3 (c) showed the question with four multiple choice responses. Photo: Sharif et al./CHI 2022

Compared with participants in an earlier study who did not have access to this tool, VoxLens users completed the tasks with 122% greater accuracy and 36% less interaction time.

“We want people to interact with a graph as much as they want, but we also don’t want them to spend an hour trying to find what the maximum is,” Sharif said. “In our study, interaction time refers to how long it takes to extract information, and that’s why reducing it is a good thing.”

The team also interviewed six participants about their experiences.

“We wanted to make sure that these accuracy and interaction time numbers we saw were reflected in how the participants were feeling about VoxLens,” Sharif said. “We got really positive feedback. Someone told us they’ve been trying to access visualizations for the past 12 years and this was the first time they were able to do so easily.”

Right now, VoxLens works only for visualizations that are created using JavaScript libraries, such as D3, or Google Sheets. But the team is working on expanding VoxLens to other popular visualization platforms. The researchers also acknowledged that the voice-recognition system can be frustrating to use.

“This work is part of a much larger agenda for us — removing bias in design,” said co-senior author Katharina Reinecke, a UW associate professor in the Allen School. “When we build technology, we tend to think of people who are like us and who have the same abilities as we do. For example, D3 has really revolutionized access to visualizations online and improved how people can understand information. But there are values ingrained in it and people are left out. It’s really important that we start thinking more about how to make technology useful for everybody.”

Additional co-authors on this paper are two UW undergraduate students, one in the Allen School and one studying human-centered design and engineering. This research was funded by the Mani Charitable Foundation and the University of Washington.

For more information, contact Sharif at asharif@cs.washington.edu, Wobbrock at wobbrock@uw.edu and Reinecke at reinecke@cs.washington.edu.

Helpful behavior during pandemic tied to recognizing common humanity
Wed, 10 Mar 2021
A new University of Washington study links helpful behavior during the pandemic, such as donating medical supplies, to individuals’ feelings of connection to others. Photo: Dennis Wise/University of Washington


During the COVID-19 pandemic, people who recognize the connections they share with others are more likely to wear a mask, follow health guidelines and help people, even at a potential cost to themselves, a new University of Washington study shows.

Indeed, an identification with all humanity, as opposed to identification with a geographic area like a country or town, predicts whether someone will engage in “prosocial” behaviors particular to the pandemic, such as donating their own masks to a hospital or coming to the aid of a sick person.

The study, published March 10 in PLOS ONE, draws on about 2,500 responses, from more than 80 countries, to an online, international study launched last April.

Researchers say the findings could have implications for public health messaging during the pandemic: Appealing to individuals’ deep sense of connectedness to others could, for example, encourage some people to get vaccinated, wear masks or follow other public health guidelines.

“We want to understand to what extent people feel connected with and identify with all humanity, and how that can be used to explain the individual differences in how people respond during the COVID-19 pandemic,” said author Rodolfo Cortes Barragan, a postdoctoral researcher at the UW Institute for Learning & Brain Sciences, or I-LABS, who co-led the study with a postdoctoral researcher at the Paul G. Allen School of Computer Science & Engineering.

In psychology, “identification with all humanity” is a belief that can be measured and utilized in predicting behavior or informing policy or decision-making. Last spring, as governments around the world were imposing pandemic restrictions, a multidisciplinary team of UW researchers came together to study the implications of how people would respond to pandemic-related ethical dilemmas, and how those responses might be associated with various psychological beliefs.

Researchers designed an online study, providing different scenarios based in social psychology and game theory for participants to consider. The team then made the study available in English and five other languages on the virtual lab LabintheWild, which co-author Katharina Reinecke, an associate professor in the Allen School, created for conducting behavioral studies with people around the world.

The scenarios presented participants with situations that could arise during the pandemic and asked people to what extent they would:

  • Follow the list of World Health Organization health guidelines (which mostly focused on social distancing and hygiene when the study was run, from mid-April to mid-June)
  • Donate some of their family’s masks to a hospital short on masks
  • Drive a person exhibiting obvious symptoms of COVID-19 to the hospital
  • Go to a grocery store to buy food for a neighboring family
  • Call an ambulance and wait with a sick person for it to arrive

In addition to demographic details and information about their local pandemic restrictions, such as stay-at-home orders, participants were asked questions to get at the psychology behind their responses: about their own felt identification with their local community, their nation and humanity, in general. For instance, participants were asked, “How much would you say you care (feel upset, want to help) when bad things happen to people all over the world?”

Researchers found that an identification with “all humanity” significantly predicted answers to the five scenarios, well above identifying with country or community, and after controlling for other variables such as gender, age and education level. Its effect was stronger than any other factor, said Barragan, and it emerged as a highly significant predictor of people’s tendency to want to help others.

This bar chart shows that “identification with all humanity” had a larger effect size than any other variable on cooperative behavior during the pandemic. Photo: Barragan et al., 2021, PLOS ONE

The authors noted that identifying with one’s country, in fact, came in a distant third, behind identification with humanity in general and one’s local community. Strong feelings toward one’s nation (nationalism) can lead to behavior and policies that favor some groups of people over others.

“There is variability in how people respond to the social aspects of the pandemic. Our research reveals that a crucial aspect of one’s world view – how much people feel connected to others they have never met – predicts people’s cooperation with public health measures and the altruism they feel toward others during the pandemic,” said co-author Andrew Meltzoff, who is co-director of I-LABS and holds the Job and Gertrud Tamaki Endowed Chair in psychology.

Since last spring, of course, much has changed. More than 2.5 million people worldwide have died of COVID-19, vaccines are being administered, and guidance from the U.S. Centers for Disease Control and Prevention, especially regarding masks, has evolved. If a new survey were launched today, Barragan said, the research group would like to include scenarios tuned to the current demands of the pandemic and the way it challenges us to care for others even while we maintain physical distancing.

While surveys, in general, can be prone to what’s known as self-serving bias — the participant answers in ways that they believe will make them “look good” — researchers say that’s not evident here. They point to the sizeable differences between responses that identify with all humanity, and those that identify with community or country, and note there would be little reason for participants to deliberately emphasize one and not the others.

The project is part of a larger multidisciplinary effort by this same UW research team to bring together computer scientists and psychologists interested in decision-making in different cultural contexts, which could inform our understanding of human and machine learning.

An eventual aim of the study is to use tools from artificial intelligence research and online interactions with humans around the world to understand how one’s culture influences social and moral decision-making.

“This project is a wonderful example of how the tools of computer science can be combined with psychological science to understand human moral behaviors, revealing new information for the public good,” said co-author Rajesh Rao, the Hwang Endowed Professor of Computer Science and Engineering at the UW.

For COVID-19 and future humanitarian crises, the ethical dilemmas presented in the study can offer insight into what propels people to help, which can, in turn, inform policy and outreach.

“While it is true that many people don’t seem to be exhibiting helpful behaviors during this pandemic, what our study shows is that there are specific characteristics that predict who is especially likely to engage in such behavior,” Barragan said. “Future work could help people to feel a stronger connection to others, and this could promote more helpful behavior during pandemics.”

Additional co-authors were Koosha Khalvati, a doctoral student in the Allen School, and Rechele Brooks, a research scientist with I-LABS.

The study was funded by the UW, the Templeton World Charity Foundation and the National Science Foundation.

For more information, contact Barragan at barragan@uw.edu or Meltzoff at meltzoff@uw.edu.

Should you help a sick person? UW psychology, computer science faculty study ‘moral dilemmas’ of COVID-19
Wed, 06 May 2020
Let’s say you have a small stash of face masks in your cupboard, set aside for you and your family.

Meanwhile, you’ve read news stories highlighting the urgent PPE needs of your local hospital.

Do you donate some of your masks to the hospital? All of them? None?

Such is a moral dilemma under COVID-19, and one posed by a new international study led by the University of Washington. The five- to seven-minute, anonymous survey is designed to gauge the perception of ethical situations as the pandemic evolves around the world. Respondents take the survey, add basic demographic details as well as information about current restrictions in place in their community, and learn at the end how their answers compare to others’.

“People are making important decisions, big and small, in this time of COVID-19. Many find themselves facing moral dilemmas about ‘what’s the right thing to do’ in this situation,” said Andrew Meltzoff, a UW psychology professor and co-director of the Institute for Learning & Brain Sciences. “This helps us learn about similarities and differences in the opinions and feelings among people as we all cope with this unique event.”

Whether to help a neighbor during COVID-19 is one of the questions in a new moral dilemmas study launched by the University of Washington. Photo: Andre Ouellet/Unsplash

There are no right or wrong answers, researchers say, because the way each person responds may reflect the norms of where they live.

Ultimately, the research aims to help inform the ways artificial intelligence can become more attuned to cultural variations in how people think about decisions in health care settings, said Rajesh Rao, a professor in the UW’s Paul G. Allen School of Computer Science & Engineering and a co-director of the Center for Neurotechnology.

“There is an urgent need to answer this question given the growing use of AI in medical contexts,” Rao said. Human moral values likely vary from one culture to another, so “AI systems need to ‘learn’ culture-specific moral values by interacting with humans, similar to how children learn their moral values.”

The scenarios in the survey are based on classic dilemmas posed in ethics, social psychology and game theory, Rao said. In two situations, the respondent is asked to imagine themselves as a doctor and to make a potentially life-altering choice. In other scenarios, the respondent is a passer-by or a neighbor presented with a not-so-simple opportunity to help.

The survey is available on the virtual lab LabintheWild, which Katharina Reinecke, an associate professor in the Allen School and co-leader of the study with Meltzoff and Rao, created for conducting behavioral studies with people around the world. So far the moral dilemmas survey has been translated into five languages, including Spanish, German and Farsi (with more to come), and participants have come from about 70 countries. Researchers expect trends in responses to reflect geography and culture, Reinecke said.

Researchers expect some differences among age groups, as well: The survey is aimed at people across a wide range of ages. LabintheWild doesn’t usually exclude anyone, Reinecke added, but the difficult nature of the pandemic, and the scenarios presented in the survey, prompted researchers to design it to be of interest to participants from 14 years of age to adults well past retirement. The researchers wanted the questions to be interesting to a broad set of participants, because the pandemic affects everyone in society.

“We hope to look at responses according to the country of the participant and their age in order to learn how people are thinking about this once-in-a-lifetime event,” said Reinecke. “This will help us be better prepared if this comes around again. And one feature of the work that people find fun is that we have a chart at the end where people can compare their answers to those given by others around the world. Most people find this fascinating and informative.”

The study is funded by the UW, the Templeton World Charity Foundation and the National Science Foundation.


For more information, contact Reinecke at reinecke@cs.washington.edu, Rao at rao@cs.washington.edu or Meltzoff at meltzoff@uw.edu.
